Session overview
Tuesday, May 28
15:30
Establishing a multibeam backscatter calibration site in proximity of Tampa Bay
* Stephan J. O'Brien, College of Marine Science, University of South Florida, United States of America
Matthew Hommeyer, College of Marine Science, University of South Florida
Chad Lembke, College of Marine Science, University of South Florida
Alex Silverman, College of Marine Science, University of South Florida
Steven A. Murawski, College of Marine Science, University of South Florida

Recommendations provided by national hydrographic offices for hydrographic surveys require the acquisition of backscatter data in addition to bathymetric data. It is currently challenging to compare multibeam backscatter collected during separate surveys or with different sensors. Calibrated backscatter enables data from different sensors to be combined into a single mosaic, enhances substrate characterization, and supports monitoring of temporal changes in the substrate. A desktop study was completed to select six stations with a minimum depth of 20 m and variable surficial sediments in the proximity of Tampa Bay, Florida. Multibeam bathymetry and backscatter data were acquired at the six stations with a Reson T50 dual-head multibeam echosounder operating at a frequency of 200 kHz. Angular dependence in the multibeam backscatter was removed using a moving average window, and backscatter mosaics were generated for each site. The normalized backscatter was averaged in grid cells and subtracted from the non-normalized backscatter, and the backscatter value corresponding to an incidence angle of 45 degrees was added to the mosaic grid cells. The normalized backscatter analysis resulted in the selection of a station for the multibeam backscatter calibration site that is homogeneous, stable over time, and flat, so as to reduce the influence of bottom slope on the backscatter. The selected site will provide opportunities for multibeam backscatter calibrations to be completed with different multibeam systems at the beginning of a field season, or when significant hardware or firmware components of a multibeam system are changed.
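The angular-dependence correction described in this abstract can be illustrated with a minimal sketch. This is not the authors' processing chain: the function name, the per-sounding moving-average window, and the window width are assumptions made purely for illustration.

```python
import numpy as np

def normalize_angular_response(bs_db, angles_deg, window_deg=1.0, ref_angle=45.0):
    """Illustrative removal of angular dependence from backscatter soundings (dB).

    bs_db      : per-sounding backscatter values in dB
    angles_deg : corresponding incidence angles in degrees
    window_deg : half-width of the moving-average window over angle (assumed value)
    ref_angle  : reference angle whose mean level is added back (45 degrees here)
    """
    bs_db = np.asarray(bs_db, dtype=float)
    angles_deg = np.asarray(angles_deg, dtype=float)

    # Moving-average angular response: mean backscatter within a window
    # of +/- window_deg around each sounding's incidence angle.
    angular_mean = np.array([
        bs_db[np.abs(angles_deg - a) <= window_deg].mean()
        for a in angles_deg
    ])

    # Mean level near the reference angle, added back so the normalized
    # mosaic retains a physically meaningful backscatter level.
    ref_mask = np.abs(np.abs(angles_deg) - ref_angle) <= window_deg
    ref_level = bs_db[ref_mask].mean() if ref_mask.any() else bs_db.mean()

    # Subtract the angular response and restore the 45-degree level.
    return bs_db - angular_mean + ref_level
```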
15:45
Remote data processing operations as an extension to remote operations: a Canadian case study
* Anthony Peach, Fugro Canada Corp., Canada
Rodney Spurvey, Fugro Canada Corp., Canada

Fugro Canada opened a Remote Operations Center (ROC) at the Fugro Canada office in St. John’s, Newfoundland and Labrador, in February 2023. Over the year since it opened, it has proven to be a flexible resource used across multiple marine disciplines. A significant component of the ROC is a section dedicated to remote data processing. A recent geophysical survey performed in Newfoundland and Labrador on a small vessel of opportunity utilized the remote data processing elements of the ROC. Throughout the project, the data processing personnel located in the ROC managed the collection, download, processing and quality control of the data, both through remote support tools onboard the vessel in real time and through data processed after its arrival onshore. This was performed on a 24/7 schedule to stay in step with the onboard operations. The primary benefits of this remote data processing center were found to be:
• removal of processing personnel from the harsh offshore environment, lowering overall risk levels from a safety perspective and lowering the demands on crew living space and the related logistics,
• the use of a scalable onshore data processing resource, allowing more data processing resources and technical support to be made available if necessary to help meet client deliverable deadlines, and
• having the data moved onshore in a timely manner, which lowers the risk of data loss.
The project was successful, with the client deliverables delivered on time and only positive outcomes resulting from the shore-side processing through the ROC. Fugro intends to continue and enhance this remote processing function and other ROC functions going forward, in Canada and elsewhere.
16:00
A likelihood-based triangulation method for uncertainties assessment in through-water depth mapping
Mohamed Ali Ghannami, Université Laval / Ensta Bretagne, France
Sylvie Daniel, Université Laval, Canada
* Guillaume Sicot, Ensta Bretagne, France
Isabelle Quidu, Ensta Bretagne, France

The complexities of mapping shallow water bodies, particularly coastal areas, have long been a subject of study, yet a focused understanding of the uncertainties in Water Column Depth (WCD) remains notably lacking. Traditional methods, involving either radiometric or geometric analyses of spectral imagery, often struggle in optically complex waters or in areas where seabeds lack distinct features. In this study, we introduce a likelihood-based inference framework for robustly assessing uncertainties in WCD estimation, with a particular focus on geometric approaches. The uncertainties in WCD estimates are analyzed through stereo-photogrammetric triangulation employing rigorous modeling of the geometric path of light through the air and water media. This approach, compatible with various sensor types and applied in this study to pushbroom sensors, also paves the way for combining radiometric and geometric analyses through the likelihood framework. We methodically employed Monte Carlo simulations to verify the efficacy of various likelihood-based statistical tests in capturing WCD uncertainty, focusing specifically on 95% confidence intervals. These simulations, designed to reflect real-world scenarios, mimic pushbroom sensor acquisitions from both drone and aircraft platforms. Our findings indicate that both the Wald test and the likelihood ratio test are effective in this context, with the likelihood ratio demonstrating enhanced robustness in complex scenarios characterized by high altitude noise, elevated flight altitudes, and cross-line viewing geometries. A significant outcome of our study is the capability to accurately estimate the position of the water interface, which is explicitly modeled in our approach, along with its associated uncertainty. Our results particularly emphasize the enhanced accuracy in estimating the water surface position using pushbroom sensors with cross-line imaging, a finding that could inform and refine future standards for pushbroom sensor deployment.
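For readers unfamiliar with the two interval constructions compared in this abstract, the toy sketch below contrasts a Wald 95% interval with a likelihood-ratio 95% interval on a deliberately simple Gaussian depth model. The model, noise level, and sample size are assumptions chosen for illustration only and are unrelated to the authors' triangulation framework.

```python
import numpy as np
from scipy.stats import chi2

# Toy model (not the authors'): observations y_i = d + noise, where d is a
# depth parameter and the noise is Gaussian with known standard deviation.
rng = np.random.default_rng(0)
sigma, d_true, n = 0.30, 12.0, 25          # assumed values for the sketch
y = d_true + rng.normal(0.0, sigma, n)

def neg_log_lik(d):
    """Negative log-likelihood of depth d under the Gaussian model."""
    return 0.5 * np.sum((y - d) ** 2) / sigma**2

d_hat = y.mean()                           # maximum-likelihood estimate

# Wald 95% interval: MLE +/- 1.96 * standard error.
se = sigma / np.sqrt(n)
wald_ci = (d_hat - 1.96 * se, d_hat + 1.96 * se)

# Likelihood-ratio 95% interval: depths d for which
# 2 * (neg_log_lik(d) - neg_log_lik(d_hat)) <= chi2(df=1) 95% quantile.
grid = np.linspace(d_hat - 5 * se, d_hat + 5 * se, 2001)
lr = 2.0 * (np.array([neg_log_lik(d) for d in grid]) - neg_log_lik(d_hat))
inside = grid[lr <= chi2.ppf(0.95, df=1)]
lr_ci = (inside.min(), inside.max())

print("Wald 95% CI:", wald_ci)
print("Likelihood-ratio 95% CI:", lr_ci)
```

In this linear Gaussian toy case the two intervals coincide; the differences in robustness reported in the abstract arise from the non-linear air–water refraction geometry and realistic noise conditions explored in the authors' Monte Carlo simulations.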
16:15
Investigating the use of point convolution surface reconstruction on multibeam sonar data
* Sara Shamaei, The University of New Brunswick, Canada Ian Church, The University of New Brunswick, Canada Kevin Wilcox, The University of New Brunswick, Canada In contemporary robotics, autonomous driving, and virtual/augmented reality applications, the prevalence of sensors (e.g. LiDAR) capable of directly capturing 3D data is rising. Computer-aided design is one of the many tools engineers and designers use to aid in the creation, modification, analysis, or optimization of intricate 3D shapes. As the utilization of machine learning rises, an alternative approach involves employing neural network algorithms to represent 3D data. A convolutional neural network (CNN) is a deep learning algorithm that has significantly improved results in various vision tasks by utilizing translation invariance, allowing the same convolutional filters to be applied across all locations in an image, reducing parameters and enhancing generalization. 3D data, however, often come in point clouds, a set of unordered 3D points, with or without additional features on each point. Unlike 2D images, point clouds are unordered and do not conform to regular grids. Applying conventional CNNs to such input is challenging; therefore, an alternative is to treat 3D space as a volumetric grid, but this leads to sparse volumes, making CNNs computationally impractical for high-resolution volumes. Researchers have developed methods for applying convolutional operations directly to point clouds to address this. Point convolutional operations involve adapting the convolutional idea to the irregular structure of point clouds. These methods typically consider local neighbourhoods around each point and use these neighbourhoods to compute features and learn patterns. We can use advancements in these disciplines to help represent complex seabed objects and infrastructures based on multibeam sonar point clouds. This presentation explores the possibilities of applying machine and deep learning models, particularly point convolution, to multibeam sonar data. The main goal is to assess the suitability of using point convolution for examining multibeam sonar data and explore its potential in enhancing the representation of seafloor features compared to traditional gridding methods. |