Session summary
Tuesday, May 28
13:30
Implementation of a deep-learning model for anomaly detection in bathymetric data: performance study and metrics
* Marceau Michel, Shom, France Thierry Schmitt, Shom, France Delivering an ambitious project such as Seabed 2030 rests both on the capacity to field fast, efficient bathymetric data-collection systems and on the capacity to process those data into coherent bathymetric information. Although acoustic measurement systems are constantly improving, their measurements can be contaminated by errors that may be specific to the platform and its motion or induced by the environment. With the arrival of autonomous acquisition platforms (Autonomous Underwater Vehicles, Unmanned Surface Vessels), the volume of acquired data will increase considerably. In this context, innovation in data processing appears essential to transform this abundance of data into exploitable knowledge. To this end, the Shom (the French hydrographic office) has initiated with the DGA (Direction Générale de l'Armement, whose mission is to prepare the future of defence systems) a study on the feasibility of artificial intelligence (AI) approaches applied to bathymetric data from multibeam echosounders. This work, inspired by a UKHO study [1], aims to evaluate the capacity to develop sovereign deep-learning models, in contrast with solutions offered by industry such as Teledyne's MIRA method [2]. The study also seeks a better grasp of these new methods and opens a reflection on the objective analysis of their results. The U-Net models developed, taking as input a single ping (1D), groups of pings (2D), or a voxelized point cloud (3D), will be presented and their results detailed. Beyond the results, the presentation will be a call for building a collection of reference data sets (multi-organisation, multi-site, multi-sensor) that would allow any academic or industrial actor to propose and intercompare new automatic processing methods.
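To make the 1D variant concrete, here is a minimal sketch of a per-ping U-Net that flags anomalous beams. The 256-beam ping width, the three input channels (e.g., depth, across-track distance, reflectivity), and all layer sizes are illustrative assumptions, not the authors' actual architecture.

```python
# Minimal sketch of a 1D U-Net for per-beam anomaly flagging within one
# multibeam ping. All sizes and channel choices are assumptions.
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(c_in, c_out, kernel_size=3, padding=1),
            nn.BatchNorm1d(c_out), nn.ReLU(),
            nn.Conv1d(c_out, c_out, kernel_size=3, padding=1),
            nn.BatchNorm1d(c_out), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class PingUNet1D(nn.Module):
    """Per-beam binary segmentation: 1 = anomalous sounding, 0 = valid."""
    def __init__(self, in_channels=3, base=16):
        super().__init__()
        self.enc1 = ConvBlock(in_channels, base)
        self.enc2 = ConvBlock(base, base * 2)
        self.pool = nn.MaxPool1d(2)
        self.bottleneck = ConvBlock(base * 2, base * 4)
        self.up2 = nn.ConvTranspose1d(base * 4, base * 2, 2, stride=2)
        self.dec2 = ConvBlock(base * 4, base * 2)
        self.up1 = nn.ConvTranspose1d(base * 2, base, 2, stride=2)
        self.dec1 = ConvBlock(base * 2, base)
        self.head = nn.Conv1d(base, 1, kernel_size=1)

    def forward(self, x):                     # x: (batch, channels, beams)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                  # per-beam logits

# Example: a batch of 8 pings, 256 beams, 3 features per beam.
model = PingUNet1D()
logits = model(torch.randn(8, 3, 256))
flags = torch.sigmoid(logits) > 0.5           # boolean anomaly mask per beam
```

The 2D and 3D variants mentioned in the abstract would presumably replace Conv1d with Conv2d over ping groups or Conv3d over the voxelized point cloud, respectively.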
13:45
Identifying important features in a multibeam data set to support automatic classification of depth soundings
* Tony Furey, University of New Brunswick, Canada Ian Church, University of New Brunswick Machine Learning (ML) is a powerful tool for automated classification problems and for understanding the complex relationships in large, high-dimensional data sets. In hydrography, the multibeam echosounder (MBES) generates a large, high-dimensional data set with raw features such as position, attitude, beam number, across/along-track distances, reflectivity, quality, and two-way travel time or depth. A robust statistical and supervised classification analysis of manually edited MBES training data, collected by an EM302 aboard the CCGS Amundsen, reveals patterns that help detect problems in real time and improve QA/QC, and identifies important features to support future ML and deep-learning (DL) applications. The emergence of uncrewed surface vessels (USVs) acts as a force multiplier for MBES data collection, such that it is no longer efficient to process data with the current manual editing and filtering approach. Changing the role of the hydrographer from manually editing data to verifying and assessing results requires automated cleaning techniques that harness the efficiency of ML and are both scalable and explainable. A common problem with ML is the "black box" phenomenon; in hydrography, however, explainability is important to ensure safety of navigation and the preservation of real features. Derivative features, such as the rate of change of position and attitude, and supplementary data, such as wind speed and direction, are incorporated to help explain the data in terms of real-world events such as increased vessel speed or rough seas. Feature ranking and selection are achieved by combining the results of point-biserial correlation, the analysis-of-variance F-score, mutual information, and ReliefF. Pearson, Spearman, and Kendall correlation coefficients and the corresponding covariance matrices enable the removal of highly correlated features. The results of feature ranking and selection are then evaluated by training and testing five ML classification algorithms: decision tree, k-nearest neighbor, support vector machine, random forest, and extreme gradient boosting.
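As a rough illustration of such a ranking ensemble, the sketch below combines point-biserial correlation, the ANOVA F-score, and mutual information into an average rank and then prunes highly correlated features. ReliefF (available in packages such as skrebate) is omitted to keep the example dependency-light, and the correlation threshold is an assumption rather than the authors' setting.

```python
# Hedged sketch of a feature-ranking ensemble for sounding classification.
import numpy as np
import pandas as pd
from scipy.stats import pointbiserialr
from sklearn.feature_selection import f_classif, mutual_info_classif

def rank_features(X: pd.DataFrame, y: np.ndarray) -> pd.Series:
    """y is a binary accept/reject label per sounding; lower rank = better."""
    pb = X.apply(lambda col: abs(pointbiserialr(y, col)[0]))
    f_scores = pd.Series(f_classif(X, y)[0], index=X.columns)
    mi = pd.Series(mutual_info_classif(X, y, random_state=0), index=X.columns)
    # Average the per-method ranks (rank 1 = most informative per method).
    ranks = pd.concat(
        [s.rank(ascending=False) for s in (pb, f_scores, mi)], axis=1
    )
    return ranks.mean(axis=1).sort_values()

def drop_correlated(X: pd.DataFrame, ranking: pd.Series, thresh=0.95):
    """Keep the better-ranked feature of any pair with |Spearman rho| > thresh."""
    corr = X.corr(method="spearman").abs()
    keep = list(ranking.index)                 # best-ranked features first
    for i, a in enumerate(ranking.index):
        for b in ranking.index[i + 1:]:
            if a in keep and b in keep and corr.loc[a, b] > thresh:
                keep.remove(b)
    return keep

# Hypothetical usage, with X a frame of MBES features and labels per sounding:
#   y = (labels == "rejected").astype(int).to_numpy()
#   selected = drop_correlated(X, rank_features(X, y))
```

The surviving feature subset would then be evaluated by training and testing the five classifiers named above (e.g., sklearn's RandomForestClassifier among them).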
14:00
Scalable, Cloud-based Bathymetric Processing
* Brian Calder, CCOM/JHC, University of New Hampshire, United States of America Brian Miles, CCOM/JHC, University of New Hampshire, United States of America Thomas Butkiewicz, CCOM/JHC, University of New Hampshire, United States of America Kindrat Beregovyi, CCOM/JHC, University of New Hampshire, United States of America Current-generation bathymetric processing tools for hydrographic use are predominantly desktop-based, even if the data are stored on a network-connected server. Although massive investment in development and best practice over decades has led to very efficient, comprehensive tools, these systems are fundamentally limited by their implementing technology. That is, once you buy the hardware, it is a depreciating asset that cannot be efficiently expanded or readily scaled to adapt dynamically to current data processing needs as a survey is being worked. A cloud-based processing system has the potential to circumvent many of these limitations, particularly the issue of scalability; there are concomitant concerns, however, including data transfer costs and limitations on compute facilities, that make this proposition nuanced (we ignore here security concerns, which are a separate issue and relatively well understood from normal IT practice). In this work, we therefore investigate the practicalities of a "desktop in the cloud" approach (i.e., deploying desktop processing software in the cloud and operating it remotely) as a transitional scheme, but demonstrate that neither the performance nor the cost of this approach is generally acceptable. We therefore describe and demonstrate an approach for bathymetric (and, by extension, other) processing in the cloud using scalable, cloud-native technologies, with custom adaptations of hydrographic processing software currently in use. We detail the design and implementation of the system, and demonstrate solutions to some of the many issues that must be solved on the path to a practical, deployable system, including spatiotemporal databases, decoupling of system components for better manageability, scalability of compute resources, reliability of cloud-based processing (e.g., fault tolerance and process management), and high-performance, real-time interactive visualization. We call this system CloudMap.
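One recurring cloud-native pattern behind goals like decoupling and fault tolerance is a shared work queue with retry semantics. The sketch below illustrates that general pattern only and is not the CloudMap implementation: a local queue.Queue stands in for a managed cloud queue such as SQS, threads stand in for compute instances, and the tile identifiers are made up.

```python
# Illustrative work-queue pattern: workers pull tiles from a shared queue,
# failed tiles are retried rather than lost, and "scaling" is the worker count.
import queue
import threading

MAX_ATTEMPTS = 3

def process_tile(tile_id: str) -> None:
    # Placeholder for real bathymetric work (e.g., gridding one tile).
    print(f"gridded tile {tile_id}")

def worker(q: queue.Queue) -> None:
    while True:
        try:
            tile_id, attempts = q.get(timeout=1)
        except queue.Empty:
            return                      # queue drained; worker scales down
        try:
            process_tile(tile_id)
        except Exception:
            # Fault tolerance: re-queue until the attempt budget is spent.
            if attempts + 1 < MAX_ATTEMPTS:
                q.put((tile_id, attempts + 1))
        finally:
            q.task_done()

q: queue.Queue = queue.Queue()
for tid in ("12N/432/871", "12N/432/872", "12N/433/871"):
    q.put((tid, 0))

threads = [threading.Thread(target=worker, args=(q,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Decoupling the producer (survey ingest) from the consumers (processing workers) through the queue is what lets compute resources grow and shrink with the workload, which is precisely what a fixed desktop asset cannot do.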
14:15
Automated onboard data processing: an enabler of autonomy
* Ian Davies, EIVA Ole Kristensen, EIVA Sarafina Mcpherson Kimø, EIVA, Denmark During remote hydrographic surveys collecting large amounts of data, it is most often not possible to transmit full data sets to remote surveyors in real time for quality control (QC). Further, fully autonomous surveys require high-quality real-time data for onboard, automated decision-making, such as re-tasking. In both cases, tools for onboard data processing are essential to reduce the need to send data onshore for QC. The configurable workflow process automation tool, Workflow Manager, is designed to automate the steps a surveyor goes through when processing subsea data. It can be set up for operation onboard any vessel, including autonomous setups with AUVs or USVs, through an onboard computer. The tool can automate both data editing and visualisation steps, as well as the dynamic mission planning required in autonomous survey setups. It can feed high-quality, real-time input to navigation and mission planning software, for example leading to immediate resurvey of problem areas, such as noisy regions or holes in the data, rather than requiring the vehicle to return later. Setups and coding practices used for automating workflows will be presented, along with use cases of Workflow Manager for both autonomous and supervised remote surveys. Through onboard software integrating all data collection setups, this solution enables optimal, safe autonomous surveys.
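To illustrate the kind of onboard pipeline such a tool automates, the sketch below chains a simple despiking step with a data-gap check that triggers re-tasking. The step logic, thresholds, and the resurvey hook are hypothetical and do not reflect Workflow Manager's actual configuration format or API.

```python
# Hypothetical onboard QC pipeline: ordered processing steps whose output
# can feed back into mission planning (e.g., resurvey of a data gap).
from typing import Callable, List
import numpy as np

Step = Callable[[np.ndarray], np.ndarray]

def despike(soundings: np.ndarray) -> np.ndarray:
    """Drop soundings more than 3 robust sigmas from the median depth."""
    depth = soundings[:, 2]
    med = np.median(depth)
    mad = np.median(np.abs(depth - med)) or 1e-6
    return soundings[np.abs(depth - med) < 3 * 1.4826 * mad]

def run_pipeline(soundings: np.ndarray, steps: List[Step],
                 min_points: int, resurvey: Callable[[], None]) -> np.ndarray:
    for step in steps:
        soundings = step(soundings)
    if len(soundings) < min_points:    # too sparse: treat as a data hole
        resurvey()                     # hand off to dynamic mission planning
    return soundings

# Toy data: 500 soundings around 40 m depth (x, y, z columns).
pings = np.random.default_rng(0).normal([0, 0, -40], [50, 50, 1], (500, 3))
clean = run_pipeline(pings, [despike], min_points=200,
                     resurvey=lambda: print("re-task vehicle over gap"))
```

The point of the pattern is the feedback loop: cleaning runs onboard in real time, and its result decides immediately whether the vehicle revisits an area, instead of the gap being discovered onshore days later.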
14:30
Advancing efficiency: automation of paper chart production and web-based publication services in hydrographic offices
* Julien Barbeau, Teledyne Geospatial, Canada This paper investigates the integration of automation and web services to streamline paper chart production and publication workflows within hydrographic offices. In response to the decline in sales of paper charts and to answer the request to be S-100 production ready by 2026, hydrographic offices are challenged to modernize traditional cartographic processes while embracing the benefits of S-100. The study explores the implementation of automation technologies for the paper chart production cycle and its seamless integration with web services. Case studies are examined to highlight the advantages and challenges associated with automating various stages of the paper chart compilation and their publication through cloud services. Key areas of emphasis include the reduction of manual labor, elimination of repetitive tasks, and the optimization of data validation processes. Additionally, the research evaluates the impact of automation on production timelines and the overall accessibility of paper charts through web services. Considerations for data security, standardization, and interoperability are addressed to ensure a secure and standardized transition to automated workflows with web-based publication services. The paper aims to guide hydrographic offices in adopting a strategic approach to automation, fostering collaboration between cartographers, IT specialists, and maritime stakeholders. By presenting practical insights and lessons learned, this research contributes valuable information to the ongoing dialogue within the hydrographic community. It aims to facilitate informed decision-making and promote the adoption of efficient paper chart production and publication processes through the integration of automation and web services. |