Huo, Guanying; Yang, Simon X; Li, Qingwu; Zhou, Yan
2017-04-01
Sidescan sonar image segmentation is a very important issue in underwater object detection and recognition. In this paper, a robust and fast method for sidescan sonar image segmentation is proposed, which deals with both speckle noise and intensity inhomogeneity that may cause considerable difficulties in image segmentation. The proposed method integrates nonlocal means-based speckle filtering (NLMSF), coarse segmentation using k-means clustering, and fine segmentation using an improved region-scalable fitting (RSF) model. The NLMSF is used before segmentation to effectively remove speckle noise while preserving meaningful details such as edges and fine features, which makes the segmentation easier and more accurate. After despeckling, a coarse segmentation is obtained by k-means clustering, which reduces the number of iterations required later. In the fine segmentation, to better deal with possible intensity inhomogeneity, an edge-driven constraint is combined with the RSF model, which not only accelerates convergence but also avoids becoming trapped in local minima. The proposed method has been successfully applied to both noisy and inhomogeneous sonar images. Experimental and comparative results on real and synthetic sonar images demonstrate that the proposed method is robust against noise and intensity inhomogeneity, and is also fast and accurate.
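A minimal sketch of the despeckle-then-coarse-segment stage described in this abstract, using generic scikit-image and scikit-learn routines rather than the authors' implementation; the array name `sonar`, the patch sizes, and the two-class assumption are illustrative.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma
from sklearn.cluster import KMeans

def coarse_segment(sonar, n_classes=2):
    # Non-local means despeckling: smooths speckle while preserving edges.
    sigma = np.mean(estimate_sigma(sonar))
    despeckled = denoise_nl_means(sonar, h=1.15 * sigma, patch_size=5,
                                  patch_distance=6, fast_mode=True)
    # k-means on pixel intensities gives a coarse label map that could
    # initialize a region-scalable fitting (RSF) refinement stage.
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(
        despeckled.reshape(-1, 1))
    return despeckled, labels.reshape(sonar.shape)
```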
Robust and fast-converging level set method for side-scan sonar image segmentation
NASA Astrophysics Data System (ADS)
Liu, Yan; Li, Qingwu; Huo, Guanying
2017-11-01
A robust and fast-converging level set method is proposed for side-scan sonar (SSS) image segmentation. First, the noise in each sonar image is removed using an adaptive nonlinear complex diffusion filter. Second, k-means clustering is used to obtain the initial presegmentation image from the denoised image, and then the distance maps of the initial contours are reinitialized to guarantee the accuracy of the numerical calculation used in the level set evolution. Finally, a satisfactory segmentation is achieved using a robust variational level set model, where the evolution control parameters are generated from the presegmentation. The proposed method is successfully applied to both a synthetic image with speckle noise and real SSS images. Experimental results show that the proposed method requires many fewer iterations and is therefore much faster than the fuzzy local information c-means clustering method, the level set method using a gamma observation model, and the enhanced region-scalable fitting method. Moreover, the proposed method usually obtains more accurate segmentation results than the other methods.
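A hedged sketch of the presegmentation and reinitialization step: k-means labels are converted to a signed distance map, a standard well-conditioned initialization for level set evolution. The variational model itself is not reproduced; `img` and the cluster count are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from sklearn.cluster import KMeans

def presegment_signed_distance(img, n_clusters=2):
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        img.reshape(-1, 1)).reshape(img.shape)
    # Take the brightest cluster as the initial object region.
    obj = labels == np.argmax(
        [img[labels == k].mean() for k in range(n_clusters)])
    # Signed distance function: positive inside the object, negative outside.
    phi = distance_transform_edt(obj) - distance_transform_edt(~obj)
    return phi
```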
Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes
2016-01-01
The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM, and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process which, besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, which are detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, which is shown in simulated experiments. PMID:27455279
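An illustrative stand-in for the threshold-and-centroid pipeline: since the paper's improved Otsu variant is not reproduced here, the standard scikit-image Otsu threshold is used; `sonar` and `min_area` are assumed names and values.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def landmark_centroids(sonar, min_area=50):
    mask = sonar > threshold_otsu(sonar)        # foreground highlights
    regions = regionprops(label(mask))          # connected components
    # Centroids of sufficiently large regions serve as point landmarks
    # for a SLAM front end such as the AEKF described above.
    return np.array([r.centroid for r in regions if r.area >= min_area])
```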
Hamill, Daniel; Buscombe, Daniel; Wheaton, Joseph M
2018-01-01
Side scan sonar in low-cost 'fishfinder' systems has become popular in aquatic ecology and sedimentology for imaging submerged riverbed sediment at coverages and resolutions sufficient to relate bed texture to grain size. Traditional methods to map bed texture (i.e., physical samples) are relatively high in cost and low in spatial coverage compared to sonar, which can continuously image several kilometers of channel in a few hours. Towards a goal of automating the classification of bed habitat features, we investigate relationships between substrates and statistical descriptors of bed textures in side scan sonar echograms of alluvial deposits. We develop a method for automated segmentation of bed textures into between two and five grain-size classes. Second-order texture statistics are used in conjunction with a Gaussian Mixture Model to classify the heterogeneous bed into small homogeneous patches of sand, gravel, and boulders with an average accuracy of 80%, 49%, and 61%, respectively. Reach-averaged proportions of these sediment types were within 3% of those from similar maps derived from multibeam sonar. PMID:29538449
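A sketch (not the authors' code) of classifying sonar texture tiles with second-order grey-level co-occurrence statistics and a Gaussian mixture model; `patches`, the GLCM offsets, and the three-class assumption are illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.mixture import GaussianMixture

def glcm_features(patch):
    # patch: small 8-bit (uint8) image tile cut from the echogram.
    glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity", "energy",
                                "correlation")])

def classify_patches(patches, n_classes=3):
    X = np.array([glcm_features(p) for p in patches])
    gmm = GaussianMixture(n_components=n_classes).fit(X)
    return gmm.predict(X)   # e.g., sand / gravel / boulder clusters
```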
The fusion of large scale classified side-scan sonar image mosaics.
Reed, Scott; Tena, Ruiz Ioseba; Capus, Chris; Petillot, Yvan
2006-07-01
This paper presents a unified framework for the creation of classified maps of the seafloor from sonar imagery. Significant challenges in photometric correction, classification, navigation and registration, and image fusion are addressed. The techniques described are directly applicable to a range of remote sensing problems. Recent advances in side-scan data correction are incorporated to compensate for the sonar beam pattern and motion of the acquisition platform. The corrected images are segmented using pixel-based textural features and standard classifiers. In parallel, the navigation of the sonar device is processed using Kalman filtering techniques. A simultaneous localization and mapping framework is adopted to improve the navigation accuracy and produce georeferenced mosaics of the segmented side-scan data. These are fused within a Markovian framework and two fusion models are presented. The first uses a voting scheme regularized by an isotropic Markov random field and is applicable when the reliability of each information source is unknown. The Markov model is also used to inpaint regions where no final classification decision can be reached using pixel level fusion. The second model formally introduces the reliability of each information source into a probabilistic model. Evaluation of the two models using both synthetic images and real data from a large scale survey shows significant quantitative and qualitative improvement using the fusion approach.
A Q-Ising model application for linear-time image segmentation
NASA Astrophysics Data System (ADS)
Bentrem, Frank W.
2010-10-01
A computational method is presented which efficiently segments digital grayscale images by directly applying the Q-state Ising (or Potts) model. Since the Potts model was first proposed in 1952, physicists have studied lattice models to gain deep insights into magnetism and other disordered systems. For some time, researchers have realized that digital images may be modeled in much the same way as these physical systems (i.e., as a square lattice of numerical values). A major drawback of using Potts model methods for image segmentation is that conventional methods run in exponential time. Advances have been made via certain approximations to reduce the segmentation process to power-law time. However, in many applications (such as for sonar imagery), real-time processing requires much greater efficiency. This article contains a description of an energy minimization technique that applies four Potts (Q-Ising) models directly to the image and runs in linear time. The result is analogous to partitioning the system into regions of four classes of magnetism. This direct Potts segmentation technique is demonstrated on photographic, medical, and acoustic images.
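For illustration, a generic Q-state Potts labelling by iterated conditional modes, in which each sweep costs linear time in the number of pixels; this is a textbook stand-in under assumed parameters (`q`, `beta`, `sweeps`), not the article's specific linear-time technique.

```python
import numpy as np

def potts_icm(img, q=4, beta=1.5, sweeps=5):
    # Initial labels: quantize intensities into q bins; class means act as the
    # "states" the pixels can take. Assumes a grayscale image with a spread of
    # intensity values.
    labels = np.digitize(img, np.quantile(img, np.linspace(0, 1, q + 1)[1:-1]))
    means = np.array([img[labels == k].mean() for k in range(q)])
    for _ in range(sweeps):
        data = (img[..., None] - means) ** 2        # data fidelity term
        disagree = np.zeros(img.shape + (q,))       # Potts smoothness term
        for axis in (0, 1):
            for shift in (1, -1):                   # 4-neighbourhood (wraps at edges)
                neigh = np.roll(labels, shift, axis=axis)
                disagree += neigh[..., None] != np.arange(q)
        labels = np.argmin(data + beta * disagree, axis=-1)
    return labels
```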
Partial Membership Latent Dirichlet Allocation for Soft Image Segmentation.
Chen, Chao; Zare, Alina; Trinh, Huy N; Omotara, Gbenga O; Cobb, James Tory; Lagaunne, Timotius A
2017-12-01
Topic models [e.g., probabilistic latent semantic analysis, latent Dirichlet allocation (LDA), and supervised LDA] have been widely used for segmenting imagery. However, these models are confined to crisp segmentation, forcing a visual word (i.e., an image patch) to belong to one and only one topic. Yet, there are many images in which some regions cannot be assigned a crisp categorical label (e.g., transition regions between a foggy sky and the ground or between sand and water at a beach). In these cases, a visual word is best represented with partial memberships across multiple topics. To address this, we present a partial membership LDA (PM-LDA) model and an associated parameter estimation algorithm. This model can be useful for imagery, where a visual word may be a mixture of multiple topics. Experimental results on visual and sonar imagery show that PM-LDA can produce both crisp and soft semantic image segmentations; a capability previous topic modeling methods do not have.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-13
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-898] Certain Marine Sonar Imaging Devices... importation of certain marine sonar imaging devices, products containing the same, and components thereof by... marine sonar imaging devices, products containing the same, and components thereof by reason of...
Object Classification in Semi-Structured Environment Using Forward-Looking Sonar
dos Santos, Matheus; Ribeiro, Pedro Otávio; Núñez, Pedro; Botelho, Silvia
2017-01-01
Submarine exploration using robots has been increasing in recent years. The automation of tasks such as monitoring, inspection, and underwater maintenance requires an understanding of the robot's environment. Object recognition in the scene is becoming a critical issue for these systems. In this work, an underwater object classification pipeline applied to acoustic images acquired by Forward-Looking Sonar (FLS) is studied. The object segmentation combines thresholding, connected-pixel searching, and peak-of-intensity analysis techniques. The object descriptor extracts intensity and geometric features of the detected objects. A comparison between Support Vector Machine, K-Nearest Neighbors, and Random Trees classifiers is presented. An open-source tool was developed to annotate and classify the objects and evaluate their classification performance. The proposed method efficiently segments and classifies the structures in the scene using a real dataset acquired by an underwater vehicle in a harbor area. Experimental results demonstrate the robustness and accuracy of the method described in this paper. PMID:28961163
Technology Infusion of CodeSonar into the Space Network Ground Segment
NASA Technical Reports Server (NTRS)
Benson, Markland J.
2009-01-01
This slide presentation reviews the applicability of CodeSonar to the Space Network software. CodeSonar is a commercial off-the-shelf system that analyzes programs written in C, C++, or Ada for defects in the code. Software engineers use CodeSonar results as an input to the existing source code inspection process. The study is focused on large-scale software developed using formal processes. The systems studied are mission critical in nature, but some use commodity computer systems.
Audible sonar images generated with proprioception for target analysis.
Kuc, Roman B
2017-05-01
Some blind humans have demonstrated the ability to detect and classify objects with echolocation using palatal clicks. An audible-sonar robot mimics human click emissions, binaural hearing, and head movements to extract interaural time and level differences from target echoes. Targets of various complexity are examined by transverse displacements of the sonar and by target pose rotations that model movements performed by the blind. Controlled sonar movements executed by the robot provide data that model proprioception information available to blind humans for examining targets from various aspects. The audible sonar uses this sonar location and orientation information to form two-dimensional target images that are similar to medical diagnostic ultrasound tomograms. Simple targets, such as single round and square posts, produce distinguishable and recognizable images. More complex targets configured with several simple objects generate diffraction effects and multiple reflections that produce image artifacts. The presentation illustrates the capabilities and limitations of target classification from audible sonar images.
NASA Astrophysics Data System (ADS)
Meyer, J.; White, S.
2005-05-01
Classification of lava morphology on a regional scale contributes to the understanding of the distribution and extent of lava flows at a mid-ocean ridge. Seafloor classification is essential to understand the regional undersea environment at mid-ocean ridges. In this study, a classification scheme is developed to identify and extract textural patterns of different lava morphologies along the East Pacific Rise using DSL-120 side-scan sonar and ARGO camera imagery. Application of an accurate image classification technique to side-scan sonar allows us to expand upon the locally available visual ground reference data to make the first comprehensive regional maps of small-scale lava morphology present at a mid-ocean ridge. The submarine lava morphologies focused upon in this study (sheet flows, lobate flows, and pillow flows) have unique textures. Several algorithms were applied to the sonar backscatter intensity images to produce multiple textural image layers useful in distinguishing the different lava morphologies. The intensity and spatially enhanced images were then combined and used as input to a hybrid classification technique. The hybrid classification involves two integrated classifiers, a rule-based expert system classifier and a machine learning classifier. The complementary capabilities of the two integrated classifiers provided a higher accuracy of regional seafloor classification compared to using either classifier alone. Once trained, the hybrid classifier can be applied to classify neighboring images with relative ease. This classification technique has been used to map the lava morphology distribution and infer spatial variability of lava effusion rates along two segments of the East Pacific Rise, at 17 deg S and 9 deg N. The technique may also prove useful for attaining temporal information: repeated documentation of morphology classification in this dynamic environment can be compared to detect regional seafloor change.
A novel underwater dam crack detection and classification approach based on sonar images
Shi, Pengfei; Fan, Xinnan; Ni, Jianjun; Khan, Zubair; Li, Min
2017-01-01
Underwater dam crack detection and classification based on sonar images is a challenging task because underwater environments are complex and because cracks are quite random and diverse in nature. Furthermore, obtainable sonar images are of low resolution. To address these problems, a novel underwater dam crack detection and classification approach based on sonar imagery is proposed. First, the sonar images are divided into image blocks. Second, a clustering analysis of a 3-D feature space is used to obtain the crack fragments. Third, the crack fragments are connected using an improved tensor voting method. Fourth, a minimum spanning tree is used to obtain the crack curve. Finally, an improved evidence theory combined with fuzzy rule reasoning is proposed to classify the cracks. Experimental results show that the proposed approach is able to detect underwater dam cracks and classify them accurately and effectively under complex underwater environments. PMID:28640925
Multiresolution 3-D reconstruction from side-scan sonar images.
Coiras, Enrique; Petillot, Yvan; Lane, David M
2007-02-01
In this paper, a new method for the estimation of seabed elevation maps from side-scan sonar images is presented. The side-scan image formation process is represented by a Lambertian diffuse model, which is then inverted by a multiresolution optimization procedure inspired by expectation-maximization to account for the characteristics of the imaged seafloor region. On convergence of the model, approximations for seabed reflectivity, side-scan beam pattern, and seabed altitude are obtained. The performance of the system is evaluated against a real structure of known dimensions. Reconstruction results for images acquired by different sonar sensors are presented. Applications to augmented reality for the simulation of targets in sonar imagery are also discussed.
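A minimal sketch of the Lambertian forward model underlying such reconstructions: predicted intensity is proportional to reflectivity times the cosine of the local incidence angle. Only the forward prediction is shown; the multiresolution expectation-maximization-style inversion is not. The profile `z`, altitude `h`, spacing `dx`, and gain `k` are assumptions.

```python
import numpy as np

def lambertian_intensity(z, reflectivity, h=10.0, dx=0.1, k=1.0):
    # z: across-track seabed elevation profile (1-D, metres), sonar at x = 0.
    x = np.arange(z.size) * dx                      # across-track ground range
    # Unit vector from each seabed point back to the side-looking sonar.
    to_sonar = np.stack([-x, h - z]) / np.hypot(x, h - z)
    # Unit surface normal from the elevation gradient.
    dz = np.gradient(z, dx)
    normal = np.stack([-dz, np.ones_like(dz)]) / np.hypot(dz, 1.0)
    cos_inc = np.clip((to_sonar * normal).sum(axis=0), 0.0, None)
    return k * reflectivity * cos_inc               # predicted backscatter
```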
Grand Canyon riverbed sediment changes, experimental release of September 2000 - a sample data set
Wong, Florence L.; Anima, Roberto J.; Galanis, Peter; Codianne, Jennifer; Xia, Yu; Bucciarelli, Randy; Hamer, Michael
2003-01-01
An experimental water release from the Glen Canyon Dam into the Colorado River above Grand Canyon was conducted in September 2000 by the U.S. Bureau of Reclamation. The U.S. Geological Survey (USGS) conducted sidescan sonar surveys between Glen Canyon Dam (mile -15) and Diamond Creek (mile 220), Arizona (mile designations after Stevens, 1998) to determine the sediment characteristics of the Colorado River bed before and after the release. The first survey (R3-00-GC, 28 Aug to 5 Sep 2000) was conducted before the release, when the river was at its Low Summer Steady Flow (LSSF) of 8,000 cfs. The second survey (R4-00-GC, 10 to 18 Sep 2000) was conducted immediately after the September 2000 experimental release, when the average daily flow was as high as 30,800 cfs as measured below Glen Canyon Dam (Figure 2). Riverbed sediment properties interpreted from the sidescan sonar images include sediment type and sandwaves; overall changes in these properties between the two surveys were calculated. Sidescan sonar data from the USGS surveys were processed for segments of the Colorado River from Glen Canyon Dam (mile -15) to Phantom Ranch (mile 87.7, Figure 3). The surveys targeted pools between rapids that are part of the Grand Canyon Monitoring and Research Center (GCMRC http://www.gcmrc.gov/) physical sciences study. Maps interpreted from the sidescan sonar images show the distribution of sediment types (bedrock, boulders, pebbles or cobbles, and sand) and the extent of sandwaves for each of the pre- and post-flow surveys. The changes between the two surveys were calculated with spatial arithmetic and include fining, coarsening, erosion, deposition, and the appearance or disappearance of sandwaves.
Probability-Based Recognition Framework for Underwater Landmarks Using Sonar Images †.
Lee, Yeongjun; Choi, Jinwoo; Ko, Nak Yong; Choi, Hyun-Taek
2017-08-24
This paper proposes a probability-based framework for recognizing underwater landmarks using sonar images. Current recognition methods use a single image, which does not provide reliable results because of weaknesses of sonar images such as an unstable acoustic source, heavy speckle noise, low resolution, and a single channel. However, using consecutive sonar images, if the status (i.e., the existence and identity, or name) of an object is continuously evaluated by a stochastic method, the result of the recognition method is available for calculating the uncertainty, and it is more suitable for various applications. Our proposed framework consists of three steps: (1) candidate selection, (2) continuity evaluation, and (3) Bayesian feature estimation. Two probability methods, particle filtering and Bayesian feature estimation, are used to repeatedly estimate the continuity and features of objects in consecutive images. Thus, the status of the object is repeatedly predicted and updated by a stochastic method. Furthermore, we develop an artificial landmark to increase detectability by an imaging sonar; its design exploits the characteristics of acoustic waves, such as instability and reflection that depend on the roughness of the reflector surface. The proposed method is verified by conducting basin experiments, and the results are presented. PMID:28837068
Sonar target enhancement by shrinkage of incoherent wavelet coefficients.
Hunter, Alan J; van Vossen, Robbert
2014-01-01
Background reverberation can obscure useful features of the target echo response in broadband low-frequency sonar images, adversely affecting detection and classification performance. This paper describes a resolution and phase-preserving means of separating the target response from the background reverberation noise using a coherence-based wavelet shrinkage method proposed recently for de-noising magnetic resonance images. The algorithm weights the image wavelet coefficients in proportion to their coherence between different looks under the assumption that the target response is more coherent than the background. The algorithm is demonstrated successfully on experimental synthetic aperture sonar data from a broadband low-frequency sonar developed for buried object detection.
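A hedged sketch of coherence-weighted wavelet shrinkage on two co-registered looks of the same scene, in the spirit of the method described above but not the authors' code; the wavelet, decomposition level, and averaging window are assumptions, and the looks are taken to be real-valued magnitude images.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def coherence_shrink(look1, look2, wavelet="db4", level=3, win=5, eps=1e-12):
    c1 = pywt.wavedec2(look1, wavelet, level=level)
    c2 = pywt.wavedec2(look2, wavelet, level=level)
    out = [0.5 * (c1[0] + c2[0])]                    # keep approximation band
    for d1, d2 in zip(c1[1:], c2[1:]):
        bands = []
        for a, b in zip(d1, d2):
            # Locally averaged inter-look coherence in [0, 1]: coherent
            # (target-like) coefficients are kept, incoherent reverberation
            # is shrunk towards zero.
            num = np.abs(uniform_filter(a * b, win))
            den = np.sqrt(uniform_filter(a ** 2, win) *
                          uniform_filter(b ** 2, win)) + eps
            bands.append((num / den) * 0.5 * (a + b))
        out.append(tuple(bands))
    return pywt.waverec2(out, wavelet)
```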
Processing of SeaMARC swath sonar imagery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pratson, L.; Malinverno, A.; Edwards, M.
1990-05-01
Side-scan swath sonar systems have become an increasingly important means of mapping the sea floor. Two such systems are the deep-towed, high-resolution SeaMARC I sonar, which has a variable swath width of up to 5 km, and the shallow-towed, lower-resolution SeaMARC II sonar, which has a swath width of 10 km. The sea-floor imagery of acoustic backscatter output by the SeaMARC sonars is analogous to aerial photographs and airborne side-looking radar images of continental topography. Geologic interpretation of the sea-floor imagery is greatly facilitated by image processing. Image processing of the digital backscatter data involves removal of noise by median filtering, spatial filtering to remove sonar scans of anomalous intensity, across-track corrections to remove beam patterns caused by nonuniform response of the sonar transducers to changes in incident angle, and contrast enhancement by histogram equalization to maximize the available dynamic range. Correct geologic interpretation requires submarine structural fabrics to be displayed in their proper locations and orientations. Geographic projection of sea-floor imagery is achieved by merging the enhanced imagery with the sonar vehicle navigation and correcting for vehicle attitude. Co-registration of bathymetry with sonar imagery introduces sea-floor relief and permits the imagery to be displayed in three-dimensional perspectives, furthering the ability of the marine geologist to infer the processes shaping formerly hidden subsea terrains.
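Two of the listed processing steps, median filtering and histogram equalization, have simple generic stand-ins in SciPy/scikit-image, sketched below; the sensor-specific beam-pattern and geometric corrections are not reproduced, and the array name and filter size are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage import exposure

def enhance_sidescan(strip):
    despiked = median_filter(strip, size=3)      # suppress noise/dropouts
    return exposure.equalize_hist(despiked)      # contrast enhancement
```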
Minehunting sonar system research and development
NASA Astrophysics Data System (ADS)
Ferguson, Brian
2002-05-01
Sea mines have the potential to threaten the freedom of the seas by disrupting maritime trade and restricting the freedom of maneuver of navies. The acoustic detection, localization, and classification of sea mines involves a sequence of operations starting with the transmission of a sonar pulse and ending with an operator interpreting the information on a sonar display. A recent improvement to the process stems from the application of neural networks to the computer-aided detection of sea mines. The advent of ultrawideband sonar transducers together with pulse compression techniques offers a thousandfold increase in the bandwidth-time product of conventional minehunting sonar transmissions, enabling stealth mines to be detected at longer ranges. These wideband signals also enable mines to be imaged at safe standoff distances by applying tomographic image reconstruction techniques. The coupling of wideband transducer technology with synthetic aperture processing enhances the resolution of side scan sonars in both the cross-track and along-track directions. The principles on which conventional and advanced minehunting sonars are based are reviewed, and the results of applying novel sonar signal processing algorithms to high-frequency sonar data collected in Australian waters are presented.
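An illustration of the pulse-compression idea mentioned above: matched filtering of a wideband linear FM (chirp) transmission concentrates echo energy and improves range resolution roughly in proportion to the bandwidth-time product. All parameter values are arbitrary examples, not those of any fielded sonar.

```python
import numpy as np
from scipy.signal import chirp, fftconvolve

fs = 200e3                                   # sample rate (Hz)
T = 10e-3                                    # pulse length (s)
t = np.arange(0, T, 1 / fs)
tx = chirp(t, f0=20e3, f1=80e3, t1=T)        # 60 kHz-bandwidth pulse

echo = np.zeros(4000)
echo[1500:1500 + tx.size] += 0.5 * tx        # toy echo at one range bin
echo += 0.2 * np.random.randn(echo.size)     # reverberation/noise

compressed = fftconvolve(echo, tx[::-1], mode="same")   # matched filter
print("peak range bin:", int(np.argmax(np.abs(compressed))))
```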
Distant touch hydrodynamic imaging with an artificial lateral line.
Yang, Yingchen; Chen, Jack; Engel, Jonathan; Pandya, Saunvit; Chen, Nannan; Tucker, Craig; Coombs, Sheryl; Jones, Douglas L; Liu, Chang
2006-12-12
Nearly all underwater vehicles and surface ships today use sonar and vision for imaging and navigation. However, sonar and vision systems face various limitations, e.g., sonar blind zones, dark or murky environments, etc. Evolved over millions of years, fish use the lateral line, a distributed linear array of flow sensing organs, for underwater hydrodynamic imaging and information extraction. We demonstrate here a proof-of-concept artificial lateral line system. It enables a distant touch hydrodynamic imaging capability to critically augment sonar and vision systems. We show that the artificial lateral line can successfully perform dipole source localization and hydrodynamic wake detection. The development of the artificial lateral line is aimed at fundamentally enhancing human ability to detect, navigate, and survive in the underwater environment.
Probability-Based Recognition Framework for Underwater Landmarks Using Sonar Images †
Choi, Jinwoo; Choi, Hyun-Taek
2017-01-01
This paper proposes a probability-based framework for recognizing underwater landmarks using sonar images. Current recognition methods use a single image, which does not provide reliable results because of weaknesses of the sonar image such as unstable acoustic source, many speckle noises, low resolution images, single channel image, and so on. However, using consecutive sonar images, if the status—i.e., the existence and identity (or name)—of an object is continuously evaluated by a stochastic method, the result of the recognition method is available for calculating the uncertainty, and it is more suitable for various applications. Our proposed framework consists of three steps: (1) candidate selection, (2) continuity evaluation, and (3) Bayesian feature estimation. Two probability methods—particle filtering and Bayesian feature estimation—are used to repeatedly estimate the continuity and feature of objects in consecutive images. Thus, the status of the object is repeatedly predicted and updated by a stochastic method. Furthermore, we develop an artificial landmark to increase detectability by an imaging sonar, which we apply to the characteristics of acoustic waves, such as instability and reflection depending on the roughness of the reflector surface. The proposed method is verified by conducting basin experiments, and the results are presented. PMID:28837068
Side-scan sonar imaging of the Colorado River, Grand Canyon
Anima, Roberto; Wong, Florence L.; Hogg, David; Galanis, Peter
2007-01-01
This paper presents data collection methods and side-scan sonar data collected along the Colorado River in Grand Canyon in August and September of 2000. The purpose of the data collection effort was to image the distribution of sand between Glen Canyon Dam and river mile 87.4 before and after the 31,600 cfs flow of September 6-8. The side-scan sonar imaging focused on pools between rapids but included smaller rapids where possible.
Acoustic Facies Analysis of Side-Scan Sonar Data
NASA Astrophysics Data System (ADS)
Dwan, Fa Shu
Acoustic facies analysis methods have allowed the generation of system-independent values for the quantitative seafloor acoustic parameter, backscattering strength, from GLORIA and (TAMU)^2 side-scan sonar data. The resulting acoustic facies parameters enable quantitative comparisons of data collected by different sonar systems, data from different environments, and measurements made with different survey geometries. Backscattering strength values were extracted from the sonar amplitude data by inversion based on the sonar equation. Image processing products reveal seafloor features and patterns of relative intensity. To quantitatively compare data collected at different times or by different systems, and to ground-truth measurements and geoacoustic models, quantitative corrections must be made on any given data set for system source level, beam pattern, time-varying gain, processing gain, transmission loss, absorption, insonified area contribution, and grazing angle effects. In the sonar equation, backscattering strength is the sonar parameter which is directly related to seafloor properties. The GLORIA data used in this study are from the edge of a distal lobe of the Monterey Fan. An interfingered region of strong and weak seafloor signal returns from a flat seafloor region provides an ideal data set for this study. Inversion of imagery data from the region allows the quantitative definition of different acoustic facies. The (TAMU)^2 data used are from a calibration site near the Green Canyon area of the Gulf of Mexico. Acoustic facies analysis techniques were implemented to generate statistical information for acoustic facies based on the estimates of backscattering strength. The backscattering strength values have been compared with Lambert's Law and other functions to parameterize the description of the acoustic facies. The resulting Lambertian constant values range from -26 dB to -36 dB. A modified Lambert relationship, which consists of both intercept and slope terms, appears to represent the BSS versus grazing angle profiles better, based on chi-squared testing and error ellipse generation. Different regression functions, composed of trigonometric functions, were analyzed for different segments of the BSS profiles. A cotangent or sine/cosine function shows promising results for representing the entire grazing angle span of the BSS profiles.
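A hedged sketch of fitting a modified Lambert relationship (intercept plus slope on the Lambert term) to backscattering strength versus grazing angle, as discussed above; `theta_deg` and `bss_db` are assumed input arrays, and the exact functional form used in the study may differ.

```python
import numpy as np

def fit_modified_lambert(theta_deg, bss_db):
    # Classical Lambert's rule: BS = mu + 20*log10(sin(theta)); the modified
    # form fitted here adds a free slope: BS = a + b*20*log10(sin(theta)).
    x = 20.0 * np.log10(np.sin(np.radians(theta_deg)))
    b, a = np.polyfit(x, bss_db, 1)
    return a, b        # a ~ Lambertian constant (dB), b ~ angular slope
```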
Fusion of Local Statistical Parameters for Buried Underwater Mine Detection in Sonar Imaging
NASA Astrophysics Data System (ADS)
Maussang, F.; Rombaut, M.; Chanussot, J.; Hétet, A.; Amate, M.
2008-12-01
Detection of buried underwater objects, and especially mines, is currently a crucial strategic task. Images provided by sonar systems able to penetrate the sea floor, such as synthetic aperture sonar (SAS), are of great interest for the detection and classification of such objects. However, the signal-to-noise ratio is fairly low, and advanced information processing is required for a correct and reliable detection of the echoes generated by the objects. The detection method proposed in this paper is based on a data-fusion architecture using belief theory. The input data of this architecture are local statistical characteristics extracted from SAS data, corresponding to the first-, second-, third-, and fourth-order statistical properties of the sonar images, respectively. The relevance of these parameters is derived from a statistical model of the sonar data. Numerical criteria are also proposed to estimate the detection performance and to validate the method.
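A sketch of extracting first- through fourth-order local statistics (mean, variance, skewness, kurtosis) of the kind fed to the belief-function fusion described above; the sliding-window implementation and window size are assumptions, not the authors' code.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_moments(img, win=9):
    m1 = uniform_filter(img, win)                      # local mean
    m2 = uniform_filter(img ** 2, win)
    m3 = uniform_filter(img ** 3, win)
    m4 = uniform_filter(img ** 4, win)
    var = np.maximum(m2 - m1 ** 2, 1e-12)              # local variance
    skew = (m3 - 3 * m1 * m2 + 2 * m1 ** 3) / var ** 1.5
    kurt = (m4 - 4 * m1 * m3 + 6 * m1 ** 2 * m2 - 3 * m1 ** 4) / var ** 2
    return m1, var, skew, kurt
```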
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holcomb, R.T.; Moore, J.G.; Lipman, P.W.
The GLORIA long-range sonar imaging system has revealed fields of large lava flows in the Hawaiian Trough east and south of Hawaii in water as deep as 5.5 km. Flows in the most extensive field (110 km long) have erupted from the deep submarine segment of Kilauea's east rift zone. Other flows have been erupted from Loihi and Mauna Loa. This discovery confirms a suspicion, long held from subaerial studies, that voluminous submarine flows are erupted from Hawaiian volcanoes, and it supports an inference that summit calderas repeatedly collapse and fill at intervals of centuries to millennia owing to voluminous eruptions. These extensive flows differ greatly in form from pillow lavas found previously along shallower segments of the rift zones; therefore, revision of concepts of volcano stratigraphy and structure may be required.
Increasing circular synthetic aperture sonar resolution via adapted wave atoms deconvolution.
Pailhas, Yan; Petillot, Yvan; Mulgrew, Bernard
2017-04-01
Circular Synthetic Aperture Sonar (CSAS) processing coherently combines Synthetic Aperture Sonar (SAS) data acquired along a circular trajectory. This approach has a number of advantages; in particular, it maximises the aperture length of a SAS system, producing very high resolution sonar images. CSAS image reconstruction using back-projection algorithms, however, introduces a dissymmetry in the impulse response as the imaged point moves away from the centre of the acquisition circle. This paper proposes a sampling scheme for CSAS image reconstruction which allows every point, within the full field of view of the system, to be considered as the centre of a virtual CSAS acquisition. As a direct consequence of using the proposed resampling scheme, the point spread function (PSF) is uniform over the full CSAS image. Closed form solutions for the CSAS PSF are derived analytically, both in the image and the Fourier domain. This thorough knowledge of the PSF leads naturally to the proposed adapted wave-atom basis for CSAS image decomposition. The wave-atom deconvolution is successfully applied to simulated data, increasing the image resolution by reducing the PSF energy leakage.
Fractal analysis of seafloor textures for target detection in synthetic aperture sonar imagery
NASA Astrophysics Data System (ADS)
Nabelek, T.; Keller, J.; Galusha, A.; Zare, A.
2018-04-01
Fractal analysis of an image is a mathematical approach to generate surface related features from an image or image tile that can be applied to image segmentation and to object recognition. In undersea target countermeasures, the targets of interest can appear as anomalies in a variety of contexts, visually different textures on the seafloor. In this paper, we evaluate the use of fractal dimension as a primary feature and related characteristics as secondary features to be extracted from synthetic aperture sonar (SAS) imagery for the purpose of target detection. We develop three separate methods for computing fractal dimension. Tiles with targets are compared to others from the same background textures without targets. The different fractal dimension feature methods are tested with respect to how well they can be used to detect targets vs. false alarms within the same contexts. These features are evaluated for utility using a set of image tiles extracted from a SAS data set generated by the U.S. Navy in conjunction with the Office of Naval Research. We find that all three methods perform well in the classification task, with a fractional Brownian motion model performing the best among the individual methods. We also find that the secondary features are just as useful, if not more so, in classifying false alarms vs. targets. The best classification accuracy overall, in our experimentation, is found when the features from all three methods are combined into a single feature vector.
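As a point of reference, a generic box-counting estimate of fractal dimension for a thresholded texture tile; the paper's three specific estimators (including the fractional Brownian motion model) are not reproduced, and the threshold choice is an assumption.

```python
import numpy as np

def box_counting_dimension(tile, threshold=None):
    # Binarize the tile (mean threshold by default), crop to a power-of-two
    # square, then count occupied boxes at dyadic scales.
    if threshold is None:
        threshold = tile.mean()
    mask = tile > threshold
    n = 2 ** int(np.floor(np.log2(min(mask.shape))))
    mask = mask[:n, :n]
    sizes = 2 ** np.arange(int(np.log2(n)), 0, -1)
    counts = []
    for s in sizes:
        boxes = mask.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        counts.append(max(np.count_nonzero(boxes), 1))
    # Fractal dimension ~ slope of log N(s) against log(1/s).
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope
```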
Phase unwrapping using region-based markov random field model.
Dong, Ying; Ji, Jim
2010-01-01
Phase unwrapping is a classical problem in Magnetic Resonance Imaging (MRI), Interferometric Synthetic Aperture Radar and Sonar (InSAR/InSAS), fringe pattern analysis, and spectroscopy. Although many methods have been proposed to address this problem, robust and effective phase unwrapping remains a challenge. This paper presents a novel phase unwrapping method using a region-based Markov Random Field (MRF) model. Specifically, the phase image is segmented into regions within which the phase is not wrapped. Then, the phase image is unwrapped between different regions using an improved Highest Confidence First (HCF) algorithm to optimize the MRF model. The proposed method has desirable theoretical properties as well as an efficient implementation. Simulations and experimental results on MRI images show that the proposed method provides phase unwrapping similar to or better than the Phase Unwrapping MAx-flow/min-cut (PUMA) method and the ZπM method.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-26
... Certain Marine Sonar Imaging Devices, Products Containing the Same, and Components Thereof, DN 2981; the... United States after importation of certain marine sonar imaging devices, products containing the same...
Shallow water benthic imaging and substrate characterization using recreational-grade sidescan-sonar
Buscombe, Daniel D.
2017-01-01
In recent years, lightweight, inexpensive, vessel-mounted ‘recreational grade’ sonar systems have rapidly grown in popularity among aquatic scientists for swath imaging of benthic substrates. To promote an ongoing ‘democratization’ of acoustical imaging of shallow water environments, methods to carry out geometric and radiometric correction and georectification of sonar echograms are presented, based on simplified models for sonar-target geometry and acoustic backscattering and attenuation in shallow water. Procedures are described for automated removal of acoustic shadows, identification of the bed-water interface for situations when the water is too turbid or turbulent for reliable depth echosounding, and automated bed substrate classification based on single-beam full-waveform analysis. These methods are encoded in an open-source and freely available software package, which should further facilitate use of recreational-grade sidescan sonar in a fully automated and objective manner. The sequential correction, mapping, and analysis steps are demonstrated using a data set from a shallow freshwater environment.
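A rough sketch of an automated bed-water interface pick of the kind mentioned above, assuming `ping` is a single 1-D echo-amplitude return; the published package implements more robust logic, so this first-threshold-crossing rule is only illustrative, and the blanking and window lengths are assumptions.

```python
import numpy as np

def first_return_index(ping, k=3.0, blank=20):
    # Noise floor estimated from early water-column samples after a short
    # transmit-blanking interval; the bed pick is the first sample exceeding
    # the floor by k standard deviations.
    noise = ping[blank:blank + 100]
    thresh = noise.mean() + k * noise.std()
    above = np.flatnonzero(ping[blank:] > thresh)
    return blank + above[0] if above.size else None
```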
A Fisheries Application of a Dual-Frequency Identification Sonar Acoustic Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moursund, Russell A.; Carlson, Thomas J.; Peters, Rock D.
2003-06-01
The uses of an acoustic camera in fish passage research at hydropower facilities are being explored by the U.S. Army Corps of Engineers. The Dual-Frequency Identification Sonar (DIDSON) is a high-resolution imaging sonar that obtains near video-quality images for the identification of objects underwater. Developed originally for the Navy by the University of Washington's Applied Physics Laboratory, it bridges the gap between existing fisheries assessment sonar and optical systems. Traditional fisheries assessment sonars detect targets at long ranges but cannot record the shape of targets. The images within 12 m of this acoustic camera are so clear that one can see fish undulating as they swim and can tell the head from the tail in otherwise zero-visibility water. In the 1.8 MHz high-frequency mode, this system is composed of 96 beams over a 29-degree field of view. This high resolution and a fast frame rate allow the acoustic camera to produce near video-quality images of objects through time. This technology redefines many of the traditional limitations of sonar for fisheries and aquatic ecology. Images can be taken of fish in confined spaces, close to structural or surface boundaries, and in the presence of entrained air. The targets themselves can be visualized in real time. The DIDSON can be used where conventional underwater cameras would be limited in sampling range to < 1 m by low light levels and high turbidity, and where traditional sonar would be limited by the confined sample volume. Results of recent testing at The Dalles Dam, on the lower Columbia River in Oregon, USA, are shown.
Submarine Combat Systems Engineering Project Capstone Project
2011-06-06
The system's sensors include sonar, imaging, Electronic Surveillance (ES), and communications; these sensors passively detect contacts. [The remainder of this extracted snippet is figure residue listing the engagement sequence (Search, Detect, Identify, Track, Decide, Engage, Assess) and sensor types (SONAR, Imagery, EW, ESM, TC).]
Processing, mosaicking and management of the Monterey Bay digital sidescan-sonar images
Chavez, P.S.; Isbrecht, J.; Galanis, P.; Gabel, G.L.; Sides, S.C.; Soltesz, D.L.; Ross, Stephanie L.; Velasco, M.G.
2002-01-01
Sidescan-sonar imaging systems with digital capabilities have now been available for approximately 20 years. In this paper we present several of the digital image processing techniques developed by the U.S. Geological Survey (USGS) and used to apply intensity/radiometric and geometric corrections to, as well as enhance and digitally mosaic, sidescan-sonar images of the Monterey Bay region. New software run by a WWW server was designed and implemented to allow very large image data sets, such as the digital mosaic, to be easily viewed interactively, including the ability to roam throughout the digital mosaic at the web site in either compressed or full 1-m resolution. The processing is separated into two stages: preprocessing and information extraction. In the preprocessing stage, sensor-specific algorithms are applied to correct for both geometric and intensity/radiometric distortions introduced by the sensor. This is followed by digital mosaicking of the track-line strips into quadrangle format, which can be used as input to either visual or digital image analysis and interpretation. An automatic seam removal procedure was used in combination with an interactive digital feathering/stenciling procedure to help minimize tone or seam matching problems between image strips from adjacent track-lines. The sidescan-sonar image processing package is part of the USGS Mini Image Processing System (MIPS) and has been designed to process data collected by any 'generic' digital sidescan-sonar imaging system. The USGS MIPS software, developed over the last 20 years as a public domain package, is available on the WWW at: http://terraweb.wr.usgs.gov/trs/software.html.
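A toy version of the feathering idea used when mosaicking adjacent track-line strips: within the overlap the two strips are blended with a linear ramp so tonal seams are suppressed. Array names and the ramp geometry are assumptions, not the USGS MIPS implementation.

```python
import numpy as np

def feather_blend(strip_a, strip_b, overlap):
    # strip_a and strip_b are georeferenced arrays of equal height; the last
    # `overlap` columns of strip_a coincide with the first `overlap` of strip_b.
    ramp = np.linspace(1.0, 0.0, overlap)[None, :]
    blended = ramp * strip_a[:, -overlap:] + (1 - ramp) * strip_b[:, :overlap]
    return np.hstack([strip_a[:, :-overlap], blended, strip_b[:, overlap:]])
```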
Shallow Water Imaging Sonar System for Environmental Surveying Final Report CRADA No. TC-1130-95
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, L. C.; Rosenbaum, H.
The scope of this research is to develop a shallow water sonar system designed to detect and map the location of objects such as hazardous wastes or discarded ordnance in coastal waters. The system will use high-frequency, wide-bandwidth imaging sonar, mounted on a moving platform towed behind a boat, to detect and identify objects on the sea bottom. Resolved images can be obtained even if the targets are buried in an overlayer of silt. Reference 1 (also attached) summarizes the statement of work and the scope of collaboration.
Wang, Xingmei; Liu, Shu; Liu, Zhipeng
2017-01-01
This paper proposes a combination of non-local spatial information and a quantum-inspired shuffled frog leaping algorithm to detect underwater objects in sonar images. Specifically, for the first time, the problem of an inappropriate filtering degree parameter, which commonly occurs when non-local spatial information is used and seriously affects the denoising performance in sonar images, is solved by utilizing a novel filtering degree parameter. Then, a quantum-inspired shuffled frog leaping algorithm based on a new search mechanism (QSFLA-NSM) is proposed to detect sonar images precisely and quickly. Each frog individual is directly encoded by real numbers, which greatly simplifies the evolution process of the quantum-inspired shuffled frog leaping algorithm (QSFLA). Meanwhile, a fitness function combining intra-class difference with inter-class difference is adopted to evaluate frog positions more accurately. On this basis, drawing on an analysis of the quantum-behaved particle swarm optimization (QPSO) and the shuffled frog leaping algorithm (SFLA), a new search mechanism is developed to improve the searching ability and detection accuracy while further reducing the time complexity. Finally, the results of comparative experiments using the original sonar images, the UCI data sets, and the benchmark functions demonstrate the effectiveness and adaptability of the proposed method. PMID:28542266
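A sketch of a threshold fitness of the kind described, combining intra-class difference (to be minimized) with inter-class difference (to be maximized) for a candidate gray-level threshold; the exact weighting used by QSFLA-NSM is not reproduced, and a simple ratio is shown instead.

```python
import numpy as np

def threshold_fitness(img, t):
    # Split pixels at threshold t; reward large between-class separation and
    # small within-class spread (larger fitness is better).
    lo, hi = img[img <= t], img[img > t]
    if lo.size == 0 or hi.size == 0:
        return -np.inf
    intra = lo.var() * lo.size / img.size + hi.var() * hi.size / img.size
    inter = (lo.mean() - hi.mean()) ** 2
    return inter / (intra + 1e-12)
```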
The sonar aperture and its neural representation in bats.
Heinrich, Melina; Warmbold, Alexander; Hoffmann, Susanne; Firzlaff, Uwe; Wiegrebe, Lutz
2011-10-26
As opposed to visual imaging, biosonar imaging of spatial object properties represents a challenge for the auditory system because its sensory epithelium is not arranged along space axes. For echolocating bats, object width is encoded by the amplitude of its echo (echo intensity) but also by the naturally covarying spread of angles of incidence from which the echoes impinge on the bat's ears (sonar aperture). It is unclear whether bats use the echo intensity and/or the sonar aperture to estimate an object's width. We addressed this question in a combined psychophysical and electrophysiological approach. In three virtual-object playback experiments, bats of the species Phyllostomus discolor had to discriminate simple reflections of their own echolocation calls differing in echo intensity, sonar aperture, or both. Discrimination performance for objects with physically correct covariation of sonar aperture and echo intensity ("object width") did not differ from discrimination performances when only the sonar aperture was varied. Thus, the bats were able to detect changes in object width in the absence of intensity cues. The psychophysical results are reflected in the responses of a population of units in the auditory midbrain and cortex that responded strongest to echoes from objects with a specific sonar aperture, regardless of variations in echo intensity. Neurometric functions obtained from cortical units encoding the sonar aperture are sufficient to explain the behavioral performance of the bats. These current data show that the sonar aperture is a behaviorally relevant and reliably encoded cue for object size in bat sonar.
Aided target recognition processing of MUDSS sonar data
NASA Astrophysics Data System (ADS)
Lau, Brian; Chao, Tien-Hsin
1998-09-01
The Mobile Underwater Debris Survey System (MUDSS) is a collaborative effort by the Navy and the Jet Propulsion Lab to demonstrate multi-sensor, real-time survey of underwater sites for ordnance and explosive waste (OEW). We describe the sonar processing algorithm, a novel target recognition algorithm incorporating wavelets, morphological image processing, expansion by Hermite polynomials, and neural networks. This algorithm has found all planted targets in MUDSS tests and has achieved spectacular success on another Coastal Systems Station (CSS) sonar image database.
Powers, Jarrod; Brewer, Shannon K.; Long, James M.; Campbell, Thomas
2015-01-01
Side-scan sonar is a valuable tool for mapping habitat features in many aquatic systems suggesting it may also be useful for locating sedentary biota. The objective of this study was to determine if side-scan sonar could be used to identify freshwater mussel (unionid) beds and the required environmental conditions. We used side-scan sonar to develop a series of mussel-bed reference images by placing mussel shells within homogenous areas of fine and coarse substrates. We then used side-scan sonar to map a 32-km river reach during spring and summer. Using our mussel-bed reference images, several river locations were identified where mussel beds appeared to exist in the scanned images and we chose a subset of sites (n = 17) for field validation. The validation confirmed that ~60% of the sites had mussel beds and ~80% had some mussels or shells present. Water depth was significantly related to our ability to predict mussel-bed locations: predictive ability was greatest at depths of 1–2 m, but decreased in water >2-m deep. We determined side-scan sonar is an effective tool for preliminary assessments of mussel presence during times when they are located at or above the substrate surface and in relatively fine substrates excluding fine silt.
Uranga, Jon; Arrizabalaga, Haritz; Boyra, Guillermo; Hernandez, Maria Carmen; Goñi, Nicolas; Arregui, Igor; Fernandes, Jose A; Yurramendi, Yosu; Santiago, Josu
2017-01-01
This study presents a methodology for the automated analysis of commercial medium-range sonar signals for detecting presence/absence of bluefin tuna (Thunnus thynnus) in the Bay of Biscay. The approach uses image processing techniques to analyze sonar screenshots. For each sonar image we extracted measurable regions and analyzed their characteristics. Scientific data were used to classify each region into a class ("tuna" or "no-tuna") and build a dataset to train and evaluate classification models by using supervised learning. The methodology performed well when validated with commercial sonar screenshots, and has the potential to automatically analyze high volumes of data at a low cost. This represents a first milestone towards the development of acoustic, fishery-independent indices of abundance for bluefin tuna in the Bay of Biscay. Future research lines and additional alternatives to inform stock assessments are also discussed. PMID:28152032
A novel approach to surveying sturgeon using side-scan sonar and occupancy modeling
Flowers, H. Jared; Hightower, Joseph E.
2013-01-01
Technological advances represent opportunities to enhance and supplement traditional fisheries sampling approaches. One example with growing importance for fisheries research is hydroacoustic technologies such as side-scan sonar. Advantages of side-scan sonar over traditional techniques include the ability to sample large areas efficiently and the potential to survey fish without physical handling, which is important for species of conservation concern such as endangered sturgeons. Our objectives were to design an efficient survey methodology for sampling Atlantic Sturgeon Acipenser oxyrinchus by using side-scan sonar and to develop methods for analyzing these data. In North Carolina and South Carolina, we surveyed six rivers thought to contain varying abundances of sturgeon by using a combination of side-scan sonar, telemetry, and video cameras (i.e., to sample jumping sturgeon). Lower reaches of each river near the saltwater-freshwater interface were surveyed on three occasions (generally successive days), and we used occupancy modeling to analyze these data. We were able to detect sturgeon in five of six rivers by using these methods. Side-scan sonar was effective in detecting sturgeon, with estimated gear-specific detection probabilities ranging from 0.2 to 0.5 and river-specific occupancy estimates (per 2-km river segment) ranging from 0.0 to 0.8. Future extensions of this occupancy modeling framework will involve the use of side-scan sonar data to assess sturgeon habitat and abundance in different river systems.
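As an illustration of the occupancy-modeling step described in the preceding abstract, the following minimal sketch maximizes a single-season occupancy likelihood with a constant per-visit detection probability. This is not the authors' code; the detection histories, parameter names, and use of scipy are assumptions for illustration only.

```python
# Minimal single-season occupancy model: psi = occupancy probability,
# p = per-visit detection probability (the "gear-specific" quantity above).
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, histories):
    """histories: (n_sites, n_occasions) array of 0/1 detections."""
    psi, p = 1.0 / (1.0 + np.exp(-params))       # logit scale -> probabilities
    nll = 0.0
    for y in histories:
        lik = psi * np.prod(p ** y * (1.0 - p) ** (1 - y))
        if y.sum() == 0:                          # an all-zero history may also mean "unoccupied"
            lik += 1.0 - psi
        nll -= np.log(lik)
    return nll

# Hypothetical detection histories for six river segments, three occasions each.
histories = np.array([[1, 0, 1], [0, 0, 0], [1, 1, 0],
                      [0, 0, 0], [0, 1, 0], [1, 0, 0]])
fit = minimize(neg_log_likelihood, x0=np.zeros(2), args=(histories,))
psi_hat, p_hat = 1.0 / (1.0 + np.exp(-fit.x))
print(f"occupancy estimate {psi_hat:.2f}, detection probability {p_hat:.2f}")
```

In practice the likelihood would also carry covariates (for example, gear or river) on psi and p, but the bookkeeping above is the core of the approach.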
Reliability of fish size estimates obtained from multibeam imaging sonar
Hightower, Joseph E.; Magowan, Kevin J.; Brown, Lori M.; Fox, Dewayne A.
2013-01-01
Multibeam imaging sonars have considerable potential for use in fisheries surveys because the video-like images are easy to interpret, and they contain information about fish size, shape, and swimming behavior, as well as characteristics of occupied habitats. We examined images obtained using a dual-frequency identification sonar (DIDSON) multibeam sonar for Atlantic sturgeon Acipenser oxyrinchus oxyrinchus, striped bass Morone saxatilis, white perch M. americana, and channel catfish Ictalurus punctatus of known size (20–141 cm) to determine the reliability of length estimates. For ranges up to 11 m, percent measurement error, [(sonar estimate − total length)/total length] × 100, varied by species but was not related to the fish's range or aspect angle (orientation relative to the sonar beam). Least-squares mean percent error was significantly different from 0.0 for Atlantic sturgeon (x̄ = −8.34, SE = 2.39) and white perch (x̄ = 14.48, SE = 3.99) but not striped bass (x̄ = 3.71, SE = 2.58) or channel catfish (x̄ = 3.97, SE = 5.16). Underestimating lengths of Atlantic sturgeon may be due to difficulty in detecting the snout or the longer dorsal lobe of the heterocercal tail. White perch was the smallest species tested, and it had the largest percent measurement errors (both positive and negative) and the lowest percentage of images classified as good or acceptable. Automated length estimates for the four species using Echoview software varied with position in the view-field. Estimates tended to be low at more extreme azimuthal angles (fish's angle off-axis within the view-field), but mean and maximum estimates were highly correlated with total length. Software estimates also were biased by fish images partially outside the view-field and when acoustic crosstalk occurred (when a fish perpendicular to the sonar and at relatively close range is detected in the side lobes of adjacent beams). These sources of bias are apparent when files are processed manually and can be filtered out when producing automated software estimates. Multibeam sonar estimates of fish size should be useful for research and management if these potential sources of bias and imprecision are addressed.
Enhanced echolocation via robust statistics and super-resolution of sonar images
NASA Astrophysics Data System (ADS)
Kim, Kio
Echolocation is a process in which an animal uses acoustic signals to exchange information with its environment. In a recent study, Neretti et al. showed that the use of robust statistics can significantly improve the resiliency of echolocation against noise and enhance its accuracy by suppressing the development of sidelobes in the processing of an echo signal. In this research, the use of robust statistics is extended to problems in underwater exploration. The dissertation consists of two parts. Part I describes how robust statistics can enhance the identification of target objects, which in this case are cylindrical containers filled with four different liquids. In particular, this work employs a variation of an existing robust estimator called an L-estimator, first suggested by Koenker and Bassett. As pointed out by Au et al., a 'highlight interval' is an important feature, and it is closely related to many other features known to be crucial for dolphin echolocation. The modified L-estimator described in this text is used to enhance the detection of highlight intervals, which eventually leads to successful classification of echo signals. Part II extends the problem into two dimensions. Thanks to advances in materials and computer technology, various sonar imaging modalities are available on the market. By registering acoustic images from such video sequences, one can extract more information on the region of interest. Computer vision and image processing allowed the application of robust statistics to the acoustic images produced by forward-looking sonar systems such as the Dual-frequency Identification Sonar and ProViewer. The first use of robust statistics for sonar image enhancement in this text is in image registration. Random Sample Consensus (RANSAC) is widely used for image registration; here the RANSAC-based registration algorithm is optimized for sonar image registration and its performance is studied. The second use of robust statistics is in fusing the images. It is shown that the maximum a posteriori fusion method can be formulated in a Kalman-filter-like manner, and that the resulting expression is identical to a W-estimator with a specific weight function.
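The registration step can be sketched with standard tools. The snippet below is not the dissertation's optimized algorithm: the file names are hypothetical, and OpenCV's ORB features with RANSAC-based affine estimation stand in for whatever feature extraction the original work used.

```python
# RANSAC-based registration of two (hypothetical) forward-looking sonar frames.
import cv2
import numpy as np

a = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)   # assumed input frames
b = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)                   # detector/descriptor
kp_a, des_a = orb.detectAndCompute(a, None)
kp_b, des_b = orb.detectAndCompute(b, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects the outlier matches that speckle and low contrast produce.
M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC,
                                         ransacReprojThreshold=3.0)
registered = cv2.warpAffine(a, M, (b.shape[1], b.shape[0]))
print(f"{int(inliers.sum())} inlier matches of {len(matches)}")
```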
Split Bregman's optimization method for image construction in compressive sensing
NASA Astrophysics Data System (ADS)
Skinner, D.; Foo, S.; Meyer-Bäse, A.
2014-05-01
The theory of compressive sampling (CS) was reintroduced by Candes, Romberg, and Tao, and by Donoho, in 2006. Using a priori knowledge that a signal is sparse, it has been mathematically proven that CS can defy the Nyquist sampling theorem. Theoretically, reconstruction of a CS image relies on minimization and optimization techniques to solve a complex, nearly NP-complete problem. There are many paths to consider when compressing and reconstructing an image, but these methods have remained untested and unclear on natural images such as underwater sonar images. The goal of this research is to perfectly reconstruct the original sonar image from a sparse signal while maintaining pertinent information, such as mine-like objects, in side-scan sonar (SSS) images. Goldstein and Osher have shown how to reconstruct the original image through an iterative method called Split Bregman iteration. This method "decouples" the energies by splitting the objective into its ℓ1 and ℓ2 terms. Once the energies are split, Bregman iteration solves the unconstrained optimization problem by alternately solving the resulting subproblems. The faster these two subproblems can be solved, the faster the overall method becomes. While the majority of CS research is still focused on the medical field, this paper demonstrates the effectiveness of the Split Bregman method on sonar images.
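For a concrete feel of the splitting idea, the short sketch below uses scikit-image's denoise_tv_bregman, which applies Split Bregman iteration to a total-variation problem. This is a hedged stand-in for the paper's compressive-sensing reconstruction, not a reproduction of it; the test image and the weight value are arbitrary.

```python
# Split Bregman iteration applied to a TV problem via scikit-image,
# as a small stand-in for the l1/l2 splitting described above.
import numpy as np
from skimage import data, util
from skimage.restoration import denoise_tv_bregman

image = util.img_as_float(data.camera())            # stand-in for a sonar image
noisy = util.random_noise(image, mode="speckle", var=0.05)

# Larger weight -> closer fit to the noisy data; smaller weight -> smoother result.
restored = denoise_tv_bregman(noisy, weight=10.0, isotropic=True)
print(f"noisy std {noisy.std():.3f} -> restored std {restored.std():.3f}")
```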
The path to COVIS: A review of acoustic imaging of hydrothermal flow regimes
NASA Astrophysics Data System (ADS)
Bemis, Karen G.; Silver, Deborah; Xu, Guangyu; Light, Russ; Jackson, Darrell; Jones, Christopher; Ozer, Sedat; Liu, Li
2015-11-01
Acoustic imaging of hydrothermal flow regimes started with the incidental recognition of a plume on a routine sonar scan for obstacles in the path of the human-occupied submersible ALVIN. Developments in sonar engineering, acoustic data processing, and scientific visualization have been combined to develop technology that can effectively capture the behavior of focused and diffuse hydrothermal discharge. This paper traces the development of these acoustic imaging techniques for hydrothermal flow regimes from their conception through to the development of the Cabled Observatory Vent Imaging Sonar (COVIS). COVIS has monitored such flow eight times a day for several years. Successful acoustic techniques for estimating plume entrainment, bending, vertical rise, volume flux, and heat flux are presented, as is the state of the art in diffuse-flow detection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damuth, J.E.; Flood, R.D.; Kowsmann, R.O.
1988-08-01
Imaging of the Amazon deep-sea fan with long-range side-scan sonar (GLORIA) has, for the first time, revealed the anatomy, trends, and growth pattern of distributary channels on this fan. Only one channel-levee system was active at any given time and extended from the Amazon Submarine Canyon downslope onto the lower fan (> 4,200 m). Formation of new channel-levee systems occurred when a currently active channel-levee system was cut off and abandoned through avulsion, and a new channel-levee system was established nearby. Through time, successive channel-levee formation and abandonment built two broad levee complexes consisting of groups of overlapping, coalescing segments of channel-levee systems across the present fan surface. These, plus older, now buried levee complexes, indicate that fan growth is radially outward and downslope through development of successive levee complexes. The most striking characteristic of the distributary channels is their intricate, often recurving, meanders with sinuosities of up to 2.5. Cutoffs and abandoned meander loops indicate that the channels migrate laterally through time. Channel bifurcation results predominantly from avulsion when flows breach a channel levee, thereby abandoning the present channel and establishing a new channel-levee segment nearby. No clear evidence of channel branching (i.e., division of a single channel into two active segments) or braiding was observed. 22 figs.
Some Processing and Dynamic-Range Issues in Side-Scan Sonar Work
NASA Astrophysics Data System (ADS)
Asper, V. L.; Caruthers, J. W.
2007-05-01
Often side-scan sonar data are collected in such a way that they afford little opportunity to do more than simply display them as images. These images are often limited in dynamic range and stored only in an 8-bit TIFF format of numbers representing less than true intensity values. Furthermore, there is little prior knowledge during a survey of the best range in which to set those eight bits. This can result in clipped strong targets and/or clipped shadow depths, so that the bits that can be recovered from the image are not fully representative of target or bottom backscatter strengths. Several top-of-the-line sonars do have a means of logging high-bit-rate digital data (sometimes only as an option), but only dedicated specialists pay much attention to such data, if they record them at all. Most users of side-scan sonars are interested only in the images. This paper discusses issues related to storing and processing high-bit-rate digital side-scan sonar data to preserve their integrity for future enhanced, after-the-fact use and to retain the ability to recover actual backscatter strengths. This work was supported by the Office of Naval Research, Code 321OA, and the Naval Oceanographic Office, Mine Warfare Program.
Testing of a Composite Wavelet Filter to Enhance Automated Target Recognition in SONAR
NASA Technical Reports Server (NTRS)
Chiang, Jeffrey N.
2011-01-01
Automated Target Recognition (ATR) systems aim to automate target detection, recognition, and tracking. The current project applies a JPL ATR system to low resolution SONAR and camera videos taken from Unmanned Underwater Vehicles (UUVs). These SONAR images are inherently noisy and difficult to interpret, and pictures taken underwater are unreliable due to murkiness and inconsistent lighting. The ATR system breaks target recognition into three stages: 1) Videos of both SONAR and camera footage are broken into frames and preprocessed to enhance images and detect Regions of Interest (ROIs). 2) Features are extracted from these ROIs in preparation for classification. 3) ROIs are classified as true or false positives using a standard Neural Network based on the extracted features. Several preprocessing, feature extraction, and training methods are tested and discussed in this report.
Shallow Water UXO Technology Demonstration Site, Scoring Record Number 2
2006-09-01
The Sound Metrics Corporation High-Frequency Imaging Sonar (HFIS) (fig. 4), a dual-frequency imaging sonar, operates at 1.1 and 1.8 MHz. For this ... the HFIS unit was determined using a National Marine Electronics Association (NMEA) GPRMC string from a Leica GPS system antenna mounted directly above the HFIS instrument. This permits the image data to be integrated with the Multiple Frequency Sub-Bottom Profiler (MFSBP) and MGS data during ...
Tectonic evolution of Gorda Ridge inferred from sidescan sonar images
Masson, D.G.; Cacchione, D.A.; Drake, D.E.
1988-01-01
Gorda Ridge is the southern segment of the Juan de Fuca Ridge complex, in the north-east Pacific. Along-strike spreading-rate variation on Gorda Ridge and deformation of Gorda Plate are evidence for compression between the Pacific and Gorda Plates. GLORIA sidescan sonographs allow the spreading fabric associated with Gorda Ridge to be mapped in detail. Between 5 and 2 Ma, a pair of propagating rifts re-orientated the northern segment of Gorda Ridge by about 10° clockwise, accommodating a clockwise shift in Pacific-Juan de Fuca plate motion that occurred around 5 Ma. Deformation of Gorda Plate, associated with southward decreasing spreading rates along southern Gorda Ridge, is accommodated by a combination of clockwise rotation of Gorda Plate crust, coupled with left-lateral motion on the original normal faults of the ocean crust. Segments of Gorda Plate which have rotated by different amounts are separated by narrow deformation zones across which sharp changes in ocean fabric trend are seen. Although minor lateral movement may occur on these NW to WNW structures, no major right-lateral movement, as predicted by previous models, is observed. © 1988 Kluwer Academic Publishers.
NASA Astrophysics Data System (ADS)
Lu, Y. W.; Liu, C. S.; Su, C. C.; Hsu, H. H.; Chen, Y. H.
2015-12-01
This study utilizes both chirp sonar images and coring results to investigate the unstable seafloor strata east of the Fangliao Submarine Canyon offshore southwestern Taiwan. We have constructed 3D chirp sonar images from a densely surveyed block to trace the attitude of an acoustically transparent layer and features caused by fluid activity. Based on the distribution of this transparent layer and the fluid-related features, we suggest that the transparent layer forms a pathway for fluid migration, which induces fluid-related features such as acoustic blanking and fluid chimneys in the 3D chirp sonar images. Cored seafloor samples are used in this study to investigate the sediment compositions. The 210Pb activity profiles of the cores show oscillating and unsteady values at about 20~25 cm from the core top, and the bulk densities of the core samples in the same section give values lower than those in deeper parts of the cores. These results indicate that the water content is much higher in the shallow sediments than in the deeper strata. From the core sample analyses, we deduce that the local sediments are disturbed by liquefaction. From the analyses of the 3D chirp sonar images and core data, we suggest that the seafloor east of the Fangliao Submarine Canyon is in an unstable condition; if disturbed by earthquakes, submarine landslides and gravity flows could easily be triggered and cause geohazards, such as the submarine cable breaks during the 2006 Pingtung earthquake.
Schramm, Chaim A; Sheng, Zizhang; Zhang, Zhenhai; Mascola, John R; Kwong, Peter D; Shapiro, Lawrence
2016-01-01
The rapid advance of massively parallel or next-generation sequencing technologies has made possible the characterization of B cell receptor repertoires in ever greater detail, and these developments have triggered a proliferation of software tools for processing and annotating these data. Of especial interest, however, is the capability to track the development of specific antibody lineages across time, which remains beyond the scope of most current programs. We have previously reported on the use of techniques such as inter- and intradonor analysis and CDR3 tracing to identify transcripts related to an antibody of interest. Here, we present Software for the Ontogenic aNalysis of Antibody Repertoires (SONAR), capable of automating both general repertoire analysis and specialized techniques for investigating specific lineages. SONAR annotates next-generation sequencing data, identifies transcripts in a lineage of interest, and tracks lineage development across multiple time points. SONAR also generates figures, such as identity-divergence plots and longitudinal phylogenetic "birthday" trees, and provides interfaces to other programs such as DNAML and BEAST. SONAR can be downloaded as a ready-to-run Docker image or manually installed on a local machine. In the latter case, it can also be configured to take advantage of a high-performance computing cluster for the most computationally intensive steps, if available. In summary, this software provides a useful new tool for the processing of large next-generation sequencing datasets and the ontogenic analysis of neutralizing antibody lineages. SONAR can be found at https://github.com/scharch/SONAR, and the Docker image can be obtained from https://hub.docker.com/r/scharch/sonar/.
Cochrane, Guy R.; Lafferty, Kevin D.
2002-01-01
Highly reflective seafloor features imaged by sidescan sonar in nearshore waters off the Northern Channel Islands (California, USA) have been observed in subsequent submersible dives to be areas of thin sand covering bedrock. Adjacent areas of rocky seafloor, suitable as habitat for endangered species of abalone and rockfish, and encrusting organisms, cannot be differentiated from the areas of thin sand on the basis of acoustic backscatter (i.e. grey level) alone. We found second-order textural analysis of sidescan sonar data useful to differentiate the bottom types where data is not degraded by near-range distortion (caused by slant-range and ground-range corrections), and where data is not degraded by far-range signal attenuation. Hand editing based on submersible observations is necessary to completely convert the sidescan sonar image to a bottom character classification map suitable for habitat mapping.
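A minimal sketch of the second-order textural analysis mentioned in the preceding abstract is given below; the grey-level co-occurrence settings and the random test chip are assumptions, not the study's parameters (older scikit-image versions spell the functions greycomatrix/greycoprops).

```python
# Second-order (GLCM) texture features for a sidescan chip, the kind of
# statistics that can separate thin sand over bedrock from rocky bottom
# even when mean backscatter (grey level) is similar.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(patch, levels=32):
    """patch: 2-D uint8 sidescan chip; returns contrast, homogeneity, entropy."""
    q = (patch.astype(np.float32) / 256 * levels).astype(np.uint8)   # quantise grey levels
    glcm = graycomatrix(q, distances=[1, 2, 4], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast").mean()
    homogeneity = graycoprops(glcm, "homogeneity").mean()
    p = glcm[glcm > 0]
    entropy = -np.sum(p * np.log2(p))
    return contrast, homogeneity, entropy

patch = (np.random.rand(64, 64) * 255).astype(np.uint8)   # hypothetical image chip
print(texture_features(patch))
```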
Terrain Aided Navigation for Remus Autonomous Underwater Vehicle
2014-06-01
Excerpted figure captions: several successive sonar pings displayed together in the LTP frame; the linear interpolation of the sonar pings; the SIR particle filter algorithm; correlation probability distributions for four different sonar images.
Zhao, Yuzheng; Wang, Aoxue; Zou, Yejun; Su, Ni; Loscalzo, Joseph; Yang, Yi
2016-08-01
NADH and its oxidized form NAD(+) have a central role in energy metabolism, and their concentrations are often considered to be among the most important readouts of metabolic state. Here, we present a detailed protocol to image and monitor NAD(+)/NADH redox state in living cells and in vivo using a highly responsive, genetically encoded fluorescent sensor known as SoNar (sensor of NAD(H) redox). The chimeric SoNar protein was initially developed by inserting circularly permuted yellow fluorescent protein (cpYFP) into the NADH-binding domain of Rex protein from Thermus aquaticus (T-Rex). It functions by binding to either NAD(+) or NADH, thus inducing protein conformational changes that affect its fluorescent properties. We first describe steps for how to establish SoNar-expressing cells, and then discuss how to use the system to quantify the intracellular redox state. This approach is sensitive, accurate, simple and able to report subtle perturbations of various pathways of energy metabolism in real time. We also detail the application of SoNar to high-throughput chemical screening of candidate compounds targeting cell metabolism in a microplate-reader-based assay, along with in vivo fluorescence imaging of tumor xenografts expressing SoNar in mice. Typically, the approximate time frame for fluorescence imaging of SoNar is 30 min for living cells and 60 min for living mice. For high-throughput chemical screening in a 384-well-plate assay, the whole procedure generally takes no longer than 60 min to assess the effects of 380 compounds on cell metabolism.
Sonar Imaging of Elastic Fluid-Filled Cylindrical Shells.
NASA Astrophysics Data System (ADS)
Dodd, Stirling Scott
1995-01-01
Previously a method of describing spherical acoustic waves in cylindrical coordinates was applied to the problem of point source scattering by an elastic infinite fluid-filled cylindrical shell (S. Dodd and C. Loeffler, J. Acoust. Soc. Am. 97, 3284(A) (1995)). This method is applied to numerically model monostatic oblique incidence scattering from a truncated cylinder by a narrow-beam high-frequency imaging sonar. The narrow beam solution results from integrating the point source solution over the spatial extent of a line source and line receiver. The cylinder truncation is treated by the method of images, and assumes that the reflection coefficient at the truncation is unity. The scattering form functions, calculated using this method, are applied as filters to a narrow bandwidth, high ka pulse to find the time domain scattering response. The time domain pulses are further processed and displayed in the form of a sonar image. These images compare favorably to experimentally obtained images (G. Kaduchak and C. Loeffler, J. Acoust. Soc. Am. 97, 3289(A) (1995)). The impact of the s0 and a0 Lamb waves is vividly apparent in the images.
Description and Evaluation of a Four-Channel, Coherent 100-kHz Sidescan Sonar
2004-12-01
This report documents the design and features of a new, four-channel, coherent 100-kHz sidescan sonar (DRDC Atlantic Technical Memorandum TM 2004-204, December 2004). The document contains color images. These initial field trial results demonstrate some of ...
Model-based approach to the detection and classification of mines in sidescan sonar.
Reed, Scott; Petillot, Yvan; Bell, Judith
2004-01-10
This paper presents a model-based approach to mine detection and classification by use of sidescan sonar. Advances in autonomous underwater vehicle technology have increased the interest in automatic target recognition systems in an effort to automate a process that is currently carried out by a human operator. Current automated systems generally require training and thus produce poor results when the test data set is different from the training set. This has led to research into unsupervised systems, which are able to cope with the large variability in conditions and terrains seen in sidescan imagery. The system presented in this paper first detects possible minelike objects using a Markov random field model, which operates well on noisy images, such as sidescan, and allows a priori information to be included through the use of priors. The highlight and shadow regions of the object are then extracted with a cooperating statistical snake, which assumes these regions are statistically separate from the background. Finally, a classification decision is made using Dempster-Shafer theory, where the extracted features are compared with synthetic realizations generated with a sidescan sonar simulator model. Results for the entire process are shown on real sidescan sonar data. Similarities between the sidescan sonar and synthetic aperture radar (SAR) imaging processes ensure that the approach outlined here could also be applied to SAR image analysis.
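The final fusion step can be illustrated by Dempster's rule of combination over a two-hypothesis frame. The mass values below are hypothetical and the function is a generic sketch, not the paper's model-based classifier.

```python
# Dempster's rule of combination over the frame {mine, clutter}.
def combine(m1, m2):
    """m1, m2: dicts mapping frozensets over {'mine', 'clutter'} to mass."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb                 # mass assigned to conflicting sets
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical evidence from shadow and highlight features.
shadow_evidence = {frozenset({"mine"}): 0.6, frozenset({"clutter"}): 0.1,
                   frozenset({"mine", "clutter"}): 0.3}
highlight_evidence = {frozenset({"mine"}): 0.5, frozenset({"clutter"}): 0.2,
                      frozenset({"mine", "clutter"}): 0.3}
print(combine(shadow_evidence, highlight_evidence))
```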
NASA Astrophysics Data System (ADS)
Gazagnaire, Julia; Cobb, J. T.; Isaacs, Jason
2015-05-01
There is a desire in the Mine Counter Measure community to develop a systematic method to predict and/or estimate the performance of Automatic Target Recognition (ATR) algorithms that are detecting and classifying mine-like objects within sonar data. Ideally, parameters exist that can be measured directly from the sonar data that correlate with ATR performance. In this effort, two metrics were analyzed for their predictive potential using high frequency synthetic aperture sonar (SAS) images. The first parameter is a measure of contrast. It is essentially the variance in pixel intensity over a fixed partition of relatively small size. An analysis was performed to determine the optimum block size for this contrast calculation. These blocks were then overlapped in the horizontal and vertical direction over the entire image. The second parameter is the one-dimensional K-shape parameter. The K-distribution is commonly used to describe sonar backscatter return from range cells that contain a finite number of scatterers. An Ada-Boosted Decision Tree classifier was used to calculate the probability of classification (Pc) and false alarm rate (FAR) for several types of targets in SAS images from three different data sets. ROC curves as a function of the measured parameters were generated and the correlation between the measured parameters in the vicinity of each of the contacts and the ATR performance was investigated. The contrast and K-shape parameters were considered separately. Additionally, the contrast and K-shape parameter were associated with background texture types using previously labeled high frequency SAS images.
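The two measured parameters can be sketched as follows; the block size, step, and the method-of-moments estimator for the K shape parameter are assumptions rather than the paper's exact procedure.

```python
# Block-wise intensity variance (contrast) and a moment-based K shape estimate.
import numpy as np

def block_contrast(img, block=32, step=16):
    """Variance of pixel intensity over overlapping blocks of an image chip."""
    rows = range(0, img.shape[0] - block + 1, step)
    cols = range(0, img.shape[1] - block + 1, step)
    return np.array([[img[r:r + block, c:c + block].var() for c in cols] for r in rows])

def k_shape_moments(intensity):
    """Shape nu from E[I^2]/E[I]^2 = 2(1 + 1/nu) for K-distributed intensity."""
    ratio = np.mean(intensity ** 2) / np.mean(intensity) ** 2
    return 2.0 / (ratio - 2.0) if ratio > 2.0 else np.inf

# Synthetic K-distributed intensity chip (gamma-modulated exponential), nu = 1.5.
rng = np.random.default_rng(1)
texture = rng.gamma(shape=1.5, scale=1.0 / 1.5, size=(256, 256))
intensity = rng.exponential(texture)
print(block_contrast(intensity).mean(), k_shape_moments(intensity))
```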
NASA Astrophysics Data System (ADS)
Hamill, D. D.; Buscombe, D.; Wheaton, J. M.; Wilcock, P. R.
2016-12-01
The size and spatial organization of bed material (bed texture) is a fundamental physical attribute of lotic ecosystems. Traditional methods of mapping bed texture, such as physical samples and underwater video, are limited by low spatial coverage and poor positioning precision. Recreational-grade sidescan sonar systems now offer the possibility of imaging submerged riverbed sediments, in any navigable body of water, at coverages and resolutions sufficient to identify subtle changes in bed texture, with minimal cost, sonar expertise, or logistical effort. This opens acoustic imaging of benthic environments to ecohydrological studies in shallow water that are not subject to the rigors of hydrographic standards and do not require hydroacoustic expertise or proprietary hydrographic industry software. We investigate the possibility of using recreational-grade sidescan sonar for sedimentary change detection through a case study of repeat sidescan imaging of mixed sand-gravel-rock riverbeds in a debris-fan-dominated canyon river, at a coverage and resolution that meets the objectives of studies of the effects of changing bed substrates on salmonid spawning. A repeat substrate-mapping analysis of data collected between 2012 and 2015 on the Colorado River in Glen, Marble, and Grand Canyons will be presented. A detailed method has been developed to interpret and analyze non-survey-grade sidescan sonar data, encoded within an open-source software tool developed by the authors. An automated technique to quantify bed texture directly from sidescan sonar imagery is tested against bed sediment observations from underwater video and multibeam sonar. Predictive relationships between known bed sediment observations and bed texture metrics could provide an objective means to quantify bed textures and to relate changes in bed texture to biological components of an aquatic ecosystem, at high temporal frequency and with minimal logistical effort and cost.
Richter, Jacob T.; Sloss, Brian L.; Isermann, Daniel A.
2016-01-01
Previous research has generally ignored the potential effects of spawning habitat availability and quality on recruitment of Walleye Sander vitreus, largely because information on spawning habitat is lacking for many lakes. Furthermore, traditional transect-based methods used to describe habitat are time and labor intensive. Our objectives were to determine if side-scan sonar could be used to accurately classify Walleye spawning habitat in the nearshore littoral zone and provide lakewide estimates of spawning habitat availability similar to estimates obtained from a transect–quadrat-based method. Based on assessments completed on 16 northern Wisconsin lakes, interpretation of side-scan sonar images resulted in correct identification of substrate size-class for 93% (177 of 191) of selected locations and all incorrect classifications were within ± 1 class of the correct substrate size-class. Gravel, cobble, and rubble substrates were incorrectly identified from side-scan images in only two instances (1% misclassification), suggesting that side-scan sonar can be used to accurately identify preferred Walleye spawning substrates. Additionally, we detected no significant differences in estimates of lakewide littoral zone substrate compositions estimated using side-scan sonar and a traditional transect–quadrat-based method. Our results indicate that side-scan sonar offers a practical, accurate, and efficient technique for assessing substrate composition and quantifying potential Walleye spawning habitat in the nearshore littoral zone of north temperate lakes.
NASA Astrophysics Data System (ADS)
Kim, W. H.; Park, C.; Lee, M.; Park, H. Y.; Kim, C.
2015-12-01
A side-scan sonar transmits ultrasonic pulses from both sides of its transducer and builds an image from the received echoes: it measures how strong the return echo is and paints a picture accordingly. Hard areas of the seafloor, such as rock, reflect more of the signal than softer areas such as sand. We conducted a seafloor imaging survey beginning 4 March 2013 using the R/V Jangmok2 (35 t) and an EdgeTech 4125 side-scan sonar, a dual-frequency (400/900 kHz) system. Seafloor imaging surveys commonly tow the sensor behind the vessel; instead, we fixed the towfish in the water on the right side of the vessel with a long frame. This mounted configuration was useful in shallow water such as the port, which contains many obstacles. We also surveyed the submarine topography with a Kongsberg EM3001 multibeam echo sounder, a device that observes and records the seafloor topography acoustically, mounted on the right side of the vessel. A multibeam transducer is normally mounted at right angles to the water surface; however, we tilted the transducer by 20 degrees to obtain longer range, giving an 85-degree measurement sector on the right side of the vessel. The vessel was also equipped with a motion sensor, DGPS (Differential Global Positioning System), and an SV (sound velocity) sensor for motion compensation, positioning, and the sound velocity of seawater. The surveys showed sediment, waste material, and many discarded tires accumulated in the port; the maximum depth in the port was 12 m. Such multibeam and side-scan sonar surveys will facilitate the management and environmental improvement of the port.
Near-real-time mosaics from high-resolution side-scan sonar
Danforth, William W.; O'Brien, Thomas F.; Schwab, W.C.
1991-01-01
High-resolution side-scan sonar has proven to be a very effective tool for studying and understanding the surficial geology of the seafloor. Since the mid-1970s, the US Geological Survey has used high-resolution side-scan sonar systems for mapping various areas of the continental shelf. However, two problems were typically encountered: the short range and high sampling rate of high-resolution side-scan sonar systems, and the acquisition and real-time processing of the enormous volume of sonar data generated by such systems. These problems were addressed and overcome in August 1989 when the USGS conducted a side-scan sonar and bottom sampling survey of a 1000-sq-km section of the continental shelf in the Gulf of Farallones located offshore of San Francisco. The primary goal of this survey was to map an area of critical interest for studying continental shelf sediment dynamics. This survey provided an opportunity to test an image processing scheme that enabled production of a side-scan sonar hard-copy mosaic during the cruise in near real-time.
Qualitative and quantitative processing of side-scan sonar data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dwan, F.S.; Anderson, A.L.; Hilde, T.W.C.
1990-06-01
Modern side-scan sonar systems allow vast areas of seafloor to be rapidly imaged and quantitatively mapped in detail. The application of remote sensing image processing techniques can be used to correct for various distortions inherent in raw sonography. Corrections are possible for water column, slant-range, aspect ratio, speckle and striping noise, multiple returns, power drop-off, and for georeferencing. The final products reveal seafloor features and patterns that are geometrically correct, georeferenced, and have improved signal/noise ratio. These products can be merged with other georeferenced data bases for further database management and information extraction. In order to compare data collected by different systems from a common area and to ground truth measurements and geoacoustic models, quantitative correction must be made for calibrated sonar system and bathymetry effects. Such data inversion must account for system source level, beam pattern, time-varying gain, processing gain, transmission loss, absorption, insonified area, and grazing angle effects. Seafloor classification can then be performed on the calculated back-scattering strength using Lambert's Law and regression analysis. Examples are given using both approaches: image analysis and inversion of data based on the sonar equation.
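The seafloor-classification step mentioned above can be sketched as a regression against Lambert's law. The grazing angles, reference level, and noise level below are synthetic stand-ins, not survey data.

```python
# Fit backscattering strength against the Lambert's-law term 10*log10(sin^2(grazing)).
import numpy as np

rng = np.random.default_rng(0)
grazing = np.deg2rad(np.linspace(15.0, 75.0, 200))        # grazing angles in radians
lambert_term = 10.0 * np.log10(np.sin(grazing) ** 2)      # Lambert's law angular dependence
bs_true = -27.0 + 1.0 * lambert_term                      # an ideal Lambert-like bottom (dB)
bs_meas = bs_true + rng.normal(0.0, 1.5, grazing.size)    # add measurement scatter

slope, intercept = np.polyfit(lambert_term, bs_meas, 1)   # slope near 1 indicates Lambert behaviour
print(f"Lambert slope {slope:.2f}, Lambert parameter {intercept:.1f} dB")
```

The fitted intercept plays the role of the bottom-type parameter used for classification, once the sonar-equation corrections listed in the abstract have been applied.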
Spawning behaviour of Allis shad Alosa alosa: new insights based on imaging sonar data.
Langkau, M C; Clavé, D; Schmidt, M B; Borcherding, J
2016-06-01
Spawning behaviour of Alosa alosa was observed by high-resolution imaging sonar. Detected clouds of sexual products and micro-bubbles served as a potential indicator of spawning activity. Peak spawning time was between 0130 and 0200 hours at night. Increasing detections over three consecutive nights were consistent with the sounds of mating events (bulls) assessed in hearing surveys conducted in parallel with the hydroacoustic detection. In 70% of the analysed mating events there were no additional A. alosa joining the event, whilst 70% of the mating events showed one or two A. alosa leaving the cloud. In 31% of the analysed mating events, however, three or more A. alosa were leaving the clouds, indicating that matings are not restricted to a pair. Imaging sonar is suitable for monitoring spawning activity and behaviour of anadromous clupeids in their spawning habitats. © 2016 The Fisheries Society of the British Isles.
2014-12-19
... used to evaluate the beacon performance at the Navy's Seneca Lake Sonar Test Facility operated by NUWC-Newport. These tests occurred in the summer ... prototype has been designed. Efforts have been underway to implement the spiral beacon into the Navy's Sonar Simulation Toolset developed by Dr. Robert ... (DOI 10.1109/JOE.2013.2293962) ... acoustic depth finding or sonar imaging may be compared with maps to coordinate position ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kendall, J.; Hams, J.E.; Buck, S.P.
1990-05-01
Advances in high resolution side-scan sonar imaging technology are so effective at imaging sea-floor geology that they have greatly improved the efficiency of a bottom sampling program. The traditional sea-floor geology methodology of shooting a high-resolution seismic survey and sampling along the seismic grid was considered successful if outcrops were sampled on 20% of the attempts. A submersible was used sparingly because of the inability to consistently locate sea-floor outcrops. Side-scan sonar images have increased the sampling success ratio to 70-95% and allow the cost-effective use of a submersible even in areas of sparse sea-floor outcrops. In offshore basins this new technology has been used in consolidated and semiconsolidated rock terranes. When combined with observations from a two-man submersible, SCUBA traverses, seismic data, and traditional sea-floor bottom sampling techniques, enough data are provided to develop an integrated sea-floor geologic interpretation. On individual prospects, side-scan sonar has aided the establishment of critical dip in poor seismic data areas, located seeps and tar mounds, and determined erosional breaching of a prospect. Over a mature producing field, side-scan sonar has influenced the search for field extension by documenting the orientation and location of critical trapping cross faults. These relatively inexpensive techniques can provide critical data in any marine basin where rocks crop out on the sea floor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevenson, A.J.; Scholl, D.W.; Vallier, T.L.
1990-05-01
The central segment of the Aleutian Trench (162°W to 175°E) is an intraoceanic subduction zone that contains an anomalously thick sedimentary fill (4 km maximum). The fill is an arcward-thickening and slightly tilted wedge of sediment characterized acoustically by laterally continuous, closely spaced, parallel reflectors. These relations are indicative of turbidite deposition. The trench floor and reflection horizons are planar, showing no evidence of an axial channel or any transverse fan bodies. Cores of surface sediment recover turbidite layers, implying that sediment transport and deposition occur via diffuse, sheetlike, fine-grained turbidite flows that occupy the full width of the trench. The mineralogy of Holocene trench sediments documents a mixture of island-arc (dominant) and continental source terranes. GLORIA side-scan sonar images reveal a westward-flowing axial trench channel that conducts sediment to the eastern margin of the central segment, where channelized flow ceases. Much of the sediment transported in this channel is derived from glaciated drainages surrounding the Gulf of Alaska which empty into the eastern trench segment via deep-sea channel systems (Surveyor and others) and submarine canyons (Hinchinbrook and others). Insular sediment transport is more difficult to define. GLORIA images show the efficiency with which the actively growing accretionary wedge impounds sediment that manages to cross a broad fore-arc terrace. It is likely that island-arc sediment reaches the trench either directly via air fall, via recycling of the accretionary prism, or via overtopping of the accretionary ridges by the upper parts of thick turbidite flows.
Automatic Detection of Sand Ripple Features in Sidescan Sonar Imagery
2014-07-09
Among the features used in forensic scientific fingerprint analysis are terminations or bifurcations of print ridges. Sidescan sonar imagery of ripple ... always be pathological cases. The size of the blocks of pixels used in determining the ripple wavelength is evident in the output images ...
Side-scan sonar mapping: Pseudo-real-time processing and mosaicking techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Danforth, W.W.; Schwab, W.C.; O'Brien, T.F.
1990-05-01
The US Geological Survey (USGS) surveyed 1,000 km² of the continental shelf off San Francisco during a 17-day cruise, using a 120-kHz side-scan sonar system, and produced a digitally processed sonar mosaic of the survey area. The data were processed and mosaicked in real time using software developed at the Lamont-Doherty Geological Observatory and modified by the USGS, a substantial task due to the enormous amount of data produced by high-resolution side-scan systems. Approximately 33 megabytes of data were acquired every 1.5 hr. The real-time sonar images were displayed on a PC-based workstation and the data were transferred to a UNIX minicomputer where the sonar images were slant-range corrected, enhanced using an averaging method of desampling and a linear-contrast stretch, merged with navigation, geographically oriented at a user-selected scale, and finally output to a thermal printer. The hard-copy output was then used to construct a mosaic of the survey area. The final product of this technique is a UTM-projected map-mosaic of sea-floor backscatter variations, which could be used, for example, to locate appropriate sites for sediment sampling to ground truth the sonar imagery while still at sea. More importantly, reconnaissance surveys of this type allow for the analysis and interpretation of the mosaic during a cruise, thus greatly reducing the preparation time needed for planning follow-up studies of a particular area.
Foote, Kenneth G; Hanlon, Roger T; Lampietro, Pat J; Kvitek, Rikk G
2006-02-01
The squid Loligo opalescens is a key species in the nearshore pelagic community of California, supporting the most valuable state marine fishery, yet the stock biomass is unknown. In southern Monterey Bay, extensive beds occur on a flat, sandy bottom, water depths 20-60 m, thus sidescan sonar is a prima-facie candidate for use in rapid, synoptic, and noninvasive surveying. The present study describes development of an acoustic method to detect, identify, and quantify squid egg beds by means of high-frequency sidescan-sonar imagery. Verification of the method has been undertaken with a video camera carried on a remotely operated vehicle. It has been established that sidescan sonar images can be used to predict the presence or absence of squid egg beds. The lower size limit of detectability of an isolated egg bed is about 0.5 m with a 400-kHz sidescan sonar used with a 50-m range when towed at 3 knots. It is possible to estimate the abundance of eggs in a region of interest by computing the cumulative area covered by the egg beds according to the sidescan sonar image. In a selected quadrat one arc second on each side, the estimated number of eggs was 36.5 million.
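The area bookkeeping described in the preceding abstract can be sketched as below. The threshold choice, pixel size, minimum bed area (motivated by the ~0.5 m detectability limit quoted above), and egg density are assumptions for illustration, not the study's calibrated values.

```python
# Estimate cumulative egg-bed area from a thresholded, geocoded sidescan image,
# then scale by an assumed egg density to get an abundance figure.
import numpy as np
from skimage import filters, measure

img = np.random.rand(500, 500)                      # stand-in for a geocoded mosaic
pixel_area_m2 = 0.05 * 0.05                         # assumed 5 cm pixels
eggs_per_m2 = 4.0e4                                 # assumed egg density within beds

mask = img > filters.threshold_otsu(img)            # bright egg-bed candidates
labels = measure.label(mask)
beds = [r for r in measure.regionprops(labels) if r.area * pixel_area_m2 >= 0.25]
total_area = sum(r.area for r in beds) * pixel_area_m2
print(f"candidate beds: {len(beds)}, area: {total_area:.1f} m^2, "
      f"eggs: {total_area * eggs_per_m2:.2e}")
```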
Composite Wavelet Filters for Enhanced Automated Target Recognition
NASA Technical Reports Server (NTRS)
Chiang, Jeffrey N.; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin
2012-01-01
Automated Target Recognition (ATR) systems aim to automate target detection, recognition, and tracking. The current project applies a JPL ATR system to low-resolution sonar and camera videos taken from unmanned vehicles. These sonar images are inherently noisy and difficult to interpret, and pictures taken underwater are unreliable due to murkiness and inconsistent lighting. The ATR system breaks target recognition into three stages: 1) Videos of both sonar and camera footage are broken into frames and preprocessed to enhance images and detect Regions of Interest (ROIs). 2) Features are extracted from these ROIs in preparation for classification. 3) ROIs are classified as true or false positives using a standard Neural Network based on the extracted features. Several preprocessing, feature extraction, and training methods are tested and discussed in this paper.
A retrospective on hydroacoustic assessment of fish passage in Alaskan rivers
NASA Astrophysics Data System (ADS)
Burwen, Debby; Fleischman, Steve; Maxwell, Suzanne; Pfisterer, Carl
2005-04-01
The Alaska Department of Fish and Game (ADFG) has enumerated fish stocks in rivers for over 30 years using a variety of acoustic technologies including single-, dual-, and split-beam sonar. Most recently, ADFG has evaluated a relatively new sonar technology at several sites in Alaska to determine its applicability to counting migrating fish in rivers. The new system, called a Dual frequency IDentification SONar (DIDSON), is a high-definition imaging sonar designed and manufactured by the University of Washington's Applied Physics Lab for military applications such as diver detection and underwater mine identification. Results from experiments conducted in 2002-2004 indicate that DIDSON provides significant improvements in our ability to detect, track, and determine the direction of travel of migrating fish in rivers. One of the most powerful uses of the DIDSON has been to combine its camera-like images of fish swimming behavior with corresponding split-beam data. These linked datasets have allowed us to evaluate the effects of fish orientation and swimming behavior on echo shape parameters that have proven useful in the classification of certain fish species.
Enhanced Sidescan-Sonar Imagery, North-Central Long Island Sound
McMullen, K.Y.; Poppe, L.J.; Schattgen, P.T.; Doran, E.F.
2008-01-01
The U.S. Geological Survey, National Oceanic and Atmospheric Administration (NOAA), and Connecticut Department of Environmental Protection have been working cooperatively to map the sea-floor geology within Long Island Sound. Sidescan-sonar imagery collected during three NOAA hydrographic surveys (H11043, H11044, and H11045) was used to interpret the surficial-sediment distribution and sedimentary environments within the Sound. The original sidescan-sonar imagery generated by NOAA was used to evaluate hazards to navigation, which does not require consistent tonal matching throughout the survey. In order to fully utilize these data for geologic interpretation, artifacts within the imagery, primarily due to sidescan-system settings (for example, gain changes), processing techniques (for example, lack of across-track normalization) and environmental noise (for example, sea state), need to be minimized. Sidescan-sonar imagery from surveys H11043, H11044, and H11045 in north-central Long Island Sound was enhanced by matching the grayscale tones between adjacent sidescan-sonar lines to decrease the patchwork effect caused by numerous artifacts and to provide a more coherent sidescan-sonar image for use in geologic interpretation.
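One simple way to approximate the tonal matching described above is grayscale histogram matching between adjacent survey lines. The sketch below uses scikit-image with synthetic swaths; it is a stand-in for, not a reproduction of, the enhancement applied to the NOAA imagery.

```python
# Match the grayscale distribution of a neighbouring swath to a reference swath
# to reduce the patchwork effect between sidescan lines.
import numpy as np
from skimage.exposure import match_histograms

reference_line = np.random.rand(512, 2048)          # hypothetical reference swath
adjacent_line = 0.6 * np.random.rand(512, 2048)     # darker neighbouring swath

matched = match_histograms(adjacent_line, reference_line)
print(adjacent_line.mean(), matched.mean(), reference_line.mean())
```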
Synthetic-Aperture Coherent Imaging From A Circular Path
NASA Technical Reports Server (NTRS)
Jin, Michael Y.
1995-01-01
Imaging algorithms based on exact point-target responses were developed for reconstructing an image of a target from data gathered by radar, sonar, or other transmitting/receiving coherent-signal sensors following a circular observation path around the target. Potential applications include: wide-beam synthetic-aperture radar (SAR) from aboard a spacecraft in circular orbit around a target planet; SAR from aboard an airplane flying a circular course at constant elevation around a central ground point toward which a spotlight radar beam is pointed; ultrasonic reflection tomography in a medical setting, using one transducer moving in a circle around the patient or multiple transducers at fixed positions on a circle around the patient; and sonar imaging of the sea floor at high resolution, without the need for a large sensory apparatus.
Processing techniques for digital sonar images from GLORIA.
Chavez, P.S.
1986-01-01
Image processing techniques have been developed to handle data from one of the newest members of the remote sensing family of digital imaging systems. This paper discusses software to process data collected by the GLORIA (Geological Long Range Inclined Asdic) sonar imaging system, designed and built by the Institute of Oceanographic Sciences (IOS) in England, to correct for both geometric and radiometric distortions that exist in the original 'raw' data. Preprocessing algorithms that are GLORIA-specific include corrections for slant-range geometry, water column offset, aspect ratio distortion, changes in the ship's velocity, speckle noise, and shading problems caused by the power drop-off which occurs as a function of range.
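The slant-range and water-column corrections named above can be sketched per ping as follows. This is a generic flat-bottom version with assumed sample spacing and altitude, not the GLORIA-specific software.

```python
# Remove the water column and resample one ping from slant range to ground range,
# assuming a flat seafloor and a known towfish altitude.
import numpy as np

def slant_to_ground(ping, sample_spacing_m, altitude_m):
    """ping: 1-D array of backscatter samples ordered by slant range."""
    slant = np.arange(ping.size) * sample_spacing_m
    valid = slant > altitude_m                              # drop the water column
    ground = np.sqrt(slant[valid] ** 2 - altitude_m ** 2)   # flat-bottom geometry
    # Resample onto a regular ground-range grid so pixels have uniform spacing.
    grid = np.arange(0.0, ground.max(), sample_spacing_m)
    return np.interp(grid, ground, ping[valid])

ping = np.abs(np.random.randn(2000))                        # hypothetical raw ping
corrected = slant_to_ground(ping, sample_spacing_m=0.15, altitude_m=45.0)
print(ping.size, corrected.size)
```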
Studying seafloor bedforms using autonomous stationary imaging and profiling sonars
Montgomery, Ellyn T.; Sherwood, Christopher R.
2014-01-01
The Sediment Transport Group at the U.S. Geological Survey, Woods Hole Coastal and Marine Science Center uses downward looking sonars deployed on seafloor tripods to assess and measure the formation and migration of bedforms. The sonars have been used in three resolution-testing experiments, and deployed autonomously to observe changes in the seafloor for up to two months in seven field experiments since 2002. The sonar data are recorded concurrently with measurements of waves and currents to: a) relate bedform geometry to sediment and flow characteristics; b) assess hydrodynamic drag caused by bedforms; and c) estimate bedform sediment transport rates, all with the goal of evaluating and improving numerical models of these processes. Our hardware, data processing methods, and test and validation procedures have evolved since 2001. We now employ a standard sonar configuration that provides reliable data for correlating flow conditions with bedform morphology. Plans for the future are to sample more rapidly and improve the precision of our tripod orientation measurements.
2000 Multibeam Sonar Survey of Crater Lake, Oregon - Data, GIS, Images, and Movies
Gardner, James V.; Dartnell, Peter
2001-01-01
In the summer of 2000, the U.S. Geological Survey, Pacific Seafloor Mapping Project in cooperation with the National Park Service, and the Center for Coastal and Ocean Mapping, University of New Hampshire used a state-of-the-art multibeam sonar system to collect high-resolution bathymetry and calibrated, co-registered acoustic backscatter to support both biological and geological research in the Crater Lake area. This interactive CD-ROM contains the multibeam bathymetry and acoustic backscatter data, along with an ESRI ArcExplorer project (and software), images, and movies.
Diver-based integrated navigation/sonar sensor
NASA Astrophysics Data System (ADS)
Lent, Keith H.
1999-07-01
Two diver-based systems, the Small Object Locating Sonar (SOLS) and the Integrated Navigation and Sonar Sensor (INSS), have been developed at Applied Research Laboratories, the University of Texas at Austin (ARL:UT). They are small, easy-to-use systems that allow a diver to detect, classify, and identify underwater objects; render large-sector visual images; and track, map, and reacquire diver location, diver path, and target locations. The INSS hardware consists of a unique, simple, single-beam high-resolution sonar, an acoustic navigation system, an electronic depth gauge, a compass, and GPS and RF interfaces, all integrated with a standard 486-based PC. These diver sonars have been evaluated by the very shallow water mine countermeasure detachment since spring 1997. Results are very positive, showing significantly greater capabilities than current diver-held systems. For example, the detection ranges are increased over existing systems, and the system allows the divers to classify mines at a significant stand-off range. As a result, the INSS design has been chosen for acquisition as the next-generation diver navigation and sonar system. The EDMs for this system will be designed and built by ARL:UT during 1998 and 1999 with production planned in 2000.
2017-01-01
Excerpted figure captions: parameters on Wasque Shoals; rotary sonar imagery showing migrating mega-ripples; convergence and divergence of the migrating mega-ripples shown by the green and yellow lines; successive rotary sonar images showing transient burial and re-exposure of a surrogate UXO by migrating mega-ripples.
Geometric Corrections for Topographic Distortion from Side Scan Sonar Data Obtained by ANKOU System
NASA Astrophysics Data System (ADS)
Yamamoto, Fujio; Kato, Yukihiro; Ogasawara, Shohei
The ANKOU is a newly developed, full-ocean-depth, long-range vector side scan sonar system. The system provides real-time vector side scan sonar data to produce backscattering images and bathymetric maps for seafloor swaths up to 10 km on either side of the ship's centerline. Complete geometric corrections are made using towfish attitude and by accounting for the cross-track distortions known as foreshortening and layover, which are caused by violation of the flat-bottom assumption. Foreshortening and layover refer to pixels that have been placed at an incorrect cross-track distance. Our correction of this topographic distortion is accomplished by interpolating a bathymetric profile and ANKOU phase data. We applied these processing techniques to ANKOU backscattering data obtained off the Boso Peninsula, and confirmed their efficiency and utility for making geometric corrections of side scan sonar data.
High-performance, multi-faceted research sonar electronics
NASA Astrophysics Data System (ADS)
Moseley, Julian W.
This thesis describes the design, implementation and testing of a research sonar system capable of performing complex applications such as coherent Doppler measurement and synthetic aperture imaging. Specifically, this thesis presents an approach to improve the precision of the timing control and increase the signal-to-noise ratio of an existing research sonar. A dedicated timing control subsystem and hardware drivers are designed to improve the efficiency of the old sonar's timing operations. A low noise preamplifier is designed to reduce the noise component in the received signal arriving at the input of the system's data acquisition board. Noise analysis, frequency response, and timing simulation data are generated in order to predict the functionality and performance improvements expected when the subsystems are implemented. Experimental data, gathered using these subsystems, are presented, and are shown to closely match the simulation results, thus verifying performance.
Processing and Analysis of Multibeam Sonar Data and Images near the Yellow River Estuary
NASA Astrophysics Data System (ADS)
Tang, Q.
2017-12-01
The Yellow River Estuary is a typical high-suspended-sediment estuary. Large volumes of sediment from the Yellow River, together with other material produced by human activity, create high concentrations of suspended matter and an active depositional system in the estuary and the adjacent waters. Multibeam echo sounders (MBES), developed in the 1970s, provide not only high-precision bathymetric data but also seabed backscatter strength and water-column data with high temporal and spatial resolution. Here, based on high-precision sonar data of the seabed and water column collected by a SeaBat 7125 MBES system near the Yellow River Estuary, we use advanced data- and image-processing methods to generate seabed sonar images and acoustic images of suspended particulate matter in the water column. By analyzing these data and images, we obtain detailed features of the seabed and the whole water column, as well as the shape, size, and basic physical characteristics of suspended particulate matter in the experiment area near the Yellow River Estuary. This study shows great potential for monitoring suspended particulate matter using MBES, and the results will contribute to a comprehensive understanding of sediment transport and the evolution of the river channel and shoals in the Yellow River Estuary.
Neyman Pearson detection of K-distributed random variables
NASA Astrophysics Data System (ADS)
Tucker, J. Derek; Azimi-Sadjadi, Mahmood R.
2010-04-01
In this paper, a new detection method for sonar imagery is developed for K-distributed background clutter. The equation for the log-likelihood is derived and compared to the corresponding counterparts derived under Gaussian and Rayleigh assumptions. Test results of the proposed method on a data set of synthetic underwater sonar images are also presented. This database contains images with targets of different shapes inserted into backgrounds generated using a correlated K-distributed model. Results illustrating the effectiveness of the K-distributed detector are presented in terms of probability of detection, false alarm, and correct classification rates for various bottom clutter scenarios.
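A hedged sketch of one building block implied above, not the paper's derivation: evaluating the log-likelihood of amplitude data under a K-distributed clutter model and under a Rayleigh model with the same mean intensity. The parameterization of the K pdf is a standard one and is an assumption here.

```python
# K pdf assumed: f(x) = 4 c^(nu+1) x^nu K_{nu-1}(2 c x) / Gamma(nu),
# with shape nu, mean intensity mu = E[x^2], and c = sqrt(nu / mu).
import numpy as np
from scipy.special import kv, gammaln

def k_loglike(x, nu, mu):
    c = np.sqrt(nu / mu)
    x = np.asarray(x, dtype=float)
    return np.sum(np.log(4.0) + (nu + 1.0) * np.log(c) - gammaln(nu)
                  + nu * np.log(x) + np.log(kv(nu - 1.0, 2.0 * c * x)))

def rayleigh_loglike(x, mu):
    x = np.asarray(x, dtype=float)
    return np.sum(np.log(2.0 * x / mu) - x**2 / mu)

# Simulate K-distributed amplitudes via the compound (gamma-modulated Rayleigh) model
rng = np.random.default_rng(0)
nu, mu = 1.5, 1.0
tau = rng.gamma(shape=nu, scale=mu / nu, size=5000)        # local mean intensity
amps = np.sqrt(tau * rng.exponential(size=5000))           # K-distributed envelope
print("K model log-likelihood       :", k_loglike(amps, nu, mu))
print("Rayleigh model log-likelihood:", rayleigh_loglike(amps, mu))
```

On heavy-tailed (spiky) clutter the K model attains the higher log-likelihood, which is the motivation for designing the detector around it rather than around the Rayleigh assumption.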
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wahl, D.E.; Jakowatz, C.V. Jr.; Ghiglia, D.C.
1991-01-01
Autofocus methods in SAR and self-survey techniques in SONAR have a common mathematical basis in that they both involve estimation and correction of phase errors introduced by sensor position uncertainties. Time delay estimation and correlation methods have been shown to be effective in solving the self-survey problem for towed SONAR arrays. Since it can be shown that platform motion errors introduce similar time-delay estimation problems in SAR imaging, the question arises as to whether such techniques could be effectively employed for autofocus of SAR imagery. With a simple mathematical model for motion errors in SAR, we will show why such correlation/time-delay techniques are not nearly as effective as established SAR autofocus algorithms such as phase gradient autofocus or sub-aperture based methods. This analysis forms an important bridge between signal processing methodologies for SAR and SONAR. 5 refs., 4 figs.
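For readers unfamiliar with the time-delay-estimation-by-correlation idea the abstract refers to, here is a minimal, self-contained illustration; the pulse shape, sample rate, and noise level are arbitrary example values.

```python
# Estimate the delay between two noisy copies of a signal from the peak
# of their cross-correlation.
import numpy as np
from scipy.signal import correlate, correlation_lags

fs = 10_000.0                                   # sample rate (Hz)
t = np.arange(0, 0.1, 1 / fs)
pulse = np.sin(2 * np.pi * 1000 * t) * np.exp(-((t - 0.05) / 0.01) ** 2)

true_delay_s = 3.7e-3
shift = int(round(true_delay_s * fs))
rng = np.random.default_rng(1)
x = pulse + 0.1 * rng.standard_normal(t.size)
y = np.roll(pulse, shift) + 0.1 * rng.standard_normal(t.size)

xc = correlate(y, x, mode="full")
lags = correlation_lags(y.size, x.size, mode="full")
est_delay_s = lags[np.argmax(xc)] / fs
print(f"true delay {true_delay_s*1e3:.2f} ms, estimated {est_delay_s*1e3:.2f} ms")
```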
Technology Infusion of CodeSonar into the Space Network Ground Segment (RII07)
NASA Technical Reports Server (NTRS)
Benson, Markland
2008-01-01
The NASA Software Assurance Research Program (in part) performs studies as to the feasibility of technologies for improving the safety, quality, reliability, cost, and performance of NASA software. This study considers the application of commercial automated source code analysis tools to mission critical ground software that is in the operations and sustainment portion of the product lifecycle.
Interferometric side scan sonar and data fusion
NASA Astrophysics Data System (ADS)
Sintes, Christophe R.; Solaiman, Basel
2000-04-01
This paper concerns the possibilities of imaging the sea bottom and determining the altitude of each imaged point. New side scan sonars that can image the sea bottom with high definition, and evaluate the relief with the same definition, derive their performance from an interferometric multisensor system. The drawback concerns the precision of the numerical altitude model. One way to improve measurement precision is to merge all of the information issued from the multisensor system, which increases the signal-to-noise ratio (SNR) and the robustness of the method. The aim of this paper is to clearly demonstrate the ability to derive benefit from all of the information issued from a three-array side scan sonar by merging: (1) the three phase signals obtained at the output of the sensors, (2) the same set of data after the application of different processing methods, and (3) the a priori relief contextual information. The key idea of the proposed fusion technique is to exploit the strengths and weaknesses of each data element in the fusion process so that the global SNR is improved, as well as the robustness to hostile noisy environments.
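A hedged sketch of the basic interferometric step behind such systems, assuming a single vertical pair of receiver rows, a plane-wave arrival model, and already-unwrapped phase; the frequency, baseline, and numbers are illustrative, and the paper's three-array fusion and contextual information are not reproduced.

```python
import numpy as np

SOUND_SPEED = 1500.0     # m/s, assumed
FREQ = 100e3             # Hz, assumed sonar frequency
BASELINE = 0.10          # m, vertical receiver separation (assumed)

def phase_to_depth(slant_range, phase_diff):
    """Convert unwrapped interferometric phase to depth below the sonar.

    Plane-wave model: phase_diff = (2*pi*d/lambda) * sin(theta), where theta
    is the depression angle of the arrival below horizontal for a vertical
    baseline d.
    """
    lam = SOUND_SPEED / FREQ
    sin_theta = np.clip(phase_diff * lam / (2 * np.pi * BASELINE), -1.0, 1.0)
    theta = np.arcsin(sin_theta)
    depth = slant_range * np.sin(theta)      # below the towfish
    ground = slant_range * np.cos(theta)     # cross-track distance
    return depth, ground

r = np.array([60.0, 120.0, 200.0])           # slant ranges (m)
dphi = np.array([2.9, 1.7, 1.1])             # unwrapped phase differences (rad)
print(phase_to_depth(r, dphi))
# Averaging the complex interferograms of several receiver pairs before
# extracting the phase is one simple way to raise the SNR, in the spirit of
# the fusion described above.
```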
2016-01-01
Only front-matter fragments are recoverable for this record; they indicate a dissertation addressing multi-session and multi-robot SLAM, robust techniques for SLAM back-ends, and sonar-based perception for autonomous mobile robots.
Shallow water imaging sonar system for environmental surveying. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-05-01
The scope of this research is to develop a shallow water sonar system designed to detect and map the location of objects such as hazardous wastes or discarded ordnance in coastal waters. The system will use a high-frequency, wide-bandwidth imaging sonar, mounted on a moving platform towed behind a boat, to detect and identify objects on the sea bottom. Resolved images can be obtained even if the targets are buried in an overlayer of silt. The specific technical objective of this research was to develop and test a prototype system that is capable of (1) scanning at high speeds (up to 10 m/s), even in shallow water (depths to ten meters), without motion blurring or loss of resolution; and (2) producing images of the bottom structure that are detailed enough for unambiguous detection of objects as small as 15 cm, even if they are buried up to 30 cm deep in silt or sand. The critical technology uses a linear FM (LFM) or similar complex waveform, which has a high bandwidth for good range resolution together with a long pulse length for correspondingly good Doppler resolution. The long-duration signal deposits more energy on the target than a narrower pulse, which increases the signal-to-noise ratio and signal-to-clutter ratio. This in turn allows the use of cheap, lightweight, low-power piezoelectric transducers in the 30-500 kHz range.
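A hedged illustration of the LFM idea described above: a long, wide-bandwidth chirp is compressed by matched filtering, so range resolution is set by bandwidth (c/2B) rather than by pulse length. The sweep, pulse length, and range below are example values, not the prototype's.

```python
import numpy as np
from scipy.signal import chirp, correlate

c = 1500.0                # sound speed in water (m/s)
fs = 2_000_000.0          # sample rate (Hz)
f0, f1 = 100e3, 300e3     # chirp sweep (bandwidth B = 200 kHz)
T = 5e-3                  # pulse length (s) -- a deliberately "long" pulse

t = np.arange(0, T, 1 / fs)
tx = chirp(t, f0=f0, t1=T, f1=f1)

# Echo: the pulse delayed by a two-way travel time, plus noise
delay_s = 2 * 12.0 / c                              # target at 12 m range
rx = np.zeros(int(fs * 0.03))
i0 = int(delay_s * fs)
rx[i0:i0 + tx.size] += tx
rx += 0.5 * np.random.default_rng(2).standard_normal(rx.size)

mf = correlate(rx, tx, mode="valid")                # pulse compression
est_range = np.argmax(np.abs(mf)) / fs * c / 2.0
print(f"estimated range {est_range:.2f} m, "
      f"theoretical resolution {c / (2 * (f1 - f0)) * 100:.2f} cm")
```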
Geophysical study of the East Pacific Rise 15°N-17°N: An unusually robust segment
NASA Astrophysics Data System (ADS)
Weiland, Charles M.; MacDonald, Ken C.
1996-09-01
Bathymetric, side-scan sonar, magnetic and gravity data from the East Pacific Rise (EPR) between 15° and 17°N are used to establish the spreading history and examine melt delivery to an unusually robust spreading segment. The axial ridge between the Orozco transform fault (15°30'N) and the 16°20'N overlapping spreading center (OSC) has an average elevation of 2300 m which is 300 m shallower than typical EPR depths, and its cross-sectional area is double the average value for the northern EPR. The total opening rate is 86 km/Myr, but the inflated segment appears to have spread faster to the east by more than 20% since 0.78 Ma. The orientation of magnetic isochrons and lineaments in the side-scan sonar indicates a ˜3° counterclockwise rotation of the spreading direction since 1.8 Ma (C2) and reflects a change in the Pacific-Cocos plate motion. The side-scan lineaments also show that the percentage of inward facing faults (83%) and the spacing between faults (1.5 km) are consistent with the spreading rate dependence shown by Carbotte and Macdonald [1994]. However, the mean fault length (4.8 km) is 1.5 km shorter than expected for the spreading rate and suggests that extensive off-axis volcanism has draped the faults. Gravity analysis shows that the inflated segment has a ˜12-mGal bull's eye shaped low in residual mantle Bouguer anomaly. We offer several possible end-member models for the anomaly, including a prism of 10% partial melt in the mantle and lower crust or a crustal thickness anomaly of 2.25 km. Kinematic modeling that is based on structure and magnetic data suggests that two large magmatic pulses occurred at approximately 0.8 Ma and 0.3 Ma and have reshaped the plate boundary geometry and inflated the segment.
Wang, Xingmei; Hao, Wenqian; Li, Qiming
2017-12-18
This paper proposes an adaptive cultural algorithm with improved quantum-behaved particle swarm optimization (ACA-IQPSO) to detect objects in underwater sonar images. In the population space, to improve the searching ability of particles, the iteration count and the fitness value of particles are used as factors to adaptively adjust the contraction-expansion coefficient of the quantum-behaved particle swarm optimization (QPSO) algorithm. The improved quantum-behaved particle swarm optimization algorithm (IQPSO) can make particles adjust their behaviour according to their quality. In the belief space, a new update strategy is adopted to update cultural individuals according to the idea of the update strategy in the shuffled frog leaping algorithm (SFLA). Moreover, to enhance the utilization of information in the population space and belief space, the acceptance function and influence function are redesigned in the new communication protocol. The experimental results show that ACA-IQPSO can obtain good clustering centres according to the grey-level distribution of underwater sonar images and accurately complete underwater object detection. Compared with other algorithms, the proposed ACA-IQPSO has good effectiveness, excellent adaptability, a powerful searching ability, and high convergence efficiency. Meanwhile, the experimental results on benchmark functions further demonstrate that the proposed ACA-IQPSO has better searching ability, convergence efficiency, and stability.
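A minimal, generic QPSO sketch to show the update rule the method builds on; the paper's adaptive contraction-expansion schedule, cultural belief space, and SFLA-style update are not reproduced here, and the linearly decreasing beta and sphere objective are placeholder choices.

```python
import numpy as np

def qpso(fitness, dim, n_particles=30, iters=200, bounds=(-10.0, 10.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    pbest = x.copy()
    pbest_fit = np.array([fitness(p) for p in pbest])
    gbest = pbest[np.argmin(pbest_fit)].copy()

    for it in range(iters):
        beta = 1.0 - 0.5 * it / iters              # contraction-expansion coefficient
        mbest = pbest.mean(axis=0)                 # mean best position
        phi = rng.uniform(size=(n_particles, dim))
        p = phi * pbest + (1.0 - phi) * gbest      # local attractors
        u = rng.uniform(size=(n_particles, dim))
        sign = np.where(rng.uniform(size=(n_particles, dim)) < 0.5, -1.0, 1.0)
        x = np.clip(p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u), lo, hi)

        fit = np.array([fitness(xi) for xi in x])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = x[improved], fit[improved]
        gbest = pbest[np.argmin(pbest_fit)].copy()
    return gbest, pbest_fit.min()

best, val = qpso(lambda v: float(np.sum(v**2)), dim=5)
print(best, val)
```

In the sonar application the fitness would instead score candidate clustering centres against the grey-level distribution of the image, as described above.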
NASA Astrophysics Data System (ADS)
Morris, Phillip A.
The prevalence of low-cost side scanning sonar systems mounted on small recreational vessels has created improved opportunities to identify and map submerged navigational hazards in freshwater impoundments. However, these economical sensors also present unique challenges for automated techniques. This research explores related literature in automated sonar imagery processing and mapping technology, proposes and implements a framework derived from these sources, and evaluates the approach with video collected from a recreational grade sonar system. Image analysis techniques including optical character recognition and an unsupervised computer automated detection (CAD) algorithm are employed to extract the transducer GPS coordinates and slant range distance of objects protruding from the lake bottom. The retrieved information is formatted for inclusion into a spatial mapping model. Specific attributes of the sonar sensors are modeled such that probability profiles may be projected onto a three dimensional gridded map. These profiles are computed from multiple points of view as sonar traces crisscross or come near each other. As lake levels fluctuate over time so do the elevation points of view. With each sonar record, the probability of a hazard existing at certain elevations at the respective grid points is updated with Bayesian mechanics. As reinforcing data is collected, the confidence of the map improves. Given a lake's current elevation and a vessel draft, a final generated map can identify areas of the lake that have a high probability of containing hazards that threaten navigation. The approach is implemented in C/C++ utilizing OpenCV, Tesseract OCR, and QGIS open source software and evaluated in a designated test area at Lake Lavon, Collin County, Texas.
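A hedged sketch of the grid-based Bayesian update described above, using a standard log-odds formulation; the class, sensor-model probabilities, and cell indexing are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

P_HIT_GIVEN_HAZARD = 0.7     # detection probability (assumed sensor model)
P_HIT_GIVEN_CLEAR = 0.15     # false-alarm probability (assumed)

def logit(p):
    return np.log(p / (1.0 - p))

class HazardGrid:
    """Per-cell, per-elevation-bin log-odds that a hazard is present."""
    def __init__(self, nx, ny, n_elev, prior=0.05):
        self.logodds = np.full((nx, ny, n_elev), logit(prior))

    def update(self, ix, iy, iz, detected):
        # Bayes in log-odds form: add the log likelihood ratio of the observation
        if detected:
            delta = np.log(P_HIT_GIVEN_HAZARD / P_HIT_GIVEN_CLEAR)
        else:
            delta = np.log((1 - P_HIT_GIVEN_HAZARD) / (1 - P_HIT_GIVEN_CLEAR))
        self.logodds[ix, iy, iz] += delta

    def probability(self):
        return 1.0 / (1.0 + np.exp(-self.logodds))

grid = HazardGrid(100, 100, 10)
for _ in range(3):                          # three reinforcing passes over one cell
    grid.update(10, 20, 4, detected=True)
print(grid.probability()[10, 20, 4])        # confidence grows with repeated evidence
```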
Spatial and temporal variation of acoustic backscatter in the STRESS experiment
NASA Astrophysics Data System (ADS)
Dworski, J. George; Jackson, Darrell R.
1994-08-01
Acoustic backscatter measurements of the seabed were made with a bottom-mounted, circularly scanning sonar. The placement was at 91 m depth, mid-shelf off Northern California (38°34'N), site C3 of the STRESS I experiment (1988-1989). Our expectation was that sonar images (70 m radius, 12,000 m²) would provide a means of observing, over a large field of view, changes in the bottom due to storm-induced sediment transport and to bioturbation. This expectation was supported in part by towed sonar measurements at 35 kHz over a sandy area in the North Sea, where dramatic spatial variation in the level of the backscattered signal was observed during an autumn storm on scales of a few kilometers, with no concomitant change in sediment grain size [Jackson et al. (1986) The Journal of the Acoustical Society of America, 80, 1188-1199]. It appeared possible that storm-driven sediment transport might have been responsible for this patchiness, by altering bottom roughness and by redeposition of suspended material. At the California site, conventional sonar processing of our data from the STRESS experiment reveals no such dramatic change in backscattered signal level due to storms. The sonar images contain random structures whose time evolution is subtle and difficult to interpret. A much clearer picture of temporal and spatial variations emerges from a processing scheme involving cross-correlation of time-separated acoustic views of the bottom. In effect, the sequence of correlation images produces a movie in which patches of activity are seen to develop as functions of time. It appears that most of this activity is biological rather than hydrodynamic. A tentative explanation is twofold. The bottom shear stress might have been considerably greater at the North Sea site, whose depth was only one-half that of the California site. The seafloor at the California site was silty-clayey, and backscatter from such a floor is less sensitive to the water-floor interface shape and roughness than it would be for a sandy bottom.
New sidescan sonar and gravity evidence that the Nova-Canton Trough is a fracture zone
NASA Astrophysics Data System (ADS)
Joseph, Devorah; Taylor, Brian; Shor, Alexander N.
1992-05-01
A 1990 sidescan sonar survey in the eastern region of the Nova-Canton Trough mapped 138°-striking abyssal-hill fabric trending into 70°-striking trough structures. The location and angle of intersection of the abyssal hills with the eastern Nova-Canton Trough effectively disprove a spreading-center origin of this feature. Free-air gravity anomalies derived from satellite altimetry data show continuity, across the Line Islands, of the Nova-Canton Trough with the Clipperton Fracture Zone. The Canton-Clipperton trend is copolar, about a pole at 30°S, 152°W, with other coeval Pacific-Farallon fracture-zone segments, from the Pau to Marquesas fracture zones. This copolarity leads us to postulate a Pacific-Farallon spreading pattern for the magnetic quiet zone region north and east of the Manihiki Plateau, with the Nova-Canton Trough originating as a transform fault in this system.
Computer image processing in marine resource exploration
NASA Technical Reports Server (NTRS)
Paluzzi, P. R.; Normark, W. R.; Hess, G. R.; Hess, H. D.; Cruickshank, M. J.
1976-01-01
Pictographic data or imagery is commonly used in marine exploration. Pre-existing image processing techniques (software) similar to those used on imagery obtained from unmanned planetary exploration were used to improve marine photography and side-scan sonar imagery. Features and details not visible by conventional photo processing methods were enhanced by filtering and noise removal on selected deep-sea photographs. Information gained near the periphery of photographs allows improved interpretation and facilitates construction of bottom mosaics where overlapping frames are available. Similar processing techniques were applied to side-scan sonar imagery, including corrections for slant range distortion, and along-track scale changes. The use of digital data processing and storage techniques greatly extends the quantity of information that can be handled, stored, and processed.
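A minimal modern analogue of the filtering and noise-removal enhancement described above (the original 1970s planetary-imaging pipeline differed): median filtering to suppress impulse noise followed by a percentile contrast stretch. The window size and percentiles are placeholder choices.

```python
import numpy as np
from scipy.ndimage import median_filter

def enhance(img, size=3, p_lo=2, p_hi=98):
    """img: 2-D float array (e.g., a digitized sonograph or deep-sea photograph)."""
    smoothed = median_filter(img, size=size)             # remove impulse noise
    lo, hi = np.percentile(smoothed, [p_lo, p_hi])
    return np.clip((smoothed - lo) / (hi - lo + 1e-12), 0.0, 1.0)

rng = np.random.default_rng(3)
demo = rng.random((128, 128)) * 0.2 + 0.4
demo[60:70, 60:70] += 0.3                                 # a faint "feature"
demo[rng.random(demo.shape) < 0.02] = 1.0                 # impulse noise
print(enhance(demo).min(), enhance(demo).max())
```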
Sea-floor geology and character offshore of Rocky Point, New York
Poppe, L.J.; McMullen, K.Y.; Ackerman, S.D.; Blackwood, D.S.; Irwin, B.J.; Schaer, J.D.; Lewit, P.G.; Doran, E.F.
2010-01-01
Optimization of Adaboost Algorithm for Sonar Target Detection in a Multi-Stage ATR System
NASA Technical Reports Server (NTRS)
Lin, Tsung Han (Hank)
2011-01-01
JPL has developed a multi-stage Automated Target Recognition (ATR) system to locate objects in images. First, input images are preprocessed and sent to a Grayscale Optical Correlator (GOC) filter to identify possible regions-of-interest (ROIs). Second, feature extraction operations are performed using Texton filters and Principal Component Analysis (PCA). Finally, the features are fed to a classifier, to identify ROIs that contain the targets. Previous work used the Feed-forward Back-propagation Neural Network for classification. In this project we investigate a version of Adaboost as a classifier for comparison. The version we used is known as GentleBoost. We used the boosted decision tree as the weak classifier. We have tested our ATR system against real-world sonar images using the Adaboost approach. Results indicate an improvement in performance over a single Neural Network design.
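A hedged sketch of the final classification stage: boosted shallow decision trees deciding whether a feature vector from a region of interest contains a target. scikit-learn's discrete AdaBoost (decision stumps) is used here as a stand-in for the GentleBoost variant described above, and the data are synthetic placeholders rather than Texton/PCA features from sonar images.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic ROI feature vectors with an imbalanced target/non-target split
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = AdaBoostClassifier(n_estimators=200, random_state=0)   # stumps as weak learners
clf.fit(X_tr, y_tr)
print("ROI classification accuracy:", clf.score(X_te, y_te))
```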
A micro-Doppler sonar for acoustic surveillance in sensor networks
NASA Astrophysics Data System (ADS)
Zhang, Zhaonian
Wireless sensor networks have been employed in a wide variety of applications, despite the limited energy and communication resources at each sensor node. Low power custom VLSI chips implementing passive acoustic sensing algorithms have been successfully integrated into an acoustic surveillance unit and demonstrated for detection and location of sound sources. In this dissertation, I explore active and passive acoustic sensing techniques, signal processing and classification algorithms for detection and classification in a multinodal sensor network environment. I will present the design and characterization of a continuous-wave micro-Doppler sonar to image objects with articulated moving components. As an example application for this system, we use it to image gaits of humans and four-legged animals. I will present the micro-Doppler gait signatures of a walking person, a dog and a horse. I will discuss the resolution and range of this micro-Doppler sonar and use experimental results to support the theoretical analyses. In order to reduce the data rate and make the system amenable to wireless sensor networks, I will present a second micro-Doppler sonar that uses bandpass sampling for data acquisition. Speech recognition algorithms are explored for biometric identifications from one's gait, and I will present and compare the classification performance of the two systems. The acoustic micro-Doppler sonar design and biometric identification results are the first in the field as the previous work used either video camera or microwave technology. I will also review bearing estimation algorithms and present results of applying these algorithms for bearing estimation and tracking of moving vehicles. Another major source of the power consumption at each sensor node is the wireless interface. To address the need of low power communications in a wireless sensor network, I will also discuss the design and implementation of ultra wideband transmitters in a three dimensional silicon on insulator process. Lastly, a prototype of neuromorphic interconnects using ultra wideband radio will be presented.
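A hedged sketch of how a micro-Doppler gait signature can be visualized from a CW sonar return: simulate a reflector whose radial velocity oscillates (a swinging limb) on top of a constant walking speed, then compute a spectrogram of the baseband signal. The carrier, sample rate, speeds, and air-propagation assumption are all illustrative.

```python
import numpy as np
from scipy.signal import spectrogram

c = 343.0                 # sound speed in air (m/s), assumed airborne sonar
f_c = 40e3                # carrier frequency (Hz), assumed
fs = 8000.0               # sample rate of the demodulated (baseband) signal
t = np.arange(0, 4, 1 / fs)

v_body = 1.2                                        # m/s walking speed
v_limb = 0.8 * np.sin(2 * np.pi * 1.8 * t)          # limb swing at ~1.8 Hz
f_dopp = 2 * (v_body + v_limb) * f_c / c            # instantaneous Doppler shift
phase = 2 * np.pi * np.cumsum(f_dopp) / fs
rng = np.random.default_rng(4)
sig = np.exp(1j * phase) + 0.05 * (rng.standard_normal(t.size)
                                   + 1j * rng.standard_normal(t.size))

f, tt, S = spectrogram(sig, fs=fs, nperseg=512, noverlap=448,
                       return_onesided=False)
print("dominant Doppler bin (Hz):", f[np.argmax(S.mean(axis=1))])
# Plotting 10*log10(S) versus time shows the oscillating gait signature.
```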
NASA Astrophysics Data System (ADS)
Mayer, L. A.; Calder, B.; Schmidt, J. S.
2003-12-01
Historically, archaeological investigations use sidescan sonar and marine magnetometers as initial search tools. Targets are then examined through direct observation by divers, video, or photographs. Magnetometers can demonstrate the presence, absence, and relative susceptibility of ferrous objects but provide little indication of the nature of the target. Sidescan sonar can present a clear image of the overall nature of a target and its surrounding environment, but the sidescan image is often distorted and contains little information about the true 3-D shape of the object. Optical techniques allow precise identification of objects but suffer from very limited range, even in the best of situations. Modern high-resolution multibeam sonar offers an opportunity to cover a relatively large area from a safe distance above the target, while resolving the true three-dimensional (3-D) shape of the object with centimeter-level resolution. The combination of 3-D mapping and interactive 3-D visualization techniques provides a powerful new means to explore underwater artifacts. A clear demonstration of the applicability of high-resolution multibeam sonar to wreck and artifact investigations occurred when the Naval Historical Center (NHC), the Center for Coastal and Ocean Mapping (CCOM) at the University of New Hampshire, and Reson Inc. collaborated to explore the state of preservation, and the impact on the surrounding environment, of a series of wrecks located off the coast of Normandy, France, adjacent to the American landing sectors. The survey augmented previously collected magnetometer and high-resolution sidescan sonar data using a Reson 8125 high-resolution focused multibeam sonar with 240, 0.5° (at nadir) beams distributed over a 120° swath. The team investigated 21 areas in water depths ranging from about 3 to 30 meters (m); some areas contained individual targets such as landing craft, barges, a destroyer, troop carrier, etc., while others contained multiple smaller targets such as tanks and trucks. Of particular interest were the well-preserved caissons and blockships of the artificial Mulberry Harbor deployed off Omaha Beach. The near-field beam-forming capability of the Reson 8125, combined with 3-D visualization techniques, provided an unprecedented level of detail, including the ability to recognize individual components of the wrecks (ramps, gun turrets, hatches, etc.), the state of preservation of the wrecks, and the impact of the wrecks on the surrounding seafloor. Visualization of these data on the GeoWall allows us to share the exploration of these important historical artifacts with both experts and the general public.
High Resolution Quaternary Seismic Stratigraphy of the New York Bight Continental Shelf
Schwab, William C.; Denny, J.F.; Foster, D.S.; Lotto, L.L.; Allison, M.A.; Uchupi, E.; Swift, B.A.; Danforth, W.W.; Thieler, E.R.; Butman, Bradford
2003-01-01
A principal focus for the U.S. Geological Survey (USGS) Coastal and Marine Geology Program (marine.usgs.gov) is regional reconnaissance mapping of inner-continental shelf areas, with initial emphasis on heavily used areas of the sea floor near major population centers. The objectives are to develop a detailed regional synthesis of the sea-floor geology in order to provide information for a wide range of management decisions and to form a basis for further investigations of marine geological processes. In 1995, the USGS, in cooperation with the U.S. Army Corps of Engineers (USACOE), New York District, began to generate reconnaissance maps of the continental shelf seaward of the New York - New Jersey metropolitan area. This mapping encompassed the New York Bight inner-continental shelf, one of the most heavily trafficked and exploited coastal regions in the United States. Contiguous areas of the Hudson Shelf Valley, the largest physiographic feature on this segment of the continental shelf, also were mapped as part of a USGS study of contaminated sediments (Buchholtz ten Brink and others, 1994, 1996). The goal of the reconnaissance mapping was to provide a regional synthesis of the sea-floor geology in the New York Bight area, including: (a) a description of sea-floor morphology; (b) a map of sea-floor sedimentary lithotypes; (c) the geometry and structure of the Cretaceous strata and Quaternary deposits; and (d) the geologic history of the region. Pursuing the course of this mapping effort, we obtained sidescan-sonar images of 100% of the sea floor in the study area. Initial interpretations of these sidescan data were presented by Schwab and others (1997a, 1997b, 2000a). High-resolution seismic-reflection profiles collected along each sidescan-sonar line used multiple acoustic sources (e.g., watergun, CHIRP, Geopulse). Multibeam swath-bathymetry data also were obtained for a portion of the study area (Butman and others, 1998). In this report, we present a series of structural and sediment isopach maps and interpretations of the Quaternary evolution of the inner-continental shelf off the New York - New Jersey metropolitan area based on subbottom, sidescan-sonar, and multibeam-bathymetric data.
NASA Astrophysics Data System (ADS)
Nishimura, Kiyokazu; Kisimoto, Kiyoyuki; Joshima, Masato; Arai, Kohsaku
In deep-sea geological surveys, good results are difficult to obtain with a conventional surface-towed acoustic survey system, because the horizontal resolution is limited by the long distance between the sensor and the target (the seafloor). To improve the horizontal resolution, a deep-tow system, which tows the sensor in the vicinity of the seafloor, is most practical, and many such systems have been developed and used to date. It is not easy, however, to carry out a high-density survey in a small area, because the tow body must be maneuvered at an altitude sufficiently close to a seafloor with rugged topography. A ROV (Remotely Operated Vehicle) can be used to solve this problem. The ROV makes a high-density 2D survey feasible because of its maneuverability, although a long-distance survey is difficult with it. Accordingly, we have developed an acoustic survey system installed on a ROV. The system, named DAIPACK (Deep-sea Acoustic Imaging Package), consists of (1) a deep-sea sub-bottom profiler and (2) a deep-sea sidescan sonar. (1) Deep-sea sub-bottom profiler: a lightweight and compact sub-bottom profiler designed for shallow water was chosen and repackaged for deep-sea use. The system is composed of three units: a transducer, an electronic unit, and a notebook computer for system control and data acquisition. The source frequency is 10 kHz. To convert the system for the deep sea, the transducer was exchanged for a deep-sea model, and the electronic unit was improved accordingly. The electronic unit and the notebook computer were installed in a spherical pressure vessel. (2) Deep-sea sidescan sonar: we remodeled a compact shallow-water sidescan sonar (water-depth limit of 30 m) into a deep-sea one. This sidescan sonar is composed of a sonar towfish (transducers and an electronic unit), a cable, and a notebook computer (data processor). To operate in deep water, the transducers were remodeled into a high-pressure-resistant type, and the electronic unit and the computer unit were stored in a spherical pressure vessel. The frequency output of the sidescan sonar is 330 kHz, and the ranging distance is variable from 15 m to 120 m (one side).
Surficial geology of the sea floor in Long Island Sound offshore of Plum Island, New York
McMullen, K.Y.; Poppe, L.J.; Danforth, W.W.; Blackwood, D.S.; Schaer, J.D.; Ostapenko, A.J.; Glomb, K.A.; Doran, E.F.
2010-01-01
The U.S. Geological Survey (USGS), the Connecticut Department of Environmental Protection, and the National Oceanic and Atmospheric Administration (NOAA) have been working cooperatively to interpret surficial sea-floor geology along the coast of the Northeastern United States. NOAA survey H11445 in eastern Long Island Sound, offshore of Plum Island, New York, covers an area of about 12 square kilometers. Multibeam bathymetry and sidescan-sonar imagery from the survey, as well as sediment and photographic data from 13 stations occupied during a USGS verification cruise are used to delineate sea-floor features and characterize the environment. Bathymetry gradually deepens offshore to over 100 meters in a depression in the northwest part of the study area and reaches 60 meters in Plum Gut, a channel between Plum Island and Orient Point. Sand waves are present on a shoal north of Plum Island and in several smaller areas around the basin. Sand-wave asymmetry indicates that counter-clockwise net sediment transport maintains the shoal. Sand is prevalent where there is low backscatter in the sidescan-sonar imagery. Gravel and boulder areas are submerged lag deposits produced from the Harbor Hill-Orient Point-Fishers Island moraine segment and are found adjacent to the shorelines and just north of Plum Island, where high backscatter is present in the sidescan-sonar imagery.
Interference fringes on GLORIA side-scan sonar images from the Bering Sea and their implications
Huggett, Q.J.; Cooper, A. K.; Somers, M.L.; Stubbs, A.R.
1992-01-01
GLORIA side-scan sonographs from the Bering Sea Basin show a complex pattern of interference fringes sub-parallel to the ship's track. Surveys along the same trackline made in 1986 and 1987 show nearly identical patterns. It is concluded from this that the interference patterns are caused by features in the shallow subsurface rather than in the water column. The fringes are interpreted as a thin-layer interference effect that occurs when some of the sound reaching the seafloor passes through it and is reflected off a subsurface layer. The backscattered sound interferes (constructively or destructively) with the reflected sound. Constructive or destructive interference occurs when the difference in the length of the two sound paths is a whole or half multiple of GLORIA's 25 cm wavelength. Thus, as range from the ship increases, sound moves in and out of phase, causing bands of greater and lesser intensity on the GLORIA sonograph. Fluctuations (or 'wiggles') of the fringes on the GLORIA sonographs relate to changes in layer thickness. In principle, a simple three-dimensional image of the subsurface layer may be obtained using GLORIA and bathymetric data from adjacent (parallel) ship's tracks. These patterns have also been identified in images from two other systems: SeaMARC II (12 kHz) long-range sonar and TOBI (30 kHz) deep-towed sonar. In these and other cases worldwide, the fringes do not appear with the same persistence as those seen in the Bering Sea. © 1992 Kluwer Academic Publishers.
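A hedged sketch of the interference condition, using a standard thin-film approximation (Snell refraction into the sediment, constructive interference when the extra in-layer path is a whole number of sediment wavelengths); the sound speeds and angles are assumed values, and layer attenuation and phase shifts at the interfaces are ignored.

```python
import numpy as np

WAVELENGTH_WATER = 0.25    # m (GLORIA, from the abstract)
C_WATER = 1500.0           # m/s, assumed
C_SED = 1700.0             # m/s, assumed sediment sound speed

def layer_thicknesses_for_constructive(incidence_deg, max_order=5):
    """Layer thicknesses (m) giving constructive interference at a given
    incidence angle (measured from vertical at the seafloor)."""
    lam_sed = WAVELENGTH_WATER * C_SED / C_WATER
    sin_t = np.sin(np.radians(incidence_deg)) * C_SED / C_WATER   # Snell's law
    if abs(sin_t) >= 1.0:
        return np.array([])                                       # beyond critical angle
    cos_t = np.sqrt(1.0 - sin_t**2)
    m = np.arange(1, max_order + 1)
    return m * lam_sed / (2.0 * cos_t)        # from 2 t cos(theta_t) = m * lam_sed

for angle in (30.0, 45.0, 60.0):
    print(angle, layer_thicknesses_for_constructive(angle))
# As the incidence angle (and hence range from the ship) changes, a fixed
# layer thickness moves in and out of the constructive condition, which is
# what produces the range-dependent fringes.
```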
Three-Dimensional Ultrasonic Imaging Of The Cornea
NASA Technical Reports Server (NTRS)
Heyser, Richard C.; Rooney, James A.
1988-01-01
Proposed technique generates pictures of curved surfaces. Object ultrasonically scanned in raster pattern generated by scanning transmitter/receiver. Receiver turned on at frequent intervals to measure depth variations of scanned object. Used for medical diagnoses by giving images of small curved objects such as the cornea. Adaptable to other types of reflection measurement systems such as sonar and radar.
Experimental verification of an interpolation algorithm for improved estimates of animal position
NASA Astrophysics Data System (ADS)
Schell, Chad; Jaffe, Jules S.
2004-07-01
This article presents experimental verification of an interpolation algorithm that was previously proposed in Jaffe [J. Acoust. Soc. Am. 105, 3168-3175 (1999)]. The goal of the algorithm is to improve estimates of both target position and target strength by minimizing a least-squares residual between noise-corrupted target measurement data and the output of a model of the sonar's amplitude response to a target at a set of known locations. Although this positional estimator was shown to be a maximum likelihood estimator, in principle, experimental verification was desired because of interest in understanding its true performance. Here, the accuracy of the algorithm is investigated by analyzing the correspondence between a target's true position and the algorithm's estimate. True target position was measured by precise translation of a small test target (bead) or from the analysis of images of fish from a coregistered optical imaging system. Results with the stationary spherical test bead in a high signal-to-noise environment indicate that a large increase in resolution is possible, while results with commercial aquarium fish indicate a smaller increase is obtainable. However, in both experiments the algorithm provides improved estimates of target position over those obtained by simply accepting the angular positions of the sonar beam with maximum output as target position. In addition, increased accuracy in target strength estimation is possible by considering the effects of the sonar beam patterns relative to the interpolated position. A benefit of the algorithm is that it can be applied "ex post facto" to existing data sets from commercial multibeam sonar systems when only the beam intensities have been stored after suitable calibration.
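A hedged sketch of the kind of least-squares interpolation described above: given the amplitudes a target produces in several overlapping beams, search a fine grid of candidate angles, fit the target strength by least squares at each candidate, and keep the angle with the smallest residual. The Gaussian beam model and all numbers are illustrative, not the published beam patterns.

```python
import numpy as np

BEAM_CENTERS = np.array([-4.0, -2.0, 0.0, 2.0, 4.0])   # beam axes (degrees)
BEAM_WIDTH = 2.5                                        # approximate beamwidth (degrees)

def beam_response(theta):
    """Modeled amplitude of each beam for a unit target at angle theta."""
    return np.exp(-0.5 * ((theta - BEAM_CENTERS) / (BEAM_WIDTH / 2.355)) ** 2)

def interpolate_position(measured, grid=np.linspace(-5, 5, 2001)):
    best = (None, None, np.inf)
    for theta in grid:
        b = beam_response(theta)
        s = measured @ b / (b @ b)                      # least-squares target strength
        resid = np.sum((measured - s * b) ** 2)
        if resid < best[2]:
            best = (theta, s, resid)
    return best[:2]

rng = np.random.default_rng(6)
true_theta, true_s = 1.3, 2.0
meas = true_s * beam_response(true_theta) + 0.05 * rng.standard_normal(BEAM_CENTERS.size)
theta_hat, s_hat = interpolate_position(meas)
naive = BEAM_CENTERS[np.argmax(meas)]                   # "loudest beam" estimate
print(f"interpolated {theta_hat:.2f} deg vs loudest-beam {naive:.1f} deg "
      f"(true {true_theta} deg); strength {s_hat:.2f} (true {true_s})")
```

The comparison against the loudest-beam estimate mirrors the paper's baseline of simply accepting the angular position of the beam with maximum output.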
NASA Astrophysics Data System (ADS)
Allen, R. L.
2016-12-01
Computer enhancing of side scanning sonar plots revealed images of massive art, apparent ruins of cities, and subsea temples. Some images are about four to twenty kilometers in length. Present water depths imply that many of the finds must have been created over ten thousand years ago. Also, large carvings of giant sloths, Ice Age elk, mammoths, mastodons, and other cold climate creatures concurrently indicate great age. In offshore areas of North America, some human faces have beards and what appear to be Caucasian characteristics that clearly contrast with the native tribal images. A few images have possible physical appearances associated with Polynesians. Contacts and at least limited migrations must have occurred much further in the ancient past than previously believed. Greatly rising sea levels and radical changes away from late Ice Age climates had to be devastating to very ancient civilizations. Many images indicate that these cultures were capable of construction and massive art at or near the technological level of the Old Kingdom in Egypt. Paleo astronomy is obvious in some plots. Major concerns are how to further evaluate, catalog, protect, and conserve the creations of those cultures.
Young, K.K.; Wilkes, R.J.
1995-11-21
A transponder of an active digital sonar system identifies a multifrequency underwater activating sonar signal received from a remote sonar transmitter. The transponder includes a transducer that receives acoustic waves, including the activating sonar signal, and generates an analog electrical receipt signal. The analog electrical receipt signal is converted to a digital receipt signal and cross-correlated with a digital transmission signal pattern corresponding to the activating sonar signal. A relative peak in the cross-correlation value is indicative of the activating sonar signal having been received by the transponder. In response to identifying the activating sonar signal, the transponder transmits a responding multifrequency sonar signal. 4 figs.
Young, Kenneth K.; Wilkes, R. Jeffrey
1995-01-01
A transponder of an active digital sonar system identifies a multifrequency underwater activating sonar signal received from a remote sonar transmitter. The transponder includes a transducer that receives acoustic waves, including the activating sonar signal, and generates an analog electrical receipt signal. The analog electrical receipt signal is converted to a digital receipt signal and cross-correlated with a digital transmission signal pattern corresponding to the activating sonar signal. A relative peak in the cross-correlation value is indicative of the activating sonar signal having been received by the transponder. In response to identifying the activating sonar signal, the transponder transmits a responding multifrequency sonar signal.
Microprocessor-based interface for oceanography
NASA Technical Reports Server (NTRS)
Hansen, G. R.
1979-01-01
Ocean floor imaging system incorporates five identical microprocessor-based interface units each assigned to specific sonar instrument to simplify system. Central control module based on same microprocessor eliminates need for custom tailoring hardware interfaces for each instrument.
Sidescan sonar as a tool for detection of demersal fish habitats
Able, Kenneth W.; Twichell, David C.; Grimes, Churchill B.; Jones, R. S.
1987-01-01
Sidescan sonar can be an effective tool for the determination of the habitat distribution of commercially important species. This technique has the advantage of rapidly mapping large areas of the seafloor. Sidescan images (sonographs) may also help to identify appropriate fishing gears for different types of seafloor or areas to be avoided with certain types of gears. During the early stages of exploration, verification of sidescan sonar sonographs is critical to successful identification of important habitats. Tilefishes (Lopholatilus and Caulolatilus) are especially good target species because they construct large burrows in the seafloor or live around boulders, both of which are easily detectable on sonographs. In some special circumstances the estimates of tilefish burrow densities from sonographs can be used to estimate standing stock. In many localities the burrow and boulder habitats of tilefish are shared with other commercially important species such as American lobsters, Homarus americanus; cusk, Brosme brosme; and ocean pout, Macrozoarces americanus.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollinger, Geoffrey
This document presents results from tests to demonstrate underwater mapping capabilities of an underwater vehicle in conditions typically found in marine renewable energy arrays. These tests were performed with a tethered Seabotix vLBV300 underwater vehicle. The vehicle is equipped with an inertial navigation system (INS) based on a Gladiator Landmark 40 IMU and Teledyne Explorer Doppler Velocity Log, as well as a Gemini 720i scanning sonar acquired from Tritech. The results presented include both indoor pool and offshore deployments. The indoor pool deployments were performed on October 7, 2016 and February 3, 2017 in Corvallis, OR. The offshore deployment was performed on April 20, 2016 off the coast of Newport, OR (44.678 degrees N, 124.109 degrees W). During the mission period, the sea state varied between 3 and 4, with an average significant wave height of 1.6 m. Data was recorded from both the INS and the sonar.
Technologies for Positioning and Placement of Underwater Structures
2000-03-01
Only table-of-contents fragments are recoverable for this record; they refer to imaging the bottom immediately before placement of a structure and list acoustic sensors, multibeam and side-scan sonar transducers, video cameras, and passive sensors such as tiltmeters, inclinometers, and gyrocompasses.
Multi-Beam Sonar Infrastructure Mapping Research
DOT National Transportation Integrated Search
2017-10-01
The hydraulics unit in MnDOT's bridge office applied for a research grant to develop in-house underwater acoustic 3D imaging capabilities. This research report presents both stationary and mobile scanning techniques, outlines the setup of both syst...
A Detailed Study of Sonar Tomographic Imaging
2013-08-01
Only fragments of this report are recoverable; they describe sonar tomographic imaging in which data collected radially about an axis of rotation are reconstructed either by back-projection (BPA) or by a two-dimensional inverse Fourier transform (2DIFT), and conclude that polar BPA processing requires an appropriate choice of attenuation factor to reduce the effect of specular reflections.
Sonar gas seepage characterization using high resolution systems at short ranges
NASA Astrophysics Data System (ADS)
Schneider von Deimling, J.; Lohrberg, A.; Mücke, I.
2017-12-01
Sonar is extremely sensitive for submarine remote sensing of free gas bubbles. Known reasons for this are (1) the high impedance contrast between water and gas, which holds true even at greater depths, where higher hydrostatic pressures produce a greater mole density in a bubble; (2) resonant behavior at a specific depth-frequency-size/shape relation, with highly nonlinear response; and (3) an often overlooked property that is valuable for gas seepage detection and characterization: the movement of bubbles along trajectories governed by buoyancy, upwelling effects, tides, eddies, and currents. Moving objects are an unusual seismo-acoustic target in solid-earth geophysics, and most processors hardly consider such short-term movement. However, analyzing movement patterns over time and space greatly improves human and algorithmic bubble detection and helps mitigate false alarms often caused by fish swim bladders. We optimized our sonar surveys for gas bubble trajectory analyses using calibrated split-beam and broadband/short-pulse multibeam systems to gather very high quality sonar images. We present sonar data patterns of gas seepage sites recorded at short ranges showing individual bubbles or groups of bubbles. Subsequent analyses of bubble trajectories and sonar target strength can be used to quantify minor gas fluxes with high accuracy. Moreover, we analyzed strong gas bubble seepage sites with significant upwelling. Acoustic inversion of such major seep fluxes is extremely challenging, if not impossible, given uncertainties in bubble size spectra, upwelling velocities, and the beam-geometry position of targets. Our 3D analyses of the water-column multibeam data revealed that some major bubble flows describe spiral vortex trajectories. The phenomenon was first found at an abandoned well site in the North Sea, but our recent investigations confirm that such complex bubble trajectories exist at natural seeps, i.e., at the CO2 seep site Panarea (Italy). We hypothesize that accurate 3D analyses of plume shape and trajectory might help to estimate thresholds for fluxes.
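A hedged sketch of the depth-frequency-size relation mentioned in point (2), using the classical Minnaert resonance for a clean spherical bubble, f0 = (1/2πa)·sqrt(3γP/ρ); real bubbles deviate from this, and the shape effects noted above are ignored. The constants are standard assumed values.

```python
import numpy as np

RHO = 1025.0       # seawater density (kg/m^3)
GAMMA = 1.4        # ratio of specific heats of the gas
P_ATM = 101325.0   # Pa
G = 9.81           # m/s^2

def minnaert_frequency(radius_m, depth_m):
    """Resonance frequency (Hz) of a spherical gas bubble at depth."""
    pressure = P_ATM + RHO * G * depth_m
    return np.sqrt(3.0 * GAMMA * pressure / RHO) / (2.0 * np.pi * radius_m)

for depth in (10.0, 100.0, 1000.0):
    for radius_mm in (1.0, 5.0):
        f0 = minnaert_frequency(radius_mm * 1e-3, depth)
        print(f"depth {depth:6.0f} m, radius {radius_mm} mm -> f0 = {f0/1e3:7.2f} kHz")
```

The increase of resonance frequency with depth for a given bubble size is one reason acoustic detectability of seeps persists, and changes character, at larger hydrostatic pressures.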
NASA Technical Reports Server (NTRS)
Hansen, G. R.
1983-01-01
Sonars are usually designed and constructed as stand-alone instruments. That is, all elements or subsystems of the sonar are provided: power conditioning, displays, intercommunications, control, receiver, transmitter, and transducer. The sonars that are part of the Advanced Ocean Test Development Platform (AOTDP) represent a departure from this manner of implementation and are configured more like an instrumentation system. Only the transducer, transmitter, and receiver, which are unique to a particular sonar function (Up, Down, Side Scan), exist as separable subsystems. The remaining functions are reserved to the AOTDP and serve all sonars and other instrumentation in a shared manner. The organization and functions of the common AOTDP elements were described, and then the interface with the sonars was discussed. The techniques for software control of the sonar parameters were explained, followed by the details of the realization of the sonar functions and some discussion of the performance of the side scan sonars.
NASA Technical Reports Server (NTRS)
1976-01-01
Stanford University cardiologists, with the help of Ames engineers, have validated the operation of the echo-cardioscope to monitor cardiac functions of astronauts in flight. This device forms images of internal structures using high-frequency sound. The instrument is compact, lightweight, portable, and DC powered for safety. The battery powered ultrasonic device, being isolated from its electrical environment, has an inherent safety advantage especially with infants.
NASA Astrophysics Data System (ADS)
Violet, J. A.; Sheets, B. A.; Paola, C.; Pratson, L. F.; Parker, G.
2002-12-01
We illustrate further research results on the transport and deposition of sediment by turbidity currents in an experimental basin, designed to model salt-withdrawal minibasins found along the northern continental slope of the Gulf of Mexico. The experiment was performed in 2001 in the subsiding EXperimental EarthScape facility (XES) at St. Anthony Falls Laboratory, University of Minnesota. The run consisted of two stages that each contained the same sequence of events, which were of three different variations (1.85-minute pulses of 1.5 liters/s discharge, 3.8-minute pulses of 4.5 liters/s discharge, or 36-minute events of 1.5 liters/s discharge). The sediment comprised three grades of silica with nominal diameters of 20 microns (45%), 45 microns (40%), and 110 microns (15%), and all flows had a volume concentration of sediment of 5%. The only difference between stages I and II was that no subsidence occurred during stage II, and that the 110 micron sand was removed from the flows late in stage II to study the effects of a smaller mean flow grain size. Research since the run has focused on the correction of high-frequency sonar data taken during the run, digital photography of the dried deposit stratigraphy, and grain-size data also taken at various locations in the dried deposit. The sonar data are utilized in the creation of post-event topographies and isopach maps to illustrate the controls on erosion, deposition, flow path, deposit thickness, and even the channelization of early flow events. Comparisons of the stratigraphy and the grain-size data with the conclusions from the sonar data are made, as the sonar data are also assembled in a manner that exhibits synthetic, or predicted, stratigraphy (before compaction). Finally, the stratigraphy is structurally described in the proximal, medial, and distal segments of the deposit, and comparisons to the field are made.
Automated Point Cloud Correspondence Detection for Underwater Mapping Using AUVs
NASA Technical Reports Server (NTRS)
Hammond, Marcus; Clark, Ashley; Mahajan, Aditya; Sharma, Sumant; Rock, Stephen
2015-01-01
An algorithm for automating correspondence detection between point clouds composed of multibeam sonar data is presented. This allows accurate initialization for point cloud alignment techniques even in cases where accurate inertial navigation is not available, such as iceberg profiling or vehicles with low-grade inertial navigation systems. Techniques from computer vision literature are used to extract, label, and match keypoints between "pseudo-images" generated from these point clouds. Image matches are refined using RANSAC and information about the vehicle trajectory. The resulting correspondences can be used to initialize an iterative closest point (ICP) registration algorithm to estimate accumulated navigation error and aid in the creation of accurate, self-consistent maps. The results presented use multibeam sonar data obtained from multiple overlapping passes of an underwater canyon in Monterey Bay, California. Using strict matching criteria, the method detects 23 between-swath correspondence events in a set of 155 pseudo-images with zero false positives. Using less conservative matching criteria doubles the number of matches but introduces several false positive matches as well. Heuristics based on known vehicle trajectory information are used to eliminate these.
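A hedged sketch of the correspondence step: detect and match keypoints between two "pseudo-images" rendered from multibeam point clouds, then keep only matches consistent with a RANSAC-estimated transform. ORB features and a homography model are stand-ins for whatever detector and motion model the paper actually used, and the file names are hypothetical.

```python
import numpy as np
import cv2

def match_pseudo_images(img_a, img_b, min_inliers=10):
    """img_a, img_b: uint8 grayscale pseudo-images (e.g., gridded depth)."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    if len(matches) < min_inliers:
        return None

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    H, mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 5.0)
    inliers = int(mask.sum()) if mask is not None else 0
    if inliers < min_inliers:
        return None
    # H and the surviving correspondences can seed an ICP alignment of the
    # underlying point clouds, as described above.
    return H, inliers

# Usage with two hypothetical pseudo-image files:
# a = cv2.imread("swath_041.png", cv2.IMREAD_GRAYSCALE)
# b = cv2.imread("swath_057.png", cv2.IMREAD_GRAYSCALE)
# result = match_pseudo_images(a, b)
```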
Sonar imaging of flooded subsurface voids phase I : proof of concept.
DOT National Transportation Integrated Search
2011-04-15
Damage to Ohio highways due to subsidence or collapse of subsurface voids is a serious problem for the Office of Geotechnical Engineering (OGE) at the Ohio Department of Transportation (ODOT). These voids have often resulted from past underground...
Increasing global accessibility and understanding of water column sonar data
NASA Astrophysics Data System (ADS)
Wall, C.; Anderson, C.; Mesick, S.; Parsons, A. R.; Boyer, T.; McLean, S. J.
2016-02-01
Active acoustic (sonar) technology is of increasing importance for research examining the water column. NOAA uses water column sonar data to map acoustic properties from the ocean surface to the seafloor - from bubbles to biology to bottom. Scientific echosounders aboard fishery survey vessels are used to estimate biomass, measure fish school morphology, and characterize habitat. These surveys produce large volumes of data that are costly and difficult to maintain due to their size, complexity, and proprietary format that require specific software and extensive knowledge. However, through proper management they can deliver valuable information beyond their original collection purpose. In order to maximize the benefit to the public, the data must be easily discoverable and accessible. Access to ancillary data is also needed for complete environmental context and ecosystem assessment. NOAA's National Centers for Environmental Information, in partnership with NOAA's National Marine Fisheries Service and the University of Colorado, created a national archive for the stewardship and distribution of water column sonar data collected on NOAA and academic vessels. A web-based access page allows users to query the metadata and access the raw sonar data. Visualization products being developed allow researchers and the public to understand the quality and content of large volumes of archived data more easily. Such products transform the complex data into a digestible image or graphic and are highly valuable for a broad audience of varying backgrounds. Concurrently collected oceanographic data and bathymetric data are being integrated into the data access web page to provide an ecosystem-wide understanding of the area ensonified. Benefits of the archive include global access to an unprecedented nationwide dataset and the increased potential for researchers to address cross-cutting scientific questions to advance the field of marine ecosystem acoustics.
Bathymetry and acoustic backscatter data collected in 2010 from Cat Island, Mississippi
Buster, Noreen A.; Pfeiffer, William R.; Miselis, Jennifer L.; Kindinger, Jack G.; Wiese, Dana S.; Reynolds, B.J.
2012-01-01
Scientists from the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center (SPCMSC), in collaboration with the U.S. Army Corps of Engineers (USACE), conducted geophysical and sedimentological surveys around Cat Island, the westernmost island in the Mississippi-Alabama barrier island chain (fig. 1). The objectives of the study were to understand the geologic evolution of Cat Island relative to other barrier islands in the northern Gulf of Mexico and to identify relationships between the geologic history, present day morphology, and sediment distribution. This report contains data from the bathymetry and side-scan sonar portion of the study collected during two geophysical cruises. Interferometric swath bathymetry and side-scan sonar data were collected aboard the RV G.K. Gilbert September 7-15, 2010. Single-beam bathymetry was collected in shallow water around the island (less than 2 meters (m)) from the RV Streeterville from September 28 to October 2, 2010, to cover the data gap between the landward limit of the previous cruise and the shoreline. This report serves as an archive of processed interferometric swath and single-beam bathymetry and side-scan sonar data. GIS data products include a 50-m cell size interpolated gridded bathymetry surface, trackline maps, and an acoustic side-scan sonar image. Additional files include error analysis maps, Field Activity Collection System (FACS) logs, and formal Federal Geographic Data Committee (FGDC) metadata.
Grote, Ann B.; Bailey, Michael M.; Zydlewski, Joseph D.; Hightower, Joseph E.
2014-01-01
We investigated the fish community approaching the Veazie Dam on the Penobscot River, Maine, prior to implementation of a major dam removal and river restoration project. Multibeam sonar (dual-frequency identification sonar, DIDSON) surveys were conducted continuously at the fishway entrance from May to July in 2011. A 5% subsample of DIDSON data contained 43 793 fish targets, the majority of which were of Excellent (15.7%) or Good (73.01%) observation quality. Excellent quality DIDSON targets (n = 6876) were apportioned by species using a Bayesian mixture model based on four known fork-length distributions: river herring (alewife, Alosa pseudoharengus, and blueback herring, Alosa aestivalis), American shad (Alosa sapidissima), and two size classes (one-sea-winter and multi-sea-winter) of Atlantic salmon (Salmo salar). 76.2% of targets were assigned to the American shad distribution; Atlantic salmon accounted for 15.64%, and river herring 8.16%, of observed targets. Shad-sized (99.0%) and salmon-sized (99.3%) targets approached the fishway almost exclusively during the day, whereas river herring-sized targets were observed both during the day (51.1%) and at night (48.9%). This approach demonstrates how multibeam sonar imaging can be used to evaluate community composition and species-specific movement patterns in systems where there is little overlap in the length distributions of target species.
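A hedged sketch of the length-based apportionment idea, not the paper's full Bayesian model: the component fork-length distributions are treated as known normals, the mixing proportions are estimated from observed target lengths by EM, and each target then receives posterior group probabilities. The means and standard deviations below are placeholders, not the study's values.

```python
import numpy as np
from scipy.stats import norm

COMPONENTS = {                 # fork length (cm): (mean, sd) -- assumed values
    "river herring": (27.0, 2.5),
    "American shad": (48.0, 5.0),
    "salmon 1SW":    (60.0, 4.0),
    "salmon MSW":    (75.0, 6.0),
}

def apportion(lengths, n_iter=200):
    names = list(COMPONENTS)
    dens = np.column_stack([norm.pdf(lengths, *COMPONENTS[n]) for n in names])
    pi = np.full(len(names), 1.0 / len(names))          # initial mixing proportions
    for _ in range(n_iter):                             # EM on the proportions only
        resp = dens * pi
        resp /= resp.sum(axis=1, keepdims=True)         # posterior group membership
        pi = resp.mean(axis=0)
    return dict(zip(names, pi)), resp

rng = np.random.default_rng(7)
lengths = np.concatenate([rng.normal(48, 5, 700), rng.normal(27, 2.5, 200),
                          rng.normal(60, 4, 80), rng.normal(75, 6, 20)])
proportions, posteriors = apportion(lengths)
print(proportions)             # estimated share of each species / size class
```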
Texture as a basis for acoustic classification of substrate in the nearshore region
NASA Astrophysics Data System (ADS)
Dennison, A.; Wattrus, N. J.
2016-12-01
Segmentation and classification of substrate type from two locations in Lake Superior are predicted using multivariate statistical processing of textural measures derived from shallow-water, high-resolution multibeam bathymetric data. During a multibeam sonar survey, both bathymetric and backscatter data are collected. It is well documented that the statistical characteristics of a sonar backscatter mosaic are dependent on substrate type. While classifying the bottom type on the basis of backscatter alone can accurately predict and map bottom type, it lacks the ability to resolve and capture fine textural details, an important factor in many habitat mapping studies. Statistical processing can capture the pertinent details about the bottom type that are rich in textural information. Further multivariate statistical processing can then isolate characteristic features and provide the basis for an accurate classification scheme. Preliminary results from an analysis of bathymetric data and ground-truth samples collected from the Amnicon River, Superior, Wisconsin, and the Lester River, Duluth, Minnesota, demonstrate the ability to process and develop a novel classification scheme of the bottom type in two geomorphologically distinct areas.
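A hedged sketch of the general multivariate workflow described above (not the authors' specific measures): simple textural statistics are computed in sliding windows over a gridded surface, reduced with PCA, and clustered into substrate classes. Window size, features, and class count are placeholder choices.

```python
import numpy as np
from scipy.ndimage import generic_filter, uniform_filter
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def texture_features(grid, size=7):
    mean = uniform_filter(grid, size)
    sq_mean = uniform_filter(grid**2, size)
    std = np.sqrt(np.maximum(sq_mean - mean**2, 0.0))          # local roughness
    local_range = generic_filter(grid, lambda w: w.max() - w.min(), size=size)
    return np.column_stack([mean.ravel(), std.ravel(), local_range.ravel()])

def classify(grid, n_classes=4):
    X = StandardScaler().fit_transform(texture_features(grid))
    X = PCA(n_components=2).fit_transform(X)                    # isolate dominant structure
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(X)
    return labels.reshape(grid.shape)

demo = np.random.default_rng(8).random((80, 80))
demo[:, 40:] = demo[:, 40:] * 0.2 + 0.5                         # a smoother "substrate"
print(np.unique(classify(demo), return_counts=True))
```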
NASA Astrophysics Data System (ADS)
Petersen, C.; Klaucke, I.; Weinrebe, W.
2006-12-01
The oceanic crust off central Costa Rica northwest of the Cocos Ridge is dominated by chains of seamounts rising 1-2 km above the seafloor with diameters of up to 20 km. The subduction of these seamounts leads to strong indentations, scars and slides on the continental margin. A smoother segment of about 80 km width is located offshore Nicoya peninsula. The segment ends at a fracture zone which marks the transition between oceanic crust created at the Cocos-Nazca spreading center (CNS) and crust created at the East Pacific Rise (EPR). Offshore Nicaragua the incoming EPR crust is dominated by bending-related faults. To investigate the relationship between subduction erosion, fluid venting and mound formation, multibeam bathymetry and high-resolution deep-tow sidescan sonar and sediment echosounder data were acquired during R/V Sonne cruises SO163 and SO173 (2002/2003). The deep-tow system consisted of a dual-frequency 75/410 kHz sidescan sonar and a 2-12 kHz chirp sub-bottom profiler. The connection of the observed seafloor features to deeper subduction-related processes is obtained by analysis of multi-channel streamer (MCS) data acquired during cruises SO81 (1992) and BGR99 (1999). Data examples and interpretations for different settings along the margin are presented. Near the Fisher seamount the large Nicoya slump failed over the flank of a huge subducted seamount. The sidescan and echosounder data permit a detailed characterization of fault patterns and fluid escape structures around the headwall of the slump. Where the fracture zone separating CNS and EPR crust subducts, the Hongo mound field was mapped in detail. Several mounds of up to 100 m height are located in line with a scar possibly created by a subducting ridge of the fracture zone. MCS data image a topographic high on the subducting oceanic crust beneath the mound field, which led to uplift and possibly enabled the ascent of fluids from the subducting plate. The combined analysis of geoacoustic and seismic MCS data confirms that fracturing of the continental slope by subducting oceanic relief is a major mechanism which causes the opening of pathways for fluids to migrate upwards.
Sonar imaging of flooded subsurface voids phase I : proof of concept : executive summary report.
DOT National Transportation Integrated Search
2011-04-15
Damage to Ohio highways due to subsidence or collapse of subsurface voids is a serious problem for the Ohio Department of Transportation (ODOT). These voids have often resulted from past underground mining activities for coal, clay, limesto...
Imaging fall Chinook salmon redds in the Columbia River with a dual-frequency identification sonar
Tiffan, K.F.; Rondorf, D.W.; Skalicky, J.J.
2004-01-01
We tested the efficacy of a dual-frequency identification sonar (DIDSON) for imaging and enumeration of fall Chinook salmon Oncorhynchus tshawytscha redds in a spawning area below Bonneville Dam on the Columbia River. The DIDSON uses sound to form near-video-quality images and has the advantages of imaging in zero-visibility water and possessing a greater detection range and field of view than underwater video cameras. We suspected that the large size and distinct morphology of a fall Chinook salmon redd would facilitate acoustic imaging if the DIDSON was towed near the river bottom so as to cast an acoustic shadow from the tailspill over the redd pocket. We tested this idea by observing 22 different redds with an underwater video camera, spatially referencing their locations, and then navigating to them while imaging them with the DIDSON. All 22 redds were successfully imaged with the DIDSON. We subsequently conducted redd searches along transects to compare the number of redds imaged by the DIDSON with the number observed using an underwater video camera. We counted 117 redds with the DIDSON and 81 redds with the underwater video camera. Only one of the redds observed with the underwater video camera was not also documented by the DIDSON. In spite of the DIDSON's high cost, it may serve as a useful tool for enumerating fall Chinook salmon redds in conditions that are not conducive to underwater videography.
Behavioral responses by grey seals (Halichoerus grypus) to high frequency sonar.
Hastie, Gordon D; Donovan, Carl; Götz, Thomas; Janik, Vincent M
2014-02-15
The use of high frequency sonar is now commonplace in the marine environment. Most marine mammals rely on sound to navigate and to detect prey, and there is the potential that the acoustic signals of sonar could cause behavioral responses. To investigate this, we carried out behavioral response tests with grey seals to two sonar systems (200 and 375 kHz systems). Results showed that both systems had significant effects on the seals' behavior; when the 200 kHz sonar was active, seals spent significantly more time hauled out, and although seals remained swimming during operation of the 375 kHz sonar, they were distributed further from the sonar. The results show that although peak sonar frequencies may be above marine mammal hearing ranges, high levels of sound can be produced within their hearing ranges that elicit behavioral responses; this has clear implications for the widespread use of sonar in the marine environment. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Heinrich, C.; Feldens, P.; Schwarzer, K.
2017-06-01
Hydroacoustic surveys are common tools for habitat investigation and monitoring that aid in the realisation of the aims of the EU Marine Directives. However, the creation of habitat maps is difficult, especially when benthic organisms densely populate the seafloor. This study assesses the sensitivity of entropy and homogeneity image texture parameters derived from backscatter strength data to benthic habitats dominated by the tubeworm Lanice conchilega. Side scan sonar backscatter surveys were carried out in 2010 and 2011 in the German Bight (southern North Sea) at two sites approx. 20 km offshore of the island of Sylt. Abiotic and biotic seabed facies, such as sorted bedforms, areas of fine to medium sand and L. conchilega beds with different tube densities, were identified and characterised based on manual expert analysis and image texture analysis. Ground truthing was performed by grab sampling and underwater video observations. Compared to the manual expert analysis, the k-means classification of image textures proves to be a semi-automated method to investigate small-scale differences in a biologically altered seabed from backscatter data. The texture parameters entropy and homogeneity appear linearly interrelated with tube density, the former positively and the latter negatively. Reinvestigation of one site after 1 year showed an extensive change in the distribution of the L. conchilega-altered seabed. Such marked annual fluctuations in L. conchilega tube cover demonstrate the need for dense time series and high spatial coverage to meaningfully monitor ecological patterns on the seafloor with acoustic backscatter methods in the study region and similar settings worldwide, particularly because the sand mason plays a pivotal role in promoting biodiversity. In this context, image texture analysis provides a cost-effective and reproducible method to track biologically altered seabeds from side scan sonar backscatter signatures.
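A minimal sketch of the texture step described above, assuming a recent scikit-image and scikit-learn install: grey-level co-occurrence matrix (GLCM) entropy and homogeneity are computed per backscatter window and then grouped by k-means. The window size, grey-level quantization, three clusters, and placeholder mosaic are illustrative choices, not the survey's actual parameters.

import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

def entropy_homogeneity(patch, levels=32):
    """GLCM entropy and homogeneity for one 8-bit backscatter window."""
    q = (patch.astype(float) / 256.0 * levels).astype(np.uint8)   # quantize to 'levels' grey levels
    glcm = graycomatrix(q, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    homogeneity = graycoprops(glcm, 'homogeneity')[0, 0]
    return entropy, homogeneity

# Placeholder backscatter mosaic; real input would be the georeferenced mosaic.
mosaic = (np.random.default_rng(0).random((512, 512)) * 255).astype(np.uint8)
win = 32
feats = [entropy_homogeneity(mosaic[r:r + win, c:c + win])
         for r in range(0, mosaic.shape[0] - win + 1, win)
         for c in range(0, mosaic.shape[1] - win + 1, win)]
labels = KMeans(n_clusters=3, n_init=10).fit_predict(np.array(feats))   # three illustrative facies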
ELAS: A powerful, general purpose image processing package
NASA Technical Reports Server (NTRS)
Walters, David; Rickman, Douglas
1991-01-01
ELAS is a software package which has been utilized as an image processing tool for more than a decade. It has been the source of several commercial packages. Now available on UNIX workstations, it is a very powerful, flexible set of software. Applications at Stennis Space Center have included a very wide range of areas, including medicine, forestry, geology, ecological modeling, and sonar imagery. It remains one of the most powerful image processing packages available, either commercially or in the public domain.
The Effects of Towfish Motion on Sidescan Sonar Images: Extension to a Multiple-Beam Device
1994-02-01
simulation, the raw simulated sidescan image is formed from pixels G, which are the sum of energies E assigned to the nearest range-bin k as noted in... for stable motion at constant velocity V0, are applied to (divided into) the G, and the simulated sidescan image is ready to display. Maximal energy... limitation is likely to apply to all multiple-beam sonars of similar construction. The yaw correction was incorporated in the MBEAM model by an
NASA Astrophysics Data System (ADS)
Engquist, Björn; Frederick, Christina; Huynh, Quyen; Zhou, Haomin
2017-06-01
We present a multiscale approach for identifying features in ocean beds by solving inverse problems in high frequency seafloor acoustics. The setting is based on Sound Navigation And Ranging (SONAR) imaging used in scientific, commercial, and military applications. The forward model incorporates multiscale simulations, by coupling Helmholtz equations and geometrical optics for a wide range of spatial scales in the seafloor geometry. This allows for detailed recovery of seafloor parameters including material type. Simulated backscattered data is generated using numerical microlocal analysis techniques. In order to lower the computational cost of the large-scale simulations in the inversion process, we take advantage of a pre-computed library of representative acoustic responses from various seafloor parameterizations.
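The pre-computed library idea can be illustrated with a toy lookup (this is not the authors' Helmholtz/geometrical-optics solver): candidate seafloor parameterizations and their simulated responses are stored, and an observed response is matched by least-squares misfit. All arrays below are random placeholders.

import numpy as np

# Random placeholders: in the paper's setting, each library row would be a simulated
# response from the coupled Helmholtz / geometrical-optics forward model for one
# candidate seafloor parameterization.
rng = np.random.default_rng(0)
params = rng.uniform([1.0, 0.1], [3.0, 2.0], size=(500, 2))   # e.g., density ratio, roughness scale
library = rng.standard_normal((500, 128))                     # pre-computed acoustic responses

def invert(observed):
    """Return the library parameterization whose stored response best fits the data (L2 misfit)."""
    misfit = np.linalg.norm(library - observed, axis=1)
    return params[np.argmin(misfit)]

best_fit = invert(rng.standard_normal(128))

In the full inversion, such a lookup would only seed or constrain the optimization; it does not replace the large-scale forward simulations.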
Enhanced Multistatic Active Sonar via Innovative Signal Processing
2015-09-30
Dates covered: Oct. 01, 2014 - Sept. 30, 2015. Title and subtitle: Enhanced Multistatic Active Sonar via Innovative Signal... active sonar (CAS) in the presence of strong direct blast is studied for the Doppler-tolerant linear frequency modulation waveform. A receiver design... beamformer variants is examined. Subject terms: pulsed active sonar (PAS), continuous active sonar (CAS), strong delay and Doppler-spread direct blast
Mid-Frequency Sonar Interactions with Beaked Whales
2010-09-30
Mid-Frequency Sonar Interactions with Beaked Whales. PI: Kenneth G. Foote, Woods Hole Oceanographic Institution, 98 Water Street, Woods Hole, MA... modeling and visualization system, called the Virtual Beaked Whale, to enable users to predict mid-frequency sonar-induced acoustic fields inside beaked... nature of sonar interactions with beaked whales, and may prove useful in evaluating alternate sonar transmit signals that retain the required
UNDERWATER MAPPING USING GLORIA AND MIPS.
Chavez, Pat S.; Anderson, Jeffrey A.; Schoonmaker, James W.
1987-01-01
Advances in digital image processing of Geological Long-Range Inclined Asdic (GLORIA) sidescan-sonar image data have made it technically and economically possible to map large areas of the ocean floor, including the Exclusive Economic Zone. Software was written to correct both geometric and radiometric distortions that exist in the original raw GLORIA data. A digital mosaicking technique was developed, enabling 2 degree by 2 degree quadrangles to be generated.
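One of the geometric corrections applied to sidescan data of this kind is the standard slant-range to ground-range conversion for a flat seafloor; the sketch below shows that step only and is not a reconstruction of the GLORIA processing chain. The function name, synthetic ping, and uniform output grid are illustrative.

import numpy as np

def slant_to_ground(ping, sample_ranges, tow_altitude, n_out=1024):
    """Resample one sidescan ping from slant range onto a uniform ground-range axis
    (flat-seafloor assumption: ground range = sqrt(slant^2 - altitude^2))."""
    ping = np.asarray(ping, dtype=float)
    sr = np.asarray(sample_ranges, dtype=float)
    valid = sr > tow_altitude                              # drop water-column samples
    gr = np.sqrt(sr[valid] ** 2 - tow_altitude ** 2)
    grid = np.linspace(gr.min(), gr.max(), n_out)
    return grid, np.interp(grid, gr, ping[valid])

# Illustrative use with synthetic numbers (not GLORIA values).
ranges = np.linspace(0.0, 22500.0, 2048)                   # slant ranges in metres
ping = np.random.default_rng(0).random(2048)               # one ping of backscatter samples
grid, corrected = slant_to_ground(ping, ranges, tow_altitude=300.0)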
NASA Astrophysics Data System (ADS)
Miller, N. C.; Brothers, D. S.; Kluesner, J.; Balster-Gee, A.; Ten Brink, U. S.; Andrews, B. D.; Haeussler, P. J.; Watt, J. T.; Dartnell, P.; East, A. E.
2016-12-01
We present high-resolution multi-channel seismic (MCS) images of fault structure and sedimentary stratigraphy along the southeastern Alaska margin, where the northern Queen Charlotte Fault (QCF) cuts the shelf-edge and slope. The QCF is a dominantly strike-slip system that forms the boundary between the Pacific (PA) and North American (NA) plates offshore western Canada and southeastern Alaska. The data were collected using a 64-channel, 200 m digital streamer and a 0.75-3 kJ sparker source aboard the R/V Norseman in August 2016. The survey was designed to cross a seafloor fault trace recently imaged by multibeam sonar (see adjacent poster by Brothers et al.) and to extend the subsurface information landward and seaward from the fault. Analysis of these MCS and multibeam data focuses on addressing key questions that have significant implications for the kinematic and geodynamic history of the fault, including: Is the surface fault imaged in the multibeam sonar the only recently active fault trace? What are the shallow fault zone width and structure? Is the internal structure of the recently discovered pull-apart basin a dynamically developing structure? How does sediment thickness vary along the margin, and how does this variation affect the fault expression? Can previous glacial sequences be identified in the stratigraphy?
Broadband Ultrasonic Transducers
NASA Technical Reports Server (NTRS)
Heyser, R. C.
1986-01-01
New geometry spreads out resonance region of piezoelectric crystal. In new transducer, crystal surfaces made nonparallel. One surface planar; other, concave. Geometry designed to produce nearly uniform response over a predetermined band of frequencies and to attenuate strongly frequencies outside band. Greater bandwidth improves accuracy of sonar and ultrasonic imaging equipment.
Rediscovery and Exploration of Magic Mountain, Explorer Ridge, NE Pacific
NASA Astrophysics Data System (ADS)
Embley, R. W.
2002-12-01
A two-part exploration program at Explorer Ridge, the northernmost spreading segment of the NE Pacific spreading centers, was conducted in two phases during June to August of 2002. A robust hydrothermal system (Magic Mountain) was found in this area in the early 1980s by the Canadian PISCES IV submersible, but its dimensions and geologic relationships were not well determined due to limited dives and poor navigation. The first part of the 2002 exploration program utilized an EM300 multibeam sonar on T. G. Thompson, the autonomous vehicle ABE, and a CTD/rosette system to map the seafloor and conduct hydrothermal plume surveys. While ABE conducted detailed surveys in the area where the most intense hydrothermal plume was found on the initial CTD survey, the T. G. Thompson conducted additional multibeam surveys, CTD casts and CTD tow-yos on the other second order segments up to 60 km away. This increased the efficiency of the expedition by at least 30%. After 12 days on site, a multibeam map was completed of the entire segment, the spatial distribution and character of the hydrothermal plumes were mapped out and a section of seafloor measuring 2 x 5.5 km was mapped in detail with ABE. The ABE used two sonar systems, a previously proven Imagenex pencil beam sonar, and, for the first time, a multibeam sonar (SM2000). In addition to the high-resolution bathymetry (1 m grid-cell size resolution for the SM2000), ABE collected temperature, optical backscatter, eH redox potential, and magnetic field data. Using the CTD and ABE data, a major hydrothermal system was easily located on the seafloor during the second part of the exploration program using the ROPOS remotely operated vehicle. The Magic Mountain hydrothermal system is located almost entirely on the eastern constructional shoulder of the ridge eastward of the rim of the eastern boundary fault of the axial valley. This is in contrast to most other hydrothermal systems on intermediate rate spreading ridges, which are either centered within the neovolcanic zone or associated with a boundary fault. The active venting occurs over at least 400 m along axis and is mostly concentrated in clusters of high temperature chimneys, each about 50 m in diameter. Two of these clusters have a basal sulfide mound. There is obvious structural control of many of the vents - many lie along or in line with distinct fissures or small faults and the entire field appears to have developed within a shallow graben formed on the ridge flank. Most of the chimneys consist of relatively friable sulfates (barite/anhydrite) that vent clear fluid at up to about 290°C. Several larger active chimneys consist primarily of sulfide minerals that emit gray smoke with temperatures as high as 312°C. Biologic communities were primarily associated with the more stable sulfide structures. The mixture of proven technology used from a capable surface vessel during the 2002 Explorer Ridge program, including a cutting-edge deep AUV and a large ROV, provided the tools to explore a little-known site at a full range of scales in a short amount of time and collect invaluable samples for research. These initial data sets from the 2002 exploration program set the stage for more detailed studies of this unique hydrothermal system in the future.
NASA Astrophysics Data System (ADS)
Allen, R. L.
2017-12-01
Enhanced images from subsea sonar scanning of the Western Gulf of Mexico have revealed quite large temples (4 km in length), ruins of cities (14 km by 11 km), pyramids, amphitheaters, and many other structures. Some human faces have beards, implying much earlier migrations of Europeans or North Africans. Several temples have paleoastronomy alignments and similarities to Stonehenge. Southern and Southwestern USA satellite land images display characteristics in common with several subsea designs. Water depths indicate that many structures go back about as far as the late Ice Age and are likely to be over ten thousand years old. Chronologies of civilizations, especially in North America, will need to be seriously reconsidered. Greatly rising sea levels and radical climate changes must have helped to destroy relatively advanced cultures. Surprisingly deep water depths of many architectures provide evidence for closures of the Gulf of Mexico to open seas. Closures and openings may have influenced ancient radical climate swings between warmth and cooling as Gulf contributions to water temperatures contracted or expanded. These creations of very old and surprisingly advanced civilizations need protection.
Multibeam Formation with a Parametric Sonar
1976-03-05
AD-A022 815. Multibeam Formation with a Parametric Sonar. Robert L. White, Texas University at Austin. Prepared for: Office of Naval Research, 5 March... Final Report under Contract N00014-70-A-0166, Task 0020, 1 February - 31 July 1974. Robert L. White, Office of Naval Research, Contract N00014... 78712. Approved for public release; distribution unlimited. Abstract: Parametric sonar has proven to be an effective concept in sonar
Aerial ultrasonic micro Doppler sonar detection range in outdoor environments.
Bradley, Marshall; Sabatier, James M
2012-03-01
Current research demonstrates that micro Doppler sonar has the capability to uniquely identify the presence of a moving human, making it an attractive component in surveillance systems for border security applications. Primary environmental factors that limit sonar performance are two-way spreading losses, ultrasonic absorption, and backscattered energy from the ground that appears at zero Doppler shift in the sonar signal processor. Spectral leakage from the backscatter component has a significant effect on sonar performance for slow moving targets. Sonar performance is shown to rapidly decay as the sensor is moved closer to the ground due to increasing surface backscatter levels. © 2012 Acoustical Society of America
Foote, Kenneth G
2012-05-01
Measurement of acoustic backscattering properties of targets requires removal of the range dependence of echoes. This process is called range compensation. For conventional sonars making measurements in the transducer farfield, the compensation removes effects of geometrical spreading and absorption. For parametric sonars consisting of a parametric acoustic transmitter and a conventional-sonar receiver, two additional range dependences require compensation when making measurements in the nonlinearly generated difference-frequency nearfield: an apparently increasing source level and a changing beamwidth. General expressions are derived for range compensation functions in the difference-frequency nearfield of parametric sonars. These are evaluated numerically for a parametric sonar whose difference-frequency band, effectively 1-6 kHz, is being used to observe Atlantic herring (Clupea harengus) in situ. Range compensation functions for this sonar are compared with corresponding functions for conventional sonars for the cases of single and multiple scatterers. Dependences of these range compensation functions on the parametric sonar transducer shape, size, acoustic power density, and hydrography are investigated. Parametric range compensation functions, when applied with calibration data, will enable difference-frequency echoes to be expressed in physical units of volume backscattering, and backscattering spectra, including fish-swimbladder-resonances, to be analyzed.
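For the conventional-sonar case described above, range compensation reduces to the familiar time-varied-gain forms, roughly 40 log10(r) + 2*alpha*r for a single target and 20 log10(r) + 2*alpha*r for volume backscattering; the parametric nearfield corrections derived in the paper are not reproduced here. A minimal sketch, with an assumed absorption coefficient and synthetic echo levels:

import numpy as np

def range_compensation_db(r, alpha_db_per_m, single_target=True):
    """Conventional-sonar range compensation in dB: two-way geometrical spreading plus absorption.
    40*log10(r) + 2*alpha*r for a single target, 20*log10(r) + 2*alpha*r for volume scattering."""
    r = np.asarray(r, dtype=float)
    spreading = (40.0 if single_target else 20.0) * np.log10(r)
    return spreading + 2.0 * alpha_db_per_m * r

# Example: compensate a synthetic volume-backscatter echo sampled every metre.
ranges = np.arange(1.0, 501.0)
alpha = 0.0005                                   # assumed absorption (dB/m) for a low-frequency band
received_db = -60.0 - range_compensation_db(ranges, alpha, single_target=False)   # synthetic received level
compensated_db = received_db + range_compensation_db(ranges, alpha, single_target=False)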
50 CFR 216.186 - Requirements for reporting.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Low Frequency Active (SURTASS LFA sonar) Sonar § 216.186 Requirements for reporting. (a) The Holder of... of each vessel during each mission; (2) Information on sonar transmissions during each mission; (3... must contain an unclassified analysis of new passive sonar technologies and an assessment of whether...
Code of Federal Regulations, 2012 CFR
2012-10-01
... Frequency Active (SURTASS LFA) Sonar § 218.234 Mitigation. When conducting operations identified in § 218... monitoring. (b) General Operating Procedures: (1) Prior to SURTASS LFA sonar operations, the Navy will... SURTASS LFA sonar signal at a frequency greater than 500 Hertz (Hz). (c) LFA Sonar Mitigation Zone and 1...
Code of Federal Regulations, 2014 CFR
2014-10-01
... Frequency Active (SURTASS LFA) Sonar § 218.234 Mitigation. When conducting operations identified in § 218... monitoring. (b) General Operating Procedures: (1) Prior to SURTASS LFA sonar operations, the Navy will... SURTASS LFA sonar signal at a frequency greater than 500 Hertz (Hz). (c) LFA Sonar Mitigation Zone and 1...
50 CFR 216.186 - Requirements for reporting.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Low Frequency Active (SURTASS LFA sonar) Sonar § 216.186 Requirements for reporting. (a) The Holder of... of each vessel during each mission; (2) Information on sonar transmissions during each mission; (3... must contain an unclassified analysis of new passive sonar technologies and an assessment of whether...
Code of Federal Regulations, 2013 CFR
2013-10-01
... Frequency Active (SURTASS LFA) Sonar § 218.234 Mitigation. When conducting operations identified in § 218... monitoring. (b) General Operating Procedures: (1) Prior to SURTASS LFA sonar operations, the Navy will... SURTASS LFA sonar signal at a frequency greater than 500 Hertz (Hz). (c) LFA Sonar Mitigation Zone and 1...
Sensor Management for Tactical Surveillance Operations
2007-11-01
active and passive sonar for submarine and torpedo detection, and mine avoidance. [range, bearing] Range 1.8 km to 55 km, active or passive. AN/SLQ-501... finding (DF) unit [bearing, classification], maximum range 1100 km, passive. Cameras (daylight/night-vision, video and still): record optical and... infrared still images or motion video of events for near-real-time assessment or long-term analysis and archiving. Range is limited by the image resolution
Coupled dictionary learning for joint MR image restoration and segmentation
NASA Astrophysics Data System (ADS)
Yang, Xuesong; Fan, Yong
2018-03-01
To achieve better segmentation of MR images, image restoration is typically used as a preprocessing step, especially for low-quality MR images. Recent studies have demonstrated that dictionary learning methods could achieve promising performance for both image restoration and image segmentation. These methods typically learn paired dictionaries of image patches from different sources and use a common sparse representation to characterize paired image patches, such as low-quality image patches and their corresponding high quality counterparts for the image restoration, and image patches and their corresponding segmentation labels for the image segmentation. Since learning these dictionaries jointly in a unified framework may improve the image restoration and segmentation simultaneously, we propose a coupled dictionary learning method to concurrently learn dictionaries for joint image restoration and image segmentation based on sparse representations in a multi-atlas image segmentation framework. Particularly, three dictionaries, including a dictionary of low quality image patches, a dictionary of high quality image patches, and a dictionary of segmentation label patches, are learned in a unified framework so that the learned dictionaries of image restoration and segmentation can benefit each other. Our method has been evaluated for segmenting the hippocampus in MR T1 images collected with scanners of different magnetic field strengths. The experimental results have demonstrated that our method achieved better image restoration and segmentation performance than state of the art dictionary learning and sparse representation based image restoration and image segmentation methods.
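A minimal sketch of the paired-dictionary idea the abstract builds on (not the authors' unified three-dictionary optimization): a dictionary is learned on low-quality patches, the same sparse codes are transferred to a least-squares high-quality dictionary, and new low-quality patches are restored through that pair. The patch size, sparsity level, and synthetic patches are assumptions.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

# Synthetic paired training patches (rows are vectorized 8x8 patches); in the paper these
# would come from registered low- and high-quality MR images.
rng = np.random.default_rng(0)
low_patches = rng.standard_normal((2000, 64))
high_patches = low_patches + 0.1 * rng.standard_normal((2000, 64))

# Learn a dictionary on the low-quality patches, then transfer the shared sparse codes
# to a least-squares "paired" high-quality dictionary.
dl = MiniBatchDictionaryLearning(n_components=128, alpha=1.0, random_state=0).fit(low_patches)
D_low = dl.components_
codes = sparse_encode(low_patches, D_low, algorithm='omp', n_nonzero_coefs=5)
D_high = np.linalg.lstsq(codes, high_patches, rcond=None)[0]

def restore(patch_low):
    """Restore one low-quality patch via its sparse code and the paired dictionary."""
    a = sparse_encode(patch_low[None, :], D_low, algorithm='omp', n_nonzero_coefs=5)
    return (a @ D_high).ravel()

restored = restore(rng.standard_normal(64))

A segmentation-label dictionary sharing the same codes would extend this toward the joint restoration-and-segmentation setting described in the abstract.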
50 CFR 216.189 - Renewal of Letters of Authorization.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Array Sensor System Low Frequency Active (SURTASS LFA sonar) Sonar § 216.189 Renewal of Letters of... each SURTASS LFA sonar operation; (3) Timely receipt of the monitoring reports required under § 216.185... comment on the proposed modification. Amending the areas for upcoming SURTASS LFA sonar operations is not...
Code of Federal Regulations, 2010 CFR
2010-10-01
... Frequency Active (SURTASS LFA sonar) Sonar § 216.184 Mitigation. The activity identified in § 216.180(a....54 nm) buffer zone extending beyond the 180-dB zone), SURTASS LFA sonar transmissions will be... active acoustic monitoring described in § 216.185. (c) The high-frequency marine mammal monitoring sonar...
50 CFR 216.189 - Renewal of Letters of Authorization.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Array Sensor System Low Frequency Active (SURTASS LFA sonar) Sonar § 216.189 Renewal of Letters of... each SURTASS LFA sonar operation; (3) Timely receipt of the monitoring reports required under § 216.185... comment on the proposed modification. Amending the areas for upcoming SURTASS LFA sonar operations is not...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-19
... involve underwater explosive detonation, projectile firing, and sonar testing. Summary of Activity Under..., most of the mid-frequency active sonar (MFAS) and high-frequency active sonar (HFAS) testing events... (Number Authorized vs. Conducted). Number Number Sonar system authorized conducted (hrs) (hrs) AN/SQS-53...
50 CFR 218.235 - Requirements for monitoring.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Low Frequency Active (SURTASS LFA) Sonar § 218.235 Requirements for monitoring. (a) The Holder of a...) during operations that employ SURTASS LFA sonar in the active mode. The SURTASS vessels shall have... frequency passive SURTASS sonar to listen for vocalizing marine mammals; and (3) Use the HF/M3 active sonar...
Code of Federal Regulations, 2011 CFR
2011-10-01
... Frequency Active (SURTASS LFA sonar) Sonar § 216.184 Mitigation. The activity identified in § 216.180(a....54 nm) buffer zone extending beyond the 180-dB zone), SURTASS LFA sonar transmissions will be... active acoustic monitoring described in § 216.185. (c) The high-frequency marine mammal monitoring sonar...
50 CFR 218.235 - Requirements for monitoring.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Low Frequency Active (SURTASS LFA) Sonar § 218.235 Requirements for monitoring. (a) The Holder of a...) during operations that employ SURTASS LFA sonar in the active mode. The SURTASS vessels shall have... frequency passive SURTASS sonar to listen for vocalizing marine mammals; and (3) Use the HF/M3 active sonar...
50 CFR 218.235 - Requirements for monitoring.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Low Frequency Active (SURTASS LFA) Sonar § 218.235 Requirements for monitoring. (a) The Holder of a...) during operations that employ SURTASS LFA sonar in the active mode. The SURTASS vessels shall have... frequency passive SURTASS sonar to listen for vocalizing marine mammals; and (3) Use the HF/M3 active sonar...
Remote sensing image segmentation based on Hadoop cloud platform
NASA Astrophysics Data System (ADS)
Li, Jie; Zhu, Lingling; Cao, Fubin
2018-01-01
To address the slow speed and poor real-time performance of remote sensing image segmentation, this paper studies a segmentation method based on the Hadoop cloud platform. After analyzing the structural characteristics of the Hadoop cloud platform and its MapReduce programming component, the paper proposes an image segmentation method that combines OpenCV with Hadoop. First, the MapReduce image processing model on the Hadoop cloud platform is designed: the image input and output formats are customized and the splitting method for the data files is rewritten. Then the Mean Shift image segmentation algorithm is implemented. Finally, segmentation experiments are carried out on remote sensing images, with the same Mean Shift algorithm implemented in MATLAB for comparison on the same imagery. The experimental results show that, while maintaining good segmentation quality, the Hadoop-based approach segments remote sensing images much faster than the single-machine MATLAB implementation, and the effectiveness of the segmentation is also improved.
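A sketch of the per-tile segmentation step that such a MapReduce job would distribute, using OpenCV's Mean Shift filtering; the Hadoop Streaming wiring is omitted, and the file names and filter radii are made up for illustration.

import cv2

def segment_tile(in_path, out_path, spatial_radius=21, color_radius=30):
    """Mean Shift filtering of one remote-sensing tile (8-bit BGR image), as a map-task body."""
    img = cv2.imread(in_path)                     # one tile handed to a map task
    if img is None:
        raise IOError("could not read " + in_path)
    seg = cv2.pyrMeanShiftFiltering(img, spatial_radius, color_radius)
    cv2.imwrite(out_path, seg)

# Hypothetical usage on a single tile:
# segment_tile("tile_0001.png", "tile_0001_seg.png")

Under Hadoop, each map task would receive a tile path (or the tile bytes) from the customized input format and emit the segmented tile for the reduce/merge step.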
Estimating sturgeon abundance in the Carolinas using side-scan sonar
Flowers, H. Jared; Hightower, Joseph E.
2015-01-01
Sturgeons (Acipenseridae) are one of the most threatened taxa worldwide, including species in North Carolina and South Carolina. Populations of Atlantic Sturgeon Acipenser oxyrinchus in the Carolinas have been significantly reduced from historical levels by a combination of intense fishing and habitat loss. There is a need for estimates of current abundance, to describe status, and for estimates of historical abundance in order to provide realistic recovery goals. In this study we used N-mixture and distance models with data acquired from side-scan sonar surveys to estimate abundance of sturgeon in six major sturgeon rivers in North Carolina and South Carolina. Estimated abundances of sturgeon greater than 1 m TL in the Carolina distinct population segment (DPS) were 2,031 using the count model and 1,912 via the distance model. The Pee Dee River had the highest overall abundance of any river at 1,944 (count model) or 1,823 (distance model). These estimates do not account for sturgeon less than 1 m TL or occurring in riverine reaches not surveyed or in marine waters. Comparing the two models, the N-mixture model produced similar estimates using less data than the distance model with only a slight reduction of estimated precision.
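The count-model side of the analysis can be illustrated with the binomial N-mixture likelihood that this class of model is built on (a generic sketch, not the authors' fitted model; the distance-sampling variant is not shown). The example counts, lambda, and detection probability are hypothetical.

import numpy as np
from scipy.stats import poisson, binom

def nmixture_loglik(counts, lam, p, n_max=300):
    """Log-likelihood of a binomial N-mixture model for a (sites x repeat visits) count matrix."""
    N = np.arange(n_max + 1)
    prior = poisson.pmf(N, lam)                    # prior on true abundance at each site
    ll = 0.0
    for site_counts in counts:
        lik = prior.copy()
        for y in site_counts:                      # y of N fish detected with probability p
            lik *= binom.pmf(y, N, p)
        ll += np.log(lik.sum())
    return ll

# Hypothetical example: 6 river reaches, 3 sonar passes each.
counts = np.array([[3, 2, 4], [0, 1, 0], [5, 6, 4], [2, 2, 1], [7, 5, 8], [1, 0, 2]])
print(nmixture_loglik(counts, lam=5.0, p=0.5))

Maximizing this likelihood over lambda and p (per river, with covariates as needed) yields the abundance estimates reported in the abstract's count model.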
Bed texture mapping in large rivers using recreational-grade sidescan sonar
Hamill, Daniel; Wheaton, Joseph M.; Buscombe, Daniel D.; Grams, Paul E.; Melis, Theodore S.
2017-01-01
The size-distribution and spatial organization of bed sediment, or bed ‘texture’, is a fundamental attribute of natural channels and is one important component of the physical habitat of aquatic ecosystems. ‘Recreational-grade’ sidescan sonar systems now offer the possibility of imaging, and subsequently quantifying, bed texture at high resolution with minimal cost or logistical effort. We are investigating the possibility of using sidescan sonar sensors on commercially available ‘fishfinders’ for within-channel bed-sediment characterization of mixed sand-gravel riverbeds in a debris-fan dominated canyon river. We analyzed repeat substrate mapping data collected before and after the November 2014 High Flow Experiment on the Colorado River in lower Marble Canyon, Arizona. The mapping analysis yielded sufficient spatial coverage (reach scale) and resolution (centimetric) to inform studies of the effects of changing bed substrates on salmonid spawning in large rivers. From this preliminary study, we argue that the approach could become a tractable and cost-effective tool for aquatic scientists to rapidly obtain bed texture maps without specialized knowledge of hydroacoustics. Bed texture maps can be used as a physical input for models relating ecosystem responses to hydrologic management.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rautman, Christopher Arthur; Lord, Anna Snider
2007-09-01
Downhole sonar surveys from the four active U.S. Strategic Petroleum Reserve sites have been modeled and used to generate a four-volume sonar atlas, showing the three-dimensional geometry of each cavern. This volume 4 focuses on the West Hackberry SPR site, located in southwestern Louisiana. Volumes 1, 2, and 3, respectively, present images for the Bayou Choctaw SPR site, Louisiana, the Big Hill SPR site, Texas, and the Bryan Mound SPR site, Texas. The atlas uses a consistent presentation format throughout. The basic geometric measurements provided by the down-cavern surveys have also been used to generate a number of geometric attributes, the values of which have been mapped onto the geometric form of each cavern using a color-shading scheme. The intent of the various geometrical attributes is to highlight deviations of the cavern shape from the idealized cylindrical form of a carefully leached underground storage cavern in salt. The atlas format does not allow interpretation of such geometric deviations and anomalies. However, significant geometric anomalies, not directly related to the leaching history of the cavern, may provide insight into the internal structure of the relevant salt dome.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rautman, Christopher Arthur; Lord, Anna Snider
2007-08-01
Downhole sonar surveys from the four active U.S. Strategic Petroleum Reserve sites have been modeled and used to generate a four-volume sonar atlas, showing the three-dimensional geometry of each cavern. This volume 2 focuses on the Big Hill SPR site, located in southeastern Texas. Volumes 1, 3, and 4, respectively, present images for the Bayou Choctaw SPR site, Louisiana, the Bryan Mound SPR site, Texas, and the West Hackberry SPR site, Louisiana. The atlas uses a consistent presentation format throughout. The basic geometric measurements provided by the down-cavern surveys have also been used to generate a number of geometric attributes, the values of which have been mapped onto the geometric form of each cavern using a color-shading scheme. The intent of the various geometrical attributes is to highlight deviations of the cavern shape from the idealized cylindrical form of a carefully leached underground storage cavern in salt. The atlas format does not allow interpretation of such geometric deviations and anomalies. However, significant geometric anomalies, not directly related to the leaching history of the cavern, may provide insight into the internal structure of the relevant salt dome.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rautman, Christopher Arthur; Lord, Anna Snider
2007-10-01
Downhole sonar surveys from the four active U.S. Strategic Petroleum Reserve sites have been modeled and used to generate a four-volume sonar atlas, showing the three-dimensional geometry of each cavern. This volume 1 focuses on the Bayou Choctaw SPR site, located in southern Louisiana. Volumes 2, 3, and 4, respectively, present images for the Big Hill SPR site, Texas, the Bryan Mound SPR site, Texas, and the West Hackberry SPR site, Louisiana. The atlas uses a consistent presentation format throughout. The basic geometric measurements provided by the down-cavern surveys have also been used to generate a number of geometric attributes, the values of which have been mapped onto the geometric form of each cavern using a color-shading scheme. The intent of the various geometrical attributes is to highlight deviations of the cavern shape from the idealized cylindrical form of a carefully leached underground storage cavern in salt. The atlas format does not allow interpretation of such geometric deviations and anomalies. However, significant geometric anomalies, not directly related to the leaching history of the cavern, may provide insight into the internal structure of the relevant salt dome.
Seafloor habitat mapping of the New York Bight incorporating sidescan sonar data
Lathrop, R.G.; Cole, M.; Senyk, N.; Butman, B.
2006-01-01
The efficacy of using sidescan sonar imagery, image classification algorithms, and geographic information system (GIS) techniques to characterize the seafloor bottom of the New York Bight was assessed. The resulting seafloor bottom type map was compared with fish trawl survey data to determine whether there were any discernible habitat associations. An unsupervised classification with 20 spectral classes was produced using the sidescan sonar imagery, bathymetry, and secondarily derived spatial heterogeneity to characterize homogeneous regions within the study area. The spectral classes, geologic interpretations of the study region, bathymetry, and a bottom landform index were used to produce a seafloor bottom type map of 9 different bottom types. Examination of sediment sample data by bottom type indicated that each bottom type class had a distinct composition of sediments. Analysis of adult summer flounder, Paralichthys dentatus, and adult silver hake, Merluccius bilinearis, presence/absence data from trawl surveys did not show evidence of strong associations between the species distributions and seafloor bottom type. However, the absence of strong habitat associations may be more attributable to the coarse scale and geographic uncertainty of the trawl sampling data than conclusive evidence that no habitat associations exist for these two species. © 2006 Elsevier Ltd. All rights reserved.
Modeling interface roughness scattering in a layered seabed for normal-incident chirp sonar signals.
Tang, Dajun; Hefner, Brian T
2012-04-01
Downward-looking sonar, such as the chirp sonar, is widely used as a sediment survey tool in shallow water environments. Inversion of geo-acoustic parameters from such sonar data requires the availability of forward models. An exact numerical model is developed to initiate the simulation of the acoustic field produced by such a sonar in the presence of multiple rough interfaces. The sediment layers are assumed to be fluid layers with non-intersecting rough interfaces.
2012-09-30
cavirostris) to MFA sonar signals. A secondary goal of conducting a killer whale playback that has not been preceded by a sonar playback (as in Tyack et al...2011) was also planned. OBJECTIVES This investigation set out to safely test responses of Ziphius to sonar signals and to determine the...exposure level required to elicit a response in a site where strandings have been associated with sonar exercises and where the whales seldom hear sonar
Kennedy, J L; Marston, T M; Lee, K; Lopes, J L; Lim, R
2014-01-01
A 22 m diameter circular rail, outfitted with a mobile sonar tower trolley, was designed, fabricated, instrumented with underwater acoustic transducers, and assembled on a 1.5 m thick sand layer at the bottom of a large freshwater pool to carry out sonar design and target scattering response studies. The mobile sonar tower translates along the rail via a drive motor controlled by customized LabVIEW software. The rail system is modular and assembly consists of separately deploying eight circular arc sections, measuring a nominal center radius of 11 m and 8.64 m arc length each, and having divers connect them together in the underwater environment. The system enables full scale measurements on targets of interest with 0.1° angular resolution over a complete 360° aperture, without disrupting target setup, and affording a level of control over target environment conditions and noise sources unachievable in standard field measurements. In recent use, the mobile cart carrying an instrumented sonar tower was translated along the rail in 720 equal position increments and acoustic backscatter data were acquired at each position. In addition, this system can accommodate both broadband monostatic and bistatic scattering measurements on targets of interest, allowing capture of target signature phenomena under diverse configurations to address current scientific and technical issues encountered in mine countermeasure and unexploded ordnance applications. In the work discussed here, the circular rail apparatus is used for acoustic backscatter testing, but this system also has the capacity to facilitate the acquisition of magnetic and optical sensor data from targets of interest. A brief description of the system design and operation will be presented along with preliminary processed results for data acquired from acoustic measurements conducted at the Naval Surface Warfare Center, Panama City Division Test Pond Facility. [Work Supported by the U.S. Office of Naval Research and The Strategic Environmental Research and Development Program.].
Morphodynamic Impacts of Hurricane Sandy on the Inner-shelf (Invited)
NASA Astrophysics Data System (ADS)
Trembanis, A. C.; Beaudoin, J. D.; DuVal, C.; Schmidt, V. E.; Mayer, L. A.
2013-12-01
Through the careful execution of precision high-resolution acoustic sonar surveys over the period of October 2012 through July 2013, we have obtained a unique set of high-resolution before and after storm measurements of seabed morphology and in situ hydrodynamic conditions (waves and currents) capturing the impact of the storm at an inner continental shelf field site known as the 'Redbird reef' (Raineault et al., 2013). Understanding the signature of this storm event is important for identifying the impacts of such events and for understanding the role that such events have in the transport of sediment and marine debris on the inner continental shelf. In order to understand and characterize the ripple dynamics and scour processes in an energetic, heterogeneous inner-shelf setting, a series of high-resolution geoacoustic surveys were conducted before and after Hurricane Sandy. Our overall goal is to improve our understanding of bedform dynamics and spatio-temporal length scales and defect densities through the application of a recently developed fingerprint algorithm technique (Skarke and Trembanis, 2011). Utilizing high-resolution swath sonar collected by an AUV and from surface vessel multibeam sonar, our study focuses both on bedforms in the vicinity of manmade seabed objects (e.g. shipwrecks and subway cars) and dynamic natural ripples on the inner-shelf in energetic coastal settings with application to critical military operations such as mine countermeasures. Seafloor mapping surveys were conducted both with a ship-mounted multibeam echosounder (200 kHz and 400 kHz) and an Autonomous Underwater Vehicle (AUV) configured with high-resolution side-scan sonar (900 and 1800 kHz) and a phase measuring bathymetric sonar (500 kHz). These geoacoustic surveys were further augmented with data collected by in situ instruments placed on the seabed that recorded measurements of waves and currents at the site before, during, and after the storm. [Figure: multibeam echosounder map of the Redbird reef site after Hurricane Sandy; image resolution is 25 cm/pixel.]
NASA Astrophysics Data System (ADS)
Kennedy, J. L.; Marston, T. M.; Lee, K.; Lopes, J. L.; Lim, R.
2014-01-01
A 22 m diameter circular rail, outfitted with a mobile sonar tower trolley, was designed, fabricated, instrumented with underwater acoustic transducers, and assembled on a 1.5 m thick sand layer at the bottom of a large freshwater pool to carry out sonar design and target scattering response studies. The mobile sonar tower translates along the rail via a drive motor controlled by customized LabVIEW software. The rail system is modular and assembly consists of separately deploying eight circular arc sections, measuring a nominal center radius of 11 m and 8.64 m arc length each, and having divers connect them together in the underwater environment. The system enables full scale measurements on targets of interest with 0.1° angular resolution over a complete 360° aperture, without disrupting target setup, and affording a level of control over target environment conditions and noise sources unachievable in standard field measurements. In recent use, the mobile cart carrying an instrumented sonar tower was translated along the rail in 720 equal position increments and acoustic backscatter data were acquired at each position. In addition, this system can accommodate both broadband monostatic and bistatic scattering measurements on targets of interest, allowing capture of target signature phenomena under diverse configurations to address current scientific and technical issues encountered in mine countermeasure and unexploded ordnance applications. In the work discussed here, the circular rail apparatus is used for acoustic backscatter testing, but this system also has the capacity to facilitate the acquisition of magnetic and optical sensor data from targets of interest. A brief description of the system design and operation will be presented along with preliminary processed results for data acquired from acoustic measurements conducted at the Naval Surface Warfare Center, Panama City Division Test Pond Facility. [Work Supported by the U.S. Office of Naval Research and The Strategic Environmental Research and Development Program.
Xu, Guangyu; Jackson, Darrell R; Bemis, Karen G
2017-03-01
The relative importance of suspended particles and turbulence as backscattering mechanisms within a hydrothermal plume located on the Endeavour Segment of the Juan de Fuca Ridge is determined by comparing acoustic backscatter measured by the Cabled Observatory Vent Imaging Sonar (COVIS) with model calculations based on in situ samples of particles suspended within the plume. Analysis of plume samples yields estimates of the mass concentration and size distribution of particles, which are used to quantify their contribution to acoustic backscatter. The result shows negligible effects of plume particles on acoustic backscatter within the initial 10-m rise of the plume. This suggests turbulence-induced temperature fluctuations are the dominant backscattering mechanism within lower levels of the plume. Furthermore, inversion of the observed acoustic backscatter for the standard deviation of temperature within the plume yields a reasonable match with the in situ temperature measurements made by a conductivity-temperature-depth instrument. This finding shows that turbulence-induced temperature fluctuations are the dominant backscattering mechanism and demonstrates the potential of using acoustic backscatter as a remote-sensing tool to measure the temperature variability within a hydrothermal plume.
Acoustic measurement method of the volume flux of a seafloor hydrothermal plume
NASA Astrophysics Data System (ADS)
Xu, G.; Jackson, D. R.; Bemis, K. G.; Rona, P. A.
2011-12-01
Measuring fluxes (volume, chemical, heat, etc.) of the deep sea hydrothermal vents has been a crucial but challenging task faced by the scientific community since the discovery of the vent systems. However, the great depths and complexities of the hydrothermal vents make traditional sampling methods laborious and almost daunting missions. Furthermore, the samples, in most cases both sparse in space and sporadic in time, are hardly enough to provide a result with moderate uncertainty. In September 2010, our Cabled Observatory Vent Imaging Sonar System (COVIS, http://vizlab.rutgers.edu/AcoustImag/covis.html) was connected to the Neptune Canada underwater ocean observatory network (http://www.neptunecanada.ca) at the Main Endeavour vent field on the Endeavour segment of the Juan de Fuca Ridge. During the experiment, the COVIS system produced 3D images of the buoyant plume discharged from the vent complex Grotto by measuring the back-scattering intensity of the acoustic signal. Building on the methodology developed in our previous work, the vertical flow velocity of the plume is estimated from the Doppler shift of the acoustic signal using geometric correction to compensate for the ambient horizontal currents. A Gaussian distribution curve is fitted to the horizontal back-scattering intensity profile to determine the back-scattering intensity at the boundary of the plume. Such a boundary value is used as the threshold in a window function for separating the plume from background signal. Finally, the volume flux is obtained by integrating the resulting 2D vertical velocity profile over the horizontal cross-section of the plume. In this presentation, we discuss preliminary results from the COVIS experiment. In addition, several alternative approaches are applied to determination of the accuracy of the estimated plume vertical velocity in the absence of direct measurements. First, the results from our previous experiment (conducted in 2000 at the same vent complex using a similar methodology but a different sonar system) provide references to the consistency of the methodology. Second, the vertical flow rate measurement made in 2007 at an adjacent vent complex (Dante) using a different acoustic method (acoustic scintillation) can serve as a first order estimation of the plume vertical velocity. Third, another first order estimation can be obtained by combining the plume bending angle with the horizontal current measured by a current meter array deployed to the north of the vent field. Finally, statistical techniques are used to quantify the errors due to the ambient noises, inherent uncertainties of the methodology, and the fluctuation of the plume structure.
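The flux calculation described above can be sketched as follows, under assumed arrays rather than COVIS data: a Gaussian is fitted to the horizontal backscatter profile, its amplitude sets a boundary threshold used to window the plume, and the vertical velocity is integrated over the enclosed cross-section. The 5% edge fraction, the grid, and the uniform velocity field are illustrative choices.

import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, x0, s):
    return a * np.exp(-(x - x0) ** 2 / (2.0 * s ** 2))

def plume_volume_flux(x, intensity, w, cell_area, edge_fraction=0.05):
    """Fit a Gaussian to the horizontal backscatter profile, use it to window the plume,
    and integrate vertical velocity w over the enclosed cross-section (flux in m^3/s)."""
    profile = intensity.mean(axis=0)                          # horizontal back-scatter profile
    p0 = [profile.max(), x[np.argmax(profile)], (x[-1] - x[0]) / 10.0]
    (a, x0, s), _ = curve_fit(gaussian, x, profile, p0=p0)
    threshold = edge_fraction * a                             # assumed intensity level at the plume edge
    mask = intensity > threshold                              # window separating plume from background
    return float(np.sum(w[mask]) * cell_area)

# Hypothetical cross-section: 40 x 60 cells of 0.25 m^2 each, uniform 0.3 m/s upwelling.
x = np.linspace(-15.0, 15.0, 60)
intensity = np.exp(-(x[None, :] ** 2) / 50.0) * np.ones((40, 1))
w = 0.3 * np.ones((40, 60))
print(plume_volume_flux(x, intensity, w, cell_area=0.25))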
2012-06-01
From RADAR and SONAR, rocket propulsion, and the atomic bomb in World War II to the high-tech drones, satellite imagery, surgically precise weapons... control from the four connectors shown in Figure scanner, preamplifier, step motor, and the bottom scanner. The connectors also electrically ground
Measuring the Speed of Sound in Water
ERIC Educational Resources Information Center
Ward, Richard J.
2015-01-01
This paper begins with an early measurement of the speed of sound in water. A historical overview of the consequent development of SONAR and medical imaging is given. A method of measuring the speed suitable for demonstration to year 10 students is described in detail, and an explanation of its systematic error examined.
Multiple Frequency Parametric Sonar
2015-09-28
Multiple Frequency Parametric Sonar. Statement of Government Interest [0001]: The invention described herein may be manufactured and... a method for increasing the bandwidth of a parametric sonar system by using multiple primary frequencies rather than only two primary frequencies... (2) Description of Prior Art [0004]: Parametric sonar generates narrow beams at low frequencies by projecting sound at two distinct primary
50 CFR 216.270 - Specified activity and specified geographical region.
Code of Federal Regulations, 2012 CFR
2012-10-01
... the following mid-frequency active sonar (MFAS) and high frequency active sonar (HFAS) sources, or...) (estimated amounts below): (i) AN/SQS-53 (hull-mounted active sonar)—up to 9885 hours over the course of 5 years (an average of 1977 hours per year) (ii) AN/SQS-56 (hull-mounted active sonar)—up to 2470 hours...
50 CFR 216.270 - Specified activity and specified geographical region.
Code of Federal Regulations, 2013 CFR
2013-10-01
... the following mid-frequency active sonar (MFAS) and high frequency active sonar (HFAS) sources, or...) (estimated amounts below): (i) AN/SQS-53 (hull-mounted active sonar)—up to 9885 hours over the course of 5 years (an average of 1977 hours per year) (ii) AN/SQS-56 (hull-mounted active sonar)—up to 2470 hours...
Highly Directive Array Aperture
2013-02-13
generally to sonar arrays with acoustic discontinuities, and, more particularly, to increasing the directivity gain of a sonar array aperture by... sought by sonar designers. [0005] The following patents and publication show various types of acoustic arrays with coatings and discontinuities that... discloses a sonar array that uses multiple acoustically transparent layers. One layer is a linear array of acoustic sensors that is substantially
A Directional Dogbone Flextensional Sonar Transducer
2010-10-01
A Directional Dogbone Flextensional Sonar Transducer. Stephen C. Butler, Naval Undersea Warfare Center, Newport, RI 02841. Abstract: In order to... transmit energy in one direction, sonar flextensional transducers are combined into arrays of elements that are spaced a 1/4 wavelength apart. The... electroacoustic performance and compared with experimental data. Keywords: transducer, flextensional, sonar, piezoelectric, directional, cardioid
Experimental Comparison of High Duty Cycle and Pulsed Active Sonars in a Littoral Environment
2014-09-30
A series of metrics (e.g., number of detections, matched-filter gain, false alarm rates, track purity, track latency, etc.) will be used to quantify... for QA. These data were used to generate spectrograms, ambient noise and reverberation decay plots, and clutter images, all of which helped... Perhaps the most useful of these for QA were the clutter images, which provided a rapid visual assessment to estimate SNR, identify at what range the
Structural Acoustic UXO Detection and Identification in Marine Environments
2016-05-01
Acronyms: BOSS, Buried Object Scanning Sonar; DVL, Doppler Velocity Log; EW, East/West; IMU, Inertial Measurement Unit; NRL, Naval Research Laboratory; NSWC-PCD... Inertial Measurement Unit (IMU) to time-delay and coherently sum matched-filtered phase histories from subsurface focal points over a large number of... Measurement Unit (IMU) systems. In our imaging algorithm, the 2D depth image of a target, i.e., one mapped over x and z or y and z, presents the
Global Multi-Resolution Topography (GMRT) Synthesis - Version 2.0
NASA Astrophysics Data System (ADS)
Ferrini, V.; Coplan, J.; Carbotte, S. M.; Ryan, W. B.; O'Hara, S.; Morton, J. J.
2010-12-01
The detailed morphology of the global ocean floor is poorly known, with most areas mapped only at low resolution using satellite-based measurements. Ship-based sonars provide data at resolution sufficient to quantify seafloor features related to the active processes of erosion, sediment flow, volcanism, and faulting. To date, these data have been collected in a small fraction of the global ocean (<10%). The Global Multi-Resolution Topography (GMRT) synthesis makes use of sonar data collected by scientists and institutions worldwide, merging them into a single continuously updated compilation of high-resolution seafloor topography. Several applications, including GeoMapApp (http://www.geomapapp.org) and Virtual Ocean (http://www.virtualocean.org), make use of the GMRT Synthesis and provide direct access to images and underlying gridded data. Source multibeam files included in the compilation can also be accessed through custom functionality in GeoMapApp. The GMRT Synthesis began in 1992 as the Ridge Multibeam Synthesis. It was subsequently expanded to include bathymetry data from the Southern Ocean, and now includes data from throughout the global oceans. Our design strategy has been to make data available at the full native resolution of shipboard sonar systems, which historically has been ~100 m in the deep sea (Ryan et al., 2009). A new release of the GMRT Synthesis in Fall of 2010 includes several significant improvements over our initial strategy. In addition to increasing the number of cruises included in the compilation by over 25%, we have developed a new protocol for handling multibeam source data, which has improved the overall quality of the compilation. The new tileset also includes a discrete layer of sonar data in the public domain that are gridded to the full resolution of the sonar system, with data gridded at 25 m in some areas. This discrete layer of sonar data has been provided to Google for integration into Google’s default ocean base map. NOAA coastal grids and numerous grids contributed by the international science community are also integrated into the GMRT Synthesis. Finally, terrestrial elevation data from NASA’s ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) global DEM and the USGS National Elevation Dataset have been included in the synthesis, providing resolution of up to 10 m in some areas of the US.
Timing matters: sonar call groups facilitate target localization in bats.
Kothari, Ninad B; Wohlgemuth, Melville J; Hulgard, Katrine; Surlykke, Annemarie; Moss, Cynthia F
2014-01-01
To successfully negotiate a cluttered environment, an echolocating bat must control the timing of motor behaviors in response to dynamic sensory information. Here we detail the big brown bat's adaptive temporal control over sonar call production for tracking prey, moving predictably or unpredictably, under different experimental conditions. We studied the adaptive control of vocal-motor behaviors in free-flying big brown bats, Eptesicus fuscus, as they captured tethered and free-flying insects, in open and cluttered environments. We also studied adaptive sonar behavior in bats trained to track moving targets from a resting position. In each of these experiments, bats adjusted the features of their calls to separate target and clutter. Under many task conditions, flying bats produced prominent sonar sound groups identified as clusters of echolocation pulses with relatively stable intervals, surrounded by longer pulse intervals. In experiments where bats tracked approaching targets from a resting position, bats also produced sonar sound groups, and the prevalence of these sonar sound groups increased when motion of the target was unpredictable. We hypothesize that sonar sound groups produced during flight, and the sonar call doublets produced by a bat tracking a target from a resting position, help the animal resolve dynamic target location and represent the echo scene in greater detail. Collectively, our data reveal adaptive temporal control over sonar call production that allows the bat to negotiate a complex and dynamic environment.
Timing matters: sonar call groups facilitate target localization in bats
Kothari, Ninad B.; Wohlgemuth, Melville J.; Hulgard, Katrine; Surlykke, Annemarie; Moss, Cynthia F.
2014-01-01
To successfully negotiate a cluttered environment, an echolocating bat must control the timing of motor behaviors in response to dynamic sensory information. Here we detail the big brown bat's adaptive temporal control over sonar call production for tracking prey, moving predictably or unpredictably, under different experimental conditions. We studied the adaptive control of vocal-motor behaviors in free-flying big brown bats, Eptesicus fuscus, as they captured tethered and free-flying insects, in open and cluttered environments. We also studied adaptive sonar behavior in bats trained to track moving targets from a resting position. In each of these experiments, bats adjusted the features of their calls to separate target and clutter. Under many task conditions, flying bats produced prominent sonar sound groups identified as clusters of echolocation pulses with relatively stable intervals, surrounded by longer pulse intervals. In experiments where bats tracked approaching targets from a resting position, bats also produced sonar sound groups, and the prevalence of these sonar sound groups increased when motion of the target was unpredictable. We hypothesize that sonar sound groups produced during flight, and the sonar call doublets produced by a bat tracking a target from a resting position, help the animal resolve dynamic target location and represent the echo scene in greater detail. Collectively, our data reveal adaptive temporal control over sonar call production that allows the bat to negotiate a complex and dynamic environment. PMID:24860509
50 CFR 218.110 - Specified activity and specified geographical area.
Code of Federal Regulations, 2012 CFR
2012-10-01
... sonar (MFAS) and high frequency active sonar (HFAS) sources, or similar sources, for Navy training...-53 (hull-mounted active sonar)—up to 215 hours over the course of 5 years (an average of 43 hours per year); (ii) AN/SQS-56 (hull-mounted active sonar)—up to 325 hours over the course of 5 years (an...
50 CFR 218.100 - Specified activity and specified geographical area.
Code of Federal Regulations, 2013 CFR
2013-10-01
... active sonar (MFAS) and high frequency active sonar (HFAS) sources, or similar sources, for Navy training...-53 (hull-mounted active sonar)—up to 10865 hours over the course of 5 years (an average of 2173 hours per year); (ii) AN/SQS-56 (hull-mounted active sonar)-up to 705 hours over the course of 5 years (an...
50 CFR 216.170 - Specified activity and specified geographical region.
Code of Federal Regulations, 2010 CFR
2010-10-01
...-frequency active sonar (MFAS) and high frequency active sonar (HFAS) sources for U.S. Navy anti-submarine warfare (ASW) training in the amounts indicated below (±10 percent): (i) AN/SQS-53 (hull-mounted sonar)—up...-mounted sonar)—up to 1915 hours over the course of 5 years (an average of 383 hours per year) (iii) AN/AQS...
50 CFR 218.100 - Specified activity and specified geographical area.
Code of Federal Regulations, 2014 CFR
2014-10-01
... active sonar (MFAS) and high frequency active sonar (HFAS) sources, or similar sources, for Navy training...-53 (hull-mounted active sonar)—up to 10865 hours over the course of 5 years (an average of 2173 hours per year); (ii) AN/SQS-56 (hull-mounted active sonar)-up to 705 hours over the course of 5 years (an...
50 CFR 218.110 - Specified activity and specified geographical area.
Code of Federal Regulations, 2014 CFR
2014-10-01
... sonar (MFAS) and high frequency active sonar (HFAS) sources, or similar sources, for Navy training...-53 (hull-mounted active sonar)—up to 215 hours over the course of 5 years (an average of 43 hours per year); (ii) AN/SQS-56 (hull-mounted active sonar)—up to 325 hours over the course of 5 years (an...
50 CFR 218.100 - Specified activity and specified geographical area.
Code of Federal Regulations, 2012 CFR
2012-10-01
... active sonar (MFAS) and high frequency active sonar (HFAS) sources, or similar sources, for Navy training...-53 (hull-mounted active sonar)—up to 10865 hours over the course of 5 years (an average of 2173 hours per year); (ii) AN/SQS-56 (hull-mounted active sonar)-up to 705 hours over the course of 5 years (an...
50 CFR 218.120 - Specified activity and geographical area.
Code of Federal Regulations, 2014 CFR
2014-10-01
... the following mid-frequency active sonar (MFAS) sources, high-frequency active sonar (HFAS) sources...-mounted active sonar)—up to 2,890 hours over the course of 5 years (an average of 578 hours per year); (ii) AN/SQS-56 (hull-mounted active sonar)—up to 260 hours over the course of 5 years (an average of 52...
50 CFR 218.100 - Specified activity and specified geographical area.
Code of Federal Regulations, 2011 CFR
2011-10-01
...: (1) The use of the following mid-frequency active sonar (MFAS) and high frequency active sonar (HFAS..., testing and evaluation (RDT&E): (i) AN/SQS-53 (hull-mounted active sonar)—up to 10865 hours over the course of 5 years (an average of 2173 hours per year); (ii) AN/SQS-56 (hull-mounted active sonar)-up to...
50 CFR 218.110 - Specified activity and specified geographical area.
Code of Federal Regulations, 2013 CFR
2013-10-01
... sonar (MFAS) and high frequency active sonar (HFAS) sources, or similar sources, for Navy training...-53 (hull-mounted active sonar)—up to 215 hours over the course of 5 years (an average of 43 hours per year); (ii) AN/SQS-56 (hull-mounted active sonar)—up to 325 hours over the course of 5 years (an...
50 CFR 218.110 - Specified activity and specified geographical area.
Code of Federal Regulations, 2011 CFR
2011-10-01
... the following mid-frequency active sonar (MFAS) sources, high frequency active sonar (HFAS) sources... below: (i) AN/SQS-53 (hull-mounted active sonar)—up to 215 hours over the course of 5 years (an average of 43 hours per year); (ii) AN/SQS-56 (hull-mounted active sonar)—up to 325 hours over the course of...
50 CFR 218.120 - Specified activity and geographical area.
Code of Federal Regulations, 2013 CFR
2013-10-01
... the following mid-frequency active sonar (MFAS) sources, high-frequency active sonar (HFAS) sources...-mounted active sonar)—up to 2,890 hours over the course of 5 years (an average of 578 hours per year); (ii) AN/SQS-56 (hull-mounted active sonar)—up to 260 hours over the course of 5 years (an average of 52...
50 CFR 218.120 - Specified activity and geographical area.
Code of Federal Regulations, 2012 CFR
2012-10-01
... the following mid-frequency active sonar (MFAS) sources, high-frequency active sonar (HFAS) sources...-mounted active sonar)—up to 2,890 hours over the course of 5 years (an average of 578 hours per year); (ii) AN/SQS-56 (hull-mounted active sonar)—up to 260 hours over the course of 5 years (an average of 52...
50 CFR 218.100 - Specified activity and specified geographical area.
Code of Federal Regulations, 2010 CFR
2010-10-01
...: (1) The use of the following mid-frequency active sonar (MFAS) and high frequency active sonar (HFAS..., testing and evaluation (RDT&E): (i) AN/SQS-53 (hull-mounted active sonar)—up to 10865 hours over the course of 5 years (an average of 2173 hours per year); (ii) AN/SQS-56 (hull-mounted active sonar)-up to...
50 CFR 218.120 - Specified activity and geographical area.
Code of Federal Regulations, 2011 CFR
2011-10-01
... the following mid-frequency active sonar (MFAS) sources, high-frequency active sonar (HFAS) sources...-mounted active sonar)—up to 2,890 hours over the course of 5 years (an average of 578 hours per year); (ii) AN/SQS-56 (hull-mounted active sonar)—up to 260 hours over the course of 5 years (an average of 52...
Sonar beam dynamics in leaf-nosed bats
Linnenschmidt, Meike; Wiegrebe, Lutz
2016-01-01
Ultrasonic emissions of bats are directional and delimit the echo-acoustic space. Directionality is quantified by the aperture of the sonar beam. Recent work has shown that bats often widen their sonar beam when approaching movable prey or sharpen their sonar beam when navigating through cluttered habitats. Here we report how nose-emitting bats, Phyllostomus discolor, adjust their sonar beam to object distance. First, we show that the height and width of the bats' sonar beam, as imprinted on a parabolic 45-channel microphone array, varies even within each animal and this variation is unrelated to changes in call level or spectral content. Second, we show that these animals are able to systematically decrease height and width of their sonar beam while focusing on the approaching object. Thus it appears that sonar beam sharpening is a further, facultative means of reducing search volume, likely to be employed by stationary animals when the object position is close and unambiguous. As only half of our individuals sharpened their beam onto the approaching object, we suggest that this strategy is facultative, under voluntary control, and that beam formation is likely mediated by muscular control of the acoustic aperture of the bats' nose leaf. PMID:27384865
Dolphin sonar detection and discrimination capabilities
NASA Astrophysics Data System (ADS)
Au, Whitlow W. L.
2004-05-01
Dolphins have a very sophisticated short-range sonar that surpasses all technological sonar in its capabilities to perform complex target discrimination and recognition tasks. The system that the U.S. Navy has for detecting mines buried under ocean sediment is one that uses Atlantic bottlenose dolphins. However, close examination of the dolphin sonar system will reveal that the dolphin acoustic hardware is fairly ordinary and not very special. The transmitted signals have peak-to-peak amplitudes as high as 225-228 dB re 1 μPa which translates to an rms value of approximately 210-213 dB. The transmit beamwidth is fairly broad at about 10° in both the horizontal and vertical planes and the receiving beamwidth is slightly broader by several degrees. The auditory filters are not very narrow with Q values of about 8.4. Despite these fairly ordinary features of the acoustic system, these animals still demonstrate very unusual and astonishing capabilities. Some of the capabilities of the dolphin sonar system will be presented and the reasons for their keen sonar capabilities will be discussed. Important features of their sonar include the broadband clicklike signals used, adaptive sonar search capabilities and large dynamic range of its auditory system.
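The peak-to-peak versus rms figures quoted above differ by roughly 15 dB; a small sketch of that arithmetic (with the sinusoid case shown for comparison) follows. The 15 dB offset comes from the abstract's numbers, not from a derivation here.

```python
import math

# Peak-to-peak source levels of 225-228 dB re 1 uPa versus rms levels of
# ~210-213 dB imply an offset of roughly 15 dB.  For a pure sinusoid the
# offset would be 20*log10(2*sqrt(2)) ~= 9 dB; the larger offset reflects the
# high crest factor of short, click-like signals (an empirical value).
pp_db = 228.0
offset_sinusoid = 20 * math.log10(2 * math.sqrt(2))   # ~9.03 dB
offset_click = 15.0                                     # approximate, per abstract
print(f"rms (sinusoid assumption): {pp_db - offset_sinusoid:.1f} dB re 1 uPa")
print(f"rms (click-like, ~15 dB):  {pp_db - offset_click:.1f} dB re 1 uPa")
```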
DeRuiter, Stacy L; Southall, Brandon L; Calambokidis, John; Zimmer, Walter M X; Sadykova, Dinara; Falcone, Erin A; Friedlaender, Ari S; Joseph, John E; Moretti, David; Schorr, Gregory S; Thomas, Len; Tyack, Peter L
2013-08-23
Most marine mammal strandings coincident with naval sonar exercises have involved Cuvier's beaked whales (Ziphius cavirostris). We recorded animal movement and acoustic data on two tagged Ziphius and obtained the first direct measurements of behavioural responses of this species to mid-frequency active (MFA) sonar signals. Each recording included a 30-min playback (one 1.6-s simulated MFA sonar signal repeated every 25 s); one whale was also incidentally exposed to MFA sonar from distant naval exercises. Whales responded strongly to playbacks at low received levels (RLs; 89-127 dB re 1 µPa): after ceasing normal fluking and echolocation, they swam rapidly, silently away, extending both dive duration and subsequent non-foraging interval. Distant sonar exercises (78-106 dB re 1 µPa) did not elicit such responses, suggesting that context may moderate reactions. The observed responses to playback occurred at RLs well below current regulatory thresholds; equivalent responses to operational sonars could elevate stranding risk and reduce foraging efficiency.
Radar Sensing for Intelligent Vehicles in Urban Environments
Reina, Giulio; Johnson, David; Underwood, James
2015-01-01
Radar overcomes the shortcomings of laser, stereovision, and sonar because it can operate successfully in dusty, foggy, blizzard-blinding, and poorly lit scenarios. This paper presents a novel method for ground and obstacle segmentation based on radar sensing. The algorithm operates directly in the sensor frame, without the need for a separate synchronised navigation source, calibration parameters describing the location of the radar in the vehicle frame, or the geometric restrictions made in the previous main method in the field. Experimental results are presented in various urban scenarios to validate this approach, showing its potential applicability for advanced driving assistance systems and autonomous vehicle operations. PMID:26102493
Brain MR image segmentation using NAMS in pseudo-color.
Li, Hua; Chen, Chuanbo; Fang, Shaohong; Zhao, Shengrong
2017-12-01
Image segmentation plays a crucial role in various biomedical applications. In general, the segmentation of brain Magnetic Resonance (MR) images is mainly used to represent the image with several homogeneous regions instead of pixels for surgical analysis and planning. This paper proposes a new approach for segmenting MR brain images by using pseudo-color based segmentation with the Non-symmetry and Anti-packing Model with Squares (NAMS). First of all, the NAMS model is presented. The model can represent the image with sub-patterns to keep the image content and largely reduce the data redundancy. Second, the key idea is to convert the original gray-scale brain MR image into a pseudo-colored image and then segment the pseudo-colored image with the NAMS model. The pseudo-colored image can enhance the color contrast between different tissues in brain MR images, which can improve the precision of segmentation as well as direct visual perceptual distinction. Experimental results indicate that, compared with other brain MR image segmentation methods, the proposed NAMS-based pseudo-color segmentation method performs better, not only segmenting more precisely but also saving storage.
Code of Federal Regulations, 2012 CFR
2012-07-01
... will consist of six inert drill mines each 16 inches in diameter and 5 feet long and one concrete sonar target 48 inches in diameter and 48 inches high located within the designated area. The sonar target will... sonar. Neither variable depth sonar devices or mechanical minesweeping operations will be utilized in...
A man-made object detection for underwater TV
NASA Astrophysics Data System (ADS)
Cheng, Binbin; Wang, Wenwu; Chen, Yao
2018-03-01
Completing an automatic search for objects underwater is a challenging task. Usually a forward-looking sonar is used to find the target, initial identification of the target is then completed by side-scan sonar, and confirmation of the target is finally accomplished by underwater TV. This paper presents an efficient method for automatic extraction of man-made sensitive targets in underwater TV. First, the underwater TV image is simplified by taking full advantage of prior knowledge of the target and the background; then template matching is used for target detection; finally, the target is confirmed by extracting parallel lines on the target contour. The algorithm is formulated for real-time execution on limited-memory commercial off-the-shelf platforms and is capable of detecting objects in underwater TV.
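A minimal sketch of the detection-then-confirmation idea, using OpenCV template matching followed by a parallel-line check with the probabilistic Hough transform; the thresholds, the 0.6 match score, and the 5-degree parallelism tolerance are illustrative choices, not the authors' parameters.

```python
import cv2
import numpy as np

def detect_manmade_object(frame_gray, template):
    """Detect a candidate target by template matching, then confirm it by
    looking for roughly parallel straight edges inside the matched window."""
    # 1) Template matching over the (pre-simplified) underwater TV frame.
    res = cv2.matchTemplate(frame_gray, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(res)
    h, w = template.shape
    roi = frame_gray[top_left[1]:top_left[1] + h, top_left[0]:top_left[0] + w]

    # 2) Confirm by extracting straight line segments on the target contour.
    edges = cv2.Canny(roi, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                            minLineLength=20, maxLineGap=5)
    if lines is None:
        return None
    angles = [np.arctan2(y2 - y1, x2 - x1) for x1, y1, x2, y2 in lines[:, 0]]
    # Man-made objects tend to produce pairs of near-parallel lines.
    parallel = any(abs(a - b) < np.deg2rad(5)
                   for i, a in enumerate(angles) for b in angles[i + 1:])
    return (top_left, score) if parallel and score > 0.6 else None
```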
NASA Technical Reports Server (NTRS)
Stoller, Ray A.; Wedding, Donald K.; Friedman, Peter S.
1993-01-01
A development status evaluation is presented for gas plasma display technology, noting how tradeoffs among the parameters of size, resolution, speed, portability, color, and image quality can yield cost-effective solutions for medical imaging, CAD, teleconferencing, multimedia, and both civil and military applications. Attention is given to plasma-based large-area displays' suitability for radar, sonar, and IR, due to their lack of EM susceptibility. Both monochrome and color displays are available.
CoBOP: Electro-Optic Identification Laser Line Scan Sensors
1998-01-01
The goal of the Electro-Optic Identification Sensors Project [1] is to develop and demonstrate high-resolution underwater electro-optic (EO) imaging sensors, and associated image processing/analysis methods, for rapid visual identification of mines and mine-like contacts (MLCs). Identification of MLCs is a pressing Fleet need. During MCM operations, sonar contacts are classified as mine-like if they are sufficiently similar to signatures of mines. Each contact classified as mine-like must be identified as a mine or not a mine. During MCM operations in littoral areas,
50 CFR 216.187 - Applications for Letters of Authorization.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Array Sensor System Low Frequency Active (SURTASS LFA sonar) Sonar § 216.187 Applications for Letters of... scheduled to begin conducting SURTASS LFA sonar operations or the previous Letter of Authorization is...
50 CFR 216.187 - Applications for Letters of Authorization.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Array Sensor System Low Frequency Active (SURTASS LFA sonar) Sonar § 216.187 Applications for Letters of... scheduled to begin conducting SURTASS LFA sonar operations or the previous Letter of Authorization is...
Bats coordinate sonar and flight behavior as they forage in open and cluttered environments.
Falk, Benjamin; Jakobsen, Lasse; Surlykke, Annemarie; Moss, Cynthia F
2014-12-15
Echolocating bats use active sensing as they emit sounds and listen to the returning echoes to probe their environment for navigation, obstacle avoidance and pursuit of prey. The sensing behavior of bats includes the planning of 3D spatial trajectory paths, which are guided by echo information. In this study, we examined the relationship between active sonar sampling and flight motor output as bats changed environments from open space to an artificial forest in a laboratory flight room. Using high-speed video and audio recordings, we reconstructed and analyzed 3D flight trajectories, sonar beam aim and acoustic sonar emission patterns as the bats captured prey. We found that big brown bats adjusted their sonar call structure, temporal patterning and flight speed in response to environmental change. The sonar beam aim of the bats predicted the flight turn rate in both the open room and the forest. However, the relationship between sonar beam aim and turn rate changed in the forest during the final stage of prey pursuit, during which the bat made shallower turns. We found flight stereotypy developed over multiple days in the forest, but did not find evidence for a reduction in active sonar sampling with experience. The temporal patterning of sonar sound groups was related to path planning around obstacles in the forest. Together, these results contribute to our understanding of how bats coordinate echolocation and flight behavior to represent and navigate their environment. © 2014. Published by The Company of Biologists Ltd.
Xia, Yong; Eberl, Stefan; Wen, Lingfeng; Fulham, Michael; Feng, David Dagan
2012-01-01
Dual medical imaging modalities, such as PET-CT, are now a routine component of clinical practice. Medical image segmentation methods, however, have generally only been applied to single modality images. In this paper, we propose the dual-modality image segmentation model to segment brain PET-CT images into gray matter, white matter and cerebrospinal fluid. This model converts PET-CT image segmentation into an optimization process controlled simultaneously by PET and CT voxel values and spatial constraints. It is innovative in the creation and application of the modality discriminatory power (MDP) coefficient as a weighting scheme to adaptively combine the functional (PET) and anatomical (CT) information on a voxel-by-voxel basis. Our approach relies upon allowing the modality with higher discriminatory power to play a more important role in the segmentation process. We compared the proposed approach to three other image segmentation strategies, including PET-only based segmentation, combination of the results of independent PET image segmentation and CT image segmentation, and simultaneous segmentation of joint PET and CT images without an adaptive weighting scheme. Our results in 21 clinical studies showed that our approach provides the most accurate and reliable segmentation for brain PET-CT images. Copyright © 2011 Elsevier Ltd. All rights reserved.
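The voxel-wise weighting idea can be sketched as a weighted nearest-centroid assignment, with a weight map standing in for the modality discriminatory power coefficient. The centroid values, the fixed weights, and the omission of spatial constraints are simplifications for illustration, not the authors' full optimization.

```python
import numpy as np

def dual_modality_labels(pet, ct, centroids_pet, centroids_ct, weight):
    """Assign each voxel to the class whose PET/CT centroids are closest under
    a voxel-wise weighted distance.

    `weight` (same shape as the images, values in [0, 1]) plays the role of a
    modality weighting: 1 trusts PET only, 0 trusts CT only.  Spatial
    smoothness terms and estimation of the weights are omitted in this sketch.
    """
    pet = np.asarray(pet, float)[..., None]           # (..., 1)
    ct = np.asarray(ct, float)[..., None]
    w = np.asarray(weight, float)[..., None]
    d_pet = (pet - np.asarray(centroids_pet)) ** 2    # (..., n_classes)
    d_ct = (ct - np.asarray(centroids_ct)) ** 2
    cost = w * d_pet + (1.0 - w) * d_ct
    return cost.argmin(axis=-1)

labels = dual_modality_labels(pet=[[0.2, 0.8]], ct=[[30.0, 45.0]],
                              centroids_pet=[0.1, 0.9], centroids_ct=[25.0, 50.0],
                              weight=[[0.7, 0.3]])
print(labels)    # [[0 1]]
```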
Delphinid behavioral responses to incidental mid-frequency active sonar.
Henderson, E Elizabeth; Smith, Michael H; Gassmann, Martin; Wiggins, Sean M; Douglas, Annie B; Hildebrand, John A
2014-10-01
Opportunistic observations of behavioral responses by delphinids to incidental mid-frequency active (MFA) sonar were recorded in the Southern California Bight from 2004 through 2008 using visual focal follows, static hydrophones, and autonomous recorders. Sound pressure levels were calculated between 2 and 8 kHz. Surface behavioral responses were observed in 26 of the 46 groups, representing at least three of the five species, encountered during MFA sonar incidents. Responses included changes in behavioral state or direction of travel, changes in vocalization rates and call intensity, or a lack of vocalizations while MFA sonar occurred. However, 46% of focal groups not exposed to sonar also changed their behavior, and 43% of focal groups exposed to sonar did not change their behavior. Mean peak sound pressure levels when a behavioral response occurred were around 122 dB re 1 μPa. Acoustic localizations of dolphin groups exhibiting a response gave insight into nighttime movement patterns and provided evidence that impacts of sonar may be mediated by behavioral state. The lack of response in some cases may indicate a tolerance of or habituation to MFA sonar by local populations; however, the responses that occur at lower received levels may point to some sensitization as well.
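Computing a band-limited sound pressure level of the kind reported above (2-8 kHz) might look like the following sketch, assuming a calibrated pressure time series in micropascals and a recent SciPy; the fourth-order Butterworth design is an illustrative choice.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_spl(pressure_upa, fs, f_lo=2000.0, f_hi=8000.0):
    """rms sound pressure level (dB re 1 uPa) in a frequency band.

    `pressure_upa` is a calibrated pressure time series in micropascals and
    `fs` the sample rate in Hz.  The 2-8 kHz band matches the MFA band
    analysed in the study; the filter design here is illustrative.
    """
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, pressure_upa)
    rms = np.sqrt(np.mean(band ** 2))
    return 20 * np.log10(rms / 1.0)     # reference pressure 1 uPa
```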
Hu, Dandan; Sarder, Pinaki; Ronhovde, Peter; Orthaus, Sandra; Achilefu, Samuel; Nussinov, Zohar
2014-01-01
Inspired by a multi-resolution community detection (MCD) based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Further, using the proposed method, the mean-square error (MSE) in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The MCD method appeared to perform better than a popular spectral clustering based method in performing FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in MSE with increasing resolution. PMID:24251410
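A generic stand-in for the pixel-graph formulation might look like the sketch below: 4-connected pixels weighted by FLT similarity, partitioned with networkx's greedy modularity routine (assuming a networkx version that exposes a resolution parameter). This is not the authors' multiresolution community detection algorithm, only an illustration of the graph-at-a-resolution idea.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def segment_flim(flt_image, sigma=0.5, resolution=1.0):
    """Segment a fluorescence-lifetime image by community detection on a
    pixel graph.  Edges connect 4-neighbours, weighted by FLT similarity;
    larger `resolution` favours smaller communities (segments)."""
    h, w = flt_image.shape
    g = nx.Graph()
    for r in range(h):
        for c in range(w):
            for dr, dc in ((0, 1), (1, 0)):              # 4-connectivity
                rr, cc = r + dr, c + dc
                if rr < h and cc < w:
                    diff = flt_image[r, c] - flt_image[rr, cc]
                    weight = np.exp(-(diff ** 2) / (2 * sigma ** 2))
                    g.add_edge((r, c), (rr, cc), weight=weight)
    communities = greedy_modularity_communities(g, weight="weight",
                                                resolution=resolution)
    labels = np.zeros((h, w), dtype=int)
    for k, nodes in enumerate(communities):
        for (r, c) in nodes:
            labels[r, c] = k
    return labels

labels = segment_flim(np.random.rand(20, 20), resolution=2.0)
print(labels.max() + 1, "segments")
```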
BlobContours: adapting Blobworld for supervised color- and texture-based image segmentation
NASA Astrophysics Data System (ADS)
Vogel, Thomas; Nguyen, Dinh Quyen; Dittmann, Jana
2006-01-01
Extracting features is the first and one of the most crucial steps in the image retrieval process. While the color features and the texture features of digital images can be extracted rather easily, the shape features and the layout features depend on reliable image segmentation. Unsupervised image segmentation, often used in image analysis, works on a merely syntactic basis. That is, what an unsupervised segmentation algorithm can segment is only regions, but not objects. To obtain high-level objects, which is desirable in image retrieval, human assistance is needed. Supervised image segmentation schemes can improve the reliability of segmentation and segmentation refinement. In this paper we propose a novel interactive image segmentation technique that combines the reliability of a human expert with the precision of automated image segmentation. The iterative procedure can be considered a variation on the Blobworld algorithm introduced by Carson et al. from the EECS Department, University of California, Berkeley. Starting with an initial segmentation as provided by the Blobworld framework, our algorithm, namely BlobContours, gradually updates it by recalculating every blob, based on the original features and the updated number of Gaussians. Since the original algorithm was not designed for interactive processing, we had to consider additional requirements for realizing a supervised segmentation scheme on the basis of Blobworld. Increasing transparency of the algorithm by applying user-controlled iterative segmentation, providing different types of visualization for displaying the segmented image and decreasing computational time of segmentation are three major requirements which are discussed in detail.
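One pass of the Blobworld-style grouping step can be sketched with a Gaussian mixture fitted by EM over per-pixel colour and a simple local-variance texture cue; the feature choice and scikit-learn usage are illustrative, and the interactive recalculation loop of BlobContours is not included.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.ndimage import uniform_filter

def blob_segmentation(image_rgb, n_components=4):
    """Blobworld-style grouping: per-pixel colour plus a local-variance
    texture cue, clustered with a Gaussian mixture fitted by EM.

    In an interactive BlobContours-like loop the user would adjust
    `n_components` (the number of Gaussians) and the blobs would be
    recomputed from the same features; this sketch covers one such pass.
    """
    img = image_rgb.astype(float) / 255.0
    gray = img.mean(axis=2)
    # Local variance as a crude texture descriptor.
    texture = uniform_filter(gray ** 2, size=5) - uniform_filter(gray, size=5) ** 2
    features = np.dstack([img, texture[..., None]]).reshape(-1, 4)
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=0).fit(features)
    return gmm.predict(features).reshape(image_rgb.shape[:2])

labels = blob_segmentation(np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8))
```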
Assessment of Marine Mammal Impact Zones for Use of Military Sonar in the Baltic Sea.
Andersson, Mathias H; Johansson, Torbjörn
2016-01-01
Military sonars are known to have caused cetaceans to strand. Navies in shallow seas use different frequencies and sonar pulses, commonly frequencies between 25 and 100 kHz, compared with most studied NATO sonar systems that have been evaluated for their environmental impact. These frequencies match the frequencies of best hearing in the harbor porpoises and seals resident in the Baltic Sea. This study uses published temporary and permanent threshold shifts, measured behavioral response thresholds, technical specifications of a sonar system, and environmental parameters affecting sound propagation common for the Baltic Sea to estimate the impact zones for harbor porpoises and seals.
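Turning a source level, a response threshold, and a propagation model into an impact radius can be sketched as below. The "practical spreading" transmission-loss form and the absorption coefficient are generic shallow-water assumptions, not the propagation modelling used in the study, and the example levels are illustrative.

```python
import math

def impact_radius(source_level_db, threshold_db, alpha_db_per_km=0.3):
    """Range at which the received level falls to a response/injury threshold,
    assuming RL = SL - TL with practical spreading TL = 15*log10(r) + alpha*r.

    Levels are dB re 1 uPa (rms) at 1 m; alpha is absorption in dB/km.
    Solved by bisection on r (metres), since TL increases monotonically with r.
    """
    target_tl = source_level_db - threshold_db
    lo, hi = 1.0, 1e6
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        tl = 15 * math.log10(mid) + alpha_db_per_km * mid / 1000.0
        if tl < target_tl:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# e.g. a 215 dB source and a 160 dB behavioural-response threshold (illustrative)
print(f"{impact_radius(215, 160):.0f} m")
```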
Integration of orthophotographic and sidescan sonar imagery: an example from Lake Garda, Italy
Gentili, Giuseppe; Twichell, David C.; Schwab, Bill
1996-01-01
Digital orthophotos of the Lake Garda basin area are available at a scale of up to 1:10,000 from a 1994 high-altitude (average scale of 1:75,000) air photo coverage of Italy collected with an RC30 camera and Panatomic film. In October 1994 the lake bed was surveyed by USGS and CISIG personnel using a SIS 1000 Sea-Floor Mapping System. Subsystems of the SIS-1000 include high-resolution sidescan sonar and a sub-bottom profiler. The sidescan imagery was collected at ranges up to 1500 m, while preserving a 50-cm pixel resolution. The system was navigated using differential GPS. The extended operational range of the sidescan sonar permitted surveying the 370-km² lake area in 11 days. Data were compiled into a digital image with a pixel resolution of about 2 m, stored as 12 gigabytes on Exabyte 8-mm tape, converted from the WGS84 coordinate system to the European Datum (ED50), and integrated with bathymetric data digitized from maps. The digital bathymetric model was generated by interpolation using commercial software and was merged with the land elevation model to obtain a digital elevation model of the Lake Garda basin. The sidescan image data was also projected in the same coordinate system and seamed with the digital orthophoto of the land to produce a continuous image of the basin as if the water were removed. Some perspective scenes were generated by combining elevation and bathymetric data with basin and lake floor images. In deep water the lake's thermal structure created problems with the imagery, indicating that winter or spring is the best survey period. In shallow waters, ≤ 10 m, where data are missing, the bottom data gap can be filled with available images from the first few channels of the Daedalus-built MIVIS, a 102-channel hyperspectral scanner with 20 channel bands of 0.020-μm width operating in the visible part of the spectrum. By integrating orthophotos with sidescan imagery we can see how the basin morphology extends across the lake, the paths taken by the lake inlet along the lake bed and the areal distribution of sediments. Extensive exposures of debris aprons were noted on the western side of the lake. Various anthropogenic objects were recognized: pipelines, sites of waste disposal on the lake's bed, and relicts of Venetian and Austrian(?) boats.
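The WGS84-to-ED50 conversion mentioned above can be sketched with pyproj; the EPSG codes (4326 for WGS84, 4230 for ED50 geographic) and the sample point are assumptions for illustration, not values taken from the survey.

```python
from pyproj import Transformer

# Reproject sonar-mosaic coordinates from WGS84 (EPSG:4326) to the European
# Datum 1950 (ED50, EPSG:4230) so they can be merged with the orthophotos.
to_ed50 = Transformer.from_crs("EPSG:4326", "EPSG:4230", always_xy=True)
lon, lat = 10.70, 45.65            # an approximate point on Lake Garda
lon_ed50, lat_ed50 = to_ed50.transform(lon, lat)
print(lon_ed50, lat_ed50)
```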
Pfeiffer, William R.; Flocks, James G.; DeWitt, Nancy T.; Forde, Arnell S.; Kelso, Kyle; Thompson, Phillip R.; Wiese, Dana S.
2011-01-01
In March of 2010, the U.S. Geological Survey (USGS) conducted geophysical surveys offshore of Petit Bois Island, Mississippi, and Dauphin Island, Alabama (fig. 1). These efforts were part of the USGS Gulf of Mexico Science Coordination partnership with the U.S. Army Corps of Engineers (USACE) to assist the Mississippi Coastal Improvements Program (MsCIP) and the Northern Gulf of Mexico (NGOM) Ecosystem Change and Hazards Susceptibility Project by mapping the shallow geologic stratigraphic framework of the Mississippi Barrier Island Complex. These geophysical surveys will provide the data necessary for scientists to define, interpret, and provide baseline bathymetry and seafloor habitat for this area and to aid scientists in predicting future geomorphological changes of the islands with respect to climate change, storm impact, and sea-level rise. Furthermore, these data will provide information for barrier island restoration, particularly in Camille Cut, and protection for the historical Fort Massachusetts on Ship Island, Mississippi. For more information please refer to http://ngom.usgs.gov/gomsc/mscip/index.html. This report serves as an archive of the processed swath bathymetry and side scan sonar data (SSS). Data products herein include gridded and interpolated surfaces, seabed backscatter images, and ASCII x,y,z data products for both swath bathymetry and side scan sonar imagery. Additional files include trackline maps, navigation files, GIS files, Field Activity Collection System (FACS) logs, and formal FGDC metadata. Scanned images of the handwritten and digital FACS logs are also provided as PDF files. Refer to the Acronyms page for expansion of acronyms and abbreviations used in this report.
50 CFR 216.190 - Modifications to Letters of Authorization.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Array Sensor System Low Frequency Active (SURTASS LFA sonar) Sonar § 216.190 Modifications to Letters of... sonar system from one ship to another, is not considered a substantial modification. (b) If the National...
50 CFR 216.190 - Modifications to Letters of Authorization.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Array Sensor System Low Frequency Active (SURTASS LFA sonar) Sonar § 216.190 Modifications to Letters of... sonar system from one ship to another, is not considered a substantial modification. (b) If the National...
Waterway wide area tactical coverage and homing (WaterWATCH) program overview
NASA Astrophysics Data System (ADS)
Driggers, Gerald; Cleveland, Tammy; Araujo, Lisa; Spohr, Robert; Umansky, Mark
2008-04-01
The Congressional and Army sponsored WaterWATCH™ Program has developed and demonstrated a fully integrated shallow water port and facility monitoring system. It provides fully automated monitoring of domains above and below the surface of the water using primarily off-the-shelf sensors and software. The system is modular, open architecture and IP-based, and elements can be mixed and matched to adapt to specific applications. The sensors integrated into the WaterWATCH™ system include cameras, radar, passive and active sonar, and various motion detectors. The sensors were chosen based on extensive requirements analyses and tradeoffs. Descriptions of the system and individual sensors are provided, along with data from modular and system-level testing. Camera test results address capabilities and limitations associated with using "smart" image analysis software with stressing environmental issues such as bugs, darkness, rain and snow. Radar issues addressed include achieving range and resolution requirements. The passive sonar capability to provide near 100% true positives with zero false positives is demonstrated. Testing results are also presented to show that inexpensive active sonar can be effective against divers with or without SCUBA gear and that false alarms due to fish can be minimized. A simple operator interface has also been demonstrated.
Mapping nuclear craters on Enewetak Atoll, Marshall Islands
Hampson, John C., Jr.
1986-01-01
In 1984, the U.S. Geological Survey conducted a detailed geologic analysis of two nuclear test craters at Enewetak Atoll, Marshall Islands, on behalf of the Defense Nuclear Agency. A multidisciplinary task force mapped the morphology, surface character, and subsurface structure of two craters, OAK and KOA. The field mapping techniques include echo sounding, sidescan sonar imaging, single-channel and multichannel seismic reflection profiling, a seismic refraction survey, and scuba and submersible operations. All operations had to be navigated precisely and correlatable with subsequent drilling and sampling operations. Mapping with a high degree of precision at scales as large as 1:1500 required corrections that often are not considered in marine mapping. Corrections were applied to the bathymetric data for location of the echo-sounding transducer relative to the navigation transponder on the ship and for transducer depth, speed of sound, and tidal variations. Sidescan sonar, single-channel seismic reflection, and scuba and submersible data were correlated in depth and map position with the bathymetric data to provide a precise, internally consistent data set. The multichannel and refraction surveys were conducted independently but compared well with bathymetry. Examples drawn from processing the bathymetric, sidescan sonar, and single-channel reflection data help illustrate problems and procedures in precision mapping.
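The bathymetric correction chain listed above (transducer position and draft, sound speed, tide) can be sketched for a single sounding as follows; all numerical values are illustrative placeholders, not values from the Enewetak survey.

```python
def corrected_depth(two_way_travel_time_s, assumed_sound_speed=1500.0,
                    measured_sound_speed=1540.0, transducer_draft=2.5,
                    tide_height=0.4):
    """Apply the kinds of corrections listed in the abstract to one sounding.

    The depth recorded by an echo sounder assumes a nominal sound speed; it is
    rescaled to the measured speed, offset by the transducer draft below the
    waterline, and reduced to the tidal datum.
    """
    raw_depth = assumed_sound_speed * two_way_travel_time_s / 2.0
    depth = raw_depth * (measured_sound_speed / assumed_sound_speed)
    depth += transducer_draft          # transducer sits below the waterline
    depth -= tide_height               # reduce to chart/tidal datum
    return depth

print(f"{corrected_depth(0.06):.2f} m")   # corrected depth for a 60 ms echo (~48 m)
```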
Technical report on semiautomatic segmentation using the Adobe Photoshop.
Park, Jin Seo; Chung, Min Suk; Hwang, Sung Bae; Lee, Yong Sook; Har, Dong-Hwan
2005-12-01
The purpose of this research is to enable users to semiautomatically segment the anatomical structures in magnetic resonance images (MRIs), computerized tomographs (CTs), and other medical images on a personal computer. The segmented images are used for making 3D images, which are helpful to medical education and research. To achieve this purpose, the following trials were performed. The entire body of a volunteer was scanned to make 557 MRIs. On Adobe Photoshop, contours of 19 anatomical structures in the MRIs were semiautomatically drawn using MAGNETIC LASSO TOOL and manually corrected using either LASSO TOOL or DIRECT SELECTION TOOL to make 557 segmented images. In a similar manner, 13 anatomical structures in 8,590 anatomical images were segmented. Proper segmentation was verified by making 3D images from the segmented images. Semiautomatic segmentation using Adobe Photoshop is expected to be widely used for segmentation of anatomical structures in various medical images.
BatSLAM: Simultaneous Localization and Mapping Using Biomimetic Sonar
Steckel, Jan; Peremans, Herbert
2013-01-01
We propose to combine a biomimetic navigation model which solves a simultaneous localization and mapping task with a biomimetic sonar mounted on a mobile robot to address two related questions. First, can robotic sonar sensing lead to intelligent interactions with complex environments? Second, can we model sonar based spatial orientation and the construction of spatial maps by bats? To address these questions we adapt the mapping module of RatSLAM, a previously published navigation system based on computational models of the rodent hippocampus. We analyze the performance of the proposed robotic implementation operating in the real world. We conclude that the biomimetic navigation model operating on the information from the biomimetic sonar allows an autonomous agent to map unmodified (office) environments efficiently and consistently. Furthermore, these results also show that successful navigation does not require the readings of the biomimetic sonar to be interpreted in terms of individual objects/landmarks in the environment. We argue that the system has applications in robotics as well as in the field of biology as a simple, first order, model for sonar based spatial orientation and map building. PMID:23365647
3S²: Behavioral Response Studies of Cetaceans to Navy Sonar Signals in Norwegian Waters
2013-09-30
Excerpts from the cited references: "... killer (Orcinus orca), long-finned pilot (Globicephala melas), and sperm whales (Physeter macrocephalus) to naval sonar. Aquatic Mammals 38: 362-401"; "... of sonar signals by long-finned pilot whales (Globicephala melas). Marine Mammal Sci."; "Aoki K, Sakai M, Miller PJO, Visser F, Sato K (2013) Body ..."
Real-Time 3D Sonar Modeling And Visualization
1998-06-01
Excerpts from figure captions and report front matter: views looking back towards the Manta sonar beam, and Manta plus sonar from 1000 m off track. NUWC sponsor: Erik Chaum (NUWC Code 22), who supervised the design; principal investigator: Don Brutzman; Sonar Officer LT Kevin Byrne, USN; Intelligence Officer CPT Russell Storms, USA. References include McGhee, Bob, "The Phoenix Autonomous Underwater Vehicle," chapter 13, AI-Based Mobile Robots, editors David Kortenkamp, Pete Bonasso and Robin Murphy.
Sonar Test and Test Instrumentation Support.
1979-03-29
Report header excerpt: Applied Research Laboratories, The University of Texas at Austin; SONAR TEST AND TEST INSTRUMENTATION SUPPORT, quarterly progress report, September - 30 November 197(?), contract N00140-76-C-64a7. The effort involves technical support with sonar testing, test instrumentation, and documentation; this report describes progress made under the tasks that are ...
2009-01-01
measure of a backscatter at a single narrowband frequency, and some AUVs carry single-frequency sidescan sonars (and this technology has been adapted...monostatic Doppler sonar module. Key personnel for this project include: Andone Lavery as the PI for this project and who has overall responsibility...for the successful development, testing, and calibration of the broadband system. Gene Terray, who developed the original sonar Doppler sonar boards
Sonar Transducer Reliability Improvement Program (STRIP) FY81.
1981-10-01
that must be considered when selecting a material for the design of a sonar transducer. In the past decade, plastics have decreased in cost and...required in a sonar transducer system. A recent example of this type of failure has been with a neoprene elastomer formulation which was designed to meet...subject of the first design specification for transducer elastomers. Previous work on this material under the aegis of the Sonar Transduction
Whales and Sonar: Environmental Exemptions for the Navy’s Mid-Frequency Active Sonar Training
2008-11-14
Table excerpt (listed species and ESA status, E = endangered, T = threatened): Balaenoptera musculus, E; Finback whale, Balaenoptera physalus, E; Humpback whale, Megaptera novaeangliae, E; Killer whale, Southern Resident DPS, Orcinus orca ...; (=Salmo) mykiss, T; Steelhead, south central CA coast, Oncorhynchus (=Salmo) mykiss, E; Steelhead, southern CA coast, Oncorhynchus (=Salmo) mykiss, E; Blue whale ... Order Code RL34403, Whales and Sonar: Environmental Exemptions for the Navy's Mid-Frequency Active Sonar Training, updated November 14, 2008, Kristina ...
Review methods for image segmentation from computed tomography images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik
Image segmentation is a challenging process in terms of accuracy, automation, and robustness, especially for medical images. Many segmentation methods can be applied to medical images, but not all of them are suitable. For medical purposes, the aims of image segmentation are to study anatomical structure, identify the region of interest, measure tissue volume to track tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of segmentation methods for Computed Tomography (CT) images. CT images have their own characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring of the image and visual noise. The details of the methods, their strengths, and the problems they incur are defined and explained. It is necessary to know the suitable segmentation method in order to obtain an accurate segmentation. This paper can serve as a guide for researchers in choosing a suitable segmentation method, especially for segmenting images from CT scans.
Segmentation Fusion Techniques with Application to Plenoptic Images: A Survey.
NASA Astrophysics Data System (ADS)
Evin, D.; Hadad, A.; Solano, A.; Drozdowicz, B.
2016-04-01
The segmentation of anatomical and pathological structures plays a key role in the characterization of clinically relevant evidence from digital images. Recently, plenoptic imaging has emerged as a new promise to enrich the diagnostic potential of conventional photography. Since a plenoptic image comprises a set of slightly different versions of the target scene, we propose to make use of those images to improve segmentation quality relative to single-image segmentation. The problem of finding a segmentation solution from multiple images of a single scene is called segmentation fusion. This paper reviews the issue of segmentation fusion in order to find solutions that can be applied to plenoptic images, particularly images from the ophthalmological domain.
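The simplest form of segmentation fusion, a pixel-wise majority vote across registered candidate segmentations, can be sketched as below; it is meant only to make the fusion idea concrete, not to represent any specific method from the survey.

```python
import numpy as np

def fuse_segmentations(label_maps):
    """Pixel-wise majority vote over a stack of candidate segmentations.

    `label_maps` has shape (n_views, H, W) with integer labels.  This assumes
    the views are already registered to a common reference and share a
    consistent labelling.
    """
    stack = np.asarray(label_maps)
    n_labels = stack.max() + 1
    # Count votes per label at each pixel, then take the argmax.
    votes = np.stack([(stack == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

fused = fuse_segmentations([
    [[0, 0, 1], [1, 1, 1]],
    [[0, 1, 1], [1, 1, 0]],
    [[0, 0, 1], [1, 0, 1]],
])
print(fused)     # [[0 0 1] [1 1 1]]
```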
Code of Federal Regulations, 2014 CFR
2014-10-01
... aircraft conducting high-frequency or non-hull-mounted mid-frequency active sonar activities associated... or aircraft conducting high-frequency active sonar activities associated with anti-submarine warfare...). (2) High-frequency and non-hull mounted mid-frequency active sonar (except helicopter dipping). (3...
2012-09-30
Reproductive Potential, Immune Function, and Energetic Fitness of Bottlenose Dolphins Exposed to Sounds Consistent with Naval Sonars Dana L. Wetzel...biomarkers to examine whether significant sublethal responses to sonar-type sounds occur in bottlenose dolphins exposed to such sounds. The...investigate samples collected from trained dolphins before exposure to simulated mid-frequency sonar signals, immediately after exposure, and one week post
An interactive medical image segmentation framework using iterative refinement.
Kalshetti, Pratik; Bundele, Manas; Rahangdale, Parag; Jangra, Dinesh; Chattopadhyay, Chiranjoy; Harit, Gaurav; Elhence, Abhay
2017-04-01
Segmentation is often performed on medical images for identifying diseases in clinical evaluation. Hence it has become one of the major research areas. Conventional image segmentation techniques are unable to provide satisfactory segmentation results for medical images as they contain irregularities. They need to be pre-processed before segmentation. In order to obtain the most suitable method for medical image segmentation, we propose MIST (Medical Image Segmentation Tool), a two stage algorithm. The first stage automatically generates a binary marker image of the region of interest using mathematical morphology. This marker serves as the mask image for the second stage which uses GrabCut to yield an efficient segmented result. The obtained result can be further refined by user interaction, which can be done using the proposed Graphical User Interface (GUI). Experimental results show that the proposed method is accurate and provides satisfactory segmentation results with minimum user interaction on medical as well as natural images. Copyright © 2017 Elsevier Ltd. All rights reserved.
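The two-stage idea, a morphological marker followed by mask-initialised GrabCut, can be sketched with OpenCV as below. The Otsu threshold, the 9x9 elliptical structuring element, and the border seeding are illustrative choices rather than MIST's exact operators, and the interactive GUI refinement is not included.

```python
import cv2
import numpy as np

def mist_like_segmentation(image_bgr, iterations=5):
    """Two-stage sketch: build a rough foreground marker with mathematical
    morphology, then refine it with mask-initialised GrabCut."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, marker = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    marker = cv2.morphologyEx(marker, cv2.MORPH_OPEN, kernel)

    # Stage 2: GrabCut initialised from the marker mask; the image border is
    # seeded as definite background to anchor the models.
    mask = np.where(marker > 0, cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
    mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = cv2.GC_BGD
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, None, bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_MASK)
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
```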
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-03
... present in the area to sound from various active tactical sonar sources or to pressure from underwater... utilizing mid- and high frequency active sonar sources and explosive detonations. These sonar and explosive...
Segmentation of stereo terrain images
NASA Astrophysics Data System (ADS)
George, Debra A.; Privitera, Claudio M.; Blackmon, Theodore T.; Zbinden, Eric; Stark, Lawrence W.
2000-06-01
We have studied four approaches to segmentation of images: three automatic ones using image processing algorithms and a fourth approach, human manual segmentation. We were motivated toward helping with an important NASA Mars rover mission task -- replacing laborious manual path planning with automatic navigation of the rover on the Mars terrain. The goal of the automatic segmentations was to identify an obstacle map on the Mars terrain to enable automatic path planning for the rover. The automatic segmentation was first explored with two different segmentation methods: one based on pixel luminance, and the other based on pixel altitude generated through stereo image processing. The third automatic segmentation was achieved by combining these two types of image segmentation. Human manual segmentation of Martian terrain images was used for evaluating the effectiveness of the combined automatic segmentation as well as for determining how different humans segment the same images. Comparisons between two different segmentations, manual or automatic, were measured using a similarity metric, SAB. Based on this metric, the combined automatic segmentation did fairly well in agreeing with the manual segmentation. This was a demonstration of a positive step towards automatically creating the accurate obstacle maps necessary for automatic path planning and rover navigation.
Dynamic 3d Modeling of a Canal-Tunnel Using Photogrammetric and Bathymetric Data
NASA Astrophysics Data System (ADS)
Moisan, E.; Heinkele, C.; Charbonnier, P.; Foucher, P.; Grussenmeyer, P.; Guillemin, S.; Koehl, M.
2017-02-01
This contribution introduces an original method for dynamically surveying the vault and underwater parts of a canal-tunnel for 3D modeling. The recording system, embedded on a barge, is composed of cameras that provide images of the above-water part of the tunnel, and a sonar that acquires underwater 3D profiles. In this contribution we propose to fully exploit the capacities of photogrammetry to deal with the issue of geo-referencing data in the absence of global positioning system (GPS) data. More specifically, we use it both for reconstructing the vault and side walls of the tunnel in 3D and for estimating the trajectory of the boat, which is necessary to rearrange sonar profiles to form the 3D model of the canal. We report on a first experimentation carried out inside a canal-tunnel and show promising preliminary results that illustrate the potentialities of the proposed approach.
Gardner, J.V.; Mayer, L.A.; Hughes, Clarke J.E.; Kleiner, A.
1998-01-01
The 1990s have seen rapid advances in seafloor mapping technology. Multibeam sonars are now capable of mapping a wide range of water depths with beams as narrow as 1°, and provide up to a 150° swath. When these multibeam sonars are coupled with an extremely accurate vehicle motion sensor and very precise navigation, they are capable of producing unprecedented images of the seafloor. This technology was used in December 1997 to map the East and West Flower Gardens and Stetson Banks, Gulf of Mexico. The results from this survey provide the most accurate maps of these areas yet produced and reveal features at submeter resolution never mapped in these areas before. The digital data provide a database that should become the fundamental base maps for all subsequent work in this recently established National Marine Sanctuary.
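The beam geometry quoted above (1° beams, 150° swath) translates into footprint and swath width by simple trigonometry, sketched below for a flat seafloor directly beneath the sonar; refraction and vessel motion are ignored, and the 100 m depth is an illustrative example.

```python
import math

def footprint_and_swath(depth_m, beamwidth_deg=1.0, swath_deg=150.0):
    """Across-track footprint of a single nadir beam and the total swath width
    for a flat seafloor, using simple geometry (no refraction)."""
    footprint = 2 * depth_m * math.tan(math.radians(beamwidth_deg) / 2)
    swath = 2 * depth_m * math.tan(math.radians(swath_deg) / 2)
    return footprint, swath

f, s = footprint_and_swath(100.0)       # e.g. a 100 m deep bank
print(f"nadir footprint ~{f:.1f} m, swath ~{s:.0f} m")
```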
Identification of uncommon objects in containers
Bremer, Peer-Timo; Kim, Hyojin; Thiagarajan, Jayaraman J.
2017-09-12
A system for identifying in an image an object that is commonly found in a collection of images and for identifying a portion of an image that represents an object based on a consensus analysis of segmentations of the image. The system collects images of containers that contain objects for generating a collection of common objects within the containers. To process the images, the system generates a segmentation of each image. The image analysis system may also generate multiple segmentations for each image by introducing variations in the selection of voxels to be merged into a segment. The system then generates clusters of the segments based on similarity among the segments. Each cluster represents a common object found in the containers. Once the clustering is complete, the system may be used to identify common objects in images of new containers based on similarity between segments of images and the clusters.
Monitoring Change Through Hierarchical Segmentation of Remotely Sensed Image Data
NASA Technical Reports Server (NTRS)
Tilton, James C.; Lawrence, William T.
2005-01-01
NASA's Goddard Space Flight Center has developed a fast and effective method for generating image segmentation hierarchies. These segmentation hierarchies organize image data in a manner that makes their information content more accessible for analysis. Image segmentation enables analysis through the examination of image regions rather than individual image pixels. In addition, the segmentation hierarchy provides additional analysis clues through the tracing of the behavior of image region characteristics at several levels of segmentation detail. The potential for extracting the information content from imagery data based on segmentation hierarchies has not been fully explored for the benefit of the Earth and space science communities. This paper explores the potential of exploiting these segmentation hierarchies for the analysis of multi-date data sets, and for the particular application of change monitoring.
2013-09-30
cavirostris) to MFA sonar signals. Secondary goals included conducting a killer whale playback that has not been preceded by a sonar playback (as in Tyack...et al. 2011) and collecting more baseline data on Ziphius. OBJECTIVES This investigation set out to safely test responses of Ziphius to sonar ...signals and to determine the exposure level required to elicit a response in a site where strandings have been associated with sonar exercises and
Using deep learning in image hyper spectral segmentation, classification, and detection
NASA Astrophysics Data System (ADS)
Zhao, Xiuying; Su, Zhenyu
2018-02-01
Recent years have shown that deep learning neural networks are a valuable tool in the field of computer vision. Deep learning methods can be used in remote sensing applications such as land-cover classification, vehicle detection in satellite images, and hyperspectral image classification. This paper addresses the use of deep artificial neural networks in satellite image segmentation. Image segmentation plays an important role in image processing. Remote sensing images often exhibit large hue differences, which results in poor display of the images in a VR environment. Image segmentation is a preprocessing technique applied to the original images that splits an image into many parts of differing hue so that the color can be unified. Several computational models based on supervised, unsupervised, parametric, and probabilistic region-based image segmentation techniques have been proposed. Recently, a machine learning technique known as deep learning with convolutional neural networks has been widely used for the development of efficient and automatic image segmentation models. In this paper, we focus on the study of deep convolutional neural networks and their variants for automatic image segmentation rather than traditional image segmentation strategies.
A fully convolutional networks (FCN) based image segmentation algorithm in binocular imaging system
NASA Astrophysics Data System (ADS)
Long, Zourong; Wei, Biao; Feng, Peng; Yu, Pengwei; Liu, Yuanyuan
2018-01-01
This paper proposes an image segmentation algorithm based on fully convolutional networks (FCN) for a binocular imaging system under various circumstances. The segmentation task is treated as semantic segmentation: the FCN classifies individual pixels and thereby achieves pixel-level semantic segmentation. Unlike classical convolutional neural networks (CNN), the FCN uses convolution layers instead of fully connected layers, so it can accept images of arbitrary size. In this paper, we combine the convolutional neural network with scale-invariant feature matching to solve the problem of visual positioning under different scenarios. All high-resolution images are captured with our calibrated binocular imaging system, and several groups of test data are collected to verify the method. The experimental results show that the binocular images are effectively segmented without over-segmentation. With these segmented images, feature matching via the SURF method is implemented to obtain regional information for further image processing. The final positioning procedure shows that the results are acceptable in the range of 1.4-1.6 m, with a distance error of less than 10 mm.
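As a minimal illustration of the fully convolutional idea described above, the sketch below shows how replacing fully connected layers with convolutions lets one network produce a per-pixel score map for an image of arbitrary size. It is not the authors' network: the layer widths, the use of PyTorch, and the bilinear upsampling step are illustrative assumptions.

```python
# Minimal fully convolutional network sketch (illustrative, not the paper's model).
# Every layer is convolutional, so any input size is accepted and the output is a
# per-pixel class score map with the same spatial size as the input.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    def __init__(self, in_channels=3, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),  # coarsen by a factor of 2
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(32, num_classes, kernel_size=1)  # 1x1 conv replaces the FC layer

    def forward(self, x):
        scores = self.classifier(self.features(x))
        # upsample the coarse score map back to the input resolution
        return F.interpolate(scores, size=x.shape[2:], mode="bilinear", align_corners=False)

# works for any image size, e.g. one 3-channel 480 x 640 frame from a stereo pair
logits = TinyFCN()(torch.randn(1, 3, 480, 640))
labels = logits.argmax(dim=1)  # per-pixel class labels
```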
Contour-Driven Atlas-Based Segmentation
Wachinger, Christian; Fritscher, Karl; Sharp, Greg; Golland, Polina
2016-01-01
We propose new methods for automatic segmentation of images based on an atlas of manually labeled scans and contours in the image. First, we introduce a Bayesian framework for creating initial label maps from manually annotated training images. Within this framework, we model various registration- and patch-based segmentation techniques by changing the deformation field prior. Second, we perform contour-driven regression on the created label maps to refine the segmentation. Image contours and image parcellations give rise to non-stationary kernel functions that model the relationship between image locations. Setting the kernel to the covariance function in a Gaussian process establishes a distribution over label maps supported by image structures. Maximum a posteriori estimation of the distribution over label maps conditioned on the outcome of the atlas-based segmentation yields the refined segmentation. We evaluate the segmentation in two clinical applications: the segmentation of parotid glands in head and neck CT scans and the segmentation of the left atrium in cardiac MR angiography images. PMID:26068202
Metric Learning to Enhance Hyperspectral Image Segmentation
NASA Technical Reports Server (NTRS)
Thompson, David R.; Castano, Rebecca; Bue, Brian; Gilmore, Martha S.
2013-01-01
Unsupervised hyperspectral image segmentation can reveal spatial trends that show the physical structure of the scene to an analyst. Segmentations highlight borders and reveal areas of homogeneity and change. They are independently helpful for object recognition and assist with automated production of symbolic maps. Additionally, a good segmentation can dramatically reduce the number of effective spectra in an image, enabling analyses that would otherwise be computationally prohibitive. Specifically, using an over-segmentation of the image instead of individual pixels can reduce noise and potentially improve the results of statistical post-analysis. In this innovation, a metric learning approach is presented to improve the performance of unsupervised hyperspectral image segmentation. The prototype demonstrations attempt a superpixel segmentation in which the image is conservatively over-segmented; that is, single surface features may be split into multiple segments, but each individual segment, or superpixel, is ensured to have homogeneous mineralogy.
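A conservative over-segmentation of the kind described above can be sketched with an off-the-shelf superpixel algorithm. The snippet below uses scikit-image's SLIC on a placeholder RGB image as a stand-in; the mission data, feature space, and learned metric of the paper are not reproduced here.

```python
# Superpixel over-segmentation sketch (SLIC as a stand-in for the paper's segmenter).
import numpy as np
from skimage.data import astronaut
from skimage.segmentation import slic

image = astronaut()  # placeholder RGB image; a hyperspectral cube would first be
                     # reduced to a few bands or principal components
segments = slic(image, n_segments=400, compactness=10.0, start_label=1)

# summarize each superpixel by its mean value, shrinking the number of effective
# spectra from one per pixel to one per segment
n_superpixels = segments.max()
means = np.array([image[segments == k].mean(axis=0) for k in range(1, n_superpixels + 1)])
```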
Sonar-induced temporary hearing loss in dolphins
Mooney, T. Aran; Nachtigall, Paul E.; Vlachos, Stephanie
2009-01-01
There is increasing concern that human-produced ocean noise is adversely affecting marine mammals, as several recent cetacean mass strandings may have been caused by animals' interactions with naval ‘mid-frequency’ sonar. However, it has yet to be empirically demonstrated how sonar could induce these strandings or cause physiological effects. In controlled experimental studies, we show that mid-frequency sonar can induce temporary hearing loss in a bottlenose dolphin (Tursiops truncatus). Mild behavioural alterations were also associated with the exposures. The auditory effects were induced only by repeated exposures to intense sonar pings with total sound exposure levels of 214 dB re 1 μPa² s. Data support an increasing energy model to predict temporary noise-induced hearing loss and indicate that odontocete noise exposure effects show trends similar to those in terrestrial mammals. Thus, sonar can induce physiological and behavioural effects in at least one species of odontocete; however, exposures must be prolonged and at high sound exposure levels to generate these effects. PMID:19364712
Image Segmentation Using Minimum Spanning Tree
NASA Astrophysics Data System (ADS)
Dewi, M. P.; Armiati, A.; Alvini, S.
2018-04-01
This research aims to segment digital images. The purpose of segmentation is to separate an object from the background so that the main object can be processed for other purposes. Along with the development of digital image processing applications, the segmentation process has become increasingly necessary. The segmented image, which is the result of the segmentation process, should be accurate, because the subsequent processing steps depend on correct interpretation of the information in the image. This article discusses the application of the minimum spanning tree of a graph to the segmentation of digital images. The method is able to separate an object from the background, and the image is converted to a binary image. In this case, the object of interest is set to white while the background is black, or vice versa.
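The sketch below gives one plausible reading of the minimum-spanning-tree idea, under the assumption of a 4-connected pixel graph weighted by absolute intensity differences; the paper's exact graph construction and cut rule may differ. Edges of the MST with weight above a threshold are cut, and the surviving connected components become the segments, from which a binary object/background image can be formed.

```python
# MST-based segmentation sketch on a 4-connected pixel graph (assumed construction).
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree

def mst_segment(gray, cut_threshold=0.1):
    h, w = gray.shape
    idx = np.arange(h * w).reshape(h, w)
    # horizontal and vertical edges, weighted by absolute intensity difference
    rows = np.concatenate([idx[:, :-1].ravel(), idx[:-1, :].ravel()])
    cols = np.concatenate([idx[:, 1:].ravel(), idx[1:, :].ravel()])
    weights = np.abs(gray.ravel()[rows] - gray.ravel()[cols]) + 1e-9  # keep edges nonzero
    graph = coo_matrix((weights, (rows, cols)), shape=(h * w, h * w))

    mst = minimum_spanning_tree(graph).tocoo()
    keep = mst.data <= cut_threshold  # cut high-contrast MST edges
    pruned = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])), shape=mst.shape)
    n_labels, labels = connected_components(pruned, directed=False)
    return labels.reshape(h, w), n_labels

gray = np.random.rand(64, 64)                         # placeholder image in [0, 1]
labels, n = mst_segment(gray)
binary = (labels == labels[32, 32]).astype(np.uint8)  # chosen object white, rest black
```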
NASA Astrophysics Data System (ADS)
Lin, Wei; Li, Xizhe; Yang, Zhengming; Lin, Lijun; Xiong, Shengchun; Wang, Zhiyuan; Wang, Xiangyang; Xiao, Qianhua
Based on the basic principle of the porosity method in image segmentation, and considering the relationship between the porosity of rocks and the fractal characteristics of their pore structures, a new improved image segmentation method is proposed, which uses the porosity calculated from each core image as a constraint to obtain the best threshold. The comparative analysis shows that the porosity method can, in theory, segment images best, but its actual segmentation results deviate from the real situation. Because of core heterogeneity and isolated pores, the porosity method, which takes the experimentally measured porosity of the whole core as the criterion, cannot achieve the desired segmentation. In contrast, the new improved method overcomes these shortcomings and produces a more reasonable binary segmentation of the core grayscale images, because each image is segmented according to its own calculated porosity. Moreover, basing the segmentation on calculated rather than measured porosity also greatly saves manpower and material resources, especially for tight rocks.
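A minimal sketch of the porosity-constrained idea follows, assuming the pores are the darker phase; the target porosity here is a placeholder, whereas the paper computes it per image.

```python
# Porosity-constrained thresholding sketch: choose the gray-level threshold so that
# the segmented pore fraction of a slice matches its (calculated) porosity.
import numpy as np

def porosity_threshold(slice_gray, target_porosity):
    """Binary pore map whose pore fraction equals target_porosity (dark voxels = pores)."""
    threshold = np.quantile(slice_gray, target_porosity)
    return slice_gray <= threshold

slice_gray = np.random.rand(256, 256)       # placeholder core image slice
pores = porosity_threshold(slice_gray, target_porosity=0.12)
print("segmented porosity:", pores.mean())  # ~0.12 by construction
```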
Techniques on semiautomatic segmentation using the Adobe Photoshop
NASA Astrophysics Data System (ADS)
Park, Jin Seo; Chung, Min Suk; Hwang, Sung Bae
2005-04-01
The purpose of this research is to enable anybody to semiautomatically segment anatomical structures in MRIs, CTs, and other medical images on a personal computer. The segmented images are used to make three-dimensional images, which are helpful in medical education and research. To achieve this purpose, the following trials were performed. The entire body of a volunteer was MR scanned to make 557 MRIs, which were transferred to a personal computer. On Adobe Photoshop, contours of 19 anatomical structures in the MRIs were semiautomatically drawn using the MAGNETIC LASSO TOOL and then manually corrected using either the LASSO TOOL or the DIRECT SELECTION TOOL to make 557 segmented images. Likewise, 11 anatomical structures in the 8,500 anatomical images were segmented, as were 12 brain and 10 heart anatomical structures in anatomical images. Proper segmentation was verified by making and examining coronal, sagittal, and three-dimensional images from the segmented images. During semiautomatic segmentation on Adobe Photoshop, a suitable algorithm could be chosen, the extent of automation could be regulated, a convenient user interface was available, and software bugs rarely occurred. The techniques of semiautomatic segmentation using Adobe Photoshop are expected to be widely used for segmentation of anatomical structures in various medical images.
NASA Technical Reports Server (NTRS)
Tilton, James C.
1988-01-01
Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of the approach on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
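The order-independent, globally-best-merge idea can be approximated with off-the-shelf agglomerative clustering constrained to the pixel grid, as sketched below: Ward linkage always merges the adjacent pair with the smallest cost, regardless of scan order. This is only an analogue of the approach, not the MPP implementation.

```python
# Sketch of order-independent region merging via connectivity-constrained
# agglomerative (Ward) clustering; each step merges the globally cheapest pair.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.image import grid_to_graph

image = np.random.rand(64, 64)              # placeholder single-band image
connectivity = grid_to_graph(*image.shape)  # 4-connected pixel adjacency

model = AgglomerativeClustering(n_clusters=8, linkage="ward", connectivity=connectivity)
labels = model.fit_predict(image.reshape(-1, 1)).reshape(image.shape)
```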
Iterative Self-Dual Reconstruction on Radar Image Recovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martins, Charles; Medeiros, Fatima; Ushizima, Daniela
2010-05-21
Imaging systems such as ultrasound, sonar, laser and synthetic aperture radar (SAR) are subject to speckle noise during image acquisition. Before analyzing these images, it is often necessary to remove the speckle noise using filters. We combine properties of two mathematical morphology filters with speckle statistics to propose a signal-dependent noise filter for multiplicative noise. We describe a multiscale scheme that preserves sharp edges while smoothing homogeneous areas, by combining local statistics with two mathematical morphology filters: the alternating sequential and the self-dual reconstruction algorithms. The experimental results show that the proposed approach is less sensitive to varying window sizes when applied to simulated and real SAR images in comparison with standard filters.
Color segmentation in the HSI color space using the K-means algorithm
NASA Astrophysics Data System (ADS)
Weeks, Arthur R.; Hague, G. Eric
1997-04-01
Segmentation of images is an important aspect of image recognition. While grayscale image segmentation has become quite a mature field, much less work has been done with regard to color image segmentation. Until recently, this was predominantly due to the lack of the computing power and color display hardware required to manipulate true-color (24-bit) images. Today, it is not uncommon to find a standard desktop computer system with a true-color 24-bit display, at least 8 million bytes of memory, and 2 gigabytes of hard disk storage. Segmentation of color images is not as simple as segmenting each of the three RGB color components separately. The difficulty of using the RGB color space is that it does not closely model the psychological understanding of color. A better color model, which closely follows human visual perception, is the hue, saturation, intensity (HSI) model. This color model separates the color components into chromatic and achromatic information. Strickland et al. showed the importance of color in the extraction of edge features from an image; their method enhances the edges detectable in the luminance image with information from the saturation image. Segmentation of both the saturation and intensity components is easily accomplished with any grayscale segmentation algorithm, since these components are linear. The modulus-2π nature of the hue component makes its segmentation difficult; for example, hues of 0 and 2π yield the same color tint. Instead of applying separate segmentation to each of the hue, saturation, and intensity components, a better method is to segment the chromatic component separately from the intensity component, because of the important role that chromatic information plays in the segmentation of color images. This paper presents a method of using the grayscale k-means algorithm to segment 24-bit color images. Additionally, this paper shows the important role the hue component plays in the segmentation of color images.
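The sketch below segments the chromatic component with the grayscale k-means algorithm; mapping the circular hue to (cos h, sin h) is one common way of handling the modulus-2π issue noted above, and is an assumption here rather than the paper's exact treatment.

```python
# K-means on the hue component, with the circular hue embedded as (cos h, sin h)
# so that hues of 0 and 2*pi, which are the same tint, coincide in feature space.
import numpy as np
from skimage.color import rgb2hsv
from sklearn.cluster import KMeans

rgb = np.random.rand(128, 128, 3)   # placeholder 24-bit color image scaled to [0, 1]
hue = rgb2hsv(rgb)[..., 0] * 2 * np.pi
features = np.stack([np.cos(hue).ravel(), np.sin(hue).ravel()], axis=1)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
segmentation = labels.reshape(hue.shape)
```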
Upper crustal densities derived from sea floor gravity measurements: Northern Juan De Fuca Ridge
Holmes, Mark L.; Johnson, H. Paul
1993-01-01
A transect of sea floor gravity stations has been analyzed to determine upper crustal densities on the Endeavour segment of the northern Juan de Fuca Ridge. Data were obtained using ALVIN along a corridor perpendicular to the axis of spreading, over crustal ages from 0 to 800,000 years. Calculated elevation factors from the gravity data show an abrupt increase in density with age (distance) for the upper 200 m of crust. This density change is interpreted as a systematic reduction in bulk porosity of the upper crustal section, from 23% for the axial ridge to 10% for the off-axis flanking ridges. The porosity decrease is attributed to the collapse and filling of large-scale voids as the abyssal hills move out of the crustal formation zone. Forward modeling of a plausible density structure for the near-axis region agrees with the observed anomaly data only if the model includes narrow, along-strike, low-density regions adjacent to both inner and outer flanks of the abyssal hills. The required low density zones could be regions of systematic upper crustal fracturing and faulting that were mapped by submersible observers and side-scan sonar images, and whose presence was suggested by the distribution of heat flow data in the same area.
Development of a semi-automated combined PET and CT lung lesion segmentation framework
NASA Astrophysics Data System (ADS)
Rossi, Farli; Mokri, Siti Salasiah; Rahni, Ashrani Aizzuddin Abd.
2017-03-01
Segmentation is one of the most important steps in automated medical diagnosis applications, and it affects the accuracy of the overall system. In this paper, we propose a semi-automated segmentation method for extracting lung lesions from thoracic PET/CT images by combining low-level processing and active contour techniques. The lesions are first segmented in the PET images, after conversion to standardised uptake values (SUVs). The segmented PET images then serve as an initial contour for subsequent active contour segmentation of the corresponding CT images. Accuracy was evaluated with the Jaccard Index (JI), comparing the segmented lesions against alternative segmentations from the QIN lung CT segmentation challenge, which is made possible by registering the whole-body PET/CT images to the corresponding thoracic CT images. The results show that our proposed technique has acceptable accuracy in lung lesion segmentation, with JI values of around 0.8, especially when considering the variability of the alternative segmentations.
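For reference, the Jaccard Index used above for evaluation can be computed directly from two binary masks, as in this sketch (the masks are toy examples, not the study's data).

```python
# Jaccard Index (intersection over union) between two binary segmentation masks.
import numpy as np

def jaccard_index(mask_a, mask_b):
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

ref = np.zeros((64, 64), dtype=bool); ref[20:40, 20:40] = True   # toy reference lesion
seg = np.zeros((64, 64), dtype=bool); seg[24:44, 22:42] = True   # toy segmented lesion
print(round(jaccard_index(seg, ref), 3))                         # about 0.56 here
```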
50 CFR 218.236 - Requirements for reporting.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Low Frequency Active (SURTASS LFA) Sonar § 218.236 Requirements for reporting. (a) The Holder of the..., and location of each vessel during each mission; (2) Information on sonar transmissions during each..., this report must contain an unclassified analysis of new passive sonar technologies and an assessment...
50 CFR 218.236 - Requirements for reporting.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Low Frequency Active (SURTASS LFA) Sonar § 218.236 Requirements for reporting. (a) The Holder of the..., and location of each vessel during each mission; (2) Information on sonar transmissions during each..., this report must contain an unclassified analysis of new passive sonar technologies and an assessment...
50 CFR 218.236 - Requirements for reporting.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Low Frequency Active (SURTASS LFA) Sonar § 218.236 Requirements for reporting. (a) The Holder of the..., and location of each vessel during each mission; (2) Information on sonar transmissions during each..., this report must contain an unclassified analysis of new passive sonar technologies and an assessment...
Intelligent multi-spectral IR image segmentation
NASA Astrophysics Data System (ADS)
Lu, Thomas; Luong, Andrew; Heim, Stephen; Patel, Maharshi; Chen, Kang; Chao, Tien-Hsin; Chow, Edward; Torres, Gilbert
2017-05-01
This article presents a neural-network-based multi-spectral image segmentation method. A neural network is trained on selected features of both the objects and the background in longwave (LW) infrared (IR) images. Multiple iterations of training are performed until the accuracy of the segmentation reaches a satisfactory level. The segmentation boundary of the LW image is used to segment the midwave (MW) and shortwave (SW) IR images. A second neural network detects local discontinuities and refines the accuracy of the local boundaries. This article compares the neural-network-based segmentation method to the wavelet-threshold and GrabCut methods. Test results have shown increased accuracy and robustness of this segmentation scheme for multi-spectral IR images.
Image Information Mining Utilizing Hierarchical Segmentation
NASA Technical Reports Server (NTRS)
Tilton, James C.; Marchisio, Giovanni; Koperski, Krzysztof; Datcu, Mihai
2002-01-01
The Hierarchical Segmentation (HSEG) algorithm is an approach for producing high quality, hierarchically related image segmentations. The VisiMine image information mining system utilizes clustering and segmentation algorithms for reducing visual information in multispectral images to a manageable size. The project discussed herein seeks to enhance the VisiMine system through incorporating hierarchical segmentations from HSEG into the VisiMine system.
Aberration correction in wide-field fluorescence microscopy by segmented-pupil image interferometry.
Scrimgeour, Jan; Curtis, Jennifer E
2012-06-18
We present a new technique for the correction of optical aberrations in wide-field fluorescence microscopy. Segmented-Pupil Image Interferometry (SPII) uses a liquid crystal spatial light modulator placed in the microscope's pupil plane to split the wavefront originating from a fluorescent object into an array of individual beams. Distortion of the wavefront arising from either system or sample aberrations results in displacement of the images formed from the individual pupil segments. Analysis of image registration allows for the local tilt in the wavefront at each segment to be corrected with respect to a central reference. A second correction step optimizes the image intensity by adjusting the relative phase of each pupil segment through image interferometry. This ensures that constructive interference between all segments is achieved at the image plane. Improvements in image quality are observed when Segmented-Pupil Image Interferometry is applied to correct aberrations arising from the microscope's optical path.
Anderson, Jeffrey R; Barrett, Steven F
2009-01-01
Image segmentation is the process of isolating distinct objects within an image. Computer algorithms have been developed to aid in the process of object segmentation, but a completely autonomous segmentation algorithm has yet to be developed [1]. This is because computers do not have the capability to understand images and recognize complex objects within the image. However, computer segmentation methods [2] requiring user input have been developed to quickly segment objects in serially sectioned images, such as magnetic resonance images (MRI) and confocal laser scanning microscope (CLSM) images. In these cases, the segmentation process becomes a powerful tool in visualizing the 3D nature of an object. The user input is an important part of improving the performance of many segmentation methods. A double-threshold segmentation method has been investigated [3] to separate objects in grayscale images where the gray level of the object lies among the gray levels of the background. In order to best determine the threshold values for this segmentation method, the image must be manipulated for optimal contrast. The same is true of other segmentation and edge-detection methods as well. Typically, the better the image contrast, the better the segmentation results. This paper describes a graphical user interface (GUI) that allows the user to easily change image contrast parameters that will optimize the performance of subsequent object segmentation. This approach makes use of the fact that the human brain is extremely effective at object recognition and understanding. The GUI provides the user with the ability to define the gray-scale range of the object of interest. The lower and upper bounds of this range are used in a histogram-stretching process to improve image contrast. Also, the user can interactively modify the gamma correction factor, which provides a nonlinear distribution of gray-scale values, while observing the corresponding changes to the image. This interactive approach gives the user the power to make optimal choices of the contrast enhancement parameters.
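The two interactive operations described above, linear histogram stretching between user-chosen bounds followed by gamma correction, reduce to a few lines of array arithmetic; the bounds and gamma value below simply stand in for the user's GUI inputs.

```python
# Contrast stretching between user-defined gray-level bounds, then gamma correction.
import numpy as np

def stretch_and_gamma(image, low, high, gamma=1.0):
    """Map [low, high] to [0, 1], clip values outside the range, then apply gamma."""
    stretched = np.clip((image.astype(float) - low) / (high - low), 0.0, 1.0)
    return stretched ** gamma

image = (np.random.rand(128, 128) * 255).astype(np.uint8)  # placeholder grayscale image
enhanced = stretch_and_gamma(image, low=60, high=180, gamma=0.8)
```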
Inverting a dispersive scene's side-scanned image
NASA Technical Reports Server (NTRS)
Harger, R. O.
1983-01-01
Consideration is given to the problem of using a remotely sensed, side-scanned image of a time-variant scene, which changes according to a dispersion relation, to estimate the structure at a given moment. Additive thermal noise is neglected in the models considered in the formal treatment. It is shown that the dispersion relation is normalized by the scanning velocity, as is the group scanning velocity component. An inversion operation is defined for noise-free images generated by SAR. The method is extended to the inversion of noisy imagery, and a formulation is defined for spectral density estimation. Finally, the methods for a radar system are used for the case of sonar.
Image segmentation using fuzzy LVQ clustering networks
NASA Technical Reports Server (NTRS)
Tsao, Eric Chen-Kuo; Bezdek, James C.; Pal, Nikhil R.
1992-01-01
In this note we formulate image segmentation as a clustering problem. Feature vectors extracted from a raw image are clustered into subregions, thereby segmenting the image. A fuzzy generalization of a Kohonen learning vector quantization (LVQ) which integrates the Fuzzy c-Means (FCM) model with the learning rate and updating strategies of the LVQ is used for this task. This network, which segments images in an unsupervised manner, is thus related to the FCM optimization problem. Numerical examples on photographic and magnetic resonance images are given to illustrate this approach to image segmentation.
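Because the fuzzy LVQ network is tied to the Fuzzy c-Means objective, a plain FCM sketch shows the membership and prototype updates that the network generalizes; the NumPy code below is not the authors' network, and the feature vectors here are just pixel intensities.

```python
# Plain Fuzzy c-Means sketch: alternate between updating cluster centers and
# fuzzy memberships u[n, i] = 1 / sum_j (d[n, i] / d[n, j])**(2 / (m - 1)).
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)  # memberships of each point sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return centers, U

image = np.random.rand(64, 64)  # placeholder grayscale image
centers, U = fuzzy_c_means(image.reshape(-1, 1), c=3)
segmentation = U.argmax(axis=1).reshape(image.shape)  # hard labels from fuzzy memberships
```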
50 CFR 216.170 - Specified activity and specified geographical region.
Code of Federal Regulations, 2013 CFR
2013-10-01
... incidental to the following activities: (1) The use of the following mid-frequency active sonar (MFAS) and high frequency active sonar (HFAS) sources, or similar sources, for Navy training activities (estimated amounts below): (1) The use of the following mid-frequency active sonar (MFAS) and high frequency active...
50 CFR 216.240 - Specified activity and specified geographical region.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Active Sonar Training (AFAST) § 216.240 Specified activity and specified geographical region. (a... Navy is only authorized if it occurs incidental to the use of the following mid-frequency active sonar (MFAS) sources, high frequency active sonar (HFAS) sources, explosive sonobuoys, or similar sources, for...
50 CFR 216.240 - Specified activity and specified geographical region.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Active Sonar Training (AFAST) § 216.240 Specified activity and specified geographical region. (a... Navy is only authorized if it occurs incidental to the use of the following mid-frequency active sonar (MFAS) sources, high frequency active sonar (HFAS) sources, explosive sonobuoys, or similar sources, for...
50 CFR 216.170 - Specified activity and specified geographical region.
Code of Federal Regulations, 2012 CFR
2012-10-01
... incidental to the following activities: (1) The use of the following mid-frequency active sonar (MFAS) and high frequency active sonar (HFAS) sources, or similar sources, for Navy training activities (estimated amounts below): (1) The use of the following mid-frequency active sonar (MFAS) and high frequency active...
2007-09-01
Investigation of the impact of sonar transmission on fisheries and habitat in the U.S. Navy’s USWTR: summary of stakeholder concerns and appropriate research areas. A table of specific public comments is included. Subject terms: sonar, USWTR, Navy, fish, fishery, fisherman, behavior.
Sonar Test and Test Instrumentation Support.
1976-11-10
Sonar Test and Test Instrumentation Support, quarterly progress report, Applied Research Laboratories, The University of Texas at Austin, 10 November 1976. Table of contents includes: I. Introduction; II. AN/FQM-10(V) Sonar Test Set Field Support (A. Introduction; B. Visit to NAVSHIPYD PEARL).
Sivle, Lise Doksæter; Kvadsheim, Petter Helgevold; Ainslie, Michael
2016-01-01
Effects of noise on fish populations may be predicted by the population consequence of acoustic disturbance (PCAD) model. We have predicted the potential risk of population disturbance when the highest sound exposure level (SEL) at which adult herring do not respond to naval sonar (SEL(0)) is exceeded. When the population density is low (feeding), the risk is low even at high sonar source levels and long-duration exercises (>24 h). With densely packed populations (overwintering), a sonar exercise might expose the entire population to levels >SEL(0) within a 24-h exercise period. However, the disturbance will be short and the response threshold used here is highly conservative. It is therefore unlikely that naval sonar will significantly impact the herring population.
USDA-ARS?s Scientific Manuscript database
Segmentation is the first step in image analysis to subdivide an image into meaningful regions. The segmentation result directly affects the subsequent image analysis. The objective of the research was to develop an automatic adjustable algorithm for segmentation of color images, using linear suppor...
Multiple Hypotheses Image Segmentation and Classification With Application to Dietary Assessment
Zhu, Fengqing; Bosch, Marc; Khanna, Nitin; Boushey, Carol J.; Delp, Edward J.
2016-01-01
We propose a method for dietary assessment to automatically identify and locate food in a variety of images captured during controlled and natural eating events. Two concepts are combined to achieve this: a set of segmented objects can be partitioned into perceptually similar object classes based on global and local features; and perceptually similar object classes can be used to assess the accuracy of image segmentation. These ideas are implemented by generating multiple segmentations of an image to select stable segmentations based on the classifier’s confidence score assigned to each segmented image region. Automatic segmented regions are classified using a multichannel feature classification system. For each segmented region, multiple feature spaces are formed. Feature vectors in each of the feature spaces are individually classified. The final decision is obtained by combining class decisions from individual feature spaces using decision rules. We show improved accuracy of segmenting food images with classifier feedback. PMID:25561457
Colour application on mammography image segmentation
NASA Astrophysics Data System (ADS)
Embong, R.; Aziz, N. M. Nik Ab.; Karim, A. H. Abd; Ibrahim, M. R.
2017-09-01
The segmentation process is one of the most important steps in image processing and computer vision, since it is vital in the initial stage of image analysis. Segmentation of medical images involves complex structures, and it requires the precise segmentation results that are necessary for clinical diagnosis, such as the detection of tumour, oedema, and necrotic tissues. Since mammography images are grayscale, researchers are looking at the effect of colour in the segmentation process of medical images. Colour is known to play a significant role in the perception of object boundaries in non-medical colour images. Processing colour images requires handling more data, hence providing a richer description of objects in the scene. Colour images contain ten percent (10%) additional edge information compared with their grayscale counterparts. Nevertheless, edge detection in colour images is more challenging than in grayscale images, as a colour space is considered a vector space. In this study, we applied red, green, yellow, and blue colour maps to grayscale mammography images with the purpose of testing the effect of colours on the segmentation of abnormality regions in the mammography images. We applied the segmentation process using the Fuzzy C-means algorithm and evaluated the percentage of average relative error of area for each colour type. The results showed that segmentation with all colour maps can be done successfully, even for blurred and noisy images. The segmented area of the abnormality region is also smaller than that obtained without a colour map. The green colour map segmentation produced the smallest percentage of average relative error (10.009%), while the yellow colour map segmentation gave the largest percentage of relative error (11.367%).
Scalable Joint Segmentation and Registration Framework for Infant Brain Images.
Dong, Pei; Wang, Li; Lin, Weili; Shen, Dinggang; Wu, Guorong
2017-03-15
The first year of life is the most dynamic and perhaps the most critical phase of postnatal brain development. The ability to accurately measure structural changes is critical in early brain development studies, which rely heavily on the performance of image segmentation and registration techniques. However, either infant image segmentation or registration, if deployed independently, encounters many more challenges than segmentation/registration of adult brains due to the dynamic appearance changes that accompany rapid brain development. In fact, image segmentation and registration of infant images can assist each other in overcoming these challenges by using the growth trajectories (i.e., temporal correspondences) learned from a large set of training subjects with complete longitudinal data. Specifically, a one-year-old image with ground-truth tissue segmentation can first be set as the reference domain. Then, to register the infant image of a new subject at an earlier age, we can estimate its tissue probability maps with a sparse patch-based multi-atlas label fusion technique, where only the training images at the respective age are considered as atlases since they have similar image appearance. Next, these probability maps can be fused as a good initialization to guide the level set segmentation. Thus, image registration between the new infant image and the reference image is free of the difficulty caused by appearance changes, by establishing correspondences upon the reasonably segmented images. Importantly, the segmentation of the new infant image can be further enhanced by propagating the much more reliable label fusion heuristics at the reference domain to the corresponding locations of the new infant image via the learned growth trajectories, which allows image segmentation and registration to assist each other. It is worth noting that our joint segmentation and registration framework is also flexible enough to handle the registration of any two infant images, even with a significant age gap in the first year of life, by linking their joint segmentation and registration through the reference domain. Thus, our proposed joint segmentation and registration method is scalable to various registration tasks in early brain development studies. Promising segmentation and registration results have been achieved for infant brain MR images aged from 2 weeks to 1 year old, indicating the applicability of our method in early brain development studies.
Zheng, Qiang; Warner, Steven; Tasian, Gregory; Fan, Yong
2018-02-12
Automatic segmentation of kidneys in ultrasound (US) images remains a challenging task because of high speckle noise, low contrast, and large appearance variations of kidneys in US images. Because texture features may improve the US image segmentation performance, we propose a novel graph cuts method to segment kidney in US images by integrating image intensity information and texture feature maps. We develop a new graph cuts-based method to segment kidney US images by integrating original image intensity information and texture feature maps extracted using Gabor filters. To handle large appearance variation within kidney images and improve computational efficiency, we build a graph of image pixels close to kidney boundary instead of building a graph of the whole image. To make the kidney segmentation robust to weak boundaries, we adopt localized regional information to measure similarity between image pixels for computing edge weights to build the graph of image pixels. The localized graph is dynamically updated and the graph cuts-based segmentation iteratively progresses until convergence. Our method has been evaluated based on kidney US images of 85 subjects. The imaging data of 20 randomly selected subjects were used as training data to tune parameters of the image segmentation method, and the remaining data were used as testing data for validation. Experiment results demonstrated that the proposed method obtained promising segmentation results for bilateral kidneys (average Dice index = 0.9446, average mean distance = 2.2551, average specificity = 0.9971, average accuracy = 0.9919), better than other methods under comparison (P < .05, paired Wilcoxon rank sum tests). The proposed method achieved promising performance for segmenting kidneys in two-dimensional US images, better than segmentation methods built on any single channel of image information. This method will facilitate extraction of kidney characteristics that may predict important clinical outcomes such as progression of chronic kidney disease. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
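The texture channel can be sketched as the magnitude responses of a small Gabor filter bank, computed here with scikit-image; the frequencies and orientations are illustrative rather than the paper's settings, and the graph-cut stage itself is not shown.

```python
# Gabor filter bank feature maps to supplement intensity for kidney segmentation.
import numpy as np
from skimage.filters import gabor

def gabor_feature_maps(gray, frequencies=(0.1, 0.2), n_orientations=4):
    maps = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(gray, frequency=f, theta=theta)
            maps.append(np.hypot(real, imag))  # magnitude of the complex response
    return np.stack(maps, axis=-1)             # H x W x (len(frequencies) * n_orientations)

gray = np.random.rand(128, 128)                # placeholder ultrasound slice
texture_features = gabor_feature_maps(gray)    # to be combined with the intensity channel
```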
The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce
NASA Astrophysics Data System (ADS)
Chen, Xi; Zhou, Liqing
2015-12-01
With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional remote sensing image segmentation techniques cannot meet the processing and storage requirements of massive remote sensing imagery. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process and builds a cheap and efficient computer cluster that uses parallel processing to implement the mean shift algorithm for remote sensing image segmentation based on the MapReduce model. This not only ensures the quality of the segmentation but also improves the segmentation speed and better meets real-time requirements. The MapReduce-based parallel mean shift segmentation algorithm therefore has practical significance and realizable value.
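A toy sketch of the 'map' side of such a scheme follows: each image tile is clustered independently with mean shift, and the resulting modes would then be merged in a reduce step. The tile size, bandwidth, and use of scikit-learn are assumptions, not the article's cluster implementation.

```python
# Map step of a MapReduce-style mean shift: cluster each tile independently and
# emit its modes; a reduce step would merge nearby modes across tiles.
import numpy as np
from sklearn.cluster import MeanShift

def map_step(tile, bandwidth=0.2):
    X = tile.reshape(-1, tile.shape[-1]) if tile.ndim == 3 else tile.reshape(-1, 1)
    return MeanShift(bandwidth=bandwidth).fit(X).cluster_centers_

image = np.random.rand(128, 128, 3)  # placeholder multispectral image
tiles = [image[r:r + 64, c:c + 64] for r in (0, 64) for c in (0, 64)]
all_modes = np.vstack([map_step(t) for t in tiles])  # input to the reduce step
```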
A Unified Approach to Passive and Active Ocean Acoustic Waveguide Remote Sensing
2012-09-30
A paper reporting that acoustic sensing reveals humpback whale behavior synchronous with herring spawning processes, and that sonar had no effect on humpback song, has been submitted. The approach uses source and receiver arrays to enable instantaneous continental-shelf-scale imaging and continuous monitoring of fish and whale populations. Preliminary analysis shows that humpback whale behavior is synchronous with peak annual Atlantic herring spawning processes in the Gulf of ...
Contrast Analysis for Side-Looking Sonar
2013-09-30
A bound for shadow depth that can be used to validate modeling tools such as SWAT (Shallow Water Acoustics Toolkit). • Adaptive Postprocessing: tune image ...
Unmanned Underwater Vehicle (UUV) Information Study
2014-11-28
Glossary entries include: Maritime Unmanned System; NATO, North Atlantic Treaty Organization; Unmanned Aerial System; UDA, Underwater Domain Awareness; UNISIPS, Unified Sonar Image Processing System; USV, Unmanned Surface Vehicle; UUV, Unmanned Underwater Vehicle. The study also addresses data distribution to ashore systems, such as the delay, its impact and the benefits to the overall MDA, and the required metadata for efficient search ...
A combined learning algorithm for prostate segmentation on 3D CT images.
Ma, Ling; Guo, Rongrong; Zhang, Guoyi; Schuster, David M; Fei, Baowei
2017-11-01
Segmentation of the prostate on CT images has many applications in the diagnosis and treatment of prostate cancer. Because of the low soft-tissue contrast on CT images, prostate segmentation is a challenging task. A learning-based segmentation method is proposed for the prostate on three-dimensional (3D) CT images. We combine population-based and patient-based learning methods for segmenting the prostate on CT images. Population data can provide useful information to guide the segmentation processing. Because of inter-patient variations, patient-specific information is particularly useful to improve the segmentation accuracy for an individual patient. In this study, we combine a population learning method and a patient-specific learning method to improve the robustness of prostate segmentation on CT images. We train a population model based on the data from a group of prostate patients. We also train a patient-specific model based on the data of the individual patient and incorporate the information as marked by the user interaction into the segmentation processing. We calculate the similarity between the two models to obtain applicable population and patient-specific knowledge to compute the likelihood of a pixel belonging to the prostate tissue. A new adaptive threshold method is developed to convert the likelihood image into a binary image of the prostate, and thus complete the segmentation of the gland on CT images. The proposed learning-based segmentation algorithm was validated using 3D CT volumes of 92 patients. All of the CT image volumes were manually segmented independently three times by two clinically experienced radiologists, and the manual segmentation results served as the gold standard for evaluation. The experimental results show that the segmentation method achieved a Dice similarity coefficient of 87.18 ± 2.99%, compared to the manual segmentation. By combining the population learning and patient-specific learning methods, the proposed method is effective for segmenting the prostate on 3D CT images. The prostate CT segmentation method can be used in various applications including volume measurement and treatment planning of the prostate. © 2017 American Association of Physicists in Medicine.
Multivariate statistical model for 3D image segmentation with application to medical images.
John, Nigel M; Kabuka, Mansur R; Ibrahim, Mohamed O
2003-12-01
In this article we describe a statistical model that was developed to segment brain magnetic resonance images. The statistical segmentation algorithm was applied after a pre-processing stage involving the use of a 3D anisotropic filter along with histogram equalization techniques. The segmentation algorithm makes use of prior knowledge and a probability-based multivariate model designed to semi-automate the process of segmentation. The algorithm was applied to images obtained from the Center for Morphometric Analysis at Massachusetts General Hospital as part of the Internet Brain Segmentation Repository (IBSR). The developed algorithm showed improved accuracy over the k-means, adaptive maximum a posteriori probability (MAP), biased MAP, and other algorithms. Experimental results showing the segmentation and the results of comparisons with other algorithms are provided. Results are based on an overlap criterion against expertly segmented images from the IBSR. The algorithm produced average results of approximately 80% overlap with the expertly segmented images (compared with 85% for manual segmentation and 55% for other algorithms).
Nanthagopal, A Padma; Rajamony, R Sukanesh
2012-07-01
The proposed system provides new textural information for segmenting tumours efficiently, accurately, and with less computational time from benign and malignant tumour images, especially for small tumour regions in computed tomography (CT) images. Region-based segmentation of tumours from brain CT image data is an important but time-consuming task performed manually by medical experts. The objective of this work is to segment brain tumours from CT images using combined grey-level and texture features together with new edge features and a nonlinear support vector machine (SVM) classifier. The selected optimal features are used to model and train the nonlinear SVM classifier to segment the tumour from computed tomography images, and the segmentation accuracy is evaluated for each slice of the tumour image. The method is applied to real data of 80 benign and malignant tumour images. The results are compared with the radiologist-labelled ground truth. Quantitative analysis between the ground truth and the segmented tumour is presented in terms of segmentation accuracy and the Dice overlap similarity measure. From this analysis, it is inferred that better segmentation accuracy and a higher Dice metric are achieved with the normalized cut segmentation method than with the fuzzy c-means clustering method.
Kvadsheim, Petter H.; Lam, Frans-Peter A.; von Benda-Beckmann, Alexander M.; Sivle, Lise D.; Visser, Fleur; Curé, Charlotte; Tyack, Peter L.; Miller, Patrick J. O.
2017-01-01
Exposure to underwater sound can cause permanent hearing loss and other physiological effects in marine animals. To reduce this risk, naval sonars are sometimes gradually increased in intensity at the start of transmission (‘ramp-up’). Here, we conducted experiments in which tagged humpback whales were approached with a ship to test whether a sonar operation preceded by ramp-up reduced three risk indicators – maximum sound pressure level (SPLmax), cumulative sound exposure level (SELcum) and minimum source–whale range (Rmin) – compared with a sonar operation not preceded by ramp-up. Whales were subject to one no-sonar control session and either two successive ramp-up sessions (RampUp1, RampUp2) or a ramp-up session (RampUp1) and a full-power session (FullPower). Full-power sessions were conducted only twice; for other whales we used acoustic modelling that assumed transmission of the full-power sequence during their no-sonar control. Averaged over all whales, risk indicators in RampUp1 (n=11) differed significantly from those in FullPower (n=12) by −3.0 dB (SPLmax), −2.0 dB (SELcum) and +168 m (Rmin), but not significantly from those in RampUp2 (n=9). Only five whales in RampUp1, four whales in RampUp2 and none in FullPower or control sessions avoided the sound source. For RampUp1, we found statistically significant differences in risk indicators between whales that avoided the sonar and whales that did not: −4.7 dB (SPLmax), −3.4 dB (SELcum) and +291 m (Rmin). In contrast, for RampUp2, these differences were smaller and not significant. This study suggests that sonar ramp-up has a positive but limited mitigative effect for humpback whales overall, but that ramp-up can reduce the risk of harm more effectively in situations when animals are more responsive and likely to avoid the sonar, e.g. owing to novelty of the stimulus, when they are in the path of an approaching sonar ship. PMID:29141878
A Review on Segmentation of Positron Emission Tomography Images
Foster, Brent; Bagci, Ulas; Mansoor, Awais; Xu, Ziyue; Mollura, Daniel J.
2014-01-01
Positron Emission Tomography (PET), a non-invasive functional imaging method at the molecular level, images the distribution of biologically targeted radiotracers with high sensitivity. PET imaging provides detailed quantitative information about many diseases and is often used to evaluate inflammation, infection, and cancer by detecting emitted photons from a radiotracer localized to abnormal cells. In order to differentiate abnormal tissue from surrounding areas in PET images, image segmentation methods play a vital role; therefore, accurate image segmentation is often necessary for proper disease detection, diagnosis, treatment planning, and follow-ups. In this review paper, we present state-of-the-art PET image segmentation methods, as well as the recent advances in image segmentation techniques. In order to make this manuscript self-contained, we also briefly explain the fundamentals of PET imaging, the challenges of diagnostic PET image analysis, and the effects of these challenges on the segmentation results. PMID:24845019
A validation framework for brain tumor segmentation.
Archip, Neculai; Jolesz, Ferenc A; Warfield, Simon K
2007-10-01
We introduce a validation framework for the segmentation of brain tumors from magnetic resonance (MR) images. A novel unsupervised semiautomatic brain tumor segmentation algorithm is also presented. The proposed framework consists of 1) T1-weighted MR images of patients with brain tumors, 2) segmentation of brain tumors performed by four independent experts, 3) segmentation of brain tumors generated by a semiautomatic algorithm, and 4) a software tool that estimates the performance of segmentation algorithms. We demonstrate the validation of the novel segmentation algorithm within the proposed framework. We show its performance and compare it with existent segmentation. The image datasets and software are available at http://www.brain-tumor-repository.org/. We present an Internet resource that provides access to MR brain tumor image data and segmentation that can be openly used by the research community. Its purpose is to encourage the development and evaluation of segmentation methods by providing raw test and image data, human expert segmentation results, and methods for comparing segmentation results.
A Review of Algorithms for Segmentation of Optical Coherence Tomography from Retina
Kafieh, Raheleh; Rabbani, Hossein; Kermani, Saeed
2013-01-01
Optical coherence tomography (OCT) is a recently established imaging technique used to describe different information about the internal structures of an object and to image various aspects of biological tissues. OCT image segmentation is mostly applied to retinal OCT to localize the intra-retinal boundaries. Here, we review some of the important image segmentation methods for processing retinal OCT images. We classify the OCT segmentation approaches into five distinct groups according to the image domain subjected to the segmentation algorithm. Current research in OCT segmentation is mostly focused on improving accuracy and precision and on reducing the required processing time. There is no doubt that current 3-D imaging modalities are now moving research projects toward volume segmentation along with 3-D rendering and visualization. It is also important to develop robust methods capable of dealing with pathologic cases in OCT imaging. PMID:24083137
NASA Astrophysics Data System (ADS)
Afifi, Ahmed; Nakaguchi, Toshiya; Tsumura, Norimichi
2010-03-01
In many medical applications, the automatic segmentation of deformable organs from medical images is indispensable, and its accuracy is of special interest. However, the automatic segmentation of these organs is a challenging task owing to their complex shapes. Moreover, medical images usually contain noise, clutter, or occlusion, so considering the image information alone often leads to poor segmentation. In this paper, we propose a fully automated technique for the segmentation of deformable organs from medical images. In this technique, the segmentation is performed by fitting a nonlinear shape model to pre-segmented images. Kernel principal component analysis (KPCA) is utilized to capture the complex organ deformations and to construct the nonlinear shape model. The pre-segmentation is carried out by labeling each pixel according to its high-level texture features extracted using an overcomplete wavelet packet decomposition. Furthermore, to guarantee an accurate fit between the nonlinear model and the pre-segmented images, the particle swarm optimization (PSO) algorithm is employed to adapt the model parameters to the new images. In this paper, we demonstrate the competence of the proposed technique by applying it to liver segmentation from computed tomography (CT) scans of different patients.
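The nonlinear shape-model step can be sketched with kernel PCA as below; the shape representation, RBF kernel, and component count are placeholders, and the PSO-driven fitting of the model to a pre-segmented image is omitted.

```python
# Kernel PCA shape model sketch: learn a low-dimensional nonlinear shape space from
# training shapes and map candidate parameters back to shape space.
import numpy as np
from sklearn.decomposition import KernelPCA

train_shapes = np.random.rand(40, 2 * 50)  # placeholder: 40 shapes, 50 (x, y) landmarks each
kpca = KernelPCA(n_components=8, kernel="rbf", gamma=0.5, fit_inverse_transform=True)
codes = kpca.fit_transform(train_shapes)   # low-dimensional shape parameters

# a candidate parameter vector (e.g. one proposed by PSO) mapped back to shape space
candidate = codes.mean(axis=0, keepdims=True)
reconstructed_shape = kpca.inverse_transform(candidate)
```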
Use of handheld sonar to locate a missing diver.
McGrane, Owen; Cronin, Aaron; Hile, David
2013-03-01
The purpose of this study was to investigate whether a handheld sonar device significantly reduces the mean time needed to locate a missing diver. This institutional review board approved, prospective, crossover study used a voluntary convenience sample of 10 scuba divers. Participants conducted both a standard and modified search to locate a simulated missing diver. The standard search utilized a conventional search pattern starting at the point where the missing diver (simulated) was last seen. The modified search used a sonar beacon to augment the search. For each search method, successful completion of the search was defined as locating the missing diver within 40 minutes. Twenty total dives were completed. Using a standard search pattern, the missing diver was found by only 1 diver (10%), taking 18 minutes and 45 seconds. In the sonar-assisted search group, the missing diver was found by all 10 participants (100%), taking an average of 2 minutes and 47 seconds (SD 1 minute, 20 seconds). Using the nonparametric related samples Wilcoxon signed rank test, actual times between the sonar group and the standard group were significant (P < .01). Using paired samples t tests, the sonar group's self-assessed confidence increased significantly after using the sonar (P < .001), whereas the standard group decreased in confidence (not statistically significant, P = .111). Handheld sonar significantly reduces the mean duration to locate a missing diver as well as increasing users' confidence in their ability to find a missing diver when compared with standard search techniques. Copyright © 2013 Wilderness Medical Society. Published by Elsevier Inc. All rights reserved.
A Unified Analysis of Structured Sonar-terrain Data using Bayesian Functional Mixed Models.
Zhu, Hongxiao; Caspers, Philip; Morris, Jeffrey S; Wu, Xiaowei; Müller, Rolf
2018-01-01
Sonar emits pulses of sound and uses the reflected echoes to gain information about target objects. It offers a low cost, complementary sensing modality for small robotic platforms. While existing analytical approaches often assume independence across echoes, real sonar data can have more complicated structures due to device setup or experimental design. In this paper, we consider sonar echo data collected from multiple terrain substrates with a dual-channel sonar head. Our goals are to identify the differential sonar responses to terrains and study the effectiveness of this dual-channel design in discriminating targets. We describe a unified analytical framework that achieves these goals rigorously, simultaneously, and automatically. The analysis was done by treating the echo envelope signals as functional responses and the terrain/channel information as covariates in a functional regression setting. We adopt functional mixed models that facilitate the estimation of terrain and channel effects while capturing the complex hierarchical structure in data. This unified analytical framework incorporates both Gaussian models and robust models. We fit the models using a full Bayesian approach, which enables us to perform multiple inferential tasks under the same modeling framework, including selecting models, estimating the effects of interest, identifying significant local regions, discriminating terrain types, and describing the discriminatory power of local regions. Our analysis of the sonar-terrain data identifies time regions that reflect differential sonar responses to terrains. The discriminant analysis suggests that a multi- or dual-channel design achieves target identification performance comparable with or better than a single-channel design.
Afshar, Yaser; Sbalzarini, Ivo F.
2016-01-01
Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144
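As a rough illustration of the decomposition idea (not the Discrete Region Competition algorithm itself), the sketch below splits a volume into slabs, lets each MPI rank run a placeholder local segmentation, and gathers the result; the sizes, threshold and script name are invented.

```python
# Toy sketch of distributed slab decomposition for large-volume segmentation.
# Run with e.g.:  mpirun -n 4 python slab_segmentation.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    volume = np.random.rand(64, 128, 128).astype(np.float32)  # stand-in for a huge image
    slabs = np.array_split(volume, size, axis=0)               # decompose along z
else:
    slabs = None

slab = comm.scatter(slabs, root=0)        # each rank receives one sub-image
local_mask = slab > 0.5                   # placeholder for the local segmentation step

# Gather the per-rank masks back; a real implementation would instead exchange
# halo layers between neighbouring ranks and resolve label conflicts there.
masks = comm.gather(local_mask, root=0)
if rank == 0:
    full_mask = np.concatenate(masks, axis=0)
    print("segmented voxels:", int(full_mask.sum()), "of", full_mask.size)
```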
NASA Astrophysics Data System (ADS)
Parson, L.; Murton, B.; Sauter, D.; Curewitz, D.; Okino, K.; German, C.; Leven, J.
2001-12-01
Deep-tow sidescan sonar data (TOBI, 30 kHz) acquired over more than 200 km of the Central Indian Ridge during RRS Charles Darwin cruise CD127 reveal an abundance of neovolcanic activity throughout both spreading segments and ridge non-transform discontinuities alike. Imagery of the previously unsurveyed northern section of the CIR immediately south of the Marie Celeste Fracture Zone confirms the presence of a shallow, magmatically inflated second order segment that is only recently rifted, with a rift floor surfaced throughout by virtually untectonised planar sheet flow units. First and second order segments exhibit a significant component of sheeted extrusives, ponded or in lake form, abutting or overstepped by hummocky and mounded pillow constructs. Non-transform discontinuities are commonly cut by fresh axial volcanic ridges oblique to both axial trend and offset. The depths of segment centers range from 2600 m to more than 3700 m, and segment forms include robust, hour-glass and rifted/starved end-members - but their overall extrusive pattern is strikingly invariant. Fracture Zone offsets of up to 65 kilometres are tectonically dominated, but their intersections with the axis are often mantled by multiple sheet flows rather than the relatively low proportions of sediment cover. The largest offsets are marked by outcrops of multiple, subparallel displacement surfaces, actively eroding transverse ridges, and ridge transform intersections with classic propagation/recession fabrics - each suggesting some instability in regional plate kinematics. While it is tempting to speculate that the Rodrigues hotspot appears to have a regional effect, enhancing magmatic delivery to the adjacent ridge and offset system, the apparent breadth of influence from what is assumed to be a rather feeble mantle anomaly is problematic.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-29
... DEPARTMENT OF DEFENSE Department of the Navy Record of Decision for Surveillance Towed Array Sensor System Low Frequency Active Sonar AGENCY: Department of the Navy, DoD. ACTION: Notice of decision... to employ up to four Surveillance Towed Array Sensor System Low Frequency Active (SURTASS LFA) sonar...
50 CFR 216.170 - Specified activity and specified geographical region.
Code of Federal Regulations, 2011 CFR
2011-10-01
... incidental to the following activities: (1) The use of the following mid-frequency active sonar (MFAS) and high frequency active sonar (HFAS) sources, or similar sources, for Navy training activities (estimated amounts below): (i) AN/SQS-53 (hull-mounted sonar)—up to 6420 hours over the course of 5 years (an average...
Mechanical Systems Development and Integration for a Second Generation Robot Submarine.
1980-05-01
for various scientific endeavors. As such, there will be times when the submarine must be disassembled for maintenance. This chapter is intended... Equipment locations listed include the STBD and Port Side Scan Arrays, Communications Sonar, Pinger, Bottom Finding Sonar, Collision Avoidance Sonar, and Gel Cell Battery.
Sonar equations for planetary exploration.
Ainslie, Michael A; Leighton, Timothy G
2016-08-01
The set of formulations commonly known as "the sonar equations" has for many decades been used to quantify the performance of sonar systems in terms of their ability to detect and localize objects submerged in seawater. The efficacy of the sonar equations, with individual terms evaluated in decibels, is well established in Earth's oceans. The sonar equations have also been used for missions to other planets and moons in the solar system, for which they are shown here to be less suitable. While it would be preferable to undertake high-fidelity acoustical calculations to support the planning, execution, and interpretation of acoustic data from planetary probes, applying the sonar equations to future missions to such extraterrestrial bodies requires awareness of the pitfalls pointed out in this paper if errors are to be avoided. There is a need to reexamine the assumptions, practices, and calibrations that work well for Earth to ensure that the sonar equations, used in combination with the decibel, can be accurately applied to extraterrestrial scenarios. Examples are given for icy oceans such as those on Europa and Ganymede, for Titan's hydrocarbon lakes, and for the gaseous atmospheres of (for example) Jupiter and Venus.
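For reference, a minimal sketch of the textbook active (monostatic) sonar equation in decibel form, SE = SL - 2 TL + TS - (NL - DI) - DT; the function name and the numeric values below are illustrative placeholders, not taken from the paper.

```python
# Hedged sketch of the textbook active (monostatic) sonar equation in decibel form.
def signal_excess(SL, TL, TS, NL, DI, DT):
    """All terms in dB; a positive signal excess suggests a detectable echo."""
    return SL - 2.0 * TL + TS - (NL - DI) - DT

# Example with made-up values for a nominal Earth-ocean case.
print(signal_excess(SL=220.0, TL=60.0, TS=-15.0, NL=70.0, DI=20.0, DT=10.0))  # 25.0 dB
```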
NASA Astrophysics Data System (ADS)
Zhou, Xiangrong; Yamada, Kazuma; Kojima, Takuya; Takayama, Ryosuke; Wang, Song; Zhou, Xinxin; Hara, Takeshi; Fujita, Hiroshi
2018-02-01
The purpose of this study is to evaluate and compare the performance of modern deep learning techniques for automatically recognizing and segmenting multiple organ regions in 3D CT images. CT image segmentation is one of the most important tasks in medical image analysis and is still very challenging. Deep learning approaches have demonstrated the capability of scene recognition and semantic segmentation on natural images and have been used to address segmentation problems in medical images. Although several works have shown promising results for CT image segmentation using deep learning, there has been no comprehensive evaluation of deep learning segmentation performance across multiple organs and different portions of CT scans. In this paper, we evaluated and compared the segmentation performance of two deep learning approaches based on 2D and 3D deep convolutional neural networks (CNNs), with and without a pre-processing step. A conventional approach representing the state of the art in CT image segmentation without deep learning was also used for comparison. A dataset of 240 CT images covering different portions of the human body was used for performance evaluation. Up to 17 types of organ regions in each CT scan were segmented automatically and compared with human annotations using the ratio of intersection over union (IU) as the criterion. The experimental results showed mean IUs of 79% and 67%, averaged over the 17 organ types, for the 3D and 2D deep CNNs, respectively. All results of the deep learning approaches showed better accuracy and robustness than the conventional segmentation method based on probabilistic atlas and graph-cut methods. The effectiveness and usefulness of deep learning approaches were thus demonstrated for the multiple-organ segmentation problem on 3D CT images.
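A minimal sketch of the intersection-over-union (IU) criterion used for the comparison above; the label maps are toy arrays, not the study's data.

```python
# Intersection over union for one organ label.
import numpy as np

def iou(pred, truth, label):
    p, t = (pred == label), (truth == label)
    inter = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    return inter / union if union else float("nan")

pred  = np.array([[0, 1, 1], [0, 1, 2], [0, 2, 2]])
truth = np.array([[0, 1, 1], [0, 2, 2], [0, 2, 2]])
print(iou(pred, truth, label=2))  # 0.75: 3 overlapping voxels out of 4 in the union
```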
Kumar, Rajesh; Srivastava, Subodh; Srivastava, Rajeev
2017-07-01
For cancer detection from microscopic biopsy images, the image segmentation step used for segmenting cells and nuclei plays an important role, and the accuracy of the segmentation approach dominates the final results. Microscopic biopsy images also carry intrinsic Poisson noise, and if it is present the segmentation results may not be accurate. The objective is to propose an efficient fuzzy c-means based segmentation approach that can also handle the noise present in the image during the segmentation process itself, i.e., noise removal and segmentation are combined in one step. To address these issues, this paper proposes a fourth-order partial differential equation (FPDE) based nonlinear filter adapted to Poisson noise, combined with fuzzy c-means segmentation. This approach effectively handles blocky artifacts during segmentation while achieving a good trade-off between Poisson noise removal and edge preservation of the microscopic biopsy images used for cancer detection from cells. The proposed approach is tested on a breast cancer microscopic biopsy data set with region of interest (ROI) segmented ground-truth images. The data set contains 31 benign and 27 malignant images of size 896 × 768, and the ROI ground truth is available for all 58 images. Finally, the results obtained with the proposed approach are compared with those of popular segmentation algorithms: fuzzy c-means, color k-means, texture-based segmentation, and total variation fuzzy c-means. The experimental results show that the proposed approach provides better results in terms of various performance measures, such as Jaccard coefficient, Dice index, Tanimoto coefficient, area under curve, accuracy, true positive rate, true negative rate, false positive rate, false negative rate, random index, global consistency error, and variance of information, compared with the other segmentation approaches used for cancer detection. Copyright © 2017 Elsevier B.V. All rights reserved.
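A bare-bones sketch of the fuzzy c-means clustering step on pixel intensities (fuzziness m = 2); the coupled FPDE-based Poisson denoising of the proposed approach is not reproduced here, and the synthetic intensities are placeholders.

```python
import numpy as np

def fcm(x, c=3, m=2.0, iters=100, eps=1e-6):
    x = x.reshape(-1, 1).astype(float)
    centers = np.quantile(x, np.linspace(0.1, 0.9, c)).reshape(-1, 1)  # spread-out init
    for _ in range(iters):
        d = np.abs(x - centers.T) + eps                  # (N, c) distances to the centers
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)         # fuzzy memberships, rows sum to 1
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]   # weighted center update
    return u, centers

intensities = np.concatenate([np.random.normal(40, 5, 500),
                              np.random.normal(120, 8, 500),
                              np.random.normal(200, 6, 500)])
u, centers = fcm(intensities)
print(np.sort(centers.ravel()))   # roughly [40, 120, 200]
```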
Cellular image segmentation using n-agent cooperative game theory
NASA Astrophysics Data System (ADS)
Dimock, Ian B.; Wan, Justin W. L.
2016-03-01
Image segmentation is an important problem in computer vision and has significant applications in the segmentation of cellular images. Many different imaging techniques exist and produce a variety of image properties which pose difficulties to image segmentation routines. Bright-field images are particularly challenging because of the non-uniform shape of the cells, the low contrast between cells and background, and imaging artifacts such as halos and broken edges. Classical segmentation techniques often produce poor results on these challenging images. Previous attempts at bright-field imaging are often limited in scope to the images that they segment. In this paper, we introduce a new algorithm for automatically segmenting cellular images. The algorithm incorporates two game theoretic models which allow each pixel to act as an independent agent with the goal of selecting their best labelling strategy. In the non-cooperative model, the pixels choose strategies greedily based only on local information. In the cooperative model, the pixels can form coalitions, which select labelling strategies that benefit the entire group. Combining these two models produces a method which allows the pixels to balance both local and global information when selecting their label. With the addition of k-means and active contour techniques for initialization and post-processing purposes, we achieve a robust segmentation routine. The algorithm is applied to several cell image datasets including bright-field images, fluorescent images and simulated images. Experiments show that the algorithm produces good segmentation results across the variety of datasets which differ in cell density, cell shape, contrast, and noise levels.
Patient-specific semi-supervised learning for postoperative brain tumor segmentation.
Meier, Raphael; Bauer, Stefan; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio
2014-01-01
In contrast to preoperative brain tumor segmentation, the problem of postoperative brain tumor segmentation has been rarely approached so far. We present a fully-automatic segmentation method using multimodal magnetic resonance image data and patient-specific semi-supervised learning. The idea behind our semi-supervised approach is to effectively fuse information from both pre- and postoperative image data of the same patient to improve segmentation of the postoperative image. We pose image segmentation as a classification problem and solve it by adopting a semi-supervised decision forest. The method is evaluated on a cohort of 10 high-grade glioma patients, with segmentation performance and computation time comparable or superior to a state-of-the-art brain tumor segmentation method. Moreover, our results confirm that the inclusion of preoperative MR images leads to better performance regarding postoperative brain tumor segmentation.
Moretti, David; Thomas, Len; Marques, Tiago; Harwood, John; Dilley, Ashley; Neales, Bert; Shaffer, Jessica; McCarthy, Elena; New, Leslie; Jarvis, Susan; Morrissey, Ronald
2014-01-01
There is increasing concern about the potential effects of noise pollution on marine life in the world's oceans. For marine mammals, anthropogenic sounds may cause behavioral disruption, and this can be quantified using a risk function that relates sound exposure to a measured behavioral response. Beaked whales are a taxon of deep diving whales that may be particularly susceptible to naval sonar as the species has been associated with sonar-related mass stranding events. Here we derive the first empirical risk function for Blainville's beaked whales (Mesoplodon densirostris) by combining in situ data from passive acoustic monitoring of animal vocalizations and navy sonar operations with precise ship tracks and sound field modeling. The hydrophone array at the Atlantic Undersea Test and Evaluation Center, Bahamas, was used to locate vocalizing groups of Blainville's beaked whales and identify sonar transmissions before, during, and after Mid-Frequency Active (MFA) sonar operations. Sonar transmission times and source levels were combined with ship tracks using a sound propagation model to estimate the received level (RL) at each hydrophone. A generalized additive model was fitted to the data to model the presence or absence of the start of foraging dives in 30-minute periods as a function of the corresponding sonar RL at the hydrophone closest to the center of each group. This model was then used to construct a risk function that can be used to estimate the probability of a behavioral change (cessation of foraging) that the individual members of a Blainville's beaked whale population might experience as a function of sonar RL. The function predicts a 0.5 probability of disturbance at an RL of 150 dB rms re 1 µPa (CI: 144 to 155). This is 15 dB lower than the level used historically by the US Navy in their risk assessments but 10 dB higher than the current 140 dB step-function.
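An illustrative logistic risk curve pinned to the reported 0.5 disturbance probability at 150 dB rms; the slope value is a made-up placeholder, and this is not the fitted GAM-based function from the study.

```python
import math

def p_disturbance(rl_db, rl50=150.0, slope=0.25):
    """Probability of response at received level rl_db (dB), logistic in shape."""
    return 1.0 / (1.0 + math.exp(-slope * (rl_db - rl50)))

for rl in (140, 150, 160):
    print(rl, round(p_disturbance(rl), 2))   # rises from ~0.08 through 0.5 to ~0.92
```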
Beaked whales respond to simulated and actual navy sonar.
Tyack, Peter L; Zimmer, Walter M X; Moretti, David; Southall, Brandon L; Claridge, Diane E; Durban, John W; Clark, Christopher W; D'Amico, Angela; DiMarzio, Nancy; Jarvis, Susan; McCarthy, Elena; Morrissey, Ronald; Ward, Jessica; Boyd, Ian L
2011-03-14
Beaked whales have mass stranded during some naval sonar exercises, but the cause is unknown. They are difficult to sight but can reliably be detected by listening for echolocation clicks produced during deep foraging dives. Listening for these clicks, we documented Blainville's beaked whales, Mesoplodon densirostris, in a naval underwater range where sonars are in regular use near Andros Island, Bahamas. An array of bottom-mounted hydrophones can detect beaked whales when they click anywhere within the range. We used two complementary methods to investigate behavioral responses of beaked whales to sonar: an opportunistic approach that monitored whale responses to multi-day naval exercises involving tactical mid-frequency sonars, and an experimental approach using playbacks of simulated sonar and control sounds to whales tagged with a device that records sound, movement, and orientation. Here we show that in both exposure conditions beaked whales stopped echolocating during deep foraging dives and moved away. During actual sonar exercises, beaked whales were primarily detected near the periphery of the range, on average 16 km away from the sonar transmissions. Once the exercise stopped, beaked whales gradually filled in the center of the range over 2-3 days. A satellite tagged whale moved outside the range during an exercise, returning over 2-3 days post-exercise. The experimental approach used tags to measure acoustic exposure and behavioral reactions of beaked whales to one controlled exposure each of simulated military sonar, killer whale calls, and band-limited noise. The beaked whales reacted to these three sound playbacks at sound pressure levels below 142 dB re 1 µPa by stopping echolocation followed by unusually long and slow ascents from their foraging dives. The combined results indicate similar disruption of foraging behavior and avoidance by beaked whales in the two different contexts, at exposures well below those used by regulators to define disturbance.
The Dolphin Sonar: Excellent Capabilities In Spite of Some Mediocre Properties
NASA Astrophysics Data System (ADS)
Au, Whitlow W. L.
2004-11-01
Dolphin sonar research has been conducted for several decades, and much has been learned about the capabilities of echolocating dolphins to detect, discriminate, and recognize underwater targets. The results of these research projects suggest that dolphins possess the most sophisticated of all sonars for short ranges and shallow water, where reverberation and clutter echoes are high. The critical feature of the dolphin sonar is the capability of discriminating and recognizing complex targets in a highly reverberant and noisy environment. The dolphin's detection threshold in reverberation occurs at an echo-to-reverberation ratio of approximately 4 dB. Echolocating dolphins also have the capability to make fine discriminations of target properties, such as wall-thickness differences of water-filled cylinders and material differences in metallic plates. The high-resolution property of the animal's echolocation signals and the high dynamic range of its auditory system are important factors in these outstanding discrimination capabilities. In the cylinder wall-thickness discrimination experiment, time differences between echo highlights as small as 500-600 ns could be resolved by echolocating dolphins. Measurements of the targets used in the metallic plate composition experiment suggest that dolphins attended to echo components that were 20-30 dB below the maximum level for a specific target. It is interesting to realize that some of the properties of the dolphin sonar system are fairly mediocre, yet the total performance of the system is often outstanding. When compared to some technological sonars, the energy content of the dolphin sonar signal is not very high, the transmission and receiving beamwidths are fairly large, and the auditory filters are not very narrow. Yet the dolphin sonar has demonstrated excellent capabilities in spite of the mediocre features of its "hardware." Reasons why dolphins can perform complex sonar tasks will be discussed in light of the "equipment" they possess.
NASA Astrophysics Data System (ADS)
Hillman, Jess I. T.; Lamarche, Geoffroy; Pallentin, Arne; Pecher, Ingo A.; Gorman, Andrew R.; Schneider von Deimling, Jens
2018-06-01
Using automated supervised segmentation of multibeam backscatter data to delineate seafloor substrates is a relatively novel technique. Low-frequency multibeam echosounders (MBES), such as the 12-kHz EM120, present particular difficulties since the signal can penetrate several metres into the seafloor, depending on substrate type. We present a case study illustrating how a non-targeted dataset may be used to derive information from multibeam backscatter data regarding distribution of substrate types. The results allow us to assess limitations associated with low frequency MBES where sub-bottom layering is present, and test the accuracy of automated supervised segmentation performed using SonarScope® software. This is done through comparison of predicted and observed substrate from backscatter facies-derived classes and substrate data, reinforced using quantitative statistical analysis based on a confusion matrix. We use sediment samples, video transects and sub-bottom profiles acquired on the Chatham Rise, east of New Zealand. Inferences on the substrate types are made using the Generic Seafloor Acoustic Backscatter (GSAB) model, and the extents of the backscatter classes are delineated by automated supervised segmentation. Correlating substrate data to backscatter classes revealed that backscatter amplitude may correspond to lithologies up to 4 m below the seafloor. Our results emphasise several issues related to substrate characterisation using backscatter classification, primarily because the GSAB model does not only relate to grain size and roughness properties of substrate, but also accounts for other parameters that influence backscatter. Better understanding these limitations allows us to derive first-order interpretations of sediment properties from automated supervised segmentation.
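A small sketch of the confusion-matrix assessment mentioned above, computing overall accuracy together with per-class producer's and user's accuracy; the matrix entries are invented for illustration.

```python
import numpy as np

cm = np.array([[30,  4,  1],     # rows: observed substrate class, columns: predicted class
               [ 5, 22,  3],
               [ 2,  6, 27]])

overall = np.trace(cm) / cm.sum()
producers = np.diag(cm) / cm.sum(axis=1)   # recall per observed class
users     = np.diag(cm) / cm.sum(axis=0)   # precision per predicted class
print(round(overall, 3), np.round(producers, 2), np.round(users, 2))
```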
NASA Astrophysics Data System (ADS)
Zheng, Qiang; Li, Honglun; Fan, Baode; Wu, Shuanhu; Xu, Jindong
2017-12-01
Active contour models (ACMs) have been among the most widely utilized methods in magnetic resonance (MR) brain image segmentation because of their ability to capture topology changes. However, most existing ACMs consider only single-slice information in MR brain image data, i.e., the information used in ACM-based segmentation is extracted from only one slice of the MR brain image. Such methods cannot take full advantage of the information in adjacent slices and cannot satisfy local segmentation of MR brain images. In this paper, a novel ACM is proposed to solve this problem; it is based on a multivariate local Gaussian distribution and combines information from adjacent slices in the MR brain image data. The segmentation is finally achieved through maximizing the likelihood estimation. Experiments demonstrate the advantages of the proposed ACM over the single-slice ACM in local segmentation of MR brain image series.
Efficient threshold for volumetric segmentation
NASA Astrophysics Data System (ADS)
Burdescu, Dumitru D.; Brezovan, Marius; Stanescu, Liana; Stoica Spahiu, Cosmin; Ebanca, Daniel
2015-07-01
Image segmentation plays a crucial role in effective understanding of digital images. However, research on a general-purpose segmentation algorithm that suits a variety of applications is still very much active. Among the many approaches to image segmentation, graph-based approaches are gaining popularity primarily due to their ability to reflect global image properties. Volumetric image segmentation can simply result in an image partition composed of relevant regions, but the most fundamental challenge in a segmentation algorithm is to precisely define the volumetric extent of an object, which may be represented by the union of multiple regions. The aim of this paper is to present a new method, with an efficient threshold, to detect visual objects from color volumetric images. We present a unified framework for volumetric image segmentation and contour extraction that uses a virtual tree-hexagonal structure defined on the set of image voxels. The advantage of using a virtual tree-hexagonal network superposed over the initial image voxels is that it reduces the execution time and the memory space used, without losing the initial resolution of the image.
Performance evaluation of image segmentation algorithms on microscopic image data.
Beneš, Miroslav; Zitová, Barbara
2015-01-01
In our paper, we present a performance evaluation of image segmentation algorithms on microscopic image data. In spite of the existence of many algorithms for image data partitioning, there is no universal, 'best' method yet. Moreover, images of microscopic samples can be of varying character and quality, which can negatively influence the performance of image segmentation algorithms. Thus, the issue of selecting a suitable method for a given set of image data is of great interest. We carried out a large number of experiments with a variety of segmentation methods to evaluate the behaviour of individual approaches on a testing set of microscopic images (cross-section images taken in three different modalities from the field of art restoration). The segmentation results were assessed by several indices used for measuring the output quality of image segmentation algorithms. In the end, the benefit of a segmentation combination approach is studied, and the applicability of the achieved results to another representative of the microscopic data category, biological samples, is shown. © 2014 The Authors. Journal of Microscopy © 2014 Royal Microscopical Society.
Liao, Xiaolei; Zhao, Juanjuan; Jiao, Cheng; Lei, Lei; Qiang, Yan; Cui, Qiang
2016-01-01
Background: Lung parenchyma segmentation is often performed as an important pre-processing step in the computer-aided diagnosis of lung nodules based on CT image sequences. However, existing lung parenchyma image segmentation methods cannot fully segment all lung parenchyma images and have a slow processing speed, particularly for images in the top and bottom of the lung and the images that contain lung nodules. Method: Our proposed method first uses the position of the lung parenchyma image features to obtain lung parenchyma ROI image sequences. A gradient and sequential linear iterative clustering algorithm (GSLIC) for sequence image segmentation is then proposed to segment the ROI image sequences and obtain superpixel samples. The SGNF, which is optimized by a genetic algorithm (GA), is then utilized for superpixel clustering. Finally, the grey and geometric features of the superpixel samples are used to identify and segment all of the lung parenchyma image sequences. Results: Our proposed method achieves higher segmentation precision and greater accuracy in less time. It has an average processing time of 42.21 seconds for each dataset and an average volume pixel overlap ratio of 92.22 ± 4.02% for four types of lung parenchyma image sequences. PMID:27532214
1963-10-04
"Tolerances of Transducer Elements and Preamplifiers on Beam Formation and SSI Performance in the AN/SQS-26 Sonar Equipment (U)", TRACOR Document Number 63... Prepared for the Bureau of Ships, Code 688E, for the SQS-26 sonar program. GROUP-4: downgraded at 3-year intervals; declassified after 12 years.
A kind of color image segmentation algorithm based on super-pixel and PCNN
NASA Astrophysics Data System (ADS)
Xu, GuangZhu; Wang, YaWen; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun
2018-04-01
Image segmentation is a very important step in low-level visual computing. Although image segmentation has been studied for many years, there are still many problems. The Pulse Coupled Neural Network (PCNN) has a biological background; when applied to image segmentation it can be viewed as a region-based method, but due to the dynamic properties of PCNN, many neurons with no connection to each other will pulse at the same time, so it is necessary to identify different regions for further processing. The existing PCNN image segmentation algorithm based on region growing was designed for grayscale images and cannot be used directly for color image segmentation. In addition, super-pixels can better preserve image edges while reducing the influence of individual differences between pixels on the segmentation. Therefore, on the basis of super-pixels, this paper improves the original region-growing PCNN algorithm. First, the color super-pixel image is transformed into a grayscale super-pixel image, which is used to seek seeds among the neurons that have not yet fired. Growth is then continued or stopped by comparing the averages of each color channel over all pixels in the corresponding regions of the color super-pixel image. Experimental results show that the proposed color image segmentation algorithm is fast and effective, and achieves reasonable accuracy.
Model-Based Learning of Local Image Features for Unsupervised Texture Segmentation
NASA Astrophysics Data System (ADS)
Kiechle, Martin; Storath, Martin; Weinmann, Andreas; Kleinsteuber, Martin
2018-04-01
Features that capture well the textural patterns of a certain class of images are crucial for the performance of texture segmentation methods. The manual selection of features or designing new ones can be a tedious task. Therefore, it is desirable to automatically adapt the features to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground truth segmentation. In this work, we propose a framework to learn features for texture segmentation when no such training data is available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model. This means that the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm which first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images.
Merabet, Youssef El; Meurie, Cyril; Ruichek, Yassine; Sbihi, Abderrahmane; Touahni, Raja
2015-01-01
In this paper, we present a novel strategy for roof segmentation from aerial images (orthophotoplans) based on the cooperation of edge- and region-based segmentation methods. The proposed strategy is composed of three major steps. The first one, called the pre-processing step, consists of simplifying the acquired image with an appropriate couple of invariant and gradient, optimized for the application, in order to limit illumination changes (shadows, brightness, etc.) affecting the images. The second step is composed of two main parallel treatments: on the one hand, the simplified image is segmented by watershed regions. Even if the first segmentation of this step provides good results in general, the image is often over-segmented. To alleviate this problem, an efficient region merging strategy adapted to the orthophotoplan particularities, with a 2D modeling of roof ridges technique, is applied. On the other hand, the simplified image is segmented by watershed lines. The third step consists of integrating both watershed segmentation strategies into a single cooperative segmentation scheme in order to achieve satisfactory segmentation results. Tests have been performed on orthophotoplans containing 100 roofs with varying complexity, and the results are evaluated with the VINET criterion using ground-truth image segmentation. A comparison with five popular segmentation techniques of the literature demonstrates the effectiveness and the reliability of the proposed approach. Indeed, we obtain a good segmentation rate of 96% with the proposed method compared to 87.5% with statistical region merging (SRM), 84% with mean shift, 82% with color structure code (CSC), 80% with efficient graph-based segmentation algorithm (EGBIS) and 71% with JSEG. PMID:25648706
NASA Astrophysics Data System (ADS)
Guan, Yihong; Luo, Yatao; Yang, Tao; Qiu, Lei; Li, Junchang
2012-01-01
The spatial information captured by the Markov random field model is useful in image segmentation: it can effectively remove noise and produce more accurate segmentation results. Based on the fuzziness and clustering of pixel grayscale information, we find the clustering centers of the different tissues and the background of a medical image using the fuzzy c-means clustering method. We then find the threshold points for multi-threshold segmentation using a two-dimensional histogram method and segment the image. Finally, multivariate information is fused based on Dempster-Shafer evidence theory to obtain the fused segmentation. This paper adopts the above three theories to propose a new human brain image segmentation method. Experimental results show that the segmentation result is more consistent with human vision and is of vital significance for accurate analysis and application of tissues.
Automated brain tumor segmentation using spatial accuracy-weighted hidden Markov Random Field.
Nie, Jingxin; Xue, Zhong; Liu, Tianming; Young, Geoffrey S; Setayesh, Kian; Guo, Lei; Wong, Stephen T C
2009-09-01
A variety of algorithms have been proposed for brain tumor segmentation from multi-channel sequences; however, most of them require isotropic or pseudo-isotropic resolution of the MR images. Although co-registration and interpolation of low-resolution sequences, such as T2-weighted images, onto the space of the high-resolution image, such as a T1-weighted image, can be performed prior to the segmentation, the results are usually limited by partial volume effects due to interpolation of low-resolution images. To improve the quality of tumor segmentation in clinical applications where low-resolution sequences are commonly used together with high-resolution images, we propose an algorithm based on the Spatial accuracy-weighted Hidden Markov random field and Expectation maximization (SHE) approach for both automated tumor and enhanced-tumor segmentation. SHE incorporates the spatial interpolation accuracy of low-resolution images into the optimization procedure of the Hidden Markov Random Field (HMRF) to segment tumor using multi-channel MR images with different resolutions, e.g., high-resolution T1-weighted and low-resolution T2-weighted images. In experiments, we evaluated this algorithm using a set of simulated multi-channel brain MR images with known ground-truth tissue segmentation and also applied it to a dataset of MR images obtained during clinical trials of brain tumor chemotherapy. The results show that more accurate tumor segmentation results can be obtained in comparison with conventional multi-channel segmentation algorithms.
Echolocating bats rely on audiovocal feedback to adapt sonar signal design.
Luo, Jinhong; Moss, Cynthia F
2017-10-10
Many species of bat emit acoustic signals and use information carried by echoes reflecting from nearby objects to navigate and forage. It is widely documented that echolocating bats adjust the features of sonar calls in response to echo feedback; however, it remains unknown whether audiovocal feedback contributes to sonar call design. Audiovocal feedback refers to the monitoring of one's own vocalizations during call production and has been intensively studied in nonecholocating animals. Audiovocal feedback not only is a necessary component of vocal learning but also guides the control of the spectro-temporal structure of vocalizations. Here, we show that audiovocal feedback is directly involved in the echolocating bat's control of sonar call features. As big brown bats tracked targets from a stationary position, we played acoustic jamming signals, simulating calls of another bat, timed to selectively perturb audiovocal feedback or echo feedback. We found that the bats exhibited the largest call-frequency adjustments when the jamming signals occurred during vocal production. By contrast, bats did not show sonar call-frequency adjustments when the jamming signals coincided with the arrival of target echoes. Furthermore, bats rapidly adapted sonar call design in the first vocalization following the jamming signal, revealing a response latency in the range of 66 to 94 ms. Thus, bats, like songbirds and humans, rely on audiovocal feedback to structure sonar signal design.
Antunes, R; Kvadsheim, P H; Lam, F P A; Tyack, P L; Thomas, L; Wensveen, P J; Miller, P J O
2014-06-15
The potential effects of exposing marine mammals to military sonar are a current concern. Dose-response relationships are useful for predicting potential environmental impacts of specific operations. To reveal behavioral response thresholds of exposure to sonar, we conducted 18 exposure/control approaches to 6 long-finned pilot whales. Source level and proximity of sonar transmitting one of two frequency bands (1-2 kHz and 6-7 kHz) were increased during exposure sessions. The 2-dimensional movement tracks were analyzed using a changepoint method to identify the avoidance response thresholds, which were used to estimate dose-response relationships. No support for an effect of sonar frequency or previous exposures on the probability of response was found. Estimated response thresholds at which 50% of the population show avoidance (SPLmax = 170 dB re 1 μPa, SELcum = 173 dB re 1 μPa² s) were higher than previously found for other cetaceans. The US Navy currently uses a generic dose-response relationship to predict the responses of cetaceans to naval active sonar, which has been found to underestimate behavioural impacts on killer whales and beaked whales. The navy curve appears to match our results with long-finned pilot whales more closely, though it might underestimate the probability of avoidance for pilot whales at long distances from sonar sources. Copyright © 2014 Elsevier Ltd. All rights reserved.
Bats' avoidance of real and virtual objects: implications for the sonar coding of object size.
Goerlitz, Holger R; Genzel, Daria; Wiegrebe, Lutz
2012-01-01
Fast movement in complex environments requires the controlled evasion of obstacles. Sonar-based obstacle evasion involves analysing the acoustic features of object-echoes (e.g., echo amplitude) that correlate with this object's physical features (e.g., object size). Here, we investigated sonar-based obstacle evasion in bats emerging in groups from their day roost. Using video-recordings, we first show that the bats evaded a small real object (ultrasonic loudspeaker) despite the familiar flight situation. Secondly, we studied the sonar coding of object size by adding a larger virtual object. The virtual object echo was generated by real-time convolution of the bats' calls with the acoustic impulse response of a large spherical disc and played from the loudspeaker. Contrary to the real object, the virtual object did not elicit evasive flight, despite the spectro-temporal similarity of real and virtual object echoes. Yet, their spatial echo features differ: virtual object echoes lack the spread of angles of incidence from which the echoes of large objects arrive at a bat's ears (sonar aperture). We hypothesise that this mismatch of spectro-temporal and spatial echo features caused the lack of virtual object evasion and suggest that the sonar aperture of object echoscapes contributes to the sonar coding of object size. Copyright © 2011 Elsevier B.V. All rights reserved.
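A toy sketch of the virtual-object playback idea described above: convolving a call with an impulse response to synthesize the echo that the loudspeaker would emit. The sample rate, call shape and impulse response are assumed placeholders, not the study's stimuli.

```python
import numpy as np

fs = 192_000                                   # sample rate (Hz), assumed
t = np.arange(0, 0.003, 1 / fs)                # 3 ms downward frequency-modulated call
call = np.sin(2 * np.pi * (80_000 - 10e6 * t) * t) * np.hanning(t.size)

ir = np.zeros(int(0.002 * fs))                 # toy impulse response of a large disc:
ir[[0, 40, 95]] = [1.0, 0.6, 0.3]              # a few spread-out reflections

virtual_echo = np.convolve(call, ir)           # what the playback loudspeaker would emit
print(call.size, ir.size, virtual_echo.size)   # echo length = call + ir - 1 samples
```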
Towards Automatic Image Segmentation Using Optimised Region Growing Technique
NASA Astrophysics Data System (ADS)
Alazab, Mamoun; Islam, Mofakharul; Venkatraman, Sitalakshmi
Image analysis is being adopted extensively in many applications such as digital forensics, medical treatment and industrial inspection, primarily for diagnostic purposes. Hence, there is a growing interest among researchers in developing new segmentation techniques to aid the diagnosis process. Manual segmentation of images is labour intensive, extremely time consuming and prone to human error, and hence an automated real-time technique is warranted in such applications. There is no universally applicable automated segmentation technique that will work for all images, as image segmentation is quite complex and unique to the domain application. Hence, to fill the gap, this paper presents an efficient segmentation algorithm that can segment a digital image of interest into a more meaningful arrangement of regions and objects. Our algorithm combines a region growing approach with optimised elimination of false boundaries to arrive at more meaningful segments automatically. We demonstrate this using X-ray teeth images that were taken for real-life dental diagnosis.
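A minimal seeded region-growing sketch of the kind referred to above (grow while a neighbour stays within a tolerance of the running region mean); it does not include the paper's optimised elimination of false boundaries, and the image and tolerance are toy values.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    mask = np.zeros(img.shape, dtype=bool)
    queue, total, count = deque([seed]), float(img[seed]), 1
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1] and not mask[ny, nx]:
                if abs(float(img[ny, nx]) - total / count) <= tol:   # similarity test
                    mask[ny, nx] = True
                    queue.append((ny, nx))
                    total += float(img[ny, nx]); count += 1
    return mask

img = np.full((64, 64), 30.0); img[20:40, 20:40] = 180.0   # bright square on dark background
print(region_grow(img, seed=(30, 30)).sum())               # 400 pixels of the square
```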
A NDVI assisted remote sensing image adaptive scale segmentation method
NASA Astrophysics Data System (ADS)
Zhang, Hong; Shen, Jinxiang; Ma, Yanmei
2018-03-01
Multiscale segmentation of images can effectively delineate the boundaries of objects at different scales. However, for remote sensing images that cover wide areas containing complicated ground objects, the number of suitable segmentation scales and the size of each scale are still difficult to determine accurately, which severely restricts rapid information extraction from remote sensing imagery. A great deal of experimentation has shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents an NDVI-assisted adaptive-scale segmentation method for remote sensing images, which segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. For regions consisting of different targets, different segmentation scale boundaries can be created. The experimental results showed that the NDVI-based adaptive segmentation method can effectively create object boundaries for different ground objects in remote sensing images.
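The NDVI used above is computed per pixel as (NIR - Red) / (NIR + Red); a small sketch with synthetic band values follows.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + eps)

red = np.array([[0.10, 0.30], [0.25, 0.05]])
nir = np.array([[0.60, 0.35], [0.30, 0.55]])
print(np.round(ndvi(nir, red), 2))   # high values indicate vegetated pixels
```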
Analysis of image thresholding segmentation algorithms based on swarm intelligence
NASA Astrophysics Data System (ADS)
Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo
2013-03-01
Swarm intelligence-based image thresholding segmentation algorithms are playing an important role in the research field of image segmentation. In this paper, we briefly introduce the theories of four existing image segmentation algorithms based on swarm intelligence including fish swarm algorithm, artificial bee colony, bacteria foraging algorithm and particle swarm optimization. Then some image benchmarks are tested in order to show the differences of the segmentation accuracy, time consumption, convergence and robustness for Salt & Pepper noise and Gaussian noise of these four algorithms. Through these comparisons, this paper gives qualitative analyses for the performance variance of the four algorithms. The conclusions in this paper would give a significant guide for the actual image segmentation.
MRI Segmentation of the Human Brain: Challenges, Methods, and Applications
Despotović, Ivana
2015-01-01
Image segmentation is one of the most important tasks in medical image analysis and is often the first and the most critical step in many clinical applications. In brain MRI analysis, image segmentation is commonly used for measuring and visualizing the brain's anatomical structures, for analyzing brain changes, for delineating pathological regions, and for surgical planning and image-guided interventions. In the last few decades, various segmentation techniques of different accuracy and degree of complexity have been developed and reported in the literature. In this paper we review the most popular methods commonly used for brain MRI segmentation. We highlight differences between them and discuss their capabilities, advantages, and limitations. To address the complexity and challenges of the brain MRI segmentation problem, we first introduce the basic concepts of image segmentation. Then, we explain different MRI preprocessing steps including image registration, bias field correction, and removal of nonbrain tissue. Finally, after reviewing different brain MRI segmentation methods, we discuss the validation problem in brain MRI segmentation. PMID:25945121
NASA Astrophysics Data System (ADS)
Park, Gilsoon; Hong, Jinwoo; Lee, Jong-Min
2018-03-01
In the human brain, the Corpus Callosum (CC) is the largest white matter structure, connecting the right and left hemispheres. Structural features such as the shape and size of the CC in the midsagittal plane are of great significance for analyzing various neurological diseases, for example Alzheimer's disease, autism and epilepsy. For quantitative and qualitative studies of the CC in brain MR images, robust segmentation of the CC is important. In this paper, we present a novel method for CC segmentation. Our approach is based on deep neural networks and prior information generated from multi-atlas images. Deep neural networks have recently shown good performance in various image processing fields, and convolutional neural networks (CNNs) have shown outstanding performance for classification and segmentation in medical imaging. We used convolutional neural networks for CC segmentation. Multi-atlas based segmentation models have been widely used in medical image segmentation because an atlas carries powerful information about the target structure, consisting of MR images and the corresponding manual segmentations of the target structure. We combined prior information, such as the location and intensity distribution of the target structure (i.e., the CC), derived from the multi-atlas images, into the CNN training process to further improve training. The CNN with prior information showed better segmentation performance than the CNN without it.
Globally optimal tumor segmentation in PET-CT images: a graph-based co-segmentation method.
Han, Dongfeng; Bayouth, John; Song, Qi; Taurani, Aakant; Sonka, Milan; Buatti, John; Wu, Xiaodong
2011-01-01
Tumor segmentation in PET and CT images is notoriously challenging due to the low spatial resolution in PET and low contrast in CT images. In this paper, we have proposed a general framework to use both PET and CT images simultaneously for tumor segmentation. Our method utilizes the strength of each imaging modality: the superior contrast of PET and the superior spatial resolution of CT. We formulate this problem as a Markov Random Field (MRF) based segmentation of the image pair with a regularized term that penalizes the segmentation difference between PET and CT. Our method simulates the clinical practice of delineating tumor simultaneously using both PET and CT, and is able to concurrently segment tumor from both modalities, achieving globally optimal solutions in low-order polynomial time by a single maximum flow computation. The method was evaluated on clinically relevant tumor segmentation problems. The results showed that our method can effectively make use of both PET and CT image information, yielding a segmentation accuracy of 0.85 in Dice similarity coefficient and an average median Hausdorff distance (HD) of 6.4 mm, which is a 10% (resp. 16%) improvement compared to the graph cuts method solely using the PET (resp. CT) images.
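A minimal sketch of the Dice similarity coefficient used as the accuracy measure above; the masks are toy examples.

```python
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.zeros((10, 10), bool); pred[2:7, 2:7] = True
ref  = np.zeros((10, 10), bool); ref[3:8, 3:8] = True
print(round(dice(pred, ref), 3))   # 0.64: 16 shared pixels, 25 + 25 total
```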
Method to acquire regions of fruit, branch and leaf from image of red apple in orchard
NASA Astrophysics Data System (ADS)
Lv, Jidong; Xu, Liming
2017-07-01
This work proposed a method to acquire the fruit, branch, and leaf regions from images of red apples in an orchard. To acquire the fruit image, the R-G image was extracted from the RGB image and processed by erosion, hole filling, subregion removal, dilation, and an opening operation, in that order; the fruit image was finally acquired by threshold segmentation. To acquire the leaf image, the fruit image was subtracted from the RGB image before extracting the 2G-R-B image; the leaf image was then acquired by subregion removal and threshold segmentation. To acquire the branch image, dynamic threshold segmentation was conducted on the R-G image, and the segmented image was added to the fruit image to obtain an augmented fruit image, which, together with the leaf image, was subtracted from the RGB image. Finally, the branch image was acquired by an opening operation, subregion removal, and threshold segmentation after extracting the R-G image from the subtracted image. Compared with previous methods, more complete images of fruit, leaf, and branch can be acquired from red apple images with this method.
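A rough sketch of the fruit-extraction steps described above (R-G difference, thresholding, then morphology) using scipy.ndimage; the threshold, structuring iterations and size cut-off are illustrative guesses, not the paper's parameters.

```python
import numpy as np
from scipy import ndimage as ndi

def fruit_mask(rgb, thresh=40):
    r, g = rgb[..., 0].astype(int), rgb[..., 1].astype(int)
    rg = r - g                                   # red apples stand out in the R-G image
    mask = rg > thresh                           # threshold segmentation
    mask = ndi.binary_erosion(mask, iterations=2)
    mask = ndi.binary_fill_holes(mask)
    labels, n = ndi.label(mask)
    sizes = ndi.sum(mask, labels, np.arange(1, n + 1))
    mask = np.isin(labels, 1 + np.flatnonzero(sizes > 200))   # drop small regions
    mask = ndi.binary_dilation(mask, iterations=2)
    return ndi.binary_opening(mask)

rgb = np.zeros((100, 100, 3), np.uint8); rgb[30:60, 30:60, 0] = 200; rgb[..., 1] = 60
print(fruit_mask(rgb).sum())   # pixel count kept for the simulated "apple" patch
```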
Evaluation of segmentation algorithms for optical coherence tomography images of ovarian tissue
NASA Astrophysics Data System (ADS)
Sawyer, Travis W.; Rice, Photini F. S.; Sawyer, David M.; Koevary, Jennifer W.; Barton, Jennifer K.
2018-02-01
Ovarian cancer has the lowest survival rate among all gynecologic cancers due to predominantly late diagnosis. Early detection of ovarian cancer can increase 5-year survival rates from 40% up to 92%, yet no reliable early detection techniques exist. Optical coherence tomography (OCT) is an emerging technique that provides depth-resolved, high-resolution images of biological tissue in real time and demonstrates great potential for imaging of ovarian tissue. Mouse models are crucial to quantitatively assess the diagnostic potential of OCT for ovarian cancer imaging; however, due to small organ size, the ovaries must first be separated from the image background using the process of segmentation. Manual segmentation is time-intensive, as OCT yields three-dimensional data. Furthermore, speckle noise complicates OCT images, frustrating many processing techniques. While much work has investigated noise-reduction and automated segmentation for retinal OCT imaging, little has considered the application to the ovaries, which exhibit higher variance and inhomogeneity than the retina. To address these challenges, we evaluated a set of algorithms to segment OCT images of mouse ovaries. We examined five preprocessing techniques and six segmentation algorithms. While all pre-processing methods improve segmentation, Gaussian filtering is most effective, showing an improvement of 32% +/- 1.2%. Of the segmentation algorithms, active contours performs best, segmenting with an accuracy of 0.948 +/- 0.012 compared with manual segmentation (1.0 being identical). Nonetheless, further optimization could lead to maximizing the performance for segmenting OCT images of the ovaries.
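A short sketch of the Gaussian pre-filtering step reported as most effective above, applied to a synthetic speckled image before any segmentation; the sigma and the noise model are assumed values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
clean = np.zeros((128, 128)); clean[40:90, 30:100] = 1.0                # toy tissue region
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)   # multiplicative noise
smoothed = gaussian_filter(speckled, sigma=2.0)                          # pre-filtering step

print(float(speckled.std()), float(smoothed.std()))   # intensity variation drops after filtering
```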
An Interactive Image Segmentation Method in Hand Gesture Recognition
Chen, Disi; Li, Gongfa; Sun, Ying; Kong, Jianyi; Jiang, Guozhang; Tang, Heng; Ju, Zhaojie; Yu, Hui; Liu, Honghai
2017-01-01
In order to improve the recognition rate of hand gestures, a new interactive image segmentation method for hand gesture recognition is presented, and popular methods, e.g., Graph cut, Random walker, and interactive image segmentation using geodesic star convexity, are studied in this article. A Gaussian Mixture Model is employed for image modelling, and iterations of the Expectation Maximization algorithm learn the parameters of the Gaussian Mixture Model. We apply a Gibbs random field to the image segmentation and minimize the Gibbs energy using the min-cut theorem to find the optimal segmentation. The segmentation result of our method is tested on an image dataset and compared with other methods by estimating the region accuracy and boundary accuracy. Finally, five kinds of hand gestures in different backgrounds are tested on our experimental platform, and a sparse representation algorithm is used, proving that the segmentation of hand gesture images helps to improve the recognition accuracy. PMID:28134818
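A sketch of the Gaussian Mixture Model colour-modelling step described above, fitted by EM via scikit-learn; the two-component foreground/background split and the synthetic pixel samples are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
background = rng.normal([60, 60, 60], 10, size=(2000, 3))     # dark background pixels
hand       = rng.normal([190, 150, 130], 12, size=(800, 3))   # skin-toned foreground pixels
pixels = np.vstack([background, hand])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(pixels)
labels = gmm.predict(pixels)                 # per-pixel component assignment
print(np.bincount(labels))                   # component sizes, roughly {2000, 800} in some order
```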
Application of an enhanced fuzzy algorithm for MR brain tumor image segmentation
NASA Astrophysics Data System (ADS)
Hemanth, D. Jude; Vijila, C. Kezi Selva; Anitha, J.
2010-02-01
Image segmentation is one of the significant digital image processing techniques commonly used in the medical field. One of the specific applications is tumor detection in abnormal Magnetic Resonance (MR) brain images. Fuzzy approaches are widely preferred for tumor segmentation, as they generally yield superior results in terms of accuracy. But most fuzzy algorithms suffer from the drawback of a slow convergence rate, which makes the system practically non-feasible. In this work, the application of a modified Fuzzy C-Means (FCM) algorithm to tackle the convergence problem is explored in the context of brain image segmentation. This modified FCM algorithm employs the concept of quantization to improve the convergence rate besides yielding excellent segmentation efficiency. The algorithm is tested on real abnormal MR brain images collected from radiologists. A comprehensive feature vector is extracted from these images and used for the segmentation technique. An extensive feature selection process is performed, which reduces the convergence time and improves the segmentation efficiency. After segmentation, the tumor portion is extracted from the segmented image. A comparative analysis in terms of segmentation efficiency and convergence rate is performed between the conventional FCM and the modified FCM. Experimental results show superior results for the modified FCM algorithm in terms of the performance measures. Thus, this work highlights the application of the modified algorithm for brain tumor detection in abnormal MR brain images.
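For reference, a plain fuzzy c-means on image intensities is sketched below to show the membership/centroid iteration whose convergence the modified algorithm accelerates; the quantization step and feature selection of the paper are not reproduced here.

```python
import numpy as np

def fcm(image, c=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Standard fuzzy c-means on pixel intensities."""
    x = image.reshape(-1, 1).astype(float)
    rng = np.random.default_rng(seed)
    u = rng.random((x.shape[0], c))
    u /= u.sum(axis=1, keepdims=True)                      # random fuzzy memberships
    centers = None
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]     # weighted centroids
        d = np.abs(x - centers.T) + 1e-12                  # distances to each centroid
        inv = d ** (-2.0 / (m - 1))
        new_u = inv / inv.sum(axis=1, keepdims=True)       # membership update
        if np.abs(new_u - u).max() < tol:
            u = new_u
            break
        u = new_u
    return u, centers

# usage on a 2D MR slice (array of intensities):
#   u, centers = fcm(mr_slice)
#   labels = u.argmax(axis=1).reshape(mr_slice.shape)
```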
Introduction to Sonar, Naval Education and Training Command. Revised Edition.
ERIC Educational Resources Information Center
Naval Education and Training Command, Pensacola, FL.
This Rate Training Manual (RTM) and Nonresident Career Course form a self-study package for those U.S. Navy personnel who are seeking advancement in the Sonar Technician Rating. Among the requirements of the rating are the abilities to obtain and interpret underwater data, operate and maintain upkeep of sonar equipment, and interpret target and…
Cetacean Social Behavioral Response to Sonar
2011-09-30
behavior data of humpback whales and minke whales were recorded during 5 and 1 CEEs, respectively (including tagging, baseline, sonar exposure and...during fieldwork efforts in 2012 and 2013. Figure 1: Example of humpback whale group behavior sampling...cetacean behavioral responses to sonar signals and other stimuli (tagging effort, killer whale playbacks), as well as baseline behavior, are studied
Observing Ocean Ecosystems with Sonar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matzner, Shari; Maxwell, Adam R.; Ham, Kenneth D.
2016-12-01
We present a real-time processing system for sonar to detect and track animals, and to extract water column biomass statistics in order to facilitate continuous monitoring of an underwater environment. The Nekton Interaction Monitoring System (NIMS) is built to connect to an instrumentation network, where it consumes a real-time stream of sonar data and archives tracking and biomass data.
ATR Performance Estimation Seed Program
2015-09-28
to produce simulated MCM sonar data and demonstrate the impact of system, environmental, and target scattering effects on ATR detection/classification performance...settings and achieving a better understanding of the relative impact of the factors influencing ATR performance. Keywords: sonar, mine countermeasures, MCM, automatic...
A spectral k-means approach to bright-field cell image segmentation.
Bradbury, Laura; Wan, Justin W L
2010-01-01
Automatic segmentation of bright-field cell images is important to cell biologists, but difficult to accomplish due to the complex nature of the cells in bright-field images (poor contrast, broken halo, missing boundaries). Standard approaches such as level set segmentation and active contours work well for fluorescent images, where cells appear round in shape, but become less effective when optical artifacts such as halos exist in bright-field images. In this paper, we present a robust segmentation method which combines spectral and k-means clustering techniques to locate cells in bright-field images. This approach models an image as a matrix graph and segments different regions of the image by computing the appropriate eigenvectors of the matrix graph and applying the k-means algorithm. We illustrate the effectiveness of the method with segmentation results of C2C12 (muscle) cells in bright-field images.
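A minimal sketch of the spectral-plus-k-means idea, using scikit-learn's SpectralClustering (which builds a pixel affinity graph, embeds it with eigenvectors and clusters the embedding with k-means); the file name, downsampling size and feature weighting are illustrative assumptions.

```python
import numpy as np
from skimage import io, img_as_float
from skimage.transform import resize
from sklearn.cluster import SpectralClustering

img = img_as_float(io.imread("c2c12_brightfield.png", as_gray=True))  # hypothetical file
small = resize(img, (64, 64), anti_aliasing=True)        # keep the eigenproblem small
yy, xx = np.mgrid[0:64, 0:64]
# per-pixel features: intensity (weighted up) plus normalized spatial position
feats = np.stack([small.ravel() * 10, yy.ravel() / 64.0, xx.ravel() / 64.0], axis=1)
labels = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                            n_neighbors=10, assign_labels="kmeans",
                            random_state=0).fit_predict(feats)
cells = labels.reshape(small.shape)                      # cell vs. background labels
```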
Distance-based over-segmentation for single-frame RGB-D images
NASA Astrophysics Data System (ADS)
Fang, Zhuoqun; Wu, Chengdong; Chen, Dongyue; Jia, Tong; Yu, Xiaosheng; Zhang, Shihong; Qi, Erzhao
2017-11-01
Over-segmentation, known as super-pixels, is a widely used preprocessing step in segmentation algorithms. An over-segmentation algorithm partitions an image into regions of perceptually similar pixels, but performs poorly when based on the color image alone in indoor environments. Fortunately, RGB-D images can improve the performance on images of indoor scenes. In order to segment RGB-D images into super-pixels effectively, we propose a novel algorithm, DBOS (Distance-Based Over-Segmentation), which realizes full coverage of super-pixels on the image. DBOS fills the holes in depth images to fully utilize the depth information, and applies SLIC-like frameworks for fast running. Additionally, depth features such as the plane projection distance are extracted to compute the distance that is the core of SLIC-like frameworks. Experiments on RGB-D images of the NYU Depth V2 dataset demonstrate that DBOS outperforms state-of-the-art methods in quality while maintaining comparable speeds.
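A generic SLIC-style over-segmentation of an RGB-D frame can be sketched as follows with scikit-image (version 0.19 or later for the channel_axis argument); the naive hole filling and the extra depth channel only approximate the DBOS distance measure, and the file names are placeholders.

```python
import numpy as np
from skimage import io, img_as_float
from skimage.segmentation import slic

rgb = img_as_float(io.imread("nyu_rgb.png"))             # hypothetical RGB frame
depth = io.imread("nyu_depth.png").astype(float)         # hypothetical depth frame
depth[depth == 0] = np.median(depth[depth > 0])          # naive hole filling
depth = (depth - depth.min()) / (depth.max() - depth.min() + 1e-9)
rgbd = np.dstack([rgb, depth])                           # append depth as a 4th channel
superpixels = slic(rgbd, n_segments=800, compactness=10, channel_axis=-1)
```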
A deep learning model integrating FCNNs and CRFs for brain tumor segmentation.
Zhao, Xiaomei; Wu, Yihong; Song, Guidong; Li, Zhenye; Zhang, Yazhuo; Fan, Yong
2018-01-01
Accurate and reliable brain tumor segmentation is a critical component in cancer diagnosis, treatment planning, and treatment outcome evaluation. Built upon successful deep learning techniques, a novel brain tumor segmentation method is developed by integrating fully convolutional neural networks (FCNNs) and Conditional Random Fields (CRFs) in a unified framework to obtain segmentation results with appearance and spatial consistency. We train a deep learning based segmentation model using 2D image patches and image slices in the following steps: 1) training FCNNs using image patches; 2) training CRFs as Recurrent Neural Networks (CRF-RNN) using image slices with the parameters of the FCNNs fixed; and 3) fine-tuning the FCNNs and the CRF-RNN using image slices. In particular, we train three segmentation models using 2D image patches and slices obtained in axial, coronal and sagittal views, respectively, and combine them to segment brain tumors using a voting-based fusion strategy. Our method can segment brain images slice-by-slice, much faster than methods based on image patches. We have evaluated our method on imaging data provided by the Multimodal Brain Tumor Image Segmentation Challenge (BRATS) 2013, BRATS 2015 and BRATS 2016. The experimental results demonstrate that our method can build a segmentation model with Flair, T1c, and T2 scans and achieve competitive performance compared with models built with Flair, T1, T1c, and T2 scans.
Implementation and testing of a Deep Water Correlation Velocity Sonar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickey, F.R.; Bookheimer, W.C.; Rhoades, K.W.
1983-05-01
The paper describes a new sonar designated the Magnavox MX 810 Deep Water Correlation Sonar which is under development by the General Electric Company and the Magnavox Advanced Products and Systems Company. The sonar measures ship's velocity relative to the bottom but instead of using the conventional doppler effect, it uses the correlation method described by Dickey and Edward in 1978. In this method, the narrow beams required for doppler are not needed and a low frequency that penetrates to the bottom in deep water is used. The sonar was designed with the constraint that it use a transducer that mounts through a single 12 inch gate valve. Most offshore geophysical surveys at present make use of an integrated navigation system with bottom referenced velocity input from a doppler sonar which, because of limitations on the sonar bottom-tracking range, has difficulty in areas where the water depth is greater than about 500 meters. The MX 810 provides bottom tracking in regions of much greater water depth. It also may be applied as an aid in continuous positioning of a vessel over a fixed location. It also should prove useful as a more general navigation aid. The sonar is undergoing a series of tests using Magnavox's facilities for the purpose of verifying the performance and obtaining data to support and quantify planned improvements in both software and hardware. A prototype transducer of only 5 watts power output was used, but in spite of this low power, successful operation to depths of 1900 meters was obtained. Extrapolation to system parameters to be implemented in production models predicts operation to depths of 5000 meters.
A comparison of the role of beamwidth in biological and engineered sonar.
Todd, Bryan D; Müller, Rolf
2017-12-28
Sonar is an important sensory modality for engineers as well as in nature. In engineering, sonar is the dominant modality for underwater sensing. In nature, biosonar is likely to have been a central factor behind the unprecedented evolutionary success of bats, a highly diverse group that accounts for over 20% of all mammal species. However, it remains unclear to what extent engineered sonar and biosonar follow similar design and operational principles. In the current work, the key sonar design characteristic of beamwidth is examined in technical sonar and biosonar. To this end, beamwidth data have been obtained for 23 engineered sonar systems and from numerical beampattern predictions for 151 emission and reception elements (noseleaves and ears) representing bat biosonar. Beamwidth data from these sources are compared to the beamwidth of a planar ellipsoidal transducer as a reference. The results show that engineered and biological sonar both obey the basic physical limit on beamwidth as a function of the ratio of aperture size and wavelength. However, beyond that, the beamwidth data revealed very different behaviors between the engineered and the biological sonar systems. Whereas the beamwidths of the technical sonar systems were very close to the planar transducer limit, the biological samples showed a very wide scatter away from this limit. This scatter was as large as, if not wider than, what was seen in a small reference data set obtained with random aluminum cones. A possible interpretation of these differences in variability could be that whereas sonar engineers try to minimize beamwidth subject to constraints on device size, the evolutionary optimization of bat biosonar beampatterns has been directed at other factors that have left beamwidth as a byproduct. Alternatively, the biosonar systems may require beamwidth values that are larger than the physical limit and that differ between species and their sensory ecological niches.
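The physical limit referred to above can be illustrated with the standard approximation for the -3 dB beamwidth of a uniformly illuminated circular aperture, roughly 1.02 λ/D radians; the example frequencies and aperture sizes below are illustrative, not values from the study.

```python
import math

def beamwidth_deg(freq_hz, aperture_m, c=1500.0):
    """Approximate -3 dB beamwidth of a uniform circular aperture (degrees)."""
    lam = c / freq_hz                                    # wavelength in the medium
    return math.degrees(1.02 * lam / aperture_m)

print(beamwidth_deg(200e3, 0.30))                        # survey sonar in water: ~1.5 degrees
print(beamwidth_deg(60e3, 0.008, c=343.0))               # centimetre-scale emitter in air: ~42 degrees
```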
SVM Pixel Classification on Colour Image Segmentation
NASA Astrophysics Data System (ADS)
Barui, Subhrajit; Latha, S.; Samiappan, Dhanalakshmi; Muthu, P.
2018-04-01
The aim of image segmentation is to simplify the representation of an image by clustering pixels into something meaningful to analyze. Segmentation is typically used to locate boundaries and curves in an image, and more precisely to label every pixel so that each pixel has an independent identity. SVM pixel classification for colour image segmentation is the topic highlighted in this paper. It has useful applications in the fields of concept-based image retrieval, machine vision, medical imaging and object detection. The process is accomplished step by step. At first, we need to recognize the type of colour and the texture used as input to the SVM classifier. These inputs are extracted via a local spatial similarity measure model and steerable filters, also known as Gabor filters. The classifier is then trained by using FCM (Fuzzy C-Means). Both the pixel-level information of the image and the ability of the SVM classifier are combined through a sophisticated algorithm to form the final segmented image. The method produces a well-developed segmented image and is efficient in terms of increased quality and faster processing of the segmented image compared with other segmentation methods proposed earlier. One of the latest application results is the Light L16 camera.
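A hedged sketch of per-pixel SVM classification on colour plus Gabor texture features, in the spirit of the pipeline above; the image and scribble-mask file names are hypothetical, and the FCM training step of the paper is replaced by direct supervised training on scribble labels.

```python
import numpy as np
from skimage import io, img_as_float
from skimage.color import rgb2gray
from skimage.filters import gabor
from sklearn.svm import SVC

img = img_as_float(io.imread("scene.png"))               # hypothetical RGB image
gray = rgb2gray(img)
# real parts of three Gabor responses as texture features
texture = [gabor(gray, frequency=0.2, theta=t)[0] for t in (0, np.pi / 4, np.pi / 2)]
feats = np.dstack([img] + texture).reshape(-1, 3 + len(texture))

scribbles = io.imread("scribbles.png")                   # hypothetical mask: 0 none, 1 object, 2 background
y = scribbles.ravel()
train = y > 0
clf = SVC(kernel="rbf", C=10, gamma="scale").fit(feats[train], y[train])
labels = clf.predict(feats).reshape(gray.shape)          # per-pixel class map
```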
Integrated circuit layer image segmentation
NASA Astrophysics Data System (ADS)
Masalskis, Giedrius; Petrauskas, Romas
2010-09-01
In this paper we present IC layer image segmentation techniques which are specifically created for precise metal layer feature extraction. During our research we used many samples of real-life de-processed IC metal layer images which were obtained using an optical light microscope. We created sequences of various image processing filters which provide segmentation results of sufficient precision for our application. The filter sequences were fine-tuned to provide the best possible results depending on the properties of the IC manufacturing process and imaging technology. The proposed IC image segmentation filter sequences were experimentally tested and compared with conventional direct segmentation algorithms.
Multi-scale image segmentation method with visual saliency constraints and its application
NASA Astrophysics Data System (ADS)
Chen, Yan; Yu, Jie; Sun, Kaimin
2018-03-01
The object-based image analysis method has many advantages over pixel-based methods, so it is one of the current research hotspots. It is very important to obtain the image objects by multi-scale image segmentation in order to carry out object-based image analysis. The currently popular image segmentation methods mainly share the bottom-up segmentation principle, which is simple to realize, and the object boundaries obtained are accurate. However, the macro statistical characteristics of the image areas are difficult to take into account, and fragmented (over-segmented) results are difficult to avoid. In addition, when it comes to information extraction, target recognition and other applications, image targets are not equally important; some specific targets or target groups with particular features are worth more attention than the others. To avoid the problem of over-segmentation and highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visual saliency constraints. Visual saliency theory and a typical feature extraction method are adopted to obtain the visual saliency information, especially the macroscopic information to be analyzed. The visual saliency information is used as a distribution map of homogeneity weights, where each pixel is given a weight. This weight acts as one of the merging constraints in the multi-scale image segmentation. As a result, pixels that macroscopically belong to the same object but are locally different are more likely to be assigned to the same object. In addition, due to the constraint of the visual saliency model, the balance between local and macroscopic characteristics can be well controlled during the segmentation process for different objects. These controls improve the completeness of visually salient areas in the segmentation results while diluting the controlling effect for non-salient background areas. Experiments show that this method works better for texture image segmentation than traditional multi-scale image segmentation methods, and enables priority control over the salient objects of interest. The method has been used in image quality evaluation, scattered residential area extraction, sparse forest extraction and other applications to verify its validity. All applications showed good results.
Medical image segmentation using 3D MRI data
NASA Astrophysics Data System (ADS)
Voronin, V.; Marchuk, V.; Semenishchev, E.; Cen, Yigang; Agaian, S.
2017-05-01
Precise segmentation of three-dimensional (3D) magnetic resonance imaging (MRI) data can be a very useful computer-aided diagnosis (CAD) tool in clinical routines. Accurate automatic extraction of a 3D component from images obtained by magnetic resonance imaging (MRI) is a challenging segmentation problem due to the small size of the objects of interest (e.g., blood vessels, bones) in each 2D slice and the complex surrounding anatomical structures. Our objective is to develop a specific segmentation scheme for accurately extracting parts of bones from MRI images. In this paper, we use a segmentation algorithm based on a modified active contour method to extract the parts of bones from Magnetic Resonance Imaging (MRI) data sets. As a result, the proposed method demonstrates good accuracy in a comparison with existing segmentation approaches on real MRI data.
Coherent and Noncoherent Joint Processing of Sonar for Detection of Small Targets in Shallow Water.
Pan, Xiang; Jiang, Jingning; Li, Si; Ding, Zhenping; Pan, Chen; Gong, Xianyi
2018-04-10
A coherent-noncoherent joint processing framework is proposed for active sonar to combine diversity gain and beamforming gain for the detection of a small target in shallow water environments. The sonar utilizes widely spaced arrays to sense the environment and illuminate a target of interest from multiple angles. Meanwhile, it exploits spatial diversity for time-reversal focusing to suppress reverberation, mainly strong bottom reverberation. To enhance the robustness of time-reversal focusing, an adaptive iterative strategy is utilized in the processing framework. A probing signal is first transmitted, and echoes of a likely target are utilized as steering vectors for the second transmission. With spatial diversity, target bearing and range are estimated using a broadband signal model. Numerical simulations show that the novel sonar outperforms the traditional phased-array sonar due to the benefits of spatial diversity. The effectiveness of the proposed framework has been validated by the localization of a small target in at-lake experiments.
Sonar sound groups and increased terminal buzz duration reflect task complexity in hunting bats.
Hulgard, Katrine; Ratcliffe, John M
2016-02-09
More difficult tasks are generally regarded as such because they demand greater attention. Echolocators provide rare insight into this relationship because biosonar signals can be monitored. Here we show that bats produce longer terminal buzzes and more sonar sound groups during their approach to prey under presumably more difficult conditions. Specifically, we found Daubenton's bats, Myotis daubentonii, produced longer buzzes when aerial-hawking versus water-trawling prey, but that bats taking revolving air- and water-borne prey produced more sonar sound groups than did the bats when taking stationary prey. Buzz duration and sonar sound groups have been suggested to be independent means by which bats attend to would-be targets and other objects of interest. We suggest that for attacking bats both should be considered as indicators of task difficulty and that the buzz is, essentially, an extended sonar sound group.
Open-source software platform for medical image segmentation applications
NASA Astrophysics Data System (ADS)
Namías, R.; D'Amato, J. P.; del Fresno, M.
2017-11-01
Segmenting 2D and 3D images is a crucial and challenging problem in medical image analysis. Although several image segmentation algorithms have been proposed for different applications, no universal method currently exists. Moreover, their use is usually limited when detection of complex and multiple adjacent objects of interest is needed. In addition, the continually increasing volumes of medical imaging scans require more efficient segmentation software design and highly usable applications. In this context, we present an extension of our previous segmentation framework which allows the combination of existing explicit deformable models in an efficient and transparent way, handling simultaneously different segmentation strategies and interacting with a graphic user interface (GUI). We present the object-oriented design and the general architecture which consist of two layers: the GUI at the top layer, and the processing core filters at the bottom layer. We apply the framework for segmenting different real-case medical image scenarios on public available datasets including bladder and prostate segmentation from 2D MRI, and heart segmentation in 3D CT. Our experiments on these concrete problems show that this framework facilitates complex and multi-object segmentation goals while providing a fast prototyping open-source segmentation tool.
A Novel Segmentation Approach Combining Region- and Edge-Based Information for Ultrasound Images
Luo, Yaozhong; Liu, Longzhong; Li, Xuelong
2017-01-01
Ultrasound imaging has become one of the most popular medical imaging modalities with numerous diagnostic applications. However, ultrasound (US) image segmentation, which is the essential process for further analysis, is a challenging task due to the poor image quality. In this paper, we propose a new segmentation scheme to combine both region- and edge-based information into the robust graph-based (RGB) segmentation method. The only interaction required is to select two diagonal points to determine a region of interest (ROI) on the original image. The ROI image is smoothed by a bilateral filter and then contrast-enhanced by histogram equalization. Then, the enhanced image is filtered by pyramid mean shift to improve homogeneity. With the optimization of particle swarm optimization (PSO) algorithm, the RGB segmentation method is performed to segment the filtered image. The segmentation results of our method have been compared with the corresponding results obtained by three existing approaches, and four metrics have been used to measure the segmentation performance. The experimental results show that the method achieves the best overall performance and gets the lowest ARE (10.77%), the second highest TPVF (85.34%), and the second lowest FPVF (4.48%). PMID:28536703
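The pre-processing chain described above can be sketched with OpenCV as follows; the ROI file name and filter parameters are illustrative, and the RGB graph-based segmentation with PSO optimization is not reproduced.

```python
import cv2

roi = cv2.imread("ultrasound_roi.png")                           # hypothetical ROI crop
smoothed = cv2.bilateralFilter(roi, d=9, sigmaColor=75, sigmaSpace=75)
gray = cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY)
enhanced = cv2.equalizeHist(gray)                                # histogram equalization
enhanced_bgr = cv2.cvtColor(enhanced, cv2.COLOR_GRAY2BGR)
homogeneous = cv2.pyrMeanShiftFiltering(enhanced_bgr, sp=15, sr=30)  # mean-shift smoothing
```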
Colour image segmentation using unsupervised clustering technique for acute leukemia images
NASA Astrophysics Data System (ADS)
Halim, N. H. Abd; Mashor, M. Y.; Nasir, A. S. Abdul; Mustafa, N.; Hassan, R.
2015-05-01
Colour image segmentation has become more popular in computer vision due to its important role in most medical analysis tasks. This paper proposes a comparison between different colour components of the RGB (red, green, blue) and HSI (hue, saturation, intensity) colour models that are used to segment acute leukemia images. First, partial contrast stretching is applied on the leukemia images to increase the visual aspect of the blast cells. Then, an unsupervised moving k-means clustering algorithm is applied on the various colour components of the RGB and HSI colour models for the purpose of segmenting the blast cells from the red blood cells and background regions in the leukemia image. Different colour components of the RGB and HSI colour models have been analyzed in order to identify the colour component that gives good segmentation performance. The segmented images are then processed using a median filter and a region growing technique to reduce noise and smooth the images. The results show that segmentation using the saturation component of the HSI colour model is the best at segmenting the nuclei of the blast cells in acute leukemia images, as compared to the other colour components of the RGB and HSI colour models.
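A small sketch of clustering on the saturation component, which the study found most effective; HSV saturation from scikit-image is used here as a close stand-in for HSI saturation, plain k-means stands in for the moving k-means variant, and the file name is a placeholder.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage import io, img_as_float
from skimage.color import rgb2hsv
from sklearn.cluster import KMeans

img = img_as_float(io.imread("leukemia_smear.png"))      # hypothetical blood-smear image
sat = rgb2hsv(img)[..., 1]                               # saturation channel
labels = KMeans(n_clusters=3, n_init=10, random_state=0) \
    .fit_predict(sat.reshape(-1, 1)).reshape(sat.shape)
# take the most saturated cluster as the blast-cell nuclei, then median-smooth it
nuclei = labels == np.argmax([sat[labels == k].mean() for k in range(3)])
nuclei = median_filter(nuclei.astype(np.uint8), size=5)
```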
1990-05-20
in the fields of mobile robots and military systems. In both fields extensive use is made of a variety of dissimilar sensors to gather information (Luo...and Kay [27]). For example, a mobile robot might use both sonar and stereo imaging data to get a better estimate of the distance to the nearest wall...Estimation and Modulation Theory, volume 1. McGraw-Hill, 1968. [45] R. H. Volin. Techniques and applications of mechanical signature analysis. Shock
NASA Astrophysics Data System (ADS)
Xu, G.; Bemis, K. G.
2014-12-01
Seafloor hydrothermal systems feature intricate interconnections among oceanic, geological, hydrothermal, and biological processes. The advent of the NEPTUNE observatory operated by Ocean Networks Canada at the Endeavour Segment, Juan de Fuca Ridge, enables scientists to study these interconnections through multidisciplinary, continuous, real-time observations. The multidisciplinary observatory instruments deployed at the Grotto Mound, a major study site of the NEPTUNE observatory, make it an ideal place to study the response of a seafloor hydrothermal system to geological and oceanic processes. In this study, we use the multidisciplinary datasets recorded by the NEPTUNE observatory instruments as observational tools to demonstrate two different aspects of the response of hydrothermal activity at the Grotto Mound to geological and oceanic processes. First, we investigate a recent increase in venting temperature and heat flux at Grotto observed by the Benthic and Resistivity Sensors (BARS) and the Cabled Observatory Vent Imaging Sonar (COVIS), respectively. This event started in March 2014 and is still evolving at the time of writing this abstract. An initial interpretation in light of the seismic data recorded by a neighboring ocean-bottom seismometer on the NEPTUNE observatory suggests that the temperature and heat flux increase was probably triggered by local seismic activity. Comparison of the observations with the results of a 1-D mathematical model of sub-seafloor hydrothermal circulation elucidates the potential mechanisms underlying the hydrothermal response to local earthquakes. Second, we observe significant tidal oscillations in the venting temperature time series recorded by BARS and in the acoustic imaging of hydrothermal plumes by COVIS, which is evidence for a hydrothermal response to ocean tides and currents. We interpret the tidal oscillations of venting temperature as a result of tidal loading on a poroelastic medium. We then invoke poroelastic theories to estimate the crustal permeability, a fundamental property of subsurface hydrothermal circulation, from the phase shift of the tidal oscillations of venting temperature relative to the ambient ocean tides. Together, these results shed light on the influences of seismic and oceanic processes on a seafloor hydrothermal system.
NASA Astrophysics Data System (ADS)
Lurton, Xavier; Eleftherakis, Dimitrios; Augustin, Jean-Marie
2018-06-01
The sediment backscatter strength measured by multibeam echosounders is a key feature for seafloor mapping, either qualitative (image mosaics) or quantitative (extraction of classifying features). An important phenomenon, often underestimated, is the dependence of the backscatter level on the azimuth angle imposed by the survey line directions: strong level differences at varying azimuth can be observed in the case of organized roughness of the seabed, usually caused by tidal currents over sandy sediments. This paper presents a number of experimental results obtained from shallow-water cruises using a 300-kHz multibeam echosounder and specially dedicated to the study of this azimuthal effect, with a specific survey strategy involving systematic coverage of reference areas following "compass rose" patterns. The results show for some areas a very strong dependence of the backscatter level, with differences of up to about 10 dB at intermediate oblique angles, although the presence of these ripples cannot be observed directly, neither from the bathymetry data nor from the sonar image, due to the insufficient resolution capability of the sonar. An elementary model of backscattering from rippled interfaces explains and supports these observations. The consequences of this dependence of backscatter on survey azimuth for current strategies of backscatter data acquisition and exploitation are discussed.
NASA Astrophysics Data System (ADS)
White, S. M.; McClinton, J. T.
2011-12-01
Beyond the ability of modern near-bottom sonar systems to deliver air-photo-like images of the seafloor that help guide fieldwork, there is a tremendous amount of information hidden within sonar data that is rarely exploited for geologic mapping. Seafloor texture, backscatter amplitude, seafloor slope and roughness data can provide clues about seafloor geology but are not straightforward to interpret. We present techniques for seafloor classification in volcanic terrains that integrate the capability of high-resolution, near-bottom sonar instruments to cover extensive areas of seafloor with the ability of visual mapping to discriminate differences in volcanic terrain. These techniques are adapted from the standard practices of terrestrial remote sensing for use in the deep-seafloor volcanic environment. A combination of sonar backscatter and bathymetry is used to supplement direct seafloor visual observations by geologists to make quasi-geologic thematic maps that are consistent, objective, and most importantly spatially complete. We have taken two approaches to producing thematic maps of the seafloor: one for the accurate mapping of fine-scale lava morphology (e.g., pillow, lobate and sheet lava) and one for the differentiation of distinct seafloor terrain types on a larger scale (e.g., hummocky or smooth). Mapping lava morphology is most accurate using fuzzy logic capable of making inferences between similar morphotypes (e.g., pillow and lobate) and where high-resolution side-scan and bathymetry data coexist. We present examples of lava morphology maps from the Galápagos Spreading Center that show the results from several analyses using different types of input data. Lava morphology is an important source of information on volcanic emplacement and eruptive dynamics. Terrain modeling can be accomplished at any resolution level, depending on the desired use of the model. For volcanic processes, the input data need to be at the appropriate scale to resolve individual volcanic features on the seafloor (e.g., small haystacks and lava channels). We present examples from the East Pacific Rise, which show that the number of volcanic terrains differs from the tectonic provinces defined by following the spreading axis. Our terrain modeling suggests that differences in ocean crust construction and evolution can be meaningfully identified and explored without a priori assumptions about the geologic processes in a given region.
Segmentation and learning in the quantitative analysis of microscopy images
NASA Astrophysics Data System (ADS)
Ruggiero, Christy; Ross, Amy; Porter, Reid
2015-02-01
In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest to use machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.
Segmentation of medical images using explicit anatomical knowledge
NASA Astrophysics Data System (ADS)
Wilson, Laurie S.; Brown, Stephen; Brown, Matthew S.; Young, Jeanne; Li, Rongxin; Luo, Suhuai; Brandt, Lee
1999-07-01
Knowledge-based image segmentation is defined in terms of the separation of image analysis procedures and the representation of knowledge. Such an architecture is particularly suitable for medical image segmentation, because of the large amount of structured domain knowledge. A general methodology for the application of knowledge-based methods to medical image segmentation is described. This includes frames for knowledge representation, fuzzy logic for anatomical variations, and a strategy for determining the order of segmentation from the modal specification. This method has been applied to three separate problems: 3D thoracic CT, chest X-rays and CT angiography. The application of the same methodology to such a range of applications suggests a major role in medical imaging for segmentation methods incorporating a representation of anatomical knowledge.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-07
... the Navy in SOCAL, the Hawaii Range Complex, and the Atlantic Fleet Active Sonar Training Study Area... estimated usage of two sonar systems, they remain well within the authorized 5-year source amounts and the... exercise report indicates that the Navy exceeded the average annual amount of two sonar systems during this...
50 CFR 218.80 - Specified activity and specified geographical region.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Study Area also includes U.S. Navy pierside locations where sonar maintenance and testing occurs within...): (A) M3—an average of 461 hours per year. (B) [Reserved] (vii) Swimmer Detection Sonar (SD): (A) SD1 and SD2—an average of 230 hours per year. (B) [Reserved] (viii) Forward Looking Sonar (FLS): (A) FLS2...
1996-12-16
the Invention: The present invention relates to planar sonar arrays. More particularly, the invention relates to the arrangement of transducer elements in planar sonar arrays. (2) Description of the Prior Art: Conventional planar sonar array designs typically comprise ceramic...signal conditioners (preamplifiers) as short as possible. However, this requirement complicates fabrication and provides little space to...
Ultrasonic Methods for Human Motion Detection
2006-10-01
contacts. The active method utilizes continuous-wave ultrasonic Doppler sonar. Human motions have unique Doppler signatures and their combination...The present article reports results of human motion investigations with the help of CW ultrasonic Doppler sonar. Low-cost, low-power ultrasonic motion...have been developed for operation in air [10]. Benefits of using ultrasonic CW Doppler sonar included the low cost, low electric noise, small size
Oswald, Julie N; Norris, Thomas F; Yack, Tina M; Ferguson, Elizabeth L; Kumar, Anurag; Nissen, Jene; Bell, Joel
2016-01-01
Passive acoustic data collected from marine autonomous recording units deployed off Jacksonville, FL (from 13 September to 8 October 2009 and 3 December 2009 to 8 January 2010), were analyzed for detection of cetaceans and Navy sonar. Cetaceans detected included Balaenoptera acutorostrata, Eubalaena glacialis, B. borealis, Physeter macrocephalus, blackfish, and delphinids. E. glacialis were detected at shallow and, somewhat unexpectedly, deep sites. P. macrocephalus were characterized by a strong diel pattern. B. acutorostrata showed the strongest relationship between sonar activity and vocal behavior. These results provide a preliminary assessment of cetacean occurrence off Jacksonville and new insights on vocal responses to sonar.
Moisan, Emmanuel; Charbonnier, Pierre; Foucher, Philippe; Grussenmeyer, Pierre; Guillemin, Samuel; Koehl, Mathieu
2015-01-01
In this paper, we focus on the construction of a full 3D model of a canal tunnel by combining terrestrial laser (for its above-water part) and sonar (for its underwater part) scans collected from static acquisitions. The modeling of such a structure is challenging because the sonar device is used in a narrow environment that induces many artifacts. Moreover, the location and the orientation of the sonar device are unknown. In our approach, sonar data are first simultaneously denoised and meshed. Then, above- and under-water point clouds are co-registered to generate directly the full 3D model of the canal tunnel. Faced with the lack of overlap between both models, we introduce a robust algorithm that relies on geometrical entities and partially-immersed targets, which are visible in both the laser and sonar point clouds. A full 3D model, visually promising, of the entrance of a canal tunnel is obtained. The analysis of the method raises several improvement directions that will help with obtaining more accurate models, in a more automated way, in the limits of the involved technology. PMID:26690444
NASA Astrophysics Data System (ADS)
Coggins, Liah; Ghadouani, Anas; Ghisalberti, Marco
2014-05-01
Traditionally, bathymetry mapping of ponds, lakes and rivers has used techniques that are low in spatial resolution, sometimes subjective in terms of precision and accuracy, labour intensive, and that require a high level of safety precautions. In waste stabilisation ponds (WSP) in particular, sludge heights, and thus sludge volume, are commonly measured using a sludge judge (a clear plastic pipe with length markings). A remote-control boat fitted with a GPS-equipped sonar unit can improve the resolution of depth measurements and reduce safety and labour requirements. Sonar devices equipped with GPS technology, also known as fish finders, are readily available and widely used by boaters. Through the use of GPS technology in conjunction with sonar, the location and depth can be recorded electronically onto a memory card. However, despite its high applicability to the field, this technology has so far been underutilised. In the case of WSP, the sonar can measure the water depth to the top of the sludge layer, which can then be used to develop contour maps of sludge distribution and to determine sludge volume. The coupling of sonar technology with a remotely operated vehicle has several advantages over traditional measurement techniques, particularly in removing the human subjectivity of readings and in allowing the sonar to collect more data points in a shorter period of time, continuously, and with a much higher spatial resolution. The GPS-sonar-equipped remote-control boat has been tested on more than 50 WSP within Western Australia, and has shown a very strong correlation (R2 = 0.98) between spot readings taken with the sonar and with a sludge judge. This shows that the remote-control boat with the GPS-sonar device is capable of providing sludge bathymetry with greatly increased spatial resolution, while greatly reducing profiling time. Remotely operated vehicles, such as the one built in this study, are useful not only for determining sludge distribution, but also for calculating sludge accumulation rates and for evaluating pond hydraulic efficiency (e.g., as input bathymetry for computational fluid dynamics models). This technology is not limited to wastewater management and could potentially have wider application in the monitoring of other small to medium water bodies, including reservoirs, channels, recreational water bodies, river beds, mine tailings dams and commercial ports.
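To make the volume computation concrete, a sketch of turning logged GPS-sonar points into a sludge volume is given below; the log file name, pond design depth and grid spacing are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import griddata

# columns: easting (m), northing (m), depth to top of sludge (m) from the sonar log
x, y, depth_to_sludge = np.loadtxt("sonar_log.csv", delimiter=",", unpack=True)
pond_depth = 1.5                                         # assumed design water depth (m)
cell = 0.5                                               # grid spacing (m)
gx, gy = np.meshgrid(np.arange(x.min(), x.max(), cell),
                     np.arange(y.min(), y.max(), cell))
grid = griddata((x, y), depth_to_sludge, (gx, gy), method="linear")
sludge_thickness = np.clip(pond_depth - grid, 0, None)   # sludge blanket thickness per cell
volume_m3 = np.nansum(sludge_thickness) * cell * cell
```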
Wensveen, Paul J; Kvadsheim, Petter H; Lam, Frans-Peter A; von Benda-Beckmann, Alexander M; Sivle, Lise D; Visser, Fleur; Curé, Charlotte; Tyack, Peter L; Miller, Patrick J O
2017-11-15
Exposure to underwater sound can cause permanent hearing loss and other physiological effects in marine animals. To reduce this risk, naval sonars are sometimes gradually increased in intensity at the start of transmission ('ramp-up'). Here, we conducted experiments in which tagged humpback whales were approached with a ship to test whether a sonar operation preceded by ramp-up reduced three risk indicators - maximum sound pressure level (SPLmax), cumulative sound exposure level (SELcum) and minimum source-whale range (Rmin) - compared with a sonar operation not preceded by ramp-up. Whales were subject to one no-sonar control session and either two successive ramp-up sessions (RampUp1, RampUp2) or a ramp-up session (RampUp1) and a full-power session (FullPower). Full-power sessions were conducted only twice; for other whales we used acoustic modelling that assumed transmission of the full-power sequence during their no-sonar control. Averaged over all whales, risk indicators in RampUp1 (n=11) differed significantly from those in FullPower (n=12) by -3.0 dB (SPLmax), -2.0 dB (SELcum) and +168 m (Rmin), but not significantly from those in RampUp2 (n=9). Only five whales in RampUp1, four whales in RampUp2 and none in FullPower or control sessions avoided the sound source. For RampUp1, we found statistically significant differences in risk indicators between whales that avoided the sonar and whales that did not: -4.7 dB (SPLmax), -3.4 dB (SELcum) and +291 m (Rmin). In contrast, for RampUp2, these differences were smaller and not significant. This study suggests that sonar ramp-up has a positive but limited mitigative effect for humpback whales overall, but that ramp-up can reduce the risk of harm more effectively in situations when animals are more responsive and likely to avoid the sonar, e.g. owing to novelty of the stimulus, when they are in the path of an approaching sonar ship.
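For readers unfamiliar with the two acoustic risk indicators, the sketch below shows how SPLmax and SELcum are typically computed from a received pressure time series; the window length is an illustrative choice, not the study's analysis protocol.

```python
import numpy as np

def spl_max_and_sel_cum(pressure_pa, fs, window_s=1.0, p_ref=1e-6):
    """SPLmax over fixed windows and SELcum over the full exposure, dB re 1 uPa."""
    n = int(window_s * fs)
    windows = pressure_pa[: len(pressure_pa) // n * n].reshape(-1, n)
    rms = np.sqrt((windows ** 2).mean(axis=1))
    spl_max = 20 * np.log10(rms.max() / p_ref)                           # dB re 1 uPa
    sel_cum = 10 * np.log10((pressure_pa ** 2).sum() / fs / p_ref ** 2)  # dB re 1 uPa^2 s
    return spl_max, sel_cum
```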
NASA Technical Reports Server (NTRS)
1988-01-01
Papers concerning remote sensing applications for exploration geology are presented, covering topics such as remote sensing technology, data availability, frontier exploration, and exploration in mature basins. Other topics include offshore applications, geobotany, mineral exploration, engineering and environmental applications, image processing, and prospects for future developments in remote sensing for exploration geology. Consideration is given to the use of data from Landsat, MSS, TM, SAR, short wavelength IR, the Geophysical Environmental Research Airborne Scanner, gas chromatography, sonar imaging, the Airborne Visible-IR Imaging Spectrometer, field spectrometry, airborne thermal IR scanners, SPOT, AVHRR, SIR, the Large Format camera, and multi-time-phase satellite photographs.
Debris avalanches and slumps on the margins of volcanic domes on Venus: Characteristics of deposits
NASA Technical Reports Server (NTRS)
Bulmer, M. H.; Guest, J. E.; Beretan, K.; Michaels, Gregory A.; Saunders, R. Stephen
1992-01-01
Modified volcanic domes, referred to as collapsed margin domes, have diameters greater than those of terrestrial domes and were therefore thought to have no suitable terrestrial analogue. Comparison of the collapsed debris using the Magellan SAR images with volcanic debris avalanches on Earth has revealed morphological similarities. Some volcanic features identified on the seafloor from sonar images have diameters similar to those on Venus and also display scalloped margins, indicating modification by collapse. Examination of the SAR images of collapsed dome features reveals a number of distinct morphologies to the collapsed masses. Ten examples of collapsed margin domes displaying a range of differing morphologies and collapsed masses have been selected and examined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, X; Rossi, P; Jani, A
Purpose: Transrectal ultrasound (TRUS) is the standard imaging modality for image-guided prostate-cancer interventions (e.g., biopsy and brachytherapy) due to its versatility and real-time capability. Accurate segmentation of the prostate plays a key role in biopsy needle placement, treatment planning, and motion monitoring. As ultrasound images have a relatively low signal-to-noise ratio (SNR), automatic segmentation of the prostate is difficult. However, manual segmentation during biopsy or radiation therapy can be time consuming. We are developing an automated method to address this technical challenge. Methods: The proposed segmentation method consists of two major stages: the training stage and the segmentation stage. During the training stage, patch-based anatomical features are extracted from the registered training images with patient-specific information, because these training images have been mapped to the new patient's images, and the more informative anatomical features are selected to train the kernel support vector machine (KSVM). During the segmentation stage, the selected anatomical features are extracted from the newly acquired image as the input of the well-trained KSVM, and the output of this trained KSVM is the segmented prostate of this patient. Results: This segmentation technique was validated with a clinical study of 10 patients. The accuracy of our approach was assessed using manual segmentation. The mean volume Dice overlap coefficient was 89.7±2.3%, and the average surface distance was 1.52±0.57 mm between our and the manual segmentation, which indicates that the automatic segmentation method works well and could be used for 3D ultrasound-guided prostate intervention. Conclusion: We have developed a new prostate segmentation approach based on the optimal feature learning framework, demonstrated its clinical feasibility, and validated its accuracy with manual segmentation (the gold standard). This segmentation technique could be a useful tool for image-guided interventions in prostate-cancer diagnosis and treatment. This research is supported in part by DOD PCRP Award W81XWH-13-1-0269 and National Cancer Institute (NCI) Grant CA114313.
Rigid shape matching by segmentation averaging.
Wang, Hongzhi; Oliensis, John
2010-04-01
We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking.
A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.
Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle
2016-03-08
On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and the target registration error (TRE), measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., the kidney), the thresholding method provided the best speed (< 1 ms) with satisfying accuracy (Dice = 0.95). When the image contrast was low, the VR-TPDS method produced the best automatic contour. Results suggest an image quality determination procedure before segmentation and a combination of different methods for optimal segmentation with the on-board MR-IGRT system.
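The two evaluation metrics used above can be computed from binary masks as in the following sketch; the spacing argument (to convert pixel units to millimetres) is an illustrative addition.

```python
import numpy as np

def dice_coefficient(auto_mask, manual_mask):
    a, m = auto_mask.astype(bool), manual_mask.astype(bool)
    return 2.0 * np.logical_and(a, m).sum() / (a.sum() + m.sum())

def target_registration_error(auto_mask, manual_mask, spacing=(1.0, 1.0)):
    # distance between the centroids of the automatic and manual ROIs
    ca = np.array(np.nonzero(auto_mask)).mean(axis=1) * np.asarray(spacing)
    cm = np.array(np.nonzero(manual_mask)).mean(axis=1) * np.asarray(spacing)
    return float(np.linalg.norm(ca - cm))
```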
User-guided segmentation for volumetric retinal optical coherence tomography images
Yin, Xin; Chao, Jennifer R.; Wang, Ruikang K.
2014-01-01
Despite the existence of automatic segmentation techniques, trained graders still rely on manual segmentation to provide retinal layers and features from clinical optical coherence tomography (OCT) images for accurate measurements. To bridge the gap between this time-consuming need of manual segmentation and currently available automatic segmentation techniques, this paper proposes a user-guided segmentation method to perform the segmentation of retinal layers and features in OCT images. With this method, by interactively navigating three-dimensional (3-D) OCT images, the user first manually defines user-defined (or sketched) lines at regions where the retinal layers appear very irregular for which the automatic segmentation method often fails to provide satisfactory results. The algorithm is then guided by these sketched lines to trace the entire 3-D retinal layer and anatomical features by the use of novel layer and edge detectors that are based on robust likelihood estimation. The layer and edge boundaries are finally obtained to achieve segmentation. Segmentation of retinal layers in mouse and human OCT images demonstrates the reliability and efficiency of the proposed user-guided segmentation method. PMID:25147962
Reconstruction of incomplete cell paths through a 3D-2D level set segmentation
NASA Astrophysics Data System (ADS)
Hariri, Maia; Wan, Justin W. L.
2012-02-01
Segmentation of fluorescent cell images has been a popular technique for tracking live cells. One challenge of segmenting cells from fluorescence microscopy is that cells in fluorescent images frequently disappear. When the images are stacked together to form a 3D image volume, the disappearance of the cells leads to broken cell paths. In this paper, we present a segmentation method that can reconstruct incomplete cell paths. The key idea of this model is to perform 2D segmentation in a 3D framework. The 2D segmentation captures the cells that appear in the image slices while the 3D segmentation connects the broken cell paths. The formulation is similar to the Chan-Vese level set segmentation which detects edges by comparing the intensity value at each voxel with the mean intensity values inside and outside of the level set surface. Our model, however, performs the comparison on each 2D slice with the means calculated by the 2D projected contour. The resulting effect is to segment the cells on each image slice. Unlike segmentation on each image frame individually, these 2D contours together form the 3D level set function. By enforcing minimum mean curvature on the level set surface, our segmentation model is able to extend the cell contours right before (and after) the cell disappears (and reappears) into the gaps, eventually connecting the broken paths. We will present segmentation results of C2C12 cells in fluorescent images to illustrate the effectiveness of our model qualitatively and quantitatively by different numerical examples.
Wang, Yuliang; Zhang, Zaicheng; Wang, Huimin; Bi, Shusheng
2015-01-01
Cell image segmentation plays a central role in numerous biological studies and clinical applications. As a result, the development of cell image segmentation algorithms with high robustness and accuracy is attracting more and more attention. In this study, an automated cell image segmentation algorithm is developed to improve cell boundary detection and the segmentation of clustered cells for all cells in the field of view in negative phase contrast images. A new method that combines a thresholding method and an edge-based active contour method is proposed to optimize cell boundary detection. In order to segment clustered cells, the geographic peaks of cell light intensity are utilized to detect the numbers and locations of the clustered cells. In this paper, the working principles of the algorithms are described. The influence of parameters in cell boundary detection and the selection of the threshold value on the final segmentation results are investigated. Finally, the proposed algorithm is applied to negative phase contrast images from different experiments, and the performance of the proposed method is evaluated. Results show that the proposed method achieves optimized cell boundary detection and highly accurate segmentation for clustered cells. PMID:26066315
Automatic segmentation of the prostate on CT images using deep learning and multi-atlas fusion
NASA Astrophysics Data System (ADS)
Ma, Ling; Guo, Rongrong; Zhang, Guoyi; Tade, Funmilayo; Schuster, David M.; Nieh, Peter; Master, Viraj; Fei, Baowei
2017-02-01
Automatic segmentation of the prostate on CT images has many applications in prostate cancer diagnosis and therapy. However, prostate CT image segmentation is challenging because of the low contrast of soft tissue on CT images. In this paper, we propose an automatic segmentation method that combines a deep learning method with multi-atlas refinement. First, instead of segmenting the whole image, we extract a region of interest (ROI) to remove irrelevant regions. Then, we use convolutional neural networks (CNN) to learn deep features for distinguishing prostate pixels from non-prostate pixels and obtain a preliminary segmentation. Unlike handcrafted features, the CNN automatically learns deep features adapted to the data. Finally, we select similar atlases to refine the initial segmentation results. The proposed method has been evaluated on a dataset of 92 prostate CT images. Experimental results show that our method achieved a Dice similarity coefficient of 86.80% as compared to the manual segmentation. The deep learning based method can provide a useful tool for automatic segmentation of the prostate on CT images and thus can have a variety of clinical applications.
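The abstract above describes a CNN that separates prostate from non-prostate pixels inside an extracted ROI. The following minimal PyTorch sketch shows what such a patch-based classifier could look like; the architecture, patch size, and names are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Tiny CNN that classifies a 32x32 CT patch as prostate vs. non-prostate
    (hypothetical architecture; the paper's network details are not reproduced)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 8 -> 4
        )
        self.classifier = nn.Linear(64 * 4 * 4, 2)  # two classes

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# toy usage: random patches stand in for ROI-cropped CT data
model = PatchCNN()
patches = torch.randn(8, 1, 32, 32)
labels = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(patches), labels)
loss.backward()  # an optimizer step would follow in a real training loop
```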
Clustering approach for unsupervised segmentation of malarial Plasmodium vivax parasite
NASA Astrophysics Data System (ADS)
Abdul-Nasir, Aimi Salihah; Mashor, Mohd Yusoff; Mohamed, Zeehaida
2017-10-01
Malaria is a global health problem, particularly in Africa and South Asia, where it causes countless deaths and morbidity cases. Efficient control and prompt treatment of this disease require early detection and accurate diagnosis due to the large number of cases reported yearly. To achieve this aim, this paper proposes an unsupervised pixel-based segmentation approach for the malaria parasite to automate the diagnosis of malaria. In this study, a modified clustering algorithm, namely enhanced k-means (EKM) clustering, is proposed for malaria image segmentation. In the proposed EKM clustering, the concept of variance and a new version of the transferring process for clustered members are used to assist the assignment of data to the proper centre during clustering, so that well-segmented malaria images can be generated. The effectiveness of the proposed EKM clustering has been analyzed qualitatively and quantitatively by comparing this algorithm with two popular image segmentation techniques, namely Otsu's thresholding and k-means clustering. The experimental results show that the proposed EKM clustering has successfully segmented 100 malaria images of the P. vivax species with segmentation accuracy, sensitivity and specificity of 99.20%, 87.53% and 99.58%, respectively. Hence, the proposed EKM clustering can be considered an image segmentation tool for segmenting malaria images.
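For readers unfamiliar with the baseline that the EKM method extends, a plain k-means intensity clustering of an image might look like the sketch below (Python/NumPy). The variance-guided transfer of clustered members that defines EKM is not reproduced; the function and variable names are placeholders.

```python
import numpy as np

def kmeans_segment(image, k=3, iters=20, seed=0):
    """Plain k-means on pixel intensities as a baseline for malaria image
    segmentation; the variance-guided member transfer of the proposed EKM
    clustering is not reproduced here."""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, 1).astype(float)
    centers = rng.choice(pixels.ravel(), size=k, replace=False).reshape(k, 1)
    for _ in range(iters):
        labels = np.argmin(np.abs(pixels - centers.T), axis=1)  # nearest center
        for c in range(k):
            members = pixels[labels == c]
            if members.size:
                centers[c] = members.mean()
    return labels.reshape(image.shape)

# toy usage on a synthetic image with three intensity populations
img = np.concatenate([np.full((40, 40), 30.), np.full((40, 40), 120.),
                      np.full((40, 40), 220.)], axis=1)
img += np.random.default_rng(1).normal(0, 10, img.shape)
seg = kmeans_segment(img, k=3)
```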
Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei
2016-01-01
Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing special subject information depends on this extraction. On the basis of WorldView-2 high-resolution data and the optimal segmentation parameter method of object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of the control variables and the combination of the heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762
Kuc, Roman
2018-04-01
This paper describes phase-sensitive and phase-insensitive processing of monaural echolocation waveforms to generate target maps. Composite waveforms containing both the emission and echoes are processed to estimate the target impulse response using an audible sonar. Phase-sensitive processing yields the composite signal envelope, while phase-insensitive processing, which starts from the composite waveform power spectrum, yields the envelope of the autocorrelation function. Analysis and experimental verification show that multiple echoes form an autocorrelation function that produces near-range phantom-reflector artifacts. These artifacts interfere with true target echoes when the first true echo occurs at a time that is less than the total duration of the target echoes. An initial comparison of phase-sensitive and phase-insensitive maps indicates that both display important target features, suggesting that phase is not vital. A closer comparison illustrates the improved resolution of phase-sensitive processing, the near-range phantom reflectors produced by phase-insensitive processing, and echo interference and multiple reflection artifacts that were independent of the processing.
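The contrast between the two processing chains can be reproduced on synthetic data: the phase-sensitive chain takes the envelope of the composite waveform via the analytic signal, while the phase-insensitive chain starts from the power spectrum and therefore only recovers the envelope of the autocorrelation. The sketch below (Python/SciPy) uses a synthetic chirp emission and hypothetical echo delays purely for illustration.

```python
import numpy as np
from scipy.signal import hilbert, chirp

fs = 200_000                           # sampling rate of a sonar-like waveform (assumed)
t = np.arange(0, 0.01, 1 / fs)
emission = chirp(t, f0=4_000, f1=12_000, t1=t[-1]) * np.hanning(t.size)

# hypothetical composite: emission plus two delayed, attenuated echoes
composite = np.zeros(4 * t.size)
composite[: t.size] += emission
composite[1500: 1500 + t.size] += 0.5 * emission
composite[2100: 2100 + t.size] += 0.3 * emission

# phase-sensitive processing: envelope of the composite via the analytic signal
envelope = np.abs(hilbert(composite))

# phase-insensitive processing: start from the power spectrum, which discards
# phase, and recover only the envelope of the autocorrelation function
power_spectrum = np.abs(np.fft.fft(composite)) ** 2
autocorr = np.fft.ifft(power_spectrum).real
autocorr_envelope = np.abs(hilbert(autocorr))
# cross terms between the two echoes appear near zero lag in autocorr_envelope,
# which corresponds to the near-range phantom-reflector artifact discussed above
```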
Acoustic water bottom investigation with a remotely operated watercraft survey system
NASA Astrophysics Data System (ADS)
Yamasaki, Shintaro; Tabusa, Tomonori; Iwasaki, Shunsuke; Hiramatsu, Masahiro
2017-12-01
This paper describes a remotely operated investigation system developed by combining a modern leisure-use fish finder and an unmanned watercraft to survey water bottom topography and other data related to bottom materials. Current leisure-use fish finders have strong depth sounding capabilities and can provide precise sonar images and bathymetric information. Because these sonar instruments are lightweight and small, they can be used on unmanned small watercraft. With the developed system, an operator can direct the heading of an unmanned watercraft and monitor a PC display showing real-time positioning information through the use of onboard equipment and long-distance communication devices. Here, we explain how the system was developed and demonstrate the use of the system in an area of submerged woods in a lake. The system is low cost, easy to use, and mobile. It should be useful in surveying areas that have heretofore been hard to investigate, including remote, small, and shallow lakes, for example, volcanic and glacial lakes.
Le Faivre, Julien; Duhamel, Alain; Khung, Suonita; Faivre, Jean-Baptiste; Lamblin, Nicolas; Remy, Jacques; Remy-Jardin, Martine
2016-11-01
To evaluate the impact of CT perfusion imaging on the detection of peripheral chronic pulmonary embolism (CPE), 62 patients underwent a dual-energy chest CT angiographic examination with (a) reconstruction of diagnostic and perfusion images and (b) depiction of vascular features of peripheral CPE on diagnostic images and of perfusion defects on perfusion images (20 segments/patient; total: 1240 segments examined). The interpretation of diagnostic images was of two types: (a) standard (i.e., based on cross-sectional images alone) or (b) detailed (i.e., based on cross-sectional images and MIPs). The segment-based analysis showed (a) 1179 segments analyzable on both imaging modalities and 61 segments rated as nonanalyzable on perfusion images; and (b) the percentage of diseased segments was increased by 7.2 % when perfusion imaging was compared to the detailed reading of diagnostic images, and by 26.6 % when compared to the standard reading of images. At a patient level, the extent of peripheral CPE was higher on perfusion imaging, with a greater impact when compared to the standard reading of diagnostic images (number of patients with a greater number of diseased segments: n = 45; 72.6 % of the study population). Perfusion imaging allows recognition of a greater extent of peripheral CPE compared to diagnostic imaging. • Dual-energy computed tomography generates standard diagnostic imaging and lung perfusion analysis. • Depiction of CPE on central arteries relies on standard diagnostic imaging. • Detection of peripheral CPE is improved by perfusion imaging.
Denoising and segmentation of retinal layers in optical coherence tomography images
NASA Astrophysics Data System (ADS)
Dash, Puspita; Sigappi, A. N.
2018-04-01
Optical Coherence Tomography (OCT) is an imaging technique used to localize the intra-retinal boundaries for the diagnosis of macular diseases. Due to speckle noise and low image contrast, accurate segmentation of individual retinal layers is difficult. Hence, a method for retinal layer segmentation from OCT images is presented. This paper proposes a pre-processing filtering approach for denoising and a graph-based technique for segmenting retinal layers in OCT images. These techniques are used for segmentation of retinal layers in normal subjects as well as patients with Diabetic Macular Edema. An algorithm based on gradient information and shortest path search is applied to optimize the edge selection. In this paper the four main layers of the retina are segmented, namely the internal limiting membrane (ILM), retinal pigment epithelium (RPE), inner nuclear layer (INL) and outer nuclear layer (ONL). The proposed method is applied on a database of OCT images from ten normal and twenty DME-affected patients, and the results are found to be promising.
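A simplified version of the gradient-plus-shortest-path idea is a column-wise dynamic program that picks one boundary row per A-scan while limiting the jump between neighboring columns. The sketch below (Python/NumPy) is a stand-in under those assumptions, not the graph construction used in the paper.

```python
import numpy as np

def trace_layer_boundary(bscan, max_jump=2):
    """Trace one retinal layer boundary across a B-scan by dynamic programming:
    each column contributes one boundary pixel, the cost favors strong vertical
    gradients, and the path may move at most `max_jump` rows between columns.
    A simplified stand-in for a graph-based shortest-path search."""
    grad = np.abs(np.gradient(bscan.astype(float), axis=0))
    cost = grad.max() - grad                      # low cost on strong edges
    rows, cols = cost.shape
    acc = cost.copy()
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = acc[lo:hi, c - 1]
            back[r, c] = lo + int(np.argmin(prev))
            acc[r, c] = cost[r, c] + prev.min()
    # backtrack from the cheapest end point
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path  # row index of the boundary for every column

# toy usage: a bright band whose upper edge is the boundary to trace
img = np.zeros((60, 80)); img[25:35, :] = 1.0
boundary_rows = trace_layer_boundary(img)
```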
Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L
2013-03-13
With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve the accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and principal curvature based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results, and guarantees no overlaps and gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.
Sivle, L. D.; Kvadsheim, P. H.; Fahlman, A.; Lam, F. P. A.; Tyack, P. L.; Miller, P. J. O.
2012-01-01
Anthropogenic underwater sound in the environment might potentially affect the behavior of marine mammals enough to have an impact on their reproduction and survival. Diving behavior of four killer whales (Orcinus orca), seven long-finned pilot whales (Globicephala melas), and four sperm whales (Physeter macrocephalus) was studied during controlled exposures to naval sonar [low frequency active sonar (LFAS): 1–2 kHz and mid frequency active sonar (MFAS): 6–7 kHz] during three field seasons (2006–2009). Diving behavior was monitored before, during and after sonar exposure using an archival tag placed on the animal with suction cups. The tag recorded the animal's vertical movement, and additional data on horizontal movement and vocalizations were used to determine behavioral modes. Killer whales that were conducting deep dives at sonar onset changed abruptly to shallow diving (ShD) during LFAS, while killer whales conducting deep dives at the onset of MFAS did not alter dive mode. When in ShD mode at sonar onset, killer whales did not change their diving behavior. Pilot and sperm whales performed normal deep dives (NDD) during MFAS exposure. During LFAS exposures, long-finned pilot whales mostly performed fewer deep dives and some sperm whales performed shallower and shorter dives. Acoustic recording data presented previously indicates that deep diving (DD) is associated with feeding. Therefore, the observed changes in dive behavior of the three species could potentially reduce the foraging efficiency of the affected animals. PMID:23087648
A novel multiphoton microscopy images segmentation method based on superpixel and watershed.
Wu, Weilin; Lin, Jinyong; Wang, Shu; Li, Yan; Liu, Mingyu; Liu, Gaoqiang; Cai, Jianyong; Chen, Guannan; Chen, Rong
2017-04-01
Multiphoton microscopy (MPM) imaging based on two-photon excited fluorescence (TPEF) and second harmonic generation (SHG) shows excellent performance for biological imaging. The automatic segmentation of cellular architectural properties for biomedical diagnosis based on MPM images is still a challenging issue. A novel MPM image segmentation method based on superpixels and watershed (MSW) is presented here to provide good segmentation results for MPM images. The proposed method uses SLIC superpixels instead of pixels to analyze MPM images for the first time. The superpixel segmentation, based on a new distance metric combining spatial, CIE Lab color space, and phase congruency features, divides the images into patches that preserve the details of the cell boundaries. The superpixels are then used to reconstruct new images by assigning the average value of each superpixel as its pixel intensity level. Finally, the marker-controlled watershed is utilized to segment the cell boundaries from the reconstructed images. Experimental results show that cellular boundaries can be extracted from MPM images by MSW with higher accuracy and robustness. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
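A minimal end-to-end version of the superpixel-then-watershed pipeline can be put together from standard scikit-image components, as sketched below; the SLIC parameters, the test image, and the marker selection are placeholders, and the phase-congruency distance metric that distinguishes MSW is not reproduced.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import slic, watershed
from skimage.feature import peak_local_max
from skimage import data

# grayscale test image used as a stand-in for an MPM image
image = data.coins().astype(float) / 255.0

# 1) SLIC superpixels (MSW adds phase congruency to the distance metric, not shown here)
labels = slic(np.dstack([image] * 3), n_segments=400, compactness=10)

# 2) rebuild an image where every superpixel takes its mean intensity
recon = np.zeros_like(image)
for lab in np.unique(labels):
    mask = labels == lab
    recon[mask] = image[mask].mean()

# 3) marker-controlled watershed on the reconstructed image
binary = recon > recon.mean()
distance = ndi.distance_transform_edt(binary)
peaks = peak_local_max(distance, min_distance=10, labels=binary)
markers = np.zeros_like(image, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
segmented = watershed(-distance, markers, mask=binary)
```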
Magnetic resonance brain tissue segmentation based on sparse representations
NASA Astrophysics Data System (ADS)
Rueda, Andrea
2015-12-01
Segmentation or delineation of specific organs and structures in medical images is an important task in clinical diagnosis and treatment, since it allows pathologies to be characterized through imaging measures (biomarkers). In brain imaging, segmentation of main tissues or specific structures is challenging due to anatomic variability and complexity and the presence of image artifacts (noise, intensity inhomogeneities, partial volume effect). In this paper, an automatic segmentation strategy is proposed, based on sparse representations and coupled dictionaries. Image intensity patterns are related to tissue labels at the level of small patches, and this information is gathered in coupled intensity/segmentation dictionaries. These dictionaries are used within a sparse representation framework to find the projection of a new intensity image onto the intensity dictionary, and the same projection can be used with the segmentation dictionary to estimate the corresponding segmentation. Preliminary results obtained with two publicly available datasets suggest that the proposal is capable of estimating adequate segmentations for gray matter (GM) and white matter (WM) tissues, with an average overlap of 0.79 for GM and 0.71 for WM (with respect to the original segmentations).
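The projection step described above can be illustrated with a toy pair of coupled dictionaries: a new intensity patch is sparsely coded on the intensity dictionary (here with orthogonal matching pursuit from scikit-learn), and the same coefficients are applied to the segmentation dictionary. The dictionary sizes and contents below are random placeholders, not learned atoms.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
patch_dim, n_atoms = 25, 60          # 5x5 patches, hypothetical dictionary size

# coupled dictionaries: columns are paired intensity / label-probability atoms
D_intensity = rng.normal(size=(patch_dim, n_atoms))
D_intensity /= np.linalg.norm(D_intensity, axis=0)
D_segmentation = rng.random((patch_dim, n_atoms))   # stand-in for learned label atoms

# a new intensity patch is sparsely coded on the intensity dictionary ...
new_patch = D_intensity[:, [3, 17, 42]] @ np.array([1.0, 0.5, -0.8])
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3)
omp.fit(D_intensity, new_patch)
coefficients = omp.coef_

# ... and the same coefficients applied to the segmentation dictionary give the
# estimated label patch (here just a toy probability map)
estimated_segmentation = D_segmentation @ coefficients
```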
Xue, Zhong; Shen, Dinggang; Li, Hai; Wong, Stephen
2010-01-01
The traditional fuzzy clustering algorithm and its extensions have been successfully applied in medical image segmentation. However, because of the variability of tissues and anatomical structures, the clustering results might be biased by the tissue population and intensity differences. For example, clustering-based algorithms tend to over-segment white matter tissues in MR brain images. To solve this problem, we introduce a tissue probability map constrained clustering algorithm and apply it to serial MR brain image segmentation, i.e., a series of 3-D MR brain images of the same subject at different time points. Using the new serial image segmentation algorithm within the CLASSIC framework, which iteratively segments the images and estimates the longitudinal deformations, we improved both accuracy and robustness for serial image computing, and at the same time produced longitudinally consistent segmentations and stable measures. In the algorithm, the tissue probability maps consist of both population-based and subject-specific segmentation priors. An experimental study using both simulated longitudinal MR brain data and the Alzheimer's Disease Neuroimaging Initiative (ADNI) data confirmed that more accurate and robust segmentation results can be obtained by using both priors. The proposed algorithm can be applied in longitudinal follow-up studies of MR brain imaging with subtle morphological changes for neurological disorders. PMID:26566399
2011-01-01
Background Image segmentation is a crucial step in quantitative microscopy that helps to define regions of tissues, cells or subcellular compartments. Depending on the degree of user interaction, segmentation methods can be divided into manual, automated or semi-automated approaches. 3D image stacks usually require automated methods due to their large number of optical sections. However, certain applications benefit from manual or semi-automated approaches. Scenarios include the quantification of 3D images with poor signal-to-noise ratios or the generation of so-called ground truth segmentations that are used to evaluate the accuracy of automated segmentation methods. Results We have developed Gebiss, an ImageJ plugin for the interactive segmentation, visualisation and quantification of 3D microscopic image stacks. We integrated a variety of existing plugins for threshold-based segmentation and volume visualisation. Conclusions We demonstrate the application of Gebiss to the segmentation of nuclei in live Drosophila embryos and the quantification of neurodegeneration in Drosophila larval brains. Gebiss was developed as a cross-platform ImageJ plugin and is freely available on the web at http://imaging.bii.a-star.edu.sg/projects/gebiss/. PMID:21668958
Malik, Bilal H.; Jabbour, Joey M.; Maitland, Kristen C.
2015-01-01
Automatic segmentation of nuclei in reflectance confocal microscopy images is critical for visualization and rapid quantification of nuclear-to-cytoplasmic ratio, a useful indicator of epithelial precancer. Reflectance confocal microscopy can provide three-dimensional imaging of epithelial tissue in vivo with sub-cellular resolution. Changes in nuclear density or nuclear-to-cytoplasmic ratio as a function of depth obtained from confocal images can be used to determine the presence or stage of epithelial cancers. However, low nuclear to background contrast, low resolution at greater imaging depths, and significant variation in reflectance signal of nuclei complicate segmentation required for quantification of nuclear-to-cytoplasmic ratio. Here, we present an automated segmentation method to segment nuclei in reflectance confocal images using a pulse coupled neural network algorithm, specifically a spiking cortical model, and an artificial neural network classifier. The segmentation algorithm was applied to an image model of nuclei with varying nuclear to background contrast. Greater than 90% of simulated nuclei were detected for contrast of 2.0 or greater. Confocal images of porcine and human oral mucosa were used to evaluate application to epithelial tissue. Segmentation accuracy was assessed using manual segmentation of nuclei as the gold standard. PMID:25816131
Pulse Coupled Neural Networks for the Segmentation of Magnetic Resonance Brain Images.
1996-12-01
This research develops an automated method for segmenting Magnetic Resonance (MR) brain images based on Pulse Coupled Neural Networks (PCNN). (Thesis by Shane Lee Abrahamson, First Lieutenant, USAF; report AFIT/GCS/ENG/96D-01.)
Automated choroid segmentation of three-dimensional SD-OCT images by incorporating EDI-OCT images.
Chen, Qiang; Niu, Sijie; Fang, Wangyi; Shuai, Yuanlu; Fan, Wen; Yuan, Songtao; Liu, Qinghuai
2018-05-01
The measurement of choroidal volume is more relevant to eye diseases than choroidal thickness, because choroidal volume reflects the diseases more comprehensively. The purpose is to automatically segment the choroid in three-dimensional (3D) spectral domain optical coherence tomography (SD-OCT) images. We present a novel choroid segmentation strategy for SD-OCT images that incorporates enhanced depth imaging OCT (EDI-OCT) images. The lower boundary of the choroid, namely the choroid-sclera junction (CSJ), is almost invisible in SD-OCT images but visible in EDI-OCT images. During SD-OCT imaging, EDI-OCT images can be generated for the same eye. Thus, we present an EDI-OCT-driven choroid segmentation method for SD-OCT images, where the choroid segmentation results of the EDI-OCT images are used to estimate the average choroidal thickness and to improve the construction of the CSJ feature space of the SD-OCT images. We also present a registration method between whole EDI-OCT and SD-OCT images based on retinal thickness and Bruch's membrane (BM) position. The CSJ surface is obtained with a 3D graph search in the CSJ feature space. Experimental results with 768 images (6 cubes, 128 B-scan images for each cube) from 2 healthy persons, 2 age-related macular degeneration (AMD) and 2 diabetic retinopathy (DR) patients, and 210 B-scan images from another 8 healthy persons and 21 patients demonstrate that our method can achieve high segmentation accuracy. The mean choroid volume difference and overlap ratio for the 6 cubes between our proposed method and outlines drawn by experts were -1.96 µm³ and 88.56%, respectively. Our method is effective for the 3D choroid segmentation of SD-OCT images because its segmentation accuracy and stability are comparable with manual segmentation. Copyright © 2017. Published by Elsevier B.V.
Joint multi-object registration and segmentation of left and right cardiac ventricles in 4D cine MRI
NASA Astrophysics Data System (ADS)
Ehrhardt, Jan; Kepp, Timo; Schmidt-Richberg, Alexander; Handels, Heinz
2014-03-01
The diagnosis of cardiac function based on cine MRI requires the segmentation of cardiac structures in the images, but the problem of automatic cardiac segmentation is still open, due to the imaging characteristics of cardiac MR images and the anatomical variability of the heart. In this paper, we present a variational framework for joint segmentation and registration of multiple structures of the heart. To enable the simultaneous segmentation and registration of multiple objects, a shape prior term is introduced into a region competition approach for multi-object level set segmentation. The proposed algorithm is applied for simultaneous segmentation of the myocardium as well as the left and right ventricular blood pool in short axis cine MRI images. Two experiments are performed: first, intra-patient 4D segmentation with a given initial segmentation for one time-point in a 4D sequence, and second, a multi-atlas segmentation strategy is applied to unseen patient data. Evaluation of segmentation accuracy is done by overlap coefficients and surface distances. An evaluation based on clinical 4D cine MRI images of 25 patients shows the benefit of the combined approach compared to sole registration and sole segmentation.
GPU accelerated fuzzy connected image segmentation by using CUDA.
Zhuge, Ying; Cao, Yong; Miller, Robert W
2009-01-01
Image segmentation techniques using fuzzy connectedness principles have shown their effectiveness in segmenting a variety of objects in several large applications in recent years. However, one problem of these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides high parallel computing power. In this paper, we present a parallel fuzzy connected image segmentation algorithm on Nvidia's Compute Unified Device Architecture (CUDA) platform for segmenting large medical image data sets. Our experiments, based on three data sets of small, medium, and large size, demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 7.2x, 7.3x, and 14.4x, respectively, for the three data sets over the sequential CPU implementation of the fuzzy connected image segmentation algorithm.
Compound image segmentation of published biomedical figures.
Li, Pengyuan; Jiang, Xiangying; Kambhamettu, Chandra; Shatkay, Hagit
2018-04-01
Images convey essential information in biomedical publications. As such, there is a growing interest within the bio-curation and the bio-databases communities, to store images within publications as evidence for biomedical processes and for experimental results. However, many of the images in biomedical publications are compound images consisting of multiple panels, where each individual panel potentially conveys a different type of information. Segmenting such images into constituent panels is an essential first step toward utilizing images. In this article, we develop a new compound image segmentation system, FigSplit, which is based on Connected Component Analysis. To overcome shortcomings typically manifested by existing methods, we develop a quality assessment step for evaluating and modifying segmentations. Two methods are proposed to re-segment the images if the initial segmentation is inaccurate. Experimental results show the effectiveness of our method compared with other methods. The system is publicly available for use at: https://www.eecis.udel.edu/~compbio/FigSplit. The code is available upon request. shatkay@udel.edu. Supplementary data are available online at Bioinformatics.
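The core connected-component step behind panel splitting can be sketched in a few lines of Python/SciPy: label the non-white pixels, take the bounding box of each sufficiently large component, and return the crops. The threshold and minimum-area values are assumptions, and FigSplit's quality-assessment and re-segmentation stages are not reproduced.

```python
import numpy as np
from scipy import ndimage as ndi

def split_panels(figure, white_level=0.95, min_area=500):
    """Split a compound figure (2D grayscale array in [0, 1]) into panel crops
    by labelling connected components of non-white pixels. A simplified sketch;
    FigSplit adds a quality-assessment and re-segmentation stage on top of this."""
    content = figure < white_level                  # non-white pixels
    labels, n = ndi.label(content)
    panels = []
    for sl in ndi.find_objects(labels):
        if sl is not None:
            crop = figure[sl]
            if crop.size >= min_area:               # drop tiny specks and lines
                panels.append(crop)
    return panels

# toy compound figure: two dark panels on a white background
fig = np.ones((200, 300))
fig[20:180, 20:140] = 0.2
fig[20:180, 160:280] = 0.4
panels = split_panels(fig)   # two crops, one per panel
```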
Automatic co-segmentation of lung tumor based on random forest in PET-CT images
NASA Astrophysics Data System (ADS)
Jiang, Xueqing; Xiang, Dehui; Zhang, Bin; Zhu, Weifang; Shi, Fei; Chen, Xinjian
2016-03-01
In this paper, a fully automatic method is proposed to segment the lung tumor in clinical 3D PET-CT images. The proposed method effectively combines PET and CT information to make full use of the high contrast of PET images and the superior spatial resolution of CT images. Our approach consists of three main parts: (1) initial segmentation, in which spines are removed in the CT images and initial connected regions are obtained by thresholding-based segmentation in the PET images; (2) coarse segmentation, in which a monotonic downhill function is applied to rule out structures that have standardized uptake values (SUV) similar to the lung tumor but do not satisfy a monotonic property in the PET images; (3) fine segmentation, in which a random forests method is applied to accurately segment the lung tumor by extracting effective features from PET and CT images simultaneously. We validated our algorithm on a dataset of 24 3D PET-CT images from different patients with non-small cell lung cancer (NSCLC). The average TPVF, FPVF and accuracy rate (ACC) were 83.65%, 0.05% and 99.93%, respectively. The correlation analysis shows that our segmented lung tumor volumes have a strong correlation (average 0.985) with ground truth 1 and ground truth 2 labeled by a clinical expert.
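The fine-segmentation stage can be illustrated with a generic voxel-wise random forest on stacked PET/CT features, as in the scikit-learn sketch below; the feature set, the synthetic data, and the toy label rule are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# hypothetical per-voxel features stacked from the two modalities:
# [PET SUV, CT Hounsfield value, local PET mean, local CT mean]
n_voxels = 5000
X = np.column_stack([
    rng.gamma(2.0, 2.0, n_voxels),          # SUV-like values
    rng.normal(-300, 200, n_voxels),        # HU-like values
    rng.gamma(2.0, 2.0, n_voxels),
    rng.normal(-300, 200, n_voxels),
])
y = (X[:, 0] > 6).astype(int)               # toy "tumor" label driven by high SUV

# fine-segmentation stage: a random forest separates tumor from background voxels
forest = RandomForestClassifier(n_estimators=100, max_depth=8, random_state=0)
forest.fit(X[:4000], y[:4000])
accuracy = forest.score(X[4000:], y[4000:])
tumor_probability = forest.predict_proba(X[4000:])[:, 1]
```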
Development of Mid-Frequency Multibeam Sonar for Fisheries Applications
2006-01-01
Horne, John K.; University of Washington, School of Aquatic and Fishery Sciences, Box 355020, Seattle, WA 98195.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-26
... sonar use in 2009 was less than planned such that a recalculation of marine mammal takes suggests a... contemplated in light of the overall underuse of sonar proposed and actually used in 2009 (and the likelihood... sonar sources in 2009, the authorization of the same amount of take for 2010 as was authorized in 2009...
Submerged Object Detection and Classification System
1993-04-16
example of this type of system is a conventional sonar device wherein a highly directional beam of sonic energy periodically radiates from a...scanning transducer which in turn operates as a receiver to detect echoes reflected from any object within the path of propagation. Sonar devices...classification, which requires relatively high frequency signals. Sonar devices also have the shortcoming of sensing background noise generated by
Automated Detection of a Crossing Contact Based on Its Doppler Shift
2009-03-01
contacts in passive sonar systems. A common approach is the application of high-gain processing followed by successive classification criteria...RESEARCH MOTIVATION: The trade-off between false alarm and detection probability is fundamental in radar and sonar (Chevalier, 2002). A common
Adaptive Sampling in Autonomous Marine Sensor Networks
2006-06-01
Analog Processing Section: A high-performance preamplifier with low noise characteristics is vital to obtaining quality sonar data. The preamplifier ...research assistantships through the Generic Ocean Array Technology Sonar (GOATS) project, contract N00014-97-1-0202 and contract N00014-05-G-0106 Delivery... [Table-of-contents fragments: Formation Behavior; An AUV Intelligent Sensor for Real-Time Adaptive Sensing; A Logical Sonar Sensor]
Boundary segmentation for fluorescence microscopy using steerable filters
NASA Astrophysics Data System (ADS)
Ho, David Joon; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.
2017-02-01
Fluorescence microscopy is used to image multiple subcellular structures in living cells which are not readily observed using conventional optical microscopy. Moreover, two-photon microscopy is widely used to image structures deeper in tissue. Recent advancement in fluorescence microscopy has enabled the generation of large data sets of images at different depths, times, and spectral channels. Thus, automatic object segmentation is necessary since manual segmentation would be inefficient and biased. However, automatic segmentation is still a challenging problem as regions of interest may not have well defined boundaries as well as non-uniform pixel intensities. This paper describes a method for segmenting tubular structures in fluorescence microscopy images of rat kidney and liver samples using adaptive histogram equalization, foreground/background segmentation, steerable filters to capture directional tendencies, and connected-component analysis. The results from several data sets demonstrate that our method can segment tubular boundaries successfully. Moreover, our method has better performance when compared to other popular image segmentation methods when using ground truth data obtained via manual segmentation.
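A common way to capture directional tendencies, and a plausible stand-in for the steerable filtering step, is the first-order steerable basis built from Gaussian derivatives, where the response at any angle is a linear combination of the x and y derivatives. The sketch below (Python/SciPy) takes the maximum response over a set of orientations; the filter order, sigma, and orientation count are assumptions, not the authors' exact filter bank.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def steerable_edge_response(image, sigma=2.0, n_orientations=8):
    """Directional edge responses from the first-order steerable basis:
    the response at angle theta is cos(theta)*Gx + sin(theta)*Gy, where Gx and
    Gy are Gaussian first derivatives. Returns the maximum magnitude over all
    orientations, a simple stand-in for a steerable filter bank."""
    gx = gaussian_filter(image.astype(float), sigma, order=(0, 1))  # d/dx
    gy = gaussian_filter(image.astype(float), sigma, order=(1, 0))  # d/dy
    best = np.full(image.shape, -np.inf)
    for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
        response = np.cos(theta) * gx + np.sin(theta) * gy
        best = np.maximum(best, np.abs(response))
    return best

# toy usage: a bright tubular structure on a dark background
img = np.zeros((128, 128))
img[:, 60:68] = 1.0
edges = steerable_edge_response(img)
```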
Elimination of RF inhomogeneity effects in segmentation.
Agus, Onur; Ozkan, Mehmed; Aydin, Kubilay
2007-01-01
There are various methods proposed for the segmentation and analysis of MR images. However, the efficiency of these techniques is affected by various artifacts that occur in the imaging system. One of the most frequently encountered problems is intensity variation across an image. Different methods are used to overcome this problem. In this paper we propose a method for the elimination of intensity artifacts in the segmentation of MR images. Inter-imager variations are also minimized to produce the same tissue segmentation for the same patient. A well-known multivariate classification algorithm, maximum likelihood, is employed to illustrate the enhancement in segmentation.
FogBank: a single cell segmentation across multiple cell lines and image modalities.
Chalfoun, Joe; Majurski, Michael; Dima, Alden; Stuelten, Christina; Peskin, Adele; Brady, Mary
2014-12-30
Many cell lines currently used in medical research, such as cancer cells or stem cells, grow in confluent sheets or colonies. The biology of individual cells provides valuable information, thus the separation of touching cells in these microscopy images is critical for counting, identification and measurement of individual cells. Over-segmentation of single cells continues to be a major problem for methods based on the morphological watershed due to the high level of noise in microscopy cell images. There is a need for a new segmentation method that is robust over a wide variety of biological images and can accurately separate individual cells even in challenging datasets such as confluent sheets or colonies. We present a new automated segmentation method called FogBank that accurately separates cells that are confluent and touching each other. This technique is successfully applied to phase contrast, bright field, fluorescence microscopy and binary images. The method is based on morphological watershed principles with two new features to improve accuracy and minimize over-segmentation. First, FogBank uses histogram binning to quantize pixel intensities, which minimizes the image noise that causes over-segmentation. Second, FogBank uses a geodesic distance mask derived from raw images to detect the shapes of individual cells, in contrast to the more linear cell edges that other watershed-like algorithms produce. We evaluated the segmentation accuracy against manually segmented datasets using two metrics. FogBank achieved segmentation accuracy on the order of 0.75 (1 being a perfect match). We compared our method with other available segmentation techniques in terms of performance over the reference data sets. FogBank outperformed all related algorithms. The accuracy has also been visually verified on data sets with 14 cell lines across 3 imaging modalities, leading to 876 segmentation evaluation images. FogBank produces single cell segmentation from confluent cell sheets with high accuracy. It can be applied to microscopy images of multiple cell lines and a variety of imaging modalities. The code for the segmentation method is available as open-source and includes a Graphical User Interface for user-friendly execution.
Image segmentation evaluation for very-large datasets
NASA Astrophysics Data System (ADS)
Reeves, Anthony P.; Liu, Shuang; Xie, Yiting
2016-03-01
With the advent of modern machine learning methods and fully automated image analysis, there is a need for very large image datasets with documented segmentations for both computer algorithm training and evaluation. Current approaches of visual inspection and manual markings do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes is achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automatically facilitate the measurement of a number of very important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.
Validation tools for image segmentation
NASA Astrophysics Data System (ADS)
Padfield, Dirk; Ross, James
2009-02-01
A large variety of image analysis tasks require the segmentation of various regions in an image. For example, segmentation is required to generate accurate models of brain pathology that are important components of modern diagnosis and therapy. While the manual delineation of such structures gives accurate information, the automatic segmentation of regions such as the brain and tumors from such images greatly enhances the speed and repeatability of quantifying such structures. The ubiquitous need for such algorithms has led to a wide range of image segmentation algorithms with various assumptions, parameters, and robustness. The evaluation of such algorithms is an important step in determining their effectiveness. Therefore, rather than developing new segmentation algorithms, we here describe validation methods for segmentation algorithms. Using similarity metrics comparing the automatic to manual segmentations, we demonstrate methods for optimizing the parameter settings for individual cases and across a collection of datasets using the Design of Experiments framework. We then employ statistical analysis methods to compare the effectiveness of various algorithms. We investigate several region-growing algorithms from the Insight Toolkit and compare their accuracy to that of a separate statistical segmentation algorithm. The segmentation algorithms are used with their optimized parameters to automatically segment the brain and tumor regions in MRI images of 10 patients. The validation tools indicate that none of the ITK algorithms studied outperforms the statistical segmentation algorithm with statistical significance, although they perform reasonably well considering their simplicity.
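The similarity metrics referred to above are typically the Dice and Jaccard overlap coefficients between the automatic and manual masks; a minimal Python implementation is sketched below.

```python
import numpy as np

def dice_and_jaccard(auto_mask, manual_mask):
    """Overlap metrics between an automatic and a manual binary segmentation,
    as used when validating segmentation algorithms against expert delineations."""
    a = auto_mask.astype(bool)
    m = manual_mask.astype(bool)
    intersection = np.logical_and(a, m).sum()
    union = np.logical_or(a, m).sum()
    dice = 2.0 * intersection / (a.sum() + m.sum() + 1e-8)
    jaccard = intersection / (union + 1e-8)
    return dice, jaccard

# toy usage with two overlapping squares
auto = np.zeros((100, 100), bool); auto[20:60, 20:60] = True
manual = np.zeros((100, 100), bool); manual[25:65, 25:65] = True
dice, jaccard = dice_and_jaccard(auto, manual)
```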
3D Texture Features Mining for MRI Brain Tumor Identification
NASA Astrophysics Data System (ADS)
Rahim, Mohd Shafry Mohd; Saba, Tanzila; Nayer, Fatima; Syed, Afraz Zahra
2014-03-01
Medical image segmentation is a process to extract regions of interest and to divide an image into its individual meaningful, homogeneous components. These components have a strong relationship with the objects of interest in an image. Medical image segmentation is an initial mandatory step for computer-aided diagnosis and therapy. It is a sophisticated and challenging task because of the complex nature of medical images, and successful medical image analysis depends heavily on segmentation accuracy. Texture is one of the major features used to identify regions of interest in an image or to classify an object, but 2D texture features yield poor classification results. Hence, this paper presents 3D feature extraction using texture analysis, with SVM as the segmentation technique in the testing methodology.
Development of a novel 2D color map for interactive segmentation of histological images.
Chaudry, Qaiser; Sharma, Yachna; Raza, Syed H; Wang, May D
2012-05-01
We present a color segmentation approach based on a two-dimensional color map derived from the input image. Pathologists stain tissue biopsies with various colored dyes to see the expression of biomarkers. In these images, because of color variation due to inconsistencies in experimental procedures and lighting conditions, the segmentation used to analyze biological features is usually ad hoc. Many algorithms, like K-means, use a single metric to segment the image into different color classes and rarely provide users with powerful color control. Our 2D color map interactive segmentation technique, based on human color perception information and the color distribution of the input image, enables user control without noticeable delay. Our methodology works for different staining types and different types of cancer tissue images. The results of our proposed method show good accuracy with low response and computational time, making it a feasible method for user-interactive applications involving segmentation of histological images.
Live minimal path for interactive segmentation of medical images
NASA Astrophysics Data System (ADS)
Chartrand, Gabriel; Tang, An; Chav, Ramnada; Cresson, Thierry; Chantrel, Steeve; De Guise, Jacques A.
2015-03-01
Medical image segmentation is nowadays required for medical device development and in a growing number of clinical and research applications. Since dedicated automatic segmentation methods are not always available, generic and efficient interactive tools can alleviate the burden of manual segmentation. In this paper we propose an interactive segmentation tool based on image warping and minimal path segmentation that is efficient for a wide variety of segmentation tasks. While the user roughly delineates the desired organ's boundary, a narrow band along the cursor's path is straightened, providing an ideal subspace for feature-aligned filtering and the minimal path algorithm. Once the segmentation is performed on the narrow band, the path is warped back onto the original image, precisely delineating the desired structure. This tool was found to have a highly intuitive dynamic behavior. It is especially efficient against misleading edges and requires only coarse interaction from the user to achieve good precision. The proposed segmentation method was tested on 10 difficult liver segmentations on CT and MRI images, and the resulting 2D overlap Dice coefficient was 99% on average.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Y; Olsen, J.; Parikh, P.
2014-06-01
Purpose: To evaluate commonly used segmentation algorithms on a commercially available real-time MR image guided radiotherapy (MR-IGRT) system (ViewRay) and compare the strengths and weaknesses of each method, with the purpose of improving motion tracking for more accurate radiotherapy. Methods: MR motion images of bladder, kidney, duodenum, and liver tumor were acquired for three patients using a commercial on-board MR imaging system and an imaging protocol used during MR-IGRT. A series of 40 frames were selected for each case to cover at least 3 respiratory cycles. Thresholding, Canny edge detection, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE), along with the ViewRay treatment planning and delivery system (TPDS), were included in the comparisons. To evaluate the segmentation results, an expert manual contouring of the organs or tumor from a physician was used as ground truth. Metric values of sensitivity, specificity, Jaccard similarity, and Dice coefficient were computed for comparison. Results: In the segmentation of single image frames, all methods successfully segmented the bladder and kidney, but only FKM, KHM and TPDS were able to segment the liver tumor and the duodenum. For segmenting motion image series, the TPDS method had the highest sensitivity, Jaccard, and Dice coefficients in segmenting bladder and kidney, while FKM and KHM had a slightly higher specificity. A similar pattern was observed when segmenting the liver tumor and the duodenum. The Canny method is not suitable for consistently segmenting motion frames in an automated process, while thresholding and RD-LSE cannot consistently segment a liver tumor and the duodenum. Conclusion: The study compared six different segmentation methods and showed the effectiveness of the ViewRay TPDS algorithm in segmenting motion images during MR-IGRT. Future studies include a selection of conformal segmentation methods based on image/organ-specific information, and different filtering methods and their influences on the segmentation results. Parag Parikh receives research grant from ViewRay. Sasa Mutic has consulting and research agreements with ViewRay. Yanle Hu receives travel reimbursement from ViewRay. Iwan Kawrakow and James Dempsey are ViewRay employees.
Automatic tissue image segmentation based on image processing and deep learning
NASA Astrophysics Data System (ADS)
Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting
2018-02-01
Image segmentation plays an important role in multimodality imaging, especially when fusing structural images offered by CT and MRI with functional images collected by optical or other novel imaging technologies. Image segmentation also provides a detailed structural description for quantitative visualization of treatment light distribution in the human body when incorporated with a 3D light transport simulation method. Here we used image enhancement, operators, and morphometry methods to extract accurate contours of different tissues such as the skull, cerebrospinal fluid (CSF), grey matter (GM) and white matter (WM) on 5 fMRI head image datasets. We then utilized a convolutional neural network to realize automatic segmentation of images in a deep learning way, and also introduced parallel computing. Such approaches greatly reduce the processing time compared to manual and semi-automatic segmentation and are of great importance in improving speed and accuracy as more and more samples are learned. Our results can be used as criteria when diagnosing diseases such as cerebral atrophy, which is caused by pathological changes in gray matter or white matter. We demonstrated the great potential of such combined image processing and deep learning automatic tissue segmentation in personalized medicine, especially in monitoring and treatment.
Glisson, Courtenay L; Altamar, Hernan O; Herrell, S Duke; Clark, Peter; Galloway, Robert L
2011-11-01
Image segmentation is integral to implementing intraoperative guidance for kidney tumor resection. Results seen in computed tomography (CT) data are affected by target organ physiology as well as by the segmentation algorithm used. This work studies variables involved in using level set methods found in the Insight Toolkit to segment kidneys from CT scans and applies the results to an image guidance setting. A composite algorithm drawing on the strengths of multiple level set approaches was built using the Insight Toolkit. This algorithm requires image contrast state and seed points to be identified as input, and functions independently thereafter, selecting and altering method and variable choice as needed. Semi-automatic results were compared to expert hand segmentation results directly and by the use of the resultant surfaces for registration of intraoperative data. Direct comparison using the Dice metric showed average agreement of 0.93 between semi-automatic and hand segmentation results. Use of the segmented surfaces in closest point registration of intraoperative laser range scan data yielded average closest point distances of approximately 1 mm. Application of both inverse registration transforms from the previous step to all hand segmented image space points revealed that the distance variability introduced by registering to the semi-automatically segmented surface versus the hand segmented surface was typically less than 3 mm both near the tumor target and at distal points, including subsurface points. Use of the algorithm shortened user interaction time and provided results which were comparable to the gold standard of hand segmentation. Further, the use of the algorithm's resultant surfaces in image registration provided comparable transformations to surfaces produced by hand segmentation. These data support the applicability and utility of such an algorithm as part of an image guidance workflow.
Task-oriented lossy compression of magnetic resonance images
NASA Astrophysics Data System (ADS)
Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques
1996-04-01
A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.
[Evaluation of Image Quality of Readout Segmented EPI with Readout Partial Fourier Technique].
Yoshimura, Yuuki; Suzuki, Daisuke; Miyahara, Kanae
Readout segmented EPI (readout segmentation of long variable echo-trains: RESOLVE) segments k-space in the readout direction. By using the partial Fourier method in the readout direction, the imaging time is shortened. However, the influence on image quality due to insufficient data sampling is a concern. The setting of the partial Fourier method in the readout direction in each segment was changed, and we examined the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and distortion ratio for changes in image quality due to differences in data sampling. As the number of sampled segments decreased, SNR and CNR decreased, while the distortion ratio did not change. The image quality with the minimum number of sampled segments differs greatly from that with full data sampling, and caution is required when using it.
NASA Astrophysics Data System (ADS)
Meng, Qing-Hao; Yao, Zhen-Jing; Peng, Han-Yang
2009-12-01
Both the energy efficiency and correlation characteristics are important in airborne sonar systems to realize multichannel ultrasonic transducers working together. High energy efficiency can increase echo energy and measurement range, and sharp autocorrelation and flat cross correlation can help eliminate cross-talk among multichannel transducers. This paper addresses energy efficiency optimization under the premise that cross-talk between different sonar transducers can be avoided. The nondominated sorting genetic algorithm-II is applied to optimize both the spectrum and correlation characteristics of the excitation sequence. The central idea of the spectrum optimization is to distribute most of the energy of the excitation sequence within the frequency band of the sonar transducer; thus, less energy is filtered out by the transducers. Real experiments show that a sonar system consisting of eight-channel Polaroid 600 series electrostatic transducers excited with 2 ms optimized pulse-position-modulation sequences can work together without cross-talk and can measure distances up to 650 cm with maximal 1% relative error.
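Two figures of merit implicit in this abstract, sharpness of each sequence's autocorrelation and flatness of the cross-correlation between channels, can be computed as sketched below (Python/NumPy) for candidate excitation sequences; the bipolar test sequences are placeholders and the NSGA-II search itself is not reproduced.

```python
import numpy as np

def correlation_scores(seq_a, seq_b):
    """Figures of merit for multichannel sonar excitation sequences: the peak
    sidelobe of the autocorrelation (lower means a sharper autocorrelation) and
    the peak of the cross-correlation relative to the main lobe (lower means
    less cross-talk between transducers)."""
    auto = np.correlate(seq_a, seq_a, mode="full")
    cross = np.correlate(seq_a, seq_b, mode="full")
    center = len(auto) // 2
    sidelobe = np.max(np.abs(np.delete(auto, center))) / np.abs(auto[center])
    crosstalk = np.max(np.abs(cross)) / np.abs(auto[center])
    return sidelobe, crosstalk

# toy usage with two random bipolar (pulse-position-like) sequences
rng = np.random.default_rng(0)
a = rng.choice([-1.0, 1.0], size=256)
b = rng.choice([-1.0, 1.0], size=256)
sidelobe, crosstalk = correlation_scores(a, b)
```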
Development of a 2 MHz Sonar Sensor for Inspection of Bridge Substructures.
Park, Chul; Kim, Youngseok; Lee, Heungsu; Choi, Sangsik; Jung, Haewook
2018-04-16
Hydraulic factors account for a large part of the causes of bridge collapse. Due to the nature of the underwater environment, quick and accurate inspection is required when damage occurs. In this study, we developed a 2 MHz side scan sonar sensor module and effective operation technique by improving the limitations of existing sonar. Through field tests, we analyzed the correlation of factors affecting the resolution of the sonar data such as the angle of survey, the distance from the underwater structure and the water depth. The effect of the distance and the water depth and the structure on the survey angle was 66~82%. We also derived the relationship between these factors as a regression model for effective operating techniques. It is considered that application of the developed 2 MHz side scan sonar and its operation method could contribute to prevention of bridge collapses and disasters by quickly and accurately checking the damage of bridge substructures due to hydraulic factors.
Coherent and Noncoherent Joint Processing of Sonar for Detection of Small Targets in Shallow Water
Jiang, Jingning; Li, Si; Ding, Zhenping; Pan, Chen; Gong, Xianyi
2018-01-01
A coherent-noncoherent joint processing framework is proposed for active sonar to combine diversity gain and beamforming gain for detection of a small target in shallow water environments. Sonar utilizes widely-spaced arrays to sense environments and illuminate a target of interest from multiple angles. Meanwhile, it exploits spatial diversity for time-reversal focusing to suppress reverberation, mainly strong bottom reverberation. For enhancement of robustness of time-reversal focusing, an adaptive iterative strategy is utilized in the processing framework. A probing signal is firstly transmitted and echoes of a likely target are utilized as steering vectors for the second transmission. With spatial diversity, target bearing and range are estimated using a broadband signal model. Numerical simulations show that the novel sonar outperforms the traditional phased-array sonar due to benefits of spatial diversity. The effectiveness of the proposed framework has been validated by localization of a small target in at-lake experiments. PMID:29642637
Development of a 2 MHz Sonar Sensor for Inspection of Bridge Substructures
Park, Chul; Lee, Heungsu; Choi, Sangsik; Jung, Haewook
2018-01-01
Hydraulic factors account for a large part of the causes of bridge collapse. Due to the nature of the underwater environment, quick and accurate inspection is required when damage occurs. In this study, we developed a 2 MHz side scan sonar sensor module and an effective operation technique that improve on the limitations of existing sonar. Through field tests, we analyzed the correlation of factors affecting the resolution of the sonar data, such as the survey angle, the distance from the underwater structure, and the water depth. The combined effect of the distance to the structure and the water depth on the survey angle was 66~82%. We also derived the relationship between these factors as a regression model for effective operating techniques. Application of the developed 2 MHz side scan sonar and its operation method could contribute to preventing bridge collapses and disasters by quickly and accurately checking the damage to bridge substructures caused by hydraulic factors. PMID:29659557
Enhanced Sidescan-Sonar Imagery Offshore of Southeastern Massachusetts
Poppe, Lawrence J.; McMullen, Kate Y.; Williams, S. Jeffress; Ackerman, Seth D.; Glomb, K.A.; Forfinski, N.A.
2008-01-01
The U.S. Geological Survey (USGS), National Oceanic and Atmospheric Administration (NOAA), and Massachusetts Office of Coastal Zone Management (CZM) have been working cooperatively to map and study the coastal sea floor. The sidescan-sonar imagery collected during NOAA hydrographic surveys has been included as part of these studies. However, the original sonar imagery contains tonal artifacts from environmental noise (for example, sea state), equipment settings (for example, power and gain changes), and processing (for example, inaccurate cross-track and line-to-line normalization), which impart a quilt-like patchwork appearance to the mosaics. These artifacts can obscure the normalized backscatter properties of the sea floor. To address this issue, sidescan-sonar imagery from surveys H11076 and H11079 offshore of southeastern Massachusetts was enhanced by matching backscatter tones of adjacent sidescan-sonar lines. These mosaics provide continuous grayscale perspectives of the backscatter, more accurately reveal the sea-floor geologic trends, and minimize the environment-, acquisition-, and processing-related noise.
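A minimal sketch of one way to match backscatter tones of adjacent sidescan lines, using histogram matching from a recent scikit-image on synthetic strips; this is an assumed stand-in for the tonal enhancement described above, not the USGS/NOAA processing chain.

import numpy as np
from skimage import exposure

rng = np.random.default_rng(2)
strip_a = rng.normal(0.45, 0.10, (256, 256)).clip(0, 1)   # reference sidescan line
strip_b = rng.normal(0.60, 0.15, (256, 256)).clip(0, 1)   # tonally offset neighboring line

# Force the tonal distribution of strip_b to match strip_a before mosaicking
strip_b_matched = exposure.match_histograms(strip_b, strip_a)
print(strip_a.mean(), strip_b.mean(), strip_b_matched.mean())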
Estimation and Correction of Geometric Distortions in Side-Scan Sonar Images
1990-03-01
Dissertation. Funding was provided by the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), an agency of the Government of the Federative Republic of Brazil. The facilities used at MIT were maintained in part by grants from the National Science Foundation.
Detection of bone disease by hybrid SST-watershed x-ray image segmentation
NASA Astrophysics Data System (ADS)
Sanei, Saeid; Azron, Mohammad; Heng, Ong Sim
2001-07-01
Detection of diagnostic features from X-ray images is favorable due to the low cost of these images. Accurate detection of the bone metastasis region greatly assists physicians to monitor the treatment and to remove the cancerous tissue by surgery. A hybrid SST-watershed algorithm efficiently detects the boundary of the diseased regions. The Shortest Spanning Tree (SST), based on graph theory, is one of the most powerful tools in grey-level image segmentation. The method converts the images into arbitrarily shaped closed segments of distinct grey levels. To do that, the image is initially mapped to a tree. Then, using the RSST algorithm, the image is segmented into a certain number of arbitrarily shaped regions. However, in fine segmentation, over-segmentation causes loss of objects of interest. In coarse segmentation, on the other hand, the SST-based method suffers from merging regions belonging to different objects. By applying the watershed algorithm, the large segments are divided into smaller regions based on the number of catchment basins in each segment. The process exploits a bi-level watershed concept to separate each multi-lobe region into a number of areas, each corresponding to an object (in our case a cancerous region of the bone), disregarding their homogeneity in grey level.
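A minimal sketch of the watershed-splitting step described above, assuming a recent scikit-image: a merged binary segment is divided into catchment basins via a distance transform and a marker-based watershed. The synthetic two-disk region and all parameters are illustrative; the SST/RSST stages are not reproduced.

import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

# Synthetic "merged" segment: two overlapping disks in one binary region
yy, xx = np.mgrid[:120, :120]
region = ((yy - 60) ** 2 + (xx - 45) ** 2 < 28 ** 2) | ((yy - 60) ** 2 + (xx - 80) ** 2 < 28 ** 2)

distance = ndi.distance_transform_edt(region)              # high in the interior of each lobe
lbl, _ = ndi.label(region)
peaks = peak_local_max(distance, labels=lbl, min_distance=10)
markers = np.zeros(distance.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
labels = watershed(-distance, markers, mask=region)         # one label per catchment basin
print(labels.max(), "sub-regions")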
Finite grade pheromone ant colony optimization for image segmentation
NASA Astrophysics Data System (ADS)
Yuanjing, F.; Li, Y.; Liangjun, K.
2008-06-01
By combining the decision process of ant colony optimization (ACO) with the multistage decision process of image segmentation based on the active contour model (ACM), an algorithm called finite grade ACO (FACO) for image segmentation is proposed. This algorithm classifies pheromone into finite grades; updating of the pheromone is achieved by changing the grades, and the updated quantity of pheromone is independent of the objective function. The algorithm, which provides a new approach to obtaining precise contours, is proved to converge to the global optimal solutions linearly by means of finite Markov chains. Segmentation experiments with ultrasound heart images show the effectiveness of the algorithm. Comparing the results for segmentation of left-ventricle images shows that the ACO approach to image segmentation is more effective than the GA approach, and that the new pheromone-updating strategy shows good time performance in the optimization process.
Sonar Transducer Reliability Improvement Program (STRIP)
1981-01-01
Elastomer evaluations in the program rated EPDM NORDOL 1370 as poor, natural rubber 1155 as poor, nitrile 6100 as good, and Viton CTBN (BF635075) as poor, and also covered cork-rubber materials [51]. No aging problems have been found (see Section 9). A report entitled "Reliability and Service Life Concepts for Sonar Transducer Applications" has been completed.
Development of Mid-Frequency Multibeam Sonar for Fisheries Applications
2007-01-01
Development of Mid-Frequency Multibeam Sonar for Fisheries Applications. John K. Horne, University of Washington, School of Aquatic and Fishery Sciences, Box 355020, Seattle, WA 98195.
Comparing the Foraging Efficiency of Beaked Whales On and Off Naval Ranges
2015-09-30
The overall objective of this project is to improve the understanding of sonar disturbance on beaked whale foraging and to assess population-level effects. The primary objectives were to deploy DTAGs at South Abaco island (SA), where naval sonar is not regularly used, to opportunistically DTAG Blainville's beaked whales in known displacement habitats within TOTO, and to quantify the diving behavior.
3S2: Behavioral Response Studies of Cetaceans to Navy Sonar Signals in Norwegian Waters
2013-09-30
Publication fragments cited in the report include experimental exposures of killer (Orcinus orca), long-finned pilot (Globicephala melas), and sperm whales (Physeter macrocephalus) to naval sonar, Aquatic Mammals 38: 362-401, and a paper on pilot whales (Globicephala melas) in review at Marine Mammal Science (Kvadsheim, PH, Miller, PJO, Tyack, P, Sivle, LD, Lam, FPA, et al.).
Beaked Whale Anatomy, Field Studies and Habitat Modeling
2007-11-01
The findings support the notion that dual sonar sources interfere constructively to form a sonar beam in front of the animal, consistent with how the biosonar operates. These anatomical structures have long been recognized as components of a sophisticated biosonar system with three categorical divisions, including the generation and transmission of biosonar signals in deep-diving animals. The newly described transmission pathways are reminiscent of the configuration seen in a sperm whale.
Time of Flight Estimation in the Presence of Outliers: A Biosonar-Inspired Machine Learning Approach
2013-08-29
Report: Time of Flight Estimation in the Presence of Outliers: A biosonar-inspired machine learning approach. Authors: Nathan Intrator, Leon N. Cooper, Brown University. Keywords: biosonar, remote sensing, sonar resolution, sonar accuracy, sonar energy consumption. Abstract fragment: "When the Signal-to-Noise Ratio (SNR) falls below a certain ..."
2008-01-01
Most systems measure backscatter at a single narrowband frequency, and some AUVs carry single-frequency sidescan sonars (and this technology has been adapted for gliders). This effort develops a broadband acoustic scattering system by adapting existing technology recently developed at WHOI for a monostatic Doppler sonar module. Work on the broadband acoustic backscattering system includes: 1) modifications to the monostatic Doppler sonar module, recently developed at WHOI for turbulence studies.
Beaked Whales Respond to Simulated and Actual Navy Sonar
2011-03-14
The study measured acoustic exposure and behavioral reactions of beaked whales to one controlled exposure each of simulated military sonar, killer whale calls, and band-limited noise. The beaked whales reacted to these three sound playbacks at sound pressure ... Citations in the excerpt include predator recognition in harbour seals (Nature 420: 171-173) and Ford JKB (1989), Acoustic behavior of resident killer whales (Orcinus orca) off Vancouver Island.
Doksaeter, Lise; Rune Godo, Olav; Olav Handegard, Nils; Kvadsheim, Petter H; Lam, Frans-Peter A; Donovan, Carl; Miller, Patrick J O
2009-01-01
Military antisubmarine sonars produce intense sounds within the hearing range of most clupeid fish. The behavioral reactions of overwintering herring (Clupea harengus) to sonar signals of two different frequency ranges (1-2 and 6-7 kHz), and to playback of killer whale feeding sounds, were tested in controlled exposure experiments in Vestfjorden, Norway, November 2006. The behavior of free-ranging herring was monitored by two upward-looking echosounders. A vessel towing an operational naval sonar source approached and passed over one of them in a block design setup. No significant escape reactions, either vertically or horizontally, were detected in response to sonar transmissions. Killer whale feeding sounds induced vertical and horizontal movements of herring. The results indicate that neither 1-2 kHz nor 6-7 kHz transmissions have a significant negative influence on herring at the received sound pressure levels tested (127-197 and 139-209 dB(rms) re 1 microPa, respectively). Military sonars of such frequencies and source levels may thus be operated in areas of overwintering herring without substantially affecting herring behavior or herring fishery. The avoidance during playback of killer whale sounds demonstrates the nature of an avoidance reaction and the ability of the experimental design to reveal it.
Gain control in the sonar of odontocetes.
Ya Supin, Alexander; Nachtigall, Paul E
2013-06-01
The sonar of odontocetes processes echo-signals within a wide range of echo levels. The level of echoes varies widely by tens of decibels depending on the level of the emitted sonar pulse, the target strength, the distance to the target, and the sound absorption by the water media. The auditory system of odontocetes must be capable of effective perception, analysis, and discrimination of echo-signals within all this variability. The sonar of odontocetes has several mechanisms to compensate for the echo-level variation (gain control). To date, several mechanisms of the biosonar gain control have been revealed in odontocetes: (1) adjustment of emitted sonar pulse levels (the longer the distance to the target, the higher the level of the emitted pulse), (2) short-term variation of hearing sensitivity based on forward masking of the echo by the preceding self-heard emitted pulse and subsequent release from the masking, and (3) active long-term control of hearing sensitivity. Recent investigations with the use of the auditory evoked-potential technique have demonstrated that these mechanisms effectively minimize the variation of the response to the echo when either the emitted sonar pulse level, or the target distance, or both vary within a wide range. A short review of these data is presented herein.
Lee, Wu-Jung; Moss, Cynthia F
2016-05-01
It has long been postulated that the elongated hindwing tails of many saturniid moths have evolved to create false sonar targets to divert the attack of echolocation-guided bat predators. However, rigorous echo-acoustic evidence to support this hypothesis has been lacking. In this study, fluttering luna moths (Actias luna), a species with elongated hindwing tails, were ensonified with frequency modulated chirp signals from all angles of orientation and across the wingbeat cycle. High-speed stereo videography was combined with pulse compression sonar processing to characterize the echo information available to foraging bats. Contrary to previous suggestions, the results show that the tail echoes are weak and do not dominate the sonar returns, compared to the large, planar wings and the moth body. However, the distinctive twisted morphology of the tails create persistent echoes across all angles of orientation, which may induce erroneous sonar target localization and disrupt accurate tracking by echolocating bats. These findings thus suggest a refinement of the false target hypothesis to emphasize sonar localization errors induced by the twisted tails, and highlight the importance of physics-based approaches to study the sensory information involved in the evolutionary arms race between moths and their bat predators.
Doksæter, Lise; Handegard, Nils Olav; Godø, Olav Rune; Kvadsheim, Petter H; Nordlund, Nina
2012-02-01
Atlantic herring, Clupea harengus, is a hearing specialist, and several studies have demonstrated strong responses to man-made noise, for example, from an approaching vessel. To avoid negative impacts from naval sonar operations, a set of studies of reaction patterns of herring to low-frequency (1.0-1.5 kHz) naval sonar signals has been undertaken. This paper presents herring reactions to sonar signals and other stimuli when kept in captivity under detailed acoustic and video monitoring. Throughout the experiment, spanning three seasons of a year, the fish did not react significantly to sonar signals from a passing frigate, at received root-mean-square sound-pressure levels (SPL) up to 168 dB re 1 μPa. In contrast, the fish did exhibit a significant diving reaction when exposed to other sounds, with a much lower SPL, e.g., from a two-stroke engine. This shows that the experimental setup is sensitive to herring reactions when they occur. The lack of herring reaction to sonar signals is consistent with earlier in situ behavioral studies. The complexity of the behavioral reactions in captivity underlines the need for better understanding of the causal relationship between stimuli and reaction patterns of fish. © 2012 Acoustical Society of America
Examining the robustness of automated aural classification of active sonar echoes.
Murphy, Stefan M; Hines, Paul C
2014-02-01
Active sonar systems are used to detect underwater man-made objects of interest (targets) that are too quiet to be reliably detected with passive sonar. Performance of active sonar can be degraded by false alarms caused by echoes returned from geological seabed structures (clutter) in shallow regions. To reduce false alarms, a method of distinguishing target echoes from clutter echoes is required. Research has demonstrated that perceptual-based signal features similar to those employed in the human auditory system can be used to automatically discriminate between target and clutter echoes, thereby reducing the number of false alarms and improving sonar performance. An active sonar experiment on the Malta Plateau in the Mediterranean Sea was conducted during the Clutter07 sea trial and repeated during the Clutter09 sea trial. The dataset consists of more than 95,000 pulse-compressed echoes returned from two targets and many geological clutter objects. These echoes were processed using an automatic classifier that quantifies the timbre of each echo using a number of perceptual signal features. Using echoes from 2007, the aural classifier was trained to establish a boundary between targets and clutter in the feature space. Temporal robustness was then investigated by testing the classifier on echoes from the 2009 experiment.
Bagci, Ulas; Udupa, Jayaram K.; Mendhiratta, Neil; Foster, Brent; Xu, Ziyue; Yao, Jianhua; Chen, Xinjian; Mollura, Daniel J.
2013-01-01
We present a novel method for the joint segmentation of anatomical and functional images. Our proposed methodology unifies the domains of anatomical and functional images, represents them in a product lattice, and performs simultaneous delineation of regions based on random walk image segmentation. Furthermore, we also propose a simple yet effective object/background seed localization method to make the proposed segmentation process fully automatic. Our study uses PET, PET-CT, MRI-PET, and fused MRI-PET-CT scans (77 studies in all) from 56 patients who had various lesions in different body regions. We validated the effectiveness of the proposed method on different PET phantoms as well as on clinical images with respect to the ground truth segmentation provided by clinicians. Experimental results indicate that the presented method is superior to the threshold and Bayesian methods commonly used in PET image segmentation, is more accurate and robust compared with other PET-CT segmentation methods recently published in the literature, and is also general in the sense that it simultaneously segments multiple scans in real time with the high accuracy needed in routine clinical use. PMID:23837967
Sjöberg, C; Ahnesjö, A
2013-06-01
Label fusion multi-atlas approaches for image segmentation can give better segmentation results than single-atlas methods. We present a multi-atlas label fusion strategy based on probabilistic weighting of distance maps. Relationships between image similarities and segmentation similarities are estimated in a learning phase and used to derive fusion weights that are proportional to the probability for each atlas to improve the segmentation result. The method was tested using a leave-one-out strategy on a database of 21 pre-segmented prostate patients for different image registrations combined with different image similarity scorings. The probabilistic weighting yields results that are equal to or better than both fusion with equal weights and results using the STAPLE algorithm. Results from the experiments demonstrate that label fusion by weighted distance maps is feasible, and that probabilistic weighted fusion improves segmentation quality more when the individual atlas segmentation quality depends more strongly on the corresponding registered image similarity. The regions used for evaluation of the image similarity measures were found to be more important than the choice of similarity measure. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
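A minimal sketch of label fusion by weighted distance maps: signed distance maps of several atlas masks are combined with per-atlas weights and thresholded at zero. The weights here are arbitrary constants, whereas the paper derives them probabilistically from learned image-similarity relationships; the masks are synthetic.

import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    # negative inside the object, positive outside
    return distance_transform_edt(~mask) - distance_transform_edt(mask)

yy, xx = np.mgrid[:100, :100]
atlases = [(yy - 50) ** 2 + (xx - 50 + s) ** 2 < 30 ** 2 for s in (-4, 0, 5)]
weights = np.array([0.2, 0.5, 0.3])                      # assumed, not learned

fused_map = sum(w * signed_distance(m) for w, m in zip(weights, atlases))
fused_mask = fused_map < 0                               # consensus segmentation
print(fused_mask.sum(), "fused foreground pixels")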
Infrared image segmentation method based on spatial coherence histogram and maximum entropy
NASA Astrophysics Data System (ADS)
Liu, Songtao; Shen, Tongsheng; Dai, Yao
2014-11-01
In order to segment the target well and suppress background noise effectively, an infrared image segmentation method based on a spatial coherence histogram and maximum entropy is proposed. First, the spatial coherence histogram is constructed by weighting pixels of the same gray level according to the importance of their positions, which is obtained by computing their local density. Then, after enhancing the image with the spatial coherence histogram, a 1D maximum entropy method is used to segment the image. The novel method not only achieves better segmentation results but also has a faster computation time than traditional 2D histogram-based segmentation methods.
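A minimal sketch of 1D maximum-entropy (Kapur-style) thresholding on a gray-level histogram in plain NumPy; the spatial-coherence weighting step described above is not reproduced, and the bimodal sample data are synthetic.

import numpy as np

def max_entropy_threshold(image, bins=256):
    """Return the gray level that maximizes the sum of foreground and background entropies."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins - 1):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

rng = np.random.default_rng(4)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 2000)]).clip(0, 255)
print("threshold:", max_entropy_threshold(img))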
Graph run-length matrices for histopathological image segmentation.
Tosun, Akif Burak; Gunduz-Demir, Cigdem
2011-03-01
The histopathological examination of tissue specimens is essential for cancer diagnosis and grading. However, this examination is subject to a considerable amount of observer variability as it mainly relies on visual interpretation of pathologists. To alleviate this problem, it is very important to develop computational quantitative tools, for which image segmentation constitutes the core step. In this paper, we introduce an effective and robust algorithm for the segmentation of histopathological tissue images. This algorithm incorporates the background knowledge of the tissue organization into segmentation. For this purpose, it quantifies spatial relations of cytological tissue components by constructing a graph and uses this graph to define new texture features for image segmentation. This new texture definition makes use of the idea of gray-level run-length matrices. However, it considers the runs of cytological components on a graph to form a matrix, instead of considering the runs of pixel intensities. Working with colon tissue images, our experiments demonstrate that the texture features extracted from "graph run-length matrices" lead to high segmentation accuracies, also providing a reasonable number of segmented regions. Compared with four other segmentation algorithms, the results show that the proposed algorithm is more effective in histopathological image segmentation.
NASA Astrophysics Data System (ADS)
Pelikan, Erich; Vogelsang, Frank; Tolxdorff, Thomas
1996-04-01
The texture-based segmentation of x-ray images of focal bone lesions using topological maps is introduced. Texture characteristics are described by image-point correlation of feature images to feature vectors. For the segmentation, the topological map is labeled using an improved labeling strategy. Results of the technique are demonstrated on original and synthetic x-ray images and quantified with the aid of quality measures. In addition, a classifier-specific contribution analysis is applied for assessing the feature space.
Deep Convolutional Neural Networks for Multi-Modality Isointense Infant Brain Image Segmentation
Zhang, Wenlu; Li, Rongjian; Deng, Houtao; Wang, Li; Lin, Weili; Ji, Shuiwang; Shen, Dinggang
2015-01-01
The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making the tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they used only a single T1 or T2 image, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep model in which trainable filters and local neighborhood pooling operations are applied alternatingly to the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multimodality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement. PMID:25562829
An algorithm for calculi segmentation on ureteroscopic images.
Rosa, Benoît; Mozer, Pierre; Szewczyk, Jérôme
2011-03-01
The purpose of the study is to develop an algorithm for the segmentation of renal calculi on ureteroscopic images. Renal calculi are a common source of urological obstruction, and laser lithotripsy during ureteroscopy is a possible therapy. A laser-based system to sweep the calculus surface and vaporize it was developed to automate a very tedious manual task. The distal tip of the ureteroscope is directed using image guidance, and this operation is not possible without an efficient segmentation of renal calculi on the ureteroscopic images. We proposed and developed a region growing algorithm to segment renal calculi on ureteroscopic images. Using real video images to compute ground truth and compare our segmentation with a reference segmentation, we computed statistics on different image metrics, such as Precision, Recall, and the Yasnoff Measure. The algorithm and its parameters were established for the most likely clinical scenarios. The segmentation results are encouraging: the developed algorithm was able to correctly detect more than 90% of the surface of the calculi, according to an expert observer. Implementation of an algorithm for the segmentation of calculi on ureteroscopic images is feasible. The next step is the integration of our algorithm in the command scheme of a motorized system to build a complete operating prototype.
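A minimal region-growing sketch in the spirit of the algorithm described above: breadth-first growth from a seed, accepting 4-connected neighbors within an intensity tolerance. The tolerance, seed, and synthetic bright blob are assumptions, not the authors' parameters or metrics.

import numpy as np
from collections import deque

def region_grow(image, seed, tol=20):
    """Grow a region from `seed` while |intensity - seed intensity| <= tol."""
    h, w = image.shape
    seed_val = float(image[seed])
    grown = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    grown[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not grown[ny, nx]:
                if abs(float(image[ny, nx]) - seed_val) <= tol:
                    grown[ny, nx] = True
                    queue.append((ny, nx))
    return grown

rng = np.random.default_rng(5)
img = rng.normal(40, 5, (128, 128))
img[40:80, 50:90] += 100                                  # bright blob standing in for a calculus
mask = region_grow(img, seed=(60, 70), tol=30)
print(mask.sum(), "pixels grown")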
A segmentation algorithm based on image projection for complex text layout
NASA Astrophysics Data System (ADS)
Zhu, Wangsheng; Chen, Qin; Wei, Chuanyi; Li, Ziyang
2017-10-01
Segmentation is an important part of layout analysis. Considering the efficiency advantage of the top-down approach and the particularity of the object, a projection-based layout segmentation algorithm is proposed. The algorithm first partitions the text image into several columns; each column is then scanned and projected, and the text image is divided into several sub-regions through multiple projections. The experimental results show that this method inherits the rapid calculation speed of the projection approach, avoids the effect of arc-shaped image distortion on page segmentation, and can accurately segment text images with complex layouts.
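A minimal sketch of the projection idea: a binarized page is split into columns using the vertical projection profile, and each column is split into blocks using its horizontal profile. The synthetic page layout and the zero-gap splitting rule are assumptions for illustration.

import numpy as np

def split_by_projection(binary, axis):
    """Return (start, end) index pairs of runs whose projection along `axis` is non-zero."""
    profile = binary.sum(axis=axis)
    on = profile > 0
    segments, start = [], None
    for i, v in enumerate(on):
        if v and start is None:
            start = i
        elif not v and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(on)))
    return segments

page = np.zeros((200, 300), dtype=int)
page[20:180, 30:130] = 1          # column 1
page[20:90, 170:270] = 1          # column 2, upper block
page[110:180, 170:270] = 1        # column 2, lower block

for c0, c1 in split_by_projection(page, axis=0):          # vertical profile -> columns
    rows = split_by_projection(page[:, c0:c1], axis=1)    # horizontal profile -> blocks
    print("column", (c0, c1), "blocks", rows)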
A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.
Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle
2016-03-01
On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using a MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE) measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (<1 ms) with a satisfying accuracy (Dice=0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and a combination of different methods for optimal segmentation with the on-board MR-IGRT system. PACS number(s): 87.57.nm, 87.57.N-, 87.61.Tg. © 2016 The Authors.
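A minimal sketch of the two evaluation measures used above, the Dice coefficient and the centroid-based target registration error, computed on synthetic masks with an assumed pixel spacing.

import numpy as np
from scipy import ndimage as ndi

def dice(a, b):
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def centroid_tre(a, b, pixel_spacing=1.0):
    # distance between the centroids of the manual and automatic ROIs
    ca, cb = np.array(ndi.center_of_mass(a)), np.array(ndi.center_of_mass(b))
    return np.linalg.norm((ca - cb) * pixel_spacing)

yy, xx = np.mgrid[:128, :128]
manual = (yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2
auto = (yy - 66) ** 2 + (xx - 61) ** 2 < 21 ** 2
print("Dice:", dice(manual, auto), "TRE:", centroid_tre(manual, auto, pixel_spacing=1.5))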
Irving, David B.; Finn, James E.; Larson, James P.
1995-01-01
We began a three year study in 1987 to test the feasibility of using sonar in the Togiak River to estimate salmon escapements. Current methods rely on periodic aerial surveys and a counting tower at river kilometer 97. Escapement estimates are not available until 10 to 14 days after the salmon enter the river. Water depth and turbidity preclude relocating the tower to the lower river and affect the reliability of aerial surveys. To determine whether an alternative method could be developed to improve the timeliness and accuracy of current escapement monitoring, Bendix sonar units were operated during 1987, 1988, and 1990. Two sonar stations were set up opposite each other at river kilometer 30 and were operated 24 hours per day, seven days per week. Catches from gill nets with 12, 14, and 20 cm stretch mesh, a beach seine, and visual observations were used to estimate species composition. Length and sex data were collected from salmon caught in the nets to assess sampling bias. In 1987, sonar was used to select optimal sites and enumerate coho salmon. In 1988 and 1990, the sites identified in 1987 were used to estimate the escapement of five salmon species. Sockeye salmon escapement was estimated at 512,581 and 589,321, chinook at 7,698 and 15,098, chum at 246,144 and 134,958, coho at 78,588 and 28,290, and pink at 96,167 and 131,484. Sonar estimates of sockeye salmon were two to three times the Alaska Department of Fish and Game's escapement estimate based on aerial surveys and tower counts. The source of error was probably a combination of over-estimating the total number of targets counted by the sonar and incorrectly estimating species composition. Total salmon escapement estimates using sonar may be feasible, but several more years of development are needed. Because of the overlapping salmon run timing, estimating species composition appears to be the most difficult aspect of using sonar for management. Possible improvements include using a larger beach seine or selecting gill net mesh sizes evenly spaced between 10 and 20 cm stretch mesh. Salmon counts at river kilometer 30 would reduce the lag time between salmon river entry and the escapement estimate to 2-5 days. Any further decrease in lag time, however, would require moving the sonar operations downriver into less desirable braided portions of the river.
Multiresolution multiscale active mask segmentation of fluorescence microscope images
NASA Astrophysics Data System (ADS)
Srinivasa, Gowri; Fickus, Matthew; Kovačević, Jelena
2009-08-01
We propose an active mask segmentation framework that combines the advantages of statistical modeling, smoothing, speed and flexibility offered by the traditional methods of region-growing, multiscale, multiresolution and active contours respectively. At the crux of this framework is a paradigm shift from evolving contours in the continuous domain to evolving multiple masks in the discrete domain. Thus, the active mask framework is particularly suited to segment digital images. We demonstrate the use of the framework in practice through the segmentation of punctate patterns in fluorescence microscope images. Experiments reveal that statistical modeling helps the multiple masks converge from a random initial configuration to a meaningful one. This obviates the need for an involved initialization procedure germane to most of the traditional methods used to segment fluorescence microscope images. While we provide the mathematical details of the functions used to segment fluorescence microscope images, this is only an instantiation of the active mask framework. We suggest some other instantiations of the framework to segment different types of images.
A fast and efficient segmentation scheme for cell microscopic image.
Lebrun, G; Charrier, C; Lezoray, O; Meurie, C; Cardot, H
2007-04-27
Microscopic cellular image segmentation schemes must be efficient for reliable analysis and fast enough to process huge quantities of images. Recent studies have focused on improving segmentation quality. Several segmentation schemes have good quality, but their processing time is too expensive to deal with a great number of images per day. For segmentation schemes based on pixel classification, the classifier design is crucial since it requires most of the processing time necessary to segment an image. The main contribution of this work is focused on how to reduce the complexity of decision functions produced by support vector machines (SVM) while preserving the recognition rate. Vector quantization is used in order to reduce the inherent redundancy present in huge pixel databases (i.e., images with expert pixel segmentation). Hybrid color space design is also used in order to improve the data set size reduction rate and the recognition rate. A new decision function quality criterion is defined to select a good trade-off between the recognition rate and processing time of the pixel decision function. The first results of this study show that fast and efficient pixel classification with SVM is possible. Moreover, posterior class pixel probability estimation is easy to compute with Platt's method. A new segmentation scheme using probabilistic pixel classification has then been developed. This scheme has several free parameters whose automatic selection must be dealt with, but criteria for evaluating segmentation quality are not well adapted to cell segmentation, especially when comparison with an expert pixel segmentation must be achieved. Another important contribution of this paper is the definition of a new quality criterion for the evaluation of cell segmentation. The results presented here show that the selection of the free parameters of the segmentation scheme by optimisation of the new cell segmentation quality criterion produces efficient cell segmentation.
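A minimal sketch of the pipeline idea under assumed toy features: k-means vector quantization shrinks the pixel training set to per-class codebooks, an SVM with Platt probability estimates is trained on the codebooks, and posterior class probabilities are predicted for new pixels. The hybrid color-space design and the quality criteria of the paper are not reproduced.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(6)
# Toy pixel features (e.g. three colour channels) for two classes: cell vs background
cell = rng.normal([0.7, 0.3, 0.4], 0.05, (5000, 3))
background = rng.normal([0.2, 0.2, 0.8], 0.05, (5000, 3))

# Vector quantization: keep a small codebook per class instead of every training pixel
k = 50
cb_cell = KMeans(n_clusters=k, n_init=10, random_state=0).fit(cell).cluster_centers_
cb_back = KMeans(n_clusters=k, n_init=10, random_state=0).fit(background).cluster_centers_
X = np.vstack([cb_cell, cb_back])
y = np.array([1] * k + [0] * k)

svm = SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)   # Platt scaling for probabilities
test_pixels = rng.normal([0.7, 0.3, 0.4], 0.05, (5, 3))
print(svm.predict_proba(test_pixels)[:, 1])                           # P(cell | pixel)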
NASA Astrophysics Data System (ADS)
Amanda, A. R.; Widita, R.
2016-03-01
The aim of this research is to compare some image segmentation methods for the lungs based on performance evaluation parameters (Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR)). In this study, the methods compared were connected threshold, neighborhood connected, and threshold level set segmentation on images of the lungs. These three methods require one important parameter, i.e., the threshold. The threshold interval was obtained from the histogram of the original image. The software used to segment the images was InsightToolkit-4.7.0 (ITK). This research used five lung images for analysis. The results were then compared using the performance evaluation parameters computed in MATLAB. A segmentation method is said to have good quality if it has the smallest MSE value and the highest PSNR. The results show that four sample images match the criteria with the connected threshold method, while one sample favors the threshold level set segmentation. Therefore, it can be concluded that the connected threshold method is better than the other two methods for these cases.
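A minimal sketch of the two performance evaluation parameters, MSE and PSNR, written in NumPy rather than MATLAB and applied to synthetic data for illustration.

import numpy as np

def mse(a, b):
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b, max_val=255.0):
    m = mse(a, b)
    return np.inf if m == 0 else 10.0 * np.log10(max_val ** 2 / m)

rng = np.random.default_rng(7)
original = rng.integers(0, 256, (64, 64))
segmented = np.clip(original + rng.normal(0, 5, (64, 64)), 0, 255)
print("MSE:", mse(original, segmented), "PSNR:", psnr(original, segmented))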
Attique, Muhammad; Gilanie, Ghulam; Hafeez-Ullah; Mehmood, Malik S.; Naweed, Muhammad S.; Ikram, Masroor; Kamran, Javed A.; Vitkin, Alex
2012-01-01
Characterization of tissues like brain by using magnetic resonance (MR) images and colorization of the gray scale image has been reported in the literature, along with the advantages and drawbacks. Here, we present two independent methods; (i) a novel colorization method to underscore the variability in brain MR images, indicative of the underlying physical density of bio tissue, (ii) a segmentation method (both hard and soft segmentation) to characterize gray brain MR images. The segmented images are then transformed into color using the above-mentioned colorization method, yielding promising results for manual tracing. Our color transformation incorporates the voxel classification by matching the luminance of voxels of the source MR image and provided color image by measuring the distance between them. The segmentation method is based on single-phase clustering for 2D and 3D image segmentation with a new auto centroid selection method, which divides the image into three distinct regions (gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) using prior anatomical knowledge). Results have been successfully validated on human T2-weighted (T2) brain MR images. The proposed method can be potentially applied to gray-scale images from other imaging modalities, in bringing out additional diagnostic tissue information contained in the colorized image processing approach as described. PMID:22479421
NASA Astrophysics Data System (ADS)
Hu, Xiaoqian; Tao, Jinxu; Ye, Zhongfu; Qiu, Bensheng; Xu, Jinzhang
2018-05-01
In order to solve the problem of medical image segmentation, a wavelet neural network medical image segmentation algorithm based on a combined maximum entropy criterion is proposed. First, a bee colony algorithm is used to optimize the network parameters of the wavelet neural network, obtaining the network structure, initial weights, threshold values, and so on; this allows the training to converge quickly to a higher precision and avoids falling into a relative extremum. Then, the optimal number of iterations is obtained by calculating the maximum entropy of the segmented image, so as to achieve an automatic and accurate segmentation effect. Medical image segmentation experiments show that the proposed algorithm can reduce sample training time effectively and improve convergence precision, and the segmentation effect is more accurate and effective than that of a traditional BP neural network (back propagation neural network: a multilayer feed-forward neural network trained according to the error backward propagation algorithm).
Segmentation of white rat sperm image
NASA Astrophysics Data System (ADS)
Bai, Weiguo; Liu, Jianguo; Chen, Guoyuan
2011-11-01
The segmentation of sperm images exerts a profound influence on the analysis of sperm morphology, which plays a significant role in the research of animal infertility and reproduction. To overcome the microscope image's properties of low contrast and heavy noise pollution, and to get better segmentation results of the sperm image, this paper presents a multi-scale gradient operator combined with multiple structuring elements for micro-spermatozoa images of the white rat; the multi-scale gradient operator can smooth the noise of an image, while the multiple structuring elements can retain more shape details of the sperms. Then, we use the Otsu method to segment the modified gradient image, whose processed gray scale is strong in sperms and weak in the background, converting it into a binary sperm image. As the obtained binary image contains impurities whose shapes differ from those of sperms, we choose a form factor to filter out those objects whose form factor value is larger than the selected critical value, and retain those whose value is not. We then obtain the final binary image of the segmented sperms. The experiment shows this method's great advantage in the segmentation of micro-spermatozoa images.
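A minimal sketch of the pipeline described above, assuming a recent scikit-image: morphological gradients computed with several structuring elements are averaged, Otsu-thresholded, and connected components are filtered by the form factor 4*pi*area/perimeter^2. The structuring elements, the 0.5 cutoff, and the synthetic elongated object are assumptions, not the authors' settings.

import numpy as np
from skimage import morphology, filters, measure

rng = np.random.default_rng(8)
img = rng.normal(0.3, 0.05, (160, 160))
yy, xx = np.mgrid[:160, :160]
img[((yy - 80) / 8.0) ** 2 + ((xx - 80) / 30.0) ** 2 < 1] += 0.4    # elongated bright object (stand-in for a sperm)

# Morphological gradient averaged over several structuring elements
elems = [morphology.disk(1), morphology.disk(2), morphology.disk(3)]
grad = np.mean([morphology.dilation(img, e) - morphology.erosion(img, e) for e in elems], axis=0)

binary = grad > filters.threshold_otsu(grad)            # Otsu on the modified gradient image
labels = measure.label(binary)
keep = np.zeros_like(binary)
for r in measure.regionprops(labels):
    form_factor = 4 * np.pi * r.area / (r.perimeter ** 2 + 1e-9)
    if form_factor < 0.5:                               # retain non-compact shapes; compact specks are impurities
        keep[labels == r.label] = True
print(int(measure.label(keep).max()), "objects kept")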
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Li; Gao, Yaozong; Shi, Feng
Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT segmentation based on 15 patients.
NASA Astrophysics Data System (ADS)
Qin, Xulei; Cong, Zhibin; Fei, Baowei
2013-11-01
An automatic segmentation framework is proposed to segment the right ventricle (RV) in echocardiographic images. The method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining sparse matrix transform, a training model, and a localized region-based level set. First, the sparse matrix transform extracts main motion regions of the myocardium as eigen-images by analyzing the statistical information of the images. Second, an RV training model is registered to the eigen-images in order to locate the position of the RV. Third, the training model is adjusted and then serves as an optimized initialization for the segmentation of each image. Finally, based on the initializations, a localized, region-based level set algorithm is applied to segment both epicardial and endocardial boundaries in each echocardiograph. Three evaluation methods were used to validate the performance of the segmentation framework. The Dice coefficient measures the overall agreement between the manual and automatic segmentation. The absolute distance and the Hausdorff distance between the boundaries from manual and automatic segmentation were used to measure the accuracy of the segmentation. Ultrasound images of human subjects were used for validation. For the epicardial and endocardial boundaries, the Dice coefficients were 90.8 ± 1.7% and 87.3 ± 1.9%, the absolute distances were 2.0 ± 0.42 mm and 1.79 ± 0.45 mm, and the Hausdorff distances were 6.86 ± 1.71 mm and 7.02 ± 1.17 mm, respectively. The automatic segmentation method based on a sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.
Automated tissue segmentation of MR brain images in the presence of white matter lesions.
Valverde, Sergi; Oliver, Arnau; Roura, Eloy; González-Villà, Sandra; Pareto, Deborah; Vilanova, Joan C; Ramió-Torrentà, Lluís; Rovira, Àlex; Lladó, Xavier
2017-01-01
Over the last few years, the increasing interest in brain tissue volume measurements on clinical settings has led to the development of a wide number of automated tissue segmentation methods. However, white matter lesions are known to reduce the performance of automated tissue segmentation methods, which requires manual annotation of the lesions and refilling them before segmentation, which is tedious and time-consuming. Here, we propose a new, fully automated T1-w/FLAIR tissue segmentation approach designed to deal with images in the presence of WM lesions. This approach integrates a robust partial volume tissue segmentation with WM outlier rejection and filling, combining intensity and probabilistic and morphological prior maps. We evaluate the performance of this method on the MRBrainS13 tissue segmentation challenge database, which contains images with vascular WM lesions, and also on a set of Multiple Sclerosis (MS) patient images. On both databases, we validate the performance of our method with other state-of-the-art techniques. On the MRBrainS13 data, the presented approach was at the time of submission the best ranked unsupervised intensity model method of the challenge (7th position) and clearly outperformed the other unsupervised pipelines such as FAST and SPM12. On MS data, the differences in tissue segmentation between the images segmented with our method and the same images where manual expert annotations were used to refill lesions on T1-w images before segmentation were lower or similar to the best state-of-the-art pipeline incorporating automated lesion segmentation and filling. Our results show that the proposed pipeline achieved very competitive results on both vascular and MS lesions. A public version of this approach is available to download for the neuro-imaging community. Copyright © 2016 Elsevier B.V. All rights reserved.
A hybrid approach of using symmetry technique for brain tumor segmentation.
Saddique, Mubbashar; Kazmi, Jawad Haider; Qureshi, Kalim
2014-01-01
Tumor and related abnormalities are a major cause of disability and death worldwide. Magnetic resonance imaging (MRI) is a superior modality due to its noninvasiveness and high quality images of both the soft tissues and bones. In this paper we present two hybrid segmentation techniques and their results are compared with well-recognized techniques in this area. The first technique is based on symmetry and we call it a hybrid algorithm using symmetry and active contour (HASA). In HASA, we take the reflection image, calculate the difference image, and then apply the active contour on the difference image to segment the tumor. To avoid unimportant segmented regions, we improve the results by proposing an enhancement in the form of the second technique, EHASA. In EHASA, we also take the reflection of the original image and calculate the difference image, but then change this image into a binary image. This binary image is mapped onto the original image, followed by the application of active contouring to segment the tumor region.
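A minimal sketch of the reflection/difference idea behind EHASA: mirror the image across its vertical midline, take the absolute difference, and binarize it to flag asymmetric regions. Registration of real brain MRI and the active-contour step are omitted; the synthetic image and the Otsu binarization are assumptions.

import numpy as np
from skimage import filters

rng = np.random.default_rng(9)
img = rng.normal(0.4, 0.02, (128, 128))
img[:, :64] = img[:, 64:][:, ::-1]                         # enforce left/right symmetry of the base image
yy, xx = np.mgrid[:128, :128]
img[((yy - 50) ** 2 + (xx - 90) ** 2) < 10 ** 2] += 0.3    # asymmetric bright region (stand-in for a tumor)

reflection = img[:, ::-1]                                  # mirror across the vertical midline
difference = np.abs(img - reflection)
binary = difference > filters.threshold_otsu(difference)   # EHASA-style binary map
print(binary.sum(), "candidate pixels (the region and its mirror are both flagged)")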
Iterative deep convolutional encoder-decoder network for medical image segmentation.
Jung Uk Kim; Hak Gu Kim; Yong Man Ro
2017-07-01
In this paper, we propose a novel medical image segmentation method using an iterative deep learning framework. We have combined an iterative learning approach and an encoder-decoder network to improve segmentation results, which enables precise localization of the regions of interest (ROIs), including complex shapes or detailed textures of medical images, in an iterative manner. The proposed iterative deep convolutional encoder-decoder network consists of two main paths: a convolutional encoder path and a convolutional decoder path with iterative learning. Experimental results show that the proposed iterative deep learning framework is able to yield excellent medical image segmentation performance for various medical images. The effectiveness of the proposed method has been proved by comparison with other state-of-the-art medical image segmentation methods.
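A minimal encoder-decoder sketch in PyTorch to illustrate the general architecture class only; the layer sizes are arbitrary assumptions and the iterative learning scheme of the paper is not implemented.

import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),   # per-pixel foreground probability
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyEncoderDecoder()
x = torch.rand(2, 1, 64, 64)                      # batch of two single-channel images
print(model(x).shape)                             # torch.Size([2, 1, 64, 64])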
Comparison of an adaptive local thresholding method on CBCT and µCT endodontic images
NASA Astrophysics Data System (ADS)
Michetti, Jérôme; Basarab, Adrian; Diemer, Franck; Kouame, Denis
2018-01-01
Root canal segmentation on cone beam computed tomography (CBCT) images is difficult because of the noise level, resolution limitations, beam hardening and dental morphological variations. An image processing framework, based on an adaptive local threshold method, was evaluated on CBCT images acquired on extracted teeth. A comparison with high quality segmented endodontic images on micro computed tomography (µCT) images acquired from the same teeth was carried out using a dedicated registration process. Each segmented tooth was evaluated according to volume and root canal sections through the area and the Feret’s diameter. The proposed method is shown to overcome the limitations of CBCT and to provide an automated and adaptive complete endodontic segmentation. Despite a slight underestimation (-4.08%), the local threshold segmentation method based on edge-detection was shown to be fast and accurate. Strong correlations between CBCT and µCT segmentations were found both for the root canal area and diameter (respectively 0.98 and 0.88). Our findings suggest that combining CBCT imaging with this image processing framework may benefit experimental endodontology, teaching and could represent a first development step towards the clinical use of endodontic CBCT segmentation during pulp cavity treatment.
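A minimal sketch of adaptive local thresholding with scikit-image's threshold_local on a synthetic image with slowly varying background; the block size, offset, and test object are assumptions and do not reproduce the edge-detection-based parameter choice of the paper.

import numpy as np
from skimage.filters import threshold_local

rng = np.random.default_rng(10)
yy, xx = np.mgrid[:200, :200]
img = xx / 400.0 + rng.normal(0, 0.02, (200, 200))            # slowly varying background plus noise
img[((yy - 100) ** 2 + (xx - 100) ** 2) < 5 ** 2] += 0.25     # small bright region of interest

local_thresh = threshold_local(img, block_size=35, offset=-0.05)   # per-pixel adaptive threshold
mask = img > local_thresh
print(mask.sum(), "pixels above the local threshold")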
A., Javadpour; A., Mohammadi
2016-01-01
Background: Regarding the importance of correct diagnosis in medical applications, various methods have been exploited for processing medical images. Segmentation is used to analyze anatomical structures in medical imaging. Objective: This study describes a new method for brain Magnetic Resonance Image (MRI) segmentation via a novel algorithm based on genetic algorithms and regional growth. Methods: Among medical imaging methods, brain MRI segmentation is important due to the high contrast of non-intrusive soft tissue and high spatial resolution. Size variations of brain tissues are often accompanied by various diseases such as Alzheimer’s disease. As our knowledge about the relation between various brain diseases and deviation of brain anatomy increases, MRI segmentation is exploited as the first step in early diagnosis. In this paper, the regional growth method and automatic selection of initial points by a genetic algorithm are used to introduce a new method for MRI segmentation. Primary pixels and the similarity criterion are selected automatically by the genetic algorithm to maximize the accuracy and validity of the image segmentation. Results: By using genetic algorithms and defining a fitness function for image segmentation, the initial points for the algorithm were found. The proposed algorithm was applied to the images, and the results were compared with those of regional growth in which the initial points were selected manually. The results showed that the proposed algorithm could reduce segmentation error effectively. Conclusion: The study concluded that the proposed algorithm could reduce segmentation error effectively and help us to diagnose brain diseases. PMID:27672629
A Composite Model of Wound Segmentation Based on Traditional Methods and Deep Neural Networks
Wang, Changjian; Liu, Xiaohui; Jin, Shiyao
2018-01-01
Wound segmentation plays an important supporting role in wound observation and wound healing. Current methods of image segmentation include those based on traditional image processing and those based on deep neural networks. The traditional methods use hand-crafted image features to complete the task without large amounts of labeled data, whereas the methods based on deep neural networks can extract image features effectively without manual design but require large amounts of training data. Combining the advantages of both, this paper presents a composite model of wound segmentation. The model uses the skin-with-wound detection algorithm designed in this paper to highlight image features. Then, the preprocessed images are segmented by deep neural networks, and semantic corrections are applied to the segmentation results at last. The model shows a good performance in our experiment. PMID:29955227
A general system for automatic biomedical image segmentation using intensity neighborhoods.
Chen, Cheng; Ozolek, John A; Wang, Wei; Rohde, Gustavo K
2011-01-01
Image segmentation is important with applications to several problems in biology and medicine. While extensively researched, generally, current segmentation methods perform adequately in the applications for which they were designed, but often require extensive modifications or calibrations before being used in a different application. We describe an approach that, with few modifications, can be used in a variety of image segmentation problems. The approach is based on a supervised learning strategy that utilizes intensity neighborhoods to assign each pixel in a test image its correct class based on training data. We describe methods for modeling rotations and variations in scales as well as a subset selection for training the classifiers. We show that the performance of our approach in tissue segmentation tasks in magnetic resonance and histopathology microscopy images, as well as nuclei segmentation from fluorescence microscopy images, is similar to or better than several algorithms specifically designed for each of these applications.
An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images.
Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong
2014-08-01
Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key bio-marker of the diagnosis of muscular dystrophy. In nuclei segmentation one primary challenge is to correctly separate the clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic image of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from background by using local Otsu's threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to identify isolated nuclei from clustered nuclei and artifacts in all the images. Then a two-step refined watershed algorithm is applied to segment clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented image processing pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. © 2014 Wiley Periodicals, Inc.
3S2: Behavioral Response Studies of Cetaceans to Navy Sonar Signals in Norwegian Waters
2015-09-30
Publication fragments cited in the report include: experimental exposures of killer, long-finned pilot (Globicephala melas), and sperm whales (Physeter macrocephalus) to naval sonar, Aquatic Mammals 38: 362-401; Moretti, D., Thomas, L., et al. (2014), The social context of individual foraging behaviour in long-finned pilot whales (Globicephala melas), Behaviour 151: 1453-1477; and high response thresholds for avoidance of sonar by free-ranging long-finned pilot whales (Globicephala melas), Mar. Poll. Bull. 83: 165-180.
3S2: Behavioral Response Studies of Cetaceans to Navy Sonar Signals in Norwegian Waters
2015-09-30
Publication fragments cited in the report include: experimental exposures of killer (Orcinus orca), long-finned pilot (Globicephala melas), and sperm whales (Physeter macrocephalus) to naval sonar, Aquatic Mammals; Kvadsheim P.H., Huisman J. and Tyack P.L. (2014), The social context of individual foraging behaviour in long-finned pilot whales (Globicephala melas); and Wensveen P.J., Miller P.J.O. (2014), High response thresholds for avoidance of sonar by free-ranging long-finned pilot whales (Globicephala melas).
2010-09-30
Goals of the proposal include: 1) complete the development of second-generation sonar boards, and 2) complete the integration of new transducers with the second-generation sonar board. Approach: over the last 40 years there has been significant research effort directed towards the use of high-frequency acoustics; most systems measure backscatter at a single narrowband frequency, and while some AUVs carry single-frequency sidescan sonars (and this technology has been adapted for gliders), the lack of suitable ...
Characteristics and Use of a Parametric End-Fired Array for Acoustics in Air
2007-03-01
as a sonar application for underwater use. The vast majority of the research for parametric arrays was devoted to underwater applications until the...and also for the calibration of hydrophones and receivers for wide band sonar. All of the researchers mentioned above mainly focused their efforts on...features, which include very high directivity at low frequencies without unwanted side lobes. They are generally used as a wide band sonar system
Seismic Interface Waves in Coastal Waters: A Review
1980-11-15
Being at the low-frequency end of classical sonar activity and at the high-frequency end of seismic research, the propagation of infrasonic energy...water areas. Certainly this and other seismic detection methods will never replace the highly-developed sonar techniques but in coastal waters they...for many sonar purposes [5, 85 to 90) shows that very simple bottom models may already be sufficient to make allowance for the influence of the sea
Advanced Unmanned Search System (AUSS) Performance Analysis
1979-07-15
interference (from thrusters, flow noise, etc.) with sonar data; (4) Sonar range scales can be adjusted, on scene, for viewing the same contacts with...intact. The H-bomb search was performed at 2000 feet, the submarine search at 8400 feet. An additional submarine search was selected at 20,000 feet to...Sonar Targets," by Stephen Miller, Marine Physical Laboratory, Scripps Institution of Oceanography, January 1977. 10 Table 2. Baseline towed system
Sonar surveys used in gas-storage cavern analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crossley, N.G.
1998-05-04
Natural-gas storage cavern internal configuration, inspection information, and cavern integrity data can be obtained during high-pressure operations with specialized gas-sonar survey logging techniques. TransGas Ltd., Regina, Sask., has successfully performed these operations on several of its deepest and highest pressurized caverns. The data can determine gas-in-place inventory and assess changes in spatial volumes. These changes can result from cavern creep, shrinkage, or closure or from various downhole abnormalities such as fluid infill or collapse of the sidewall or roof. The paper discusses conventional surveys with sonar, running surveys in pressurized caverns, accuracy of the sonar survey, initial development of Cavern 5, a roof fall, Cavern 4 development, and a damaged string.
Correction tool for Active Shape Model based lumbar muscle segmentation.
Valenzuela, Waldo; Ferguson, Stephen J; Ignasiak, Dominika; Diserens, Gaelle; Vermathen, Peter; Boesch, Chris; Reyes, Mauricio
2015-08-01
In the clinical environment, the accuracy and speed of the image segmentation process play a key role in the analysis of pathological regions. Despite advances in anatomic image segmentation, time-effective correction tools are commonly needed to improve segmentation results. Such tools must therefore provide fast corrections with a low number of interactions and a user-independent solution. In this work, we present a new interactive method for correcting image segmentation results. Given an initial segmentation and the original image, our tool provides a 2D/3D environment that enables 3D shape correction through simple 2D interactions. Our scheme is based on direct manipulation of free-form deformation adapted to a 2D environment. This approach enables an intuitive and natural correction of 3D segmentation results. The developed method has been implemented into a software tool and has been evaluated for the task of lumbar muscle segmentation from magnetic resonance images. Experimental results show that full segmentation correction could be performed within an average correction time of 6±4 minutes and an average of 68±37 interactions, while maintaining the quality of the final segmentation result with an average Dice coefficient of 0.92±0.03.
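A rough sketch of the underlying idea (warping a 2D slice of a segmentation with a smooth, locally supported displacement field driven by a single user drag), assuming a simple Gaussian-weighted displacement rather than the authors' exact free-form deformation formulation; all names and parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def drag_correction(mask, grab_point, drag_vector, radius=15.0):
    """Warp a 2D binary mask by pulling the region around grab_point along drag_vector.

    A Gaussian falloff keeps the deformation smooth and local, loosely mimicking a
    free-form-deformation style edit driven by one 2D interaction.
    """
    rows, cols = np.mgrid[0:mask.shape[0], 0:mask.shape[1]].astype(float)
    dist2 = (rows - grab_point[0]) ** 2 + (cols - grab_point[1]) ** 2
    weight = np.exp(-dist2 / (2.0 * radius ** 2))
    # Sample the original mask at positions displaced *against* the drag,
    # which moves the labeled region *with* the drag.
    src_rows = rows - weight * drag_vector[0]
    src_cols = cols - weight * drag_vector[1]
    warped = map_coordinates(mask.astype(float), [src_rows, src_cols], order=1)
    return warped > 0.5

# Toy example: a square segmentation pulled 6 pixels to the right near its edge.
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True
corrected = drag_correction(mask, grab_point=(30, 39), drag_vector=(0, 6))
```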
Shahedi, Maysam; Halicek, Martin; Guo, Rongrong; Zhang, Guoyi; Schuster, David M; Fei, Baowei
2018-06-01
Prostate segmentation in computed tomography (CT) images is useful for treatment planning and procedure guidance such as external beam radiotherapy and brachytherapy. However, because of the low soft-tissue contrast of CT images, manual segmentation of the prostate is a time-consuming task with high interobserver variation. In this study, we proposed a semiautomated, three-dimensional (3D) segmentation method for prostate CT images using shape and texture analysis, and we evaluated the method against manual reference segmentations. The prostate gland usually has a globular shape with a smoothly curved surface, and its shape can be accurately modeled or reconstructed from a limited number of well-distributed surface points. In a training dataset, using the prostate gland centroid point as the origin of a coordinate system, we defined an intersubject correspondence between the prostate surface points based on spherical coordinates. We applied this correspondence to generate a point distribution model for prostate shape using principal component analysis and to study the local texture difference between prostate and nonprostate tissue close to the different prostate surface subregions. We used the learned shape and texture characteristics of the prostate in CT images and combined them with user inputs to segment a new image. We trained our segmentation algorithm using 23 CT images and tested the algorithm on two sets of 10 nonbrachytherapy and 37 post-low-dose-rate brachytherapy CT images. We used a set of error metrics to evaluate the segmentation results using two experts' manual reference segmentations. For both the nonbrachytherapy and post-brachytherapy image sets, the average measured Dice similarity coefficient (DSC) was 88% and the average mean absolute distance (MAD) was 1.9 mm. The average measured differences between the two experts on both datasets were 92% (DSC) and 1.1 mm (MAD). The proposed semiautomatic segmentation algorithm showed fast, robust, and accurate performance for 3D prostate segmentation of CT images, specifically when no previous intrapatient information, that is, previously segmented images, was available. The accuracy of the algorithm is comparable to the best performance results reported in the literature and approaches the interexpert variability observed in manual segmentation. © 2018 American Association of Physicists in Medicine.
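A minimal sketch of the point-distribution-model step described above: given corresponding surface points across training shapes, PCA yields a mean shape and principal modes of variation. The synthetic training shapes and the number of modes kept are assumptions for illustration only.

```python
import numpy as np

# Hypothetical training set: n_shapes prostate surfaces, each sampled at the same
# n_points corresponding surface points (flattened to 3*n_points coordinates).
rng = np.random.default_rng(0)
n_shapes, n_points = 23, 200
shapes = rng.normal(size=(n_shapes, 3 * n_points))  # stand-in for real correspondences

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape

# PCA via SVD of the centered data matrix.
_, singular_values, vt = np.linalg.svd(centered, full_matrices=False)
modes = vt                                   # each row is one mode of variation
variances = singular_values ** 2 / (n_shapes - 1)

# Reconstruct a shape from the first k modes (the point distribution model).
k = 5
coeffs = centered[0] @ modes[:k].T           # project one training shape
reconstruction = mean_shape + coeffs @ modes[:k]
```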
Contextually guided very-high-resolution imagery classification with semantic segments
NASA Astrophysics Data System (ADS)
Zhao, Wenzhi; Du, Shihong; Wang, Qiao; Emery, William J.
2017-10-01
Contextual information, revealing relationships and dependencies between image objects, is among the most important information for the successful interpretation of very-high-resolution (VHR) remote sensing imagery. Over the last decade, the geographic object-based image analysis (GEOBIA) technique has been widely used to first divide images into homogeneous parts and then assign semantic labels according to the properties of image segments. However, due to the complexity and heterogeneity of VHR images, segments without semantic labels (i.e., semantic-free segments) generated with low-level features often fail to represent geographic entities (e.g., building roofs are often partitioned into chimney/antenna/shadow parts). As a result, it is hard to capture contextual information across geographic entities when using semantic-free segments. In contrast to low-level features, "deep" features can be used to build robust segments with accurate labels (i.e., semantic segments) in order to represent geographic entities at higher levels. Based on these semantic segments, semantic graphs can be constructed to capture contextual information in VHR images. In this paper, semantic segments were first explored with convolutional neural networks (CNN), and a conditional random field (CRF) model was then applied to model the contextual information between semantic segments. Experimental results on two challenging VHR datasets (i.e., the Vaihingen and Beijing scenes) indicate that the proposed method improves on existing image classification techniques in classification performance (overall accuracy ranges from 82% to 96%).
Wang, Hongzhi; Das, Sandhitsu R.; Suh, Jung Wook; Altinay, Murat; Pluta, John; Craige, Caryne; Avants, Brian; Yushkevich, Paul A.
2011-01-01
We propose a simple but generally applicable approach to improving the accuracy of automatic image segmentation algorithms relative to manual segmentations. The approach is based on the hypothesis that a large fraction of the errors produced by automatic segmentation are systematic, i.e., occur consistently from subject to subject, and serves as a wrapper method around a given host segmentation method. The wrapper method attempts to learn the intensity, spatial and contextual patterns associated with systematic segmentation errors produced by the host method on training data for which manual segmentations are available. The method then attempts to correct such errors in segmentations produced by the host method on new images. One practical use of the proposed wrapper method is to adapt existing segmentation tools, without explicit modification, to imaging data and segmentation protocols that are different from those on which the tools were trained and tuned. An open-source implementation of the proposed wrapper method is provided, and can be applied to a wide range of image segmentation problems. The wrapper method is evaluated with four host brain MRI segmentation methods: hippocampus segmentation using FreeSurfer (Fischl et al., 2002); hippocampus segmentation using multi-atlas label fusion (Artaechevarria et al., 2009); brain extraction using BET (Smith, 2002); and brain tissue segmentation using FAST (Zhang et al., 2001). The wrapper method generates 72%, 14%, 29% and 21% fewer erroneously segmented voxels than the respective host segmentation methods. In the hippocampus segmentation experiment with multi-atlas label fusion as the host method, the average Dice overlap between reference segmentations and segmentations produced by the wrapper method is 0.908 for normal controls and 0.893 for patients with mild cognitive impairment. Average Dice overlaps of 0.964, 0.905 and 0.951 are obtained for brain extraction, white matter segmentation and gray matter segmentation, respectively. PMID:21237273
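The core idea above (learn where a host method disagrees with manual labels, then flip those voxels on new images) can be sketched as follows; the per-voxel features, the logistic-regression choice, and the synthetic data are assumptions for illustration and not the published wrapper implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def voxel_features(image, host_seg):
    """Per-voxel features: intensity, host label, and a crude spatial coordinate."""
    zz = np.indices(image.shape)[0] / image.shape[0]
    return np.stack([image.ravel(), host_seg.ravel(), zz.ravel()], axis=1)

# Hypothetical training case: an image, the host method's segmentation, and the manual one.
rng = np.random.default_rng(0)
image = rng.random((16, 16, 16))
manual = (image > 0.5).astype(int)
host = manual.copy()
host[image > 0.48] = 1          # the host systematically over-segments near the boundary

# Learn whether each voxel's host label is wrong (the systematic error pattern).
error_clf = LogisticRegression(max_iter=1000)
error_clf.fit(voxel_features(image, host), (host != manual).ravel().astype(int))

# On a new image, flip the voxels predicted to be systematic errors.
new_image = rng.random((16, 16, 16))
new_host = (new_image > 0.48).astype(int)
flip = error_clf.predict(voxel_features(new_image, new_host)).reshape(new_host.shape)
corrected = np.where(flip == 1, 1 - new_host, new_host)
```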
Molar axis estimation from computed tomography images.
Dongxia Zhang; Yangzhou Gan; Zeyang Xia; Xinwen Zhou; Shoubin Liu; Jing Xiong; Guanglin Li
2016-08-01
Estimation of the tooth axis is needed for some clinical dental treatments. Existing methods require segmenting the tooth volume from computed tomography (CT) images and then estimating the axis from the tooth volume. However, they may fail when estimating the molar axis, because tooth segmentation from CT images is challenging and current segmentation methods may produce poor results, especially for tilted molars, which causes the axis estimation to fail. To resolve this problem, this paper proposes a new method for molar axis estimation from CT images. The key innovation is that, instead of estimating the 3D axis of each molar from the segmented volume, the method estimates the 3D axis from two projection images. The method includes three steps. (1) The 3D images of each molar are projected onto two 2D image planes. (2) The molar contour is segmented and its 2D axis is extracted in each 2D projection image; principal component analysis (PCA) and a modified symmetry-axis detection algorithm are employed to extract the 2D axis from the segmented molar contour. (3) A 3D molar axis is obtained by combining the two 2D axes. Experimental results verified that the proposed method is effective in estimating the molar axis from CT images.
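As a rough illustration of step (2), the first principal component of a segmented 2D contour's point cloud gives an estimate of its in-plane axis; the synthetic contour below and the use of plain PCA (without the modified symmetry-axis detection) are simplifying assumptions.

```python
import numpy as np

# Hypothetical 2D contour points of a projected molar (an elongated ellipse rotated 30 deg).
t = np.linspace(0, 2 * np.pi, 200)
pts = np.stack([8 * np.cos(t), 3 * np.sin(t)], axis=1)
theta = np.deg2rad(30)
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
pts = pts @ rot.T

# PCA: the eigenvector with the largest eigenvalue of the covariance is the 2D axis direction.
centered = pts - pts.mean(axis=0)
cov = centered.T @ centered / (len(pts) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
axis_2d = eigvecs[:, np.argmax(eigvals)]
print("estimated in-plane axis direction:", axis_2d)  # close to (cos 30°, sin 30°)
```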
Rough-Fuzzy Clustering and Unsupervised Feature Selection for Wavelet Based MR Image Segmentation
Maji, Pradipta; Roy, Shaswati
2015-01-01
Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of brain magnetic resonance (MR) images. For many human experts, manual segmentation is a difficult and time-consuming task, which makes an automated brain MR image segmentation method desirable. In this regard, this paper presents a new segmentation method for brain MR images, integrating judiciously the merits of rough-fuzzy computing and multiresolution image analysis. The proposed method assumes that the major brain tissues, namely gray matter, white matter, and cerebrospinal fluid, have different textural properties in the MR images. The dyadic wavelet analysis is used to extract the scale-space feature vector for each pixel, while the rough-fuzzy clustering is used to address the uncertainty problem of brain MR image segmentation. An unsupervised feature selection method is introduced, based on the maximum relevance-maximum significance criterion, to select relevant and significant textural features for the segmentation problem, while a mathematical-morphology-based skull-stripping preprocessing step is proposed to remove non-cerebral tissue such as the skull. The performance of the proposed method, along with a comparison with related approaches, is demonstrated on a set of synthetic and real brain MR images using standard validity indices. PMID:25848961
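A much-simplified sketch of the feature-then-cluster idea: build a per-pixel scale-space feature vector (here plain multi-scale Gaussian responses rather than the paper's dyadic wavelet features) and cluster the pixels (here ordinary k-means as a stand-in for rough-fuzzy clustering). All names and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.cluster import KMeans

def scale_space_features(image, sigmas=(1, 2, 4)):
    """Per-pixel feature vector: intensity plus responses at several smoothing scales."""
    channels = [image] + [gaussian_filter(image, s) for s in sigmas]
    return np.stack([c.ravel() for c in channels], axis=1)

# Stand-in for a skull-stripped brain MR slice.
rng = np.random.default_rng(0)
slice_2d = rng.random((128, 128))

features = scale_space_features(slice_2d)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
segmentation = labels.reshape(slice_2d.shape)  # 3 clusters standing in for GM/WM/CSF
```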
The Impact of Manual Segmentation of CT Images on Monte Carlo Based Skeletal Dosimetry
NASA Astrophysics Data System (ADS)
Frederick, Steve; Jokisch, Derek; Bolch, Wesley; Shah, Amish; Brindle, Jim; Patton, Phillip; Wyler, J. S.
2004-11-01
Radiation doses to the skeleton from internal emitters are of importance in both protection of radiation workers and patients undergoing radionuclide therapies. Improved dose estimates involve obtaining two sets of medical images. The first image provides the macroscopic boundaries (spongiosa volume and cortical shell) of the individual skeletal sites. A second, higher resolution image of the spongiosa microstructure is also obtained. These image sets then provide the geometry for a Monte Carlo radiation transport code. Manual segmentation of the first image is required in order to provide the macrostructural data. For this study, multiple segmentations of the same CT image were performed by multiple individuals. The segmentations were then used in the transport code and the results compared in order to determine the impact of differing segmentations on the skeletal doses. This work has provided guidance on the extent of training required of the manual segmenters. (This work was supported by a grant from the National Institute of Health.)
NASA Technical Reports Server (NTRS)
Shekhar, R.; Cothren, R. M.; Vince, D. G.; Chandra, S.; Thomas, J. D.; Cornhill, J. F.
1999-01-01
Intravascular ultrasound (IVUS) provides exact anatomy of arteries, allowing accurate quantitative analysis. Automated segmentation of IVUS images is a prerequisite for routine quantitative analyses. We present a new three-dimensional (3D) segmentation technique, called active surface segmentation, which detects luminal and adventitial borders in IVUS pullback examinations of coronary arteries. The technique was validated against expert tracings by computing correlation coefficients (range 0.83-0.97) and William's index values (range 0.37-0.66). The technique was statistically accurate, robust to image artifacts, and capable of segmenting a large number of images rapidly. Active surface segmentation enabled geometrically accurate 3D reconstruction and visualization of coronary arteries and volumetric measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, X; Gao, H; Sharp, G
2015-06-15
Purpose: The delineation of targets and organs-at-risk is a critical step during image-guided radiation therapy, for which manual contouring is the gold standard. However, it is often time-consuming and may suffer from intra- and inter-rater variability. The purpose of this work is to investigate automated segmentation. Methods: The automatic segmentation here is based on mutual information (MI), with atlases from the Public Domain Database for Computational Anatomy (PDDCA) with manually drawn contours. Using the Dice coefficient (DC) as the quantitative measure of segmentation accuracy, we perform leave-one-out cross-validations for all PDDCA images sequentially, during which the other images are registered to each chosen image and the DC is computed between the registered contour and the ground truth. Meanwhile, six strategies, including MI, are selected to measure image similarity, with MI found to be the best. Then, given a target image to be segmented and an atlas, automatic segmentation consists of: (a) an affine registration step for image positioning; (b) the active demons registration method to register the atlas to the target image; (c) the computation of MI values between the deformed atlas and the target image; (d) the weighted image fusion of the three deformed atlas images with the highest MI values to form the segmented contour. Results: MI was found to be the best among the six studied strategies in the sense that it had the highest positive correlation between the similarity measure (e.g., MI values) and DC. For automated segmentation, the weighted image fusion of the three deformed atlas images with the highest MI values provided the highest DC among the four proposed strategies. Conclusion: MI has the highest correlation with DC and is therefore an appropriate choice for post-registration atlas selection in atlas-based segmentation. Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).
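A minimal sketch of two ingredients described above: a histogram-based mutual information score between a deformed atlas and the target, and MI-weighted fusion of a few atlas label maps into one contour. The 2D toy arrays, the bin count, and the top-3 choice are illustrative assumptions, not the registration pipeline itself.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based MI between two images of the same shape."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Hypothetical target image and three (already deformed) atlases with their label maps.
rng = np.random.default_rng(0)
target = rng.random((64, 64))
atlases = [target + 0.1 * rng.random((64, 64)) for _ in range(3)]
labels = [(a > 0.5).astype(float) for a in atlases]

# Weight each atlas label map by its MI with the target and fuse.
weights = np.array([mutual_information(target, a) for a in atlases])
weights /= weights.sum()
fused = sum(w * l for w, l in zip(weights, labels))
segmented = fused > 0.5
```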
A New SAR Image Segmentation Algorithm for the Detection of Target and Shadow Regions
Huang, Shiqi; Huang, Wenzhun; Zhang, Ting
2016-01-01
The most distinctive characteristic of synthetic aperture radar (SAR) is that it can acquire data under all weather conditions and at all times. However, its coherent imaging mechanism introduces a great deal of speckle noise into SAR images, which makes the segmentation of target and shadow regions in SAR images very difficult. This paper proposes a new SAR image segmentation method based on wavelet decomposition and a constant false alarm rate (WD-CFAR). The WD-CFAR algorithm not only is insensitive to the speckle noise in SAR images but also can segment target and shadow regions simultaneously, and it is also able to effectively segment SAR images with a low signal-to-clutter ratio (SCR). Experiments were performed to assess the performance of the new algorithm on various SAR images. The experimental results show that the proposed method is effective and feasible and possesses good characteristics for general application. PMID:27924935
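A simplified sketch of the two ingredients named in the abstract: a wavelet decomposition of a simulated speckled image followed by a cell-averaging CFAR threshold on the smoothed approximation band. The pywt wavelet, window sizes, and threshold factor are illustrative assumptions, not the WD-CFAR parameters.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

# Simulated speckled SAR scene: exponential clutter plus one bright target patch.
rng = np.random.default_rng(0)
scene = rng.exponential(scale=1.0, size=(128, 128))
scene[60:68, 60:68] += 8.0

# Wavelet decomposition; working on the approximation band suppresses speckle.
approx, (ch, cv, cd) = pywt.dwt2(scene, "haar")

def ca_cfar(image, train=9, guard=3, factor=3.0):
    """Cell-averaging CFAR: compare each pixel with the mean of its surrounding training ring."""
    big = uniform_filter(image, size=train)
    small = uniform_filter(image, size=guard)
    n_big, n_small = train ** 2, guard ** 2
    ring_mean = (big * n_big - small * n_small) / (n_big - n_small)
    return image > factor * ring_mean

target_mask = ca_cfar(approx)
print("detected target pixels:", int(target_mask.sum()))
```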
Hanson, Erik A; Lundervold, Arvid
2013-11-01
Multispectral, multichannel, or time-series image segmentation is important for image analysis in a wide range of applications. Regularization of the segmentation is commonly performed using local image information, causing the segmented image to be locally smooth or piecewise constant. A new spatial regularization method, incorporating non-local information, was developed and tested. Our spatial regularization method applies to feature-space classification in multichannel images such as color images and MR image sequences. The spatial regularization involves local edge properties, region boundary minimization, as well as non-local similarities. The method is implemented in a discrete graph-cut setting, allowing fast computations. The method was tested on multidimensional MRI recordings from human kidney and brain in addition to simulated MRI volumes. The proposed method successfully segments regions with both smooth and complex non-smooth shapes with a minimum of user interaction.
NASA Astrophysics Data System (ADS)
Ojeda, G. Y.; Gayes, P. T.; van Dolah, R. F.; Schwab, W. C.
2002-12-01
Assessment of the extent and variability of benthic habitats is an important mission of biologists and marine scientists, and is highly relevant to monitoring and maintaining the offshore resources of coastal nations. Mapping 'hard bottoms', in particular, is of critical importance because these are the areas that support sessile benthic habitats and associated fisheries. To quantify the extent and distribution of habitats offshore of northern South Carolina, we used a spatially quantitative approach that involved textural analysis of side scan sonar images and training of an artificial neural network classifier. This approach was applied to a 2 m-pixel image mosaic of sonar data collected by the USGS in 1999 and 2000. The entire mosaic covered some 686 km2 and extended between the ~6 m and ~10+ m isobaths off the Grand Strand region of South Carolina. Bottom video transects across selected sites provided 2,119 point observations, which were used for image-to-ground control as well as training of the neural network classifier. A sensitivity study of 52 space-domain textural features indicated that 12 of them provided reasonable discriminating power between two end-member bottom types: hard bottom and sand. The selected features were calculated over 5 by 5 pixel windows of the image where video point observations existed. These feature vectors were then fed to a 3-layer neural network classifier, trained with a Levenberg-Marquardt backpropagation algorithm. Registration and display of the output habitat map were performed in GIS. Results of our classification indicate that outcropping Tertiary and Cretaceous strata are exposed over a significant portion of northern South Carolina's inner shelf, consistent with a sediment-starved margin type. The combined surface extent classified as hard bottom was 405 km2 (59% of the imaged area), while 281 km2 (41% of the area) was classified as sand. In addition, our results provided constraints on the spatial continuity of nearshore benthic habitats. The median surface areas of the regions classified as hard bottom (n = 190,521) and sand (n = 234,946) were both equal to the output cell size (100 m2), confirming the 'patchy' nature of these habitats and suggesting that these medians probably represent upper bounds rather than estimates of the typical extent of individual patches. Furthermore, comparison of the interpretive habitat map with available swath bathymetry data suggests a positive correlation between bathymetry 'highs' and the major sandy-bottom areas interpreted with our routine. In contrast, the location of hard bottom areas does not appear to be significantly correlated with major bathymetric features. Our findings are in agreement with published qualitative estimates of hard bottom areas on the neighboring North Carolina inner shelf.
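A rough sketch of the texture-plus-neural-network idea: grey-level co-occurrence (GLCM) features computed over small windows of a sonar-like image, fed to a small multilayer perceptron that labels hard bottom versus sand. The scikit-image/scikit-learn calls, the window size, and the synthetic labels are illustrative assumptions (the original study used 12 hand-selected space-domain features and Levenberg-Marquardt training).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def glcm_features(window):
    """A few co-occurrence texture features for one uint8 image window."""
    glcm = graycomatrix(window, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return [float(graycoprops(glcm, p)[0, 0])
            for p in ("contrast", "homogeneity", "energy", "correlation")]

# Synthetic stand-ins: "sand" windows are smooth, "hard bottom" windows are rough.
rng = np.random.default_rng(0)
windows, labels = [], []
for k in range(200):
    rough = k % 2
    noise_level = 60 if rough else 10
    win = np.clip(128 + noise_level * rng.standard_normal((5, 5)), 0, 255).astype(np.uint8)
    windows.append(glcm_features(win))
    labels.append(rough)

clf = MLPClassifier(hidden_layer_sizes=(12,), max_iter=2000, random_state=0)
clf.fit(windows, labels)
print("training accuracy:", clf.score(windows, labels))
```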
A comparative study of automatic image segmentation algorithms for target tracking in MR‐IGRT
Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J.; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa
2016-01-01
On‐board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real‐time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image‐guided radiotherapy (MR‐IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k‐means (FKM), k‐harmonic means (KHM), and reaction‐diffusion level set evolution (RD‐LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR‐TPDS). The performance of the five algorithms was evaluated quantitatively by comparison with the manual segmentation, using the Dice coefficient and the target registration error (TRE), measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR‐TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD‐LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR‐TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high‐contrast images (i.e., the kidney), the thresholding method provided the best speed (<1 ms) with satisfying accuracy (Dice=0.95). When the image contrast was low, the VR‐TPDS method had the best automatic contour. Results suggest an image-quality determination procedure before segmentation and a combination of different methods for optimal segmentation with the on‐board MR‐IGRT system. PACS number(s): 87.57.nm, 87.57.N‐, 87.61.Tg
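The two evaluation measures used above are easy to state in code; a small sketch, assuming 2D boolean masks and pixel units (spacing handling and the actual ViewRay pipeline are not reproduced).

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def centroid_tre(a, b):
    """Target registration error as the distance between mask centroids (pixels)."""
    ca = np.argwhere(a).mean(axis=0)
    cb = np.argwhere(b).mean(axis=0)
    return float(np.linalg.norm(ca - cb))

manual = np.zeros((100, 100), dtype=bool)
manual[40:60, 40:60] = True
auto = np.zeros_like(manual)
auto[42:62, 41:61] = True
print(f"Dice = {dice(manual, auto):.3f}, TRE = {centroid_tre(manual, auto):.2f} px")
```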
Brain Tumor Image Segmentation in MRI Image
NASA Astrophysics Data System (ADS)
Peni Agustin Tjahyaningtijas, Hapsari
2018-04-01
Brain tumor segmentation plays an important role in medical image processing. Treatment of patients with brain tumors is highly dependent on early detection of these tumors, and early detection improves the patient's life chances. Diagnosis of brain tumors by experts usually relies on manual segmentation, which is difficult and time consuming, making automatic segmentation necessary. Nowadays automatic segmentation is very popular and can be a solution to the problem of brain tumor segmentation with better performance. The purpose of this paper is to provide a review of MRI-based brain tumor segmentation methods. There are a number of existing review papers focusing on traditional methods for MRI-based brain tumor image segmentation. In this paper, we focus on the recent trend of automatic segmentation in this field. First, an introduction to brain tumors and methods for brain tumor segmentation is given. Then, the state-of-the-art algorithms, with a focus on the recent trend of fully automatic segmentation, are discussed. Finally, an assessment of the current state is presented, and future developments to standardize MRI-based brain tumor segmentation methods into the daily clinical routine are addressed.
Vessel segmentation in 4D arterial spin labeling magnetic resonance angiography images of the brain
NASA Astrophysics Data System (ADS)
Phellan, Renzo; Lindner, Thomas; Falcão, Alexandre X.; Forkert, Nils D.
2017-03-01
4D arterial spin labeling magnetic resonance angiography (4D ASL MRA) is a non-invasive and safe modality for cerebrovascular imaging procedures. It uses the patient's magnetically labeled blood as an intrinsic contrast agent, so that no external contrast medium is required. It provides important 3D structure and blood flow information, but an adequate cerebrovascular segmentation is important, since it can help clinicians analyze and diagnose vascular diseases faster and with higher confidence compared with simple visual rating of raw ASL MRA images. This work presents a new method for automatic cerebrovascular segmentation in 4D ASL MRA images of the brain. In this process, images are denoised, corresponding label/control image pairs of the 4D ASL MRA sequences are subtracted, and temporal intensity averaging is used to generate a static representation of the vascular system. After that, sets of vessel and background seeds are extracted and provided as input to the image foresting transform algorithm to segment the vascular system. Four 4D ASL MRA datasets of the brain arteries of healthy subjects and corresponding time-of-flight (TOF) MRA images were available for this preliminary study. For evaluation of the segmentation results of the proposed method, the cerebrovascular system was automatically segmented in the high-resolution TOF MRA images using a validated algorithm, and the segmentation results were registered to the 4D ASL datasets. Corresponding segmentation pairs were compared using the Dice similarity coefficient (DSC). On average, a DSC of 0.9025 was achieved, indicating that vessels can be extracted successfully from 4D ASL MRA datasets by the proposed segmentation method.
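A minimal sketch of the preprocessing described above (pairwise label/control subtraction followed by temporal averaging to obtain a static angiogram), on a synthetic 4D array; the axis ordering and the simple mean are illustrative assumptions, and the denoising and image-foresting-transform steps are omitted.

```python
import numpy as np

# Hypothetical 4D ASL MRA data: alternating control/label volumes along the time axis.
rng = np.random.default_rng(0)
n_pairs, shape = 8, (32, 32, 16)
control = rng.normal(100.0, 1.0, size=(n_pairs, *shape))
label = control - 2.0 * (rng.random(shape) > 0.97)   # labeled blood lowers the signal in "vessels"
series = np.empty((2 * n_pairs, *shape))
series[0::2], series[1::2] = control, label

# Pairwise subtraction isolates the perfusion (vessel) signal ...
diff = series[0::2] - series[1::2]
# ... and temporal intensity averaging gives a static representation of the vasculature.
static_angiogram = diff.mean(axis=0)
vessel_mask = static_angiogram > 1.0                 # crude stand-in for seed extraction
```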
NASA Astrophysics Data System (ADS)
Zhou, Xiangrong; Takayama, Ryosuke; Wang, Song; Zhou, Xinxin; Hara, Takeshi; Fujita, Hiroshi
2017-02-01
We have proposed an end-to-end learning approach that trains a deep convolutional neural network (CNN) for automatic CT image segmentation, performing a voxel-wise multi-class classification that directly maps each voxel of a 3D CT image to an anatomical label automatically. The novelties of our proposed method were (1) transforming the segmentation of anatomical structures on 3D CT images into a majority voting over the results of 2D semantic image segmentation on a number of 2D slices from different image orientations, and (2) using "convolution" and "deconvolution" networks to achieve the conventional "coarse recognition" and "fine extraction" functions, which were integrated into a compact all-in-one deep CNN for CT image segmentation. The advantage compared with previous work was its capability to accomplish real-time image segmentation on 2D slices of arbitrary CT scan range (e.g., body, chest, abdomen) and to produce correspondingly sized output. In this paper, we propose an improvement of our approach by adding an organ localization module to limit the CT image range for training and testing the deep CNNs. A database consisting of 240 3D CT scans and human-annotated ground truth was used for training (228 cases) and testing (the remaining 12 cases). We applied the improved method to segment the pancreas and left kidney regions, respectively. The preliminary results showed that the accuracies of the segmentation results were improved significantly (the Jaccard index increased by 34% for the pancreas and by 8% for the kidney compared with our previous results). The effectiveness and usefulness of the proposed improvement for CT image segmentation were confirmed.
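The majority-voting idea in (1) can be sketched independently of any particular network: per-voxel label maps predicted from axial, coronal, and sagittal 2D slices are combined by taking the most frequent label at each voxel. The random stand-in predictions and the tie-breaking rule below are illustrative assumptions.

```python
import numpy as np

def majority_vote(label_volumes):
    """Combine several per-voxel label volumes by taking the most frequent label."""
    stacked = np.stack(label_volumes, axis=0)            # (n_orientations, z, y, x)
    n_labels = int(stacked.max()) + 1
    counts = np.stack([(stacked == l).sum(axis=0) for l in range(n_labels)], axis=0)
    return counts.argmax(axis=0)                         # ties go to the lowest label index

# Stand-ins for the label maps obtained from 2D segmentation along three orientations.
rng = np.random.default_rng(0)
shape = (16, 16, 16)
axial, coronal, sagittal = (rng.integers(0, 4, size=shape) for _ in range(3))
fused = majority_vote([axial, coronal, sagittal])
```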
Hybrid active contour model for inhomogeneous image segmentation with background estimation
NASA Astrophysics Data System (ADS)
Sun, Kaiqiong; Li, Yaqin; Zeng, Shan; Wang, Jun
2018-03-01
This paper proposes a hybrid active contour model for inhomogeneous image segmentation. The data term of the energy function in the active contour consists of a global region fitting term in a difference image and a local region fitting term in the original image. The difference image is obtained by subtracting the background from the original image. The background image is dynamically estimated from a linear filtered result of the original image on the basis of the varying curve locations during the active contour evolution process. As in existing local models, fitting the image to local region information makes the proposed model robust against an inhomogeneous background and maintains the accuracy of the segmentation result. Furthermore, fitting the difference image to the global region information makes the proposed model robust against the initial contour location, unlike existing local models. Experimental results show that the proposed model can obtain improved segmentation results compared with related methods in terms of both segmentation accuracy and initial contour sensitivity.
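A small sketch of the background-estimation step described above: the background is approximated by heavy linear smoothing of the original image and subtracted to form the difference image on which a global fitting term would operate. The Gaussian filter, its width, and the synthetic inhomogeneous image are illustrative assumptions (the paper's dynamic, curve-dependent estimate is not reproduced).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic inhomogeneous image: a bright square on a slowly varying background bias.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:128, 0:128]
bias = 0.5 * xx / 127.0                       # intensity inhomogeneity
image = bias + 0.05 * rng.standard_normal((128, 128))
image[40:80, 40:80] += 0.4

# Background estimation via a linear (here Gaussian) filter of the original image.
background = gaussian_filter(image, sigma=25)

# Difference image used by the global region-fitting term; the local term
# would still operate on `image` itself.
difference = image - background
global_seg = difference > 0.5 * difference.max()
```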
Survey statistics of automated segmentations applied to optical imaging of mammalian cells.
Bajcsy, Peter; Cardone, Antonio; Chalfoun, Joe; Halter, Michael; Juba, Derek; Kociolek, Marcin; Majurski, Michael; Peskin, Adele; Simon, Carl; Simon, Mylene; Vandecreme, Antoine; Brady, Mary
2015-10-15
The goal of this survey paper is to overview cellular measurements using optical microscopy imaging followed by automated image segmentation. The cellular measurements of primary interest are taken from mammalian cells and their components. They are denoted as two- or three-dimensional (2D or 3D) image objects of biological interest. In our applications, such cellular measurements are important for understanding cell phenomena, such as cell counts, cell-scaffold interactions, cell colony growth rates, or cell pluripotency stability, as well as for establishing quality metrics for stem cell therapies. In this context, this survey paper is focused on automated segmentation as a software-based measurement leading to quantitative cellular measurements. We define the scope of this survey and a classification schema first. Next, all found and manually filtered publications are classified according to the main categories: (1) objects of interest (or objects to be segmented), (2) imaging modalities, (3) digital data axes, (4) segmentation algorithms, (5) segmentation evaluations, (6) computational hardware platforms used for segmentation acceleration, and (7) object (cellular) measurements. Finally, all classified papers are converted programmatically into a set of hyperlinked web pages with occurrence and co-occurrence statistics of assigned categories. The survey paper presents to a reader: (a) the state-of-the-art overview of published papers about automated segmentation applied to optical microscopy imaging of mammalian cells, (b) a classification of segmentation aspects in the context of cell optical imaging, (c) histogram and co-occurrence summary statistics about cellular measurements, segmentations, segmented objects, segmentation evaluations, and the use of computational platforms for accelerating segmentation execution, and (d) open research problems to pursue. The novel contributions of this survey paper are: (1) a new type of classification of cellular measurements and automated segmentation, (2) statistics about the published literature, and (3) a web hyperlinked interface to classification statistics of the surveyed papers at https://isg.nist.gov/deepzoomweb/resources/survey/index.html.
NASA Technical Reports Server (NTRS)
Stahl, H. Philip (Inventor); Walker, Chanda Bartlett (Inventor)
2006-01-01
An achromatic shearing phase sensor generates an image indicative of at least one measure of alignment between two segments of a segmented telescope's mirrors. An optical grating receives at least a portion of the irradiance originating at the segmented telescope in the form of a collimated beam and splits the collimated beam into a plurality of diffraction orders. Focusing optics separate and focus the diffraction orders. Filtering optics then filter the diffraction orders to generate a resultant set of diffraction orders that are modified. Imaging optics combine portions of the resultant set of diffraction orders to generate an interference pattern that is ultimately imaged by an imager.
Medical image segmentation using genetic algorithms.
Maulik, Ujjwal
2009-03-01
Genetic algorithms (GAs) have been found to be effective in the domain of medical image segmentation, since the problem can often be mapped to one of search in a complex and multimodal landscape. The challenges in medical image segmentation arise due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. The resulting search space is therefore often noisy with a multitude of local optima. Not only does the genetic algorithmic framework prove to be effective in coming out of local optima, it also brings considerable flexibility into the segmentation procedure. In this paper, an attempt has been made to review the major applications of GAs to the domain of medical image segmentation.
Applications of magnetic resonance image segmentation in neurology
NASA Astrophysics Data System (ADS)
Heinonen, Tomi; Lahtinen, Antti J.; Dastidar, Prasun; Ryymin, Pertti; Laarne, Paeivi; Malmivuo, Jaakko; Laasonen, Erkki; Frey, Harry; Eskola, Hannu
1999-05-01
After the introduction of digital imaging devices in medicine, computerized tissue recognition and classification have become important in research and clinical applications. Segmented data can be applied in numerous research fields, including volumetric analysis of particular tissues and structures, construction of anatomical models, 3D visualization, and multimodal visualization, making segmentation essential in modern image analysis. In this research project, several PC-based software tools were developed to segment medical images, to visualize raw and segmented images in 3D, and to produce EEG brain maps in which MR images and EEG signals were integrated. The software package was tested and validated in numerous clinical research projects in a hospital environment.