Video Shot Boundary Detection Using QR-Decomposition and Gaussian Transition Detection
NASA Astrophysics Data System (ADS)
Amiri, Ali; Fathy, Mahmood
2010-12-01
This article explores the problem of video shot boundary detection and examines a novel shot boundary detection algorithm that uses QR-decomposition and models gradual transitions with Gaussian functions. Specifically, the authors address the challenges of detecting gradual shots and of extracting appropriate spatiotemporal features, both of which affect how efficiently algorithms detect shot boundaries. The algorithm exploits the properties of QR-decomposition to extract a block-wise probability function giving the probability that each video frame lies within a shot transition. This function changes abruptly at hard cuts and behaves semi-Gaussian across gradual transitions, and the algorithm detects both transition types by analyzing it. Finally, the authors report the results of experiments on the large-scale test sets provided by TRECVID 2006, which include assessments for both hard-cut and gradual shot boundary detection. These results confirm the high performance of the proposed algorithm.
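The abstract gives no implementation detail beyond the use of QR-decomposition on block-wise features, so the following is only a minimal sketch of the general idea, not the authors' method: summarize each frame by the R-factor of a block matrix, turn inter-frame signature distances into a normalized pseudo-probability signal, and flag isolated spikes as hard cuts. The block size, threshold, and spike test are assumptions; frames are assumed to be 2-D grayscale NumPy arrays.

    import numpy as np

    def frame_signature(frame, block=16):
        """Stack non-overlapping pixel blocks as columns and keep |diag(R)| of the QR factor."""
        h, w = frame.shape
        cols = [frame[i:i + block, j:j + block].ravel()
                for i in range(0, h - block + 1, block)
                for j in range(0, w - block + 1, block)]
        a = np.array(cols, dtype=float).T        # (pixels per block) x (number of blocks)
        r = np.linalg.qr(a, mode='r')            # compute the R factor only
        return np.abs(np.diag(r))

    def transition_probability(frames):
        """Normalized inter-frame signature distance, acting as a transition pseudo-probability."""
        sigs = [frame_signature(f) for f in frames]
        d = np.array([np.linalg.norm(sigs[i + 1] - sigs[i]) for i in range(len(sigs) - 1)])
        return d / (d.max() + 1e-12)

    def detect_hard_cuts(prob, thresh=0.5):
        """Hard cuts appear as isolated spikes; gradual transitions as broad, Gaussian-like bumps."""
        return [i for i in range(1, len(prob) - 1)
                if prob[i] > thresh and prob[i] > 2 * prob[i - 1] and prob[i] > 2 * prob[i + 1]]

Gradual transitions would then be sought by fitting Gaussian-shaped templates to the broad bumps that survive the cut test, which this sketch omits.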
Quantitative three-dimensional transrectal ultrasound (TRUS) for prostate imaging
NASA Astrophysics Data System (ADS)
Pathak, Sayan D.; Aarnink, Rene G.; de la Rosette, Jean J.; Chalana, Vikram; Wijkstra, Hessel; Haynor, David R.; Debruyne, Frans M. J.; Kim, Yongmin
1998-06-01
With the number of men seeking medical care for prostate diseases rising steadily, clinicians increasingly need a fast and accurate tool for prostate boundary detection and volume estimation. Currently, these measurements are made manually, which results in long examination times. A possible solution is to improve efficiency by automating the boundary detection and volume estimation process with minimal involvement from human experts. In this paper, we present an algorithm based on SNAKES to detect the boundaries. Our approach is to selectively enhance the contrast along the edges using an algorithm called sticks and to integrate it with a SNAKES model. This integrated algorithm requires an initial curve for each ultrasound image to initiate the boundary detection process. We have used different schemes to generate the curves with varying degrees of automation and evaluated their effects on the algorithm's performance. After the boundaries are identified, the prostate volume is calculated using planimetric volumetry. We have tested our algorithm on 6 different prostate volumes and compared the performance against the volumes manually measured by 3 experts. As expected, the algorithm's performance improved with increased user input. The results demonstrate that, given an initial contour reasonably close to the prostate boundaries, the algorithm successfully delineates the prostate boundaries in an image, and the resulting volume measurements are in close agreement with those made by the human experts.
A Genetic Algorithm and Fuzzy Logic Approach for Video Shot Boundary Detection
Thounaojam, Dalton Meitei; Khelchandra, Thongam; Singh, Kh. Manglem; Roy, Sudipta
2016-01-01
This paper proposes a shot boundary detection approach using a Genetic Algorithm and Fuzzy Logic. The membership functions of the fuzzy system are calculated with the Genetic Algorithm using pre-observed actual values for shot boundaries. The fuzzy system then classifies the types of shot transitions. Experimental results show that the accuracy of shot boundary detection increases with the number of iterations (generations) of the GA optimization process. The proposed system is compared with recent techniques and yields better results in terms of the F1-score. PMID:27127500
MPLNET V3 Cloud and Planetary Boundary Layer Detection
NASA Technical Reports Server (NTRS)
Lewis, Jasper R.; Welton, Ellsworth J.; Campbell, James R.; Haftings, Phillip C.
2016-01-01
The NASA Micropulse Lidar Network Version 3 algorithms for planetary boundary layer and cloud detection are described, and differences relative to the previous Version 2 algorithms are highlighted. A year of data from the Goddard Space Flight Center site in Greenbelt, MD, covering diurnal and seasonal trends, is used to demonstrate the results. Both the planetary boundary layer and cloud algorithms show significant improvement over the previous version.
Wang, Yuliang; Zhang, Zaicheng; Wang, Huimin; Bi, Shusheng
2015-01-01
Cell image segmentation plays a central role in numerous biology studies and clinical applications. As a result, the development of cell image segmentation algorithms with high robustness and accuracy is attracting more and more attention. In this study, an automated cell image segmentation algorithm is developed to improve cell boundary detection and the segmentation of clustered cells for all cells in the field of view in negative phase contrast images. A new method that combines thresholding with an edge-based active contour method is proposed to optimize cell boundary detection. To segment clustered cells, the peaks of cell light intensity are used to detect the number and locations of the clustered cells. In this paper, the working principles of the algorithms are described. The influence of the parameters in cell boundary detection and of the selection of the threshold value on the final segmentation results is investigated. Finally, the proposed algorithm is applied to negative phase contrast images from different experiments and its performance is evaluated. Results show that the proposed method achieves optimized cell boundary detection and highly accurate segmentation of clustered cells. PMID:26066315
Linear segmentation algorithm for detecting layer boundary with lidar.
Mao, Feiyue; Gong, Wei; Logan, Timothy
2013-11-04
The automatic detection of aerosol- and cloud-layer boundaries (base and top) is important in atmospheric lidar data processing, because the boundary information is not only useful for environment and climate studies but can also serve as input for further data processing. Previous methods have shown limitations in defining the base and top and in setting the window size, and have neglected in-layer attenuation. To overcome these limitations, we present a new layer detection scheme for up-looking lidars based on linear segmentation with reasonable threshold-setting, boundary-selection, and false-positive-removal strategies. Preliminary results from both real and simulated data show that this algorithm not only detects the layer base as accurately as the simple multi-scale method, but also detects the layer top more accurately. Our algorithm can be applied directly to uncalibrated data without requiring any additional measurements or window-size selection.
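As a rough illustration of the linear-segmentation idea only (the paper's actual threshold-setting, boundary-selection, and false-positive-removal strategies are not reproduced), the sketch below fits a least-squares line to consecutive range-bin segments of the log signal and flags sharp slope increases and decreases as candidate layer bases and tops. The segment length, threshold, and sign convention are assumptions.

    import numpy as np

    def segment_slopes(profile, seg_len=10):
        """Least-squares slope of each consecutive range-bin segment of the signal."""
        x = np.arange(seg_len)
        slopes = []
        for i in range(0, len(profile) - seg_len + 1, seg_len):
            slopes.append(np.polyfit(x, profile[i:i + seg_len], 1)[0])
        return np.array(slopes)

    def layer_boundaries(range_corrected, seg_len=10, thresh=0.02):
        """Candidate layer base: slope turns sharply positive (signal rises into the layer).
        Candidate layer top: slope turns sharply negative again."""
        s = segment_slopes(np.log(np.clip(range_corrected, 1e-12, None)), seg_len)
        bases = np.where((s[1:] - s[:-1]) > thresh)[0] * seg_len
        tops = np.where((s[1:] - s[:-1]) < -thresh)[0] * seg_len
        return bases, tops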
Corner detection and sorting method based on improved Harris algorithm in camera calibration
NASA Astrophysics Data System (ADS)
Xiao, Ying; Wang, Yonghong; Dan, Xizuo; Huang, Anqi; Hu, Yue; Yang, Lianxiang
2016-11-01
In the traditional Harris corner detection algorithm, the threshold used to eliminate false corners is selected manually. To detect corners automatically, this paper proposes an improved algorithm that combines the Harris detector with the circular boundary theory of corners. After accurate corner coordinates are detected using the Harris and Forstner algorithms, false corners within the chessboard pattern of the calibration plate are eliminated automatically using circular boundary theory. Moreover, a corner sorting method based on an improved calibration plate is proposed to eliminate false background corners and sort the remaining corners in order. Experimental results show that the proposed algorithms eliminate all false corners and sort the remaining corners correctly and automatically.
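A minimal sketch of how a circular-boundary test can reject false Harris corners on a chessboard target: around a true X-corner, the intensity sampled on a small circle alternates dark/bright/dark/bright, giving roughly four sign changes. The radius, sample count, and crossing tolerance are assumptions, and the Forstner refinement and corner-sorting steps are omitted.

    import cv2
    import numpy as np

    def harris_corners(gray, rel_thresh=0.01):
        """Candidate corners from the Harris response, with a relative threshold."""
        r = cv2.cornerHarris(np.float32(gray), 2, 3, 0.04)
        ys, xs = np.where(r > rel_thresh * r.max())
        return list(zip(xs, ys))

    def is_chessboard_corner(gray, x, y, radius=5, n=32):
        """Circular-boundary test: a true X-corner shows about four sign changes
        of the mean-subtracted intensity sampled around a small circle."""
        ang = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        xi = np.clip((x + radius * np.cos(ang)).astype(int), 0, gray.shape[1] - 1)
        yi = np.clip((y + radius * np.sin(ang)).astype(int), 0, gray.shape[0] - 1)
        s = gray[yi, xi].astype(float) - gray[yi, xi].mean()
        crossings = np.count_nonzero(np.diff(np.sign(s)) != 0)
        return 3 <= crossings <= 5

    # usage: keep only Harris candidates that pass the circular-boundary test
    # corners = [(x, y) for x, y in harris_corners(img) if is_chessboard_corner(img, x, y)]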
Parametric boundary reconstruction algorithm for industrial CT metrology application.
Yin, Zhye; Khare, Kedar; De Man, Bruno
2009-01-01
High-energy X-ray computed tomography (CT) systems have recently been used to produce high-resolution images in various nondestructive testing and evaluation (NDT/NDE) applications. The accuracy of the dimensional information extracted from CT images is rapidly approaching that achieved with a coordinate measuring machine (CMM), the conventional approach to acquiring metrology information directly. CT systems, on the other hand, generate a sinogram, which is mathematically transformed into pixel-based images; the dimensional information of the scanned object is then extracted by performing edge detection on the reconstructed CT images. The dimensional accuracy of this approach is limited by the grid size of the pixel-based representation, since the edge detection is performed on the pixel grid. Moreover, reconstructed CT images usually display various artifacts due to the underlying physical process, and the object boundaries resulting from edge detection fail to represent the true boundaries of the scanned object. In this paper, a novel algorithm to reconstruct the boundaries of an object with uniform material composition and uniform density is presented. The proposed approach has three major benefits. First, since boundary parameters are reconstructed instead of image pixels, the complexity of the reconstruction algorithm is significantly reduced; the iterative approach, which can be computationally intensive, becomes practical with parametric boundary reconstruction. Second, the object of interest in metrology can be represented more directly and accurately by boundary parameters than by image pixels; by eliminating the extra edge detection step, the overall dimensional accuracy and processing time can be improved. Third, since the parametric reconstruction approach shares the boundary representation with other conventional metrology modalities such as CMM, boundary information from those modalities can be incorporated directly as prior knowledge to improve the convergence of the iterative approach. The feasibility of the parametric boundary reconstruction algorithm is demonstrated with both simple and complex simulated objects, and the proposed algorithm is applied to data from an experimental industrial CT system.
Zang, Pengxiao; Gao, Simon S.; Hwang, Thomas S.; Flaxel, Christina J.; Wilson, David J.; Morrison, John C.; Huang, David; Li, Dengwang; Jia, Yali
2017-01-01
To improve optic disc boundary detection and peripapillary retinal layer segmentation, we propose an automated approach for structural and angiographic optical coherence tomography. The algorithm was performed on radial cross-sectional B-scans. The disc boundary was detected by searching for the position of Bruch’s membrane opening, and retinal layer boundaries were detected using a dynamic programming-based graph search algorithm on each B-scan without the disc region. A comparison of the disc boundary using our method with that determined by manual delineation showed good accuracy, with an average Dice similarity coefficient ≥0.90 in healthy eyes and eyes with diabetic retinopathy and glaucoma. The layer segmentation accuracy in the same cases was on average less than one pixel (3.13 μm). PMID:28663830
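The abstract names a dynamic-programming-based graph search for the layer boundaries; below is a generic sketch of that family of methods, not the authors' implementation. The per-column cost (here a signed vertical intensity gradient) and the move set (at most one row between adjacent A-scans) are assumptions, and the Bruch's-membrane-opening disc detection is not shown.

    import numpy as np

    def dp_layer_boundary(bscan):
        """Trace one layer boundary across a B-scan (rows = depth, columns = A-scans)
        by accumulating a gradient-based cost with a +/-1-row transition constraint."""
        cost = -np.gradient(bscan.astype(float), axis=0)   # assumed edge-polarity cost
        acc = cost.copy()
        for j in range(1, cost.shape[1]):
            prev = acc[:, j - 1]
            best = np.minimum(np.minimum(np.roll(prev, 1), prev), np.roll(prev, -1))
            best[0] = min(prev[0], prev[1])                # undo the wrap-around at the edges
            best[-1] = min(prev[-1], prev[-2])
            acc[:, j] = cost[:, j] + best
        path = np.empty(cost.shape[1], dtype=int)
        path[-1] = int(np.argmin(acc[:, -1]))
        for j in range(cost.shape[1] - 2, -1, -1):         # backtrack within the move set
            r = path[j + 1]
            lo, hi = max(r - 1, 0), min(r + 2, cost.shape[0])
            path[j] = lo + int(np.argmin(acc[lo:hi, j]))
        return path    # row index of the boundary in every A-scan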
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kassab, A.J.; Pollard, J.E.
An algorithm is presented for the high-resolution detection of irregular-shaped subsurface cavities within irregular-shaped bodies by the IR-CAT method. The theoretical basis of the algorithm is rooted in the solution of an inverse geometric steady-state heat conduction problem. A Cauchy boundary condition is prescribed at the exposed surface, and the inverse geometric heat conduction problem is formulated by specifying the thermal condition at the inner cavity walls, whose unknown geometries are to be detected. The location of the inner cavities is initially estimated, and the domain boundaries are discretized. Linear boundary elements are used in conjunction with cubic splines for high resolution of the cavity walls. An anchored grid pattern (AGP) is established to constrain the cubic spline knots that control the inner cavity geometry to evolve along the AGP at each iterative step. A residual is defined measuring the difference between imposed and computed boundary conditions. A Newton-Raphson method with a Broyden update is used to automate the detection of inner cavity walls. During the iterative procedure, the movement of the inner cavity walls is restricted to physically realistic intermediate solutions. Numerical simulation demonstrates the superior resolution of the cubic spline AGP algorithm over the linear spline-based AGP in the detection of an irregular-shaped cavity. Numerical simulation is also used to test the sensitivity of the linear and cubic spline AGP algorithms by simulating bias and random error in measured surface temperature. The proposed AGP algorithm is shown to satisfactorily detect cavities with these simulated data.
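A bare-bones sketch of the Newton-Raphson/Broyden machinery the abstract describes, under the assumption of a user-supplied residual(p) that runs the boundary-element forward solve for cavity-wall parameters p and returns the mismatch between imposed and computed surface conditions. The AGP constraint and the restriction to physically realistic intermediate solutions are omitted.

    import numpy as np

    def broyden_solve(residual, p0, tol=1e-8, max_iter=50):
        """Drive residual(p) -> 0 with Newton-Raphson steps and a Broyden
        rank-one Jacobian update (avoids re-computing the Jacobian each step)."""
        p = np.asarray(p0, dtype=float)
        r = residual(p)
        eps = 1e-6   # one-time forward-difference Jacobian
        J = np.column_stack([(residual(p + eps * e) - r) / eps for e in np.eye(p.size)])
        for _ in range(max_iter):
            dp = np.linalg.lstsq(J, -r, rcond=None)[0]
            p_new = p + dp
            r_new = residual(p_new)
            if np.linalg.norm(r_new) < tol:
                return p_new
            # Broyden's "good" update: correct J only along the step direction
            J += np.outer(r_new - r - J @ dp, dp) / (dp @ dp)
            p, r = p_new, r_new
        return p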
A density based algorithm to detect cavities and holes from planar points
NASA Astrophysics Data System (ADS)
Zhu, Jie; Sun, Yizhong; Pang, Yueyong
2017-12-01
Delaunay-based shape reconstruction algorithms are widely used to approximate a shape from planar points. However, these algorithms cannot ensure the optimality of the reconstructed cavity and hole boundaries. This inadequate reconstruction can be attributed primarily to the lack of an efficient mathematical formulation for the two structures (hole and cavity). In this paper, we develop an efficient algorithm for generating cavities and holes from planar points. The algorithm yields the final boundary through iterative removal from the Delaunay triangulation. It is divided into two steps, namely rough and refined shape reconstruction. The rough shape reconstruction is controlled by a relative parameter. Based on the rough result, the refined shape reconstruction aims to detect holes and pure cavities. A cavity or hole is conceptualized as a structure in which a low-density region is surrounded by a high-density region. With this structure, cavities and holes are characterized by a mathematical formulation called the compactness of a point, formed from the length variation of the edges incident to the point in the Delaunay triangulation. The boundaries of cavities and holes are then found by locating a sharp gradient change in the compactness over the point set. An experimental comparison with other shape reconstruction approaches shows that the proposed algorithm accurately yields the boundaries of cavities and holes under varying point-set densities and distributions.
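The paper's exact compactness formulation is not given in the abstract, so the sketch below uses an assumed stand-in for "length variation of the edges incident to a point": the coefficient of variation of incident Delaunay edge lengths, which is high for points that mix short (dense-side) and long (empty-side) edges, i.e. likely boundary points of cavities and holes.

    import numpy as np
    from scipy.spatial import Delaunay

    def point_compactness(points):
        """Per-point length variation of incident Delaunay edges (assumed form)."""
        tri = Delaunay(points)
        incident = [[] for _ in range(len(points))]
        for simplex in tri.simplices:
            for a, b in ((0, 1), (1, 2), (0, 2)):
                i, j = simplex[a], simplex[b]
                d = np.linalg.norm(points[i] - points[j])
                incident[i].append(d)
                incident[j].append(d)
        comp = np.zeros(len(points))
        for k, lens in enumerate(incident):
            lens = np.asarray(lens)
            if len(lens):
                comp[k] = lens.std() / lens.mean()
        return comp

    def boundary_candidates(points, quantile=0.9):
        """Indices of points whose compactness jumps above the bulk of the set."""
        c = point_compactness(points)
        return np.where(c > np.quantile(c, quantile))[0]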
Boundary and object detection in real world images. [by means of algorithms
NASA Technical Reports Server (NTRS)
Yakimovsky, Y.
1974-01-01
A solution to the problem of automatic location of objects in digital pictures by computer is presented. A self-scaling local edge detector which can be applied in parallel on a picture is described. Clustering algorithms and boundary following algorithms which are sequential in nature process the edge data to locate images of objects.
Zhu, Fei; Liu, Quan; Fu, Yuchen; Shen, Bairong
2014-01-01
The segmentation of structures in electron microscopy (EM) images is very important for neurobiological research. Low-resolution neuronal EM images contain noise, and generally few features are available for segmentation; therefore, conventional approaches to identifying neuron structure from EM images are unsuccessful. We therefore present a multi-scale fused structure boundary detection algorithm in this study. In the algorithm, we first generate an EM image Gaussian pyramid; then, at each level of the pyramid, we apply the Laplacian of Gaussian (LoG) function to obtain structure boundaries; finally, we assemble the detected boundaries using a fusion algorithm to obtain a combined neuron structure image. Since the obtained neuron structures usually have gaps, we put forward a reinforcement learning-based boundary amendment method to connect the gaps in the detected boundaries. We use a SARSA(λ)-based curve traveling and amendment approach derived from reinforcement learning to repair the incomplete curves. In this algorithm, a moving point starts from one end of the incomplete curve and walks through the image, with decisions supervised by the approximated curve model, aiming to minimize the connection cost until the gap is closed. Our approach provided stable and efficient structure segmentation. Test results on 30 EM images from ISBI 2012 indicated that both of our approaches, i.e., with or without boundary amendment, performed better than six conventional boundary detection approaches. In particular, after amendment, the Rand error and warping error, the most important performance measures for structure segmentation, were reduced to very low values. The comparison with the benchmark method of ISBI 2012 and recently developed methods also indicates that our method performs better at accurately identifying substructures in EM images and is therefore useful for identifying imaging features related to brain diseases. PMID:24625699
Safner, T.; Miller, M.P.; McRae, B.H.; Fortin, M.-J.; Manel, S.
2011-01-01
Recently, techniques available for identifying clusters of individuals or boundaries between clusters using genetic data from natural populations have expanded rapidly. Consequently, there is a need to evaluate these different techniques. We used spatially explicit simulation models to compare three spatial Bayesian clustering programs and two edge detection methods. Spatially structured populations were simulated in which a continuous population was subdivided by barriers. We evaluated the ability of each method to correctly identify boundary locations while varying: (i) time after divergence, (ii) strength of isolation by distance, (iii) level of genetic diversity, and (iv) amount of gene flow across barriers. To further evaluate the methods' effectiveness in detecting genetic clusters in natural populations, we used previously published data on North American pumas and a European shrub. Our results show that with both simulated and empirical data, the Bayesian spatial clustering algorithms outperformed direct edge detection methods. All methods incorrectly detected boundaries in the presence of strong patterns of isolation by distance. Based on this finding, we support the application of Bayesian spatial clustering algorithms for boundary detection in empirical datasets, with necessary tests for the influence of isolation by distance. © 2011 by the authors; licensee MDPI, Basel, Switzerland.
Multimedia systems in ultrasound image boundary detection and measurements
NASA Astrophysics Data System (ADS)
Pathak, Sayan D.; Chalana, Vikram; Kim, Yongmin
1997-05-01
Ultrasound as a medical imaging modality offers the clinician a real-time, noninvasive view of the anatomy of internal organs and tissues, their movement, and flow. One application of ultrasound is monitoring fetal growth by measuring the biparietal diameter (BPD) and head circumference (HC). We have been working on the automatic detection of fetal head boundaries in ultrasound images; these detected boundaries are used to measure BPD and HC. The boundary detection algorithm is based on active contour models and takes 32 seconds on an external high-end workstation, a SUN SparcStation 20/71. Our goal has been to make this tool available within an ultrasound machine and at the same time significantly improve its performance using multimedia technology. With the advent of high-performance programmable digital signal processors (DSPs), a software solution within an ultrasound machine, instead of the traditional hardwired approach or an external computer, is now possible. We have integrated our boundary detection algorithm into a programmable ultrasound image processor (PUIP) that fits into a commercial ultrasound machine. The PUIP provides both the high computing power and the flexibility needed to support computationally intensive image processing algorithms within an ultrasound machine. According to our data analysis, BPD/HC measurements made on the PUIP lie within the interobserver variability; the errors in the automated BPD/HC measurements are thus on the same order as the average interobserver differences. On the PUIP, it takes 360 ms to measure BPD/HC on one head image. When processing multiple head images in sequence, it takes 185 ms per image, enabling 5.4 BPD/HC measurements per second. Reducing the overall execution time from 32 seconds to a fraction of a second and making this multimedia system available within an ultrasound machine will help this image processing algorithm and other compute-intensive imaging applications become a practical tool for sonographers in the future.
New Graph Models and Algorithms for Detecting Salient Structures from Cluttered Images
2010-02-24
Development of graph models and algorithms to detect boundaries that show certain levels of symmetry, an important geometric property of many...
Automatic detection of artifacts in converted S3D video
NASA Astrophysics Data System (ADS)
Bokov, Alexander; Vatolin, Dmitriy; Zachesov, Anton; Belous, Alexander; Erofeev, Mikhail
2014-03-01
In this paper we present algorithms for automatically detecting issues specific to converted S3D content. When a depth-image-based rendering approach produces a stereoscopic image, the quality of the result depends on both the depth maps and the warping algorithms. The most common problem with converted S3D video is edge-sharpness mismatch. This artifact may appear owing to depth-map blurriness at semitransparent edges: after warping, the object boundary becomes sharper in one view and blurrier in the other, yielding binocular rivalry. To detect this problem we estimate the disparity map, extract boundaries with noticeable differences, and analyze edge-sharpness correspondence between views. We pay additional attention to cases involving a complex background and large occlusions. Another problem is detecting scenes that lack depth volume: we present algorithms for detecting flat scenes and scenes with flat foreground objects. To identify these problems we analyze features of the RGB image as well as uniform areas in the depth map. Testing of our algorithms involved examining 10 Blu-ray 3D releases with converted S3D content, including Clash of the Titans, The Avengers, and The Chronicles of Narnia: The Voyage of the Dawn Treader. The algorithms we present enable improved automatic quality assessment during the production stage.
GridMass: a fast two-dimensional feature detection method for LC/MS.
Treviño, Victor; Yañez-Garza, Irma-Luz; Rodriguez-López, Carlos E; Urrea-López, Rafael; Garza-Rodriguez, Maria-Lourdes; Barrera-Saldaña, Hugo-Alberto; Tamez-Peña, José G; Winkler, Robert; Díaz de-la-Garza, Rocío-Isabel
2015-01-01
One of the initial and critical procedures in the analysis of metabolomics data from liquid chromatography and mass spectrometry is feature detection. Feature detection is the process of detecting the boundaries of features on the mass surface from the raw data, which consists of detected abundances arranged in a two-dimensional (2D) matrix of mass/charge and elution time. MZmine 2 is one of the leading software environments providing a full analysis pipeline for these data. However, the feature detection algorithms provided in MZmine 2 are based mainly on analyzing one dimension at a time. We propose GridMass, an efficient algorithm for 2D feature detection. The algorithm is based on landing probes across the chromatographic space that are moved to find local maxima, providing accurate boundary estimations. We tested GridMass on a controlled marker experiment, on plasma samples, on plant fruits, and on a proteome sample. Compared with other algorithms, GridMass is faster and may achieve comparable or better sensitivity and specificity. As a proof of concept, GridMass has been implemented in Java under the MZmine 2 environment and is available at http://www.bioinformatica.mty.itesm.mx/GridMass and MASSyPup. It has also been submitted to the MZmine 2 developer community. Copyright © 2015 John Wiley & Sons, Ltd.
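A toy version of the "landing probes" idea on a dense 2-D intensity matrix (the real GridMass operates on LC/MS data and also estimates feature boundaries, which this sketch does not): probes start on a regular grid and hill-climb to local maxima; probes that converge to the same cell mark one candidate feature apex. Grid step and move budget are assumptions.

    import numpy as np

    def probe_local_maxima(surface, grid_step=8, max_moves=200):
        """Land probes on a regular grid over the (m/z x time) matrix and let
        each one climb to its nearest local maximum."""
        rows, cols = surface.shape
        apexes = set()
        for r0 in range(0, rows, grid_step):
            for c0 in range(0, cols, grid_step):
                r, c = r0, c0
                for _ in range(max_moves):
                    r_lo, r_hi = max(r - 1, 0), min(r + 2, rows)
                    c_lo, c_hi = max(c - 1, 0), min(c + 2, cols)
                    window = surface[r_lo:r_hi, c_lo:c_hi]
                    dr, dc = np.unravel_index(np.argmax(window), window.shape)
                    nr, nc = r_lo + dr, c_lo + dc
                    if (nr, nc) == (r, c):      # converged on a local maximum
                        break
                    r, c = nr, nc
                apexes.add((r, c))
        return sorted(apexes)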
AgRISTARS. Supporting research: Algorithms for scene modelling
NASA Technical Reports Server (NTRS)
Rassbach, M. E. (Principal Investigator)
1982-01-01
The requirements for a comprehensive analysis of LANDSAT or other visual data scenes are defined. The development of a general model of a scene and a computer algorithm for finding the particular model for a given scene is discussed. The modelling system includes a boundary analysis subsystem, which detects all the boundaries and lines in the image and builds a boundary graph; a continuous variation analysis subsystem, which finds gradual variations not well approximated by a boundary structure; and a miscellaneous features analysis, which includes texture, line parallelism, etc. The noise reduction capabilities of this method and its use in image rectification and registration are discussed.
Breast boundary detection with active contours
NASA Astrophysics Data System (ADS)
Balic, I.; Goyal, P.; Roy, O.; Duric, N.
2014-03-01
Ultrasound tomography is a modality that can be used to image various characteristics of the breast, such as sound speed, attenuation, and reflectivity. In the considered setup, the breast is immersed in water and scanned along the coronal axis from the chest wall to the nipple region. To improve image visualization, it is desirable to remove the water background. To this end, the 3D boundary of the breast must be accurately estimated. We present an iterative algorithm based on active contours that automatically detects the boundary of a breast using a 3D stack of attenuation images obtained from an ultrasound tomography scanner. We build upon an existing method to design an algorithm that is fast, fully automated, and reliable. We demonstrate the effectiveness of the proposed technique using clinical data sets.
Ekberg, Peter; Su, Rong; Chang, Ernest W.; Yun, Seok Hyun; Mattsson, Lars
2014-01-01
Optical coherence tomography (OCT) is useful for materials defect analysis and inspection, with the additional possibility of quantitative dimensional metrology. Here, we present an automated image-processing algorithm for OCT analysis of roll-to-roll multilayers in 3D manufacturing of advanced ceramics. It has the advantage of avoiding filtering and preset modeling, and thus introduces a simplification. The algorithm is validated for its capability to measure the thickness of ceramic layers, extract the boundaries of embedded features with irregular shapes, and detect geometric deformations. The accuracy of the algorithm is very high, and the reliability is better than 1 µm when evaluated on OCT images using the same gauge-block step-height reference. The method may be suitable for industrial application to the rapid inspection of manufactured samples with high accuracy and robustness. PMID:24562018
NASA Astrophysics Data System (ADS)
Pantazis, Alexandros; Papayannis, Alexandros; Georgoussis, Georgios
2018-04-01
In this paper we present the development of novel algorithms and techniques implemented within the Laser Remote Sensing Laboratory (LRSL) of the National Technical University of Athens (NTUA), in collaboration with Raymetrics S.A., for incorporation into a 3-Dimensional (3D) lidar. The lidar transmits at 355 nm in the eye-safe region, and the measurements are then transposed to the visual range at 550 nm, according to the World Meteorological Organization (WMO) and International Civil Aviation Organization (ICAO) rules for daytime visibility. These algorithms provide horizontal, slant, and vertical visibility for tower air traffic controllers and meteorologists, as well as from the pilot's point of view. Other algorithms are also provided for the detection of atmospheric layering in any given direction and vertical angle, along with the detection of the Planetary Boundary Layer Height (PBLH).
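For context, the WMO ties daytime visibility to extinction through the Koschmieder relation with a 5% contrast threshold, and transposing a 355 nm extinction coefficient to 550 nm is commonly done with an Ångström power law; the sketch below combines the two. This is not the paper's exact procedure, and the Ångström exponent value is an assumption that in practice depends on aerosol type.

    import numpy as np

    def extinction_550_from_355(alpha_355, angstrom=1.0):
        """Power-law wavelength transposition (assumed exponent)."""
        return alpha_355 * (355.0 / 550.0) ** angstrom

    def meteorological_optical_range(alpha_550):
        """Koschmieder relation with the WMO 5% contrast threshold:
        V = -ln(0.05) / alpha ~= 3.912 / alpha (alpha in 1/m gives V in m)."""
        return -np.log(0.05) / np.maximum(alpha_550, 1e-12)

    # e.g. 0.4 km^-1 extinction at 355 nm -> ~0.26 km^-1 at 550 nm -> MOR ~15 km
    print(meteorological_optical_range(extinction_550_from_355(0.4e-3)) / 1e3)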
An Automated Cloud-edge Detection Algorithm Using Cloud Physics and Radar Data
NASA Technical Reports Server (NTRS)
Ward, Jennifer G.; Merceret, Francis J.; Grainger, Cedric A.
2003-01-01
An automated cloud edge detection algorithm was developed and extensively tested. The algorithm uses in-situ cloud physics data measured by a research aircraft coupled with ground-based weather radar measurements to determine whether the aircraft is in or out of cloud. Cloud edges are determined when the in/out state changes, subject to a hysteresis constraint. The hysteresis constraint prevents isolated transient cloud puffs or data dropouts from being identified as cloud boundaries. The algorithm was verified by detailed manual examination of the data set in comparison to the results from application of the automated algorithm.
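A minimal sketch of the hysteresis rule described above: a change of the in/out-of-cloud state is accepted as a cloud edge only once it has persisted for a minimum number of consecutive samples, so isolated puffs and data dropouts are ignored. The window length is an assumption.

    def cloud_edges(in_cloud, hysteresis=5):
        """Turn a per-sample in/out-of-cloud flag sequence into edge indices,
        rejecting state changes shorter than `hysteresis` samples."""
        edges, state, run = [], in_cloud[0], 0
        for i, flag in enumerate(in_cloud):
            if flag != state:
                run += 1
                if run >= hysteresis:            # change has persisted: accept edge
                    edges.append(i - run + 1)    # edge located at the start of the run
                    state, run = flag, 0
            else:
                run = 0                          # transient puff or dropout: discard
        return edges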
NASA Astrophysics Data System (ADS)
Hervo, Maxime; Poltera, Yann; Haefele, Alexander
2016-07-01
Imperfections in a lidar's overlap function lead to artefacts in the background, range and overlap-corrected lidar signals. These artefacts can erroneously be interpreted as an aerosol gradient or, in extreme cases, as a cloud base leading to false cloud detection. A correct specification of the overlap function is hence crucial in the use of automatic elastic lidars (ceilometers) for the detection of the planetary boundary layer or of low cloud. In this study, an algorithm is presented to correct such artefacts. It is based on the assumption of a homogeneous boundary layer and a correct specification of the overlap function down to a minimum range, which must be situated within the boundary layer. The strength of the algorithm lies in a sophisticated quality-check scheme which allows the reliable identification of favourable atmospheric conditions. The algorithm was applied to 2 years of data from a CHM15k ceilometer from the company Lufft. Backscatter signals corrected for background, range and overlap were compared using the overlap function provided by the manufacturer and the one corrected with the presented algorithm. Differences between corrected and uncorrected signals reached up to 45 % in the first 300 m above ground. The amplitude of the correction turned out to be temperature dependent and was larger for higher temperatures. A linear model of the correction as a function of the instrument's internal temperature was derived from the experimental data. Case studies and a statistical analysis of the strongest gradient derived from corrected signals reveal that the temperature model is capable of a high-quality correction of overlap artefacts, in particular those due to diurnal variations. The presented correction method has the potential to significantly improve the detection of the boundary layer with gradient-based methods because it removes false candidates and hence simplifies the attribution of the detected gradients to the planetary boundary layer. A particularly significant benefit can be expected for the detection of shallow stable layers typical of night-time situations. The algorithm is completely automatic and does not require any on-site intervention but requires the definition of an adequate instrument-specific configuration. It is therefore suited for use in large ceilometer networks.
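A sketch of the temperature model described above, assuming a set of overlap-correction profiles derived on cases that passed the quality checks: a straight line is fitted per range bin against the instrument's internal temperature. Whether the correction is applied multiplicatively, as here, is an assumption.

    import numpy as np

    def fit_temperature_model(temps, corrections):
        """Least-squares linear model c(T) = a*T + b per range bin.
        `temps` has shape (n_cases,), `corrections` shape (n_cases, n_range_bins)."""
        coeffs = np.polyfit(np.asarray(temps, float), np.asarray(corrections, float), 1)
        return coeffs[0], coeffs[1]   # per-bin slopes and intercepts

    def apply_correction(profile, temp, a, b):
        """Correct a background/range-corrected profile at internal temperature `temp`
        (multiplicative form assumed)."""
        return profile * (a * temp + b)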
Lung boundary detection in pediatric chest x-rays
NASA Astrophysics Data System (ADS)
Candemir, Sema; Antani, Sameer; Jaeger, Stefan; Browning, Renee; Thoma, George R.
2015-03-01
Tuberculosis (TB) is a major public health problem worldwide, and highly prevalent in developing countries. According to the World Health Organization (WHO), over 95% of TB deaths occur in low- and middle-income countries that often have under-resourced health care systems. In an effort to aid population screening in such resource-challenged settings, the U.S. National Library of Medicine has developed a chest X-ray (CXR) screening system that provides a pre-decision on pulmonary abnormalities. When the system is presented with a digital CXR image from the Picture Archive and Communication Systems (PACS) or an imaging source, it automatically identifies the lung regions in the image, extracts image features, and classifies the image as normal or abnormal using trained machine-learning algorithms. The system has been trained on adult CXR images, and this article presents enhancements toward including pediatric CXR images. Our adult lung boundary detection algorithm is model-based. We note differences in lung shape across pediatric developmental stages and adulthood, and propose building new lung models suited to the pediatric stages. In this study, we quantify changes in lung shape from infancy to adulthood toward enhancing our lung segmentation algorithm. Our initial findings suggest pediatric age groupings of 0-23 months, 2-10 years, and 11-18 years, and we present justification for these groupings. We report on the quality of the boundary detection algorithm with the pediatric lung models.
NASA Astrophysics Data System (ADS)
Zhu, Zhe
2017-08-01
The free and open access to all archived Landsat images, granted in 2008, has completely changed the way Landsat data are used, and many novel change detection algorithms based on Landsat time series have been developed. We present a comprehensive review of four important aspects of change detection studies based on Landsat time series: frequencies, preprocessing, algorithms, and applications. We observed the trend that the more recent the study, the higher the frequency of the Landsat time series used. We review a series of image preprocessing steps, including atmospheric correction, cloud and cloud shadow detection, and composite/fusion/metrics techniques. We divide all change detection algorithms into six categories: thresholding, differencing, segmentation, trajectory classification, statistical boundary, and regression. Within each category, six major characteristics of the different algorithms are analyzed, such as frequency, change index, univariate/multivariate, online/offline, abrupt/gradual change, and sub-pixel/pixel/spatial. Moreover, some of the widely used change detection algorithms are discussed. Finally, we review different change detection applications by dividing them into two categories: change target and change agent detection.
Automatic estimation of heart boundaries and cardiothoracic ratio from chest x-ray images
NASA Astrophysics Data System (ADS)
Dallal, Ahmed H.; Agarwal, Chirag; Arbabshirani, Mohammad R.; Patel, Aalpen; Moore, Gregory
2017-03-01
Cardiothoracic ratio (CTR) is a widely used radiographic index for assessing heart size on chest X-rays (CXRs). Recent studies have suggested that the two-dimensional CTR might also contain clinical information about heart function. However, manual measurement of such indices is both subjective and time consuming. This study proposes a fast algorithm to automatically estimate CTR indices from CXRs. The algorithm has three main steps: 1) model-based lung segmentation, 2) estimation of heart boundaries from the lung contours, and 3) computation of cardiothoracic indices from the estimated boundaries. We extended a previously employed lung detection algorithm to automatically estimate heart boundaries without using ground-truth heart markings. We used two datasets: a publicly available dataset with 247 images and a clinical dataset with 167 studies from Geisinger Health System. Models of the lung fields are learned from both datasets, and the lung regions in a given test image are estimated by registering the learned models to the patient's CXR. The heart region is then estimated by applying the Harris operator to the segmented lung fields to detect the corner points corresponding to the heart boundaries. The algorithm calculates three indices: CTR1D, CTR2D, and the cardiothoracic area ratio (CTAR). The method was tested on 103 clinical CXRs, and average error rates of 7.9%, 25.5%, and 26.4% (for CTR1D, CTR2D, and CTAR, respectively) were achieved. The proposed method outperforms previous CTR estimation methods without using any heart templates. This method can have important clinical implications, as it provides fast and accurate estimates of cardiothoracic indices.
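For reference, a sketch of how such indices could be computed once binary lung and heart masks are available. The classical CTR1D is the maximal transverse heart diameter over the maximal thoracic diameter; the abstract does not define the authors' CTR2D/CTAR exactly, so the area ratio below is only an assumed variant, and the overall-column-extent width is an approximation of the per-level diameters.

    import numpy as np

    def ctr_1d(lung_mask, heart_mask):
        """1-D cardiothoracic ratio: maximal transverse heart width over maximal
        thoracic width, both taken as horizontal extents of the binary masks."""
        def max_width(mask):
            cols = np.where(mask.any(axis=0))[0]
            return cols.max() - cols.min() + 1
        return max_width(heart_mask) / max_width(lung_mask)

    def ctar(lung_mask, heart_mask):
        """Cardiothoracic area ratio, assumed here as heart area over the
        combined heart-plus-lung-field area."""
        return heart_mask.sum() / (heart_mask.sum() + lung_mask.sum())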
Aquino, Arturo; Gegundez-Arias, Manuel Emilio; Marin, Diego
2010-11-01
Optic disc (OD) detection is an important step in developing systems for automated diagnosis of various serious ophthalmic pathologies. This paper presents a new template-based methodology for segmenting the OD from digital retinal images. This methodology uses morphological and edge detection techniques followed by the Circular Hough Transform to obtain a circular OD boundary approximation. It requires a pixel located within the OD as initial information. For this purpose, a location methodology based on a voting-type algorithm is also proposed. The algorithms were evaluated on the 1200 images of the publicly available MESSIDOR database. The location procedure succeeded in 99% of cases, taking an average computational time of 1.67 s. with a standard deviation of 0.14 s. On the other hand, the segmentation algorithm rendered an average common area overlapping between automated segmentations and true OD regions of 86%. The average computational time was 5.69 s with a standard deviation of 0.54 s. Moreover, a discussion on advantages and disadvantages of the models more generally used for OD segmentation is also presented in this paper.
Accurate LC peak boundary detection for ¹⁶O/¹⁸O labeled LC-MS data.
Cui, Jian; Petritis, Konstantinos; Tegeler, Tony; Petritis, Brianne; Ma, Xuepo; Jin, Yufang; Gao, Shou-Jiang S J; Zhang, Jianqiu Michelle
2013-01-01
In liquid chromatography-mass spectrometry (LC-MS), parts of LC peaks are often corrupted by their co-eluting peptides, which results in increased quantification variance. In this paper, we propose to apply accurate LC peak boundary detection to remove the corrupted part of LC peaks. Accurate LC peak boundary detection is achieved by checking the consistency of intensity patterns within peptide elution time ranges. In addition, we remove peptides with erroneous mass assignment through model fitness check, which compares observed intensity patterns to theoretically constructed ones. The proposed algorithm can significantly improve the accuracy and precision of peptide ratio measurements.
Shot boundary detection and label propagation for spatio-temporal video segmentation
NASA Astrophysics Data System (ADS)
Piramanayagam, Sankaranaryanan; Saber, Eli; Cahill, Nathan D.; Messinger, David
2015-02-01
This paper proposes a two-stage algorithm for streaming video segmentation. In the first stage, shot boundaries are detected within a window of frames by comparing the dissimilarity between 2-D segmentations of each frame. In the second stage, the 2-D segments are propagated across the window of frames in both the spatial and temporal directions. The window is moved across the video to find all shot transitions and obtain spatio-temporal segments simultaneously. As opposed to techniques that operate on the entire video, the proposed approach consumes significantly less memory and enables the segmentation of lengthy videos. We tested our segmentation-based shot detection method on the TRECVID 2007 video dataset and compared it with a block-based technique. Cut detection results on the TRECVID 2007 dataset indicate that our algorithm is comparable to the best of the block-based methods. The streaming video segmentation routine also achieves promising results on a challenging video segmentation benchmark database.
Robust pupil center detection using a curvature algorithm
NASA Technical Reports Server (NTRS)
Zhu, D.; Moore, S. T.; Raphan, T.; Wall, C. C. (Principal Investigator)
1999-01-01
Determining the pupil center is fundamental for calculating eye orientation in video-based systems. Existing techniques are error prone and not robust because eyelids, eyelashes, corneal reflections or shadows in many instances occlude the pupil. We have developed a new algorithm which utilizes curvature characteristics of the pupil boundary to eliminate these artifacts. Pupil center is computed based solely on points related to the pupil boundary. For each boundary point, a curvature value is computed. Occlusion of the boundary induces characteristic peaks in the curvature function. Curvature values for normal pupil sizes were determined and a threshold was found which together with heuristics discriminated normal from abnormal curvature. Remaining boundary points were fit with an ellipse using a least squares error criterion. The center of the ellipse is an estimate of the pupil center. This technique is robust and accurately estimates pupil center with less than 40% of the pupil boundary points visible.
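A compact sketch of the two stages named above: a discrete turning-angle curvature along the ordered (closed) boundary, thresholded to drop occluded segments, followed by an algebraic least-squares conic fit whose center estimates the pupil center. The chord length k and the curvature threshold are assumptions, and the plain conic fit is a stand-in for an ellipse-specific fit.

    import numpy as np

    def discrete_curvature(pts, k=5):
        """Turning angle at each boundary point between the chords to the k-th
        previous and k-th next points; occlusions produce sharp peaks."""
        prev_c = pts - np.roll(pts, k, axis=0)
        next_c = np.roll(pts, -k, axis=0) - pts
        ang = (np.arctan2(next_c[:, 1], next_c[:, 0])
               - np.arctan2(prev_c[:, 1], prev_c[:, 0]))
        return np.abs((ang + np.pi) % (2 * np.pi) - np.pi)

    def fit_ellipse_center(pts):
        """Algebraic least-squares conic fit a*x^2+b*xy+c*y^2+d*x+e*y+f=0 via SVD;
        the center follows from the conic coefficients."""
        x, y = pts[:, 0], pts[:, 1]
        design = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
        _, _, vt = np.linalg.svd(design)
        a, b, c, d, e, _ = vt[-1]
        den = 4 * a * c - b * b
        return np.array([(b * e - 2 * c * d) / den, (b * d - 2 * a * e) / den])

    def pupil_center(pts, curv_thresh=0.5):
        keep = discrete_curvature(pts) < curv_thresh   # drop occluded segments
        return fit_ellipse_center(pts[keep])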
NASA Astrophysics Data System (ADS)
Sa, Qila; Wang, Zhihui
2018-03-01
At present, content-based video retrieval (CBVR) is the mainstream video retrieval method, using features of the video itself to perform automatic identification and retrieval. This method involves a key technology: shot segmentation. In this paper, a method for automatic video shot boundary detection using K-means clustering and improved adaptive dual-threshold comparison is proposed. First, the visual features of every frame are extracted and divided into two categories using the K-means clustering algorithm, namely frames with significant change and frames with no significant change. Then, based on the classification results, the improved adaptive dual-threshold comparison method is used to determine abrupt as well as gradual shot boundaries. Finally, an automatic video shot boundary detection system is achieved.
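A rough sketch of this pipeline under several assumptions: per-frame feature vectors are given, K-means (k = 2) separates significant from insignificant changes, and a twin-comparison-style dual threshold then labels cuts and gradual transitions. The threshold factors and run logic are illustrative, not the paper's adaptive scheme. (The kmeans2 `seed` keyword requires SciPy >= 1.7.)

    import numpy as np
    from scipy.cluster.vq import kmeans2

    def candidate_frames(features):
        """Cluster per-frame change magnitudes into 'significant' / 'insignificant'."""
        diffs = np.linalg.norm(np.diff(features, axis=0), axis=1)
        centroids, labels = kmeans2(diffs.reshape(-1, 1), 2, minit='++', seed=0)
        changed = int(np.argmax(centroids.ravel()))
        return diffs, np.where(labels == changed)[0]

    def dual_threshold(diffs, candidates, hi_factor=3.0, lo_factor=1.2):
        """Differences above T_hi are cuts; runs of candidates between T_lo and
        T_hi whose accumulated change exceeds T_hi are gradual transitions."""
        t_hi, t_lo = hi_factor * diffs.mean(), lo_factor * diffs.mean()
        cuts = [i for i in candidates if diffs[i] > t_hi]
        graduals, acc, start = [], 0.0, None
        for i in candidates:
            if t_lo < diffs[i] <= t_hi:
                if start is None:
                    start, acc = i, 0.0
                acc += diffs[i]
            elif start is not None:
                if acc > t_hi:
                    graduals.append((start, i))
                start = None
        if start is not None and acc > t_hi:      # close a run left open at the end
            graduals.append((start, int(candidates[-1])))
        return cuts, graduals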
Development of an Algorithm for Satellite Remote Sensing of Sea and Lake Ice
NASA Astrophysics Data System (ADS)
Dorofy, Peter T.
Satellite remote sensing of snow and ice has a long history. The traditional approach in many snow and ice detection algorithms has been the Normalized Difference Snow Index (NDSI). This manuscript is composed of two parts. Chapter 1, Development of a Mid-Infrared Sea and Lake Ice Index (MISI) using the GOES Imager, discusses the desirability, development, and implementation of an alternative index for an ice detection algorithm, the application of the algorithm to the detection of lake ice, and qualitative validation against other ice mapping products such as the Ice Mapping System (IMS). Chapter 2, Application of Dynamic Threshold in a Lake Ice Detection Algorithm, continues with the development of a method that accounts for the variable viewing and illumination geometry of observations throughout the day; the method is an alternative to Bidirectional Reflectance Distribution Function (BRDF) models. The performance of the algorithm is evaluated by aggregating classified pixels within geometric boundaries designated by IMS and computing sensitivity and specificity statistics.
Determination of boundary layer top on the basis of the characteristics of atmospheric particles
NASA Astrophysics Data System (ADS)
Liu, Boming; Ma, Yingying; Gong, Wei; Zhang, Ming; Yang, Jian
2018-04-01
The planetary boundary layer (PBL) is the lowest layer of the atmosphere, the layer directly influenced by the Earth's surface, and it responds to surface forcing. Determining the PBL top is significant for environmental and climate research, and it can also serve as an input parameter for further data processing with atmospheric models. Traditional detection algorithms are susceptible to errors associated with the vertical distribution of aerosol concentrations. To overcome this limitation, a maximum difference search (MDS) algorithm is proposed to calculate the top of the boundary layer based on differences in particle characteristics. The PBL top positions from the MDS algorithm under different convection states were compared with those from conventional methods. Experimental results demonstrate that the MDS method can determine the top of the boundary layer precisely, and that it can calculate the PBL top accurately under weak convection conditions where the traditional methods cannot be applied. Finally, experimental data from June 2015 to December 2015 were analysed to verify the reliability of the MDS algorithm. The correlation coefficients R2 (RMSE) between the results of the MDS algorithm and radiosonde measurements were 0.53 (115 m), 0.79 (141 m) and 0.96 (43 m) under weak, moderate and strong convection, respectively. These findings indicate that the proposed method possesses good feasibility and stability.
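A minimal sketch of a maximum-difference search over an aerosol backscatter profile: the PBL top is placed where the mean signal just below a bin exceeds the mean just above it by the largest amount (aerosol-rich PBL over a clean free troposphere). The averaging window is an assumption, and the paper's use of particle characteristics under different convection regimes is not reproduced.

    import numpy as np

    def pbl_top_mds(backscatter, altitudes, window=5):
        """Altitude of the largest below-minus-above drop in the profile."""
        diffs = np.full(len(backscatter), -np.inf)
        for i in range(window, len(backscatter) - window):
            below = backscatter[i - window:i].mean()
            above = backscatter[i:i + window].mean()
            diffs[i] = below - above
        return altitudes[int(np.argmax(diffs))]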
A Unified Mathematical Approach to Image Analysis.
1987-08-31
Describes four instances of the paradigm in detail. Directions for ongoing and future research are also indicated. Keywords: Image processing; Algorithms; Segmentation; Boundary detection; Tomography; Global image analysis.
Coronal Holes and Solar f-Mode Wave Scattering Off Linear Boundaries
NASA Astrophysics Data System (ADS)
Hess Webber, Shea A.
2016-11-01
Coronal holes (CHs) are solar atmospheric features that have reduced emission in the extreme ultraviolet (EUV) spectrum due to decreased plasma density along open magnetic field lines. CHs are the source of the fast solar wind, can influence other solar activity, and track the solar cycle. Our interest in them deals with boundary detection near the solar surface. Detecting CH boundaries is important for estimating their size and tracking their evolution through time, as well as for comparing the physical properties within and outside of the feature. In this thesis, we (1) investigate CHs using statistical properties and image processing techniques on EUV images to detect CH boundaries in the low corona and chromosphere. SOHO/EIT data is used to locate polar CH boundaries on the solar limb, which are then tracked through two solar cycles. Additionally, we develop an edge-detection algorithm that we use on SDO/AIA data of a polar hole extension with an approximately linear boundary. These locations are used later to inform part of the helioseismic investigation; (2) develop a local time-distance (TD) helioseismology technique that can be used to detect CH boundary signatures at the photospheric level. We employ a new averaging scheme that makes use of the quasi-linear topology of elongated scattering regions, and create simulated data to test the new technique and compare results of some associated assumptions. This method enhances the wave propagation signal in the direction perpendicular to the linear feature and reduces the computational time of the TD analysis. We also apply a new statistical analysis of the significance of differences between the TD results; and (3) apply the TD techniques to solar CH data from SDO/HMI. The data correspond to the AIA data used in the edge-detection algorithm on EUV images. We look for statistically significant differences between the TD results inside and outside the CH region. In investigation (1), we found that the polar CH areas did not change significantly between minima, even though the magnetic field strength weakened. The results of (2) indicate that TD helioseismology techniques can be extended to make use of feature symmetry in the domain. The linear technique used here produces results that differ between a linear scattering region and a circular scattering region, shown using the simulated data algorithm. This suggests that using usual TD methods on scattering regions that are radially asymmetric may produce results with signatures of the anisotropy. The results of (1) and (3) indicate that the TD signal within our CH is statistically significantly different compared to unrelated quiet sun results. Surprisingly, the TD results in the quiet sun near the CH boundary also show significant differences compared to the separate quiet sun.
Tracking tumor boundary in MV-EPID images without implanted markers: A feasibility study.
Zhang, Xiaoyong; Homma, Noriyasu; Ichiji, Kei; Takai, Yoshihiro; Yoshizawa, Makoto
2015-05-01
To develop a markerless tracking algorithm to track the tumor boundary in megavoltage (MV)-electronic portal imaging device (EPID) images for image-guided radiation therapy. A level set method (LSM)-based algorithm is developed to track tumor boundary in EPID image sequences. Given an EPID image sequence, an initial curve is manually specified in the first frame. Driven by a region-scalable energy fitting function, the initial curve automatically evolves toward the tumor boundary and stops on the desired boundary while the energy function reaches its minimum. For the subsequent frames, the tracking algorithm updates the initial curve by using the tracking result in the previous frame and reuses the LSM to detect the tumor boundary in the subsequent frame so that the tracking processing can be continued without user intervention. The tracking algorithm is tested on three image datasets, including a 4-D phantom EPID image sequence, four digitally deformable phantom image sequences with different noise levels, and four clinical EPID image sequences acquired in lung cancer treatment. The tracking accuracy is evaluated based on two metrics: centroid localization error (CLE) and volume overlap index (VOI) between the tracking result and the ground truth. For the 4-D phantom image sequence, the CLE is 0.23 ± 0.20 mm, and VOI is 95.6% ± 0.2%. For the digital phantom image sequences, the total CLE and VOI are 0.11 ± 0.08 mm and 96.7% ± 0.7%, respectively. In addition, for the clinical EPID image sequences, the proposed algorithm achieves 0.32 ± 0.77 mm in the CLE and 72.1% ± 5.5% in the VOI. These results demonstrate the effectiveness of the authors' proposed method both in tumor localization and boundary tracking in EPID images. In addition, compared with two existing tracking algorithms, the proposed method achieves a higher accuracy in tumor localization. In this paper, the authors presented a feasibility study of tracking tumor boundary in EPID images by using a LSM-based algorithm. Experimental results conducted on phantom and clinical EPID images demonstrated the effectiveness of the tracking algorithm for visible tumor target. Compared with previous tracking methods, the authors' algorithm has the potential to improve the tracking accuracy in radiation therapy. In addition, real-time tumor boundary information within the irradiation field will be potentially useful for further applications, such as adaptive beam delivery, dose evaluation.
NASA Astrophysics Data System (ADS)
Lisitsa, Y. V.; Yatskou, M. M.; Apanasovich, V. V.; Apanasovich, T. V.
2015-09-01
We have developed an algorithm for segmentation of cancer cell nuclei in three-channel luminescent images of microbiological specimens. The algorithm is based on using a correlation between fluorescence signals in the detection channels for object segmentation, which permits complete automation of the data analysis procedure. We have carried out a comparative analysis of the proposed method and conventional algorithms implemented in the CellProfiler and ImageJ software packages. Our algorithm has an object localization uncertainty which is 2-3 times smaller than for the conventional algorithms, with comparable segmentation accuracy.
A New Algorithm for Detecting Cloud Height using OMPS/LP Measurements
NASA Technical Reports Server (NTRS)
Chen, Zhong; DeLand, Matthew; Bhartia, Pawan K.
2016-01-01
The Ozone Mapping and Profiler Suite Limb Profiler (OMPS/LP) ozone product requires the determination of cloud height for each event to establish the lower boundary of the profile for the retrieval algorithm. We have created a revised cloud detection algorithm for LP measurements that uses the spectral dependence of the vertical gradient in radiance between two wavelengths in the visible and near-IR spectral regions. This approach provides better discrimination between clouds and aerosols than results obtained using a single wavelength. Observed LP cloud height values show good agreement with coincident Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) measurements.
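A hedged numpy sketch of the two-wavelength idea follows: clouds are nearly spectrally neutral, so a strong vertical radiance gradient seen equally at both wavelengths suggests cloud, whereas a wavelength-dependent gradient suggests aerosol. The thresholds and variable names are invented for illustration and are not the operational OMPS/LP values.

```python
import numpy as np

def cloud_top_height(alt_km, rad_vis, rad_nir, grad_thresh=0.3, ratio_tol=0.2):
    # Vertical gradients of log radiance at the two wavelengths
    g_vis = np.gradient(np.log(rad_vis), alt_km)
    g_nir = np.gradient(np.log(rad_nir), alt_km)
    ratio = g_vis / np.where(np.abs(g_nir) < 1e-12, np.nan, g_nir)
    # Cloud: strong gradient that is nearly identical at both wavelengths
    cloudy = (np.abs(g_vis) > grad_thresh) & (np.abs(ratio - 1.0) < ratio_tol)
    return float(alt_km[cloudy].max()) if np.any(cloudy) else None
```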
Autonomous navigation method for substation inspection robot based on travelling deviation
NASA Astrophysics Data System (ADS)
Yang, Guoqing; Xu, Wei; Li, Jian; Fu, Chongguang; Zhou, Hao; Zhang, Chuanyou; Shao, Guangting
2017-06-01
A new method of edge detection is proposed for the substation environment, which enables autonomous navigation of the substation inspection robot. First, the road image is obtained with an image acquisition device. Second, noise in a region of interest selected in the road image is removed with digital image processing, the road edges are extracted by the Canny operator, and the road boundaries are extracted by the Hough transform. Finally, the distances between the robot and the left and right boundaries are calculated, and the travel deviation is obtained. The robot's walking route is controlled according to the travel deviation and a preset threshold. Experimental results show that the proposed method can detect the road area in real time, and the algorithm has high accuracy and stable performance.
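The pipeline maps naturally onto standard OpenCV calls. The sketch below is a simplified single-frame version; the ROI choice, Canny thresholds, and Hough parameters are placeholders rather than the paper's tuned values.

```python
import cv2
import numpy as np

def travel_deviation(gray):
    roi = gray[gray.shape[0] // 2:, :]                        # lower half as region of interest
    roi = cv2.GaussianBlur(roi, (5, 5), 0)                    # noise removal
    edges = cv2.Canny(roi, 50, 150)                           # road edge extraction
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=10)  # boundary lines
    if lines is None:
        return None
    xs = lines[:, 0, [0, 2]].ravel()                          # endpoint x-coordinates
    left, right = xs.min(), xs.max()                          # crude boundary estimates
    return roi.shape[1] / 2.0 - (left + right) / 2.0          # signed deviation in pixels
```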
Optic cup segmentation: type-II fuzzy thresholding approach and blood vessel extraction
Almazroa, Ahmed; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan
2017-01-01
We introduce here a new technique for segmenting the optic cup using two-dimensional fundus images. Cup segmentation is the most challenging part of image processing of the optic nerve head due to the complexity of its structure. Using the blood vessels to segment the cup is important. Here, we report on blood vessel extraction using first a top-hat transform and Otsu’s segmentation function to detect the curves in the blood vessels (kinks) which indicate the cup boundary. This was followed by an interval type-II fuzzy entropy procedure. Finally, the Hough transform was applied to approximate the cup boundary. The algorithm was evaluated on 550 fundus images from a large dataset, which contained three different sets of images, where the cup was manually marked by six ophthalmologists. First, the accuracy of the algorithm was tested on the three image sets independently. The final cup detection accuracy in terms of area and centroid was calculated to be 78.2% of 441 images. Finally, we compared the algorithm performance with manual markings done by the six ophthalmologists. The agreement was determined between the ophthalmologists as well as the algorithm. The best agreement was between ophthalmologists one, two and five in 398 of 550 images, while the algorithm agreed with them in 356 images. PMID:28515636
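As a rough sketch of the vessel-based chain (omitting the kink detection and the interval type-II fuzzy entropy step), the following Python uses a morphological top-hat plus Otsu thresholding to expose the vessels and a circular Hough transform to approximate the cup boundary; the structuring-element size and radius range are assumptions.

```python
import numpy as np
from skimage.morphology import black_tophat, disk
from skimage.filters import threshold_otsu
from skimage.transform import hough_circle, hough_circle_peaks

def approximate_cup(green_channel):
    vessels = black_tophat(green_channel, disk(8))   # vessels appear dark on the green channel
    binary = vessels > threshold_otsu(vessels)       # Otsu segmentation of the vessel map
    radii = np.arange(30, 80, 5)                     # candidate cup radii in pixels (a guess)
    h = hough_circle(binary, radii)
    _, cx, cy, r = hough_circle_peaks(h, radii, total_num_peaks=1)
    return cx[0], cy[0], r[0]                        # approximate cup circle
```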
Autoregressive statistical pattern recognition algorithms for damage detection in civil structures
NASA Astrophysics Data System (ADS)
Yao, Ruigen; Pakzad, Shamim N.
2012-08-01
Statistical pattern recognition has recently emerged as a promising set of complementary methods to system identification for automatic structural damage assessment. Its essence is to use well-known concepts in statistics for boundary definition of different pattern classes, such as those for damaged and undamaged structures. In this paper, several statistical pattern recognition algorithms using autoregressive models, including statistical control charts and hypothesis testing, are reviewed as potentially competitive damage detection techniques. To enhance the performance of statistical methods, new feature extraction techniques using model spectra and residual autocorrelation, together with resampling-based threshold construction methods, are proposed. Subsequently, simulated acceleration data from a multi degree-of-freedom system is generated to test and compare the efficiency of the existing and proposed algorithms. Data from laboratory experiments conducted on a truss and a large-scale bridge slab model are then used to further validate the damage detection methods and demonstrate the superior performance of proposed algorithms.
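A minimal numpy illustration of the AR-residual control-chart idea: fit an AR(p) model to baseline (undamaged) response data, then flag test residuals that exceed Shewhart-style limits. The model order and limit width here are illustrative, not the paper's settings.

```python
import numpy as np

def fit_ar(x, p):
    # Least-squares AR(p): x[t] ~ a1*x[t-1] + ... + ap*x[t-p]
    X = np.column_stack([x[p - k: len(x) - k] for k in range(1, p + 1)])
    return np.linalg.lstsq(X, x[p:], rcond=None)[0]

def ar_residuals(x, coefs):
    p = len(coefs)
    X = np.column_stack([x[p - k: len(x) - k] for k in range(1, p + 1)])
    return x[p:] - X @ coefs

def control_chart_alarms(baseline, test, p=4, n_sigma=3.0):
    coefs = fit_ar(baseline, p)
    ref = ar_residuals(baseline, coefs)
    mu, sigma = ref.mean(), ref.std()      # limits from the undamaged condition
    return np.abs(ar_residuals(test, coefs) - mu) > n_sigma * sigma
```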
Segmentation of hand radiographs using fast marching methods
NASA Astrophysics Data System (ADS)
Chen, Hong; Novak, Carol L.
2006-03-01
Rheumatoid Arthritis is one of the most common chronic diseases. Joint space width in hand radiographs is evaluated to assess joint damage in order to monitor progression of disease and response to treatment. Manual measurement of joint space width is time-consuming and highly prone to inter- and intra-observer variation. We propose a method for automatic extraction of finger bone boundaries using fast marching methods for quantitative evaluation of joint space width. The proposed algorithm includes two stages: location of hand joints followed by extraction of bone boundaries. By setting the propagation speed of the wave front as a function of image intensity values, the fast marching algorithm extracts the skeleton of the hands, in which each branch corresponds to a finger. The finger joint locations are then determined by using the image gradients along the skeletal branches. In order to extract bone boundaries at joints, the gradient magnitudes are utilized for setting the propagation speed, and the gradient phases are used for discriminating the boundaries of adjacent bones. The bone boundaries are detected by searching for the fastest paths from one side of each joint to the other side. Finally, joint space width is computed based on the extracted upper and lower bone boundaries. The algorithm was evaluated on a test set of 8 two-hand radiographs, including images from healthy patients and from patients suffering from arthritis, gout and psoriasis. Using our method, 97% of 208 joints were accurately located and 89% of 416 bone boundaries were correctly extracted.
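The propagation step can be sketched with the third-party scikit-fmm package: arrival times from a seed grow fastest through bright (bone-like) pixels, so first-arrival paths trace the hand skeleton. Joint localization and the boundary search are omitted, and the intensity-to-speed mapping is our assumption.

```python
import numpy as np
import skfmm  # third-party scikit-fmm package, assumed installed

def arrival_times(image, seed_rc):
    phi = np.ones_like(image, dtype=float)
    phi[seed_rc] = -1.0                      # the front starts around the seed pixel
    # Bright (bone-like) pixels propagate the front fastest; mapping is a guess
    speed = (image - image.min()) / (np.ptp(image) + 1e-9) + 1e-3
    return skfmm.travel_time(phi, speed)     # fast marching solution of |grad T|*F = 1
```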
First order augmentation to tensor voting for boundary inference and multiscale analysis in 3D.
Tong, Wai-Shun; Tang, Chi-Keung; Mordohai, Philippos; Medioni, Gérard
2004-05-01
Most computer vision applications require the reliable detection of boundaries. In the presence of outliers, missing data, orientation discontinuities, and occlusion, this problem is particularly challenging. We propose to address it by complementing the tensor voting framework, which was limited to second order properties, with first order representation and voting. First order voting fields and a mechanism to vote for 3D surface and volume boundaries and curve endpoints in 3D are defined. Boundary inference is also useful for a second difficult problem in grouping, namely, automatic scale selection. We propose an algorithm that automatically infers the smallest scale that can preserve the finest details. Our algorithm then proceeds with progressively larger scales to ensure continuity where it has not been achieved. Therefore, the proposed approach does not oversmooth features or delay the handling of boundaries and discontinuities until model misfit occurs. The interaction of smooth features, boundaries, and outliers is accommodated by the unified representation, making possible the perceptual organization of data in curves, surfaces, volumes, and their boundaries simultaneously. We present results on a variety of data sets to show the efficacy of the improved formalism.
Toward accurate and fast iris segmentation for iris biometrics.
He, Zhaofeng; Tan, Tieniu; Sun, Zhenan; Qiu, Xianchao
2009-09-01
Iris segmentation is an essential module in iris recognition because it defines the effective image region used for subsequent processing such as feature extraction. Traditional iris segmentation methods often involve an exhaustive search of a large parameter space, which is time consuming and sensitive to noise. To address these problems, this paper presents a novel algorithm for accurate and fast iris segmentation. After efficient reflection removal, an Adaboost-cascade iris detector is first built to extract a rough position of the iris center. Edge points of iris boundaries are then detected, and an elastic model named pulling and pushing is established. Under this model, the center and radius of the circular iris boundaries are iteratively refined in a way driven by the restoring forces of Hooke's law. Furthermore, a smoothing spline-based edge fitting scheme is presented to deal with noncircular iris boundaries. After that, eyelids are localized via edge detection followed by curve fitting. The novelty here is the adoption of a rank filter for noise elimination and a histogram filter for tackling the shape irregularity of eyelids. Finally, eyelashes and shadows are detected via a learned prediction model. This model provides an adaptive threshold for eyelash and shadow detection by analyzing the intensity distributions of different iris regions. Experimental results on three challenging iris image databases demonstrate that the proposed algorithm outperforms state-of-the-art methods in both accuracy and speed.
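The pulling-and-pushing refinement can be caricatured in a few lines of numpy: each edge point acts as a spring attached to a candidate circle, and Hooke-like restoring forces nudge the center and radius. The spline-based fitting for noncircular boundaries is omitted, and the gain and iteration count are arbitrary.

```python
import numpy as np

def pull_and_push(edge_pts, cx, cy, r, gain=0.5, iters=50):
    """edge_pts: (N, 2) array of detected iris edge points (x, y)."""
    for _ in range(iters):
        dx, dy = edge_pts[:, 0] - cx, edge_pts[:, 1] - cy
        dist = np.hypot(dx, dy) + 1e-9
        stretch = dist - r                   # signed extension of each "spring"
        # Hooke-like restoring forces move the centre and adjust the radius
        cx += gain * np.mean(stretch * dx / dist)
        cy += gain * np.mean(stretch * dy / dist)
        r += gain * np.mean(stretch)
    return cx, cy, r
```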
ADOPT: A tool for automatic detection of tectonic plates at the surface of convection models
NASA Astrophysics Data System (ADS)
Mallard, C.; Jacquet, B.; Coltice, N.
2017-08-01
Mantle convection models with plate-like behavior produce surface structures comparable to Earth's plate boundaries. However, analyzing those structures is a difficult task, since convection models produce, as on Earth, diffuse deformation and elusive plate boundaries. We therefore present and share a quantitative tool to identify plate boundaries and produce plate polygon layouts from the results of numerical convection models: Automatic Detection Of Plate Tectonics (ADOPT). This digital tool operates within the free open-source visualization software Paraview. It is based on image segmentation techniques for object detection, the fundamental algorithm being the watershed transform. We transform the output of convection models into a topographic map, the crest lines being the regions of deformation (plate boundaries) and the catchment basins being the plate interiors. We propose two generic protocols (the field and the distance methods) that we test against an independent visual detection of plate polygons. We show that ADOPT is effective in identifying the smaller plates and in closing plate polygons in areas where boundaries are diffuse or elusive. ADOPT allows the export of plate polygons in the standard OGR-GMT format for visualization, modification, and analysis in generic software such as GMT or GPlates.
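Outside Paraview, the core watershed step can be sketched in Python with scikit-image: treat the surface deformation field as topography, seed markers at quiet plate interiors, and let catchment basins become plate polygons. The marker spacing is an assumption, not ADOPT's actual protocol.

```python
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def plate_polygons(deformation, min_distance=20):
    # Markers at local minima of deformation, i.e. quiet plate interiors
    minima = peak_local_max(-deformation, min_distance=min_distance)
    markers = np.zeros(deformation.shape, dtype=int)
    markers[tuple(minima.T)] = np.arange(1, len(minima) + 1)
    # Flooding the "topography" makes deformation crests the plate boundaries
    return watershed(deformation, markers)
```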
Estimation of anomaly location and size using electrical impedance tomography.
Kwon, Ohin; Yoon, Jeong Rock; Seo, Jin Keun; Woo, Eung Je; Cho, Young Gu
2003-01-01
We developed a new algorithm that estimates the locations and sizes of anomalies in an electrically conducting medium based on the electrical impedance tomography (EIT) technique. When only boundary current and voltage measurements are available, it is not practically feasible to reconstruct accurate high-resolution cross-sectional conductivity or resistivity images of a subject. In this paper, we therefore focus on estimating the locations and sizes of anomalies whose conductivity values differ from those of the background tissues. We demonstrated the performance of the algorithm with experimental results using a 32-channel EIT system and a saline phantom. With about 1.73% measurement error in the boundary current-voltage data, we found that the minimal size (area) of a detectable anomaly is about 0.72% of the size (area) of the phantom. Potential applications include the monitoring of impedance-related physiological events and bubble detection in two-phase flow. Since this new algorithm requires neither a forward solver nor a time-consuming minimization process, it is fast enough for various real-time applications in medicine and nondestructive testing.
de Castro, Alberto; Sawides, Lucie; Qi, Xiaofeng; Burns, Stephen A
2017-08-20
Retinal imaging with an adaptive optics (AO) system usually requires that the eye be centered and stable relative to the exit pupil of the system. Aberrations are then typically corrected inside a fixed circular pupil. This approach can be restrictive when imaging some subjects, since the pupil may not be round and maintaining a stable head position can be difficult. In this paper, we present an automatic algorithm that relaxes these constraints. An image quality metric is computed for each spot of the Shack-Hartmann image to detect the pupil and its boundary, and the control algorithm is applied only to regions within the subject's pupil. Images on a model eye as well as for five subjects were obtained to show that a system exit pupil larger than the subject's eye pupil could be used for AO retinal imaging without a reduction in image quality. This algorithm automates the task of selecting pupil size. It also may relax constraints on centering the subject's pupil and on the shape of the pupil.
Remote Sensing Image Change Detection Based on NSCT-HMT Model and Its Application.
Chen, Pengyun; Zhang, Yichen; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola
2017-06-06
Traditional image change detection based on a non-subsampled contourlet transform always ignores the neighborhood information's relationship to the non-subsampled contourlet coefficients, and the detection results are susceptible to noise interference. To address these disadvantages, we propose a denoising method based on the non-subsampled contourlet transform domain that uses the Hidden Markov Tree model (NSCT-HMT) for change detection of remote sensing images. First, the ENVI software is used to calibrate the original remote sensing images. After that, the mean-ratio operation is adopted to obtain the difference image that will be denoised by the NSCT-HMT model. Then, using the Fuzzy Local Information C-means (FLICM) algorithm, the difference image is divided into the change area and unchanged area. The proposed algorithm is applied to a real remote sensing data set. The application results show that the proposed algorithm can effectively suppress clutter noise, and retain more detailed information from the original images. The proposed algorithm has higher detection accuracy than the Markov Random Field-Fuzzy C-means (MRF-FCM), the non-subsampled contourlet transform-Fuzzy C-means clustering (NSCT-FCM), the pointwise approach and graph theory (PA-GT), and the Principal Component Analysis-Nonlocal Means (PCA-NLM) denoising algorithm. Finally, the five algorithms are used to detect the southern boundary of the Gurbantunggut Desert in Xinjiang Uygur Autonomous Region of China, and the results show that the proposed algorithm has the best effect on real remote sensing image change detection.
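A simplified Python sketch of the detection chain: form the mean-ratio difference image and split it into changed/unchanged classes. KMeans stands in here for FLICM, and the NSCT-HMT denoising stage is omitted entirely.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.cluster import KMeans

def change_map(img1, img2, win=3):
    m1 = uniform_filter(img1.astype(float), win)
    m2 = uniform_filter(img2.astype(float), win)
    diff = 1.0 - np.minimum(m1, m2) / (np.maximum(m1, m2) + 1e-9)  # mean-ratio image
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(diff.reshape(-1, 1))
    # Make label 1 the "changed" (higher mean-ratio) class
    flat = diff.ravel()
    if flat[labels == 1].mean() < flat[labels == 0].mean():
        labels = 1 - labels
    return labels.reshape(diff.shape)
```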
The Edge Detectors Suitable for Retinal OCT Image Segmentation
Yang, Jing; Gao, Qian; Zhou, Sheng
2017-01-01
Retinal layer thickness measurement offers important information for reliable diagnosis of retinal diseases and for the evaluation of disease development and medical treatment responses. This task critically depends on the accurate edge detection of the retinal layers in OCT images. Here, we intended to search for the most suitable edge detectors for the retinal OCT image segmentation task. The three most promising edge detection algorithms were identified in the related literature: Canny edge detector, the two-pass method, and the EdgeFlow technique. The quantitative evaluation results show that the two-pass method outperforms consistently the Canny detector and the EdgeFlow technique in delineating the retinal layer boundaries in the OCT images. In addition, the mean localization deviation metrics show that the two-pass method caused the smallest edge shifting problem. These findings suggest that the two-pass method is the best among the three algorithms for detecting retinal layer boundaries. The overall better performance of Canny and two-pass methods over EdgeFlow technique implies that the OCT images contain more intensity gradient information than texture changes along the retinal layer boundaries. The results will guide our future efforts in the quantitative analysis of retinal OCT images for the effective use of OCT technologies in the field of ophthalmology. PMID:29065594
Assessment of Mixed-Layer Height Estimation from Single-wavelength Ceilometer Profiles.
Knepp, Travis N; Szykman, James J; Long, Russell; Duvall, Rachelle M; Krug, Jonathan; Beaver, Melinda; Cavender, Kevin; Kronmiller, Keith; Wheeler, Michael; Delgado, Ruben; Hoff, Raymond; Berkoff, Timothy; Olson, Erik; Clark, Richard; Wolfe, Daniel; Van Gilst, David; Neil, Doreen
2017-01-01
Differing boundary/mixed-layer height measurement methods were assessed in moderately-polluted and clean environments, with a focus on the Vaisala CL51 ceilometer. This intercomparison was performed as part of ongoing measurements at the Chemistry And Physics of the Atmospheric Boundary Layer Experiment (CAPABLE) site in Hampton, Virginia and during the 2014 Deriving Information on Surface Conditions from Column and Vertically Resolved Observations Relevant to Air Quality (DISCOVER-AQ) field campaign that took place in and around Denver, Colorado. We analyzed CL51 data that were collected via two different methods (BLView software, which applied correction factors, and simple terminal emulation logging) to determine the impact of data collection methodology. Further, we evaluated the STRucture of the ATmosphere (STRAT) algorithm as an open-source alternative to BLView (note that the current work presents an evaluation of the BLView and STRAT algorithms and does not intend to act as a validation of either). Filtering criteria were defined according to the change in mixed-layer height (MLH) distributions for each instrument and algorithm and were applied throughout the analysis to remove high-frequency fluctuations from the MLH retrievals. Of primary interest was determining how the different data-collection methodologies and algorithms compare to each other and to radiosonde-derived boundary-layer heights when deployed as part of a larger instrument network. We determined that data-collection methodology is not as important as the processing algorithm and that much of the algorithm differences might be driven by impacts of local meteorology and precipitation events that pose algorithm difficulties. The results of this study show that a common processing algorithm is necessary for LIght Detection And Ranging (LIDAR)-based MLH intercomparisons and ceilometer-network operation, and that sonde-derived boundary-layer heights are higher (10-15% at mid-day) than LIDAR-derived mixed-layer heights. We show that averaging the retrieved MLH to 1-hour resolution (an appropriate time scale for a priori data model initialization) significantly improved correlation between differing instruments and differing algorithms.
A color gamut description algorithm for liquid crystal displays in CIELAB space.
Sun, Bangyong; Liu, Han; Li, Wenli; Zhou, Shisheng
2014-01-01
Because the accuracy of gamut boundary description is important for the gamut mapping process, a gamut boundary calculation method for LCD monitors is proposed in this paper. In most previous gamut boundary calculation algorithms, the gamut boundary is calculated directly in CIELAB space, and some inside-gamut points are mistaken for boundary points. In the newly proposed algorithm, by contrast, the points on the surface of the RGB cube are selected as the boundary points and then converted to and described in CIELAB color space. Thus, in our algorithm, the true gamut boundary points are found and a more accurate gamut boundary is described. In experiments, a Toshiba LCD monitor's 3D CIELAB gamut, which has a regular-shaped outer surface, is first described for evaluation, and then two 2D gamut boundaries (the CIE-a*b* boundary and the CIE-C*L* boundary), which are often used in the gamut mapping process, are calculated. When our algorithm is compared with several well-known gamut calculation algorithms, the gamut volumes are very close, which indicates that our algorithm's accuracy is precise and acceptable.
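The boundary-point selection is easy to sketch: sample only points on the RGB cube surface (any coordinate equal to 0 or 1) and convert them to CIELAB. The sketch below uses scikit-image's rgb2lab, i.e., a generic sRGB/D65 transform rather than a measured monitor characterization.

```python
import numpy as np
from skimage.color import rgb2lab

def gamut_boundary_lab(n=33):
    g = np.linspace(0.0, 1.0, n)
    rgb = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1).reshape(-1, 3)
    on_surface = np.any((rgb == 0.0) | (rgb == 1.0), axis=1)  # RGB cube faces only
    lab = rgb2lab(rgb[on_surface].reshape(1, -1, 3))          # sRGB/D65 assumption
    return lab[0]                                             # (m, 3): L*, a*, b*
```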
Baca, A
1996-04-01
A method has been developed for the precise determination of anthropometric dimensions from the video images of four different body configurations. High precision is achieved by incorporating techniques for finding the location of object boundaries with sub-pixel accuracy, the implementation of calibration algorithms, and by taking into account the varying distances of the body segments from the recording camera. The system allows automatic segment boundary identification from the video image, if the boundaries are marked on the subject by black ribbons. In connection with the mathematical finite-mass-element segment model of Hatze, body segment parameters (volumes, masses, the three principal moments of inertia, the three local coordinates of the segmental mass centers etc.) can be computed by using the anthropometric data determined videometrically as input data. Compared to other, recently published video-based systems for the estimation of the inertial properties of body segments, the present algorithms reduce errors originating from optical distortions, inaccurate edge-detection procedures, and user-specified upper and lower segment boundaries or threshold levels for the edge-detection. The video-based estimation of human body segment parameters is especially useful in situations where ease of application and rapid availability of comparatively precise parameter values are of importance.
Leveraging disjoint communities for detecting overlapping community structure
NASA Astrophysics Data System (ADS)
Chakraborty, Tanmoy
2015-05-01
Network communities represent mesoscopic structure for understanding the organization of real-world networks, where nodes often belong to multiple communities and form overlapping community structure in the network. Due to non-triviality in finding the exact boundary of such overlapping communities, this problem has become challenging, and therefore huge effort has been devoted to detect overlapping communities from the network. In this paper, we present PVOC (Permanence based Vertex-replication algorithm for Overlapping Community detection), a two-stage framework to detect overlapping community structure. We build on a novel observation that non-overlapping community structure detected by a standard disjoint community detection algorithm from a network has high resemblance with its actual overlapping community structure, except the overlapping part. Based on this observation, we posit that there is perhaps no need of building yet another overlapping community finding algorithm; but one can efficiently manipulate the output of any existing disjoint community finding algorithm to obtain the required overlapping structure. We propose a new post-processing technique that by combining with any existing disjoint community detection algorithm, can suitably process each vertex using a new vertex-based metric, called permanence, and thereby finds out overlapping candidates with their community memberships. Experimental results on both synthetic and large real-world networks show that PVOC significantly outperforms six state-of-the-art overlapping community detection algorithms in terms of high similarity of the output with the ground-truth structure. Thus our framework not only finds meaningful overlapping communities from the network, but also allows us to put an end to the constant effort of building yet another overlapping community detection algorithm.
NASA Astrophysics Data System (ADS)
Chen, Xueli; Yang, Defu; Qu, Xiaochao; Hu, Hao; Liang, Jimin; Gao, Xinbo; Tian, Jie
2012-06-01
Bioluminescence tomography (BLT) has been successfully applied to the detection and therapeutic evaluation of solid cancers. However, the existing BLT reconstruction algorithms are not accurate enough for cavity cancer detection because of neglecting the void problem. Motivated by the ability of the hybrid radiosity-diffusion model (HRDM) in describing the light propagation in cavity organs, an HRDM-based BLT reconstruction algorithm was provided for the specific problem of cavity cancer detection. HRDM has been applied to optical tomography but is limited to simple and regular geometries because of the complexity in coupling the boundary between the scattering and void region. In the provided algorithm, HRDM was first applied to three-dimensional complicated and irregular geometries and then employed as the forward light transport model to describe the bioluminescent light propagation in tissues. Combining HRDM with the sparse reconstruction strategy, the cavity cancer cells labeled with bioluminescent probes can be more accurately reconstructed. Compared with the diffusion equation based reconstruction algorithm, the essentiality and superiority of the HRDM-based algorithm were demonstrated with simulation, phantom and animal studies. An in vivo gastric cancer-bearing nude mouse experiment was conducted, whose results revealed the ability and feasibility of the HRDM-based algorithm in the biomedical application of gastric cancer detection.
NASA Technical Reports Server (NTRS)
Madyastha, Raghavendra K.; Aazhang, Behnaam; Henson, Troy F.; Huxhold, Wendy L.
1992-01-01
This paper addresses the issue of applying a globally convergent optimization algorithm to the training of multilayer perceptrons, a class of Artificial Neural Networks. The multilayer perceptrons are trained on two highly nonlinear problems: (1) signal detection in a multi-user communication network, and (2) solving the inverse kinematics for a robotic manipulator. The research is motivated by the fact that a multilayer perceptron is theoretically capable of approximating any nonlinear function to within a specified accuracy. The algorithm employed in this study combines the merits of two well-known optimization algorithms, the Conjugate Gradients and Trust Regions algorithms. Its performance is compared with that of the widely used Backpropagation Algorithm, which is essentially a gradient-descent algorithm and hence slow to converge; the two are compared in terms of convergence rate. Furthermore, in the case of the signal detection problem, performance is also benchmarked by the decision boundaries drawn as well as the probability of error obtained in either case.
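A toy Python illustration of the contrast being drawn: train a one-hidden-layer perceptron with scipy's conjugate-gradient optimizer (numerical gradients) instead of plain gradient descent. The XOR-like task, network size, and data are arbitrary stand-ins for the paper's detection problem, and the hybrid CG/trust-region method itself is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)       # nonlinear two-class problem

H = 8                                           # hidden units

def unpack(w):
    W1 = w[:2 * H].reshape(2, H)
    b1 = w[2 * H:3 * H]
    W2 = w[3 * H:4 * H]
    b2 = w[4 * H]
    return W1, b1, W2, b2

def loss(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # output probability
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

w0 = rng.normal(scale=0.5, size=4 * H + 1)
res = minimize(loss, w0, method="CG")           # conjugate-gradient training
print("final cross-entropy:", res.fun)
```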
2006-05-15
...alarm performance in a cost-effective manner is the use of track-before-detect strategies, in which multiple sensor detections must occur within the...corresponding to the traditional sensor coverage problem. Also, in the track-before-detect context, reference is made to the field-level functions of...detection and false alarm as successful search and false search, respectively, because the track-before-detect process serves as a searching function
Aydin, Ilhan; Karakose, Mehmet; Akin, Erhan
2014-03-01
Although reconstructed phase space is one of the most powerful methods for analyzing a time series, it can fail in the fault diagnosis of an induction motor when appropriate pre-processing is not performed. Therefore, a new boundary-analysis-based feature extraction method in phase space is proposed for the diagnosis of induction motor faults. The proposed approach requires the measurement of one phase current signal to construct the phase space representation. Each phase space is converted into an image, and the boundary of each image is extracted by a boundary detection algorithm. A fuzzy decision tree has been designed to detect broken rotor bars and broken connector faults. The results indicate that the proposed approach has a higher recognition rate than other methods on the same dataset.
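A toy numpy sketch of the phase-space front end: delay-embed one phase current, rasterize the trajectory into an image, and take the boundary of the occupied region with binary morphology. Feature extraction and the fuzzy decision tree are omitted; the delay and bin count are illustrative.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def phase_space_boundary(current, tau=10, bins=64):
    x, y = current[:-tau], current[tau:]         # 2D delay embedding of one phase current
    img, _, _ = np.histogram2d(x, y, bins=bins)  # rasterize the trajectory
    occupied = img > 0
    return occupied & ~binary_erosion(occupied)  # boundary pixels of the orbit region
```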
Segmentation of ribs in digital chest radiographs
NASA Astrophysics Data System (ADS)
Cong, Lin; Guo, Wei; Li, Qiang
2016-03-01
Ribs and clavicles in posterior-anterior (PA) digital chest radiographs often overlap with lung abnormalities such as nodules and can cause these abnormalities to be missed; it is therefore desirable to remove or suppress the ribs in chest radiographs. The purpose of this study was to develop a fully automated algorithm to segment ribs within the lung area in digital radiography (DR) for removal of the ribs. The rib segmentation algorithm consists of three steps. First, a radiograph was pre-processed for contrast adjustment and noise removal; second, a generalized Hough transform was employed to localize the lower boundary of the ribs; third, a novel bilateral dynamic programming algorithm was used to accurately segment the upper and lower boundaries of the ribs simultaneously. The width of the ribs and the smoothness of the rib boundaries were incorporated in the cost function of the bilateral dynamic programming to obtain consistent results for the upper and lower boundaries. Our database consisted of 93 DR images, including, respectively, 23 and 70 images acquired with DR systems from Shanghai United-Imaging Healthcare Co. and GE Healthcare Co. The rib localization algorithm achieved a sensitivity of 98.2% with 0.1 false positives per image. The accuracy of the detected ribs was further evaluated subjectively on 3 levels: "1", good; "2", acceptable; "3", poor. The percentages of good, acceptable, and poor segmentation results were 91.1%, 7.2%, and 1.7%, respectively. Our algorithm can obtain good segmentation results for ribs in chest radiography and would be useful for rib reduction in our future study.
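A single-boundary dynamic-programming sketch in numpy conveys the flavor: trace one smooth boundary across columns by minimizing a cost plus a smoothness penalty. The bilateral version couples the upper and lower boundaries through a rib-width term, which is omitted here.

```python
import numpy as np

def dp_boundary(cost, max_jump=2, smooth=0.5):
    """cost: (rows, cols) array, low where a boundary is likely; returns a row per column."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = acc[lo:hi, c - 1] + smooth * np.abs(np.arange(lo, hi) - r)
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    # Backtrack the cheapest smooth path from the last column
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 1, 0, -1):
        path.append(back[path[-1], c])
    return np.array(path[::-1])
```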
A Family of Well-Clear Boundary Models for the Integration of UAS in the NAS
NASA Technical Reports Server (NTRS)
Munoz, Cesar A.; Narkawicz, Anthony; Chamberlain, James; Consiglio, Maria; Upchurch, Jason
2014-01-01
The FAA-sponsored Sense and Avoid Workshop for Unmanned Aircraft Systems (UAS) defines the concept of sense and avoid for remote pilots as "the capability of a UAS to remain well clear from and avoid collisions with other airborne traffic." Hence, a rigorous definition of well clear is fundamental to any separation assurance concept for the integration of UAS into civil airspace. This paper presents a family of well-clear boundary models based on the TCAS II Resolution Advisory logic. For these models, algorithms that predict well-clear violations along aircraft current trajectories are provided. These algorithms are analogous to conflict detection algorithms but instead of predicting loss of separation, they predict whether well-clear violations will occur during a given lookahead time interval. Analytical techniques are used to study the properties and relationships satisfied by the models.
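In the same spirit (but not with the TCAS II RA thresholds the paper formalizes), a straight-line well-clear predictor can be sketched as follows; the distance thresholds, time step, and lookahead interval are illustrative.

```python
import numpy as np

def predicts_violation(p_own, v_own, p_int, v_int,
                       lookahead_s=300.0, dt=1.0,
                       horiz_m=2 * 1852.0, vert_m=450 * 0.3048):
    """Positions as numpy (x, y, z) in metres, velocities in m/s; straight-line prediction."""
    for t in np.arange(0.0, lookahead_s + dt, dt):
        rel = (p_int + v_int * t) - (p_own + v_own * t)   # predicted relative position
        if np.hypot(rel[0], rel[1]) < horiz_m and abs(rel[2]) < vert_m:
            return True                                   # well clear violated at time t
    return False
```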
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Satyabrata; Rao, Nageswara S; Wu, Qishi
There have been increasingly large deployments of radiation detection networks that require computationally fast algorithms to produce prompt results over ad-hoc sub-networks of mobile devices, such as smart-phones. These algorithms are in sharp contrast to complex network algorithms that necessitate all measurements to be sent to powerful central servers. In this work, at individual sensors, we employ Wald-statistic-based detection algorithms, which are computationally very fast and are implemented as one of three Z-tests and four chi-square tests. At the fusion center, we apply K-out-of-N fusion to combine the sensors' hard decisions. We characterize the performance of the detection methods by deriving analytical expressions for the distributions of the underlying test statistics, and by analyzing the fusion performance in terms of K, N, and the false-alarm rates of individual detectors. We experimentally validate our methods using measurements from indoor and outdoor characterization tests of the Intelligence Radiation Sensors Systems (IRSS) program. In particular, utilizing the outdoor measurements, we construct two important real-life scenarios, boundary surveillance and portal monitoring, and present the results of our algorithms.
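A minimal Python sketch of the decision chain, with invented rates and counts: each sensor runs a Z-test against a known background, and the fusion center alarms when at least K of the N sensors do.

```python
import numpy as np
from scipy.stats import norm

def sensor_decisions(counts, bkg_mean, bkg_std, alpha=0.01):
    z = (counts - bkg_mean) / bkg_std          # one Z statistic per sensor
    return z > norm.ppf(1.0 - alpha)           # per-sensor false-alarm rate alpha

def k_out_of_n(decisions, k):
    return int(np.sum(decisions)) >= k         # fusion of hard decisions

counts = np.array([412, 397, 463, 388, 441])   # hypothetical sensor counts
alarms = sensor_decisions(counts, bkg_mean=400.0, bkg_std=20.0)
print("network alarm:", k_out_of_n(alarms, k=2))
```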
Wang, Shuihua; Chen, Mengmeng; Li, Yang; Shao, Ying; Zhang, Yudong; Du, Sidan; Wu, Jane
2016-01-01
Dendritic spines are described as neuronal protrusions. The morphology of dendritic spines and dendrites has a strong relationship to their function and plays an important role in understanding brain function. Quantitative analysis of dendrites and dendritic spines is essential to an understanding of the formation and function of the nervous system. However, highly efficient tools for such quantitative analysis are currently undeveloped. In this paper we propose a novel three-step cascaded algorithm, RTSVM, which is composed of ridge detection as the curvature structure identifier for backbone extraction, boundary location based on differences in density, Hu moments as features, and Twin Support Vector Machine (TSVM) classifiers for spine classification. Our data demonstrate that this newly developed algorithm performs better than other available techniques in terms of detection accuracy and false-alarm rate. This algorithm will be used effectively in neuroscience research.
Superpixel guided active contour segmentation of retinal layers in OCT volumes
NASA Astrophysics Data System (ADS)
Bai, Fangliang; Gibson, Stuart J.; Marques, Manuel J.; Podoleanu, Adrian
2018-03-01
Retinal OCT image segmentation is a precursor to subsequent medical diagnosis by a clinician or machine learning algorithm. In the last decade, many algorithms have been proposed to detect retinal layer boundaries and simplify the image representation. Inspired by the recent success of superpixel methods for pre-processing natural images, we present a novel framework for segmentation of retinal layers in OCT volume data. In our framework, the region of interest (e.g. the fovea) is located using an adaptive-curve method. The cell layer boundaries are then robustly detected firstly using 1D superpixels, applied to A-scans, and then fitting active contours in B-scan images. Thereafter the 3D cell layer surfaces are efficiently segmented from the volume data. The framework was tested on healthy eye data and we show that it is capable of segmenting up to 12 layers. The experimental results imply the effectiveness of proposed method and indicate its robustness to low image resolution and intrinsic speckle noise.
iSentenizer-μ: multilingual sentence boundary detection model.
Wong, Derek F; Chao, Lidia S; Zeng, Xiaodong
2014-01-01
A sentence boundary detection (SBD) system is normally quite sensitive to the genres of data on which it is trained, where genre shifts include changes of text topic and new language domains. Although new detection models can be retrained for different languages or new text genres, the previous model has to be thrown away and the creation process restarted from scratch. In this paper, we present a multilingual sentence boundary detection system (iSentenizer-μ) for the Danish, German, English, Spanish, Dutch, French, Italian, Portuguese, Greek, Finnish, and Swedish languages. The proposed system is able to detect the sentence boundaries of a mixture of different text genres and languages with high accuracy. We employ the i(+)Learning algorithm, an incremental tree learning architecture, for constructing the system. iSentenizer-μ, under the incremental learning framework, is adaptable to texts of different topics and Roman-alphabet languages by merging new data into the existing model, learning the new knowledge incrementally by revision instead of retraining. The system has been extensively evaluated on different languages and text genres and has been compared against two state-of-the-art SBD systems, Punkt and MaxEnt. The experimental results show that the proposed system outperforms the other systems on all datasets.
Salient object detection based on discriminative boundary and multiple cues integration
NASA Astrophysics Data System (ADS)
Jiang, Qingzhu; Wu, Zemin; Tian, Chang; Liu, Tao; Zeng, Mingyong; Hu, Lei
2016-01-01
In recent years, many saliency models have achieved good performance by taking the image boundary as the background prior. However, if all boundaries of an image are equally and artificially selected as background, misjudgment may happen when the object touches the boundary. We propose an algorithm called weighted contrast optimization based on discriminative boundary (wCODB). First, a background estimation model is reliably constructed by discriminating each boundary via the Hausdorff distance. Second, the background-only weighted contrast is improved by fore-background weighted contrast, which is optimized through a weight-adjustable optimization framework. Then, to objectively estimate the quality of a saliency map, a simple but effective metric called spatial distribution of the saliency map and mean saliency in covered window ratio (MSR) is designed. Finally, in order to further promote the detection result using MSR as the weight, we propose a saliency fusion framework to integrate three other cues (uniqueness, distribution, and coherence) from three representative methods into our wCODB model. Extensive experiments on six public datasets demonstrate that our wCODB performs favorably against most boundary-based methods, and the integrated result outperforms all state-of-the-art methods.
Automated quantitative 3D analysis of aorta size, morphology, and mural calcification distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurugol, Sila, E-mail: sila.kurugol@childrens.harvard.edu; Come, Carolyn E.; Diaz, Alejandro A.
Purpose: The purpose of this work is to develop a fully automated pipeline to compute aorta morphology and calcification measures in large cohorts of CT scans that can be used to investigate the potential of these measures as imaging biomarkers of cardiovascular disease. Methods: The first step of the automated pipeline is aorta segmentation. The algorithm the authors propose first detects an initial aorta boundary by exploiting cross-sectional circularity of aorta in axial slices and aortic arch in reformatted oblique slices. This boundary is then refined by a 3D level-set segmentation that evolves the boundary to the location of nearby edges. The authors then detect the aortic calcifications with thresholding and filter out the false positive regions due to nearby high intensity structures based on their anatomical location. The authors extract the centerline and oblique cross sections of the segmented aortas and compute the aorta morphology and calcification measures of the first 2500 subjects from COPDGene study. These measures include volume and number of calcified plaques and measures of vessel morphology such as average cross-sectional area, tortuosity, and arch width. Results: The authors computed the agreement between the algorithm and expert segmentations on 45 CT scans and obtained a closest point mean error of 0.62 ± 0.09 mm and a Dice coefficient of 0.92 ± 0.01. The calcification detection algorithm resulted in an improved true positive detection rate of 0.96 compared to previous work. The measurements of aorta size agreed with the measurements reported in previous work. The initial results showed associations of aorta morphology with calcification and with aging. These results may indicate aorta stiffening and unwrapping with calcification and aging. Conclusions: The authors have developed an objective tool to assess aorta morphology and aortic calcium plaques on CT scans that may be used to provide information about the presence of cardiovascular disease and its clinical impact in smokers.
Adaptive skin detection based on online training
NASA Astrophysics Data System (ADS)
Zhang, Ming; Tang, Liang; Zhou, Jie; Rong, Gang
2007-11-01
Skin is a widely used cue for porn image classification. Most conventional methods are off-line training schemes; they usually use a fixed boundary to segment skin regions in images and are effective only under restricted conditions, e.g., good lighting and a single skin tone. This paper presents an adaptive online training scheme for skin detection that can handle these tough cases. In our approach, skin detection is treated as a classification problem on a Gaussian mixture model. For each image, a human face is detected and the face color is used to establish a primary estimate of the skin color distribution. An adaptive online training algorithm is then used to find the real boundary between skin color and background color in the current image. Experimental results on 450 images show that the proposed method is more robust in general situations than conventional ones.
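A hedged scikit-learn sketch of the online step: fit a small Gaussian mixture to pixels from a detected face box (face detection assumed given) and score every pixel of the same image, so the skin/background boundary adapts per image. The component count and score threshold are invented for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def skin_mask(image_rgb, face_box, n_components=3, thresh=-12.0):
    x0, y0, x1, y1 = face_box                       # face detector output, assumed given
    face_pixels = image_rgb[y0:y1, x0:x1].reshape(-1, 3).astype(float)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(face_pixels)
    # Score every pixel of the same image: the skin boundary adapts per image
    scores = gmm.score_samples(image_rgb.reshape(-1, 3).astype(float))
    return (scores > thresh).reshape(image_rgb.shape[:2])
```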
Algorithms for Discovery of Multiple Markov Boundaries
Statnikov, Alexander; Lytkin, Nikita I.; Lemeire, Jan; Aliferis, Constantin F.
2013-01-01
Algorithms for Markov boundary discovery from data constitute an important recent development in machine learning, primarily because they offer a principled solution to the variable/feature selection problem and give insight on local causal structure. Over the last decade many sound algorithms have been proposed to identify a single Markov boundary of the response variable. Even though faithful distributions and, more broadly, distributions that satisfy the intersection property always have a single Markov boundary, other distributions/data sets may have multiple Markov boundaries of the response variable. The latter distributions/data sets are common in practical data-analytic applications, and there are several reasons why it is important to induce multiple Markov boundaries from such data. However, there are currently no sound and efficient algorithms that can accomplish this task. This paper describes a family of algorithms TIE* that can discover all Markov boundaries in a distribution. The broad applicability as well as efficiency of the new algorithmic family is demonstrated in an extensive benchmarking study that involved comparison with 26 state-of-the-art algorithms/variants in 15 data sets from a diversity of application domains. PMID:25285052
Pulmonary lobe segmentation based on ridge surface sampling and shape model fitting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ross, James C.
2013-12-15
Purpose: Performing lobe-based quantitative analysis of the lung in computed tomography (CT) scans can assist in efforts to better characterize complex diseases such as chronic obstructive pulmonary disease (COPD). While airways and vessels can help to indicate the location of lobe boundaries, segmentations of these structures are not always available, so methods to define the lobes in the absence of these structures are desirable. Methods: The authors present a fully automatic lung lobe segmentation algorithm that is effective in volumetric inspiratory and expiratory computed tomography (CT) datasets. The authors rely on ridge surface image features indicating fissure locations and a novel approach to modeling shape variation in the surfaces defining the lobe boundaries. The authors employ a particle system that efficiently samples ridge surfaces in the image domain and provides a set of candidate fissure locations based on the Hessian matrix. Following this, lobe boundary shape models generated from principal component analysis (PCA) are fit to the particle data to discriminate between fissure and nonfissure candidates. The resulting set of particle points is used to fit thin plate spline (TPS) interpolating surfaces to form the final boundaries between the lung lobes. Results: The authors tested algorithm performance on 50 inspiratory and 50 expiratory CT scans taken from the COPDGene study. Results indicate that the authors' algorithm performs comparably to pulmonologist-generated lung lobe segmentations and can produce good results in cases with accessory fissures, incomplete fissures, advanced emphysema, and low-dose acquisition protocols. Dice scores indicate that only 29 out of 500 (5.8%) lobes showed Dice scores lower than 0.9. Two different approaches for evaluating lobe boundary surface discrepancies were applied and indicate that algorithm boundary identification is most accurate in the vicinity of fissures detectable on CT. Conclusions: The proposed algorithm is effective for lung lobe segmentation in the absence of auxiliary structures such as vessels and airways. The most challenging cases are those with mostly incomplete, absent, or near-absent fissures and cases with poorly revealed fissures due to high image noise. However, the authors observe good performance even in the majority of these cases.
A finite element algorithm for high-lying eigenvalues with Neumann and Dirichlet boundary conditions
NASA Astrophysics Data System (ADS)
Báez, G.; Méndez-Sánchez, R. A.; Leyvraz, F.; Seligman, T. H.
2014-01-01
We present a finite element algorithm that computes eigenvalues and eigenfunctions of the Laplace operator for two-dimensional problems with homogeneous Neumann or Dirichlet boundary conditions, or combinations of either for different parts of the boundary. We use an inverse power plus Gauss-Seidel algorithm to solve the generalized eigenvalue problem. For Neumann boundary conditions the method is much more efficient than the equivalent finite difference algorithm. We checked the algorithm by comparing the cumulative level density of the spectrum obtained numerically with the theoretical prediction given by the Weyl formula. We found a systematic deviation due to the discretization, not to the algorithm itself.
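The eigenvalue computation can be sketched as shifted inverse power iteration for the generalized eigenproblem K x = λ M x arising from the finite element discretization. For brevity the inner solves below use a direct sparse factorization, whereas the paper pairs inverse iteration with Gauss-Seidel sweeps; K, M, and the shift σ are assumed given.

```python
# A minimal sketch, where K is the stiffness matrix and M the mass
# matrix (SciPy sparse, assumed given). The shift sigma targets
# eigenvalues near sigma, which is how high-lying modes can be reached.
import numpy as np
from scipy.sparse.linalg import splu

def inverse_power(K, M, sigma, iters=50):
    solve = splu((K - sigma * M).tocsc()).solve    # factor K - sigma*M once
    x = np.random.default_rng(0).standard_normal(K.shape[0])
    for _ in range(iters):
        x = solve(M @ x)                           # one inverse-iteration step
        x /= np.linalg.norm(x)
    lam = (x @ (K @ x)) / (x @ (M @ x))            # Rayleigh quotient estimate
    return lam, x
```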
Dual chamber arrhythmia detection in the implantable cardioverter defibrillator.
Dijkman, B; Wellens, H J
2000-10-01
Dual chamber implantable cardioverter defibrillator (ICD) technology extended ICD therapy to more than termination of hemodynamically unstable ventricular tachyarrhythmias. It created the basis for dual chamber arrhythmia management in which dependable detection is important for treatment and prevention of both ventricular and atrial arrhythmias. Dual chamber detection algorithms were investigated in two Medtronic dual chamber ICDs: the 7250 Jewel AF (33 patients) and the 7271 Gem DR (31 patients). Both ICDs use the same PR Logic algorithm to interpret tachycardia as ventricular tachycardia (VT), supraventricular tachycardia (SVT), or dual (VT + SVT). The accuracy of dual chamber detection was studied in 310 of 1,367 spontaneously occurring tachycardias in which the rate criterion alone was not sufficient for arrhythmia diagnosis. In 78 episodes there was a double tachycardia, in 223 episodes SVT was detected in the VT or ventricular fibrillation zone, and in 9 episodes arrhythmia was detected outside the boundaries of PR Logic functioning. In 100% of double tachycardias the VT was correctly diagnosed and received priority treatment. SVT was seen in 59 (19%) episodes diagnosed as VT. The causes of inappropriate detection were (1) algorithm failure (inability to fulfill the PR
Predicting Loss-of-Control Boundaries Toward a Piloting Aid
NASA Technical Reports Server (NTRS)
Barlow, Jonathan; Stepanyan, Vahram; Krishnakumar, Kalmanje
2012-01-01
This work presents an approach to predicting loss-of-control with the goal of providing the pilot a decision aid focused on maintaining the pilot's control action within predicted loss-of-control boundaries. The predictive architecture combines quantitative loss-of-control boundaries, a data-based boundary estimation algorithm, and an adaptive prediction method that estimates Markov model parameters in real time. The data-based estimation algorithm computes the boundary of a safe set of control inputs that will keep the aircraft within the loss-of-control boundaries for a specified time horizon. The adaptive prediction method generates estimates of the system Markov parameters, which are used by the data-based boundary estimation algorithm. The combined algorithm is applied to a nonlinear generic transport aircraft to illustrate the features of the architecture.
NASA Astrophysics Data System (ADS)
Amrute, Junedh M.; Athanasiou, Lambros S.; Rikhtegar, Farhad; de la Torre Hernández, José M.; Camarero, Tamara García; Edelman, Elazer R.
2018-03-01
Polymeric endovascular implants are the next step in minimally invasive vascular interventions. As an alternative to traditional metallic drug-eluting stents, these often-erodible scaffolds present opportunities and challenges for patients and clinicians. Theoretically, as they resorb and are absorbed over time, they obviate the long-term complications of permanent implants, but in the short term visualization, and therefore positioning, is problematic. Polymeric scaffolds can only be fully imaged using optical coherence tomography (OCT) imaging, as they are relatively invisible via angiography, and segmentation of polymeric struts in OCT images is performed manually, a laborious and intractable procedure for large datasets. Traditional lumen detection methods that use implant struts as boundary limits fail in images with polymeric implants. Therefore, it is necessary to develop an automated method to detect polymeric struts and luminal borders in OCT images; we present such a fully automated algorithm. Accuracy was validated using expert annotations on 1140 OCT images, with a positive predictive value of 0.93 for strut detection and an R2 correlation coefficient of 0.94 between detected and expert-annotated lumen areas. The proposed algorithm allows for rapid, accurate, and automated detection of polymeric struts and the luminal border in OCT images.
Edge grouping combining boundary and region information.
Stahl, Joachim S; Wang, Song
2007-10-01
This paper introduces a new edge-grouping method to detect perceptually salient structures in noisy images. Specifically, we define a new grouping cost function in a ratio form, where the numerator measures the boundary proximity of the resulting structure and the denominator measures the area of the resulting structure. This area term introduces a preference towards detecting larger-size structures and, therefore, makes the resulting edge grouping more robust to image noise. To find the optimal edge grouping with the minimum grouping cost, we develop a special graph model with two different kinds of edges and then reduce the grouping problem to finding a special kind of cycle in this graph with a minimum cost in ratio form. This optimal cycle-finding problem can be solved in polynomial time by a previously developed graph algorithm. We implement this edge-grouping method, test it on both synthetic data and real images, and compare its performance against several available edge-grouping and edge-linking methods. Furthermore, we discuss several extensions of the proposed method, including the incorporation of the well-known grouping cues of continuity and intensity homogeneity, introducing a factor to balance the contributions from the boundary and region information, and the prevention of detecting self-intersecting boundaries.
Resource sharing on CSMA/CD networks in the presence of noise. M.S. Thesis
NASA Technical Reports Server (NTRS)
Dinschel, Duane Edward
1987-01-01
Resource sharing on carrier sense multiple access with collision detection (CSMA/CD) networks can be accomplished by using window-control algorithms for bus contention. The window-control algorithms are designed to grant permission to transmit to the station with the minimum contention parameter. Proper operation of the window-control algorithm requires that all stations sense the same state of the network in each contention slot. Noise causes the state of the network to appear as a collision. False collisions can cause the window-control algorithm to terminate without isolating any stations. A two-phase window-control protocol and an approximate recurrence equation with noise as a parameter are developed to improve the performance of the window-control algorithms in the presence of noise. The results are compared through simulation, with the approximate recurrence equation yielding the best overall performance. Noise is an even bigger problem when it is not detected by all stations. In such cases it is possible for the window boundaries of the contending stations to become out of phase. Consequently, it is possible to isolate a station other than the one with the minimum contention parameter. To guarantee proper isolation of the minimum, a broadcast phase must be added after the termination of the algorithm. The protocol required to correct the window-control algorithm when noise is not detected by all stations is discussed.
NASA Astrophysics Data System (ADS)
Liu, Yun; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Hui, Mei; Liu, Xiaohua; Wu, Yijian
2015-09-01
As an important branch of infrared imaging technology, infrared target tracking and detection has great scientific value and a wide range of applications in both military and civilian areas. For infrared images, which are characterized by low SNR and serious background noise, an effective target detection algorithm is proposed in this paper, exploiting the frame-to-frame correlation of a moving target and the irrelevance of noise in sequential images, implemented with OpenCV. Firstly, since temporal differencing and background subtraction are highly complementary, we use a combined detection method of frame difference and background subtraction based on adaptive background updating. Results indicate that it is simple and can stably extract the foreground moving target from the video sequence. Because the background updating mechanism continuously updates each pixel, the infrared moving target is detected more accurately. This paves the way for real-time infrared target detection and tracking once the OpenCV-based algorithms are ported to a DSP platform. Afterwards, we use optimal thresholding to segment the image, transforming the gray images into binary images to provide a better condition for detection in the image sequence. Finally, using the relevance of moving objects between frames and mathematical morphology processing, we eliminate noise, reduce spurious areas, and smooth region boundaries. Experimental results prove that our algorithm achieves rapid detection of small infrared targets.
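A minimal OpenCV sketch of the combined frame-difference and background-subtraction detection with adaptive background updating follows. The file name, learning rate, thresholds, and 3x3 kernel are illustrative values, not the paper's tuned parameters.

```python
# A minimal sketch, assuming a readable video file of infrared frames.
import cv2
import numpy as np

cap = cv2.VideoCapture("infrared.avi")
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
background = prev.astype(np.float32)
alpha = 0.05                                   # adaptive background update rate
kernel = np.ones((3, 3), np.uint8)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Frame difference and background subtraction are complementary cues.
    fd = cv2.absdiff(gray, prev)
    bs = cv2.absdiff(gray, cv2.convertScaleAbs(background))
    _, fd_mask = cv2.threshold(fd, 15, 255, cv2.THRESH_BINARY)
    _, bs_mask = cv2.threshold(bs, 15, 255, cv2.THRESH_BINARY)
    target = cv2.bitwise_and(fd_mask, bs_mask)
    # Morphology removes speckle noise and smooths region boundaries.
    target = cv2.morphologyEx(target, cv2.MORPH_OPEN, kernel)
    cv2.accumulateWeighted(gray, background, alpha)  # update the background
    prev = gray
```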
The algorithm stitching for medical imaging
NASA Astrophysics Data System (ADS)
Semenishchev, E.; Marchuk, V.; Voronin, V.; Pismenskova, M.; Tolstova, I.; Svirin, I.
2016-05-01
In this paper we propose an algorithm for stitching medical images into one. The algorithm is designed to stitch medical X-ray images, microscopic images of biological particles, medical microscopic images, and others. Such stitched images can improve the diagnosis accuracy and quality for minimally invasive studies (e.g., laparoscopy, ophthalmology, and others). The proposed algorithm is based on the following steps: searching for and selecting areas with overlapping boundaries; keypoint and feature detection; preliminary stitching of the images and transformation to reduce visible distortion; searching for a single unified border in the overlap area; brightness, contrast, and white balance conversion; and superimposition into one image. Experimental results demonstrate the effectiveness of the proposed method for the image stitching task.
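The keypoint-and-feature step can be sketched with OpenCV. ORB features, brute-force matching, and a RANSAC homography are assumptions standing in for the paper's unspecified detector and transform; the border search, color balancing, and blending steps are omitted.

```python
# A minimal sketch, assuming two overlapping grayscale images `img1`
# and `img2` already loaded with OpenCV.
import cv2
import numpy as np

orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
matches = sorted(matches, key=lambda m: m.distance)[:200]  # best matches
src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
# Warp img1 into img2's frame; a real pipeline would blend the seam.
canvas = cv2.warpPerspective(img1, H, (img2.shape[1] * 2, img2.shape[0]))
canvas[:, :img2.shape[1]] = img2
```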
An improved silhouette for human pose estimation
NASA Astrophysics Data System (ADS)
Hawes, Anthony H.; Iftekharuddin, Khan M.
2017-08-01
We propose a novel method for analyzing images that exploits the natural lines of a human pose to find areas where self-occlusion could be present. Errors caused by self-occlusion cause several modern human pose estimation methods to misidentify body parts, which reduces the performance of most action recognition algorithms. Our method is motivated by the observation that, in several cases, occlusion can be reasoned about using only the boundary lines of limbs. An intelligent edge detection algorithm based on the above principle could be used to augment the silhouette with information useful for pose estimation algorithms and push forward progress on occlusion handling for human action recognition. The described algorithm is applicable to computer vision scenarios involving 2D images and (appropriately flattened) 3D images.
Computer-aided US diagnosis of breast lesions by using cell-based contour grouping.
Cheng, Jie-Zhi; Chou, Yi-Hong; Huang, Chiun-Sheng; Chang, Yeun-Chung; Tiu, Chui-Mei; Chen, Kuei-Wu; Chen, Chung-Ming
2010-06-01
To develop a computer-aided diagnostic algorithm with automatic boundary delineation for differential diagnosis of benign and malignant breast lesions at ultrasonography (US) and investigate the effect of boundary quality on the performance of a computer-aided diagnostic algorithm. This was an institutional review board-approved retrospective study with waiver of informed consent. A cell-based contour grouping (CBCG) segmentation algorithm was used to delineate the lesion boundaries automatically. Seven morphologic features were extracted. The classifier was a logistic regression function. Five hundred twenty breast US scans were obtained from 520 subjects (age range, 15-89 years), including 275 benign (mean size, 15 mm; range, 5-35 mm) and 245 malignant (mean size, 18 mm; range, 8-29 mm) lesions. The newly developed computer-aided diagnostic algorithm was evaluated on the basis of boundary quality and differentiation performance. The segmentation algorithms and features in two conventional computer-aided diagnostic algorithms were used for comparative study. The CBCG-generated boundaries were shown to be comparable with the manually delineated boundaries. The area under the receiver operating characteristic curve (AUC) and differentiation accuracy were 0.968 +/- 0.010 and 93.1% +/- 0.7, respectively, for all 520 breast lesions. At the 5% significance level, the newly developed algorithm was shown to be superior to the use of the boundaries and features of the two conventional computer-aided diagnostic algorithms in terms of AUC (0.974 +/- 0.007 versus 0.890 +/- 0.008 and 0.788 +/- 0.024, respectively). The newly developed computer-aided diagnostic algorithm that used a CBCG segmentation method to measure boundaries achieved a high differentiation performance. Copyright RSNA, 2010
Automatic extraction of building boundaries using aerial LiDAR data
NASA Astrophysics Data System (ADS)
Wang, Ruisheng; Hu, Yong; Wu, Huayi; Wang, Jian
2016-01-01
Building extraction is one of the main research topics of the photogrammetry community. This paper presents automatic algorithms for building boundary extraction from aerial LiDAR data. First, by segmenting height information generated from the LiDAR data, the outer boundaries of aboveground objects are expressed as closed chains of oriented edge pixels. Then, building boundaries are distinguished from nonbuilding ones by evaluating their shapes. The candidate building boundaries are reconstructed as rectangles or regular polygons by applying new algorithms, following the hypothesis-verification paradigm. These algorithms include constrained searching in Hough space, an enhanced Hough transformation, and a sequential linking technique. The experimental results show that the proposed algorithms successfully extract building boundaries at rates of 97%, 85%, and 92% for three LiDAR datasets with varying scene complexities.
NASA Astrophysics Data System (ADS)
Zhenying, Xu; Jiandong, Zhu; Qi, Zhang; Yamba, Philip
2018-06-01
Metallographic microscopy shows that the vast majority of metal materials are composed of many small grains; the grain size of a metal is important for determining the tensile strength, toughness, plasticity, and other mechanical properties. In order to quantitatively evaluate grain size in metals, grain boundaries must be identified in metallographic images. Based on the phenomenon of grain boundary blurring or disconnection in metallographic images, this study develops an algorithm based on regional separation for automatically extracting grain boundaries by an improved mean shift method. Experimental observation shows that the grain boundaries obtained by the proposed algorithm are highly complete and accurate. This research has practical value because the proposed algorithm is suitable for grain boundary extraction from most metallographic images.
NASA Astrophysics Data System (ADS)
Milroy, Conor; Martucci, Giovanni; O'Dowd, Colin
2010-05-01
Planetary boundary layer (PBL) top heights have been retrieved by two ceilometers (Vaisala CL31 and Jenoptik CHM15K) and a microwave radiometer (RPG-HATPRO) based at the Mace Head Research Station, Ireland, from the 8th to the 28th of June 2009 during the ICOS Mace Head campaign. Characteristic of this warm-water region, the marine boundary layer is typically two-layered, with a surface mixed layer (SML) and a decoupled residual or convective layer (DRCL), above which is the free troposphere (Kunz et al. 2002). The PBL data have been analyzed using a newly developed Temporal Height-Tracking (THT) algorithm (Martucci et al., 2010) for automatic detection of the independent SML and DRCL tops. Daily and weekly averages of the PBL data have been computed to smooth out the short-term variability and assess the dependence of the PBL depth on the different air masses advected over the Mace Head station. Moreover, a qualitative comparison between the ceilometer and radiometer PBL top detections has been performed to assess their consistency.
NASA Astrophysics Data System (ADS)
Dempsey, M. J.; Booth, J.; Arend, M.; Melecio-Vazquez, D.
2016-12-01
The radar wind profiler (RWP) located on the Liberty Science Center in Jersey City, NJ is a part of the New York City Meteorological Network (NYCMetNet). An automatic algorithm based on those by Angevine [1] and Molod [2] is expanded upon and implemented to take RWP signal-to-noise ratio (SNR) data and create an urban boundary layer (UBL) height product. Time series of the RWP UBL heights from clear and cloudy days are examined and compared to UBL height time series calculated from thermal data obtained from a NYCMetNet radiometer located on the roof of the Grove School of Engineering at The City College of New York. UBL data from the RWP are also compared to the MERRA (Modern Era Retrospective Analysis for Research and Applications) planetary boundary layer height time series product. A limited seasonal climatology is created from the available RWP data for clear and cloudy days and then compared to limited seasonal climatologies produced from MERRA boundary layer data and from boundary layer data calculated from the CCNY radiometer. As with wind profilers in the NOAA wind profiler network, the signal return at the lowest range gates is not always the result of turbulent scattering, but of scattering from other targets such as the building itself, birds, and insects. The algorithm attempts to address this during the daytime, when strong signal returns at the lowest range gates mask the SNR maxima above, which are representative of the actual UBL height. Detection of the collapse and fall of the boundary layer between 2:30 pm and 5:00 pm also meets with limited success. The profiler's upper and lower range gates limit observation of the nighttime boundary layer when it falls below the lowest range gate and of the daytime convective boundary layer when its maxima rise above the highest. Due to the constraints of the instrument and the algorithm, it is recommended that the boundary layer height product be constrained to the hours of 8 am to 7 pm.
Pattern recognition for passive polarimetric data using nonparametric classifiers
NASA Astrophysics Data System (ADS)
Thilak, Vimal; Saini, Jatinder; Voelz, David G.; Creusere, Charles D.
2005-08-01
Passive polarization-based imaging is a useful tool in computer vision and pattern recognition. A passive polarization imaging system forms a polarimetric image from the reflection of ambient light that contains useful information for computer vision tasks such as object detection (classification) and recognition. Applications of polarization-based pattern recognition include material classification and automatic shape recognition. In this paper, we present two target detection algorithms for images captured by a passive polarimetric imaging system. The proposed detection algorithms are based on Bayesian decision theory. In these approaches, an object can belong to one of a given number of classes, and classification involves making decisions that minimize the average probability of making incorrect decisions. This minimum is achieved by assigning an object to the class that maximizes the a posteriori probability. Computing a posteriori probabilities requires estimates of class-conditional probability density functions (likelihoods) and prior probabilities. A probabilistic neural network (PNN), which is a nonparametric method that can compute Bayes-optimal boundaries, and a k-nearest neighbor (KNN) classifier are used for density estimation and classification. The proposed algorithms are applied to polarimetric image data gathered in the laboratory with a liquid crystal-based system. The experimental results validate the effectiveness of the above algorithms for target detection from polarimetric data.
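Both classifiers are standard and can be sketched with scikit-learn, which is an assumption of convenience. Here X_train, y_train, and X_test stand for polarimetric feature vectors and labels; a per-class KernelDensity (Parzen window) plays the role of the PNN's likelihood estimate, and the bandwidth and k are illustrative.

```python
# A minimal sketch of KNN classification and a PNN-style Bayes rule.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, KernelDensity

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
knn_labels = knn.predict(X_test)

# PNN-style decision: one Parzen density per class, pick the class
# maximizing log prior + log likelihood.
classes = np.unique(y_train)
log_post = np.stack([
    KernelDensity(bandwidth=0.5).fit(X_train[y_train == c]).score_samples(X_test)
    + np.log(np.mean(y_train == c))          # log prior of class c
    for c in classes
])
pnn_labels = classes[np.argmax(log_post, axis=0)]
```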
A new non-iterative reconstruction method for the electrical impedance tomography problem
NASA Astrophysics Data System (ADS)
Ferreira, A. D.; Novotny, A. A.
2017-03-01
The electrical impedance tomography (EIT) problem consists in determining the distribution of the electrical conductivity of a medium subject to a set of current fluxes, from measurements of the corresponding electrical potentials on its boundary. EIT is probably the most studied inverse problem since the fundamental works by Calderón from the 1980s. It has many relevant applications in medicine (detection of tumors), geophysics (localization of mineral deposits) and engineering (detection of corrosion in structures). In this work, we are interested in reconstructing a number of anomalies with different electrical conductivity from the background. Since the EIT problem is written in the form of an overdetermined boundary value problem, the idea is to rewrite it as a topology optimization problem. In particular, a shape functional measuring the misfit between the boundary measurements and the electrical potentials obtained from the model is minimized with respect to a set of ball-shaped anomalies by using the concept of topological derivatives. It means that the objective functional is expanded and then truncated up to the second order term, leading to a quadratic and strictly convex form with respect to the parameters under consideration. Thus, a trivial optimization step leads to a non-iterative second order reconstruction algorithm. As a result, the reconstruction process becomes very robust with respect to noisy data and independent of any initial guess. Finally, in order to show the effectiveness of the devised reconstruction algorithm, some numerical experiments into two spatial dimensions are presented, taking into account total and partial boundary measurements.
NASA Astrophysics Data System (ADS)
Kuznetsova, T. A.
2018-05-01
Methods for increasing the robustness of gas-turbine aircraft engines (GTE) to interference by extending the capabilities of their automatic control systems (ACS) are analyzed. Flow pulsations in the suction and discharge lines of the compressor, which may cause stall, are considered as the interference. An algorithmic solution to the problem of controlling GTE pre-stall modes near the stability boundary is proposed. The aim of the study is to develop band-pass filtering algorithms that provide the ACS with detection of compressor pre-stall modes. The characteristic feature of the pre-stall effect is an increase of the pressure pulsation amplitude over the impeller at multiples of the rotor frequency. The method is based on a band-pass filter combining low-pass and high-pass digital filters. The impulse response of the high-pass filter is obtained from a known low-pass filter impulse response by spectral inversion. The resulting second-order band-pass filter (BPF) transfer function corresponds to a stable system. Two circuit implementations of the BPF are synthesized. The designed band-pass filtering algorithms were tested in the MATLAB environment. A comparative analysis of the amplitude-frequency responses of the proposed implementations allows choosing the BPF scheme that provides the best filtering quality. The BPF response to a periodic sinusoidal signal, simulating the experimentally obtained pressure pulsation function in the pre-stall mode, was considered. The results of the model experiment demonstrated the effectiveness of band-pass filtering algorithms as part of the ACS for identifying the compressor pre-stall mode by detecting the pressure fluctuation peaks that characterize the compressor's approach to the stability boundary.
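The band-pass construction described above, a high-pass obtained from a low-pass by spectral inversion and cascaded with a second low-pass, can be sketched in a few lines. This is an FIR illustration of the spectral-inversion idea with illustrative tap count and band edges; the paper's own filters are second-order implementations and are not reproduced.

```python
# A minimal sketch: spectral inversion turns a low-pass kernel into a
# high-pass kernel (negate it and add a unit impulse at its center);
# convolving a low-pass and a high-pass yields a band-pass.
import numpy as np
from scipy.signal import firwin, lfilter

n_taps = 101                                  # odd length, linear phase
lp_hi = firwin(n_taps, 0.4)                   # low-pass at the upper band edge
lp_lo = firwin(n_taps, 0.1)                   # low-pass at the lower band edge
hp_lo = -lp_lo
hp_lo[n_taps // 2] += 1.0                     # spectral inversion -> high-pass
bpf = np.convolve(lp_hi, hp_lo)               # band-pass 0.1-0.4 (Nyquist = 1)

# Apply to a pulsation-like test signal: the in-band tone passes, the
# low-frequency component is attenuated.
t = np.arange(4096)
x = np.sin(0.25 * np.pi * t) + 0.3 * np.sin(0.02 * np.pi * t)
y = lfilter(bpf, 1.0, x)
```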
Integrated segmentation of cellular structures
NASA Astrophysics Data System (ADS)
Ajemba, Peter; Al-Kofahi, Yousef; Scott, Richard; Donovan, Michael; Fernandez, Gerardo
2011-03-01
Automatic segmentation of cellular structures is an essential step in image cytology and histology. Despite substantial progress, better automation and improvements in accuracy and adaptability to novel applications are needed. In applications utilizing multi-channel immunofluorescence images, challenges include misclassification of epithelial and stromal nuclei, irregular nuclei and cytoplasm boundaries, and over- and under-segmentation of clustered nuclei. Variations in image acquisition conditions and artifacts from nuclei and cytoplasm images often confound existing algorithms in practice. In this paper, we present a robust and accurate algorithm for jointly segmenting cell nuclei and cytoplasm using a combination of ideas to reduce the aforementioned problems. First, an adaptive process that includes top-hat filtering, Eigenvalues-of-Hessian blob detection, and distance transforms is used to estimate the inverse illumination field and correct for intensity non-uniformity in the nuclei channel. Next, a minimum-error-thresholding based binarization process and seed detection combining Laplacian-of-Gaussian filtering constrained by a distance-map-based scale selection are used to identify candidate seeds for nuclei segmentation. The initial segmentation using a local maximum clustering algorithm is refined using a minimum-error-thresholding technique. Final refinements include an artifact removal process specifically targeted at lumens and other problematic structures and a systemic decision process to reclassify nuclei objects near the cytoplasm boundary as epithelial or stromal. Segmentation results were evaluated using 48 realistic phantom images with known ground truth. The overall segmentation accuracy exceeds 94%. The algorithm was further tested on 981 images of actual prostate cancer tissue. The artifact removal process worked in 90% of cases. The algorithm has now been deployed in a high-volume histology analysis application.
Stahl, Joachim S; Wang, Song
2008-03-01
Many natural and man-made structures have a boundary that shows a certain level of bilateral symmetry, a property that plays an important role in both human and computer vision. In this paper, we present a new grouping method for detecting closed boundaries with symmetry. We first construct a new type of grouping token in the form of symmetric trapezoids by pairing line segments detected from the image. A closed boundary can then be achieved by connecting some trapezoids with a sequence of gap-filling quadrilaterals. For such a closed boundary, we define a unified grouping cost function in a ratio form: the numerator reflects the boundary information of proximity and symmetry and the denominator reflects the region information of the enclosed area. The introduction of the region-area information in the denominator is able to avoid a bias toward shorter boundaries. We then develop a new graph model to represent the grouping tokens. In this new graph model, the grouping cost function can be encoded by carefully designed edge weights and the desired optimal boundary corresponds to a special cycle with a minimum ratio-form cost. We finally show that such a cycle can be found in polynomial time using a previous graph algorithm. We implement this symmetry-grouping method and test it on a set of synthetic data and real images. The performance is compared to two previous grouping methods that do not consider symmetry in their grouping cost functions.
Hromadka, T.V.; Guymon, G.L.
1985-01-01
An algorithm is presented for the numerical solution of the Laplace equation boundary-value problem, which is assumed to apply to soil freezing or thawing. The Laplace equation is numerically approximated by the complex-variable boundary-element method. The algorithm aids in reducing integrated relative error by providing a true measure of modeling error along the solution domain boundary. This measure of error can be used to select locations for adding, removing, or relocating nodal points on the boundary or to provide bounds for the integrated relative error of unknown nodal variable values along the boundary.
Learn, R; Feigenbaum, E
2016-06-01
Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. The second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.
Subband-Based Group Delay Segmentation of Spontaneous Speech into Syllable-Like Units
NASA Astrophysics Data System (ADS)
Nagarajan, T.; Murthy, H. A.
2004-12-01
In the development of a syllable-centric automatic speech recognition (ASR) system, segmentation of the acoustic signal into syllabic units is an important stage. Although the short-term energy (STE) function contains useful information about syllable segment boundaries, it has to be processed before segment boundaries can be extracted. This paper presents a subband-based group delay approach to segment spontaneous speech into syllable-like units. This technique exploits the additive property of the Fourier transform phase and the deconvolution property of the cepstrum to smooth the STE function of the speech signal and make it suitable for syllable boundary detection. By treating the STE function as a magnitude spectrum of an arbitrary signal, a minimum-phase group delay function is derived. This group delay function is found to be a better representative of the STE function for syllable boundary detection. Although the group delay function derived from the STE function of the speech signal contains segment boundaries, the boundaries are difficult to determine in the context of long silences, semivowels, and fricatives. In this paper, these issues are specifically addressed and algorithms are developed to improve the segmentation performance. The speech signal is first passed through a bank of three filters, corresponding to three different spectral bands. The STE functions of these signals are computed. Using these three STE functions, three minimum-phase group delay functions are derived. By combining the evidence derived from these group delay functions, the syllable boundaries are detected. Further, a multiresolution-based technique is presented to overcome the problem of shift in segment boundaries during smoothing. Experiments carried out on the Switchboard and OGI-MLTS corpora show that the error in segmentation is at most 25 milliseconds for 67% and 76.6% of the syllable segments, respectively.
NASA Astrophysics Data System (ADS)
Lu, J.; Egger, J.; Wimmer, A.; Großkopf, S.; Freisleben, B.
2008-03-01
In this paper we present an efficient algorithm for the segmentation of the inner and outer boundary of thoracic and abdominal aortic aneurysms (TAA & AAA) in computed tomography angiography (CTA) acquisitions. The aneurysm segmentation includes two steps: first, the inner boundary is segmented based on a grey level model with two thresholds; then, an adapted active contour model approach is applied to the more complicated outer boundary segmentation, with its initialization based on the available inner boundary segmentation. An opacity image, which aims at enhancing important features while reducing spurious structures, is calculated from the CTA images and employed to guide the deformation of the model. In addition, the active contour model is extended by a constraint force that prevents intersections of the inner and outer boundary and keeps the outer boundary at a distance, given by the thrombus thickness, from the inner boundary. Based upon the segmentation results, we can measure the aneurysm size at each centerline point on the centerline-orthogonal multiplanar reformatting (MPR) plane. Furthermore, a 3D TAA or AAA model is reconstructed from the set of segmented contours, and the presence of endoleaks is detected and highlighted. The implemented method has been evaluated on nine clinical CTA data sets with variations in anatomy and location of the pathology and has shown promising results.
NASA Astrophysics Data System (ADS)
Cheng, Jun; Zhang, Jun; Tian, Jinwen
2015-12-01
Based on a deep analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focused on improving the speed of the LiveWire algorithm is proposed in this paper. Firstly, the Haar wavelet transform is applied to the input image, and the boundary is extracted on the resulting low-resolution image. Secondly, the LiveWire shortest path is calculated with a directional search over the control point set, utilizing the spatial relationship between the two control points the user provides in real time. Thirdly, the search order of the points adjacent to the starting node is set in advance, and an ordinary queue instead of a priority queue is used as the storage pool of the points when optimizing their shortest-path values, reducing the complexity of the algorithm from O(n²) to O(n). Finally, a region-iterative backward projection method based on neighborhood pixel polling is used to convert the dual-pixel boundary of the reconstructed image to a single-pixel boundary after the inverse Haar wavelet transform. The algorithm proposed in this paper combines the advantage of the Haar wavelet transform, whose image decomposition and reconstruction are fast and consistent with the texture features of the image, with the advantage of the optimal path search based on a directional search over the control point set, which reduces the time complexity of the original algorithm. The algorithm thus improves the speed of interactive boundary extraction while reflecting the boundary information of the image more comprehensively. All of the above contribute to the execution efficiency and robustness of the algorithm.
Automated boundary segmentation and wound analysis for longitudinal corneal OCT images
NASA Astrophysics Data System (ADS)
Wang, Fei; Shi, Fei; Zhu, Weifang; Pan, Lingjiao; Chen, Haoyu; Huang, Haifan; Zheng, Kangkeng; Chen, Xinjian
2017-03-01
Optical coherence tomography (OCT) has been widely applied in the examination and diagnosis of corneal diseases, but the information directly achievable from the OCT images by manual inspection is limited. We propose an automatic processing method to assist ophthalmologists in locating the boundaries in corneal OCT images and analyzing the recovery of corneal wounds after treatment from longitudinal OCT images. It includes the following steps: preprocessing, epithelium and endothelium boundary segmentation and correction, wound detection, corneal boundary fitting, and wound analysis. The method was tested on a data set with longitudinal corneal OCT images from 20 subjects. Each subject has five images acquired after corneal operation over a period of time. The segmentation and classification accuracy of the proposed algorithm is high, and it can be used for analyzing wound recovery after corneal surgery.
Motion-seeded object-based attention for dynamic visual imagery
NASA Astrophysics Data System (ADS)
Huber, David J.; Khosla, Deepak; Kim, Kyungnam
2017-05-01
This paper describes a novel system that finds and segments "objects of interest" from dynamic imagery (video) by (1) processing each frame using an advanced motion algorithm that pulls out regions exhibiting anomalous motion, and (2) extracting the boundary of each object of interest using a biologically-inspired segmentation algorithm based on feature contours. The system uses a series of modular, parallel algorithms, which allows many complicated operations to be carried out by the system in a very short time, and can be used as a front-end to a larger system that includes object recognition and scene understanding modules. Using this method, we show 90% accuracy with fewer than 0.1 false positives per frame of video, which represents a significant improvement over detection using a baseline attention algorithm.
NASA Astrophysics Data System (ADS)
Khambampati, A. K.; Rashid, A.; Kim, B. S.; Liu, Dong; Kim, S.; Kim, K. Y.
2010-04-01
EIT has been used for the dynamic estimation of organ boundaries. One specific application in this context is the estimation of lung boundaries during pulmonary circulation. This would help track the size and shape of the lungs of patients suffering from diseases like pulmonary edema and acute respiratory failure (ARF). The dynamic boundary estimation of the lungs can also be utilized to set and control the air volume and pressure delivered to patients during artificial ventilation. In this paper, the expectation-maximization (EM) algorithm is used as an inverse algorithm to estimate the non-stationary lung boundary. The uncertainties caused in Kalman-type filters by inaccurate selection of model parameters are overcome using the EM algorithm. Numerical experiments using a chest-shaped geometry are carried out with the proposed method and the performance is compared with the extended Kalman filter (EKF). Results show the superior performance of EM in estimating the lung boundary.
Applying Workspace Limitations in a Velocity-Controlled Robotic Mechanism
NASA Technical Reports Server (NTRS)
Abdallah, Muhammad E. (Inventor); Hargrave, Brian (Inventor); Platt, Robert J., Jr. (Inventor)
2014-01-01
A robotic system includes a robotic mechanism responsive to velocity control signals, and a permissible workspace defined by a convex-polygon boundary. A host machine determines a position of a reference point on the mechanism with respect to the boundary, and includes an algorithm for enforcing the boundary by automatically shaping the velocity control signals as a function of the position, thereby providing smooth and unperturbed operation of the mechanism along the edges and corners of the boundary. The algorithm is suited for application with higher speeds and/or external forces. A host machine includes an algorithm for enforcing the boundary by shaping the velocity control signals as a function of the reference point position, and a hardware module for executing the algorithm. A method for enforcing the convex-polygon boundary is also provided that shapes a velocity control signal via a host machine as a function of the reference point position.
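The velocity-shaping idea can be sketched briefly. This is a generic illustration of smoothly attenuating the outward velocity component near each edge of a convex polygon, under assumed conventions (unit outward edge normals, a hand-picked margin, a linear taper); it is not the patented method's exact formulation.

```python
# A minimal sketch of one way to shape velocity commands near a convex
# polygon boundary. Edge normals, offsets, and the margin are assumed.
import numpy as np

def shape_velocity(p, v, normals, offsets, margin=0.05):
    """p: reference point, v: commanded velocity.
    Polygon interior: normals[i] @ x <= offsets[i] for all i."""
    v = v.copy()
    for n, b in zip(normals, offsets):
        dist = b - n @ p                      # distance to this edge (>= 0 inside)
        outward = n @ v                       # velocity component toward the edge
        if outward > 0 and dist < margin:
            scale = max(dist, 0.0) / margin   # taper smoothly to zero at the edge
            v -= (1.0 - scale) * outward * n  # remove the disallowed fraction
    return v
```

Because only the component normal to a nearby edge is removed, motion tangent to an edge or around a corner stays smooth rather than stopping abruptly, which is the behavior the abstract describes.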
A new morphology algorithm for shoreline extraction from DEM data
NASA Astrophysics Data System (ADS)
Yousef, Amr H.; Iftekharuddin, Khan; Karim, Mohammad
2013-03-01
Digital elevation models (DEMs) are a digital representation of elevations at regularly spaced points. They provide an accurate tool to extract shoreline profiles. One of the emerging sources for creating them is light detection and ranging (LiDAR), which can capture highly dense point clouds, with resolutions reaching 15 cm vertically and 100 cm horizontally, in short periods of time. In this paper we present a multi-step morphological algorithm to extract shoreline locations from DEM data and a predefined tidal datum. Unlike similar approaches, it utilizes Lowess nonparametric regression to estimate the missing values within the DEM file. It also detects and eliminates the outliers and errors that result from waves, ships, etc. by means of an anomaly test with neighborhood constraints. Because there might be significant broken regions such as branches and islands, it utilizes constrained morphological opening and closing to reduce these artifacts, which can affect the extracted shorelines. In addition, it eliminates docks, bridges, and fishing piers along the extracted shorelines by means of the Hough transform. Based on a specific tidal datum, the algorithm segments the DEM data into water and land objects. Without sacrificing the accuracy or the spatial details of the extracted boundaries, the algorithm smooths and extracts the shoreline profiles by tracing the boundary pixels between the land and water segments. For given tidal values, we qualitatively assess the visual quality of the extracted shorelines by superimposing them on available aerial photographs.
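A minimal sketch of the segmentation-and-morphology stage is given below, assuming dem is a 2D elevation array and datum the tidal elevation separating water from land. The structuring element sizes are illustrative, and the paper's Lowess infilling, anomaly test, and Hough-based pier removal are not reproduced.

```python
# A minimal sketch: threshold at the tidal datum, clean the mask with
# morphological open/close, then trace the land pixels touching water.
import numpy as np
from scipy import ndimage

water = dem < datum                                       # water vs. land
water = ndimage.binary_opening(water, np.ones((5, 5)))    # remove wave speckle
water = ndimage.binary_closing(water, np.ones((5, 5)))    # bridge small gaps
# The shoreline is the set of land pixels adjacent to water pixels.
land = ~water
shoreline = land & ndimage.binary_dilation(water, np.ones((3, 3)))
rows, cols = np.nonzero(shoreline)                        # boundary coordinates
```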
Vision-based weld pool boundary extraction and width measurement during keyhole fiber laser welding
NASA Astrophysics Data System (ADS)
Luo, Masiyang; Shin, Yung C.
2015-01-01
In keyhole fiber laser welding processes, the weld pool behavior is essential to determining welding quality. To better observe and control the welding process, accurate extraction of the weld pool boundary as well as its width is required. This work presents a weld pool edge detection technique based on an off-axial green illumination laser and a coaxial image capturing system that consists of a CMOS camera and optical filters. To cope with differences in image quality, a complete edge detection algorithm is developed based on searching for the local maximum greyness gradient and on linear interpolation. The extracted weld pool geometry and width are validated against actual weld width measurements and predictions by a numerical multi-phase model.
Qi, Xin; Xing, Fuyong; Foran, David J.; Yang, Lin
2013-01-01
Background: Automated analysis of imaged histopathology specimens could potentially provide support for improved reliability in detection and classification in a range of investigative and clinical cancer applications. Automated segmentation of cells in the digitized tissue microarray (TMA) is often the prerequisite for quantitative analysis. However, overlapping cells usually bring significant challenges for traditional segmentation algorithms. Objectives: In this paper, we propose a novel, automatic algorithm to separate overlapping cells in stained histology specimens acquired using bright-field RGB imaging. Methods: It starts by systematically identifying salient regions of interest throughout the image based upon their underlying visual content. The segmentation algorithm subsequently performs a quick, voting-based seed detection. Finally, the contour of each cell is obtained using a repulsive level-set deformable model initialized with the seeds generated in the previous step. We compared the experimental results with the most current literature, and computed the pixel-wise accuracy between human experts' annotations and those generated using the automatic segmentation algorithm. Results: The method was tested with 100 image patches containing more than 1000 overlapping cells. The overall precision and recall of the developed algorithm are 90% and 78%, respectively. We also implemented the algorithm on a GPU; the parallel implementation is 22 times faster than its C/C++ sequential implementation. Conclusion: The proposed overlapping cell segmentation algorithm can accurately detect the center of each overlapping cell and effectively separate each of the overlapping cells. The GPU is proven to be an efficient parallel platform for overlapping cell segmentation. PMID:22526139
Automated measurement of stent strut coverage in intravascular optical coherence tomography
NASA Astrophysics Data System (ADS)
Ahn, Chi Young; Kim, Byeong-Keuk; Hong, Myeong-Ki; Jang, Yangsoo; Heo, Jung; Joo, Chulmin; Seo, Jin Keun
2015-02-01
Optical coherence tomography (OCT) is a non-invasive, cross-sectional imaging modality that has become a prominent imaging method in percutaneous intracoronary intervention. We present an automated detection algorithm for stent strut coordinates and coverage in OCT images. The algorithm for stent strut detection is composed of a coordinate transformation from the polar to the Cartesian domain and application of second derivative operators in the radial and circumferential directions. Local region-based active contouring was employed to detect lumen boundaries. We applied the method to OCT pullback images acquired from human patients in vivo to quantitatively measure stent strut coverage. The validation studies against manual expert assessments demonstrated high Pearson's coefficients (R = 0.99) in terms of the stent strut coordinates, with no significant bias. An averaged Hausdorff distance of < 120 μm was obtained for vessel border detection. Quantitative comparison of stent strut to vessel wall distance found a bias of < 12.3 μm and a 95% confidence of < 110 μm.
Variable Threshold Method for Determining the Boundaries of Imaged Subvisible Particles.
Cavicchi, Richard E; Collett, Cayla; Telikepalli, Srivalli; Hu, Zhishang; Carrier, Michael; Ripple, Dean C
2017-06-01
An accurate assessment of particle characteristics and concentrations in pharmaceutical products by flow imaging requires accurate particle sizing and morphological analysis. Analysis of images begins with the definition of particle boundaries. Commonly a single threshold defines the level for a pixel in the image to be included in the detection of particles, but depending on the threshold level, this results in either missing translucent particles or oversizing of less transparent particles due to the halos and gradients in intensity near the particle boundaries. We have developed an imaging analysis algorithm that sets the threshold for a particle based on the maximum gray value of the particle. We show that this results in tighter boundaries for particles with high contrast, while conserving the number of highly translucent particles detected. The method is implemented as a plugin for FIJI, an open-source image analysis software. The method is tested for calibration beads in water and glycerol/water solutions, a suspension of microfabricated rods, and stir-stressed aggregates made from IgG. The result is that appropriate thresholds are automatically set for solutions with a range of particle properties, and that improved boundaries will allow for more accurate sizing results and potentially improved particle classification studies. Published by Elsevier Inc.
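The core idea, a per-particle threshold derived from that particle's maximum gray value, can be sketched as follows. The permissive first-pass threshold and the 50% fraction are illustrative assumptions, not the plugin's calibrated values, and the sketch assumes a background-subtracted image with bright particles.

```python
# A minimal sketch: candidates found at a permissive global threshold
# are re-thresholded at a fraction of their own maximum gray value,
# tightening boundaries for high-contrast particles while keeping
# faint, translucent ones.
import numpy as np
from scipy import ndimage

def variable_threshold(img, low=10, fraction=0.5):
    labels, n = ndimage.label(img > low)          # permissive first pass
    out = np.zeros_like(img, dtype=bool)
    for i in range(1, n + 1):
        region = labels == i
        level = fraction * img[region].max()      # per-particle threshold
        out |= region & (img > max(level, low))
    return out
```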
A novel automatic segmentation workflow of axial breast DCE-MRI
NASA Astrophysics Data System (ADS)
Besbes, Feten; Gargouri, Norhene; Damak, Alima; Sellami, Dorra
2018-04-01
In this paper we propose a novel, fully automatic breast tissue segmentation process that is independent of expert calibration and contrast. The proposed algorithm is composed of two major steps. The first step is the detection of the breast boundaries. It is based on image content analysis and the Moore-Neighbour tracing algorithm. As a processing step, Otsu thresholding and a neighbors algorithm are applied. Then, the area external to the breast is removed to get an approximate breast region. The second step is the delineation of the chest wall, which is considered as the lowest-cost path linking three key points. These points are located automatically on the breast: the left and right boundary points and the middle upper point placed at the sternum region, found using a statistical method. The minimum-cost path search problem is solved with Dijkstra's algorithm. Evaluation results reveal the robustness of our process in the face of different breast densities, complex forms, and challenging cases. In fact, the mean overlap between manual segmentation and automatic segmentation with our method is 96.5%. A comparative study shows that our proposed process is competitive and faster than existing methods: the segmentation of 120 slices with our method is achieved in 20.57 +/- 5.2 s.
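The chest wall delineation step relies on a minimum-cost path, and a compact sketch of Dijkstra's algorithm on a pixel grid is shown below. The cost map, the 4-connectivity, and the assumption that the goal is reachable are illustrative choices, not details from the paper.

```python
# A minimal sketch of Dijkstra between two pixels on an image-derived
# cost map. `cost` is a 2D array of nonnegative per-pixel costs (e.g.,
# low along dark chest-wall pixels); start and goal are (row, col).
import heapq
import numpy as np

def dijkstra_path(cost, start, goal):
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue                          # stale queue entry
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal                     # walk predecessors back to start
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]
```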
Robust linearized image reconstruction for multifrequency EIT of the breast.
Boverman, Gregory; Kao, Tzu-Jen; Kulkarni, Rujuta; Kim, Bong Seok; Isaacson, David; Saulnier, Gary J; Newell, Jonathan C
2008-10-01
Electrical impedance tomography (EIT) is a developing imaging modality that is beginning to show promise for detecting and characterizing tumors in the breast. At Rensselaer Polytechnic Institute, we have developed a combined EIT-tomosynthesis system that allows for the coregistered and simultaneous analysis of the breast using EIT and X-ray imaging. A significant challenge in EIT is the design of computationally efficient image reconstruction algorithms which are robust to various forms of model mismatch. Specifically, we have implemented a scaling procedure that is robust to the presence of a thin highly-resistive layer of skin at the boundary of the breast and we have developed an algorithm to detect and exclude from the image reconstruction electrodes that are in poor contact with the breast. In our initial clinical studies, it has been difficult to ensure that all electrodes make adequate contact with the breast, and thus procedures for the use of data sets containing poorly contacting electrodes are particularly important. We also present a novel, efficient method to compute the Jacobian matrix for our linearized image reconstruction algorithm by reducing the computation of the sensitivity for each voxel to a quadratic form. Initial clinical results are presented, showing the potential of our algorithms to detect and localize breast tumors.
Piccinelli, Marina; Faber, Tracy L; Arepalli, Chesnal D; Appia, Vikram; Vinten-Johansen, Jakob; Schmarkey, Susan L; Folks, Russell D; Garcia, Ernest V; Yezzi, Anthony
2014-02-01
Accurate alignment between cardiac CT angiographic studies (CTA) and nuclear perfusion images is crucial for improved diagnosis of coronary artery disease. This study evaluated in an animal model the accuracy of a CTA fully automated biventricular segmentation algorithm, a necessary step for automatic and thus efficient PET/CT alignment. Twelve pigs with acute infarcts were imaged using Rb-82 PET and 64-slice CTA. Post-mortem myocardium mass measurements were obtained. Endocardial and epicardial myocardial boundaries were manually and automatically detected on the CTA and both segmentations used to perform PET/CT alignment. To assess the segmentation performance, image-based myocardial masses were compared to experimental data; the hand-traced profiles were used as a reference standard to assess the global and slice-by-slice robustness of the automated algorithm in extracting myocardium, LV, and RV. Mean distances between the automated and the manual 3D segmented surfaces were computed. Finally, differences in rotations and translations between the manual and automatic surfaces were estimated post-PET/CT alignment. The largest, smallest, and median distances between interactive and automatic surfaces averaged 1.2 ± 2.1, 0.2 ± 1.6, and 0.7 ± 1.9 mm. The average angular and translational differences in CT/PET alignments were 0.4°, -0.6°, and -2.3° about x, y, and z axes, and 1.8, -2.1, and 2.0 mm in x, y, and z directions. Our automatic myocardial boundary detection algorithm creates surfaces from CTA that are similar in accuracy and provide similar alignments with PET as those obtained from interactive tracing. Specific difficulties in a reliable segmentation of the apex and base regions will require further improvements in the automated technique.
Iterative methods for plasma sheath calculations: Application to spherical probe
NASA Technical Reports Server (NTRS)
Parker, L. W.; Sullivan, E. C.
1973-01-01
The computer cost of a Poisson-Vlasov iteration procedure for the numerical solution of a steady-state collisionless plasma-sheath problem depends on: (1) the nature of the chosen iterative algorithm, (2) the position of the outer boundary of the grid, and (3) the nature of the boundary condition applied to simulate a condition at infinity (as in three-dimensional probe or satellite-wake problems). Two iterative algorithms, in conjunction with three types of boundary conditions, are analyzed theoretically and applied to the computation of current-voltage characteristics of a spherical electrostatic probe. The first algorithm was commonly used by physicists, and its computer costs depend primarily on the boundary conditions and are only slightly affected by the mesh interval. The second algorithm is not commonly used, and its costs depend primarily on the mesh interval and slightly on the boundary conditions.
NASA Astrophysics Data System (ADS)
Zhang, Ka; Sheng, Yehua; Gong, Zhijun; Ye, Chun; Li, Yongqiang; Liang, Cheng
2007-06-01
As an important sub-system in intelligent transportation systems (ITS), the detection and recognition of traffic signs from mobile images has become one of the hot spots in international ITS research. To address the problem of automatic traffic sign detection in motion images, a new self-adaptive algorithm for traffic sign detection based on color and shape features is proposed in this paper. Firstly, global statistical color features of different images are computed based on statistics theory. Secondly, self-adaptive thresholds and special segmentation rules for image segmentation are designed according to these global color features. Then, for red, yellow and blue traffic signs, the color image is segmented into three binary images by these thresholds and rules. Thirdly, if the number of white pixels in a segmented binary image exceeds the filtering threshold, the binary image is further filtered. Fourthly, the method of gray-value projection is used to confirm the top, bottom, left and right boundaries of candidate traffic sign regions in the segmented binary image. Lastly, if the shape features of a candidate region satisfy those of a real traffic sign, the candidate region is confirmed as a detected traffic sign region. The new algorithm was applied to actual motion images of natural scenes taken by a CCD camera of the mobile photogrammetry system in Nanjing at different times. The experimental results show that the algorithm is simple and robust, adapts well to natural scene images, and achieves reliable, high-speed detection of real traffic signs.
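As a rough illustration of the gray-value projection step, the following Python sketch (not the authors' code; the `min_count` filtering parameter is an assumption) finds the top, bottom, left and right boundaries of a candidate region in a segmented binary image.

```python
import numpy as np

def candidate_bounds(binary, min_count=5):
    """Return (top, bottom, left, right) of the candidate region in a
    0/1 binary image by projecting white-pixel counts onto each axis.
    Rows/columns with fewer than `min_count` white pixels are ignored."""
    row_proj = binary.sum(axis=1)   # white pixels per row
    col_proj = binary.sum(axis=0)   # white pixels per column
    rows = np.where(row_proj >= min_count)[0]
    cols = np.where(col_proj >= min_count)[0]
    if rows.size == 0 or cols.size == 0:
        return None                  # no candidate region found
    return rows[0], rows[-1], cols[0], cols[-1]

# toy example: a 'sign' blob
img = np.zeros((60, 80), dtype=np.uint8)
img[20:40, 30:55] = 1
print(candidate_bounds(img))  # bounds of the blob: rows 20-39, cols 30-54
```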
Efficient Boundary Extraction of BSP Solids Based on Clipping Operations.
Wang, Charlie C L; Manocha, Dinesh
2013-01-01
We present an efficient algorithm to extract the manifold surface that approximates the boundary of a solid represented by a Binary Space Partition (BSP) tree. Our polygonization algorithm repeatedly performs clipping operations on volumetric cells that correspond to a spatial convex partition and computes the boundary by traversing the connected cells. We use point-based representations along with finite-precision arithmetic to improve efficiency and generate the B-rep approximation of a BSP solid. The core of our polygonization method is a novel clipping algorithm that uses a set of logical operations to make it resistant to degeneracies resulting from the limited precision of floating-point arithmetic. The overall BSP to B-rep conversion algorithm can accurately generate boundaries with sharp and small features, and is faster than prior methods. Finally, we apply this algorithm to several geometric processing applications, including Boolean operations, model repair, and mesh reconstruction.
Automatic detection of larynx cancer from contrast-enhanced magnetic resonance images
NASA Astrophysics Data System (ADS)
Doshi, Trushali; Soraghan, John; Grose, Derek; MacKenzie, Kenneth; Petropoulakis, Lykourgos
2015-03-01
Detection of larynx cancer from medical imaging is important for quantification and for the definition of target volumes in radiotherapy treatment planning (RTP). Magnetic resonance imaging (MRI) is being increasingly used in RTP due to its high resolution and excellent soft tissue contrast. Manually detecting larynx cancer from sequential MRI is time consuming and subjective. The large diversity of cancers in terms of geometry and non-distinct boundaries, combined with the presence of normal anatomical regions close to the cancer regions, necessitates the development of automatic and robust algorithms for this task. A new automatic algorithm for the detection of larynx cancer from 2D gadolinium-enhanced T1-weighted (T1+Gd) MRI to assist clinicians in RTP is presented. The algorithm employs edge detection using spatial neighborhood information of pixels and incorporates this information in a fuzzy c-means clustering process to robustly separate different tissue types. Furthermore, it utilizes information on the expected cancer location to label cancer regions. Comparison of this automatic detection system with manual clinical detection on real T1+Gd axial MRI slices of 2 patients (24 MRI slices) with visible larynx cancer yields an average Dice similarity coefficient of 0.78 ± 0.04 and an average root mean square error of 1.82 ± 0.28 mm. Preliminary results show that this fully automatic system can assist clinicians in RTP by producing quantifiable, repeatable, and non-subjective detection results in a time-efficient and unbiased fashion.
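The clustering core can be sketched in a few lines. Below is a minimal standard fuzzy c-means implementation in Python for illustration; the paper's variant additionally folds spatial neighborhood information into the memberships, which is omitted here, and the cluster count and toy intensities are assumptions.

```python
import numpy as np

def fuzzy_c_means(x, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """Standard fuzzy c-means on 1-D intensities x: alternate between
    updating weighted cluster centers and soft memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                       # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)  # fuzzily weighted centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u_new = d ** (-2.0 / (m - 1))
        u_new /= u_new.sum(axis=0)
        if np.abs(u_new - u).max() < tol:
            break
        u = u_new
    return centers, u

rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(40, 5, 300),    # three synthetic tissue
                         rng.normal(120, 8, 300),   # intensity populations
                         rng.normal(200, 6, 300)])
centers, u = fuzzy_c_means(pixels)
print(np.sort(centers))   # roughly [40, 120, 200]
```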
An adaptive tracker for ShipIR/NTCS
NASA Astrophysics Data System (ADS)
Ramaswamy, Srinivasan; Vaitekunas, David A.
2015-05-01
A key component in any image-based tracking system is the adaptive tracking algorithm used to segment the image into potential targets, rank and select the best candidate target, and gate the selected target to further improve tracker performance. This paper describes a new adaptive tracker algorithm added to the naval threat countermeasure simulator (NTCS) of the NATO-standard ship signature model (ShipIR). The new adaptive tracking algorithm is an optional feature used with any of the existing internal NTCS or user-defined seeker algorithms (e.g., binary centroid, intensity centroid, and threshold intensity centroid). The algorithm segments the detected pixels into clusters, and the smallest set of clusters that meets the detection criterion is obtained by using a knapsack algorithm to identify the set of clusters that should not be used. The rectangular area containing the chosen clusters defines an inner boundary, from which a weighted centroid is calculated as the aim-point. A track-gate is then positioned around the clusters, taking into account the rate of change of the bounding area and compensating for any gimbal displacement. A sequence of scenarios is used to test the new tracking algorithm on a generic unclassified DDG ShipIR model, with and without flares, and demonstrate how some of the key seeker signals are impacted by both the ship and flare intrinsic signatures.
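One plausible reading of the cluster-exclusion step is a 0/1 knapsack over cluster intensities: discard as many clusters as possible while the retained set still meets the detection criterion. The Python sketch below illustrates that reading only; the quantization scale and the form of the criterion are assumptions, not the paper's specification.

```python
def clusters_to_exclude(intensities, required, scale=100):
    """0/1 knapsack sketch: choose the largest set of clusters to discard
    while the retained clusters still carry at least `required` intensity.
    Intensities are quantized by `scale` to form integer knapsack weights."""
    w = [int(round(v * scale)) for v in intensities]
    cap = sum(w) - int(round(required * scale))   # intensity we may discard
    if cap < 0:
        return []                                  # criterion cannot be met
    n = len(w)
    best = [[0] * (cap + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(cap + 1):
            best[i][c] = best[i - 1][c]
            if w[i - 1] <= c:
                best[i][c] = max(best[i][c], best[i - 1][c - w[i - 1]] + 1)
    drop, c = [], cap                              # backtrack the discard set
    for i in range(n, 0, -1):
        if best[i][c] != best[i - 1][c]:
            drop.append(i - 1)
            c -= w[i - 1]
    return drop

print(clusters_to_exclude([5.0, 1.2, 0.8, 3.1], required=7.5))  # [2, 1]
```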
An intelligent subtitle detection model for locating television commercials.
Huang, Yo-Ping; Hsu, Liang-Wei; Sandnes, Frode-Eika
2007-04-01
A strategy for locating television (TV) commercials in TV programs is proposed. Based on the observation that most TV commercials do not have subtitles, the first stage exploits six subtitle constraints and an adaptive neurofuzzy inference system model to determine whether a frame contains a subtitle or not. The second stage involves locating the mark-in/mark-out points using a genetic algorithm. An interactive user interface allows users to efficiently identify and fine-tune the exact boundaries separating the commercials from the program content, and erroneous boundaries can be corrected manually. Experimental results show that the precision and recall rates exceed 90%.
Fast and objective detection and analysis of structures in downhole images
NASA Astrophysics Data System (ADS)
Wedge, Daniel; Holden, Eun-Jung; Dentith, Mike; Spadaccini, Nick
2017-09-01
Downhole acoustic and optical televiewer images, and formation microimager (FMI) logs, are important datasets for structural and geotechnical analyses in the mineral and petroleum industries. Within these data, dipping planar structures appear as sinusoids, often in incomplete form and in abundance. Their detection is a labour-intensive and hence expensive task, and as such is a significant bottleneck in data processing, as companies may have hundreds of kilometres of logs to process each year. We present an image analysis system that harnesses the power of automated image analysis and provides an interactive user interface to support the analysis of televiewer images by users with different objectives. Our algorithm rapidly produces repeatable, objective results. We have embedded it in an interactive workflow that complements geologists' intuition and experience in interpreting data, improving efficiency and assisting, rather than replacing, the geologist. The main contributions include a new image quality assessment technique for highlighting image areas most suited to automated structure detection and for detecting boundaries of geological zones, and a novel sinusoid detection algorithm for detecting and selecting sinusoids with given confidence levels. Further tools support rapid analysis and additional structure detection, e.g., detection limited to specific orientations.
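The geometric core is that a plane intersecting a cylindrical borehole traces one sinusoid per revolution in the unrolled image. As a minimal sketch (not the paper's confidence-based detector), the sinusoid can be recovered from picked points by linear least squares; the synthetic data below are an assumption for illustration.

```python
import numpy as np

def fit_sinusoid(azimuth_deg, depth):
    """Fit depth(az) = A*sin(az) + B*cos(az) + C by linear least squares.
    A dipping plane intersects the borehole wall in exactly this one-cycle
    sinusoid: amplitude encodes dip, phase encodes dip direction."""
    az = np.radians(azimuth_deg)
    M = np.column_stack([np.sin(az), np.cos(az), np.ones_like(az)])
    (A, B, C), *_ = np.linalg.lstsq(M, depth, rcond=None)
    amplitude = np.hypot(A, B)
    phase = np.degrees(np.arctan2(B, A))
    return amplitude, phase, C

# synthetic picks along a partial sinusoid with noise
rng = np.random.default_rng(0)
az = np.linspace(0, 300, 40)                     # incomplete coverage is common
true = 0.8 * np.sin(np.radians(az - 35)) + 12.0
amp, phase, mid = fit_sinusoid(az, true + rng.normal(0, 0.02, az.size))
print(round(amp, 2), round(mid, 2))              # ~0.8, ~12.0
```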
Note: A manifold ranking based saliency detection method for camera.
Zhang, Libo; Sun, Yihan; Luo, Tiejian; Rahman, Mohammad Muntasir
2016-09-01
Research focused on salient object regions in natural scenes has attracted much attention in computer vision and has been widely used in many applications such as object detection and segmentation. However, accurately focusing on the salient region while taking photographs of real-world scenery is still a challenging task. To deal with this problem, this paper presents a novel approach based on the human visual system, which exploits both a background prior and a compactness prior. In the proposed method, we eliminate unsuitable boundaries with a fixed threshold to optimize the image boundary selection, which provides more precise estimates. Then, object detection, optimized with the compactness prior, is obtained by ranking with background queries. Salient objects are generally grouped together into connected areas that have compact spatial distributions. The experimental results on three public datasets demonstrate that the precision and robustness of the proposed algorithm are clearly improved.
Data fusion of multi-scale representations for structural damage detection
NASA Astrophysics Data System (ADS)
Guo, Tian; Xu, Zili
2018-01-01
Despite extensive research into structural health monitoring (SHM) in the past decades, few methods can detect multiple instances of slight damage in noisy environments. Here, we introduce a new hybrid method that utilizes multi-scale space theory and a data fusion approach for multiple damage detection in beams and plates. A cascade filtering approach provides a multi-scale space for noisy mode shapes and filters the fluctuations caused by measurement noise. In multi-scale space, a series of amplification and data fusion algorithms are utilized to search for damage features across all possible scales. We verify the effectiveness of the method by numerical simulation using damaged beams and plates with various types of boundary conditions. Monte Carlo simulations are conducted to illustrate the effectiveness and noise immunity of the proposed method. The applicability is further validated via laboratory case studies focusing on different damage scenarios. The results demonstrate that the proposed method has superior noise tolerance, as well as damage sensitivity, without requiring knowledge of material properties or boundary conditions.
Detection of bone disease by hybrid SST-watershed x-ray image segmentation
NASA Astrophysics Data System (ADS)
Sanei, Saeid; Azron, Mohammad; Heng, Ong Sim
2001-07-01
Detection of diagnostic features from X-ray images is attractive due to the low cost of these images. Accurate detection of the bone metastasis region greatly assists physicians in monitoring treatment and in removing cancerous tissue by surgery. Here, a hybrid SST-watershed algorithm efficiently detects the boundary of the diseased regions. The Shortest Spanning Tree (SST), based on graph theory, is one of the most powerful tools in grey-level image segmentation. The method converts the image into arbitrarily-shaped closed segments of distinct grey levels. To do this, the image is initially mapped to a tree. Then, using the RSST algorithm, the image is segmented into a certain number of arbitrarily-shaped regions. However, in fine segmentation, over-segmentation causes loss of objects of interest. In coarse segmentation, on the other hand, the SST-based method suffers from merging regions belonging to different objects. By applying the watershed algorithm, the large segments are divided into smaller regions based on the number of catchment basins in each segment. The process exploits a bi-level watershed concept to separate each multi-lobe region into a number of areas, each corresponding to an object (in our case, a cancerous region of the bone), disregarding their homogeneity in grey level.
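A minimal sketch of the watershed splitting step, assuming the scikit-image and SciPy packages; the marker-selection threshold here is a crude assumption, not the paper's bi-level procedure.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def split_segment(segment_mask):
    """Split one large (possibly multi-lobe) segment into sub-regions,
    one per catchment basin of its distance transform, in the spirit of
    the hybrid SST-watershed step described above."""
    dist = ndi.distance_transform_edt(segment_mask)
    # one marker per lobe: peaks of the distance map, crudely thresholded
    markers, n = ndi.label(dist > 0.5 * dist.max())
    labels = watershed(-dist, markers, mask=segment_mask)
    return labels, n

# two overlapping disks form a single binary segment with two lobes
yy, xx = np.mgrid[0:80, 0:120]
blob = ((yy - 40) ** 2 + (xx - 40) ** 2 < 20 ** 2) | \
       ((yy - 40) ** 2 + (xx - 75) ** 2 < 18 ** 2)
labels, n = split_segment(blob)
print(n, np.unique(labels))   # 2 basins -> labels 0 (background), 1, 2
```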
Reconstruction of multiple cracks from experimental electrostatic boundary measurements
NASA Technical Reports Server (NTRS)
Bryan, Kurt; Liepa, Valdis; Vogelius, Michael
1993-01-01
An algorithm for recovering a collection of linear cracks in a homogeneous electrical conductor from boundary measurements of voltages induced by specified current fluxes is described. The technique is a variation of Newton's method and is based on taking weighted averages of the boundary data. An apparatus that was constructed specifically for generating laboratory data on which to test the algorithm is also described. The algorithm is applied to a number of different test cases and the results are discussed.
A novel iris localization algorithm using correlation filtering
NASA Astrophysics Data System (ADS)
Pohit, Mausumi; Sharma, Jitu
2015-06-01
Fast and efficient segmentation of the iris from eye images is a primary requirement for robust, database-independent iris recognition. In this paper we present a new algorithm for computing the inner and outer boundaries of the iris and locating the pupil centre. The pupil-iris boundary computation is based on a correlation filtering approach, whereas the iris-sclera boundary is determined through one-dimensional intensity mapping. The proposed approach is computationally less expensive than existing algorithms such as the Hough transform.
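The one-dimensional intensity mapping idea can be sketched as follows (the correlation-filtering pupil stage is not reproduced, and the radius range and synthetic eye are assumptions): average the gray level on circles of growing radius about the pupil centre and take the radius of steepest brightness increase as the iris-sclera boundary.

```python
import numpy as np

def iris_outer_radius(gray, cx, cy, r_min=20, r_max=120):
    """1-D intensity mapping: mean gray level on circles about (cx, cy);
    the iris-sclera boundary is the radius of the largest radial jump."""
    theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    radii = np.arange(r_min, r_max)
    profile = np.empty(radii.size)
    for i, r in enumerate(radii):
        xs = np.clip((cx + r * np.cos(theta)).astype(int), 0, gray.shape[1] - 1)
        ys = np.clip((cy + r * np.sin(theta)).astype(int), 0, gray.shape[0] - 1)
        profile[i] = gray[ys, xs].mean()
    return radii[np.argmax(np.diff(profile))]

# synthetic eye: dark iris disk (radius 60) on a bright sclera
yy, xx = np.mgrid[0:200, 0:200]
img = np.where((yy - 100) ** 2 + (xx - 100) ** 2 < 60 ** 2, 60.0, 220.0)
print(iris_outer_radius(img, 100, 100))   # ~59-60
```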
Mass detection with digitized screening mammograms by using Gabor features
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Agyepong, Kwabena
2007-03-01
Breast cancer is the leading cancer among American women. The current lifetime risk of developing breast cancer is 13.4% (one in seven). Mammography is the most effective technology presently available for breast cancer screening. With digital mammograms, computer-aided detection (CAD) has proven to be a useful tool for radiologists. In this paper, we focus on mass detection, a common category of breast cancer findings relative to calcification and architectural distortion. We propose a new mass detection algorithm utilizing Gabor filters, termed "Gabor Mass Detection" (GMD). There are three steps in the GMD algorithm: (1) preprocessing, (2) generating alarms, and (3) classification (reducing false alarms). Down-sampling, quantization, denoising and enhancement are done in the preprocessing step. Then a total of 30 Gabor-filtered images (6 bands by 5 orientations) are produced. Alarm segments are generated by thresholding four Gabor images of full orientations (Stage-I classification) with image-dependent thresholds computed via histogram analysis. Next, a set of edge histogram descriptors (EHD) is extracted from 24 Gabor images (6 by 4) for use in Stage-II classification. After clustering the EHD features with the fuzzy c-means clustering method, a k-nearest-neighbor classifier is used to reduce the number of false alarms. We analyzed 431 digitized mammograms (159 normal images vs. 272 cancerous images, from the DDSM project, University of South Florida) with the proposed GMD algorithm, and ten-fold cross-validation was used to test the GMD algorithm on the available data. The GMD performance is as follows: sensitivity (true positive rate) = 0.88 at 1.25 false positives per image (FPI), and area under the ROC curve = 0.83. The overall performance of the GMD algorithm is satisfactory, and the accuracy of locating masses (highlighting the boundaries of suspicious areas) is relatively high. Furthermore, the GMD algorithm can successfully detect early-stage malignant masses (with small Assessment and low Subtlety values). In addition, Gabor-filtered images are used in both stages of classification, which greatly simplifies the GMD algorithm.
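A 6-band-by-5-orientation Gabor bank like the one described can be built in a few lines; the sketch below uses generic parameter values that are assumptions, not the paper's settings.

```python
import numpy as np

def gabor_kernel(sigma, theta, wavelength, size=31, gamma=0.5):
    """Real (cosine) Gabor kernel: a Gaussian envelope multiplied by a
    plane wave at orientation `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return g * np.cos(2 * np.pi * xr / wavelength)

# 6 bands x 5 orientations, mirroring the 30-filter bank described above
bank = [gabor_kernel(sigma=2.0 * 1.4 ** b, theta=o * np.pi / 5,
                     wavelength=4.0 * 1.4 ** b)
        for b in range(6) for o in range(5)]
print(len(bank), bank[0].shape)   # 30 (31, 31)
```

A full pipeline would then convolve each kernel with the preprocessed mammogram, for example via scipy.signal.fftconvolve, before the thresholding and EHD stages.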
Algorithm for Controlling a Centrifugal Compressor
NASA Technical Reports Server (NTRS)
Benedict, Scott M.
2004-01-01
An algorithm has been developed for controlling a centrifugal compressor that serves as the prime mover in a heat-pump system. Experimental studies have shown that the operating conditions for maximum compressor efficiency are close to the boundary beyond which surge occurs. Compressor surge is a destructive condition in which there are instantaneous reversals of flow associated with a high outlet-to-inlet pressure differential. For a given cooling load, the algorithm sets the compressor speed at the lowest possible value while adjusting the inlet guide vane angle and diffuser vane angle to maximize efficiency, subject to an overriding requirement to prevent surge. The onset of surge is detected via the onset of oscillations of the electric current supplied to the compressor motor, associated with surge-induced oscillations of the torque exerted by and on the compressor rotor. The algorithm can be implemented in any of several computer languages.
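The control policy can be summarized in a short sketch. Everything below is illustrative and hypothetical rather than the NASA implementation: the surge proxy is a simple relative ripple on the motor current, and the corrective actions, thresholds, and step sizes are assumptions.

```python
import numpy as np

SURGE_RIPPLE = 0.05   # assumed relative current-oscillation threshold

def surge_onset(motor_current):
    """Hypothetical surge proxy: oscillation of the motor current about
    its mean, per the detection principle described in the abstract."""
    i = np.asarray(motor_current, dtype=float)
    return i.std() / i.mean() > SURGE_RIPPLE

def control_step(speed, vanes, meets_load, motor_current,
                 d_speed=50.0, d_vane=1.0):
    """One illustrative iteration: surge avoidance overrides everything;
    otherwise hold the lowest speed that still meets the cooling load.
    (The efficiency hill-climb on the two vane angles is not shown, and
    the direction of the anti-surge correction is an assumption.)"""
    guide, diffuser = vanes
    if surge_onset(motor_current):
        return speed + d_speed, (guide - d_vane, diffuser - d_vane)
    if not meets_load:
        return speed + d_speed, vanes
    return max(speed - d_speed, 0.0), vanes
```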
NASA Astrophysics Data System (ADS)
Chitchian, Shahab; Vincent, Kathleen L.; Vargas, Gracie; Motamedi, Massoud
2012-11-01
We have explored the use of optical coherence tomography (OCT) as a noninvasive tool for assessing the toxicity of topical microbicides, products used to prevent HIV, by monitoring the integrity of the vaginal epithelium. A novel feature-based segmentation algorithm using a nearest-neighbor classifier was developed to monitor changes in the morphology of vaginal epithelium. The two-step automated algorithm yielded OCT images with a clearly defined epithelial layer, enabling differentiation of normal and damaged tissue. The algorithm was robust in that it was able to discriminate the epithelial layer from underlying stroma as well as residual microbicide product on the surface. This segmentation technique for OCT images has the potential to be readily adaptable to the clinical setting for noninvasively defining the boundaries of the epithelium, enabling quantifiable assessment of microbicide-induced damage in vaginal tissue.
Dao, Duy; Salehizadeh, S M A; Noh, Yeonsik; Chong, Jo Woon; Cho, Chae Ho; McManus, Dave; Darling, Chad E; Mendelson, Yitzhak; Chon, Ki H
2017-09-01
Motion and noise artifacts (MNAs) impose limits on the usability of the photoplethysmogram (PPG), particularly in the context of ambulatory monitoring. MNAs can distort PPG, causing erroneous estimation of physiological parameters such as heart rate (HR) and arterial oxygen saturation (SpO2). In this study, we present a novel approach, "TifMA," based on the time-frequency spectrum of PPG, to first detect the MNA-corrupted data and then discard the nonusable part of the corrupted data. The term "nonusable" refers to segments of PPG data from which the HR signal cannot be recovered accurately. Two sequential classification procedures were included in the TifMA algorithm. The first classifier distinguishes between MNA-corrupted and MNA-free PPG data. Once a segment of data is deemed MNA-corrupted, the next classifier determines whether the HR can be recovered from the corrupted segment or not. A support vector machine (SVM) classifier was used to build a decision boundary for the first classification task using data segments from a training dataset. Features from the time-frequency spectra of PPG were extracted to build the detection model. Five datasets were considered for evaluating TifMA performance: (1) and (2) were laboratory-controlled PPG recordings from forehead and finger pulse oximeter sensors with subjects making random movements, (3) and (4) were actual patient PPG recordings from UMass Memorial Medical Center with random free movements, and (5) was a laboratory-controlled PPG recording dataset measured at the forehead while the subjects ran on a treadmill. The first dataset was used to analyze the noise sensitivity of the algorithm. Datasets 2-4 were used to evaluate the MNA detection phase of the algorithm. The results from the first phase of the algorithm (MNA detection) were compared to results from three existing MNA detection algorithms: the Hjorth, kurtosis-Shannon entropy, and time-domain variability-SVM approaches, the last of which was recently developed in our laboratory. The proposed TifMA algorithm consistently provided higher detection rates than the other three methods, with accuracies greater than 95% for all data. Moreover, our algorithm was able to pinpoint the start and end times of the MNA with an error of less than 1 s in duration, whereas the next-best algorithm had a detection error of more than 2.2 s. The final, most challenging, dataset was collected to verify the performance of the algorithm in discriminating between corrupted data that were usable for accurate HR estimation and data that were nonusable. On average, 48% of the data segments were found to contain MNA, and of these, 38% could still be used to provide reliable HR estimation.
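The first-stage classifier is a standard SVM decision boundary. A minimal scikit-learn sketch follows; the two feature columns are placeholders standing in for the paper's time-frequency features, and all values are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC

# toy stand-ins for time-frequency features of PPG segments:
# rows = segments, columns = two assumed spectral descriptors
rng = np.random.default_rng(1)
clean = rng.normal([0.1, 0.2], 0.05, size=(200, 2))
corrupt = rng.normal([0.5, 0.6], 0.15, size=(200, 2))
X = np.vstack([clean, corrupt])
y = np.r_[np.zeros(200), np.ones(200)]          # 0 = MNA-free, 1 = corrupted

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
segment = np.array([[0.12, 0.18]])              # a new segment's features
print("corrupted" if clf.predict(segment)[0] else "usable")
```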
NASA Technical Reports Server (NTRS)
Maslanik, J. A.; Key, J.
1992-01-01
An expert system framework has been developed to classify sea ice types using satellite passive microwave data, an operational classification algorithm, spatial and temporal information, ice types estimated from a dynamic-thermodynamic model, output from a neural network that detects the onset of melt, and knowledge about season and region. The rule base imposes boundary conditions upon the ice classification, modifies parameters in the ice algorithm, determines a 'confidence' measure for the classified data, and under certain conditions, replaces the algorithm output with model output. Results demonstrate the potential power of such a system for minimizing overall error in the classification and for providing non-expert data users with a means of assessing the usefulness of the classification results for their applications.
Compression of Flow Can Reveal Overlapping-Module Organization in Networks
NASA Astrophysics Data System (ADS)
Viamontes Esquivel, Alcides; Rosvall, Martin
2011-10-01
To better understand the organization of overlapping modules in large networks with respect to flow, we introduce the map equation for overlapping modules. In this information-theoretic framework, we use the correspondence between compression and regularity detection. The generalized map equation measures how well we can compress a description of flow in the network when we partition it into modules with possible overlaps. When we minimize the generalized map equation over overlapping network partitions, we detect modules that capture flow and determine which nodes at the boundaries between modules should be classified in multiple modules and to what degree. With a novel greedy-search algorithm, we find that some networks, for example, the neural network of the nematode Caenorhabditis elegans, are best described by modules dominated by hard boundaries, but that others, for example, the sparse European-roads network, have an organization of highly overlapping modules.
Chong, Jo Woon; Dao, Duy K; Salehizadeh, S M A; McManus, David D; Darling, Chad E; Chon, Ki H; Mendelson, Yitzhak
2014-11-01
Motion and noise artifacts (MNA) are a serious obstacle in utilizing photoplethysmogram (PPG) signals for real-time monitoring of vital signs. We present an MNA detection method which can provide a clean vs. corrupted decision on each successive PPG segment. For motion artifact detection, we compute four time-domain parameters: (1) standard deviation of peak-to-peak intervals, (2) standard deviation of peak-to-peak amplitudes, (3) standard deviation of systolic and diastolic interval ratios, and (4) mean standard deviation of pulse shape. We have adopted a support vector machine (SVM) which takes these parameters from clean and corrupted PPG signals and builds a decision boundary to classify them. We apply several distinct features of the PPG data to enhance classification performance. The algorithm we developed was verified on PPG data segments recorded in simulation, laboratory-controlled, and walking/stair-climbing experiments, and we compared several well-established MNA detection methods to our proposed algorithm. All compared detection algorithms were evaluated in terms of motion artifact detection accuracy, heart rate (HR) error, and oxygen saturation (SpO2) error. For laboratory-controlled finger and forehead PPG recordings and daily-activity movement data, our proposed algorithm gives 94.4, 93.4, and 93.7% accuracy, respectively. Significant reductions in HR and SpO2 errors (to 2.3 bpm and 2.7%) were noted when the artifacts identified by SVM-MNA were removed from the original signal, compared with leaving them in (17.3 bpm and 5.4%). The accuracy of our proposed method was significantly higher, and its errors significantly lower, than those of all other detection methods. Another advantage of our method is its ability to provide highly accurate onset and offset detection times of MNAs. This capability is important for an automated approach to signal reconstruction of only those data points that need to be reconstructed, which is the subject of the companion paper to this article. Finally, our MNA detection algorithm is real-time capable, as processing a 7-s PPG data segment took only 7 ms in Matlab.
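The four time-domain parameters can be computed directly from detected pulse landmarks. The sketch below is an interpretation under stated assumptions: peak detection is not shown, troughs are assumed to bracket each peak, amplitudes are taken peak-minus-preceding-trough, and the resampling length for the pulse-shape statistic is arbitrary.

```python
import numpy as np

def mna_features(sig, peaks, troughs):
    """The four time-domain parameters listed above, for one PPG segment.
    `peaks`/`troughs` are sample indices with t[i] < p[i] < t[i+1]."""
    p, t = np.asarray(peaks), np.asarray(troughs)
    f1 = np.std(np.diff(p))                   # (1) peak-to-peak intervals
    f2 = np.std(sig[p] - sig[t[:-1]])         # (2) peak-to-peak amplitudes
    sys = p - t[:-1]                          # systolic (rise) intervals
    dia = t[1:] - p                           # diastolic (fall) intervals
    f3 = np.std(sys / dia)                    # (3) systolic/diastolic ratios
    # (4) mean std of pulse shape: resample every pulse to a common length,
    # then average the per-sample standard deviation across pulses
    L = 50
    shapes = np.array([np.interp(np.linspace(0, 1, L),
                                 np.linspace(0, 1, t[i + 1] - t[i]),
                                 sig[t[i]:t[i + 1]])
                       for i in range(t.size - 1)])
    f4 = shapes.std(axis=0).mean()
    return np.array([f1, f2, f3, f4])
```

The resulting four-element vector is what would be handed to the SVM classifier described in the abstract.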
Abd-Elhameed, Waleed M.; Doha, Eid H.; Bassuony, Mahmoud A.
2014-01-01
Two numerical algorithms based on dual-Petrov-Galerkin method are developed for solving the integrated forms of high odd-order boundary value problems (BVPs) governed by homogeneous and nonhomogeneous boundary conditions. Two different choices of trial functions and test functions which satisfy the underlying boundary conditions of the differential equations and the dual boundary conditions are used for this purpose. These choices lead to linear systems with specially structured matrices that can be efficiently inverted, hence greatly reducing the cost. The various matrix systems resulting from these discretizations are carefully investigated, especially their complexities and their condition numbers. Numerical results are given to illustrate the efficiency of the proposed algorithms, and some comparisons with some other methods are made. PMID:24616620
Maxwell, Susan K.
2010-01-01
Satellite imagery and aerial photography represent a vast resource to significantly enhance environmental mapping and modeling applications for use in understanding spatio-temporal relationships between environment and health. Deriving boundaries of land cover objects, such as trees, buildings, and crop fields, from image data has traditionally been performed manually using a very time consuming process of hand digitizing. Boundary detection algorithms are increasingly being applied using object-based image analysis (OBIA) technology to automate the process. The purpose of this paper is to present an overview and demonstrate the application of OBIA for delineating land cover features at multiple scales using a high resolution aerial photograph (1 m) and a medium resolution Landsat image (30 m) time series in the context of a pesticide spray drift exposure application. PMID:21135917
Brain MRI Tumor Detection using Active Contour Model and Local Image Fitting Energy
NASA Astrophysics Data System (ADS)
Nabizadeh, Nooshin; John, Nigel
2014-03-01
Automatic abnormality detection in Magnetic Resonance Imaging (MRI) is an important issue in many diagnostic and therapeutic applications. Here, an automatic brain tumor detection method is introduced that uses T1-weighted images and K. Zhang et al.'s active contour model driven by local image fitting (LIF) energy. The local image fitting energy captures local image information, which enables the algorithm to segment images with intensity inhomogeneities. An advantage of this method is that the LIF energy functional has lower computational complexity than the local binary fitting (LBF) energy functional; moreover, it maintains sub-pixel accuracy and boundary regularization properties. In Zhang's algorithm, a new level set method based on Gaussian filtering is used to implement the variational formulation, which is not only robust in preventing the energy functional from being trapped in local minima, but also effective in keeping the level set function regular. Experiments show that the proposed method achieves highly accurate brain tumor segmentation results.
NASA Astrophysics Data System (ADS)
Liu, Zhaoxin; Zhao, Liaoying; Li, Xiaorun; Chen, Shuhan
2018-04-01
Owing to the limited spatial resolution of the imaging sensor and the variability of ground surfaces, mixed pixels are widespread in hyperspectral imagery. Traditional subpixel mapping algorithms treat all mixed pixels as boundary-mixed pixels while ignoring the existence of linear subpixels. To address this problem, this paper proposes a new subpixel mapping method based on linear subpixel feature detection and object optimization. Firstly, the fraction value of each class is obtained by spectral unmixing. Secondly, linear subpixel features are pre-determined based on the hyperspectral characteristics, and linear subpixels among the remaining mixed pixels are detected based on maximum linearization index analysis; the classes of linear subpixels are determined using a template matching method. Finally, the whole subpixel mapping result is iteratively optimized by a binary particle swarm optimization algorithm. The performance of the proposed subpixel mapping method is evaluated via experiments based on simulated and real hyperspectral data sets. The experimental results demonstrate that the proposed method can improve the accuracy of subpixel mapping.
Samanipour, Saer; Baz-Lomba, Jose A; Alygizakis, Nikiforos A; Reid, Malcolm J; Thomaidis, Nikolaos S; Thomas, Kevin V
2017-06-09
LC-HR-QTOF-MS has recently become a commonly used approach for the analysis of complex samples. However, identification of small organic molecules in complex samples with the highest level of confidence is a challenging task. Here we report on the implementation of a two-stage algorithm for LC-HR-QTOF-MS datasets. We compared the performance of the two-stage algorithm, implemented via NIVA_MZ_Analyzer™, with two commonly used approaches (i.e., feature detection and XIC peak picking, implemented via UNIFI by Waters and TASQ by Bruker, respectively) for the suspect analysis of four influent wastewater samples. We first evaluated the cross-platform compatibility of LC-HR-QTOF-MS datasets generated via instruments from two different manufacturers (i.e., Waters and Bruker). Our data showed that, with an appropriate spectral weighting function, the spectra recorded by the two tested instruments are comparable for our analytes. As a consequence, we were able to perform full spectral comparison between the data generated via the two studied instruments. Four extracts of wastewater influent were analyzed for 89 analytes, yielding 356 detection cases. The analytes were divided into 158 detection cases of artificial suspect analytes (i.e., verified by target analysis) and 198 true suspects. The two-stage algorithm resulted in a zero rate of false positive detection, based on the artificial suspect analytes, while producing a false negative detection rate of 0.12. For the conventional approaches, the rates of false positive detection varied between 0.06 for UNIFI and 0.15 for TASQ. The rates of false negative detection for these methods ranged between 0.07 for TASQ and 0.09 for UNIFI. The effect of background signal complexity on the two-stage algorithm was evaluated through the generation of a synthetic signal. We further discuss the boundaries of applicability of the two-stage algorithm, and the importance of background knowledge and experience in evaluating the reliability of results during suspect screening.
NASA Astrophysics Data System (ADS)
Sehgal, Chandra M.; Kao, Yen H.; Cary, Ted W.; Arger, Peter H.; Mohler, Emile R.
2005-04-01
Endothelial dysfunction in response to vasoactive stimuli is closely associated with diseases such as atherosclerosis, hypertension and congestive heart failure. The current method of using ultrasound to image the brachial artery along the longitudinal axis is insensitive for measuring the small vasodilatation that occurs in response to flow mediation. The goal of this study is to overcome this limitation by using cross-sectional imaging of the brachial artery in conjunction with the User-Guided Automated Boundary Detection (UGABD) algorithm for extracting arterial boundaries. High-resolution ultrasound imaging was performed on rigid plastic tubing, on elastic rubber tubing phantoms with steady and pulsatile flow, and on the brachial artery of a healthy volunteer undergoing reactive hyperemia. The area of cross section of time-series images was analyzed by UGABD by propagating the boundary from one frame to the next. The UGABD results were compared by linear correlation with those obtained by manual tracing. UGABD measured the cross-sectional area of the phantom tubing to within 5% of the true area. The algorithm correctly detected pulsatile vasomotion in phantoms and in the brachial artery. A comparison of area measurements made using UGABD with those made by manual tracings yielded a correlation of 0.9 and 0.8 for phantoms and arteries, respectively. The peak vasodilatation due to reactive hyperemia was two orders of magnitude greater in pixel count than that measured by longitudinal imaging. Cross-sectional imaging is more sensitive than longitudinal imaging for measuring flow-mediated dilatation of brachial artery, and thus may be more suitable for evaluating endothelial dysfunction.
Two-wavelength Lidar inversion algorithm for determining planetary boundary layer height
NASA Astrophysics Data System (ADS)
Liu, Boming; Ma, Yingying; Gong, Wei; Jian, Yang; Ming, Zhang
2018-02-01
This study proposes a two-wavelength Lidar inversion algorithm to determine the boundary layer height (BLH) based on particle clustering. Color ratio and depolarization ratio are used to analyze the particle distribution, based on which the proposed algorithm can overcome the effects of complex aerosol layers to calculate the BLH. The algorithm is used to determine the top of the boundary layer under different mixing states. Experimental results demonstrate that the proposed algorithm can determine the top of the boundary layer even in complex cases. Moreover, it deals better with weak-convection conditions. Finally, experimental data from June 2015 to December 2015 were used to verify the reliability of the proposed algorithm. The correlation between the results of the proposed algorithm and the manual method is R2 = 0.89, with an RMSE of 131 m and a mean bias of 49 m; the correlation between the results of the ideal profile fitting method and the manual method is R2 = 0.64, with an RMSE of 270 m and a mean bias of 165 m; and the correlation between the results of the wavelet covariance transform method and the manual method is R2 = 0.76, with an RMSE of 196 m and a mean bias of 23 m. These findings indicate that the proposed algorithm has better reliability and stability than the traditional algorithms.
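For context, the wavelet covariance transform used as one of the baselines above can be sketched compactly: a Haar wavelet slides along the backscatter profile and the BLH is taken where the transform peaks. The dilation value and the synthetic profile below are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def wct_blh(z, backscatter, a=200.0):
    """Wavelet covariance transform with a Haar wavelet of dilation `a` (m);
    the BLH estimate is the altitude where the profile shows the strongest
    top-of-layer decrease (maximum of the transform)."""
    dz = z[1] - z[0]
    half = int(a / 2 / dz)
    w = np.full(z.size, np.nan)
    for i in range(half, z.size - half):
        below = backscatter[i - half:i]     # wavelet = +1 below altitude b
        above = backscatter[i:i + half]     # wavelet = -1 above altitude b
        w[i] = (below.sum() - above.sum()) * dz / a
    return z[np.nanargmax(w)]

# synthetic profile: well-mixed layer to ~1.2 km, clean air above
z = np.arange(0, 3000.0, 7.5)
prof = 1.0 / (1.0 + np.exp((z - 1200.0) / 60.0)) + 0.05
print(wct_blh(z, prof))   # ~1200 m
```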
Privacy protection versus cluster detection in spatial epidemiology.
Olson, Karen L; Grannis, Shaun J; Mandl, Kenneth D
2006-11-01
Patient data that include precise locations can reveal patients' identities, whereas data aggregated into administrative regions may preserve privacy and confidentiality. We investigated the effect of varying degrees of address precision (exact latitude and longitude vs. the center points of zip codes or census tracts) on the detection of spatial clusters of cases. We simulated disease outbreaks by adding supplementary spatially clustered emergency department visits to authentic hospital emergency department syndromic surveillance data. We identified clusters with a spatial scan statistic and evaluated detection rate and accuracy. More clusters were identified, and clusters were more accurately detected, when exact locations were used; that is, these clusters contained at least half of the simulated points and involved few additional emergency department visits. These results were especially apparent when the synthetic clustered points crossed administrative boundaries and fell into multiple zip codes or census tracts. The spatial cluster detection algorithm performed better when addresses were analyzed as exact locations than when they were analyzed as center points of zip codes or census tracts, particularly when the clustered points crossed administrative boundaries. Use of precise addresses offers improved performance, but this practice must be weighed against privacy concerns in the establishment of public health data exchange policies.
Integral Method of Boundary Characteristics: Neumann Condition
NASA Astrophysics Data System (ADS)
Kot, V. A.
2018-05-01
A new algorithm, based on systems of identical equalities with integral and differential boundary characteristics, is proposed for solving boundary-value problems of heat conduction in bodies of canonical shape under a Neumann boundary condition. Results of a numerical analysis of the accuracy of solving heat-conduction problems with variable boundary conditions using this algorithm are presented. The solutions obtained with it can be considered exact, because their errors comprise hundredths to ten-thousandths of a percent for a wide range of the problem parameters.
Lane detection using Randomized Hough Transform
NASA Astrophysics Data System (ADS)
Mongkonyong, Peerawat; Nuthong, Chaiwat; Siddhichai, Supakorn; Yamakita, Masaki
2018-01-01
According to reports of the Royal Thai Police between 2006 and 2015, unintentional lane changing is one of the leading causes of accidents. Many methods have been considered to address this problem, and the Lane Departure Warning System (LDWS) is one of the potential solutions. LDWS is a mechanism designed to warn the driver when the vehicle begins to move out of its current lane. LDWS contains many parts, including lane boundary detection, driver warning, and lane marker tracking. This article focuses on the lane boundary detection part. The proposed lane boundary detection extracts lines from the input video frames and selects the lane markers of the road surface from those lines. The Standard Hough Transform (SHT) and the Randomized Hough Transform (RHT) are considered in this article; both are used to extract lines from an image. SHT accumulates votes from all of the edge pixels, whereas RHT extracts only lines defined by point pairs randomly picked from the edge pixels, reducing time and memory usage compared with SHT. Increasing the threshold value in RHT raises the vote limit for lines that are likely to be lane markers, but it also consumes more time and memory. To compare SHT and RHT with different threshold values, 500 frames of input video from a front-mounted car camera were processed. In this comparison, the accuracy and computational time of RHT are similar to those of SHT.
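A minimal Randomized Hough Transform sketch follows; the sample count, vote limit, and accumulator bin sizes are assumed parameters, and the two synthetic point clouds stand in for edge pixels from lane markers.

```python
import numpy as np
from collections import Counter

def rht_lines(edge_pixels, n_samples=2000, vote_limit=30,
              rho_step=2.0, theta_step=np.radians(2.0), seed=0):
    """RHT sketch: repeatedly pick a random pair of edge pixels, convert
    the line through them to (rho, theta) normal form, and vote in a
    sparse accumulator; a cell reaching `vote_limit` is reported."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(edge_pixels, dtype=float)
    votes, lines = Counter(), []
    for _ in range(n_samples):
        i, j = rng.choice(pts.shape[0], 2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        theta = np.arctan2(x2 - x1, y1 - y2)   # normal angle of the line
        if theta < 0:                          # canonicalize so each line
            theta += np.pi                     # maps to one (rho, theta)
        rho = x1 * np.cos(theta) + y1 * np.sin(theta)
        key = (round(rho / rho_step), round(theta / theta_step))
        votes[key] += 1
        if votes[key] == vote_limit:           # report each cell once
            lines.append((key[0] * rho_step, np.degrees(key[1] * theta_step)))
    return lines

# two noisy lane markers as edge-pixel clouds
rng = np.random.default_rng(1)
xs = np.arange(0.0, 200.0, 2.0)
left = np.c_[xs, 0.7 * xs + 10 + rng.normal(0, 0.5, xs.size)]
right = np.c_[xs, -0.5 * xs + 150 + rng.normal(0, 0.5, xs.size)]
print(rht_lines(np.vstack([left, right])))   # (rho, theta-deg) candidates
```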
Detecting natural occlusion boundaries using local cues
DiMattina, Christopher; Fox, Sean A.; Lewicki, Michael S.
2012-01-01
Occlusion boundaries and junctions provide important cues for inferring three-dimensional scene organization from two-dimensional images. Although several investigators in machine vision have developed algorithms for detecting occlusions and other edges in natural images, relatively few psychophysics or neurophysiology studies have investigated what features are used by the visual system to detect natural occlusions. In this study, we addressed this question using a psychophysical experiment where subjects discriminated image patches containing occlusions from patches containing surfaces. Image patches were drawn from a novel occlusion database containing labeled occlusion boundaries and textured surfaces in a variety of natural scenes. Consistent with related previous work, we found that relatively large image patches were needed to attain reliable performance, suggesting that human subjects integrate complex information over a large spatial region to detect natural occlusions. By defining machine observers using a set of previously studied features measured from natural occlusions and surfaces, we demonstrate that simple features defined at the spatial scale of the image patch are insufficient to account for human performance in the task. To define machine observers using a more biologically plausible multiscale feature set, we trained standard linear and neural network classifiers on the rectified outputs of a Gabor filter bank applied to the image patches. We found that simple linear classifiers could not match human performance, while a neural network classifier combining filter information across location and spatial scale compared well. These results demonstrate the importance of combining a variety of cues defined at multiple spatial scales for detecting natural occlusions. PMID:23255731
Multiscale-Driven approach to detecting change in Synthetic Aperture Radar (SAR) imagery
NASA Astrophysics Data System (ADS)
Gens, R.; Hogenson, K.; Ajadi, O. A.; Meyer, F. J.; Myers, A.; Logan, T. A.; Arnoult, K., Jr.
2017-12-01
Detecting changes between Synthetic Aperture Radar (SAR) images can be a useful but challenging exercise. SAR with its all-weather capabilities can be an important resource in identifying and estimating the expanse of events such as flooding, river ice breakup, earthquake damage, oil spills, and forest growth, as it can overcome shortcomings of optical methods related to cloud cover. However, detecting change in SAR imagery can be impeded by many factors including speckle, complex scattering responses, low temporal sampling, and difficulty delineating boundaries. In this presentation we use a change detection method based on a multiscale-driven approach. By using information at different resolution levels, we attempt to obtain more accurate change detection maps in both heterogeneous and homogeneous regions. Integrated within the processing flow are processes that 1) improve classification performance by combining Expectation-Maximization algorithms with mathematical morphology, 2) achieve high accuracy in preserving boundaries using measurement level fusion techniques, and 3) combine modern non-local filtering and 2D-discrete stationary wavelet transform to provide robustness against noise. This multiscale-driven approach to change detection has recently been incorporated into the Alaska Satellite Facility (ASF) Hybrid Pluggable Processing Pipeline (HyP3) using radiometrically terrain corrected SAR images. Examples primarily from natural hazards are presented to illustrate the capabilities and limitations of the change detection method.
Detecting kinematic boundary surfaces in phase space: particle mass measurements in SUSY-like events
NASA Astrophysics Data System (ADS)
Debnath, Dipsikha; Gainer, James S.; Kilic, Can; Kim, Doojin; Matchev, Konstantin T.; Yang, Yuan-Pao
2017-06-01
We critically examine the classic endpoint method for particle mass determination, focusing on difficult corners of parameter space, where some of the measurements are not independent, while others are adversely affected by the experimental resolution. In such scenarios, mass differences can be measured relatively well, but the overall mass scale remains poorly constrained. Using the example of the standard SUSY decay chain $\tilde{q} \to \tilde{\chi}_2^0 \to \tilde{\ell} \to \tilde{\chi}_1^0$, we demonstrate that sensitivity to the remaining mass scale parameter can be recovered by measuring the two-dimensional kinematical boundary in the relevant three-dimensional phase space of invariant masses squared. We develop an algorithm for detecting this boundary, which uses the geometric properties of the Voronoi tessellation of the data, and in particular, the relative standard deviation (RSD) of the volumes of the neighbors for each Voronoi cell in the tessellation. We propose a new observable, $\overline{\Sigma}$, which is the average RSD per unit area, calculated over the hypothesized boundary. We show that the location of the $\overline{\Sigma}$ maximum correlates very well with the true values of the new particle masses. Our approach represents the natural extension of the one-dimensional kinematic endpoint method to the relevant three dimensions of invariant mass phase space.
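The geometric ingredient of the $\overline{\Sigma}$ observable, the RSD of neighbor-cell volumes, can be sketched with SciPy. This illustrates the Voronoi bookkeeping only, not the paper's full boundary-detection algorithm, and the toy point cloud is an assumption.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def neighbor_volume_rsd(points):
    """RSD of neighbor-cell volumes for each Voronoi cell: cells straddling
    a density boundary have neighbors of very different sizes, so a large
    RSD flags boundary candidates."""
    vor = Voronoi(points)
    vols = np.full(len(points), np.nan)
    for i, reg in enumerate(vor.point_region):
        verts = vor.regions[reg]
        if verts and -1 not in verts:                # bounded cells only
            vols[i] = ConvexHull(vor.vertices[verts]).volume
    neighbors = [[] for _ in points]
    for p, q in vor.ridge_points:                    # cells sharing a facet
        neighbors[p].append(q)
        neighbors[q].append(p)
    rsd = np.full(len(points), np.nan)
    for i, nb in enumerate(neighbors):
        v = vols[nb]
        v = v[np.isfinite(v)]
        if v.size > 1 and v.mean() > 0:
            rsd[i] = v.std() / v.mean()
    return rsd

# 3-D toy data: a dense region next to a sparse one, boundary at x = 0.5
rng = np.random.default_rng(0)
pts = np.vstack([rng.uniform(0, 1, (400, 3)) * [0.5, 1, 1],
                 rng.uniform(0, 1, (100, 3)) * [1, 1, 1] + [0.5, 0, 0]])
rsd = neighbor_volume_rsd(pts)
print(pts[np.nanargmax(rsd)])   # should lie near the density step x = 0.5
```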
Doha, E.H.; Abd-Elhameed, W.M.; Youssri, Y.H.
2014-01-01
Two families of certain nonsymmetric generalized Jacobi polynomials with negative integer indexes are employed for solving third- and fifth-order two-point boundary value problems governed by homogeneous and nonhomogeneous boundary conditions using a dual Petrov–Galerkin method. The idea behind our method is to use trial functions satisfying the underlying boundary conditions of the differential equations and test functions satisfying the dual boundary conditions. The resulting linear systems from the application of our method are specially structured and can be efficiently inverted. The use of generalized Jacobi polynomials simplifies the theoretical and numerical analysis of the method and also leads to accurate and efficient numerical algorithms. The presented numerical results indicate that the proposed numerical algorithms are reliable and very efficient. PMID:26425358
NASA Astrophysics Data System (ADS)
Liu, Tao; Im, Jungho; Quackenbush, Lindi J.
2015-12-01
This study provides a novel approach to individual tree crown delineation (ITCD) from airborne Light Detection and Ranging (LiDAR) data in dense natural forests, via two main steps: crown boundary refinement based on a proposed Fishing Net Dragging (FiND) method, and segment merging based on boundary classification. FiND starts with approximate tree crown boundaries derived using a traditional watershed method with Gaussian filtering, and refines these boundaries using an algorithm that mimics how a fisherman drags a fishing net. Random forest machine learning is then used to classify boundary segments into two classes: boundaries between trees, and boundaries between branches that belong to a single tree. Three groups of LiDAR-derived features were used in the classification: two from the pseudo waveform generated along the crown boundaries, and one from a canopy height model (CHM). The proposed ITCD approach was tested using LiDAR data collected over a mountainous region in the Adirondack Park, NY, USA. The overall accuracy of boundary classification was 82.4%. Features derived from the CHM were generally more important in the classification than the features extracted from the pseudo waveform. A comprehensive accuracy assessment scheme for ITCD was also introduced, considering both the area of crown overlap and crown centroids. Accuracy assessment using this new scheme shows that the proposed ITCD approach achieved overall accuracies of 74% and 78% for deciduous and mixed forests, respectively.
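The boundary-classification step is a standard supervised setup, sketched below with scikit-learn. The feature table is a placeholder: the three columns stand in for pseudo-waveform and CHM statistics, and all values and class separations are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# placeholder features for boundary segments (rows): the three columns
# stand in for pseudo-waveform and CHM-derived measures
rng = np.random.default_rng(42)
X_tree = rng.normal([2.0, 0.8, 5.0], 0.5, (150, 3))    # between-tree class
X_branch = rng.normal([1.0, 0.3, 2.0], 0.5, (150, 3))  # within-crown class
X = np.vstack([X_tree, X_branch])
y = np.r_[np.ones(150), np.zeros(150)]                 # 1 = tree boundary

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.predict([[1.9, 0.7, 4.6]]))   # -> [1.]: keep as a tree boundary
# merging rule: segments separated by a class-0 (branch) boundary are merged
```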
Algorithmic network monitoring for a modern water utility: a case study in Jerusalem.
Armon, A; Gutner, S; Rosenberg, A; Scolnicov, H
2011-01-01
We report on the design, deployment, and use of TaKaDu, a real-time algorithmic Water Infrastructure Monitoring solution, with a strong focus on water loss reduction and control. TaKaDu is provided as a commercial service to several customers worldwide. It has been in use at HaGihon, the Jerusalem utility, since mid 2009. Water utilities collect considerable real-time data from their networks, e.g. by means of a SCADA system and sensors measuring flow, pressure, and other data. We discuss how an algorithmic statistical solution analyses this wealth of raw data, flexibly using many types of input and picking out and reporting significant events and failures in the network. Of particular interest to most water utilities is the early detection capability for invisible leaks, also a means for preventing large visible bursts. The system also detects sensor and SCADA failures, various water quality issues, DMA boundary breaches, unrecorded or unintended network changes (like a valve or pump state change), and other events, including types unforeseen during system design. We discuss results from use at HaGihon, showing clear operational value.
COMPLEX VARIABLE BOUNDARY ELEMENT METHOD: APPLICATIONS.
Hromadka, T.V.; Yen, C.C.; Guymon, G.L.
1985-01-01
The complex variable boundary element method (CVBEM) is used to approximate several potential problems where analytical solutions are known. A modeling result produced from the CVBEM is a measure of relative error in matching the known boundary condition values of the problem. A CVBEM error-reduction algorithm is used to reduce the relative error of the approximation by adding nodal points in boundary regions where error is large. From the test problems, overall error is reduced significantly by utilizing the adaptive integration algorithm.
NASA Technical Reports Server (NTRS)
Ghil, M.; Balgovind, R.
1979-01-01
The inhomogeneous Cauchy-Riemann equations in a rectangle are discretized by a finite difference approximation. Several different boundary conditions are treated explicitly, leading to algorithms which have overall second-order accuracy. All boundary conditions with either u or v prescribed along a side of the rectangle can be treated by similar methods. The algorithms presented here have nearly minimal time and storage requirements and seem suitable for development into a general-purpose direct Cauchy-Riemann solver for arbitrary boundary conditions.
Computation of the shock-wave boundary layer interaction with flow separation
NASA Technical Reports Server (NTRS)
Ardonceau, P.; Alziary, T.; Aymer, D.
1980-01-01
The boundary layer concept is used to describe the flow near the wall. The external flow is approximated by a pressure-displacement relationship (tangent wedge in linearized supersonic flow). The boundary layer equations are solved in finite difference form, and the question of the existence and uniqueness of the solution is considered for the direct problem (assumed pressure) and the inverse problem (assumed displacement thickness, friction ratio). The coupling algorithm presented implicitly processes the downstream boundary condition necessary to correctly define the interacting boundary layer problem. The algorithm uses a Newton linearization technique to provide fast convergence.
NASA Astrophysics Data System (ADS)
Wang, X. Y.; Dou, J. M.; Shen, H.; Li, J.; Yang, G. S.; Fan, R. Q.; Shen, Q.
2018-03-01
As power grids are continuously reinforced, their network structures become increasingly complicated. Open, region-based data modeling is used to calculate protection settings for each local region. At the same time, a high-precision, quasi-real-time boundary fusion technique is needed to seamlessly integrate the various regions into a unified fault-computing platform that can perform highly accurate, multi-mode transient stability analysis covering the whole network, handle the impacts of non-single and cascading faults, and build "the first line of defense" of the power grid. The boundary fusion algorithm presented in this paper is an automatic algorithm based on accurate coupling at the boundaries of grid partitions. It takes the actual operation mode as its qualifying condition and completes the boundary coupling of the various weakly coupled partitions in an open-loop mode, improving fusion efficiency, truly reflecting the transient stability level of the grid, and effectively addressing the problems of excessive data volume, difficult partition fusion, and fusion failures caused by mutually exclusive conditions. The paper first introduces the basic principle of the fusion process, then describes the customization of boundary fusion through scene descriptions, and finally gives an example to illustrate how the algorithm implements boundary fusion after grid partitioning and to verify its accuracy and efficiency.
An algorithm to help design fire simulation and other data base work
Romain Mees
1974-01-01
The data necessary for fire simulation may be made available through an algorithm based on tracing boundaries composed of straight-line segments. Useful assumptions are that if a closed boundary does not contain a given point, then any other closed boundary contained within the former one does not contain that point; and that a given location will be contained in...
A finite element conjugate gradient FFT method for scattering
NASA Technical Reports Server (NTRS)
Collins, Jeffery D.; Zapp, John; Hsa, Chang-Yu; Volakis, John L.
1990-01-01
An extension of a two-dimensional formulation is presented for a three-dimensional body of revolution. With the introduction of a Fourier expansion of the vector electric and magnetic fields, a coupled two-dimensional system is generated and solved via the finite element method. An exact boundary condition is employed to terminate the mesh, and the fast Fourier transform (FFT) is used to evaluate the boundary integrals for low O(n) memory demand when an iterative solution algorithm is used. By virtue of the finite element method, the algorithm is applicable to structures of arbitrary material composition. Several improvements to the two-dimensional algorithm are also described. These include: (1) modifications for terminating the mesh at circular boundaries without distorting the convolutionality of the boundary integrals; (2) the development of nonproprietary mesh generation routines for two-dimensional applications; (3) the development of preprocessors for interfacing SDRC IDEAS with the main algorithm; and (4) the development of post-processing algorithms based on the public domain package GRAFIC to generate two- and three-dimensional gray level and color field maps.
Maxwell, Susan K
2010-12-01
Satellite imagery and aerial photography represent a vast resource to significantly enhance environmental mapping and modeling applications for use in understanding spatio-temporal relationships between environment and health. Deriving boundaries of land cover objects, such as trees, buildings, and crop fields, from image data has traditionally been performed manually using a very time consuming process of hand digitizing. Boundary detection algorithms are increasingly being applied using object-based image analysis (OBIA) technology to automate the process. The purpose of this paper is to present an overview and demonstrate the application of OBIA for delineating land cover features at multiple scales using a high resolution aerial photograph (1 m) and a medium resolution Landsat image (30 m) time series in the context of a pesticide spray drift exposure application. Copyright © 2010. Published by Elsevier Ltd.
Non-fragile consensus algorithms for a network of diffusion PDEs with boundary local interaction
NASA Astrophysics Data System (ADS)
Xiong, Jun; Li, Junmin
2017-07-01
In this study, a non-fragile consensus algorithm is proposed to solve the average consensus problem for a network of diffusion PDEs modelled by boundary-controlled heat equations. The problem deals with the case where the Neumann-type boundary controllers are corrupted by additive persistent disturbances. To achieve consensus between agents, a linear local interaction rule addressing this requirement is given. The proposed local interaction rules are analysed by applying a Lyapunov-based approach. Multiplicative and additive non-fragile feedback control algorithms are designed, and sufficient conditions for the consensus of the multi-agent systems are presented in terms of linear matrix inequalities. Simulation results are presented to support the effectiveness of the proposed algorithms.
Fully Automated Detection of Cloud and Aerosol Layers in the CALIPSO Lidar Measurements
NASA Technical Reports Server (NTRS)
Vaughan, Mark A.; Powell, Kathleen A.; Kuehn, Ralph E.; Young, Stuart A.; Winker, David M.; Hostetler, Chris A.; Hunt, William H.; Liu, Zhaoyan; McGill, Matthew J.; Getzewich, Brian J.
2009-01-01
Accurate knowledge of the vertical and horizontal extent of clouds and aerosols in the earth's atmosphere is critical in assessing the planet's radiation budget and for advancing human understanding of climate change issues. To retrieve this fundamental information from the elastic backscatter lidar data acquired during the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) mission, a selective, iterated boundary location (SIBYL) algorithm has been developed and deployed. SIBYL accomplishes its goals by integrating an adaptive context-sensitive profile scanner into an iterated multiresolution spatial averaging scheme. This paper provides an in-depth overview of the architecture and performance of the SIBYL algorithm. It begins with a brief review of the theory of target detection in noise-contaminated signals, and an enumeration of the practical constraints levied on the retrieval scheme by the design of the lidar hardware, the geometry of a space-based remote sensing platform, and the spatial variability of the measurement targets. Detailed descriptions are then provided for both the adaptive threshold algorithm used to detect features of interest within individual lidar profiles and the fully automated multiresolution averaging engine within which this profile scanner functions. The resulting fusion of profile scanner and averaging engine is specifically designed to optimize the trade-offs between the widely varying signal-to-noise ratio of the measurements and the disparate spatial resolutions of the detection targets. Throughout the paper, specific algorithm performance details are illustrated using examples drawn from the existing CALIPSO dataset. Overall performance is established by comparisons to existing layer height distributions obtained by other airborne and space-based lidars.
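The adaptive-threshold profile scanning idea can be sketched compactly. The following is a toy illustration of the general principle only, not the SIBYL implementation: bins are flagged as features when they exceed a threshold derived from the statistics of recently seen feature-free bins.

```python
import numpy as np

def find_features(profile, window=30, k=3.0):
    """Return a boolean mask of bins whose signal exceeds mean + k*std
    of a trailing 'clear-air' window; flagged bins do not update the noise
    estimate, so the threshold adapts only to feature-free signal."""
    mask = np.zeros_like(profile, dtype=bool)
    clear = list(profile[:window])            # seed the noise estimate
    for i in range(window, len(profile)):
        mu, sigma = np.mean(clear), np.std(clear)
        if profile[i] > mu + k * sigma:
            mask[i] = True                    # feature detected
        else:
            clear.pop(0); clear.append(profile[i])
    return mask
```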
NASA Astrophysics Data System (ADS)
Hatayama, Ken; Fujiwara, Hiroyuki
1998-05-01
This paper aims to present a new method to calculate surface waves in 3-D sedimentary basin models, based on the direct boundary element method (BEM) with vertical boundaries and normal modes, and to evaluate the excitation of secondary surface waves that are prominently observed in basins. Many authors have so far developed numerical techniques to calculate the total 3-D wavefield. However, the calculation of the total wavefield does not match our purpose, because the secondary surface waves excited on the basin boundaries will be contaminated by other undesirable waves. In this paper, we prove that, in principle, it is possible to extract surface waves excited on part of the basin boundaries from the total 3-D wavefield with a formulation that uses the reflection and transmission operators defined in the space domain. In realizing this extraction in the BEM algorithm, we encounter the problem arising from the lateral and vertical truncations of boundary surfaces extending infinitely in the half-space. To compensate for the truncations, we first introduce an approximate algorithm using 2.5-D and 1-D wavefields for reference media, where a 2.5-D wavefield means a 3-D wavefield with a 2-D subsurface structure, and we then demonstrate the extraction. Finally, we calculate the secondary surface waves excited on the arc shape (horizontal section) of a vertical basin boundary subject to incident SH and SV plane waves propagating perpendicularly to the chord of the arc. As a result, we find that in the SH-incidence case the Love waves are predominantly excited, rather than the Rayleigh waves, and that in the SV-incidence case the Love waves as well as the Rayleigh waves are excited. This suggests that the Love waves are more detectable than the Rayleigh waves in the horizontal components of observed recordings.
An advancing front Delaunay triangulation algorithm designed for robustness
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1992-01-01
A new algorithm is described for generating an unstructured mesh about an arbitrary two-dimensional configuration. Mesh points are generated automatically by the algorithm in a manner which ensures a smooth variation of elements, and the resulting triangulation constitutes the Delaunay triangulation of these points. The algorithm combines the mathematical elegance and efficiency of Delaunay triangulation algorithms with the desirable point placement features, boundary integrity, and robustness traditionally associated with advancing-front-type mesh generation strategies. The method offers increased robustness over previous algorithms in that it cannot fail regardless of the initial boundary point distribution and the prescribed cell size distribution throughout the flow-field.
Natural Scales in Geographical Patterns
NASA Astrophysics Data System (ADS)
Menezes, Telmo; Roth, Camille
2017-04-01
Human mobility is known to be distributed across several orders of magnitude of physical distances, which makes it generally difficult to endogenously find or define typical and meaningful scales. Relevant analyses, from movements to geographical partitions, seem to be relative to some ad-hoc scale, or no scale at all. Relying on geotagged data collected from photo-sharing social media, we apply community detection to movement networks constrained by increasing percentiles of the distance distribution. Using a simple parameter-free discontinuity detection algorithm, we discover clear phase transitions in the community partition space. The detection of these phases constitutes the first objective method of characterising endogenous, natural scales of human movement. Our study covers nine regions, ranging from cities to countries of various sizes and a transnational area. For all regions, the number of natural scales is remarkably low (2 or 3). Further, our results hint at scale-related behaviours rather than scale-related users. The partitions of the natural scales allow us to draw discrete multi-scale geographical boundaries, potentially capable of providing key insights in fields such as epidemiology or cultural contagion where the introduction of spatial boundaries is pivotal.
Fast and Accurate Support Vector Machines on Large Scale Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vishnu, Abhinav; Narasimhan, Jayenthi; Holder, Larry
The Support Vector Machine (SVM) is a supervised Machine Learning and Data Mining (MLDM) algorithm which has become ubiquitous largely due to its high accuracy and obliviousness to dimensionality. The objective of SVM is to find an optimal boundary --- also known as a hyperplane --- which separates the samples (examples in a dataset) of different classes by a maximum margin. Usually, very few samples contribute to the definition of the boundary. However, existing parallel algorithms use the entire dataset for finding the boundary, which is sub-optimal for performance reasons. In this paper, we propose a novel distributed memory algorithm to eliminate the samples which do not contribute to the boundary definition in SVM. We propose several heuristics, which range from early (aggressive) to late (conservative) elimination of the samples, such that the overall time for generating the boundary is reduced considerably. In a few cases, a sample may be eliminated (shrunk) pre-emptively --- potentially resulting in an incorrect boundary. We propose a scalable approach to synchronize the necessary data structures such that the proposed algorithm maintains its accuracy. We consider the necessary trade-offs of single/multiple synchronization using in-depth time-space complexity analysis. We implement the proposed algorithm using MPI and compare it with libsvm --- the de facto sequential SVM software --- which we enhance with OpenMP for multi-core/many-core parallelism. Our proposed approach shows excellent efficiency using up to 4096 processes on several large datasets such as the UCI HIGGS Boson dataset and the Offending URL dataset.
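A conservative, single-machine version of the sample-elimination heuristic can be sketched with scikit-learn (an assumption of convenience; the paper's implementation is a distributed MPI code). A cheap probe model trained on a subsample identifies points lying far outside the margin band, which are dropped before the full training run, since such points rarely become support vectors.

```python
import numpy as np
from sklearn.svm import SVC

def shrink_then_train(X, y, subsample=0.2, band=1.5, seed=0):
    """Binary classification sketch; 'band' should be generous so that
    no true support vector is eliminated pre-emptively."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=int(subsample * len(X)), replace=False)
    probe = SVC(kernel="rbf").fit(X[idx], y[idx])      # cheap probe model
    margin = probe.decision_function(X)                # signed margin values
    y_signed = np.where(y == probe.classes_[1], 1.0, -1.0)
    keep = y_signed * margin < band    # keep points near or beyond the margin
    return SVC(kernel="rbf").fit(X[keep], y[keep])     # full train on survivors
```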
NASA Technical Reports Server (NTRS)
Han, Jongil; Arya, S. Pal; Shaohua, Shen; Lin, Yuh-Lang; Proctor, Fred H. (Technical Monitor)
2000-01-01
Algorithms are developed to extract atmospheric boundary layer profiles for turbulence kinetic energy (TKE) and energy dissipation rate (EDR), with data from a meteorological tower as input. The profiles are based on similarity theory and scalings for the atmospheric boundary layer. The calculated profiles of EDR and TKE are required to match the observed values at 5 and 40 m. The algorithms are coded for operational use and yield plausible profiles over the diurnal variation of the atmospheric boundary layer.
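As a rough illustration of building profiles from a single tower level, the sketch below uses textbook neutral surface-layer scalings (EDR = u*^3/(kappa z), TKE roughly constant at c*u*^2), not the paper's stability-dependent formulation, and calibrates u* so the dissipation profile matches an observed value at 5 m.

```python
import numpy as np

KAPPA, C_TKE = 0.4, 5.5   # von Karman constant; assumed TKE scaling constant

def profiles(z, edr_obs_5m):
    """Neutral surface-layer sketch: invert EDR at 5 m for u*, then build
    EDR (m^2 s^-3) and TKE (m^2 s^-2) profiles on heights z (m)."""
    u_star = (KAPPA * 5.0 * edr_obs_5m) ** (1.0 / 3.0)
    edr = u_star**3 / (KAPPA * z)                # decays with height
    tke = np.full_like(z, C_TKE * u_star**2)     # ~constant near the surface
    return tke, edr

z = np.linspace(5.0, 200.0, 40)
tke, edr = profiles(z, edr_obs_5m=1e-3)
```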
Automated feature extraction in color retinal images by a model based approach.
Li, Huiqi; Chutatape, Opas
2004-02-01
Color retinal photography is an important tool to detect the evidence of various eye diseases. Novel methods to extract the main features in color retinal images have been developed in this paper. Principal component analysis is employed to locate the optic disk; a modified active shape model is proposed for the shape detection of the optic disk; a fundus coordinate system is established to provide a better description of the features in the retinal images; and an approach to detect exudates by combined region growing and edge detection is proposed. The success rates of disk localization, disk boundary detection, and fovea localization are 99%, 94%, and 100%, respectively. The sensitivity and specificity of exudate detection are 100% and 71%, respectively. The success of the proposed algorithms can be attributed to the utilization of model-based methods. The detection and analysis could be applied to automatic mass screening and diagnosis of retinal diseases.
NASA Astrophysics Data System (ADS)
Niu, Chun-Yang; Qi, Hong; Huang, Xing; Ruan, Li-Ming; Wang, Wei; Tan, He-Ping
2015-11-01
A hybrid least-square QR decomposition (LSQR)-particle swarm optimization (LSQR-PSO) algorithm was developed to estimate the three-dimensional (3D) temperature distributions and absorption coefficients simultaneously. The outgoing radiative intensities at the boundary surface of the absorbing media were simulated by the line-of-sight (LOS) method, which served as the input for the inverse analysis. The retrieval results showed that the 3D temperature distributions of the participating media with known radiative properties could be retrieved accurately using the LSQR algorithm, even with noisy data. For the participating media with unknown radiative properties, the 3D temperature distributions and absorption coefficients could be retrieved accurately using the LSQR-PSO algorithm even with measurement errors. It was also found that the temperature field could be estimated more accurately than the absorption coefficients. In order to gain insight into the effects on the accuracy of temperature distribution reconstruction, the selection of the detection direction and the angle between two detection directions was also analyzed. Project supported by the Major National Scientific Instruments and Equipment Development Special Foundation of China (Grant No. 51327803), the National Natural Science Foundation of China (Grant No. 51476043), and the Fund of Tianjin Key Laboratory of Civil Aircraft Airworthiness and Maintenance in Civil Aviation University of China.
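The LSQR building block itself is readily available in SciPy. The sketch below replaces the paper's line-of-sight forward model with a random linear operator, purely to show how a damped least-squares retrieval is set up; all names and sizes are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))        # stand-in sensitivity matrix
x_true = rng.standard_normal(50)          # stand-in temperature unknowns
b = A @ x_true + 0.01 * rng.standard_normal(200)   # noisy "boundary intensities"

x_est = lsqr(A, b, damp=1e-3)[0]          # damped least squares via LSQR
print(np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true))  # relative error
```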
NASA Technical Reports Server (NTRS)
Collins, Jeffery D.; Volakis, John L.; Jin, Jian-Ming
1990-01-01
A new technique is presented for computing the scattering by 2-D structures of arbitrary composition. The proposed solution approach combines the usual finite element method with the boundary-integral equation to formulate a discrete system. This is subsequently solved via the conjugate gradient (CG) algorithm. A particular characteristic of the method is the use of rectangular boundaries to enclose the scatterer. Several of the resulting boundary integrals are therefore convolutions and may be evaluated via the fast Fourier transform (FFT) in the implementation of the CG algorithm. The solution approach offers the principal advantage of having O(N) memory demand and employs a 1-D FFT versus a 2-D FFT as required with a traditional implementation of the CGFFT algorithm. The speed of the proposed solution method is compared with that of the traditional CGFFT algorithm, and results for rectangular bodies are given and shown to be in excellent agreement with the moment method.
NASA Technical Reports Server (NTRS)
Collins, Jeffery D.; Volakis, John L.
1989-01-01
A new technique is presented for computing the scattering by 2-D structures of arbitrary composition. The proposed solution approach combines the usual finite element method with the boundary integral equation to formulate a discrete system. This is subsequently solved via the conjugate gradient (CG) algorithm. A particular characteristic of the method is the use of rectangular boundaries to enclose the scatterer. Several of the resulting boundary integrals are therefore convolutions and may be evaluated via the fast Fourier transform (FFT) in the implementation of the CG algorithm. The solution approach offers the principal advantage of having O(N) memory demand and employs a 1-D FFT versus a 2-D FFT as required with a traditional implementation of the CGFFT algorithm. The speed of the proposed solution method is compared with that of the traditional CGFFT algorithm, and results for rectangular bodies are given and shown to be in excellent agreement with the moment method.
A novel spatial-temporal detection method of dim infrared moving small target
NASA Astrophysics Data System (ADS)
Chen, Zhong; Deng, Tao; Gao, Lei; Zhou, Heng; Luo, Song
2014-09-01
Detection of small moving targets against complex backgrounds in infrared image sequences is one of the major challenges for modern military Early Warning Systems (EWS) and Long-Range Strike (LRS) applications. Because of low SNR and undulating backgrounds, infrared small moving target detection has long been a difficult problem. To solve this problem, a novel spatial-temporal detection method based on bi-dimensional empirical mode decomposition (EMD) and time-domain differencing is proposed in this paper. The method is entirely data-driven and does not rely on any transition kernel function, so it has strong adaptive capacity. Firstly, we generalize the 1D EMD algorithm to the 2D case. In this process, we solve a series of issues in 2D EMD, such as the large amount of data operations, defining and identifying extrema in the 2D case, and boundary corrosion of the two-dimensional signal. The EMD algorithm studied here is well suited to the automatic detection of small targets under low SNR and complex backgrounds. Secondly, considering the characteristics of moving targets, we propose an improved filtering method based on three-frame differencing, built on the original time-domain difference filtering, which greatly improves the anti-jamming ability of the algorithm. Finally, we propose a new time-space fusion method that combines 2D EMD with the improved time-domain differential filtering. Experimental results show that this method works well for detecting small moving infrared targets under low SNR and complex backgrounds.
Zakeri, Fahimeh Sadat; Setarehdan, Seyed Kamaledin; Norouzi, Somayye
2017-10-01
Segmentation of the arterial wall boundaries from intravascular ultrasound images is an important image processing task for quantifying arterial wall characteristics such as shape, area, thickness and eccentricity. Since manual segmentation of these boundaries is a laborious and time-consuming procedure, many researchers have attempted to develop (semi-)automatic segmentation techniques as a powerful tool for educational and clinical purposes, but as yet there is no clinically approved method on the market. This paper presents a deterministic-statistical strategy for automatic media-adventitia border detection by a fourfold algorithm. First, a smoothed initial contour is extracted based on classification in the sparse representation framework combined with the dynamic directional convolution vector field. Next, an active contour model is utilized to propagate the initial contour toward the borders of interest. Finally, the extracted contour is refined in the leakage, side branch opening and calcification regions based on the image texture patterns. The performance of the proposed algorithm is evaluated by comparing the results to borders manually traced by an expert on 312 different IVUS images obtained from four different patients. The statistical analysis of the results demonstrates the efficiency of the proposed method in media-adventitia border detection, with adequate consistency in the leakage and calcification regions. Copyright © 2017 Elsevier Ltd. All rights reserved.
Enhanced Line Integral Convolution with Flow Feature Detection
NASA Technical Reports Server (NTRS)
Lane, David; Okada, Arthur
1996-01-01
The Line Integral Convolution (LIC) method, which blurs white noise textures along a vector field, is an effective way to visualize overall flow patterns in a 2D domain. The method produces a flow texture image based on the input velocity field defined in the domain. Because of the nature of the algorithm, the texture image tends to be blurry. This sometimes makes it difficult to identify boundaries where flow separation and reattachments occur. We present techniques to enhance LIC texture images and use colored texture images to highlight flow separation and reattachment boundaries. Our techniques have been applied to several flow fields defined in 3D curvilinear multi-block grids and scientists have found the results to be very useful.
Robot body self-modeling algorithm: a collision-free motion planning approach for humanoids.
Leylavi Shoushtari, Ali
2016-01-01
Motion planning for humanoid robots is one of the critical issues due to their high redundancy and to theoretical and technical considerations such as stability, motion feasibility and collision avoidance. The strategies which the central nervous system employs to plan, signal and control human movements are a source of inspiration for dealing with these problems. Self-modeling is a concept inspired by body self-awareness in humans. In this research it is integrated into an optimal motion planning framework in order to detect and avoid collision of the manipulated object with the humanoid's body while performing a dynamic task. Twelve parametric functions are designed as self-models to determine the boundary of the humanoid's body. The boundaries mathematically defined by the self-models are then employed to calculate the safe region in which the box avoids collision with the robot. Four different objective functions are employed in motion simulation to validate the robustness of the algorithm under different dynamics. The results also confirm the collision avoidance and the realism and stability of the predicted motion.
NASA Astrophysics Data System (ADS)
Limonova, Elena; Tropin, Daniil; Savelyev, Boris; Mamay, Igor; Nikolaev, Dmitry
2018-04-01
In this paper we describe a stitching protocol that makes it possible to obtain high-resolution images of long monochromatic objects with periodic structure. This protocol can be used for long documents or human-induced objects in satellite images of uninhabitable regions such as the Arctic. The length of such objects can be considerable, while modern camera sensors have limited resolution and are not able to provide a good enough image of the whole object for further processing, e.g. use in an OCR system. The idea of the proposed method is to acquire a video stream containing the full object in high resolution and use image stitching. We expect the scanned object to have straight boundaries and periodic structure, which allows us to introduce regularization into the stitching problem and adapt the algorithm to the limited computational power of mobile and embedded CPUs. With the help of the detected boundaries and structure we estimate the homography between frames and use this information to reduce the complexity of stitching. We demonstrate our algorithm on a mobile device and show an image processing speed of 2 fps on a Samsung Exynos 5422 processor.
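A generic feature-based version of the pairwise homography step can be written with OpenCV; this is shown for orientation only, since the paper instead exploits the detected straight boundaries and periodic structure to regularize and cheapen this estimate.

```python
import cv2
import numpy as np

def pairwise_homography(img1, img2):
    """Estimate the homography mapping img1 onto img2 from ORB feature
    matches with a RANSAC robust fit. Inputs are grayscale frames."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # robust fit
    return H
```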
Guo, Junbin; Wang, Jianqiang; Guo, Xiaosong; Yu, Chuanqiang; Sun, Xiaoyan
2014-01-01
Preceding vehicle detection and tracking at nighttime are challenging problems due to the disturbance of other extraneous illuminant sources coexisting with the vehicle lights. To improve the detection accuracy and robustness of vehicle detection, a novel method for vehicle detection and tracking at nighttime is proposed in this paper. The characteristics of taillights in the gray level are applied to determine the lower boundary of the threshold for taillights segmentation, and the optimal threshold for taillight segmentation is calculated using the OTSU algorithm between the lower boundary and the highest grayscale of the region of interest. The candidate taillight pairs are extracted based on the similarity between left and right taillights, and the non-vehicle taillight pairs are removed based on the relevance analysis of vehicle location between frames. To reduce the false negative rate of vehicle detection, a vehicle tracking method based on taillights estimation is applied. The taillight spot candidate is sought in the region predicted by Kalman filtering, and the disturbed taillight is estimated based on the symmetry and location of the other taillight of the same vehicle. Vehicle tracking is completed after estimating its location according to the two taillight spots. The results of experiments on a vehicle platform indicate that the proposed method could detect vehicles quickly, correctly and robustly in the actual traffic environments with illumination variation. PMID:25195855
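The range-restricted Otsu search described in the abstract can be sketched in plain NumPy: the between-class variance is maximized only over thresholds between the taillight lower boundary `lo` and the highest gray level `hi` of the region of interest (variable names are ours, not the paper's).

```python
import numpy as np

def otsu_in_range(gray, lo, hi):
    """Otsu's threshold restricted to the gray-level interval [lo, hi];
    'gray' is an 8-bit image or ROI."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    levels = np.arange(256)
    best_t, best_var = lo, -1.0
    for t in range(lo + 1, hi):
        w0, w1 = hist[lo:t].sum(), hist[t:hi + 1].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[lo:t] * hist[lo:t]).sum() / w0
        mu1 = (levels[t:hi + 1] * hist[t:hi + 1]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```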
Application of the perturbation iteration method to boundary layer type problems.
Pakdemirli, Mehmet
2016-01-01
The recently developed perturbation iteration method is applied to boundary layer type singular problems for the first time. As a preliminary work on the topic, the simplest algorithm of PIA(1,1) is employed in the calculations. Linear and nonlinear problems are solved to outline the basic ideas of the new solution technique. The inner and outer solutions are determined with the iteration algorithm and matched to construct a composite expansion valid within all parts of the domain. The solutions are contrasted with the available exact or numerical solutions. It is shown that the perturbation-iteration algorithm can be effectively used for solving boundary layer type problems.
NASA Technical Reports Server (NTRS)
Van Dalsem, W. R.; Steger, J. L.
1985-01-01
A simple and computationally efficient algorithm for solving the unsteady three-dimensional boundary-layer equations in the time-accurate or relaxation mode is presented. Results of the new algorithm are shown to be in quantitative agreement with detailed experimental data for flow over a swept infinite wing. The separated flow over a 6:1 ellipsoid at angle of attack, and the transonic flow over a finite-wing with shock-induced 'mushroom' separation are also computed and compared with available experimental data. It is concluded that complex, separated, three-dimensional viscous layers can be economically and routinely computed using a time-relaxation boundary-layer algorithm.
Privacy Protection Versus Cluster Detection in Spatial Epidemiology
Olson, Karen L.; Grannis, Shaun J.; Mandl, Kenneth D.
2006-01-01
Objectives. Patient data that includes precise locations can reveal patients’ identities, whereas data aggregated into administrative regions may preserve privacy and confidentiality. We investigated the effect of varying degrees of address precision (exact latitude and longitude vs the center points of zip code or census tracts) on detection of spatial clusters of cases. Methods. We simulated disease outbreaks by adding supplementary spatially clustered emergency department visits to authentic hospital emergency department syndromic surveillance data. We identified clusters with a spatial scan statistic and evaluated detection rate and accuracy. Results. More clusters were identified, and clusters were more accurately detected, when exact locations were used. That is, these clusters contained at least half of the simulated points and involved few additional emergency department visits. These results were especially apparent when the synthetic clustered points crossed administrative boundaries and fell into multiple zip code or census tracts. Conclusions. The spatial cluster detection algorithm performed better when addresses were analyzed as exact locations than when they were analyzed as center points of zip code or census tracts, particularly when the clustered points crossed administrative boundaries. Use of precise addresses offers improved performance, but this practice must be weighed against privacy concerns in the establishment of public health data exchange policies. PMID:17018828
NASA Astrophysics Data System (ADS)
Ganeshan, M.; Wu, D. L.
2014-12-01
Due to recent changes in the Arctic environment, it is important to monitor the atmospheric boundary layer (ABL) properties over the Arctic Ocean, especially to explore the variability in ABL clouds (such as their sensitivity and feedback to sea ice loss). For example, radiosonde and satellite observations of the Arctic ABL height (and low-cloud cover) have recently suggested a positive response to sea ice loss during October that may not occur during the melt season (June-September). Owing to its high vertical and spatiotemporal resolution, an independent ABL height detection algorithm using GPS Radio Occultation (GPS-RO) refractivity in the Arctic is explored. Similar GPS-RO algorithms developed previously typically define the level of the most negative moisture gradient as the ABL height. This definition is favorable for subtropical oceans, where a stratocumulus-topped ABL is often capped by a layer of sharp moisture lapse rate (coincident with the temperature inversion). The Arctic Ocean is also characterized by stratocumulus cloud cover; however, the specific humidity does not frequently decrease across the ABL capping inversion. The use of GPS-RO refractivity for ABL height retrieval therefore becomes more complex. During winter months (December-February), when the total precipitable water in the troposphere is at a minimum, a fairly straightforward algorithm for ABL height retrieval is developed. The applicability and limitations of this method for the other seasons (spring, summer, fall) are determined. The seasonal, interannual and spatial variability of the GPS-derived ABL height over the Arctic Ocean, as well as its relation to the underlying surface (ice vs. water), is investigated. The GPS-RO profiles are also explored for evidence of low-level moisture transport in the cold Arctic environment.
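For orientation, the commonly used "sharpest gradient" criterion mentioned in the abstract can be sketched as follows; the Arctic-specific algorithm the authors develop is more involved, so this is a generic illustration with assumed search bounds.

```python
import numpy as np

def abl_height(z, N, zmin=100.0, zmax=3000.0):
    """ABL top as the altitude of the most negative vertical gradient of
    refractivity N(z); z in metres (ascending), search band assumed."""
    dNdz = np.gradient(N, z)
    band = (z >= zmin) & (z <= zmax)
    i = np.argmin(np.where(band, dNdz, np.inf))   # most negative gradient
    return z[i]
```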
NASA Astrophysics Data System (ADS)
Doha, E. H.; Abd-Elhameed, W. M.; Bassuony, M. A.
2013-03-01
This paper is concerned with spectral Galerkin algorithms for solving high even-order two point boundary value problems in one dimension subject to homogeneous and nonhomogeneous boundary conditions. The proposed algorithms are extended to solve two-dimensional high even-order differential equations. The key to the efficiency of these algorithms is to construct compact combinations of Chebyshev polynomials of the third and fourth kinds as basis functions. The algorithms lead to linear systems with specially structured matrices that can be efficiently inverted. Numerical examples are included to demonstrate the validity and applicability of the proposed algorithms, and some comparisons with some other methods are made.
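For a flavor of spectral accuracy on a two-point boundary value problem, the sketch below uses standard Chebyshev collocation on a model problem. Note this is not the paper's construction: the paper builds Galerkin systems from compact combinations of third- and fourth-kind Chebyshev polynomials, which yield specially structured (efficiently invertible) matrices rather than the dense one here.

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix and points (Trefethen's recipe)."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))                 # negative-sum trick on diagonal
    return D, x

# Model problem u'' = f on (-1, 1), u(-1) = u(1) = 0, exact u = sin(pi x).
n = 24
D, x = cheb(n)
D2 = (D @ D)[1:-1, 1:-1]                        # Dirichlet BCs by row/col deletion
f = -np.pi**2 * np.sin(np.pi * x[1:-1])
u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(D2, f)
print(np.max(np.abs(u - np.sin(np.pi * x))))    # error near machine precision
```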
Automated diagnosis of diabetic retinopathy and glaucoma using fundus and OCT images.
Pachiyappan, Arulmozhivarman; Das, Undurti N; Murthy, Tatavarti Vsp; Tatavarti, Rao
2012-06-13
We describe a system for the automated diagnosis of diabetic retinopathy and glaucoma using fundus and optical coherence tomography (OCT) images. Automatic screening will help the doctors to quickly identify the condition of the patient in a more accurate way. The macular abnormalities caused due to diabetic retinopathy can be detected by applying morphological operations, filters and thresholds on the fundus images of the patient. Early detection of glaucoma is done by estimating the Retinal Nerve Fiber Layer (RNFL) thickness from the OCT images of the patient. The RNFL thickness estimation involves the use of active contours based deformable snake algorithm for segmentation of the anterior and posterior boundaries of the retinal nerve fiber layer. The algorithm was tested on a set of 89 fundus images of which 85 were found to have at least mild retinopathy and OCT images of 31 patients out of which 13 were found to be glaucomatous. The accuracy for optical disk detection is found to be 97.75%. The proposed system therefore is accurate, reliable and robust and can be realized.
Burchett, John; Shankar, Mohan; Hamza, A Ben; Guenther, Bob D; Pitsianis, Nikos; Brady, David J
2006-05-01
We use pyroelectric detectors that are differential in nature to detect motion in humans by their heat emissions. Coded Fresnel lens arrays create boundaries that help to localize humans in space as well as to classify the nature of their motion. We design and implement a low-cost biometric tracking system by using off-the-shelf components. We demonstrate two classification methods by using data gathered from sensor clusters of dual-element pyroelectric detectors with coded Fresnel lens arrays. We propose two algorithms for person identification, a more generalized spectral clustering method and a more rigorous example that uses principal component regression to perform a blind classification.
On dealing with multiple correlation peaks in PIV
NASA Astrophysics Data System (ADS)
Masullo, A.; Theunissen, R.
2018-05-01
A novel algorithm to analyse PIV images in the presence of strong in-plane displacement gradients and to reduce sub-grid filtering is proposed in this paper. Interrogation windows subjected to strong in-plane displacement gradients often produce correlation maps presenting multiple peaks. Standard multi-grid procedures discard such ambiguous correlation windows using a signal-to-noise ratio (SNR) filter. The proposed algorithm improves the standard multi-grid algorithm by allowing the detection of splintered peaks in a correlation map through an automatic threshold, producing multiple displacement vectors for each correlation area. Vector locations are chosen by translating images according to the peak displacements and selecting the areas with the strongest match. The method is assessed on synthetic images of a boundary layer of varying intensity and a sinusoidal displacement field of changing wavelength. An experimental case of a flow exhibiting strong velocity gradients is also provided to show the improvements brought by this technique.
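A minimal version of multi-peak extraction with an automatic threshold might look as follows; the fraction-of-main-peak threshold is our stand-in, as the paper derives its threshold automatically from the correlation map itself.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def correlation_peaks(corr, frac=0.6, size=3):
    """Return (row, col) positions of all local maxima of the correlation
    map 'corr' that exceed a fraction of the global maximum; each retained
    peak yields one candidate displacement vector."""
    local_max = (corr == maximum_filter(corr, size=size))
    strong = corr > frac * corr.max()        # automatic threshold
    rows, cols = np.nonzero(local_max & strong)
    return list(zip(rows, cols))
```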
NASA Astrophysics Data System (ADS)
Han, Byeongho; Seol, Soon Jee; Byun, Joongmoo
2012-04-01
To simulate wave propagation in a tilted transversely isotropic (TTI) medium with a tilting symmetry-axis of anisotropy, we develop a 2D elastic forward modelling algorithm. In this algorithm, we use the staggered-grid finite-difference method which has fourth-order accuracy in space and second-order accuracy in time. Since velocity-stress formulations are defined for staggered grids, we include auxiliary grid points in the z-direction to meet the free surface boundary conditions for shear stress. Through comparisons of displacements obtained from our algorithm, not only with analytical solutions but also with finite element solutions, we are able to validate that the free surface conditions operate appropriately and elastic waves propagate correctly. In order to handle the artificial boundary reflections efficiently, we also implement convolutional perfectly matched layer (CPML) absorbing boundaries in our algorithm. The CPML sufficiently attenuates energy at the grazing incidence by modifying the damping profile of the PML boundary. Numerical experiments indicate that the algorithm accurately expresses elastic wave propagation in the TTI medium. At the free surface, the numerical results show good agreement with analytical solutions not only for body waves but also for the Rayleigh wave which has strong amplitude along the surface. In addition, we demonstrate the efficiency of CPML for a homogeneous TI medium and a dipping layered model. Only using 10 grid points to the CPML regions, the artificial reflections are successfully suppressed and the energy of the boundary reflection back into the effective modelling area is significantly decayed.
Dai, W W; Marsili, P M; Martinez, E; Morucci, J P
1994-05-01
This paper presents a new version of the layer stripping algorithm in the sense that it works essentially by repeatedly stripping away the outermost layer of the medium after having determined the conductivity value in this layer. In order to stabilize the ill-posed boundary value problem related to each layer, we base our algorithm on the Hilbert uniqueness method (HUM) and implement it with the boundary element method (BEM).
SU-C-207B-02: Maximal Noise Reduction Filter with Anatomical Structures Preservation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maitree, R; Guzman, G; Chundury, A
Purpose: All medical images contain noise, which can result in an undesirable appearance and can reduce the visibility of anatomical details. A variety of techniques are utilized to reduce noise, such as increasing the image acquisition time and using post-processing noise reduction algorithms. However, these techniques increase imaging time and cost, or reduce tissue contrast and effective spatial resolution, which carry useful diagnostic information. The three main focuses of this study are: 1) to develop a novel approach that can adaptively and maximally reduce noise while preserving valuable details of anatomical structures, 2) to evaluate the effectiveness of available noise reduction algorithms in comparison to the proposed algorithm, and 3) to demonstrate that the proposed noise reduction approach can be used clinically. Methods: To achieve maximal noise reduction without destroying anatomical details, the proposed approach automatically estimated the local image noise strength levels and detected the anatomical structures, i.e. tissue boundaries. This information was used to adaptively adjust the strength of the noise reduction filter. The proposed algorithm was tested on 34 repeated swine head datasets and 54 patients' MRI and CT images. The performance was quantitatively evaluated by image quality metrics and manually validated for clinical usage by two radiation oncologists and one radiologist. Results: Qualitative measurements on repeated swine head images demonstrated that the proposed algorithm efficiently removed noise while preserving structures and tissue boundaries. In comparisons, the proposed algorithm obtained competitive noise reduction performance and outperformed other filters in preserving anatomical structures. Assessments from the manual validation indicate that the proposed noise reduction algorithm is adequate for some clinical usages. Conclusion: According to both the clinical evaluation (human expert ranking) and the quantitative assessment, the proposed approach has superior noise reduction and anatomical structure preservation capabilities compared with existing noise removal methods. Senior author Dr. Deshan Yang received research funding from ViewRay and Varian.
Boundary condition identification for a grid model by experimental and numerical dynamic analysis
NASA Astrophysics Data System (ADS)
Mao, Qiang; Devitis, John; Mazzotti, Matteo; Bartoli, Ivan; Moon, Franklin; Sjoblom, Kurt; Aktan, Emin
2015-04-01
There is a growing need to characterize unknown foundations and assess substructures in existing bridges. This is becoming an important issue for the serviceability and safety of bridges, as well as for the possibility of partial reuse of existing infrastructure. Within this broader context, this paper investigates the possibility of identifying, locating and quantifying changes of boundary conditions by leveraging a simply supported grid structure with a composite deck. Multi-reference impact tests are performed on the grid model, and one supporting bearing is modified by replacing a steel cylindrical roller with a roller of compliant material. Impact-based modal analysis provides global modal parameters, such as damped natural frequencies, mode shapes and the flexibility matrix, that are used as indicators of boundary condition changes. An updating process combining a hybrid optimization algorithm and the finite element software suite ABAQUS is presented in this paper. The updated ABAQUS model of the grid, which simulates the supporting bearing with springs, is used to detect and quantify the change of the boundary conditions.
[Image processing applied to the analysis of motion features of cultured cardiac myocytes in rat].
Teng, Qizhi; He, Xiaohai; Luo, Daisheng; Wang, Zhengrong; Zhou, Beiyi; Yuan, Zhirun; Tao, Dachang
2007-02-01
Study of the mechanism of drug actions by quantitative analysis of cultured cardiac myocytes is one of the cutting-edge research areas in myocyte dynamics and molecular biology. The ability of cardiac myocytes to beat spontaneously without external stimulation is what makes this research possible. Studying the morphology and motion of cardiac myocytes using image analysis can reveal the fundamental mechanisms of drug actions, increase the accuracy of drug screening, and help design optimal drug formulations for the best medical treatment. A system of hardware and software has been built with a complete set of functions, including living cardiac myocyte image acquisition, image processing, motion image analysis and image recognition. In this paper, theories and approaches are introduced for analyzing living cardiac myocyte motion images and implementing quantitative analysis of cardiac myocyte features. A motion estimation algorithm is used to detect the motion vectors of particular points and the amplitude and frequency of a cardiac myocyte's beating. The beating of cardiac myocytes is sometimes very small; in such cases it is difficult to detect motion vectors from particular points in a time sequence of images. For this reason, image correlation theory is employed to detect the beating frequencies. An active contour algorithm in terms of an energy function is proposed to approximate the boundary and detect the changes of the myocyte edge.
NASA Astrophysics Data System (ADS)
Akhoondzadeh, M.
2013-08-01
On 6 February 2013, at 12:12:27 local time (01:12:27 UTC), a seismic event registering Mw 8.0 struck the Solomon Islands, located at the boundary of the Australian and Pacific tectonic plates. Time series prediction is an important and widely interesting topic in research on earthquake precursors. This paper describes a new computational intelligence approach to detect unusual variations of the total electron content (TEC) seismo-ionospheric anomalies induced by the powerful Solomon earthquake, using a genetic algorithm (GA). The GA detected a considerable number of anomalous occurrences on the earthquake day and also 7 and 8 days prior to the earthquake, in a period of high geomagnetic activity. In this study, the TEC anomalies detected using the proposed method are also compared to the observed TEC anomalies obtained by applying the mean, median, wavelet, Kalman filter, ARIMA, neural network and support vector machine methods. The agreement among the final results of all eight methods is a convincing indication of the efficiency of the GA method. It indicates that GA can be an appropriate non-parametric tool for anomaly detection in a nonlinear time series showing seismo-ionospheric precursor variations.
Pawaskar, Sainath Shrikant; Fisher, John; Jin, Zhongmin
2010-03-01
Contact detection in cartilage contact mechanics is an important feature of any analytical or computational modeling investigation when the biphasic nature of cartilage and the corresponding tribology are taken into account. The fluid flow boundary conditions will change based on whether the surface is in contact or not, which will affect the interstitial fluid pressurization. This in turn will increase or decrease the load sustained by the fluid phase, with a direct effect on friction, wear, and lubrication. In laboratory experiments or clinical hemiarthroplasty, when a rigid indenter or metallic prosthesis is used to apply load to the cartilage, there will not be any fluid flow normal to the surface in the contact region due to the impermeable nature of the indenter/prosthesis. In the natural joint, on the other hand, where two cartilage surfaces interact, flow will depend on the pressure difference across the interface. Furthermore, in both these cases, the fluid would flow freely in non-contacting regions. However, it should be pointed out that the contact area is generally unknown in advance in both cases and can only be determined as part of the solution. In the present finite element study, a general and robust algorithm was proposed to decide nodes in contact on the cartilage surface and, accordingly, impose the fluid flow boundary conditions. The algorithm was first tested for a rigid indenter against cartilage model. The algorithm worked well for two-dimensional four-noded and eight-noded axisymmetric element models as well as three-dimensional models. It was then extended to include two cartilages in contact. The results were in excellent agreement with the previous studies reported in the literature.
NASA Technical Reports Server (NTRS)
Phatak, A. V.; Karmali, M. S.
1983-01-01
This study was devoted to an investigation of the feasibility of applying advanced image processing techniques to enhance radar image characteristics that are pertinent to the pilot's navigation and guidance task. Millimeter (95 GHz) wave radar images for the overwater (i.e., offshore oil rigs) and overland (Heliport) scenario were used as a data base. The purpose of the study was to determine the applicability of image enhancement and scene analysis algorithms to detect and improve target characteristics (i.e., manmade objects such as buildings, parking lots, cars, roads, helicopters, towers, landing pads, etc.) that would be helpful to the pilot in determining his own position/orientation with respect to the outside world and assist him in the navigation task. Results of this study show that significant improvements in the raw radar image may be obtained using two dimensional image processing algorithms. In the overwater case, it is possible to remove the ocean clutter by thresholding the image data, and furthermore to extract the target boundary as well as the tower and catwalk locations using noise cleaning (e.g., median filter) and edge detection (e.g., Sobel operator) algorithms.
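The two enhancement stages named above, noise cleaning with a median filter and edge detection with a Sobel operator, can be sketched with SciPy; the threshold values below are illustrative placeholders, not values from the study.

```python
import numpy as np
from scipy import ndimage

def enhance_radar(img, clutter_level=40.0, edge_level=60.0):
    """Threshold out ocean clutter, median-filter the residual noise, then
    extract candidate target boundaries from the Sobel gradient magnitude."""
    img = np.asarray(img, dtype=float)
    clean = np.where(img > clutter_level, img, 0.0)     # remove ocean clutter
    clean = ndimage.median_filter(clean, size=5)        # noise cleaning
    gx = ndimage.sobel(clean, axis=1)
    gy = ndimage.sobel(clean, axis=0)
    edges = np.hypot(gx, gy)                            # gradient magnitude
    return clean, edges > edge_level                    # candidate boundaries
```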
NASA Astrophysics Data System (ADS)
Kiyohara, Shin; Mizoguchi, Teruyasu
2018-03-01
Grain boundary segregation of dopants plays a crucial role in materials properties. To investigate the dopant segregation behavior at the grain boundary, an enormous number of combinations have to be considered in the segregation of multiple dopants at the complex grain boundary structures. Here, two data mining techniques, the random-forests regression and the genetic algorithm, were applied to determine stable segregation sites at grain boundaries efficiently. Using the random-forests method, a predictive model was constructed from 2% of the segregation configurations and it has been shown that this model could determine the stable segregation configurations. Furthermore, the genetic algorithm also successfully determined the most stable segregation configuration with great efficiency. We demonstrate that these approaches are quite effective to investigate the dopant segregation behaviors at grain boundaries.
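The random-forests half of the workflow can be sketched with scikit-learn on synthetic stand-in data: train on a small fraction of configurations (the paper reports using about 2%) and rank the remainder by predicted segregation energy. Descriptors and energies here are toy placeholders for the paper's grain-boundary configuration data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((5000, 8))                       # descriptor per configuration
E = X @ rng.random(8) + 0.05 * rng.standard_normal(5000)  # toy energies

train = rng.choice(5000, size=100, replace=False)         # ~2% of the space
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[train], E[train])

ranking = np.argsort(model.predict(X))          # most stable candidates first
print(ranking[:5])
```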
Edge detection for optical synthetic aperture based on deep neural network
NASA Astrophysics Data System (ADS)
Tan, Wenjie; Hui, Mei; Liu, Ming; Kong, Lingqin; Dong, Liquan; Zhao, Yuejin
2017-09-01
Synthetic aperture optics systems can meet the demands of next-generation space telescopes: lighter, larger and foldable. However, the boundaries of segmented aperture systems are much more complex than those of a whole aperture. More edge regions mean more imaging edge pixels, which are often mixed and discretized. In order to achieve high-resolution imaging, it is necessary to identify the gaps between the sub-apertures and the edges of the projected fringes. In this work, we introduce a Deep Neural Network algorithm into the edge detection of optical synthetic aperture imaging. According to the detection needs, we constructed image sets by experiments and simulations. Based on MatConvNet, a MATLAB toolbox, we ran the neural network, trained it on the training image set and tested its performance on the validation set. Training was stopped when the test error on the validation set stopped declining. Given an input image, it is scanned pixel by pixel: the neighborhood around each pixel is fed into the trained multi-hidden-layer network, whose output judges whether the center of the input block lies on a fringe edge. We experimented with various pre-processing and post-processing techniques to reveal their influence on edge detection performance. Compared with traditional algorithms and their improvements, our method makes its decision on a much larger neighborhood, and is thus more global and comprehensive. Experiments on more than 2,000 images are also given to prove that our method outperforms classical algorithms in edge detection on optical images.
A three-dimensional spectral algorithm for simulations of transition and turbulence
NASA Technical Reports Server (NTRS)
Zang, T. A.; Hussaini, M. Y.
1985-01-01
A spectral algorithm for simulating three-dimensional, incompressible, parallel shear flows is described. It applies to the channel, to the parallel boundary layer, and to other shear flows with one wall-bounded direction and two periodic directions. Representative applications to the channel and to the heated boundary layer are presented.
Spherical Harmonic Decomposition of Gravitational Waves Across Mesh Refinement Boundaries
NASA Technical Reports Server (NTRS)
Fiske, David R.; Baker, John; vanMeter, James R.; Centrella, Joan M.
2005-01-01
We evolve a linearized (Teukolsky) solution of the Einstein equations with a non-linear Einstein solver. Using this testbed, we are able to show that such gravitational waves, defined by the Weyl scalars in the Newman-Penrose formalism, propagate faithfully across mesh refinement boundaries, and use, for the first time to our knowledge, a novel algorithm due to Misner to compute spherical harmonic components of our waveforms. We show that the algorithm performs extremely well, even when the extraction sphere intersects refinement boundaries.
Accurate mask-based spatially regularized correlation filter for visual tracking
NASA Astrophysics Data System (ADS)
Gu, Xiaodong; Xu, Xinping
2017-01-01
Recently, discriminative correlation filter (DCF)-based trackers have achieved extremely successful results in many competitions and benchmarks. These methods utilize a periodic assumption on the training samples to efficiently learn a classifier. However, this assumption produces unwanted boundary effects, which severely degrade tracking performance. Correlation filters with limited boundaries and spatially regularized DCFs were proposed to reduce boundary effects. However, these methods use a fixed mask or a predesigned weight function, respectively, which is unsuitable for large appearance variations. We propose an accurate mask-based spatially regularized correlation filter for visual tracking. Our augmented objective can reduce the boundary effect even under large appearance variation. In our algorithm, the masking matrix is converted into a regularization function that acts on the correlation filter in the frequency domain, which makes the algorithm converge quickly. Our online tracking algorithm performs favorably against state-of-the-art trackers on the OTB-2015 benchmark in terms of efficiency, accuracy, and robustness.
Josse, G; George, J; Black, D
2011-08-01
Optical coherence tomography (OCT) is an imaging system that enables in vivo epidermal thickness (ET) measurement. In order to use OCT in large-scale clinical studies, automatic detection of the dermo-epidermal junction (DEJ) is needed. This may be difficult due to image noise from optical speckle, which requires specific image processing procedures to reduce it. In the present work, a description of the position of the DEJ is given, and an algorithm for boundary detection is presented. Twenty-nine images were taken from the skin of normal healthy subjects, from five different body sites. Seven expert assessors were asked to trace the DEJ for ET measurement on each of the images. The variability between experts was compared with a new image processing method. Between-expert variability was relatively low, with a mean standard deviation of 3.4 μm. However, local positioning of the DEJ often differed between experts. The described algorithm performed adequately on all images. ET was automatically measured with a precision of < 5 μm compared with the experts on all sites studied except the back. Moreover, the local algorithm positioning was verified. The new image processing method for measuring ET from OCT images significantly reduces the calculation time for this parameter and avoids user intervention. The main advantage is that data can be analyzed more rapidly and reproducibly in clinical trials. © 2011 John Wiley & Sons A/S.
Beevi, K Sabeena; Nair, Madhu S; Bindu, G R
2016-08-01
An exact count of mitotic nuclei is a crucial parameter in breast cancer grading and prognosis. This can be achieved by improving mitotic detection accuracy through careful design of segmentation and classification techniques. In this paper, segmentation of nuclei from breast histopathology images is carried out by a Localized Active Contour Model (LACM) utilizing bio-inspired optimization techniques in the detection stage, in order to handle the diffuse intensities present along object boundaries. Further, the application of a new optimal machine learning algorithm capable of classifying strongly non-linear data, the Random Kitchen Sink (RKS), shows improved classification performance. The proposed method has been tested on the mitosis detection in breast cancer histological images (MITOS) dataset provided for the MITOS-ATYPIA CONTEST 2014. The proposed framework achieved 95% recall, 98% precision and a 96% F-score.
Wolff-Parkinson-White (WPW) syndrome: the detection of delta wave in an electrocardiogram (ECG).
Mahamat, Hassan Adam; Jacquir, Sabir; Khalil, Cliff; Laurent, Gabriel; Binczak, Stephane
2016-08-01
The delta wave remains an important indicator for diagnosing WPW syndrome. In this paper, a new method for detecting the delta wave in an ECG signal is proposed. Firstly, using the continuous wavelet transform, the P wave, the QRS complex and the T wave are detected, and their durations are computed after determination of the boundary locations (onsets and offsets of the P, QRS and T waves). Secondly, the PR duration, the QRS duration and the upstroke of the QRS complex are used to determine the presence or absence of the delta wave. The algorithm has been tested on the PhysioNet database (ptbdb) in order to evaluate its robustness. It has been applied to clinical signals from patients affected by WPW syndrome. This method can assist practitioners in detecting WPW syndrome.
Principal curve detection in complicated graph images
NASA Astrophysics Data System (ADS)
Liu, Yuncai; Huang, Thomas S.
2001-09-01
Finding principal curves in an image is an important low-level processing step in computer vision and pattern recognition. Principal curves are those curves in an image that represent boundaries or contours of objects of interest. In general, a principal curve should be smooth under a certain length constraint and allow either smooth or sharp turning. In this paper, we present a method that can efficiently detect principal curves in complicated map images. For a given feature image, obtained from edge detection of an intensity image or a thinning operation on a pictorial map image, the feature image is first converted to a graph representation. In the graph image domain, principal curve detection is performed to identify useful image features. The shortest-path and directional-deviation schemes used in our algorithm of principal curve detection have proven very efficient on real graph images.
NASA Astrophysics Data System (ADS)
Adame, Isabel M.; van der Geest, Rob J.; Wasserman, Bruce A.; Mohamed, Mona; Reiber, Johan H. C.; Lelieveldt, Boudewijn P. F.
2004-05-01
Composition and structure of atherosclerotic plaque is a primary focus of cardiovascular research. In vivo MRI provides a means to non-invasively image and assess the morphological features of atherosclerotic and normal human carotid arteries. To quantitatively assess the vulnerability and type of plaque, the contours of the lumen, the outer boundary of the vessel wall, and the plaque components need to be traced. To achieve this goal, we have developed an automated contour detection technique, which consists of three consecutive steps: firstly, the outer boundary of the vessel wall is detected by means of an ellipse-fitting procedure in order to obtain smoothed shapes; secondly, the lumen is segmented using fuzzy clustering, where the region to be classified is that within the outer vessel wall boundary obtained from the previous step; finally, for plaque detection we follow the same approach as for lumen segmentation: fuzzy clustering. However, plaque is more difficult to segment, as the pixel gray value can differ considerably from one region to another, even when it corresponds to the same type of tissue, which makes further processing necessary. All three steps may be carried out combining information from different sequences (PD-, T2-, T1-weighted images, pre- and post-contrast) to improve the contour detection. The algorithm has been validated in vivo on 58 high-resolution PD- and T1-weighted MR images (19 patients). The results demonstrate excellent correspondence between automatic and manual area measurements: lumen (r=0.94), outer wall (r=0.92), and acceptable agreement for fibrous cap thickness (r=0.76).
Lung fissure detection in CT images using global minimal paths
NASA Astrophysics Data System (ADS)
Appia, Vikram; Patil, Uday; Das, Bipul
2010-03-01
Pulmonary fissures separate the human lungs into five distinct regions called lobes. Detection of fissures is essential for localizing the lobar distribution of lung diseases, surgical planning and follow-up. Treatment planning also requires calculation of lobe volume, and this volume estimation mandates accurate segmentation of the fissures. The presence of other structures (such as vessels) near the fissure, along with its high variability in position and shape, makes lobe segmentation a challenging task. False or incomplete fissures and the occurrence of disease add to the complications of fissure detection. In this paper, we propose a semi-automated fissure segmentation algorithm using a minimal path approach on CT images. An energy function is defined such that the path integral over the fissure is the global minimum. Based on a few user-defined points on a single slice of the CT image, the proposed algorithm minimizes a 2D energy function on the sagittal slice computed using (a) intensity, (b) distance from the vasculature, (c) curvature in 2D, and (d) continuity in 3D. The fissure is the infimum-energy path between a representative point on the fissure and the nearest lung boundary point in this energy domain. The algorithm has been tested on 10 CT volume datasets acquired from GE scanners at multiple clinical sites. The datasets span different pathological conditions and varying imaging artifacts.
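The core of such a minimal-path method can be sketched as Dijkstra's algorithm on a precomputed energy map. The four-term energy described above would be baked into the `energy` array, which this Python sketch treats as given; it assumes the goal is reachable on a 4-connected grid.

```python
import heapq
import numpy as np

def minimal_path(energy, start, goal):
    """Dijkstra on a 2D nonnegative energy map: the accumulated cost of a
    path is the sum of node energies, so the returned path is globally
    minimal. `start` and `goal` are (row, col) tuples."""
    h, w = energy.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = energy[start]
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and d + energy[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + energy[nr, nc]
                prev[(nr, nc)] = (r, c)
                heapq.heappush(pq, (dist[nr, nc], (nr, nc)))
    # Backtrack from goal to start
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```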
COMOC: Three dimensional boundary region variant, programmer's manual
NASA Technical Reports Server (NTRS)
Orzechowski, J. A.; Baker, A. J.
1974-01-01
The three-dimensional boundary region variant of the COMOC computer program system solves the partial differential equation system governing certain three-dimensional flows of a viscous, heat conducting, multiple-species, compressible fluid including combustion. The solution is established in physical variables, using a finite element algorithm for the boundary value portion of the problem description in combination with an explicit marching technique for the initial-value character of the problem. The computational lattice may be arbitrarily nonregular, and boundary condition constraints are readily applied. The theoretical foundation of the algorithm, a detailed description of the construction and operation of the program, and instructions on utilizing the many features of the code are presented.
Automated seeding-based nuclei segmentation in nonlinear optical microscopy.
Medyukhina, Anna; Meyer, Tobias; Heuke, Sandro; Vogler, Nadine; Dietzek, Benjamin; Popp, Jürgen
2013-10-01
Nonlinear optical (NLO) microscopy based, e.g., on coherent anti-Stokes Raman scattering (CARS) or two-photon-excited fluorescence (TPEF) is a fast label-free imaging technique with great potential for biomedical applications. However, NLO microscopy as a diagnostic tool is still in its infancy; there is a lack of robust and durable nuclei segmentation methods capable of accurate image processing in cases of variable image contrast, nuclear density, and type of investigated tissue. Nonetheless, such algorithms specifically adapted to NLO microscopy are one prerequisite for the technology to be routinely used, e.g., in pathology or intraoperatively for surgical guidance. In this paper, we compare the applicability of different seeding and boundary detection methods to NLO microscopic images in order to develop an optimal seeding-based approach capable of accurate segmentation of both TPEF and CARS images. Among the methods compared, the Laplacian of Gaussian filter showed the best accuracy for seeding the image, while a modified seeded watershed segmentation was the most accurate in the task of boundary detection. The resulting combination of these methods, followed by verification of the detected nuclei, achieves high average sensitivity and specificity when applied to various types of NLO microscopy images.
Towards automated segmentation of cells and cell nuclei in nonlinear optical microscopy.
Medyukhina, Anna; Meyer, Tobias; Schmitt, Michael; Romeike, Bernd F M; Dietzek, Benjamin; Popp, Jürgen
2012-11-01
Nonlinear optical (NLO) imaging techniques based, e.g., on coherent anti-Stokes Raman scattering (CARS) or two-photon excited fluorescence (TPEF) show great potential for biomedical imaging. In order to facilitate the diagnostic process based on NLO imaging, there is a need for automated calculation of quantitative values such as cell density, nucleus-to-cytoplasm ratio, and average nuclear size. Extraction of these parameters is helpful for histological assessment in general and specifically, e.g., for the determination of tumor grades. This requires accurate image segmentation and detection of the locations and boundaries of cells and nuclei. Here we present an image processing approach for the detection of nuclei and cells in co-registered TPEF and CARS images. The algorithm developed utilizes the gray-scale information for the detection of nuclei locations and the gradient information for the delineation of nuclear and cellular boundaries. The approach reported is capable of automated segmentation of cells and nuclei in multimodal TPEF-CARS images of human brain tumor samples. The results are important for the development of NLO microscopy into a clinically relevant diagnostic tool. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Inference of boundaries in causal sets
NASA Astrophysics Data System (ADS)
Cunningham, William J.
2018-05-01
We investigate the extrinsic geometry of causal sets in (1+1)-dimensional Minkowski spacetime. The properties of boundaries in an embedding space can be used not only to measure observables, but also to supplement the discrete action in the partition function via discretized Gibbons–Hawking–York boundary terms. We define several ways to represent a causal set using overlapping subsets, which then allows us to distinguish between null and non-null bounding hypersurfaces in an embedding space. We discuss algorithms to differentiate between different types of regions, consider when these distinctions are possible, and then apply the algorithms to several spacetime regions. Numerical results indicate the volumes of timelike boundaries can be measured to within 0.5% accuracy for flat boundaries and within 10% accuracy for highly curved boundaries for medium-sized causal sets with N = 2^14 spacetime elements.
An image-space parallel convolution filtering algorithm based on shadow map
NASA Astrophysics Data System (ADS)
Li, Hua; Yang, Huamin; Zhao, Jianping
2017-07-01
Shadow mapping is commonly used in real-time rendering. In this paper, we present an accurate and efficient method for generating soft shadows from planar area lights. The method first generates a depth map from the light's view and analyzes the depth-discontinuity areas as well as the shadow boundaries. These areas are then encoded as binary values in a texture map called the binary light-visibility map, and a GPU-based parallel convolution filtering algorithm smooths out the boundaries with a box filter. Experiments show that our algorithm is an effective shadow-map-based method that produces perceptually accurate soft shadows in real time, with more detail at shadow boundaries than previous work.
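The smoothing step amounts to a box filter over the binary light-visibility map. A CPU sketch using cumulative sums is shown below; the paper performs this convolution in parallel on the GPU, so this NumPy version only illustrates the arithmetic, with edge padding as an assumed boundary rule.

```python
import numpy as np

def box_filter(img, k):
    """Separable box filter of odd width k via cumulative sums: O(1) cost
    per pixel regardless of kernel size, mirroring the GPU-friendly scheme."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    c = np.cumsum(p, axis=0)                              # column prefix sums
    rows = c[k - 1:] - np.vstack([np.zeros((1, p.shape[1])), c[:-k]])
    c = np.cumsum(rows, axis=1)                           # row prefix sums
    cols = c[:, k - 1:] - np.hstack([np.zeros((rows.shape[0], 1)), c[:, :-k]])
    return cols / (k * k)                                 # box mean
```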
NASA Astrophysics Data System (ADS)
Gierens, Rosa T.; Henriksson, Svante; Josipovic, Micky; Vakkari, Ville; van Zyl, Pieter G.; Beukes, Johan P.; Wood, Curtis R.; O'Connor, Ewan J.
2018-05-01
The atmospheric boundary layer (BL) is the atmospheric layer coupled to the Earth's surface at relatively short timescales. A key quantity is the BL depth, which is important in many applied areas of weather and climate such as air-quality forecasting. Studying BLs in climates and biomes across the globe is important, particularly in the under-sampled southern hemisphere. The present study is based on a grazed grassland-savannah area in northwestern South Africa during October 2012-August 2014. Ceilometers are probably the cheapest method for measuring continuous aerosol profiles up to several kilometers above ground and are thus an ideal tool for long-term studies of BLs. A ceilometer-estimated BL depth is based on profiles of attenuated backscattering coefficients from atmospheric aerosols; the sharpest drop often occurs at the BL top. Based on this, we developed a new method for layer detection that we call the signal-limited layer method. The new algorithm was applied to ceilometer profiles, classifying the BL into the classic regime types: daytime convective mixing, and at night a double structure of a surface-based stable layer with a residual layer above it. We employed wavelet fitting to increase successful BL estimation for noisy profiles. The layer-detection algorithm was supported by an eddy-flux station, rain gauges, and manual inspection. Diurnal cycles were often clear, with the BL depth detected for 50% of the daytime (typically 1-3 km) and for 80% of the night-time (typically a few hundred meters). Variability was also analyzed with respect to seasons and years. Finally, BL depths were compared with ERA-Interim estimates of BL depth, showing reassuring agreement.
Classifying seismic noise and sources from OBS data using unsupervised machine learning
NASA Astrophysics Data System (ADS)
Mosher, S. G.; Audet, P.
2017-12-01
The paradigm of plate tectonics was established mainly by recognizing the central role of oceanic plates in the production and destruction of tectonic plates at their boundaries. Since that realization, however, seismic studies of tectonic plates and their associated deformation have slowly shifted their attention toward continental plates due to the ease of installation and maintenance of high-quality seismic networks on land. The result has been a much more detailed understanding of the seismicity patterns associated with continental plate deformation in comparison with the low-magnitude deformation patterns within oceanic plates and at their boundaries. While the number of high-quality ocean-bottom seismometer (OBS) deployments within the past decade has demonstrated the potential to significantly increase our understanding of tectonic systems in oceanic settings, OBS data poses significant challenges to many of the traditional data processing techniques in seismology. In particular, problems involving the detection, location, and classification of seismic sources occurring within oceanic settings are much more difficult due to the extremely noisy seafloor environment in which data are recorded. However, classifying data without a priori constraints is a problem that is routinely pursued via unsupervised machine learning algorithms, which remain robust even in cases involving complicated datasets. In this research, we apply simple unsupervised machine learning algorithms (e.g., clustering) to OBS data from the Cascadia Initiative in an attempt to classify and detect a broad range of seismic sources, including various noise sources and tremor signals occurring within ocean settings.
Discriminative Cooperative Networks for Detecting Phase Transitions
NASA Astrophysics Data System (ADS)
Liu, Ye-Hua; van Nieuwenburg, Evert P. L.
2018-04-01
The classification of states of matter and their corresponding phase transitions is a special kind of machine-learning task, where physical data allow for the analysis of new algorithms, which have not been considered in the general computer-science setting so far. Here we introduce an unsupervised machine-learning scheme for detecting phase transitions with a pair of discriminative cooperative networks (DCNs). In this scheme, a guesser network and a learner network cooperate to detect phase transitions from fully unlabeled data. The new scheme is efficient enough for dealing with phase diagrams in two-dimensional parameter spaces, where we can utilize an active contour model—the snake—from computer vision to host the two networks. The snake, with a DCN "brain," moves and learns actively in the parameter space, and locates phase boundaries automatically.
Algebraic grid generation using tensor product B-splines. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Saunders, B. V.
1985-01-01
Finite difference methods are more successful if the accompanying grid has lines which are smooth and nearly orthogonal. This thesis describes the development of an algorithm which produces such a grid when given the boundary description. Topological considerations in structuring the grid generation mapping are discussed. The concept of the degree of a mapping and how it can be used to determine what requirements are necessary if a mapping is to produce a suitable grid is examined. The grid generation algorithm uses a mapping composed of bicubic B-splines. Boundary coefficients are chosen so that the splines produce Schoenberg's variation diminishing spline approximation to the boundary. Interior coefficients are initially chosen to give a variation diminishing approximation to the transfinite bilinear interpolant of the function mapping the boundary of the unit square onto the boundary grid. The practicality of optimizing the grid by minimizing a functional involving the Jacobian of the grid generation mapping at each interior grid point and the dot product of vectors tangent to the grid lines is investigated. Grids generated by using the algorithm are presented.
Convolution neural-network-based detection of lung structures
NASA Astrophysics Data System (ADS)
Hasegawa, Akira; Lo, Shih-Chung B.; Freedman, Matthew T.; Mun, Seong K.
1994-05-01
Chest radiography is one of the most fundamental and widely used techniques in diagnostic imaging. Nowadays, with the advent of digital radiology, digital image processing techniques for chest radiographs have attracted considerable attention, and several studies on computer-aided diagnosis (CADx) as well as on conventional image processing techniques for chest radiographs have been reported. In the automatic diagnostic process for chest radiographs, it is important to outline the areas of the lungs, the heart, and the diaphragm. This is because the original chest radiograph is composed of important anatomic structures and, without knowing the exact positions of the organs, automatic diagnosis may produce unexpected detections. The automatic extraction of an anatomical structure from digital chest radiographs can be a useful tool for (1) the evaluation of heart size, (2) automatic detection of interstitial lung diseases, (3) automatic detection of lung nodules, and (4) data compression, etc. Based on the clearly defined boundaries of the heart area, rib spaces, rib positions, and the extracted rib cage, one should be able to use this information to facilitate the tasks of CADx on chest radiographs. In this paper, we present an automatic scheme for the detection of the lung field from chest radiographs by using a shift-invariant convolution neural network. A novel algorithm for smoothing the boundaries of the lungs is also presented.
Ferrario, Damien; Grychtol, Bartłomiej; Adler, Andy; Solà, Josep; Böhm, Stephan H; Bodenstein, Marc
2012-11-01
Lung and cardiovascular monitoring applications of electrical impedance tomography (EIT) require localization of relevant functional structures or organs of interest within the reconstructed images. We describe an algorithm for automatic detection of heart and lung regions in a time series of EIT images. Using EIT reconstruction based on anatomical models, candidate regions are identified in the frequency domain and image-based classification techniques applied. The algorithm was validated on a set of simultaneously recorded EIT and CT data in pigs. In all cases, identified regions in EIT images corresponded to those manually segmented in the matched CT image. Results demonstrate the ability of EIT technology to reconstruct relevant impedance changes at their anatomical locations, provided that information about the thoracic boundary shape (and electrode positions) is used for reconstruction.
SAR Image Change Detection Based on Fuzzy Markov Random Field Model
NASA Astrophysics Data System (ADS)
Zhao, J.; Huang, G.; Zhao, Z.
2018-04-01
Most existing SAR image change detection algorithms consider only single-pixel information from the different images and ignore the spatial dependencies between image pixels. The change detection results are therefore susceptible to image noise, and the detection effect is not ideal. A Markov Random Field (MRF) can make full use of the spatial dependence of image pixels and improve detection accuracy. When segmenting the difference image, different categories of regions have a high degree of similarity at their junctions, so it is difficult to clearly distinguish the labels of pixels near region boundaries. In the traditional MRF method, each pixel is given a hard label during each iteration; MRF thus makes hard decisions in the process, which causes a loss of information. This paper applies a combination of fuzzy theory and MRF to the change detection of SAR images. The experimental results show that the proposed method has a better detection effect than the traditional MRF method.
Parameter estimation for chaotic systems using improved bird swarm algorithm
NASA Astrophysics Data System (ADS)
Xu, Chuangbiao; Yang, Renhuan
2017-12-01
Parameter estimation of chaotic systems is an important problem in nonlinear science and has aroused increasing interest in many research fields; it can be reduced to a multidimensional optimization problem. In this paper, an improved boundary bird swarm algorithm (IBBSA) is used to estimate the parameters of chaotic systems. This algorithm combines the good global convergence and robustness of the bird swarm algorithm with the exploitation capability of an improved boundary learning strategy. Experiments are conducted on the Lorenz system and a coupled motor system. Numerical simulation results reveal the effectiveness and desirable performance of IBBSA for parameter estimation of chaotic systems.
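The underlying optimization problem can be reproduced in a few lines of Python. The sketch below estimates the Lorenz parameters by minimizing a trajectory-error objective; since the IBBSA itself is not specified here, SciPy's differential evolution is substituted as a stand-in bounded global optimizer, and the time horizon, initial condition, and bounds are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def lorenz(t, s, sigma, rho, beta):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# "Observed" trajectory generated from the true parameters (10, 28, 8/3)
t_eval = np.linspace(0.0, 2.0, 200)
obs = solve_ivp(lorenz, (0.0, 2.0), [1.0, 1.0, 1.0],
                args=(10.0, 28.0, 8.0 / 3.0), t_eval=t_eval).y

def cost(p):
    """Sum of squared trajectory errors: the multidimensional objective that
    the swarm algorithm minimizes."""
    sim = solve_ivp(lorenz, (0.0, 2.0), [1.0, 1.0, 1.0],
                    args=tuple(p), t_eval=t_eval).y
    return np.sum((sim - obs) ** 2)

# Stand-in optimizer; any bounded global method exercises the same formulation.
result = differential_evolution(cost, bounds=[(5, 15), (20, 35), (1, 5)],
                                seed=0, maxiter=30)
print(result.x)  # should approach (10, 28, 8/3)
```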
Yoon, Young-Gyu; Dai, Peilun; Wohlwend, Jeremy; Chang, Jae-Byum; Marblestone, Adam H.; Boyden, Edward S.
2017-01-01
We here introduce and study the properties, via computer simulation, of a candidate automated approach to algorithmic reconstruction of dense neural morphology, based on simulated data of the kind that would be obtained via two emerging molecular technologies—expansion microscopy (ExM) and in-situ molecular barcoding. We utilize a convolutional neural network to detect neuronal boundaries from protein-tagged plasma membrane images obtained via ExM, as well as a subsequent supervoxel-merging pipeline guided by optical readout of information-rich, cell-specific nucleic acid barcodes. We attempt to use conservative imaging and labeling parameters, with the goal of establishing a baseline case that points to the potential feasibility of optical circuit reconstruction, leaving open the possibility of higher-performance labeling technologies and algorithms. We find that, even with these conservative assumptions, an all-optical approach to dense neural morphology reconstruction may be possible via the proposed algorithmic framework. Future work should explore both the design-space of chemical labels and barcodes, as well as algorithms, to ultimately enable routine, high-performance optical circuit reconstruction. PMID:29114215
Formulation and Implementation of Inflow/Outflow Boundary Conditions to Simulate Propulsive Effects
NASA Technical Reports Server (NTRS)
Rodriguez, David L.; Aftosmis, Michael J.; Nemec, Marian
2018-01-01
Boundary conditions appropriate for simulating flow entering or exiting the computational domain to mimic propulsion effects have been implemented in an adaptive Cartesian simulation package. A robust iterative algorithm to control mass flow rate through an outflow boundary surface is presented, along with a formulation to explicitly specify mass flow rate through an inflow boundary surface. The boundary conditions have been applied within a mesh adaptation framework based on the method of adjoint-weighted residuals. This allows for proper adaptive mesh refinement when modeling propulsion systems. The new boundary conditions are demonstrated on several notional propulsion systems operating in flow regimes ranging from low subsonic to hypersonic. The examples show that the prescribed boundary state is more properly imposed as the mesh is refined. The mass-flow-rate steering algorithm is shown to be an efficient approach in each example. To demonstrate the boundary conditions on a realistic complex aircraft geometry, two of the new boundary conditions are also applied to a modern low-boom supersonic demonstrator design with multiple flow inlets and outlets.
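A mass-flow steering loop of the kind described can be sketched as a secant iteration on the imposed boundary pressure. This is a hypothetical reconstruction, not the authors' scheme; `run_solver` is a placeholder for one flow solve returning the mass flow through the outflow surface at a given back-pressure.

```python
def steer_mass_flow(run_solver, target_mdot, p0, p1, tol=1e-4, max_iter=20):
    """Iterate on the outflow back-pressure with a secant update until the
    solver's mass flow matches the target (sketch; `run_solver` is assumed)."""
    m0, m1 = run_solver(p0), run_solver(p1)
    for _ in range(max_iter):
        if abs(m1 - target_mdot) < tol * abs(target_mdot):
            return p1
        # Secant step on the residual mdot(p) - target
        p0, p1 = p1, p1 - (m1 - target_mdot) * (p1 - p0) / (m1 - m0)
        m0, m1 = m1, run_solver(p1)
    return p1
```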
Wilkes, Daniel R; Duncan, Alec J
2015-04-01
This paper presents a numerical model for the acoustic coupled fluid-structure interaction (FSI) of a submerged finite elastic body using the fast multipole boundary element method (FMBEM). The Helmholtz and elastodynamic boundary integral equations (BIEs) are, respectively, employed to model the exterior fluid and interior solid domains, and the pressure and displacement unknowns are coupled between conforming meshes at the shared boundary interface to achieve the acoustic FSI. The low frequency FMBEM is applied to both BIEs to reduce the algorithmic complexity of the iterative solution from O(N^2) to O(N^1.5) operations per matrix-vector product for N boundary unknowns. Numerical examples are presented to demonstrate the algorithmic and memory complexity of the method, which are shown to be in good agreement with the theoretical estimates, while the solution accuracy is comparable to that achieved by a conventional finite element-boundary element FSI model.
Design of compactly supported wavelet to match singularities in medical images
NASA Astrophysics Data System (ADS)
Fung, Carrson C.; Shi, Pengcheng
2002-11-01
Analysis and understanding of medical images has important clinical value for patient diagnosis and treatment, as well as technical implications for computer vision and pattern recognition. One of the most fundamental issues is the detection of object boundaries or singularities, which is often the basis for further processes such as organ/tissue recognition, image registration, motion analysis, and measurement of anatomical and physiological parameters. The focus of this work is a correlation-based approach to edge detection that exploits some of the desirable properties of wavelet analysis. This leads to the possibility of constructing a bank of detectors, consisting of multiple wavelet basis functions at different scales, each optimal for a specific type of edge, in order to optimally detect all the edges in an image. Our work involved developing a set of wavelet functions that match the shape of ramp and pulse edges. The matching algorithm focuses on matching the edges in the frequency domain. It was shown that this technique can create matching wavelets applicable at all scales. Results show that matching wavelets can be obtained for the pulse edge, while the ramp edge requires a different matching algorithm.
High order solution of Poisson problems with piecewise constant coefficients and interface jumps
NASA Astrophysics Data System (ADS)
Marques, Alexandre Noll; Nave, Jean-Christophe; Rosales, Rodolfo Ruben
2017-04-01
We present a fast and accurate algorithm to solve Poisson problems in complex geometries, using regular Cartesian grids. We consider a variety of configurations, including Poisson problems with interfaces across which the solution is discontinuous (of the type arising in multi-fluid flows). The algorithm is based on a combination of the Correction Function Method (CFM) and Boundary Integral Methods (BIM). Interface and boundary conditions can be treated in a fast and accurate manner using boundary integral equations, and the associated BIM. Unfortunately, BIM can be costly when the solution is needed everywhere in a grid, e.g. in fluid flow problems. We use the CFM to circumvent this issue. The solution from the BIM is used to rewrite the problem as a series of Poisson problems in rectangular domains, which requires the BIM solution at interfaces/boundaries only. These Poisson problems involve discontinuities at interfaces, of the type that the CFM can handle. Hence we use the CFM to solve them (to high order of accuracy) with finite differences and a Fast Fourier Transform-based fast Poisson solver. We present 2-D examples of the algorithm applied to Poisson problems involving complex geometries, including cases in which the solution is discontinuous. We show that the algorithm produces solutions that converge with either 3rd or 4th order of accuracy, depending on the type of boundary condition and solution discontinuity.
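The final stage relies on a standard FFT-based fast Poisson solver. A minimal periodic-domain sketch is given below; the actual method applies CFM corrections so that discontinuities and true boundary conditions are honored, which this illustration omits.

```python
import numpy as np

def fft_poisson(f, h):
    """Spectral solve of laplacian(u) = f on a periodic grid with spacing h.
    Sketch only: boundary/interface corrections of the CFM/BIM pipeline are
    not included."""
    nx, ny = f.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=h)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=h)
    KX, KY = np.meshgrid(kx, ky, indexing='ij')
    denom = -(KX ** 2 + KY ** 2)
    denom[0, 0] = 1.0               # placeholder for the zero mode
    u_hat = np.fft.fft2(f) / denom
    u_hat[0, 0] = 0.0               # fix the additive constant (zero mean)
    return np.real(np.fft.ifft2(u_hat))
```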
NASA Astrophysics Data System (ADS)
Poltera, Yann; Martucci, Giovanni; Hervo, Maxime; Haefele, Alexander; Emmenegger, Lukas; Brunner, Dominik; Henne, Stephan
2016-04-01
We have developed, applied and validated a novel algorithm called PathfinderTURB for the automatic and real-time detection of the vertical structure of the planetary boundary layer. The algorithm has been applied to a year of data measured by the automatic lidar CHM15K at two sites in Switzerland: the rural site of Payerne (MeteoSwiss station, 491 m a.s.l.) and the alpine site of Kleine Scheidegg (KSE, 2061 m a.s.l.). PathfinderTURB is a gradient-based layer detection algorithm, which in addition makes use of the atmospheric variability to detect the turbulent transition zone that separates two low-turbulence regions: one characterized by homogeneous mixing (convective layer) and one above characterized by free-tropospheric conditions. The PathfinderTURB retrieval of the vertical structure of the local (5-10 km horizontal scale) convective boundary layer (LCBL) has been validated at Payerne using two established reference methods. The first reference consists of four independent human-expert manual detections of the LCBL height over the year 2014. The second reference consists of LCBL heights calculated using the bulk Richardson number method from co-located radio sounding data for the same year. Based on the excellent agreement with the two reference methods at Payerne, we applied PathfinderTURB to the complex-terrain conditions at KSE during 2014. The LCBL height retrievals are obtained by tilting the CHM15K at an angle of 19 degrees with respect to the horizontal, aiming directly at the Sphinx Observatory (3580 m a.s.l.) on the Jungfraujoch. This setup of the CHM15K, combined with the PathfinderTURB processing, makes it possible to disentangle the long-range-transport and local origins of the gases and particles measured by the in situ instrumentation at the Sphinx Observatory. The KSE measurements showed that the relation among the LCBL height, the aerosol layers above the LCBL top, and the gas and particle concentrations is anything but trivial. Retrieving the structure of the LCBL along the line of sight connecting KSE to the Sphinx Observatory makes it possible to monitor when the LCBL top reaches the altitude of the in situ instrumentation at the Sphinx and to relate the measured gas and particle concentrations to locally produced aerosols. Conversely, when the LCBL top is lower than the Sphinx altitude, the concentrations measured at the Sphinx are due either to long-range transport of aerosols (>100 km), to the residual aerosol layer reaching the Sphinx's height, or to non-local (>5 km and <100 km) CBL aerosols advected to the Sphinx's height. Except when the aerosol layer is decoupled from the LCBL underneath, in all other cases the CHM15K sees the probed layer as a continuous (not necessarily well-mixed) aerosol layer starting at the KSE surface. The depth of this continuous layer has been retrieved by PathfinderTURB and related to the black carbon absorption coefficient measured at the Sphinx. The comparison shows clearly that the depth of the layer is well correlated with the absorption coefficient measured at the Sphinx. This is an important result that allows not only the retrieval of real-time CBL heights in an automatic and trustworthy way, but also the adaptation of the retrievals to complex terrain and complex atmospheric conditions with customized tilted instrument settings.
A fourth-order Cartesian grid embedded boundary method for Poisson's equation
Devendran, Dharshi; Graves, Daniel; Johansen, Hans; ...
2017-05-08
In this paper, we present a fourth-order algorithm to solve Poisson's equation in two and three dimensions. We use a Cartesian grid, embedded boundary method to resolve complex boundaries. We use a weighted least squares algorithm to solve for our stencils. We use convergence tests to demonstrate accuracy and we show the eigenvalues of the operator to demonstrate stability. We compare accuracy and performance with an established second-order algorithm. We also discuss in depth strategies for retaining higher-order accuracy in the presence of nonsmooth geometries.
Technique for Chestband Contour Shape-Mapping in Lateral Impact
Hallman, Jason J; Yoganandan, Narayan; Pintar, Frank A
2011-01-01
The chestband transducer permits noninvasive measurement of transverse plane biomechanical response during blunt thorax impact. Although experiments may reveal a complex two-dimensional (2D) deformation response to boundary conditions, biomechanical studies have heretofore employed only uniaxial measurements to quantify chestband contours. The present study describes and evaluates an algorithm by which source subject-specific contour data may be systematically mapped to a target generalized anthropometry for computational studies of biomechanical response or for anthropomorphic test dummy development. Algorithm performance was evaluated using chestband contour datasets from two rigid lateral impact boundary conditions: a flat wall and an anterior-oblique wall. Comparing source and target anthropometry contours, peak deflections and deformation-time traces deviated by less than 4%. These results suggest that the algorithm is appropriate for mapping 2D deformation responses to lateral impact boundary conditions. PMID:21676399
Multi-stage learning for robust lung segmentation in challenging CT volumes.
Sofka, Michal; Wetzl, Jens; Birkbeck, Neil; Zhang, Jingdan; Kohlberger, Timo; Kaftan, Jens; Declerck, Jérôme; Zhou, S Kevin
2011-01-01
Simple algorithms for segmenting healthy lung parenchyma in CT are unable to deal with high density tissue common in pulmonary diseases. To overcome this problem, we propose a multi-stage learning-based approach that combines anatomical information to predict an initialization of a statistical shape model of the lungs. The initialization first detects the carina of the trachea, and uses this to detect a set of automatically selected stable landmarks on regions near the lung (e.g., ribs, spine). These landmarks are used to align the shape model, which is then refined through boundary detection to obtain fine-grained segmentation. Robustness is obtained through hierarchical use of discriminative classifiers that are trained on a range of manually annotated data of diseased and healthy lungs. We demonstrate fast detection (35s per volume on average) and segmentation of 2 mm accuracy on challenging data.
An Automatic Phase-Change Detection Technique for Colloidal Hard Sphere Suspensions
NASA Technical Reports Server (NTRS)
McDowell, Mark; Gray, Elizabeth; Rogers, Richard B.
2005-01-01
Colloidal suspensions of monodisperse spheres are used as physical models of thermodynamic phase transitions and as precursors to photonic band gap materials. However, current image analysis techniques are not able to distinguish between densely packed phases within conventional microscope images, which are mainly characterized by degrees of randomness or order with similar grayscale value properties. Current techniques for identifying the phase boundaries involve manually identifying the phase transitions, which is very tedious and time consuming. We have developed an intelligent machine vision technique that automatically identifies colloidal phase boundaries. The algorithm utilizes intelligent image processing techniques that accurately identify and track phase changes vertically or horizontally for a sequence of colloidal hard sphere suspension images. This technique is readily adaptable to any imaging application where regions of interest are distinguished from the background by differing patterns of motion over time.
Application of the perfectly matched layer in 3-D marine controlled-source electromagnetic modelling
NASA Astrophysics Data System (ADS)
Li, Gang; Li, Yuguo; Han, Bo; Liu, Zhan
2018-01-01
In this study, the complex frequency-shifted perfectly matched layer (CFS-PML) in stretching Cartesian coordinates is successfully applied to 3-D frequency-domain marine controlled-source electromagnetic (CSEM) field modelling. The Dirichlet boundary, which is usually used within the traditional framework of EM modelling algorithms, assumes that the electric or magnetic field values are zero at the boundaries. This requires the boundaries to be sufficiently far away from the area of interest. To mitigate the boundary artefacts, a large modelling area may be necessary, even though cell sizes are allowed to grow toward the boundaries due to the diffusive character of electromagnetic wave propagation. Compared with the conventional Dirichlet boundary, the PML boundary is preferred because the modelling area can be restricted to the target region, and only a few surrounding absorbing layers effectively suppress the artificial boundary effect without losing numerical accuracy. Furthermore, for joint inversion of seismic and marine CSEM data, using the PML for CSEM field simulation instead of the conventional Dirichlet boundary allows the modelling areas for these two different geophysical data sets collected from the same survey area to be the same, which is convenient for joint inversion grid matching. We apply the CFS-PML boundary to 3-D marine CSEM modelling using a staggered finite-difference discretization. Numerical tests indicate that the modelling algorithm using the CFS-PML shows good accuracy compared to the Dirichlet boundary, and advantages in computational time and memory. For the 3-D example in this study, the memory saving using the PML is nearly 42 per cent and the time saving is around 48 per cent compared to using the Dirichlet boundary.
Efficient Skeletonization of Volumetric Objects.
Zhou, Yong; Toga, Arthur W
1999-07-01
Skeletonization promises to become a powerful tool for compact shape description, path planning, and other applications. However, current techniques can seldom efficiently process real, complicated 3D data sets, such as MRI and CT data of human organs. In this paper, we present an efficient voxel-coding-based algorithm for skeletonization of 3D voxelized objects. The skeletons are interpreted as connected centerlines, consisting of sequences of medial points of consecutive clusters. These centerlines are initially extracted as paths of voxels, followed by medial point replacement, refinement, smoothing, and connection operations. Voxel-coding techniques are proposed for each of these operations in a uniform and systematic fashion. In addition to preserving basic connectivity and centeredness, the algorithm is characterized by straightforward computation, insensitivity to object boundary complexity, explicit extraction of ready-to-parameterize and branch-controlled skeletons, and efficient object hole detection. These issues are rarely discussed in traditional methods. A range of 3D medical MRI and CT data sets were used to test the algorithm, demonstrating its utility.
Wang, Tao; Zheng, Nanning; Xin, Jingmin; Ma, Zheng
2011-01-01
This paper presents a systematic scheme for fusing millimeter wave (MMW) radar and a monocular vision sensor for on-road obstacle detection. As a whole, a three-level fusion strategy based on visual attention mechanism and driver’s visual consciousness is provided for MMW radar and monocular vision fusion so as to obtain better comprehensive performance. Then an experimental method for radar-vision point alignment for easy operation with no reflection intensity of radar and special tool requirements is put forward. Furthermore, a region searching approach for potential target detection is derived in order to decrease the image processing time. An adaptive thresholding algorithm based on a new understanding of shadows in the image is adopted for obstacle detection, and edge detection is used to assist in determining the boundary of obstacles. The proposed fusion approach is verified through real experimental examples of on-road vehicle/pedestrian detection. In the end, the experimental results show that the proposed method is simple and feasible. PMID:22164117
Wang, Tao; Zheng, Nanning; Xin, Jingmin; Ma, Zheng
2011-01-01
This paper presents a systematic scheme for fusing millimeter wave (MMW) radar and a monocular vision sensor for on-road obstacle detection. As a whole, a three-level fusion strategy based on visual attention mechanism and driver's visual consciousness is provided for MMW radar and monocular vision fusion so as to obtain better comprehensive performance. Then an experimental method for radar-vision point alignment for easy operation with no reflection intensity of radar and special tool requirements is put forward. Furthermore, a region searching approach for potential target detection is derived in order to decrease the image processing time. An adaptive thresholding algorithm based on a new understanding of shadows in the image is adopted for obstacle detection, and edge detection is used to assist in determining the boundary of obstacles. The proposed fusion approach is verified through real experimental examples of on-road vehicle/pedestrian detection. In the end, the experimental results show that the proposed method is simple and feasible.
Gao, Shan; van 't Klooster, Ronald; Brandts, Anne; Roes, Stijntje D; Alizadeh Dehnavi, Reza; de Roos, Albert; Westenberg, Jos J M; van der Geest, Rob J
2017-01-01
To develop and evaluate a method that can fully automatically identify the vessel wall boundaries and quantify the wall thickness for both the common carotid artery (CCA) and the descending aorta (DAO) from axial magnetic resonance (MR) images. 3T MRI data acquired with a T1-weighted gradient-echo black-blood imaging sequence from the carotid (39 subjects) and aorta (39 subjects) were used to develop and test the algorithm. The vessel wall segmentation was achieved by respectively fitting a 3D cylindrical B-spline surface to the boundaries of the lumen and outer wall. The tube-fitting was based on edge detection performed on the signal intensity (SI) profile along the surface normal. To achieve a fully automated process, a Hough Transform (HT) based procedure was developed to estimate the lumen centerline and radii of the target vessel. Using the outputs of the HT, a tube model for lumen segmentation was initialized and deformed to fit the image data. Finally, the lumen segmentation was dilated to initialize the adaptation procedure for the outer wall tube. The algorithm was validated by determining: 1) its performance against manual tracing; 2) its interscan reproducibility in quantifying vessel wall thickness (VWT); 3) its capability of detecting VWT differences in hypertensive patients compared with healthy controls. Statistical analysis including Bland-Altman analysis, t-tests, and sample size calculation were performed for the purpose of algorithm evaluation. The mean distance between the manual and automatically detected lumen/outer wall contours was 0.00 ± 0.23/0.09 ± 0.21 mm for the CCA and 0.12 ± 0.24/0.14 ± 0.35 mm for the DAO. No significant difference was observed between the interscan VWT assessments using automated segmentation for either the CCA (P = 0.19) or the DAO (P = 0.94). Both manual and automated segmentation detected significantly higher carotid (P = 0.016 and P = 0.005) and aortic (P < 0.001 and P = 0.021) wall thickness in the hypertensive patients. A reliable and reproducible pipeline for fully automatic vessel wall quantification was developed and validated on healthy volunteers as well as patients with increased vessel wall thickness. This method holds promise for helping in efficient image interpretation for large-scale cohort studies. J. Magn. Reson. Imaging 2017;45:215-228. © 2016 International Society for Magnetic Resonance in Medicine.
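The HT initialization step can be illustrated with a brute-force circular Hough transform over a binary edge map. The radius range, the 64 sampled angles, and exhaustive voting are simplifications for the sketch, not the paper's implementation (which would more likely vote along gradient directions).

```python
import numpy as np

def circular_hough(edges, r_min, r_max):
    """Accumulate votes for circle centers over a range of radii from a
    binary edge map; return the best (center_row, center_col, radius)."""
    h, w = edges.shape
    radii = np.arange(r_min, r_max + 1)
    acc = np.zeros((len(radii), h, w))
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    for i, r in enumerate(radii):
        for th in thetas:
            cy = (ys - r * np.sin(th)).round().astype(int)
            cx = (xs - r * np.cos(th)).round().astype(int)
            ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
            np.add.at(acc[i], (cy[ok], cx[ok]), 1)   # cast votes
    i, cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
    return cy, cx, radii[i]
```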
Metaheuristic optimisation methods for approximate solving of singular boundary value problems
NASA Astrophysics Data System (ADS)
Sadollah, Ali; Yadav, Neha; Gao, Kaizhou; Su, Rong
2017-07-01
This paper presents a novel approximation technique based on metaheuristics and a weighted residual function (WRF) for tackling singular boundary value problems (BVPs) arising in engineering and science. With the aid of certain fundamental concepts of mathematics, Fourier series expansion, and metaheuristic optimisation algorithms, singular BVPs can be approximated as an optimisation problem with boundary conditions as constraints. The target is to minimise the WRF (i.e. the error function) constructed in the approximation of BVPs. The scheme uses the generational distance metric to evaluate the quality of the approximate solutions against exact solutions (i.e. as an error-evaluation metric). Four test problems, including two linear and two non-linear singular BVPs, are considered in this paper to check the efficiency and accuracy of the proposed algorithm. The optimisation task is performed using three different optimisers: the particle swarm optimisation, the water cycle algorithm, and the harmony search algorithm. Optimisation results obtained show that the suggested technique can be successfully applied for the approximate solving of singular BVPs.
Text extraction via an edge-bounded averaging and a parametric character model
NASA Astrophysics Data System (ADS)
Fan, Jian
2003-01-01
We present a deterministic text extraction algorithm that relies on three basic assumptions: color/luminance uniformity of the interior region, closed boundaries of sharp edges and the consistency of local contrast. The algorithm is basically independent of the character alphabet, text layout, font size and orientation. The heart of this algorithm is an edge-bounded averaging for the classification of smooth regions that enhances robustness against noise without sacrificing boundary accuracy. We have also developed a verification process to clean up the residue of incoherent segmentation. Our framework provides a symmetric treatment for both regular and inverse text. We have proposed three heuristics for identifying the type of text from a cluster consisting of two types of pixel aggregates. Finally, we have demonstrated the advantages of the proposed algorithm over adaptive thresholding and block-based clustering methods in terms of boundary accuracy, segmentation coherency, and capability to identify inverse text and separate characters from background patches.
Anomaly Detection in Test Equipment via Sliding Mode Observers
NASA Technical Reports Server (NTRS)
Solano, Wanda M.; Drakunov, Sergey V.
2012-01-01
Nonlinear observers were originally developed based on the ideas of variable structure control, and for the purpose of detecting disturbances in complex systems. In this anomaly detection application, these observers were designed for estimating the distributed state of fluid flow in a pipe described by a class of advection equations. The observer algorithm uses collected data in a piping system to estimate the distributed system state (pressure and velocity along a pipe containing liquid gas propellant flow) using only boundary measurements. These estimates are then used to further estimate and localize possible anomalies such as leaks or foreign objects, and instrumentation metering problems such as incorrect flow meter orifice plate size. The observer algorithm has the following parts: a mathematical model of the fluid flow, observer control algorithm, and an anomaly identification algorithm. The main functional operation of the algorithm is in creating the sliding mode in the observer system implemented as software. Once the sliding mode starts in the system, the equivalent value of the discontinuous function in sliding mode can be obtained by filtering out the high-frequency chattering component. In control theory, "observers" are dynamic algorithms for the online estimation of the current state of a dynamic system by measurements of an output of the system. Classical linear observers can provide optimal estimates of a system state in case of uncertainty modeled by white noise. For nonlinear cases, the theory of nonlinear observers has been developed and its success is mainly due to the sliding mode approach. Using the mathematical theory of variable structure systems with sliding modes, the observer algorithm is designed in such a way that it steers the output of the model to the output of the system obtained via a variety of sensors, in spite of possible mismatches between the assumed model and actual system. The unique properties of sliding mode control allow not only control of the model internal states to the states of the real-life system, but also identification of the disturbance or anomaly that may occur.
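A scalar toy version of the observer illustrates the mechanism described above: a discontinuous sign injection forces the estimate onto the measurement, and low-pass filtering that injection recovers the unknown disturbance (the "equivalent value" obtained once sliding starts). The gain, filter constant, and the trivial plant are arbitrary choices for the sketch, not NASA's system model.

```python
import numpy as np

def smo_disturbance(y, dt, L=5.0, tau=0.05):
    """Scalar sliding-mode observer for xdot = d(t) with measurement y = x:
    the sign injection v drives x_hat onto y; low-pass filtering v yields
    the equivalent injection, i.e., an estimate of the disturbance d."""
    x_hat, d_hat = 0.0, 0.0
    est = np.empty_like(y)
    for k in range(len(y)):
        v = L * np.sign(y[k] - x_hat)     # discontinuous correction term
        x_hat += dt * v                   # observer dynamics
        d_hat += dt / tau * (v - d_hat)   # strip the high-frequency chattering
        est[k] = d_hat
    return est

# Usage: reconstruct a slowly varying disturbance from position measurements.
dt = 1e-3
t = np.arange(0.0, 2.0, dt)
d_true = 1.0 + 0.5 * np.sin(2 * np.pi * t)
y = np.cumsum(d_true) * dt                # x(t) from xdot = d(t)
d_est = smo_disturbance(y, dt)            # tracks d_true after a transient
```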
TND: a thyroid nodule detection system for analysis of ultrasound images and videos.
Keramidas, Eystratios G; Maroulis, Dimitris; Iakovidis, Dimitris K
2012-06-01
In this paper, we present a computer-aided-diagnosis (CAD) system prototype, named TND (Thyroid Nodule Detector), for the detection of nodular tissue in ultrasound (US) thyroid images and videos acquired during thyroid US examinations. The proposed system incorporates an original methodology that involves a novel algorithm for automatic definition of the boundaries of the thyroid gland, and a novel approach for the extraction of noise resilient image features effectively representing the textural and the echogenic properties of the thyroid tissue. Through extensive experimental evaluation on real thyroid US data, its accuracy in thyroid nodule detection has been estimated to exceed 95%. These results attest to the feasibility of the clinical application of TND, for the provision of a second more objective opinion to the radiologists by exploiting image evidences.
Unweighted least squares phase unwrapping by means of multigrid techniques
NASA Astrophysics Data System (ADS)
Pritt, Mark D.
1995-11-01
We present a multigrid algorithm for unweighted least squares phase unwrapping. This algorithm applies Gauss-Seidel relaxation schemes to solve the Poisson equation on smaller, coarser grids and transfers the intermediate results to the finer grids. This approach forms the basis of our multigrid algorithm for weighted least squares phase unwrapping, which is described in a separate paper. The key idea of our multigrid approach is to maintain the partial derivatives of the phase data in separate arrays and to correct these derivatives at the boundaries of the coarser grids. This maintains the boundary conditions necessary for rapid convergence to the correct solution. Although the multigrid algorithm is an iterative algorithm, we demonstrate that it is nearly as fast as the direct Fourier-based method. We also describe how to parallelize the algorithm for execution on a distributed-memory parallel processor computer or a network-cluster of workstations.
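For reference, the direct Fourier-based method that the multigrid algorithm is compared against can be written compactly: solve the least-squares normal equations (a discrete Poisson equation driven by the wrapped phase gradients) with a type-2 DCT, following the classical Ghiglia-Romero formulation. The sketch below assumes SciPy's `dctn`/`idctn`; it is the unweighted baseline, not the multigrid scheme itself.

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(a):
    return (a + np.pi) % (2 * np.pi) - np.pi

def unwrap_lsq(psi):
    """Unweighted least-squares phase unwrapping via the direct DCT method;
    psi is the wrapped phase in radians."""
    m, n = psi.shape
    dx = np.zeros_like(psi); dx[:-1, :] = wrap(np.diff(psi, axis=0))
    dy = np.zeros_like(psi); dy[:, :-1] = wrap(np.diff(psi, axis=1))
    rho = np.zeros_like(psi)               # divergence of wrapped gradients
    rho[0, :] += dx[0, :]; rho[1:, :] += dx[1:, :] - dx[:-1, :]
    rho[:, 0] += dy[:, 0]; rho[:, 1:] += dy[:, 1:] - dy[:, :-1]
    r_hat = dctn(rho, type=2, norm='ortho')
    i = np.arange(m)[:, None]; j = np.arange(n)[None, :]
    denom = 2.0 * (np.cos(np.pi * i / m) + np.cos(np.pi * j / n) - 2.0)
    denom[0, 0] = 1.0                      # avoid 0/0 for the constant mode
    phi_hat = r_hat / denom
    phi_hat[0, 0] = 0.0                    # fix the arbitrary phase offset
    return idctn(phi_hat, type=2, norm='ortho')
```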
NASA Astrophysics Data System (ADS)
Poltera, Yann; Martucci, Giovanni; Collaud Coen, Martine; Hervo, Maxime; Emmenegger, Lukas; Henne, Stephan; Brunner, Dominik; Haefele, Alexander
2017-08-01
We present the development of the PathfinderTURB algorithm for the analysis of ceilometer backscatter data and the real-time detection of the vertical structure of the planetary boundary layer. Two aerosol layer heights are retrieved by PathfinderTURB: the convective boundary layer (CBL) and the continuous aerosol layer (CAL). PathfinderTURB combines the strengths of gradient- and variance-based methods and addresses the layer attribution problem by adopting a geodesic approach. The algorithm has been applied to 1 year of data measured by two ceilometers of type CHM15k, one operated at the Aerological Observatory of Payerne (491 m a.s.l.) on the Swiss plateau and one at the Kleine Scheidegg (2061 m a.s.l.) in the Swiss Alps. The retrieval of the CBL has been validated at Payerne using two reference methods: (1) manual detections of the CBL height performed by human experts using the ceilometer backscatter data; (2) values of CBL heights calculated using the Richardson's method from co-located radio sounding data. We found average biases as small as 27 m (53 m) with respect to reference method 1 (method 2). Based on the excellent agreement between the two reference methods, PathfinderTURB has been applied to the ceilometer data at the mountainous site of the Kleine Scheidegg for the period September 2014 to November 2015. At this site, the CHM15k is operated in a tilted configuration at 71° zenith angle to probe the atmosphere next to the Sphinx Observatory (3580 m a.s.l.) on the Jungfraujoch (JFJ). The analysis of the retrieved layers led to the following results: the CAL reaches the JFJ 41 % of the time in summer and 21 % of the time in winter for a total of 97 days during the two seasons. The season-averaged daily cycles show that the CBL height reaches the JFJ only during short periods (4 % of the time), but on 20 individual days in summer and never during winter. During summer in particular, the CBL and the CAL modify the air sampled in situ at JFJ, resulting in an unequivocal dependence of the measured absorption coefficient on the height of both layers. This highlights the relevance of retrieving the height of CAL and CBL automatically at the JFJ.
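The gradient criterion at the heart of such retrievals can be sketched for a single profile: the strongest negative gradient of the log backscatter marks a candidate aerosol layer top. The variance test, the geodesic layer tracking through time, and the instrument overlap correction of the real PathfinderTURB algorithm are omitted; the minimum-altitude cutoff stands in for the latter.

```python
import numpy as np

def layer_from_gradient(beta, z, z_min=0.0):
    """Return the altitude of the strongest negative vertical gradient of the
    log attenuated backscatter profile beta(z) (candidate layer top)."""
    logb = np.log(np.clip(beta, 1e-12, None))
    grad = np.gradient(logb, z)
    valid = z >= z_min                 # ignore the region near the lidar
    idx = np.argmin(np.where(valid, grad, np.inf))
    return z[idx]
```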
NASA Astrophysics Data System (ADS)
Wilkins, Joseph L.
The influence of wildfire biomass burning and stratospheric air mass transport on tropospheric ozone (O3) concentrations in St. Louis during the SEAC4RS and SEACIONS-2013 measurement campaigns has been investigated. Analysis with the Lagrangian particle dispersion model FLEXPART-WRF reveals that 55% of ozonesonde profiles during SEACIONS were affected by biomass burning. Comparing ozonesonde profiles with numerical simulations shows that as biomass burning plumes age there is O3 production aloft. A new plume injection height technique was developed based on the Naval Research Laboratory's (NRL) detection algorithm for pyro-convection. The NRL method identified 29 pyro-cumulonimbus (pyroCb) events during the summer of 2013, of which 13 (44%) impacted the SEACIONS study area and 4 (14%) impacted the St. Louis area. In this study, we investigate wildfire plume injection heights using model simulations and the FLAMBE emissions inventory with two different algorithms. In the first case, wildfire emissions are injected at the surface and allowed to mix within the boundary layer simulated by the meteorological model. In the second case, the injection height of wildfire emissions is determined by a guided deep-convective pyroCb run using the NRL detection algorithm. Results show that simulations using surface emissions were able to represent the transport of carbon monoxide plumes from wildfires when the plumes remained below 5 km or occurred during large convective systems, but the surface effects were over-predicted. The pyroCb cases simulated the long-range transport of elevated plumes above 5 km 68% of the time. In addition, analysis of potential vorticity suggests that stratospheric intrusions or tropopause folds affected 13 days (48%) with sonde launches and 27 days (44%) of the entire study period. The largest impact occurred on September 12, 2013, when ozone-rich air impacted the nocturnal boundary layer. By analyzing ozonesonde profiles with meteorological transport models, we were able to identify biomass burning and stratospheric intrusions in St. Louis.
NASA Astrophysics Data System (ADS)
Aydogan, D.
2007-04-01
An image processing technique called the cellular neural network (CNN) approach is used in this study to locate geological features giving rise to gravity anomalies, such as faults or the boundary of two geologic zones. CNN is a stochastic image processing technique based on template optimization using the neighborhood relationships of cells. These cells can be characterized by a functional block diagram that is typical of neural network theory. The functionality of CNN is described in its entirety by a number of small matrices (A, B and I) called the cloning template, and CNN can also be considered a nonlinear convolution of these matrices. The template describes the strength of the nearest-neighbor interconnections in the network. The recurrent perceptron learning algorithm (RPLA) is used to optimize the cloning template. The CNN and standard Canny algorithms were first tested on two sets of synthetic gravity data to check the reliability of the proposed approach. The CNN method was also compared with classical derivative techniques by applying the cross-correlation (CC) method to the same anomaly map, since the latter approach can detect features that are difficult to identify on Bouguer anomaly maps. The approach was then applied to the Bouguer anomaly map of Biga and its surrounding area in Turkey. Structural features in the area between Bandirma, Biga, Yenice and Gonen in the southwest Marmara region are investigated by applying the CNN and CC to the Bouguer anomaly map. Faults identified by these algorithms are generally in accordance with previously mapped surface faults. These examples show that geologic boundaries can be detected from Bouguer anomaly maps using the cloning template approach. A visual evaluation of the outputs of the CNN and CC approaches is carried out, and the results are compared with each other. The approach provides quantitative solutions based on just a few assumptions, which makes the method more powerful than the classical methods.
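A minimal sketch of one CNN state update under the standard Chua-Yang dynamics follows; the templates are textbook edge-extraction values, not the RPLA-optimised ones from the paper, and the array sizes are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def cnn_step(x, u, A, B, I, dt=0.1):
    """One Euler step of the CNN state equation
    dx/dt = -x + A*y + B*u + I, with output y = 0.5*(|x+1| - |x-1|)."""
    y = 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))
    return x + dt * (-x + convolve(y, A, mode='nearest')
                     + convolve(u, B, mode='nearest') + I)

# textbook 3x3 edge-extraction templates -- NOT the RPLA-optimised ones
A = np.array([[0., 0., 0.], [0., 2., 0.], [0., 0., 0.]])
B = np.array([[-1., -1., -1.], [-1., 8., -1.], [-1., -1., -1.]])
I = -0.5

u = np.zeros((64, 64)); u[20:44, 20:44] = 1.0   # toy binary "anomaly" map
x = np.zeros_like(u)
for _ in range(200):
    x = cnn_step(x, u, A, B, I)
edges = x > 0                                   # saturated cells mark the boundary
```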
Convergence issues in domain decomposition parallel computation of hovering rotor
NASA Astrophysics Data System (ADS)
Xiao, Zhongyun; Liu, Gang; Mou, Bin; Jiang, Xiong
2018-05-01
The implicit LU-SGS time integration algorithm has been widely used in parallel computation despite its lack of information from adjacent domains. When applied to parallel computation of hovering rotor flows in a rotating frame, it brings about convergence issues. To remedy the problem, three LU factorization-based implicit schemes (LU-SGS, DP-LUR and HLU-SGS) are investigated comparatively. A test case of pure grid rotation is designed to verify these algorithms, and it shows that the LU-SGS algorithm introduces errors at boundary cells. When partition boundaries are circumferential, errors arise in proportion to grid speed, accumulate as the rotation proceeds, and eventually lead to computational failure. Meanwhile, the DP-LUR and HLU-SGS methods show good convergence owing to their boundary treatment, which makes them desirable for domain decomposition parallel computations.
A novel method for surface defect inspection of optic cable with short-wave infrared illuminance
NASA Astrophysics Data System (ADS)
Chen, Xiaohong; Liu, Ning; You, Bo; Xiao, Bin
2016-07-01
Intelligent on-line detection of cable quality is a crucial issue in optic cable factories, and defects on the surface of an optic cable can dramatically depress its grade. Manual inspection of optic cable quality cannot keep pace with the development of the optic cable industry because of its low detection efficiency and high labor cost. A real-time method is therefore demanded by industry to replace the subjective and repetitive process of manual inspection, and automatic cable defect inspection has become a trend. In this paper, a novel method for surface defect inspection of optic cable under short-wave infrared illuminance is presented. Short-wave infrared illumination not only compensates for weak ambient lighting, but also avoids the overexposure that occurs under visible light and degrades the accuracy of the inspection algorithm. A series of image processing algorithms is set up to analyze cable images and to verify the real-time performance and veracity of the detection method. Unlike existing detection algorithms that actively search for the characteristics of defects, the proposed method passively removes the non-defective areas of the image during processing, which eliminates a large amount of computation. The OTSU algorithm is used to convert the gray image to a binary image. Furthermore, a threshold window is designed to eliminate fake defects; this threshold represents the minimum considered defect size ε. Besides, a new regional suppression method is proposed to deal with the edge burrs of the cable, which shows superior performance in boundary processing compared with the open-close operation of mathematical morphology. Experimental results on 10,000 samples show that the rates of missed detection and false detection are 2.35% and 0.78%, respectively, when ε equals 0.5 mm, and the average processing time for one frame is 2.39 ms. These improvements verify the ability of the proposed inspection method for optic cable.
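A hedged sketch of the binarisation and fake-defect rejection stages follows, assuming bright defects and using scikit-image; the paper's regional suppression step is not reproduced.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def detect_defects(gray, min_defect_px):
    """Otsu binarisation followed by a minimum-size window: connected
    components smaller than min_defect_px are treated as fake defects
    and discarded (bright-defect polarity is assumed here)."""
    binary = gray > threshold_otsu(gray)
    lab = label(binary)
    keep = np.zeros_like(binary, dtype=bool)
    for region in regionprops(lab):
        if region.area >= min_defect_px:     # reject fake defects
            keep[lab == region.label] = True
    return keep
```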
Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Goodrich, John W.; Dyson, Rodger W.
1999-01-01
The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems is being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that have resulted from this work. A review of computational aeroacoustics has recently been given by Lele.
A three-dimensional polyhedral unit model for grain boundary structure in fcc metals
NASA Astrophysics Data System (ADS)
Banadaki, Arash Dehghan; Patala, Srikanth
2017-03-01
One of the biggest challenges in developing truly bottom-up models for the performance of polycrystalline materials is the lack of robust quantitative structure-property relationships for interfaces. As a first step in analyzing such relationships, we present a polyhedral unit model to classify the geometrical nature of atomic packing along grain boundaries. While the atomic structure in disordered systems has been a topic of interest for many decades, geometrical analyses of grain boundaries have proven particularly challenging because of the wide range of structures that are possible depending on the underlying macroscopic crystallographic character. In this article, we propose an algorithm that can partition the atomic structure into a connected array of three-dimensional polyhedra, and thus present a three-dimensional polyhedral unit model for grain boundaries. A point-pattern matching algorithm is also provided for quantifying the distortions of the observed grain boundary polyhedral units. The polyhedral unit model is robust enough to capture the structure of high-Σ, mixed-character interfaces and hence provides a geometric tool for comparing grain boundary structures across the five-parameter crystallographic phase space. Since the obtained polyhedral units circumscribe the voids present in the structure, such a description provides valuable information concerning segregation sites within the grain boundary. We anticipate that this technique will serve as a powerful tool in the analysis of grain boundary structure. The polyhedral unit model is also applicable to a wide array of material systems, as the proposed algorithm is not limited by the underlying lattice structure.
Phase separation in the six-vertex model with a variety of boundary conditions
NASA Astrophysics Data System (ADS)
Lyberg, I.; Korepin, V.; Ribeiro, G. A. P.; Viti, J.
2018-05-01
We present numerical results for the six-vertex model with a variety of boundary conditions. Adapting an algorithm for domain wall boundary conditions, proposed in the work of Allison and Reshetikhin [Ann. Inst. Fourier 55(6), 1847-1869 (2005)], we examine some modifications of these boundary conditions. To be precise, we discuss partial domain wall boundary conditions, reflecting ends, and half-turn boundary conditions (domain wall boundary conditions with half-turn symmetry). Dedicated to the memory of Ludwig Faddeev.
NASA Astrophysics Data System (ADS)
Wang, Qiongjie; Yan, Li
2016-06-01
With the rapid development of sensor networks and earth observation technology, a large quantity of high-resolution remote sensing data is available. However, the influence of shadow has grown, because higher resolution reveals more complex and detailed land cover, especially under shadow. Shadow areas usually have lower intensity and fuzzy boundaries, which make the images hard to interpret automatically. In this paper, a simple and effective shadow (including soft shadow) detection and compensation method is proposed based on normal data, a Digital Elevation Model (DEM) and the sun position. First, we use a high-accuracy DEM and the sun position to rebuild the geometric relationship between the surface and the sun at the time the image was shot, and obtain the hard shadow boundary and the sky view factor (SVF) of each pixel. An anisotropic scattering assumption is adopted to determine the soft shadow factor, which is mainly affected by diffuse radiation. Finally, a simple radiation transmission model is used to compensate the shadow area. Compared with spectral detection methods, our detection method has a strict theoretical basis, gives reliable compensation results, and is only slightly affected by image quality. The compensation strategy can effectively improve the radiation intensity of shadow areas, reduce the information loss caused by shadow, and improve the robustness and efficiency of classification algorithms.
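To make the geometric step concrete, here is a rough ray-marching hard-shadow test on a DEM grid; the azimuth/elevation conventions, step count and function name are our choices, and the SVF and soft-shadow computations are omitted.

```python
import numpy as np

def hard_shadow_mask(dem, cell_size, sun_az_deg, sun_el_deg, n_steps=500):
    """March from every DEM cell toward the sun; a cell is in hard
    shadow if the terrain anywhere along the ray rises above the ray.
    Azimuth is taken clockwise from north; the sign conventions may
    need adapting to the DEM at hand."""
    az, el = np.radians(sun_az_deg), np.radians(sun_el_deg)
    dx, dy = np.sin(az), np.cos(az)            # horizontal step (grid units)
    rise = np.tan(el) * cell_size              # ray height gain per step
    ny, nx = dem.shape
    jj, ii = np.meshgrid(np.arange(nx, dtype=float),
                         np.arange(ny, dtype=float))
    x, y, ray_h = jj.copy(), ii.copy(), dem.astype(float).copy()
    shadow = np.zeros(dem.shape, bool)
    for _ in range(n_steps):
        x += dx; y += dy; ray_h += rise
        xi = np.clip(np.round(x).astype(int), 0, nx - 1)
        yi = np.clip(np.round(y).astype(int), 0, ny - 1)
        shadow |= dem[yi, xi] > ray_h          # terrain blocks the ray
    return shadow
```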
NASA Astrophysics Data System (ADS)
Fatehi, Moslem; Asadi, Hooshang H.
2017-04-01
In this study, the application of a transductive support vector machine (TSVM), an innovative semi-supervised learning algorithm, is proposed for mapping potential drill targets at a detailed exploration stage. Semi-supervised learning is a hybrid of supervised and unsupervised learning that simultaneously uses both training and non-training data to design a classifier. Using the TSVM algorithm, exploration layers at the Dalli porphyry Cu-Au deposit in central Iran were integrated to locate the boundary of the Cu-Au mineralization for further drilling. By applying this algorithm to the non-training (unlabeled) and limited training (labeled) Dalli exploration data, the study area was classified into two domains, Cu-Au ore and waste. The results were then validated against earlier block models created using the available borehole and trench data. In addition to the TSVM, the support vector machine (SVM) algorithm was implemented on the study area for comparison. Thirty percent of the labeled exploration data was used to evaluate the performance of the two algorithms. The results revealed 87 percent correct recognition accuracy for the TSVM algorithm and 82 percent for the SVM algorithm. The deepest inclined borehole, recently drilled in the western part of the Dalli deposit, indicated that the boundary of Cu-Au mineralization identified by the TSVM algorithm was only 15 m away from the actual boundary intersected by this borehole. Based on the results of the TSVM algorithm, six new boreholes were suggested for further drilling at the Dalli deposit. This study shows that the TSVM algorithm can be a useful tool for delineating mineralization zones and, consequently, ensuring more accurate drill hole planning.
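scikit-learn ships no true TSVM, so as a loose stand-in for the workflow the sketch below wraps an SVC in self-training, with unlabeled cells marked -1; the data, shapes and label rule are synthetic placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.semi_supervised import SelfTrainingClassifier

# X: one feature vector per grid cell from the integrated exploration
# layers; y: 1 = ore, 0 = waste, -1 = unlabeled (all values synthetic)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = np.full(500, -1)
y[:40] = (X[:40, 0] + X[:40, 1] > 0).astype(int)   # a few "labeled" cells

model = SelfTrainingClassifier(SVC(kernel='rbf', probability=True))
model.fit(X, y)
ore_mask = model.predict(X) == 1                   # predicted ore domain
```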
Efficient boundary hunting via vector quantization
NASA Astrophysics Data System (ADS)
Diamantini, Claudia; Panti, Maurizio
2001-03-01
A great amount of information about a classification problem is contained in those instances falling near the decision boundary. This intuition dates back to the earliest studies in pattern recognition, and to the more recent adaptive approaches to so-called boundary hunting, such as the work of Aha et al. on Instance Based Learning and the work of Vapnik et al. on Support Vector Machines. The latter work is of particular interest, since theoretical and experimental results ensure the accuracy of boundary reconstruction. However, its optimization approach has heavy computational and memory requirements, which limits its application to huge amounts of data. In this paper we describe an alternative approach to boundary hunting based on adaptive labeled quantization architectures. The adaptation is performed by a stochastic gradient algorithm for the minimization of the error probability. Error probability minimization guarantees an accurate approximation of the optimal decision boundary, while the use of a stochastic gradient algorithm defines an efficient method to reach such an approximation. Comparisons to Support Vector Machines are also presented.
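The textbook LVQ1 rule is the simplest member of this family of adaptive labeled quantizers; the sketch below shows that classic update, not the authors' error-probability gradient.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
    """LVQ1: pull the nearest prototype toward correctly classified
    samples and push it away otherwise, so the prototypes concentrate
    near the decision boundary."""
    P = prototypes.copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            k = np.argmin(((P - x) ** 2).sum(axis=1))   # nearest prototype
            sign = 1.0 if proto_labels[k] == label else -1.0
            P[k] += sign * lr * (x - P[k])
    return P
```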
LayTracks3D: A new approach for meshing general solids using medial axis transform
Quadros, William Roshan
2015-08-22
This study presents an extension of the all-quad meshing algorithm called LayTracks to generate high quality hex-dominant meshes of general solids. LayTracks3D uses the mapping between the Medial Axis (MA) and the boundary of the 3D domain to decompose complex 3D domains into simpler domains called Tracks. Tracks in 3D have no branches and are symmetric, non-intersecting, orthogonal to the boundary, and the shortest path from the MA to the boundary. These properties of tracks result in desired meshes with near cube shape elements at the boundary, structured mesh along the boundary normal with any irregular nodes restricted to the MA, and sharp boundary feature preservation. The algorithm has been tested on a few industrial CAD models and hex-dominant meshes are shown in the Results section. Work is underway to extend LayTracks3D to generate all-hex meshes.
NASA Astrophysics Data System (ADS)
Lin, Zhi; Zhang, Qinghai
2017-09-01
We propose high-order finite-volume schemes for numerically solving the steady-state advection-diffusion equation with nonlinear Robin boundary conditions. Although the original motivation comes from a mathematical model of blood clotting, the nonlinear boundary conditions may also apply to other scientific problems. The main contribution of this work is a generic algorithm for generating third-order, fourth-order, and even higher-order explicit ghost-filling formulas to enforce nonlinear Robin boundary conditions in multiple dimensions. Under the framework of finite volume methods, this appears to be the first algorithm of its kind. Numerical experiments on boundary value problems show that the proposed fourth-order formula can be much more accurate and efficient than a simple second-order formula. Furthermore, the proposed ghost-filling formulas may also be useful for solving other partial differential equations.
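For orientation, a second-order ghost fill for a linear Robin condition is easy to state; the paper's formulas are higher-order and handle nonlinear conditions, so this cut-down sketch only shows the principle.

```python
import numpy as np

def fill_ghost_robin(u_interior, h, alpha, beta, g):
    """Second-order ghost value for the linear Robin condition
    alpha*u + beta*du/dn = g at the face between the ghost cell and
    the first interior cell (outward normal toward the ghost side):
        u_face ~ (u_ghost + u_in) / 2,   du/dn ~ (u_ghost - u_in) / h.
    Solving for u_ghost gives the formula below."""
    return (g - u_interior * (alpha / 2.0 - beta / h)) / (alpha / 2.0 + beta / h)

# quick check against u(x) = x^2 on [0, 1], boundary at x = 1,
# alpha = beta = 1, so g = u(1) + u'(1) = 3
h = 0.01
u_in = (1 - h / 2) ** 2               # cell centre just inside
u_ghost = fill_ghost_robin(u_in, h, 1.0, 1.0, 3.0)
print(u_ghost, (1 + h / 2) ** 2)      # agrees to second order
```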
A Rotational Pressure-Correction Scheme for Incompressible Two-Phase Flows with Open Boundaries
Dong, S.; Wang, X.
2016-01-01
Two-phase outflows refer to situations where the interface formed between two immiscible incompressible fluids passes through open portions of the domain boundary. We present several new forms of open boundary conditions for two-phase outflow simulations within the phase field framework, as well as an algorithm based on rotational pressure correction for numerically treating these open boundary conditions. Our algorithm gives rise to linear algebraic systems for the velocity and the pressure that involve only constant and time-independent coefficient matrices after discretization, despite the variable density and variable viscosity of the two-phase mixture. By comparing simulation results with theory and experimental data, we show that the method produces physically accurate results. We also present numerical experiments to demonstrate the long-term stability of the method in situations where large density contrast, large viscosity contrast, and backflows occur at the two-phase open boundaries. PMID:27163909
NASA Astrophysics Data System (ADS)
Haapanala, S.; Rinne, J.; Hakola, H.; Hellén, H.; Laakso, L.; Lihavainen, H.; Janson, R.; O'Dowd, C.; Kulmala, M.
2007-04-01
Boundary layer concentrations of several volatile organic compounds (VOC) were measured during two campaigns in the springs of 2003 and 2006. The measurements were conducted over boreal landscapes near the SMEAR II measurement station in Hyytiälä, Southern Finland. In 2003 the measurements were performed using a light aircraft and in 2006 using a hot air balloon. Isoprene concentrations were low, usually below the detection limit. This can be explained by low biogenic production due to cold weather, the phenological stage of the isoprene-emitting plants, and snow cover. Monoterpenes were observed frequently. The average total monoterpene concentration in the boundary layer was 33 pptv. Many anthropogenic compounds, such as benzene, xylene and toluene, were observed in high amounts. Ecosystem-scale surface emissions were estimated using a simple mixed box budget methodology. Total monoterpene emissions varied up to 80 μg m-2 h-1, with α-pinene typically contributing more than two thirds of that. These emissions were somewhat higher than those calculated using an emission algorithm. The highest emissions of anthropogenic compounds were those of p/m-xylene.
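A worked example of the mixed-box budget follows, with every number purely illustrative (entrainment, advection and chemistry neglected).

```python
# mixed-box surface flux: F = h * dC/dt
h = 1000.0            # assumed boundary layer depth, m
dC_dt = 0.05e-6       # assumed concentration rise, g m-3 h-1 (0.05 ug m-3 h-1)
F = h * dC_dt         # g m-2 h-1
print(F * 1e6)        # 50 ug m-2 h-1, the order reported for monoterpenes
```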
NASA Astrophysics Data System (ADS)
Nanayakkara, Nuwan D.; Samarabandu, Jagath; Fenster, Aaron
2006-04-01
Estimation of prostate location and volume is essential in determining a dose plan for ultrasound-guided brachytherapy, a common prostate cancer treatment. However, manual segmentation is difficult, time consuming and prone to variability. In this paper, we present a semi-automatic discrete dynamic contour (DDC) model based image segmentation algorithm, which effectively combines a multi-resolution model refinement procedure together with the domain knowledge of the image class. The segmentation begins on a low-resolution image by defining a closed DDC model by the user. This contour model is then deformed progressively towards higher resolution images. We use a combination of a domain knowledge based fuzzy inference system (FIS) and a set of adaptive region based operators to enhance the edges of interest and to govern the model refinement using a DDC model. The automatic vertex relocation process, embedded into the algorithm, relocates deviated contour points back onto the actual prostate boundary, eliminating the need of user interaction after initialization. The accuracy of the prostate boundary produced by the proposed algorithm was evaluated by comparing it with a manually outlined contour by an expert observer. We used this algorithm to segment the prostate boundary in 114 2D transrectal ultrasound (TRUS) images of six patients scheduled for brachytherapy. The mean distance between the contours produced by the proposed algorithm and the manual outlines was 2.70 ± 0.51 pixels (0.54 ± 0.10 mm). We also showed that the algorithm is insensitive to variations of the initial model and parameter values, thus increasing the accuracy and reproducibility of the resulting boundaries in the presence of noise and artefacts.
A novel approach for individual tree crown delineation using lidar data
NASA Astrophysics Data System (ADS)
Liu, Tao
Individual tree crown delineation (ITCD) is an important technique to support precision forestry. ITCD is particularly difficult for deciduous forests where the existence of multiple branches can lead to false tree top detection. This thesis focused on developing a new ITCD model, which consists of two components: (1) boundary refinement using a novel algorithm called Fishing Net Dragging (FiND), and (2) segment merging using boundary classification. The proposed ITCD model was tested in both deciduous and mixed forests, attaining an overall accuracy of 74% and 78%, respectively. This compared favorably to an ITCD method commonly cited in the literature, which attained 41% and 51% on the same plots. To facilitate comparison of research in the ITCD community, this thesis also developed a new accuracy assessment scheme for ITCD. This new accuracy assessment is easy to interpret and convenient to implement while comprehensively evaluating ITCD accuracy.
A multi-block adaptive solving technique based on lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Zhang, Yang; Xie, Jiahua; Li, Xiaoyue; Ma, Zhenghai; Zou, Jianfeng; Zheng, Yao
2018-05-01
In this paper, a parallel adaptive CFD algorithm is developed by combining the multi-block Lattice Boltzmann Method (LBM) with Adaptive Mesh Refinement (AMR). The mesh refinement criterion of this algorithm is based on the density, velocity and vortices of the flow field. The refined grid boundary is obtained by extending outward half a ghost cell from the coarse grid boundary, which makes the adaptive mesh more compact and the boundary treatment more convenient. Two numerical examples, backward-facing step flow separation and unsteady flow around a circular cylinder, show that the algorithm captures the vortex structures of the cold flow field accurately.
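A small sketch of the kind of refinement criterion described, flagging cells where the density gradient or vorticity magnitude exceeds a threshold; the sensors and thresholds are our assumptions, not the paper's exact criterion.

```python
import numpy as np

def refine_mask(rho, u, v, dx, thresh):
    """Flag 2-D cells for AMR refinement where |grad(rho)| or the
    vorticity magnitude exceeds the given thresholds (illustrative)."""
    drho = np.hypot(*np.gradient(rho, dx))       # density gradient magnitude
    dvdx = np.gradient(v, dx, axis=1)
    dudy = np.gradient(u, dx, axis=0)
    vort = np.abs(dvdx - dudy)                   # z-vorticity magnitude
    return (drho > thresh[0]) | (vort > thresh[1])
```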
NASA Astrophysics Data System (ADS)
Bulgakov, V. K.; Strigunov, V. V.
2009-05-01
The Pontryagin maximum principle is used to prove a theorem concerning optimal control in regional macroeconomics. A boundary value problem for optimal trajectories of the state and adjoint variables is formulated, and optimal curves are analyzed. An algorithm is proposed for solving the boundary value problem of optimal control. The performance of the algorithm is demonstrated by computing an optimal control and the corresponding optimal trajectories.
Implementation of the block-Krylov boundary flexibility method of component synthesis
NASA Technical Reports Server (NTRS)
Carney, Kelly S.; Abdallah, Ayman A.; Hucklebridge, Arthur A.
1993-01-01
A method of dynamic substructuring is presented which utilizes a set of static Ritz vectors as a replacement for normal eigenvectors in component mode synthesis. This set of Ritz vectors is generated in a recurrence relationship, which has the form of a block-Krylov subspace. The initial seed to the recurrence algorithm is based on the boundary flexibility vectors of the component. This algorithm is not load-dependent, is applicable to both fixed and free-interface boundary components, and results in a general component model appropriate for any type of dynamic analysis. This methodology was implemented in the MSC/NASTRAN normal modes solution sequence using DMAP. The accuracy is found to be comparable to that of component synthesis based upon normal modes. The block-Krylov recurrence algorithm is a series of static solutions and so requires significantly less computation than solving the normal eigenspace problem.
CRISPRDetect: A flexible algorithm to define CRISPR arrays.
Biswas, Ambarish; Staals, Raymond H J; Morales, Sergio E; Fineran, Peter C; Brown, Chris M
2016-05-17
CRISPR (clustered regularly interspaced short palindromic repeats) RNAs provide the specificity for noncoding RNA-guided adaptive immune defence systems in prokaryotes. CRISPR arrays consist of repeat sequences separated by specific spacer sequences. CRISPR arrays have previously been identified in a large proportion of prokaryotic genomes. However, currently available detection algorithms do not utilise recently discovered features of CRISPR loci. We have developed a new approach to automatically detect, predict and interactively refine CRISPR arrays. It is available as a web program and command line tool from bioanalysis.otago.ac.nz/CRISPRDetect. CRISPRDetect discovers putative arrays, extends the array by detecting additional variant repeats, corrects the direction of arrays, refines the repeat/spacer boundaries, and annotates different types of sequence variation (e.g. insertion/deletion) in near-identical repeats. Due to these features, CRISPRDetect has significant advantages over existing identification tools. As well as further supporting small, medium and large repeats, CRISPRDetect identified a class of arrays with 'extra-large' repeats in bacteria (repeats of 44-50 nt). The CRISPRDetect output is integrated with other analysis tools; notably, the predicted spacers can be directly utilised by CRISPRTarget to predict targets. CRISPRDetect enables more accurate detection of arrays and spacers, and its GFF output is suitable for inclusion in genome annotation pipelines and visualisation.
Stewart, C M; Newlands, S D; Perachio, A A
2004-12-01
Rapid and accurate discrimination of single units from extracellular recordings is a fundamental process for the analysis and interpretation of electrophysiological recordings. We present an algorithm that performs detection, characterization, discrimination, and analysis of action potentials from extracellular recording sessions. The program was written entirely in LabVIEW (National Instruments), and requires no external hardware devices or a priori information about action potential shapes. Waveform events are detected by scanning the digital record for voltages that exceed a user-adjustable trigger. Detected events are characterized to determine nine different time and voltage levels for each event. Various algebraic combinations of these waveform features are used as axis choices for 2-D Cartesian plots of events. The user selects axis choices that generate distinct clusters. Multiple clusters may be defined as action potentials by manually generating boundaries of arbitrary shape. Events defined as action potentials are validated by visual inspection of overlain waveforms. Stimulus-response relationships may be identified by selecting any recorded channel for comparison to continuous and average cycle histograms of binned unit data. The algorithm includes novel aspects of feature analysis and acquisition, including higher acquisition rates for electrophysiological data compared to other channels. The program confirms that electrophysiological data may be discriminated with high speed and efficiency using algebraic combinations of waveform features derived from high-speed digital records.
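A minimal sketch of the trigger-based event detection stage, in Python rather than LabVIEW; the window length and threshold handling are our assumptions.

```python
import numpy as np

def detect_spikes(trace, fs, thresh, window_ms=1.5):
    """Return waveform snippets that cross a user-set voltage trigger,
    mimicking the event-detection stage of such a discrimination
    program; feature extraction and clustering come afterwards."""
    half = int(fs * window_ms / 2000.0)                    # half-window in samples
    above = trace > thresh
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1   # upward crossings
    snippets = [trace[i - half:i + half] for i in onsets
                if i - half >= 0 and i + half <= trace.size]
    return np.array(snippets)

# features such as peak, trough and widths of each snippet can then be
# combined algebraically and plotted pairwise to cut clusters by hand
```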
Application of the perfectly matched layer in 2.5D marine controlled-source electromagnetic modeling
NASA Astrophysics Data System (ADS)
Li, Gang; Han, Bo
2017-09-01
In the traditional framework of EM modeling algorithms, a Dirichlet boundary condition is usually used, which assumes the field values are zero at the boundaries. This crude condition requires that the boundaries be sufficiently far away from the area of interest. Although cell sizes can grow toward the boundaries because the electromagnetic field propagates diffusively, a large modeling area may still be necessary to mitigate boundary artifacts. In this paper, the complex frequency-shifted perfectly matched layer (CFS-PML) in stretched Cartesian coordinates is successfully applied to 2.5D frequency-domain marine controlled-source electromagnetic (CSEM) field modeling. With this PML boundary, one can restrict the modeling area to the target region: only a few absorbing layers surrounding the computational area effectively suppress the artificial boundary effect without losing numerical accuracy. A 2.5D marine CSEM modeling scheme with the CFS-PML is developed using a staggered finite-difference discretization. The modeling algorithm with the CFS-PML is highly accurate and offers savings in computational time and memory compared with the Dirichlet boundary. For 3D problems, these savings should be even more significant.
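For reference, the CFS-PML enters the equations through the standard complex coordinate stretch; the paper's specific scaling profiles for the stretching parameters are not reproduced here.

```latex
% standard CFS-PML complex coordinate stretching
\frac{\partial}{\partial x} \;\rightarrow\; \frac{1}{s_x}\frac{\partial}{\partial x},
\qquad
s_x(x) = \kappa_x(x) + \frac{\sigma_x(x)}{\alpha_x(x) + i\omega\varepsilon_0}
```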
Soltanipour, Asieh; Sadri, Saeed; Rabbani, Hossein; Akhlaghi, Mohammad Reza
2015-01-01
This paper presents a new procedure for automatic extraction of the blood vessels and optic disk (OD) in fundus fluorescein angiograms (FFA). To extract blood vessel centerlines, the vessel extraction algorithm starts with the analysis of directional images resulting from sub-bands of the fast discrete curvelet transform (FDCT) in similar directions and different scales. For this purpose, each directional image is processed using information from the first-order derivative and the eigenvalues obtained from the Hessian matrix. The final vessel segmentation is obtained using a simple region growing algorithm applied iteratively, which merges the centerline images with the contents of images resulting from a modified top-hat transform followed by bit plane slicing. After extracting blood vessels from the FFA image, candidate regions for the OD are enhanced by removing blood vessels from the FFA image using multi-structure-element morphology and modification of FDCT coefficients. Then, the Canny edge detector and Hough transform are applied to the reconstructed image to extract the boundary of the candidate regions. In the next step, the information of the main arc of the retinal vessels surrounding the OD region is used to extract the actual location of the OD. Finally, the OD boundary is detected by applying distance-regularized level set evolution. The proposed method was tested on FFA images from the angiography unit of Isfahan Feiz Hospital, containing 70 FFA images from different diabetic retinopathy stages. The experimental results show an accuracy of more than 93% for vessel segmentation and more than 87% for OD boundary extraction. PMID:26284170
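As a compact stand-in for the curvelet-plus-Hessian analysis, a multiscale Hessian vesselness (Frangi) filter highlights the same elongated bright structures; the scales and cutoff below are arbitrary.

```python
import numpy as np
from skimage.filters import frangi

def vessel_mask(ffa_image, sigmas=range(1, 6), cutoff=0.15):
    """Hessian-eigenvalue vesselness at several scales; thresholding
    the response gives a rough vessel mask (illustrative settings)."""
    v = frangi(ffa_image.astype(float), sigmas=sigmas, black_ridges=False)
    return v > cutoff * v.max()
```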
Numerical Study of Boundary Layer Interaction with Shocks: Method Improvement and Test Computation
NASA Technical Reports Server (NTRS)
Adams, N. A.
1995-01-01
The objective is the development of a high-order and high-resolution method for the direct numerical simulation of shock turbulent-boundary-layer interaction. Details concerning the spatial discretization of the convective terms can be found in Adams and Shariff (1995). The computer code based on this method as introduced in Adams (1994) was formulated in Cartesian coordinates and thus has been limited to simple rectangular domains. For more general two-dimensional geometries, as a compression corner, an extension to generalized coordinates is necessary. To keep the requirements or limitations for grid generation low, the extended formulation should allow for non-orthogonal grids. Still, for simplicity and cost efficiency, periodicity can be assumed in one cross-flow direction. For easy vectorization, the compact-ENO coupling algorithm as used in Adams (1994) treated whole planes normal to the derivative direction with the ENO scheme whenever at least one point of this plane satisfied the detection criterion. This is apparently too restrictive for more general geometries and more complex shock patterns. Here we introduce a localized compact-ENO coupling algorithm, which is efficient as long as the overall number of grid points treated by the ENO scheme is small compared to the total number of grid points. Validation and test computations with the final code are performed to assess the efficiency and suitability of the computer code for the problems of interest. We define a set of parameters where a direct numerical simulation of a turbulent boundary layer along a compression corner with reasonably fine resolution is affordable.
Lu, Chao; Chelikani, Sudhakar; Jaffray, David A.; Milosevic, Michael F.; Staib, Lawrence H.; Duncan, James S.
2013-01-01
External beam radiation therapy (EBRT) for the treatment of cancer enables accurate placement of radiation dose on the cancerous region. However, the deformation of soft tissue during the course of treatment, such as in cervical cancer, presents significant challenges for the delineation of the target volume and other structures of interest. Furthermore, the presence and regression of pathologies such as tumors may violate registration constraints and cause registration errors. In this paper, automatic segmentation, nonrigid registration and tumor detection in cervical magnetic resonance (MR) data are addressed simultaneously using a unified Bayesian framework. The proposed novel method can generate a tumor probability map while progressively identifying the boundary of an organ of interest based on the achieved nonrigid transformation. The method is able to handle the challenges of significant tumor regression and its effect on surrounding tissues. The new method was compared to various currently existing algorithms on a set of 36 MR data from six patients, each patient has six T2-weighted MR cervical images. The results show that the proposed approach achieves an accuracy comparable to manual segmentation and it significantly outperforms the existing registration algorithms. In addition, the tumor detection result generated by the proposed method has a high agreement with manual delineation by a qualified clinician. PMID:22328178
A complete system for 3D reconstruction of roots for phenotypic analysis.
Kumar, Pankaj; Cai, Jinhai; Miklavcic, Stanley J
2015-01-01
Here we present a complete system for 3D reconstruction of roots grown in a transparent gel medium or washed and suspended in water. The system is capable of being fully automated as it is self-calibrating. The system starts with the detection of root tips in root images from an image sequence generated by turntable motion. Root tips are detected using the statistics of Zernike moments on image patches centred on high-curvature points on the root boundary, together with a Bayes classification rule. The detected root tips are tracked through the image sequence using a multi-target tracking algorithm. Conics are fitted to the root tip trajectories using a novel ellipse fitting algorithm that weights the data points by eccentricity. The conics projected from the circular trajectories have a complex conjugate intersection, which is the image of the circular points. The circular points constrain the image of the absolute conic, which is directly related to the internal parameters of the camera. The pose of the camera is computed from the image of the rotation axis and the horizon. The silhouettes of the roots and the camera parameters are then used to reconstruct a 3D voxel model of the roots. We show results of real 3D root reconstructions that are detailed and realistic enough for phenotypic analysis.
Craniofacial Reconstruction Using Rational Cubic Ball Curves
Majeed, Abdul; Mt Piah, Abd Rahni; Gobithaasan, R. U.; Yahya, Zainor Ridzuan
2015-01-01
This paper proposes the reconstruction of craniofacial fractures using rational cubic Ball curves. The idea of choosing the Ball curve is based on its computational efficiency compared with the Bézier curve. The main steps are conversion of Digital Imaging and Communications in Medicine (DICOM) images to binary images, boundary extraction and corner point detection, Ball curve fitting with a genetic algorithm, and conversion of the final solution back to DICOM format. The last section illustrates a real case of craniofacial reconstruction using the proposed method, which clearly indicates its applicability. A Graphical User Interface (GUI) has also been developed for practical application. PMID:25880632
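A sketch of evaluating a rational cubic Ball curve follows; the basis functions are the cubic Ball basis as commonly stated and should be checked against the paper, and the weights shown are illustrative (the paper fits them with a genetic algorithm).

```python
import numpy as np

def rational_cubic_ball(P, w, t):
    """Evaluate a rational cubic Ball curve at parameters t.
    P: 4 control points (4 x 2), w: 4 weights. Assumed cubic Ball basis:
    B0=(1-t)^2, B1=2t(1-t)^2, B2=2t^2(1-t), B3=t^2."""
    t = np.asarray(t)[:, None]
    B = np.hstack([(1 - t) ** 2, 2 * t * (1 - t) ** 2,
                   2 * t ** 2 * (1 - t), t ** 2])          # (n, 4)
    num = (B * w) @ P                                      # weighted points
    den = (B * w).sum(axis=1, keepdims=True)
    return num / den

P = np.array([[0., 0.], [1., 2.], [3., 2.], [4., 0.]])     # toy boundary segment
w = np.array([1.0, 1.2, 1.2, 1.0])                         # illustrative weights
curve = rational_cubic_ball(P, w, np.linspace(0.0, 1.0, 50))
```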
Adaptive triangular mesh generation
NASA Technical Reports Server (NTRS)
Erlebacher, G.; Eiseman, P. R.
1984-01-01
A general adaptive grid algorithm is developed on triangular grids. The adaptivity is provided by a combination of node addition, dynamic node connectivity and a simple node movement strategy. While the local restructuring process and the node addition mechanism take place in the physical plane, the nodes are displaced on a monitor surface, constructed from the salient features of the physical problem. An approximation to mean curvature detects changes in the direction of the monitor surface, and provides the pulling force on the nodes. Solutions to the axisymmetric Grad-Shafranov equation demonstrate the capturing, by triangles, of the plasma-vacuum interface in a free-boundary equilibrium configuration.
Iris recognition using possibilistic fuzzy matching on local features.
Tsai, Chung-Chih; Lin, Heng-Yi; Taur, Jinshiuh; Tao, Chin-Wang
2012-02-01
In this paper, we propose a novel possibilistic fuzzy matching strategy with invariant properties, which can provide a robust and effective matching scheme for two sets of iris feature points. In addition, the nonlinear normalization model is adopted to provide more accurate position before matching. Moreover, an effective iris segmentation method is proposed to refine the detected inner and outer boundaries to smooth curves. For feature extraction, the Gabor filters are adopted to detect the local feature points from the segmented iris image in the Cartesian coordinate system and to generate a rotation-invariant descriptor for each detected point. After that, the proposed matching algorithm is used to compute a similarity score for two sets of feature points from a pair of iris images. The experimental results show that the performance of our system is better than those of the systems based on the local features and is comparable to those of the typical systems.
Scalability problems of simple genetic algorithms.
Thierens, D
1999-01-01
Scalable evolutionary computation has become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in gaining a clear insight into the scalability problems of simple genetic algorithms. In particular, we discuss the important issue of building block mixing. We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we look at how straightforward extensions of the simple genetic algorithm, namely elitism, niching, and restricted mating, do not significantly alleviate the scalability problems.
A dissipative particle dynamics method for arbitrarily complex geometries
NASA Astrophysics Data System (ADS)
Li, Zhen; Bian, Xin; Tang, Yu-Hang; Karniadakis, George Em
2018-02-01
Dissipative particle dynamics (DPD) is an effective Lagrangian method for modeling complex fluids in the mesoscale regime but so far it has been limited to relatively simple geometries. Here, we formulate a local detection method for DPD involving arbitrarily shaped geometric three-dimensional domains. By introducing an indicator variable of boundary volume fraction (BVF) for each fluid particle, the boundary of arbitrary-shape objects is detected on-the-fly for the moving fluid particles using only the local particle configuration. Therefore, this approach eliminates the need of an analytical description of the boundary and geometry of objects in DPD simulations and makes it possible to load the geometry of a system directly from experimental images or computer-aided designs/drawings. More specifically, the BVF of a fluid particle is defined by the weighted summation over its neighboring particles within a cutoff distance. Wall penetration is inferred from the value of the BVF and prevented by a predictor-corrector algorithm. The no-slip boundary condition is achieved by employing effective dissipative coefficients for liquid-solid interactions. Quantitative evaluations of the new method are performed for the plane Poiseuille flow, the plane Couette flow and the Wannier flow in a cylindrical domain and compared with their corresponding analytical solutions and (high-order) spectral element solution of the Navier-Stokes equations. We verify that the proposed method yields correct no-slip boundary conditions for velocity and generates negligible fluctuations of density and temperature in the vicinity of the wall surface. Moreover, we construct a very complex 3D geometry - the "Brown Pacman" microfluidic device - to explicitly demonstrate how to construct a DPD system with complex geometry directly from loading a graphical image. Subsequently, we simulate the flow of a surfactant solution through this complex microfluidic device using the new method. Its effectiveness is demonstrated by examining the rich dynamics of surfactant micelles, which are flowing around multiple small cylinders and stenotic regions in the microfluidic device without wall penetration. In addition to stationary arbitrary-shape objects, the new method is particularly useful for problems involving moving and deformable boundaries, because it only uses local information of neighboring particles and satisfies the desired boundary conditions on-the-fly.
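A rough sketch of computing such a boundary-volume-fraction indicator follows, with a quadratic stand-in kernel; the paper's actual weight function and normalisation against a fully immersed reference value are not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def boundary_volume_fraction(fluid_xyz, wall_xyz, rc):
    """BVF of each fluid particle as a weighted sum over neighbouring
    wall particles within the cutoff rc (quadratic weight assumed)."""
    tree = cKDTree(wall_xyz)
    phi = np.zeros(len(fluid_xyz))
    for i, x in enumerate(fluid_xyz):
        idx = tree.query_ball_point(x, rc)
        if idx:
            r = np.linalg.norm(wall_xyz[idx] - x, axis=1)
            phi[i] = ((1.0 - r / rc) ** 2).sum()
    return phi   # large phi => particle is at or inside the wall
```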
Multiple crack detection in 3D using a stable XFEM and global optimization
NASA Astrophysics Data System (ADS)
Agathos, Konstantinos; Chatzi, Eleni; Bordas, Stéphane P. A.
2018-02-01
A numerical scheme is proposed for the detection of multiple cracks in three dimensional (3D) structures. The scheme is based on a variant of the extended finite element method (XFEM) and a hybrid optimizer solution. The proposed XFEM variant is particularly well-suited for the simulation of 3D fracture problems, and as such serves as an efficient solution to the so-called forward problem. A set of heuristic optimization algorithms are recombined into a multiscale optimization scheme. The introduced approach proves effective in tackling the complex inverse problem involved, where identification of multiple flaws is sought on the basis of sparse measurements collected near the structural boundary. The potential of the scheme is demonstrated through a set of numerical case studies of varying complexity.
NASA Astrophysics Data System (ADS)
Qin, Zhuanping; Ma, Wenjuan; Ren, Shuyan; Geng, Liqing; Li, Jing; Yang, Ying; Qin, Yingmei
2017-02-01
Endoscopic DOT has the potential to be applied to cancer-related imaging in tubular organs. Although DOT has a relatively large tissue penetration depth, endoscopic DOT is limited by the narrow space of the internal tubular tissue and thus by a relatively small penetration depth. Because some adenocarcinomas, including cervical adenocarcinoma, are located deep in the canal, it is necessary to improve the imaging resolution under the limited measurement conditions. To improve the resolution, a new FOCUSS algorithm is developed along with an image reconstruction algorithm based on the effective detection range (EDR). The algorithm restricts computations to the region of interest (ROI) to reduce the dimensions of the matrix, and this shrinking method cuts down the computational burden. To reduce the computational complexity further, a double conjugate gradient method is used in the matrix inversion. For a typical inner size and optical properties of cervix-like tubular tissue, reconstructed images from simulation data demonstrate that the proposed method achieves image quality equivalent to that of the EDR-based method when the target is close to the inner boundary of the model, and higher spatial resolution and quantitative ratio when the targets are far from the inner boundary. The quantitative ratios of the reconstructed absorption and reduced scattering coefficients can reach 70% and 80%, respectively, at depths below 5 mm. Furthermore, two close targets at different depths can be separated from each other. The proposed method will be useful for the development of endoscopic DOT technologies in tubular organs.
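The basic FOCUSS iteration, independent of the paper's ROI and double-conjugate-gradient refinements, is a reweighted minimum-norm scheme; a minimal sketch:

```python
import numpy as np

def focuss(A, b, n_iter=15, p=1.0, eps=1e-10):
    """Basic FOCUSS: x_{k+1} = W (A W)^+ b with W = diag(|x_k|^{1-p/2}).
    Successive reweighting focuses energy onto a sparse solution of
    the underdetermined system A x = b."""
    x = np.linalg.pinv(A) @ b                 # minimum-norm start
    for _ in range(n_iter):
        W = np.diag(np.abs(x) ** (1.0 - p / 2.0) + eps)
        x = W @ np.linalg.pinv(A @ W) @ b
    return x
```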
NASA Astrophysics Data System (ADS)
Ward, W. O. C.; Wilkinson, P. B.; Chambers, J. E.; Oxby, L. S.; Bai, L.
2014-04-01
A novel method for the effective identification of bedrock subsurface elevation from electrical resistivity tomography images is described. Identifying subsurface boundaries in the tomographic data can be difficult due to the smoothness constraints used in inversion, so a statistical population-based approach is used that extends previous work on calculating isoresistivity surfaces. The analysis framework involves a procedure for guiding a clustering approach based on the fuzzy c-means algorithm. An approximation of the resistivity distribution, found using kernel density estimation, is utilized to guide the cluster centroids used to classify the data. A fuzzy method was chosen over hard clustering because of the uncertainty of hard edges in the tomography data, and a measure of clustering uncertainty was defined based on the reciprocal of cluster membership. The algorithm was validated by direct comparison with known bedrock depths at two 3-D survey sites, using real-time GPS measurements of bedrock exposed by quarrying at one site and borehole logs at the other. Results show detection similarly accurate to a leading isosurface estimation method, while the proposed algorithm requires significantly less user input and prior site knowledge. Furthermore, the method is effectively dimension-independent and will scale to data of increased spatial dimensions without a significant effect on the runtime. A discussion of automated versus supervised analysis of the results is also presented.
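A compact sketch of fuzzy c-means on scalar resistivities with externally supplied initial centroids (the paper seeds them from kernel-density peaks); a full implementation would add convergence checks.

```python
import numpy as np

def fuzzy_cmeans_1d(x, centroids, m=2.0, n_iter=100):
    """Fuzzy c-means on scalar values. Returns the final centroids and
    the membership matrix U (n x c)."""
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centroids[None, :]) + 1e-12   # (n, c)
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)              # memberships
        centroids = (U ** m * x[:, None]).sum(0) / (U ** m).sum(0)
    return centroids, U

# per-sample clustering uncertainty, as in the paper: the reciprocal
# of the winning membership (1 = certain, larger = more uncertain)
# uncertainty = 1.0 / U.max(axis=1)
```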
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Sahiner, Berkman; Hadjiiski, Lubomir M.; Paramagul, Chintana
2004-05-01
Automated registration of multiple mammograms for CAD depends on accurate nipple identification. We developed two new image analysis techniques based on geometric and texture convergence analyses to improve the performance of our previously developed nipple identification method. A gradient-based algorithm is used to automatically track the breast boundary. The nipple search region along the boundary is then defined by geometric convergence analysis of the breast shape. Three nipple candidates are identified by detecting the changes along the gray level profiles inside and outside the boundary and the changes in the boundary direction. A texture orientation-field analysis method is developed to estimate the fourth nipple candidate based on the convergence of the tissue texture pattern towards the nipple. The final nipple location is determined from the four nipple candidates by a confidence analysis. Our training and test data sets consisted of 419 and 368 randomly selected mammograms, respectively. The nipple location identified on each image by an experienced radiologist was used as the ground truth. For 118 of the training and 70 of the test images, the radiologist could not positively identify the nipple, but provided an estimate of its location. These were referred to as invisible nipple images. In the training data set, 89.37% (269/301) of the visible nipples and 81.36% (96/118) of the invisible nipples could be detected within 1 cm of the truth. In the test data set, 92.28% (275/298) of the visible nipples and 67.14% (47/70) of the invisible nipples were identified within 1 cm of the truth. In comparison, our previous nipple identification method without using the two convergence analysis techniques detected 82.39% (248/301), 77.12% (91/118), 89.93% (268/298) and 54.29% (38/70) of the nipples within 1 cm of the truth for the visible and invisible nipples in the training and test sets, respectively. The results indicate that the nipple on mammograms can be detected accurately. This will be an important step towards automatic multiple image analysis for CAD techniques.
A robust background regression based score estimation algorithm for hyperspectral anomaly detection
NASA Astrophysics Data System (ADS)
Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei
2016-12-01
Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is the background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is actually a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to formulate the foundation of the regression. Furthermore, a manifold regularization term which explores the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-nn score estimation method is undertaken on the robust background and potential anomaly datasets, to implement the detection output. The experimental results show that RBRSE achieves superior ROC curves, AUC values, and background-anomaly separation than some of the other state-of-the-art anomaly detection methods, and is easy to implement in practice.
Qin, Junping; Sun, Shiwen; Deng, Qingxu; Liu, Limin; Tian, Yonghong
2017-06-02
Object tracking and detection is one of the most significant research areas for wireless sensor networks. Existing indoor trajectory tracking schemes in wireless sensor networks are based on continuous localization and moving object data mining. Indoor trajectory tracking based on the received signal strength indicator (RSSI) has received increased attention because it has low cost and requires no special infrastructure. However, RSSI tracking introduces uncertainty because of the inaccuracies of measurement instruments and the irregularities (instability, multipath, diffraction) of wireless signal transmission in indoor environments. Heuristic information captures key factors of the trajectory tracking procedure. This paper proposes a novel trajectory tracking scheme based on Delaunay triangulation and heuristic information (TTDH). In this scheme, the entire field is divided into a series of triangular regions, and the common side of adjacent triangular regions is regarded as a regional boundary. The scheme detects heuristic information related to a moving object's trajectory, including boundaries and triangular regions. The trajectory is then formed by means of a dynamic time-warping position-fingerprint-matching algorithm with heuristic-information constraints. Field experiments show that the average error distance of the scheme is less than 1.5 m, and that error does not accumulate across regions.
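The region-detection half of such a scheme is direct with SciPy: build the Delaunay triangulation of the node positions and map each position estimate to its triangle. The node layout below is invented, and the fingerprint-matching stage is not shown.

```python
import numpy as np
from scipy.spatial import Delaunay

# anchor/reference node positions partition the field into triangles
nodes = np.array([[0., 0.], [4., 0.], [8., 0.], [2., 3.], [6., 3.], [4., 6.]])
tri = Delaunay(nodes)

def region_sequence(positions):
    """Map position estimates to triangular regions; keeping only the
    changes gives the boundary-crossing heuristic that constrains the
    trajectory."""
    simplices = tri.find_simplex(positions)      # -1 = outside the hull
    return [s for i, s in enumerate(simplices)
            if i == 0 or s != simplices[i - 1]]

print(region_sequence(np.array([[1., 1.], [3., 1.5], [5., 2.]])))
```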
Quantitative Detection of Cracks in Steel Using Eddy Current Pulsed Thermography.
Shi, Zhanqun; Xu, Xiaoyu; Ma, Jiaojiao; Zhen, Dong; Zhang, Hao
2018-04-02
Small cracks are common defects in steel and often lead to catastrophic accidents in industrial applications. Various nondestructive testing methods have been investigated for crack detection; however, most current methods focus on qualitative crack identification and image processing. In this study, eddy current pulsed thermography (ECPT) was applied to quantitative crack detection based on derivative analysis of the temperature variation. The effects of the excitation parameters on the temperature variation were analyzed in a simulation study. The crack profile and position are identified in the thermal image using the Canny edge detection algorithm. Then, one or more trajectories crossing the crack profile are selected, the crack boundary is located from the temperature distribution along them, and the slope curve along each trajectory is obtained. Finally, quantitative analysis of the crack sizes is performed by analyzing the features of the slope curves. Experimental verification showed that crack sizes could be quantitatively detected with errors of less than 1%. The proposed ECPT method is therefore a feasible and effective nondestructive approach for quantitative crack detection.
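A toy version of the slope-curve measurement: sample the temperature along a trajectory crossing the crack and bracket the boundary with the derivative extrema. The geometry and temperatures are invented.

```python
import numpy as np

def slope_curve(thermal_frame, rr, cc):
    """Temperature profile along a pixel trajectory (rr, cc index
    arrays) and its spatial derivative; the extrema of this slope
    curve bracket the crack boundary."""
    profile = thermal_frame[rr, cc].astype(float)
    return np.gradient(profile)

# toy frame: a cool 4-pixel-wide "crack" across a warm plate
frame = np.full((64, 64), 40.0)
frame[:, 30:34] = 25.0
rr, cc = np.full(64, 32), np.arange(64)        # horizontal trajectory
s = slope_curve(frame, rr, cc)
width_px = np.argmax(s) - np.argmin(s)         # ~4 px between extrema
```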
Feature extraction and classification algorithms for high dimensional data
NASA Technical Reports Server (NTRS)
Lee, Chulhee; Landgrebe, David
1993-01-01
Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized. By investigating the characteristics of high dimensional data, the reason why the second order statistics must be taken into account in high dimensional data is suggested. Recognizing the importance of the second order statistics, there is a need to represent the second order statistics. A method to visualize statistics using a color code is proposed. By representing statistics using color coding, one can easily extract and compare the first and the second statistics.
Pace, Danielle F.; Aylward, Stephen R.; Niethammer, Marc
2014-01-01
We propose a deformable image registration algorithm that uses anisotropic smoothing for regularization to find correspondences between images of sliding organs. In particular, we apply the method for respiratory motion estimation in longitudinal thoracic and abdominal computed tomography scans. The algorithm uses locally adaptive diffusion tensors to determine the direction and magnitude with which to smooth the components of the displacement field that are normal and tangential to an expected sliding boundary. Validation was performed using synthetic, phantom, and 14 clinical datasets, including the publicly available DIR-Lab dataset. We show that motion discontinuities caused by sliding can be effectively recovered, unlike conventional regularizations that enforce globally smooth motion. In the clinical datasets, target registration error showed improved accuracy for lung landmarks compared to the diffusive regularization. We also present a generalization of our algorithm to other sliding geometries, including sliding tubes (e.g., needles sliding through tissue, or contrast agent flowing through a vessel). Potential clinical applications of this method include longitudinal change detection and radiotherapy for lung or abdominal tumours, especially those near the chest or abdominal wall. PMID:23899632
Automatic segmentation of the choroid in enhanced depth imaging optical coherence tomography images.
Tian, Jing; Marziliano, Pina; Baskaran, Mani; Tun, Tin Aung; Aung, Tin
2013-03-01
Enhanced Depth Imaging (EDI) optical coherence tomography (OCT) provides high-definition cross-sectional images of the choroid in vivo and is therefore used in many clinical studies. However, quantification of the choroid depends on manual labeling of two boundaries, Bruch's membrane and the choroidal-scleral interface. This labeling process is tedious and subject to inter-observer differences; hence, automatic segmentation of the choroid layer is highly desirable. In this paper, we present a fast and accurate algorithm that segments the choroid automatically. Bruch's membrane is detected by searching for the pixel with the largest gradient value above the retinal pigment epithelium (RPE), and the choroidal-scleral interface is delineated by finding the shortest path through the graph formed by valley pixels using Dijkstra's algorithm. Experiments comparing the automatic segmentation results with manual labelings were conducted on 45 EDI-OCT images; the average Dice coefficient is 90.5%, which shows good consistency of the algorithm with the manual labelings. The processing time for each image is about 1.25 seconds.
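The shortest-path idea can be illustrated with a small dynamic-programming variant. The paper uses Dijkstra's algorithm on a graph of valley pixels; the sketch below substitutes a simplified column-by-column minimum-cost path on a generic cost image, so the function name, smoothness constraint, and toy data are all illustrative.

```python
import numpy as np

def layer_path(cost, max_jump=2):
    """Trace a left-to-right minimum-cost path through a 2-D cost image
    (rows = depth, cols = A-scans), restricting vertical jumps between
    adjacent columns to +/- max_jump pixels.
    """
    rows, cols = cost.shape
    acc = cost.copy()                     # accumulated cost
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    # Backtrack from the cheapest end point
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 1, 0, -1):
        path.append(back[path[-1], c])
    return path[::-1]                     # row index of the boundary per column

# Toy example: a dark valley running through a noisy image
img = np.random.rand(50, 80)
img[25, :] = 0.0                          # synthetic valley
print(layer_path(img)[:10])
```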
Image segmentation and 3D visualization for MRI mammography
NASA Astrophysics Data System (ADS)
Li, Lihua; Chu, Yong; Salem, Angela F.; Clark, Robert A.
2002-05-01
MRI mammography has a number of advantages, including the tomographic, and therefore three-dimensional (3-D), nature of the images. It can be applied to breasts with dense tissue, post-operative scarring, and silicone implants. However, due to the vast quantity of images and the subtlety of differences between MR sequences, there is a need for reliable computer diagnosis to reduce the radiologist's workload. The purpose of this work was to develop automatic breast/tissue segmentation and visualization algorithms to aid physicians in detecting and observing abnormalities in the breast. Two segmentation algorithms were developed: one for breast segmentation and the other for glandular tissue segmentation. In breast segmentation, the MRI image is first segmented using an adaptive growing clustering method; two tracing algorithms were then developed to refine the breast-air and chest-wall boundaries. Glandular tissue segmentation was performed using an adaptive thresholding method, in which the threshold value is spatially adapted using a sliding window. The 3D visualization of the segmented 2D slices of MRI mammography was implemented in the IDL environment, supporting rendering, slicing, and animation of the breast and glandular tissue.
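A sliding-window adaptive threshold of the kind described can be sketched in a few lines; the window size and offset below are hypothetical parameters, not those of the study.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_threshold(image, window=51, offset=0.0):
    """Segment bright (e.g., glandular) tissue by comparing each pixel
    with the local mean computed in a sliding window."""
    local_mean = uniform_filter(image.astype(float), size=window)
    return image > local_mean + offset        # boolean tissue mask

# Usage on a synthetic slice
img = np.random.rand(256, 256)
mask = adaptive_threshold(img, window=31, offset=0.05)
print(mask.mean())                            # fraction of pixels labeled tissue
```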
NASA Astrophysics Data System (ADS)
Zhou, Guofeng; Wang, Limin; Wang, Xiaowei; Ge, Wei
2011-12-01
Many investigators have coupled the Lees-Edwards boundary conditions (LEBCs) and suspension methods in the framework of the lattice Boltzmann method to study the pure bulk properties of particle-fluid suspensions. However, these suspension methods are all link-based and suffer, to varying degrees, from violations of Galilean invariance. In this paper, we couple LEBCs with a node-based suspension method, which is demonstrated to be Galilean invariant in benchmark simulations. We use the coupled algorithm to predict the viscosity of a particle-fluid suspension at very low Reynolds number, and the simulation results are in good agreement with the semiempirical Krieger-Dougherty formula.
Optimizing Robinson Operator with Ant Colony Optimization As a Digital Image Edge Detection Method
NASA Astrophysics Data System (ADS)
Yanti Nasution, Tarida; Zarlis, Muhammad; K. M Nasution, Mahyuddin
2017-12-01
Edge detection serves to identify the boundaries of an object against a background with which it overlaps. One of the classic methods for edge detection is the Robinson operator, which produces thin, faint, gray edge lines. To overcome these deficiencies, we propose an improved edge detection method that takes a graph-based approach using the Ant Colony Optimization (ACO) algorithm. The improvements include thickening the edges and reconnecting edges that have been cut off. This research aims to optimize the Robinson operator with Ant Colony Optimization, compare the outputs, and determine the extent to which Ant Colony Optimization can improve unoptimized edge detection results and the accuracy of Robinson edge detection. The parameters used to measure edge detection performance are the morphology of the resulting edge lines, MSE, and PSNR. The results show that the combined Robinson and Ant Colony Optimization method produces images with thicker, more definite edges. Ant Colony Optimization can thus be used to optimize the Robinson operator, improving the Robinson detection result by an average of 16.77% over the classic Robinson result.
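For reference, the classic Robinson operator convolves the image with eight compass masks and keeps the strongest response; a minimal sketch of that baseline (the ACO refinement stage is not reproduced here):

```python
import numpy as np
from scipy.ndimage import convolve

# North mask of the Robinson compass operator; the other seven
# directions are 45-degree rotations of the same kernel.
NORTH = np.array([[ 1,  2,  1],
                  [ 0,  0,  0],
                  [-1, -2, -1]])

def rotate45(k):
    """Rotate a 3x3 compass kernel by 45 degrees (cyclic shift of the outer ring)."""
    ring = [k[0,0], k[0,1], k[0,2], k[1,2], k[2,2], k[2,1], k[2,0], k[1,0]]
    ring = ring[-1:] + ring[:-1]
    out = k.copy()
    (out[0,0], out[0,1], out[0,2], out[1,2],
     out[2,2], out[2,1], out[2,0], out[1,0]) = ring
    return out

def robinson_edges(image):
    """Maximum absolute response over the eight compass masks."""
    masks, m = [], NORTH
    for _ in range(8):
        masks.append(m)
        m = rotate45(m)
    responses = [convolve(image.astype(float), k) for k in masks]
    return np.max(np.abs(responses), axis=0)

edges = robinson_edges(np.random.rand(64, 64))
print(edges.shape, edges.max())
```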
Pheiffer, Thomas S; Ou, Jao J; Ong, Rowena E; Miga, Michael I
2011-09-01
Modality-independent elastography (MIE) is a method of elastography that reconstructs the elastic properties of tissue using images acquired under different loading conditions and a biomechanical model. Boundary conditions are a critical input to the algorithm and are often determined by time-consuming point correspondence methods requiring manual user input. This study presents a novel method of automatically generating boundary conditions by nonrigidly registering two image sets with a demons diffusion-based registration algorithm. The use of this method was successfully performed in silico using magnetic resonance and X-ray-computed tomography image data with known boundary conditions. These preliminary results produced boundary conditions with an accuracy of up to 80% compared to the known conditions. Demons-based boundary conditions were utilized within a 3-D MIE reconstruction to determine an elasticity contrast ratio between tumor and normal tissue. Two phantom experiments were then conducted to further test the accuracy of the demons boundary conditions and the MIE reconstruction arising from the use of these conditions. Preliminary results show a reasonable characterization of the material properties on this first attempt and a significant improvement in the automation level and viability of the method.
Experimental Validation of Model Updating and Damage Detection via Eigenvalue Sensitivity Methods with Artificial Boundary Conditions
Bouwense, Matthew D.
2017-09-01
A method of boundary equations for unsteady hyperbolic problems in 3D
NASA Astrophysics Data System (ADS)
Petropavlovsky, S.; Tsynkov, S.; Turkel, E.
2018-07-01
We consider interior and exterior initial boundary value problems for the three-dimensional wave (d'Alembert) equation. First, we reduce a given problem to an equivalent operator equation with respect to unknown sources defined only at the boundary of the original domain. In doing so, the Huygens' principle enables us to obtain the operator equation in a form that involves only finite and non-increasing pre-history of the solution in time. Next, we discretize the resulting boundary equation and solve it efficiently by the method of difference potentials (MDP). The overall numerical algorithm handles boundaries of general shape using regular structured grids with no deterioration of accuracy. For long simulation times it offers sub-linear complexity with respect to the grid dimension, i.e., is asymptotically cheaper than the cost of a typical explicit scheme. In addition, our algorithm allows one to share the computational cost between multiple similar problems. On multi-processor (multi-core) platforms, it benefits from what can be considered an effective parallelization in time.
NASA Astrophysics Data System (ADS)
Sheloput, Tatiana; Agoshkov, Valery
2017-04-01
The problem of modeling water areas with 'liquid' (open) lateral boundaries is discussed. There are several known methods for dealing with open boundaries in limited-area models, and one of the most efficient is data assimilation. Although this method is popular, relatively few articles concern its implementation for recovering boundary functions; the problem of specifying boundary conditions at the open boundary of a limited area therefore remains relevant and important. The mathematical model of the Baltic Sea circulation developed at INM RAS is considered. It is based on the system of thermo-hydrodynamic equations in the Boussinesq and hydrostatic approximations. The splitting method used for time approximation in the model allows one to treat the data assimilation problem as a sequence of linear problems. One such 'simple' temperature (salinity) assimilation problem is investigated in this study. Using well-known techniques for the study and solution of inverse problems and optimal control problems [1], we propose an iterative solution algorithm and obtain conditions for the existence of the solution, for unique and dense solvability of the problem, and for convergence of the iterative algorithm. The investigation shows that if observations satisfy certain conditions, the proposed algorithm converges to the solution of the boundary control problem; in particular, it converges when observational data are given on the 'liquid' boundary [2]. Theoretical results are confirmed by the results of numerical experiments. The numerical algorithm was applied to the water area of the Baltic Sea. Two numerical experiments were carried out in the Gulf of Finland: one with the assimilation procedure and one without. The analyses show that the surface temperature field in the first experiment is close to the observed one, while the result of the second experiment deviates substantially. The number of iterations depends on the regularization parameter, but the algorithm generally converges within about 10 iterations. The results of the numerical experiments show that the proposed method is worthwhile. The work was supported by the Russian Science Foundation (project 14-11-00609, the formulation of the iterative process and numerical experiments) and by the Russian Foundation for Basic Research (project 16-01-00548, the formulation of the problem and its study). [1] Agoshkov V. I. Methods of Optimal Control and Adjoint Equations in Problems of Mathematical Physics. INM RAS, Moscow, 2003 (in Russian). [2] Agoshkov V.I., Sheloput T.O. The study and numerical solution of the problem of heat and salinity transfer assuming 'liquid' boundaries // Russ. J. Numer. Anal. Math. Modelling. 2016. Vol. 31, No. 2. P. 71-80.
Comparison of human and algorithmic target detection in passive infrared imagery
NASA Astrophysics Data System (ADS)
Weber, Bruce A.; Hutchinson, Meredith
2003-09-01
We have designed an experiment that compares the performance of human observers and a scale-insensitive target detection algorithm that uses pixel-level information for the detection of ground targets in passive infrared imagery. The test database contains targets near clutter whose detectability ranged from easy to very difficult. Results indicate that human observers detect more "easy-to-detect" targets, and with far fewer false alarms, than the algorithm. For "difficult-to-detect" targets, human and algorithm detection rates are considerably degraded, and algorithm false alarms are excessive. Analysis of detections as a function of observer confidence shows that algorithm confidence attribution does not correspond to human attribution and does not adequately correlate with correct detections. The best target detection score for any human observer was 84%, as compared to 55% for the algorithm at the same false alarm rate. At 81%, the maximum detection score for the algorithm, the same human observer had 6 false alarms per frame as compared to 29 for the algorithm. Detector ROC curves and observer-confidence analysis benchmark the algorithm and provide insights into algorithm deficiencies and possible paths to improvement.
NASA Technical Reports Server (NTRS)
Sorenson, R. L.; Steger, J. L.
1983-01-01
An algorithm for generating computational grids about arbitrary three-dimensional bodies is developed. The elliptic partial differential equation (PDE) approach developed by Steger and Sorenson and used in the NASA computer program GRAPE is extended from two to three dimensions. Forcing functions which are found automatically by the algorithm give the user the ability to control mesh cell size and skewness at boundary surfaces. This algorithm, as is typical of PDE grid generators, gives smooth grid lines and spacing in the interior of the grid. The method is applied to a rectilinear wind-tunnel case and to two body shapes in spherical coordinates.
Parallel algorithms for boundary value problems
NASA Technical Reports Server (NTRS)
Lin, Avi
1990-01-01
A general approach to solving boundary value problems numerically in a parallel environment is discussed. The basic algorithm consists of two steps: the local step, where all P available processors work in parallel, and the global step, where one processor solves a tridiagonal linear system of order P. The main advantages of this approach are twofold. First, the approach is very flexible, especially in the local step, and thus the algorithm can be used with any number of processors and with any SIMD or MIMD machine. Second, the communication complexity is very small, so the algorithm can be used just as easily on shared-memory machines. Several examples of this strategy are discussed.
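The global step is a tridiagonal system of order P, which the Thomas algorithm solves in O(P) once the local solves have been assembled; a minimal sketch of that step (the local-step assembly is problem-specific and omitted, and the numbers are toy values):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c, and right-hand side d (all length n; a[0] and
    c[-1] are unused). This plays the role of the 'global step' of order P."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# P = 4 interface unknowns collected from the local solves (toy system)
print(thomas(np.array([0., 1, 1, 1]), np.array([4., 4, 4, 4]),
             np.array([1., 1, 1, 0]), np.array([5., 6, 6, 5])))
# -> [1. 1. 1. 1.]
```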
Adaptivity and smart algorithms for fluid-structure interaction
NASA Technical Reports Server (NTRS)
Oden, J. Tinsley
1990-01-01
This paper reviews new approaches in CFD which have the potential for significantly increasing current capabilities of modeling complex flow phenomena and of treating difficult problems in fluid-structure interaction. These approaches are based on the notions of adaptive methods and smart algorithms, which use instantaneous measures of the quality and other features of the numerical flowfields as a basis for making changes in the structure of the computational grid and of the algorithms designed to function on the grid. The application of these new techniques to several problem classes is addressed, including problems with moving boundaries, fluid-structure interaction in high-speed turbine flows, flow in domains with receding boundaries, and related problems.
Adaptive envelope protection methods for aircraft
NASA Astrophysics Data System (ADS)
Unnikrishnan, Suraj
Carefree handling refers to the ability of a pilot to operate an aircraft without the need to continuously monitor aircraft operating limits. At the heart of all carefree handling or maneuvering systems, also referred to as envelope protection systems, are algorithms and methods for predicting future limit violations. Recently, envelope protection methods that translate limit proximity information into its equivalent in the control channel have gained more acceptance. Existing envelope protection algorithms either use a very small prediction horizon or are static methods with no capability to adapt to changes in system configuration. Adaptive approaches that maximize the prediction horizon, such as dynamic trim, are only applicable to steady-state-response-critical limit parameters. In this thesis, a new adaptive envelope protection method is developed that is applicable to both steady-state- and transient-response-critical limit parameters. The approach is based upon devising the most aggressive optimal control profile to the limit boundary and using it to compute control limits. Pilot-in-the-loop evaluations of the proposed approach are conducted at the Georgia Tech Carefree Maneuver lab for transient longitudinal hub moment limit protection. Carefree maneuvering is the dual of carefree handling in the realm of autonomous Uninhabited Aerial Vehicles (UAVs). Designing a flight control system to fully and effectively utilize the operational flight envelope is very difficult, and with the increasing role of and demand for extreme maneuverability there is a need to develop envelope protection methods for autonomous UAVs. In this thesis, a full-authority automatic envelope protection method is proposed for limit protection in UAVs. The approach uses adaptive estimates of limit parameter dynamics and finite-time-horizon predictions to detect impending limit boundary violations. Limit violations are prevented by treating the limit boundary as an obstacle and by correcting nominal control/command inputs to track a limit parameter safe-response profile near the limit boundary. The method is evaluated using software-in-the-loop and flight evaluations on the Georgia Tech unmanned rotorcraft platform, GTMax. The thesis also develops and evaluates an extension for calculating control margins based on restricting limit parameter response aggressiveness near the limit boundary.
Boundary Recovery For Delaunay Tetrahedral Meshes Using Local Topological Transformations
Ghadyani, Hamid; Sullivan, John; Wu, Ziji
2009-01-01
Numerous high-quality volume mesh generation systems exist. However, no strategy can address all geometry situations without some element qualities being compromised. Many 3D mesh generation algorithms are based on Delaunay tetrahedralization, which frequently fails to preserve the input boundary surface topology. For biomedical applications, this surface preservation can be critical, as the meshes usually contain multiple material regions of interest that are coherently connected. In this paper we present an algorithm, applied as a post-processing method, that optimizes local regions of compromised element quality and recovers the original boundary surface facets (triangles) regardless of the original mesh generation strategy. The algorithm carves out a small sub-volume in the vicinity of the missing boundary facet or compromised element, creating a cavity. If the task is to recover a surface boundary facet, a natural exit hole in the cavity will be present; this hole is patched with the missing boundary surface face first, followed by other patches to seal the cavity. If the task is to improve a compromised region, the cavity is already sealed. Every triangular facet of the cavity shell is classified as an active face and can be connected to another shell node, creating a tetrahedron. In the process, the base of the tetrahedron is removed from the active-face list and up to 3 new active faces are created. This methodology is the underpinning of our last-resort method. Each active face can be viewed as the trunk of a tree, and an exhaustive breadth and depth search will identify all possible tetrahedral combinations to uniquely fill the cavity. We have streamlined this recursive process, reducing the time complexity by orders of magnitude. The original surface boundaries (internal and external) are fully restored and the quality of compromised regions improved. PMID:20305743
Lymph node segmentation by dynamic programming and active contours.
Tan, Yongqiang; Lu, Lin; Bonde, Apurva; Wang, Deling; Qi, Jing; Schwartz, Lawrence H; Zhao, Binsheng
2018-03-03
Enlarged lymph nodes are indicators of cancer staging, and the change in their size is a reflection of treatment response. Automatic lymph node segmentation is challenging, as the boundary can be unclear and the surrounding structures complex. This work communicates a new three-dimensional algorithm for the segmentation of enlarged lymph nodes. The algorithm requires a user to draw a region of interest (ROI) enclosing the lymph node. Rays are cast from the center of the ROI, and the intersections of the rays with the boundary of the lymph node form a triangle mesh. The intersection points are determined by dynamic programming. The triangle mesh initializes an active contour, which evolves to a low-energy boundary. Three radiologists independently delineated the contours of 54 lesions from 48 patients. The Dice coefficient was used to evaluate the algorithm's performance. The mean Dice coefficient between the computer and the majority-vote results was 83.2%; the mean Dice coefficients between the three radiologists' manual segmentations were 84.6%, 86.2%, and 88.3%. The performance of this segmentation algorithm suggests its potential clinical value for quantifying enlarged lymph nodes. © 2018 American Association of Physicists in Medicine.
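The ray-casting geometry can be sketched compactly. The sketch below replaces the paper's dynamic-programming boundary search and active-contour refinement with a simple per-ray maximum-drop rule, so it illustrates only the geometry; all names and parameters are illustrative.

```python
import numpy as np

def cast_ray_boundary(image, center, n_rays=36, max_r=40):
    """Cast rays from an ROI center and mark, on each ray, the radius of
    the strongest intensity drop as a candidate boundary point."""
    cy, cx = center
    pts = []
    radii = np.arange(1, max_r)
    for theta in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
        ys = np.clip((cy + radii * np.sin(theta)).astype(int), 0, image.shape[0] - 1)
        xs = np.clip((cx + radii * np.cos(theta)).astype(int), 0, image.shape[1] - 1)
        profile = image[ys, xs]
        k = int(np.argmax(-np.diff(profile)))   # largest drop along the ray
        pts.append((ys[k], xs[k]))
    return pts                                   # vertices of the boundary mesh

img = np.zeros((100, 100)); img[30:70, 30:70] = 1.0   # toy bright lesion
print(cast_ray_boundary(img, center=(50, 50))[:5])
```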
Vertebra identification using template matching modelmp and K-means clustering.
Larhmam, Mohamed Amine; Benjelloun, Mohammed; Mahmoudi, Saïd
2014-03-01
Accurate vertebra detection and segmentation are essential steps in automating the diagnosis of spinal disorders. This study is dedicated to vertebra alignment measurement, the first step in a computer-aided diagnosis tool for cervical spine trauma. Automated vertebral segment alignment determination is a challenging task due to low-contrast imaging and noise, and a software tool for segmenting vertebrae and detecting subluxations has clinical significance. A robust method was developed and tested for cervical vertebra identification and segmentation that extracts the parameters used for vertebra alignment measurement. Our contribution is a novel combination of a template matching method and an unsupervised clustering algorithm. In this method, we build a geometric vertebra mean model. To achieve vertebra detection, manual selection of the region of interest is performed initially on the input image. Subsequent preprocessing is done to enhance image contrast and detect edges. Candidate vertebra localization is then carried out using a modified generalized Hough transform (GHT). Next, an adapted cost function is used to compute local voted centers and filter boundary data. Thereafter, a K-means clustering algorithm is applied to obtain a cluster distribution corresponding to the targeted vertebrae. These clusters are combined with the vote parameters to detect vertebra centers. Rigid segmentation is then carried out using the GHT parameters. Finally, cervical spine curves are extracted to measure vertebra alignment. The proposed approach was successfully applied to a set of 66 high-resolution X-ray images, with robust detection achieved in 97.5% of the 330 tested cervical vertebrae. The automated vertebral identification method is robust to noise and occlusion and presents a first step toward an automated computer-aided diagnosis system for cervical spine trauma detection.
Computation of multi-dimensional viscous supersonic jet flow
NASA Technical Reports Server (NTRS)
Kim, Y. N.; Buggeln, R. C.; Mcdonald, H.
1986-01-01
A new method has been developed for two- and three-dimensional computations of viscous supersonic flows with embedded subsonic regions adjacent to solid boundaries. The approach employs a reduced form of the Navier-Stokes equations, which allows solution as an initial-boundary value problem in space using an efficient noniterative forward-marching algorithm. Numerical instability associated with forward-marching algorithms for flows with embedded subsonic regions is avoided by approximating the reduced form of the Navier-Stokes equations in the subsonic regions of the boundary layers. Supersonic and subsonic portions of the flow field are simultaneously calculated by a consistently split, linearized block-implicit computational algorithm. The results of computations for a series of test cases relevant to internal supersonic flow are presented and compared with data. Comparisons between data and computation are in general excellent, indicating that the computational technique has great promise as a tool for calculating supersonic flow with embedded subsonic regions. Finally, a User's Manual is presented for the computer code used to perform the calculations.
On-line Model Structure Selection for Estimation of Plasma Boundary in a Tokamak
NASA Astrophysics Data System (ADS)
Škvára, Vít; Šmídl, Václav; Urban, Jakub
2015-11-01
Control of the plasma field in a tokamak requires reliable estimation of the plasma boundary. The plasma boundary is given by a complex mathematical model, and the only available measurements are the responses of induction coils around the plasma. For the purpose of boundary estimation, the model can be reduced to a simple linear regression with potentially infinitely many elements. The number of elements must be selected manually, and this choice significantly influences the resulting shape. In this paper, we investigate the use of formal model structure estimation techniques for the problem. Specifically, we formulate a sparse least squares estimator using the automatic relevance determination principle. The resulting algorithm is a repeated evaluation of a least squares problem, which can be computed in real time. Performance of the resulting algorithm is illustrated on simulated data and evaluated with respect to a more detailed and computationally costly model, FREEBIE.
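A minimal sketch of a sparse least-squares iteration with automatic relevance determination, assuming a design matrix X of basis responses and measurements y; the update rules follow the standard ARD scheme, and the fixed noise precision and pruning threshold are simplifying assumptions, not the paper's implementation.

```python
import numpy as np

def ard_least_squares(X, y, n_iter=50, prune=1e6):
    """Iteratively re-estimate per-coefficient precisions alpha; basis
    elements whose alpha grows beyond 'prune' are effectively switched
    off, which performs model-structure selection automatically."""
    n, p = X.shape
    alpha = np.ones(p)                     # per-element precision
    beta = 1.0                             # noise precision (kept fixed here)
    for _ in range(n_iter):
        A = beta * X.T @ X + np.diag(alpha)
        Sigma = np.linalg.inv(A)
        mu = beta * Sigma @ X.T @ y        # posterior mean of coefficients
        gamma = 1.0 - alpha * np.diag(Sigma)
        alpha = gamma / (mu ** 2 + 1e-12)  # standard ARD update
        alpha = np.minimum(alpha, prune)
    return np.where(alpha >= prune, 0.0, mu)

# Toy problem: only 2 of 10 basis elements are truly active
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = 3 * X[:, 1] - 2 * X[:, 6] + 0.01 * rng.normal(size=100)
print(np.round(ard_least_squares(X, y), 2))
```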
NASA Astrophysics Data System (ADS)
Fabbrini, L.; Messina, M.; Greco, M.; Pinelli, G.
2011-10-01
In the context of augmented-integrity inertial navigation systems (INS), recent technological developments have focused on landmark extraction from high-resolution synthetic aperture radar (SAR) images in order to retrieve aircraft position and attitude. This article puts forward a processing chain that can automatically detect linear landmarks in high-resolution SAR images and can be successfully exploited in the context of augmented-integrity INS. The processing chain uses constant false alarm rate (CFAR) edge detectors as the first step of the whole procedure. Our studies confirm that the ratio-of-averages (RoA) edge detector detects object boundaries more effectively than the Student t-test and the Wilcoxon-Mann-Whitney (WMW) test. Nevertheless, all of these statistical edge detectors are sensitive to violations of the assumptions underlying their theory. In addition to presenting a solution to this problem, we put forward a new post-processing algorithm for removing the main false alarms, selecting the most probable edge position, reconstructing broken edges, and finally vectorizing them. SAR images from the "MSTAR clutter" dataset were used to prove the effectiveness of the proposed algorithms.
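The ratio-of-averages test can be sketched for a single (horizontal) orientation; the window size, offsets, and threshold below are illustrative, and a real detector would scan multiple orientations.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def roa_horizontal(image, half=4, thresh=0.7):
    """Ratio-of-averages CFAR edge test: compare the mean intensity in a
    window above each pixel with the mean in a window below. The
    normalized ratio min(r, 1/r) is small where an edge is likely."""
    img = image.astype(float) + 1e-9              # avoid division by zero
    m = uniform_filter(img, size=(half, 2 * half + 1))
    upper = np.roll(m, half // 2 + 1, axis=0)     # mean of rows above
    lower = np.roll(m, -(half // 2 + 1), axis=0)  # mean of rows below
    r = upper / lower
    r = np.minimum(r, 1.0 / r)
    return r < thresh                             # boolean edge map

sar = np.ones((64, 64)); sar[32:, :] = 4.0        # toy two-region amplitude image
edges = roa_horizontal(sar)
print(np.flatnonzero(edges[8:-8].any(axis=1)) + 8)  # interior rows flagged (border wrap ignored)
```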
An Iris Segmentation Algorithm based on Edge Orientation for Off-angle Iris Recognition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karakaya, Mahmut; Barstow, Del R; Santos-Villalobos, Hector J
Iris recognition is known as one of the most accurate and reliable biometrics. However, the accuracy of iris recognition systems depends on the quality of data capture and is negatively affected by several factors such as angle, occlusion, and dilation. In this paper, we present a segmentation algorithm for off-angle iris images that uses edge detection, edge elimination, edge classification, and ellipse fitting techniques. In our approach, we first detect all candidate edges in the iris image by using the Canny edge detector; this collection contains edges from the iris and pupil boundaries as well as eyelashes, eyelids, iris texture, etc. Edge orientation is used to eliminate the edges that cannot be part of the iris or pupil. Then, we classify the remaining edge points into two sets, pupil edges and iris edges. Finally, we randomly generate subsets of iris and pupil edge points, fit ellipses to each subset, select ellipses with similar parameters, and average them to form the resultant ellipses. Based on the results from real experiments, the proposed method shows effective segmentation of off-angle iris images.
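The subset-and-average ellipse fitting step can be illustrated with a generic least-squares conic fit; this is a simplified stand-in for the authors' fitter, and the subset size and similarity tolerance are hypothetical.

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares conic fit a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1."""
    D = np.column_stack([x * x, x * y, y * y, x, y])
    coef, *_ = np.linalg.lstsq(D, np.ones_like(x), rcond=None)
    return coef

def consensus_ellipse(x, y, n_subsets=50, frac=0.5, rng=None):
    """Fit conics to random subsets of edge points, keep mutually similar
    fits, and average their parameter vectors as a crude consensus."""
    rng = rng or np.random.default_rng(0)
    k = max(5, int(frac * len(x)))
    fits = []
    for _ in range(n_subsets):
        idx = rng.choice(len(x), size=k, replace=False)
        fits.append(fit_conic(x[idx], y[idx]))
    fits = np.array(fits)
    med = np.median(fits, axis=0)
    keep = np.linalg.norm(fits - med, axis=1) < np.linalg.norm(med) * 0.2
    return fits[keep].mean(axis=0)          # averaged conic coefficients

# Toy pupil boundary: noisy samples of an ellipse
t = np.linspace(0, 2 * np.pi, 200)
x = 5 + 3.0 * np.cos(t) + 0.05 * np.random.randn(200)
y = 2 + 1.5 * np.sin(t) + 0.05 * np.random.randn(200)
print(np.round(consensus_ellipse(x, y), 3))
```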
From Phenomena to Objects: Segmentation of Fuzzy Objects and its Application to Oceanic Eddies
NASA Astrophysics Data System (ADS)
Wu, Qingling
A challenging image analysis problem that has received limited attention to date is the isolation of fuzzy objects, i.e. those with inherently indeterminate boundaries, from continuous field data. This dissertation seeks to bridge the gap between, on the one hand, the recognized need for Object-Based Image Analysis of fuzzy remotely sensed features, and on the other, the optimization of existing image segmentation techniques for the extraction of more discretely bounded features. Using mesoscale oceanic eddies as a case study of a fuzzy object class evident in Sea Surface Height Anomaly (SSHA) imagery, the dissertation demonstrates, first, that the widely used region-growing and watershed segmentation techniques can be optimized and made comparable in the absence of ground truth data using the principle of parsimony. However, both have significant shortcomings: the region-growing procedure creates contour polygons that do not follow the shape of eddies, while the watershed technique frequently subdivides eddies or groups together separate eddy objects. Second, it was determined that these problems can be remedied with a novel non-Euclidean Voronoi (NEV) tessellation technique. NEV is effective in isolating the extrema associated with eddies in SSHA data while using a non-Euclidean, cost-distance-based procedure (based on cumulative gradients in ocean height) to define the boundaries between fuzzy objects. Using this procedure as the first stage in isolating candidate eddy objects, a novel "region-shrinking" multicriteria eddy identification algorithm was developed that includes consideration of shape and vorticity. Eddies identified by this region-shrinking technique compare favorably with those identified by existing techniques, while simplifying and improving existing automated eddy detection algorithms. It also tends to find a larger number of eddies, as a result of its ability to separate what other techniques identify as connected eddies. The research presented here is of significance not only to eddy research in oceanography, but also to other areas of Earth System Science for which the automated detection of features lacking rigid boundary definitions is important.
NASA Astrophysics Data System (ADS)
Yun, S.; Agram, P. S.; Fielding, E. J.; Simons, M.; Webb, F.; Tanaka, A.; Lundgren, P.; Owen, S. E.; Rosen, P. A.; Hensley, S.
2011-12-01
Under the ARIA (Advanced Rapid Imaging and Analysis) project at JPL and Caltech, we developed a prototype algorithm to detect surface property changes caused by natural or man-made damage using InSAR coherence change. The algorithm was tested on building demolition and construction sites in downtown Pasadena, California, where it performed significantly better than a standard coherence change detection method, producing a 150% higher signal-to-noise ratio. We applied the algorithm to the February 2011 M6.3 Christchurch earthquake in New Zealand, the 2011 M9.0 Tohoku-oki earthquake in Japan, and the 2011 Kirishima volcano eruption in Kyushu, Japan, using ALOS PALSAR data. In the Christchurch area we detected three different types of damage: liquefaction, building collapse, and landslide. The detected liquefaction damage is extensive in the eastern suburbs of Christchurch, showing Bexley as one of the most significantly affected areas, as was reported in the media. Some places show sharp boundaries of liquefaction damage, indicating different types of ground materials that might have been formed by the meandering Avon River in the past. Well-reported damaged buildings such as Christchurch Cathedral, the Canterbury TV building, the Pyne Gould building, and the Cathedral of the Blessed Sacrament were detected by the algorithm, and a landslide in Redcliffs was also clearly detected. These detected damage sites were confirmed with Google Earth images provided by GeoEye. The larger-scale damage pattern also agrees well with the ground-truth damage assessment map compiled by the government of New Zealand, which indicates polygonal zones of 3 different damage levels. The damage proxy map of the Sendai area in Japan shows man-made structure damage due to the tsunami caused by the M9.0 Tohoku-oki earthquake. A long temporal baseline (~2.7 years) and volume scattering caused significant decorrelation in the farmlands and bush forest along the coastline. The 2011 Kirishima volcano eruption deposited substantial ash fall to the southeast of the volcano. The detected ash-fall damage area exactly matches the in-situ measurements made through fieldwork by the Geological Survey of Japan: with a 99th-percentile threshold for damage detection, the periphery of the detected damage area aligns with the contour line of 100 kg/m2 of ash deposit, equivalent to 10 cm of depth assuming a density of 1000 kg/m3 for the ash layer. With a growing number of InSAR missions, rapidly produced, accurate damage assessment maps will help save lives by assisting effective prioritization of rescue operations at an early stage of response, and will significantly improve timely situational awareness for emergency management and national/international assessment and response for recovery planning. Results of this study will also inform the design of future InSAR missions, including the proposed DESDynI.
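At its core, a coherence-change damage proxy compares pre-event and co-event coherence and thresholds the drop; a minimal sketch, using the 99th-percentile threshold mentioned for the ash-fall case (the normalization details of the prototype algorithm are not reproduced):

```python
import numpy as np

def damage_proxy(coh_pre, coh_co, pct=99.0):
    """Flag pixels whose interferometric coherence dropped anomalously
    between a pre-event pair and a co-event pair."""
    drop = coh_pre - coh_co                 # positive where coherence was lost
    thresh = np.percentile(drop, pct)       # e.g., 99th percentile of the scene
    return drop > thresh                    # boolean damage-proxy map

# Toy scene: a damaged patch loses coherence in the co-event pair
pre = np.full((200, 200), 0.8) + 0.05 * np.random.randn(200, 200)
co = pre.copy(); co[80:100, 80:100] -= 0.5
print(damage_proxy(pre, co).sum())          # ~400 flagged pixels
```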
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Forsyth, David S.; Welter, John T.
2016-02-01
To address the data review burden and improve the reliability of the ultrasonic inspection of large composite structures, automated data analysis (ADA) algorithms have been developed to make calls on indications that satisfy the detection criteria and minimize false calls. The original design followed standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. However, certain complex panels with varying shape, ply drops and the presence of bonds can complicate this interpretation process. In this paper, enhancements to the automated data analysis algorithms are introduced to address these challenges. To estimate the thickness of the part and presence of bonds without prior information, an algorithm tracks potential backwall or bond-line signals, and evaluates a combination of spatial, amplitude, and time-of-flight metrics to identify bonded sections. Once part boundaries, thickness transitions and bonded regions are identified, feature extraction algorithms are applied to multiple sets of through-thickness and backwall C-scan images, for evaluation of both first layer through thickness and layers under bonds. ADA processing results are presented for a variety of complex test specimens with inserted materials and other test discontinuities. Lastly, enhancements to the ADA software interface are presented, which improve the software usability for final data review by the inspectors and support the certification process.
Zhang, Yanju; Lameijer, Eric-Wubbo; 't Hoen, Peter A C; Ning, Zemin; Slagboom, P Eline; Ye, Kai
2012-02-15
RNA-seq is a powerful technology for the study of transcriptome profiles that uses deep-sequencing technologies. Moreover, it may be used for cellular phenotyping and help establish the etiology of diseases characterized by abnormal splicing patterns. In RNA-Seq, the exact nature of splicing events is buried in the reads that span exon-exon boundaries. The accurate and efficient mapping of these reads to the reference genome is a major challenge. We developed PASSion, a pattern-growth-algorithm-based pipeline for splice site detection in paired-end RNA-Seq reads. Comparing the performance of PASSion to three existing RNA-Seq analysis pipelines, TopHat, MapSplice and HMMSplicer, revealed that PASSion is competitive with these packages. Moreover, the performance of PASSion is not affected by read length or coverage. It performs better than the other three approaches when detecting junctions in highly abundant transcripts, and it has the ability to detect junctions that do not have known splicing motifs, which cannot be found by the other tools. On the two public RNA-Seq datasets, PASSion predicted ≈ 137,000 and 173,000 splicing events, of which on average 82% are known junctions annotated in the Ensembl transcript database and 18% are novel. In addition, our package can discover differential and shared splicing patterns among multiple samples. The code and utilities can be freely downloaded from https://trac.nbic.nl/passion and ftp://ftp.sanger.ac.uk/pub/zn1/passion.
Improved Boundary Layer Depth Retrievals from MPLNET
NASA Technical Reports Server (NTRS)
Lewis, Jasper R.; Welton, Ellsworth J.; Molod, Andrea M.; Joseph, Everette
2013-01-01
Continuous lidar observations of the planetary boundary layer (PBL) depth have been made at the Micropulse Lidar Network (MPLNET) site in Greenbelt, MD since April 2001. However, because of issues with the operational PBL depth algorithm, the data are not reliable for determining seasonal and diurnal trends. Therefore, an improved PBL depth algorithm has been developed which uses a combination of the wavelet technique and image processing. The new algorithm is less susceptible to contamination by clouds and residual layers and, in general, produces lower PBL depths. A 2010 comparison shows that the operational algorithm overestimates the daily mean PBL depth relative to the improved algorithm (1.85 and 1.07 km, respectively). The improved MPLNET PBL depths are validated using radiosonde comparisons, which suggest the algorithm performs well in determining the depth of a fully developed PBL. A comparison with the Goddard Earth Observing System-version 5 (GEOS-5) model suggests that the model may underestimate the maximum daytime PBL depth by 410 m during the spring and summer; the best agreement between MPLNET and GEOS-5 occurred during the fall, and they differed the most in the winter.
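The wavelet technique for PBL depth is commonly a Haar covariance transform of the backscatter profile, with the PBL top taken at the maximum of the transform; a minimal sketch under that assumption (the dilation and the synthetic profile are illustrative):

```python
import numpy as np

def haar_covariance(profile, z, a=200.0):
    """Haar wavelet covariance transform W(b); the PBL top is the altitude
    b where W is maximal, i.e. the sharpest backscatter decrease."""
    W = np.zeros_like(z)
    for i, b in enumerate(z):
        h = np.where(np.abs(z - b) > a / 2, 0.0,
                     np.where(z <= b, 1.0, -1.0))   # Haar step of width a at b
        W[i] = np.trapz(profile * h, z) / a
    return W

# Synthetic backscatter: well-mixed layer up to ~1 km, clean air above
z = np.linspace(100, 3000, 300)
prof = np.where(z < 1000, 1.0, 0.2) + 0.02 * np.random.randn(z.size)
W = haar_covariance(prof, z)
print("PBL depth ~", z[np.argmax(W)], "m")
```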
Morpheme Matching Based Text Tokenization for a Scarce Resourced Language
Rehman, Zobia; Anwar, Waqas; Bajwa, Usama Ijaz; Xuan, Wang; Chaoying, Zhou
2013-01-01
Text tokenization is a fundamental pre-processing step for almost all the information processing applications. This task is nontrivial for the scarce resourced languages such as Urdu, as there is inconsistent use of space between words. In this paper a morpheme matching based approach has been proposed for Urdu text tokenization, along with some other algorithms to solve the additional issues of boundary detection of compound words, affixation, reduplication, names and abbreviations. This study resulted into 97.28% precision, 93.71% recall, and 95.46% F1-measure; while tokenizing a corpus of 57000 words by using a morpheme list with 6400 entries. PMID:23990871
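The core morpheme-matching idea can be sketched as greedy longest-match segmentation against a morpheme inventory; the tiny Latin-script lexicon below is only a stand-in for the study's 6400-entry Urdu list.

```python
def tokenize(text, morphemes, max_len=8):
    """Greedy longest-match tokenization of space-stripped text against
    a morpheme inventory; unknown characters pass through as singletons."""
    s = text.replace(" ", "")        # spaces in the input are unreliable
    tokens, i = [], 0
    while i < len(s):
        for l in range(min(max_len, len(s) - i), 0, -1):
            if s[i:i + l] in morphemes or l == 1:
                tokens.append(s[i:i + l])
                i += l
                break
    return tokens

# Illustrative Latin-script stand-in for an Urdu morpheme list
lexicon = {"un", "break", "able", "token", "ize"}
print(tokenize("unbreak able tokenize", lexicon))
# -> ['un', 'break', 'able', 'token', 'ize']
```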
Comparison Between CCCM and CloudSat Radar-Lidar (RL) Cloud and Radiation Products
NASA Technical Reports Server (NTRS)
Ham, Seung-Hee; Kato, Seiji; Rose, Fred G.; Sun-Mack, Sunny
2015-01-01
To enhance cloud property retrievals, LaRC and CIRA each developed algorithms that combine properties obtained from the passive and active sensors and the imager in the A-Train satellite constellation. When global cloud fractions are compared, the LaRC-produced CERES-CALIPSO-CloudSat-MODIS (CCCM) products show a larger low-level cloud fraction over the tropical ocean, while the CIRA-produced Radar-Lidar (RL) products show a larger mid-level cloud fraction at high latitudes. The difference in low-level cloud fraction arises from different filtering methods for lidar-detected cloud layers, while the difference in mid-level clouds arises from the different priorities given to cloud boundaries from lidar and radar.
NASA Astrophysics Data System (ADS)
Danala, Gopichandh; Wang, Yunzhi; Thai, Theresa; Gunderson, Camille C.; Moxley, Katherine M.; Moore, Kathleen; Mannel, Robert S.; Cheng, Samuel; Liu, Hong; Zheng, Bin; Qiu, Yuchen
2017-02-01
Accurate tumor segmentation is a critical step in the development of computer-aided detection (CAD) based quantitative image analysis schemes for early-stage prognostic evaluation of ovarian cancer patients. The purpose of this investigation is to assess the efficacy of several different methods for segmenting the metastatic tumors occurring in different organs of ovarian cancer patients. In this study, we developed a segmentation scheme consisting of eight different algorithms, which can be divided into three groups: (1) region growing based methods; (2) Canny operator based methods; and (3) partial differential equation (PDE) based methods. A total of 138 tumors acquired from 30 ovarian cancer patients were used to test the performance of these eight segmentation algorithms. The results demonstrate that each of the tested tumors can be successfully segmented by at least one of the eight algorithms without manual boundary correction. Furthermore, the modified region growing, classical Canny detector, fast marching, and threshold level set algorithms are recommended for future development of ovarian cancer related CAD schemes. This study may provide a meaningful reference for developing novel quantitative image feature analysis schemes to more accurately predict the response of ovarian cancer patients to chemotherapy at an early stage.
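Of the three groups, region growing is the simplest to sketch; a minimal version with an intensity tolerance (the seed and tolerance are hypothetical user inputs):

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=0.1):
    """Grow a region from a seed pixel, accepting 4-connected neighbors
    whose intensity stays within 'tol' of the running region mean."""
    mask = np.zeros(image.shape, dtype=bool)
    q = deque([seed])
    mask[seed] = True
    total, count = float(image[seed]), 1
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
                    and not mask[ny, nx]
                    and abs(image[ny, nx] - total / count) <= tol):
                mask[ny, nx] = True
                total += float(image[ny, nx])
                count += 1
                q.append((ny, nx))
    return mask

img = np.zeros((64, 64)); img[20:40, 20:40] = 1.0       # toy tumor
print(region_grow(img, seed=(30, 30), tol=0.3).sum())   # -> 400 pixels
```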
Automated kidney morphology measurements from ultrasound images using texture and edge analysis
NASA Astrophysics Data System (ADS)
Ravishankar, Hariharan; Annangi, Pavan; Washburn, Michael; Lanning, Justin
2016-04-01
In a typical ultrasound scan, a sonographer measures kidney morphology to assess renal abnormalities. Kidney morphology can also help to discriminate between chronic and acute kidney failure. The caliper placements and volume measurements are often time consuming, and an automated solution would help to improve accuracy, repeatability, and throughput. In this work, we developed an automated kidney morphology measurement solution for long-axis ultrasound scans. Automated kidney segmentation is challenging due to wide variability in kidney shape and size, weak contrast at the kidney boundaries, and the presence of strong edges such as the diaphragm and fat layers. To address these challenges and accurately localize and detect kidney regions, we present a two-step algorithm that makes use of edge and texture information in combination with anatomical cues. First, we use an edge analysis technique to localize the kidney region by matching the edge map with predefined templates. Then, to accurately estimate the kidney morphology, we use textural information in a machine learning framework with Haar features and a gradient boosting classifier. We tested the algorithm on 45 unseen cases, measuring performance against ground truth by computing the Dice overlap and the percentage error in the major and minor axes of the kidney. The algorithm performs successfully on 80% of cases.
Tumor propagation model using generalized hidden Markov model
NASA Astrophysics Data System (ADS)
Park, Sun Young; Sargent, Dustin
2017-02-01
Tumor tracking and progression analysis using medical images is a crucial task for physicians, enabling accurate and efficient treatment planning and monitoring of treatment response. Tumor progression is tracked by manual measurement of tumor growth performed by radiologists. Several methods have been proposed to automate these measurements with segmentation, but many current algorithms are confounded by attached organs and vessels. To address this problem, we present a new generalized tumor propagation model that considers time-series prior images and local anatomical features, using a hierarchical hidden Markov model (HMM) for tumor tracking. First, we apply a multi-atlas segmentation technique to identify organs/sub-organs using pre-labeled atlases. Second, we apply a semi-automatic direct 3D segmentation method to label the initial boundary between the lesion and neighboring structures. Third, we detect vessels in the ROI surrounding the lesion. Finally, we apply the propagation model with the labeled organs and vessels to accurately segment and measure the target lesion. The algorithm has been designed in a general way to be applicable to various body parts and modalities. In this paper, we evaluate the proposed algorithm on lung and lung nodule segmentation and tracking, and report its performance by comparing the longest diameters and nodule volumes using the FDA lung phantom data and a clinical dataset.
NASA Astrophysics Data System (ADS)
Haslauer, C. P.; Bárdossy, A.; Sudicky, E. A.
2017-09-01
This paper demonstrates quantitative reasoning for separating a dataset of spatially distributed variables into different entities and subsequently characterizing their geostatistical properties properly. The main contribution of the paper is a statistically based algorithm that matches the results of manual delineation. This algorithm is based on measured data and is generally applicable. In this paper, it is successfully applied to two datasets of saturated hydraulic conductivity (K) measured at the Borden (Canada) and Lauswiesen (Germany) aquifers. The boundary layer was successfully delineated at Borden despite its mild heterogeneity and the small statistical differences between the delineated units. The methods are verified with the more heterogeneous Lauswiesen aquifer K dataset, where a boundary layer has previously been delineated. The effects of the macro- and microstructure on solute transport behaviour are evaluated using numerical solute tracer experiments. Within the microscale structure, both Gaussian and non-Gaussian models of the spatial dependence of K are evaluated. The effects of heterogeneity on both the macro- and the microscale are analysed using numerical tracer experiments based on four scenarios: including or not including the macroscale structures, and optimally fitting a Gaussian or a non-Gaussian model for the spatial dependence in the microstructure. The paper shows that both micro- and macro-scale structures are important, as solute transport behaviour differs meaningfully in each of the four geostatistical scenarios.
NASA Astrophysics Data System (ADS)
Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.
2016-03-01
Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper, a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measurement, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is followed. The specialized hybrid algorithm allows automatic feature tracking while gathering characterization details about the tracked features. It takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking, and it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications, including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis. The proposed tool, denoted the PSO-Snake model, has already been successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm to calculating the sidereal rotational angular velocity of the solar corona. To validate the results, we compare them with published results obtained manually by an expert.
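The PSO component follows the standard velocity and position updates; a minimal sketch on a generic fitness function (the inertia and acceleration constants are common textbook values, not the paper's, and in PSO-Snake the fitness would be the snake energy):

```python
import numpy as np

def pso_minimize(fitness, dim, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, rng=None):
    """Standard particle swarm optimization: each particle is pulled toward
    its personal best and the global best found so far."""
    rng = rng or np.random.default_rng(0)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(fitness, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.apply_along_axis(fitness, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(np.min(pbest_f))

# A sphere function stands in for the snake energy here.
print(pso_minimize(lambda p: float(np.sum(p ** 2)), dim=2))
```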
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Vincent W.C., E-mail: htvinwu@polyu.edu.hk; Tse, Teddy K.H.; Ho, Cola L.M.
2013-07-01
Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) of the Eclipse treatment planning system and multigrid superposition (MGS) of the XiO treatment planning system are 2 commonly used algorithms. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography scans of 6 patients of each cancer type were used. The same hypothetical treatment plan, using the same machine and treatment prescription, was computed for each case by each planning system using its respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) the soft tissue-bone boundary (Soft/Bone), (5) the soft tissue-air boundary (Soft/Air), and (6) the bone-air boundary (Bone/Air) were measured and compared using the mean absolute percentage error (MAPE), a function of the percentage dose deviations from MC. In addition, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than those of AAA in all types of cancers (p<0.001). With regard to body density combinations, the MAPE of AAA ranged from 1.8% (soft tissue) to 4.9% (Bone/Air), whereas that of MGS ranged from 1.6% (air cavities) to 2.9% (Soft/Bone). The MAPEs of MGS (2.6%±2.1) were significantly lower than those of AAA (3.7%±2.5) across all tissue density combinations (p<0.001). The mean computation time of AAA for all treatment plans was significantly lower than that of MGS (p<0.001). Both the AAA and MGS algorithms demonstrated dose deviations of less than 4.0% in most clinical cases, and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated smaller dose deviations than AAA but required longer computation time.
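The MAPE figure of merit is simply the mean absolute percentage deviation from the MC reference; a one-function sketch with hypothetical point doses:

```python
import numpy as np

def mape(doses_algo, doses_mc):
    """Mean absolute percentage error of an algorithm's point doses
    relative to the Monte Carlo reference doses."""
    algo, mc = np.asarray(doses_algo, float), np.asarray(doses_mc, float)
    return float(np.mean(np.abs(algo - mc) / mc) * 100.0)

# Hypothetical reference-point doses (Gy): MC vs. a model-based algorithm
mc   = [2.00, 1.95, 1.10, 1.80, 1.60, 1.40]
algo = [2.04, 1.92, 1.16, 1.74, 1.63, 1.35]
print(round(mape(algo, mc), 2), "%")   # ~2.96 %
```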
Launch flexibility using NLP guidance and remote wind sensing
NASA Technical Reports Server (NTRS)
Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.
1990-01-01
This paper examines the use of lidar wind measurements in the implementation of a guidance strategy for a nonlinear programming (NLP) launch guidance algorithm. The NLP algorithm uses a B-spline command function representation for flexibility in the design of the guidance steering commands. Using this algorithm, the guidance system solves a two-point boundary value problem at each guidance update, and the specification of a different boundary value problem at each update provides flexibility that can be used in the design of the guidance strategy. The algorithm can use lidar wind measurements for on-pad guidance retargeting and for load-limiting guidance steering commands. Examples presented in the paper use simulated wind updates to correct wind-induced final orbit errors and to adjust the guidance steering commands to limit the product of dynamic pressure and angle-of-attack for launch vehicle load alleviation.
Liu, Yan Xu; Peng, Jian; Sun, Mao Long; Yang, Yang
2016-08-01
The urban growth boundary, with full consideration of regional ecological constraints, can effectively control disordered urban sprawl; it is thus a significant planning concept integrating regional ecological protection and urban construction. Finding the preferred locations for urban construction, while controlling ecological risk, has always been the core of urban growth boundary delimitation. This study selected Taibai Lake New District in Jining City as a case area and analyzed the ecological suitability scenario with an ordered weighted averaging algorithm. Surface temperature retrieval and rain-flooding simulation were used to identify the spatial ecological risk. In the ecological suitability results, the suitable construction zone accounted for 25.3% of the total area, the unsuitable construction zone accounted for 20.4%, and the remaining area fell in the limited construction zone. Excluding the ecological-risk control region, the flexible urban growth boundary covered 2975 hm² in the near term and 6754 hm² in the long term. The final inflexible urban growth boundary covered 9405 hm². As a new method, ordered weighted averaging scenario analysis combined with ecological risk modeling can provide effective support for urban growth boundary identification.
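Ordered weighted averaging aggregates suitability criteria after sorting them, so the weights encode risk attitude rather than criterion identity; a minimal sketch with hypothetical criteria and weights:

```python
import numpy as np

def owa(criteria, weights):
    """Ordered weighted averaging: sort each cell's criterion scores in
    descending order, then take the dot product with the OWA weights."""
    c = np.sort(np.asarray(criteria, float), axis=-1)[..., ::-1]
    return c @ np.asarray(weights, float)

# Three suitability criteria per cell (e.g., slope, land cover, flood risk)
cells = np.array([[0.9, 0.4, 0.7],
                  [0.2, 0.8, 0.5]])
print(owa(cells, [0.2, 0.3, 0.5]))   # weights emphasizing the lowest scores
```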
SLIC superpixels compared to state-of-the-art superpixel methods.
Achanta, Radhakrishna; Shaji, Appu; Smith, Kevin; Lucchi, Aurelien; Fua, Pascal; Süsstrunk, Sabine
2012-11-01
Computer vision applications have come to rely increasingly on superpixels in recent years, but it is not always clear what constitutes a good superpixel algorithm. In an effort to understand the benefits and drawbacks of existing methods, we empirically compare five state-of-the-art superpixel algorithms for their ability to adhere to image boundaries, speed, memory efficiency, and their impact on segmentation performance. We then introduce a new superpixel algorithm, simple linear iterative clustering (SLIC), which adapts a k-means clustering approach to efficiently generate superpixels. Despite its simplicity, SLIC adheres to boundaries as well as or better than previous methods. At the same time, it is faster and more memory efficient, improves segmentation performance, and is straightforward to extend to supervoxel generation.
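For reference, a minimal usage sketch of SLIC as implemented in scikit-image; the parameter values are illustrative, not taken from the paper:

```python
# SLIC performs localized k-means clustering in the joint (L,a,b,x,y)
# color-position space; compactness trades color adherence for spatial
# regularity.
from skimage import data, segmentation

img = data.astronaut()                      # sample RGB image
labels = segmentation.slic(img, n_segments=250, compactness=10.0,
                           start_label=1)
print(f"{labels.max()} superpixels")

# Overlay superpixel boundaries for visual inspection.
overlay = segmentation.mark_boundaries(img, labels)
```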
NASA Astrophysics Data System (ADS)
Sudicky, E. A.; Unger, A. J. A.; Lacombe, S.
1995-02-01
A noniterative algorithm for handling prescribed well bore boundary conditions while pumping or injecting fluid in a three-dimensional heterogeneous aquifer is described. The algorithm is formulated by superimposing conductive one-dimensional line elements representing the well screen onto the three-dimensional matrix elements representing the aquifer. Storage in the well casing is also naturally accommodated by the superposition of the line elements. The numerical algorithm is verified by comparison with results obtained from the solution of Papadopulos and Cooper (1967). A large-scale example problem involving groundwater extraction from a partially penetrating pumping well located in a highly heterogeneous confined aquifer is presented to demonstrate the utility of the approach.
Automated Segmentation of Nuclei in Breast Cancer Histopathology Images.
Paramanandam, Maqlin; O'Byrne, Michael; Ghosh, Bidisha; Mammen, Joy John; Manipadam, Marie Therese; Thamburaj, Robinson; Pakrashi, Vikram
2016-01-01
The process of nuclei detection in high-grade breast cancer images is quite challenging for image processing techniques due to certain heterogeneous characteristics of cancer nuclei, such as enlarged and irregularly shaped nuclei, highly coarse chromatin marginalized to the nuclei periphery, and visible nucleoli. Recent reviews state that existing techniques show appreciable segmentation accuracy on breast histopathology images whose nuclei are dispersed and regular in texture and shape; however, typical cancer nuclei are often clustered and have irregular texture and shape properties. This paper proposes a novel segmentation algorithm for detecting individual nuclei from Hematoxylin and Eosin (H&E) stained breast histopathology images. This detection framework estimates a nuclei saliency map using tensor voting, followed by boundary extraction of the nuclei on the saliency map using a Loopy Belief Propagation (LBP) algorithm on a Markov Random Field (MRF). The method was tested on both whole-slide images and frames of breast cancer histopathology images. Experimental results demonstrate high segmentation performance, with good precision, recall, and Dice-coefficient rates, upon testing high-grade breast cancer images containing several thousand nuclei. In addition to its performance on the highly complex images presented in this paper, this method also gave appreciable results in comparison with two recently published methods, Wienert et al. (2012) and Veta et al. (2013), which were tested using their own datasets.
Feng, Haihua; Karl, William Clem; Castañon, David A
2008-05-01
In this paper, we develop a new unified approach for laser radar range anomaly suppression, range profiling, and segmentation. This approach combines an object-based hybrid scene model for representing the range distribution of the field and a statistical mixture model for the range data measurement noise. The image segmentation problem is formulated as a minimization problem which jointly estimates the target boundary together with the target region range variation and background range variation directly from the noisy and anomaly-filled range data. This formulation allows direct incorporation of prior information concerning the target boundary, target ranges, and background ranges into an optimal reconstruction process. Curve evolution techniques and a generalized expectation-maximization algorithm are jointly employed as an efficient solver for minimizing the objective energy, resulting in a coupled pair of object and intensity optimization tasks. The method directly and optimally extracts the target boundary, avoiding a suboptimal two-step process involving image smoothing followed by boundary extraction. Experiments are presented demonstrating that the proposed approach is robust to anomalous pixels (missing data) and capable of producing accurate estimation of the target boundary and range values from noisy data.
Computer simulation of solutions of polyharmonic equations in plane domain
NASA Astrophysics Data System (ADS)
Kazakova, A. O.
2018-05-01
A systematic study of plane problems of the theory of polyharmonic functions is presented. A method of reducing boundary value problems for polyharmonic functions to a system of integral equations on the boundary of the domain is given, and a numerical algorithm for simulating solutions of this system is suggested. Particular attention is paid to the numerical solution of the main problems, in which the values of the function and its derivatives are prescribed on the boundary. Test examples are considered that confirm the effectiveness and accuracy of the suggested algorithm.
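For orientation, the classical polyharmonic boundary value problem referred to here can be written as follows; this is a standard textbook formulation, not a quotation from the paper:

```latex
% Polyharmonic (n-harmonic) boundary value problem in a plane domain D:
\Delta^{n} u = 0 \quad \text{in } D \subset \mathbb{R}^{2},
\qquad
\left.\frac{\partial^{k} u}{\partial \nu^{k}}\right|_{\partial D} = f_{k},
\quad k = 0, 1, \dots, n-1.
% Here \nu is the outward normal and the f_k are prescribed boundary data;
% n = 1 gives the harmonic case, n = 2 the biharmonic case.
```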
Conservative treatment of boundary interfaces for overlaid grids and multi-level grid adaptations
NASA Technical Reports Server (NTRS)
Moon, Young J.; Liou, Meng-Sing
1989-01-01
Conservative algorithms for boundary interfaces of overlaid grids are presented. The basic method is zeroth order, and is extended to a higher order method using interpolation and subcell decomposition. The present method, strictly based on a conservative constraint, is tested with overlaid grids for various applications of unsteady and steady supersonic inviscid flows with strong shock waves. The algorithm is also applied to a multi-level grid adaptation in which the next level finer grid is overlaid on the coarse base grid with an arbitrary orientation.
Geometric and shading correction for images of printed materials using boundary interpolation.
Brown, Michael S; Tsoi, Yau-Chat
2006-06-01
A novel technique that uses boundary interpolation to correct geometric distortion and shading artifacts present in images of printed materials is presented. Unlike existing techniques, our algorithm can simultaneously correct a variety of geometric distortions, including skew, fold distortion, binder curl, and combinations of these. In addition, the same interpolation framework can be used to estimate the intrinsic illumination component of the distorted image to correct shading artifacts. We detail our algorithm for geometric and shading correction and demonstrate its usefulness on real-world and synthetic data.
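The core operation is interpolating the page's interior from its four imaged boundary curves. A bilinearly blended Coons patch is one standard way to do this; the sketch below is offered as an illustration of boundary interpolation, not as the authors' exact formulation:

```python
import numpy as np

def coons_patch(top, bottom, left, right, u, v):
    """Bilinearly blended Coons patch: interpolates the interior of a
    surface from its four boundary curves (each an (N,2) point array
    parameterized on [0,1]; bottom/top run left-to-right, left/right
    run bottom-to-top).  u, v are scalars in [0,1]."""
    def ev(curve, t):                       # linear interp along a curve
        idx = t * (len(curve) - 1)
        i0, i1 = int(np.floor(idx)), int(np.ceil(idx))
        a = idx - i0
        return (1 - a) * curve[i0] + a * curve[i1]

    lu = (1 - v) * ev(bottom, u) + v * ev(top, u)        # ruled in u
    lv = (1 - u) * ev(left, v) + u * ev(right, v)        # ruled in v
    blu = ((1 - u) * (1 - v) * bottom[0] + u * (1 - v) * bottom[-1]
           + (1 - u) * v * top[0] + u * v * top[-1])     # corner correction
    return lu + lv - blu
```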
Fabelo, Himar; Ortega, Samuel; Ravi, Daniele; Kiran, B Ravi; Sosa, Coralia; Bulters, Diederik; Callicó, Gustavo M; Bulstrode, Harry; Szolna, Adam; Piñeiro, Juan F; Kabwama, Silvester; Madroñal, Daniel; Lazcano, Raquel; J-O'Shanahan, Aruma; Bisshopp, Sara; Hernández, María; Báez, Abelardo; Yang, Guang-Zhong; Stanciulescu, Bogdan; Salvador, Rubén; Juárez, Eduardo; Sarmiento, Roberto
2018-01-01
Surgery for brain cancer is a major problem in neurosurgery. The diffuse infiltration of these tumors into the surrounding normal brain makes their accurate identification by the naked eye difficult. Since surgery is the common treatment for brain cancer, an accurate radical resection of the tumor leads to improved survival rates for patients. However, the identification of the tumor boundaries during surgery is challenging. Hyperspectral imaging is a non-contact, non-ionizing and non-invasive technique suitable for medical diagnosis. This study presents the development of a novel classification method that takes into account the spatial and spectral characteristics of the hyperspectral images to help neurosurgeons accurately determine the tumor boundaries in surgical time during the resection, avoiding excessive excision of normal tissue or unintentionally leaving residual tumor. The proposed algorithm consists of a hybrid framework that combines both supervised and unsupervised machine learning methods. First, a supervised pixel-wise classification using a Support Vector Machine classifier is performed. The generated classification map is spatially homogenized using a one-band representation of the HS cube, employing the Fixed Reference t-Stochastic Neighbors Embedding dimensionality reduction algorithm, and performing a K-Nearest Neighbors filtering. The information generated by the supervised stage is combined with a segmentation map obtained via unsupervised clustering employing a Hierarchical K-Means algorithm. The fusion is performed using a majority voting approach that associates each cluster with a certain class. To evaluate the proposed approach, five hyperspectral images of the surface of the brain affected by glioblastoma tumor in vivo, from five different patients, have been used. The final classification maps obtained have been analyzed and validated by specialists. These preliminary results are promising, obtaining an accurate delineation of the tumor area.
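A condensed sketch of the hybrid supervised/unsupervised pipeline described above, using scikit-learn stand-ins: plain K-Means replaces the Hierarchical K-Means, the FR-t-SNE/KNN spatial homogenization step is omitted, and class labels are assumed to be small non-negative integers:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import KMeans

def classify_hs_cube(X, train_idx, y_train, n_clusters=24):
    """X: (n_pixels, n_bands) flattened hyperspectral cube;
    train_idx/y_train: annotated training pixels and labels."""
    # 1) supervised pixel-wise classification (SVM stage)
    svm = SVC(kernel="rbf").fit(X[train_idx], y_train)
    pixel_labels = svm.predict(X)

    # 2) unsupervised segmentation of the same pixels (clustering stage)
    clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)

    # 3) majority voting: each cluster takes its dominant SVM class
    fused = np.empty_like(pixel_labels)
    for c in range(n_clusters):
        mask = clusters == c
        if mask.any():
            fused[mask] = np.bincount(pixel_labels[mask]).argmax()
    return fused
```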
Siri, Sangeeta K; Latte, Mrityunjaya V
2017-11-01
Many different diseases can occur in the liver, including infections such as hepatitis, cirrhosis, cancer, and the adverse effects of medication or toxins. The foremost stage for computer-aided diagnosis of the liver is the identification of the liver region. Liver segmentation algorithms extract the liver image from scan images, which helps in virtual surgery simulation, speeds up diagnosis, and supports accurate investigation and surgery planning. Existing liver segmentation algorithms try to extract the exact liver image from abdominal Computed Tomography (CT) scan images. It is an open problem because of ambiguous boundaries, large variation in intensity distribution, variability of liver geometry from patient to patient, and the presence of noise. A novel approach is proposed to meet the challenges in extracting the exact liver image from abdominal CT scan images. The proposed approach consists of three phases: (1) pre-processing, (2) CT scan image transformation to a Neutrosophic Set (NS), and (3) post-processing. In pre-processing, noise is removed by a median filter. A "new structure" is designed to transform a CT scan image into the neutrosophic domain, which is expressed using three membership subsets: the True subset (T), the False subset (F), and the Indeterminacy subset (I). This transform approximately extracts the liver image structure. In the post-processing phase, a morphological operation is performed on the indeterminacy subset (I) and the Chan-Vese (C-V) model is applied with detection of an initial contour within the liver without user intervention. This results in liver boundary identification with high accuracy. Experiments show that the proposed method is effective, robust, and comparable with existing algorithms for liver segmentation of CT scan images. Copyright © 2017 Elsevier B.V. All rights reserved.
A novel method for retinal optic disc detection using bat meta-heuristic algorithm.
Abdullah, Ahmad S; Özok, Yasa Ekşioğlu; Rahebi, Javad
2018-05-09
Normally, the optic disc detection of retinal images is useful during the treatment of glaucoma and diabetic retinopathy. In this paper, a novel preprocessing of the retinal image with bat algorithm (BA) optimization is proposed to detect the optic disc of the retinal image. Since the optic disc is a bright area and the vessels that emerge from it are dark, the selected segments are regions with a great diversity of intensity, which does not usually happen in pathological regions. First, in the preprocessing stage, the image is converted into a gray image using a gray-scale conversion, and then morphological operations are implemented in order to remove dark elements, such as blood vessels, from the images. In the next stage, a bat algorithm (BA) is used to find the optimum threshold value for the optic disc location. In order to improve the accuracy and to obtain the best result for the segmented optic disc, an ellipse fitting approach is used in the last stage to enhance and smooth the segmented optic disc boundary region. The ellipse fitting is carried out using the least-squares distance approach. The efficiency of the proposed method was tested on six publicly available datasets: MESSIDOR, DRIVE, DIARETDB1, DIARETDB0, STARE, and DRIONS-DB. The optic disc segmentation average overlap and accuracy were in the ranges of 78.5-88.2% and 96.6-99.91%, respectively, across these six databases. The optic disc of the retinal images was segmented in less than 2.1 s per image. The use of the proposed method improved the optic disc segmentation results for healthy and pathological retinal images in a low computation time.
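A toy sketch of how a bat-style metaheuristic can search for a single optimum threshold; the fitness here is Otsu's between-class variance, and the loudness and pulse-rate updates of the full bat algorithm are omitted, so this is a simplification rather than the paper's exact implementation:

```python
import numpy as np

def bat_threshold(hist, n_bats=20, n_iter=100, fmin=0.0, fmax=2.0):
    """Simplified bat-algorithm search for a gray-level threshold that
    maximizes Otsu's between-class variance on a 256-bin histogram."""
    p = hist / hist.sum()
    levels = np.arange(256)

    def fitness(t):                      # between-class variance at t
        t = int(np.clip(t, 1, 254))
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            return 0.0
        m0 = (levels[:t] * p[:t]).sum() / w0
        m1 = (levels[t:] * p[t:]).sum() / w1
        return w0 * w1 * (m0 - m1) ** 2

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 255, n_bats)      # bat positions (thresholds)
    v = np.zeros(n_bats)                 # bat velocities
    best = x[np.argmax([fitness(t) for t in x])]
    for _ in range(n_iter):
        f = fmin + (fmax - fmin) * rng.random(n_bats)  # pulse frequencies
        v += (x - best) * f              # canonical BA velocity update
        x = np.clip(x + v, 0, 255)
        for i in range(n_bats):          # greedy acceptance of improvements
            if fitness(x[i]) > fitness(best):
                best = x[i]
    return int(best)
```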
Renversade, Loïc; Quey, Romain; Ludwig, Wolfgang; Menasche, David; Maddali, Siddharth; Suter, Robert M; Borbély, András
2016-01-01
The grain structure of an Al-0.3 wt%Mn alloy deformed to 1% strain was reconstructed using diffraction contrast tomography (DCT) and high-energy diffraction microscopy (HEDM). 14 equally spaced HEDM layers were acquired, and their exact location within the DCT volume was determined using a generic algorithm minimizing a function of the local disorientations between the two data sets. The microstructures were then compared in terms of the mean crystal orientations and shapes of the grains. The comparison shows that DCT can detect subgrain boundaries with disorientations as low as 1° and that HEDM and DCT grain boundaries are on average 4 µm apart. The results are important for studies targeting the determination of grain volume. For the case of a polycrystal with an average grain size of about 100 µm, a relative deviation of at most about 10% was found between the two techniques.
NASA Astrophysics Data System (ADS)
Melis, M. T.; Mundula, F.; Dessì, F.; Cioni, R.; Funedda, A.
2014-09-01
Unequivocal delimitation of landforms is an important issue for different purposes, from science-driven morphometric analysis to legal issues related to land conservation. This study is aimed at giving a new contribution to the morphometric approach for the delineation of the boundaries of volcanic edifices, applied to 13 monogenetic volcanoes (scoria cones) related to the Pliocene-Pleistocene volcanic cycle in Sardinia (Italy). External boundary delimitation of the edifices is discussed based on an integrated methodology using automatic elaboration of digital elevation models together with geomorphological and geological observations. Different elaborations of surface slope and profile curvature have been proposed and discussed; among them, two algorithms based on simple mathematical functions combining slope and profile curvature well fit the requirements of this study. One of these algorithms is a modification of a function introduced by Grosse et al. (2011), which performs better at recognizing and tracing the boundary between the volcanic scoria cone and its basement. Although the geological constraints still drive the final decision, the proposed method improves the existing tools for a semi-automatic tracing of the boundaries.
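A minimal sketch of the slope and profile-curvature ingredients such boundary functions combine, computed from a gridded DEM with NumPy; the combined score and its exponent are illustrative, not the exact function of the paper:

```python
import numpy as np

def slope_and_boundary_score(dem, cell=10.0, k=1.0):
    """Compute slope (deg) and profile curvature from a DEM grid and
    combine them into a simple boundary score (high where steep slopes
    meet strong concavity, as at a cone/basement break in slope)."""
    gy, gx = np.gradient(dem, cell)              # first derivatives
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))
    gyy, _ = np.gradient(gy, cell)
    gxy, gxx = np.gradient(gx, cell)
    # profile curvature: curvature along the direction of steepest slope
    g2 = gx**2 + gy**2 + 1e-12
    prof = (gxx * gx**2 + 2 * gxy * gx * gy + gyy * gy**2) / (g2 * (1 + g2) ** 1.5)
    return slope, slope * np.abs(prof) ** k      # high score ~ edifice boundary
```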
NASA Astrophysics Data System (ADS)
Melis, M. T.; Mundula, F.; Dessì, F.; Cioni, R.; Funedda, A.
2014-05-01
Unequivocal delimitation of landforms is an important issue for different purposes, from science-driven morphometric analysis to legal issues related to land conservation. This study is aimed at giving a new contribution to the morphometric approach for the delineation of the boundaries of volcanic edifices, applied to 13 monogenetic volcanoes (scoria cones) related to the Pliocene-Pleistocene volcanic cycle in Sardinia (Italy). External boundary delimitation of the edifices is discussed based on an integrated methodology using automatic elaboration of digital elevation models together with geomorphological and geological observations. Different elaborations of surface slope and profile curvature have been proposed and discussed; among them, two algorithms based on simple mathematical functions combining slope and profile curvature well fit the requirements of this study. One of these algorithms is a modification of a function already discussed by Grosse et al. (2011), which performs better at recognizing and tracing the boundary between the volcanic scoria cone and its basement. Although the geological constraints still drive the final decision, the proposed method improves the existing tools for a semi-automatic tracing of the boundaries.
NASA Astrophysics Data System (ADS)
Zheng, Chang-Jun; Chen, Hai-Bo; Chen, Lei-Lei
2013-04-01
This paper presents a novel wideband fast multipole boundary element approach to 3D half-space/plane-symmetric acoustic wave problems. The half-space fundamental solution is employed in the boundary integral equations so that the tree structure required in the fast multipole algorithm is constructed for the boundary elements in the real domain only. Moreover, a set of symmetric relations between the multipole expansion coefficients of the real and image domains are derived, and the half-space fundamental solution is modified for the purpose of applying such relations to avoid calculating, translating and saving the multipole/local expansion coefficients of the image domain. The wideband adaptive multilevel fast multipole algorithm associated with the iterative solver GMRES is employed so that the present method is accurate and efficient for both low- and high-frequency acoustic wave problems. As for exterior acoustic problems, the Burton-Miller method is adopted to tackle the fictitious eigenfrequency problem involved in the conventional boundary integral equation method. Details on the implementation of the present method are described, and numerical examples are given to demonstrate its accuracy and efficiency.
Immersed boundary methods for simulating fluid-structure interaction
NASA Astrophysics Data System (ADS)
Sotiropoulos, Fotis; Yang, Xiaolei
2014-02-01
Fluid-structure interaction (FSI) problems commonly encountered in engineering and biological applications involve geometrically complex flexible or rigid bodies undergoing large deformations. Immersed boundary (IB) methods have emerged as a powerful simulation tool for tackling such flows due to their inherent ability to handle arbitrarily complex bodies without the need for expensive and cumbersome dynamic re-meshing strategies. Depending on the approach such methods adopt to satisfy boundary conditions on solid surfaces they can be broadly classified as diffused and sharp interface methods. In this review, we present an overview of the fundamentals of both classes of methods with emphasis on solution algorithms for simulating FSI problems. We summarize and juxtapose different IB approaches for imposing boundary conditions, efficient iterative algorithms for solving the incompressible Navier-Stokes equations in the presence of dynamic immersed boundaries, and strong and loose coupling FSI strategies. We also present recent results from the application of such methods to study a wide range of problems, including vortex-induced vibrations, aquatic swimming, insect flying, human walking and renewable energy. Limitations of such methods and the need for future research to mitigate them are also discussed.
Recursive recovery of Markov transition probabilities from boundary value data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patch, Sarah Kathyrn
1994-04-01
In an effort to mathematically describe the anisotropic diffusion of infrared radiation in biological tissue Gruenbaum posed an anisotropic diffusion boundary value problem in 1989. In order to accommodate anisotropy, he discretized the temporal as well as the spatial domain. The probabilistic interpretation of the diffusion equation is retained; radiation is assumed to travel according to a random walk (of sorts). In this random walk the probabilities with which photons change direction depend upon their previous as well as present location. The forward problem gives boundary value data as a function of the Markov transition probabilities. The inverse problem requires finding the transition probabilities from boundary value data. Problems in the plane are studied carefully in this thesis. Consistency conditions amongst the data are derived. These conditions have two effects: they prohibit inversion of the forward map but permit smoothing of noisy data. Next, a recursive algorithm which yields a family of solutions to the inverse problem is detailed. This algorithm takes advantage of all independent data and generates a system of highly nonlinear algebraic equations. Pluecker-Grassmann relations are instrumental in simplifying the equations. The algorithm is used to solve the 4 x 4 problem. Finally, the smallest nontrivial problem in three dimensions, the 2 x 2 x 2 problem, is solved.
Szabo, Linda; Morey, Robert; Palpant, Nathan J; Wang, Peter L; Afari, Nastaran; Jiang, Chuan; Parast, Mana M; Murry, Charles E; Laurent, Louise C; Salzman, Julia
2015-06-16
The pervasive expression of circular RNA is a recently discovered feature of gene expression in highly diverged eukaryotes, but the functions of most circular RNAs are still unknown. Computational methods to discover and quantify circular RNA are essential. Moreover, discovering biological contexts where circular RNAs are regulated will shed light on potential functional roles they may play. We present a new algorithm that increases the sensitivity and specificity of circular RNA detection by discovering and quantifying circular and linear RNA splicing events at both annotated and un-annotated exon boundaries, including intergenic regions of the genome, with high statistical confidence. Unlike approaches that rely on read count and exon homology to determine confidence in prediction of circular RNA expression, our algorithm uses a statistical approach. Using our algorithm, we unveiled striking induction of general and tissue-specific circular RNAs, including in the heart and lung, during human fetal development. We discover regions of the human fetal brain, such as the frontal cortex, with marked enrichment for genes where circular RNA isoforms are dominant. The vast majority of circular RNA production occurs at major spliceosome splice sites; however, we find the first examples of developmentally induced circular RNAs processed by the minor spliceosome, and an enriched propensity of minor spliceosome donors to splice into circular RNA at un-annotated, rather than annotated, exons. Together, these results suggest a potentially significant role for circular RNA in human development.
M, Soorya; Issac, Ashish; Dutta, Malay Kishore
2018-02-01
Glaucoma is an ocular disease which can cause irreversible blindness. The disease is currently identified manually using specialized equipment operated by optometrists. The proposed work aims to provide an efficient imaging solution which can help automate the process of glaucoma diagnosis from digital fundus images using computer vision techniques. The proposed method segments the optic disc using a geometrical-feature-based strategic framework which improves the detection accuracy and makes the algorithm invariant to illumination and noise. Novel methods based on corner thresholding and point contour joining are proposed to construct smooth contours of the optic disc. Based on a clinical approach as used by ophthalmologists, the proposed algorithm tracks blood vessels inside the disc region, identifies the points at which vessels first bend from the optic disc boundary, and connects them to obtain the contours of the optic cup. The proposed method has been compared with the ground truth marked by medical experts, and the similarity parameters used to determine the performance of the proposed method have yielded high segmentation similarity. The proposed method has achieved a macro-averaged F-score of 0.9485 and accuracy of 97.01% in correctly classifying fundus images. The proposed method is clinically significant and can be used in real time for glaucoma screening over a large population. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Goodrich, John W.
2017-01-01
This paper presents results from numerical experiments for controlling the error caused by a damping layer boundary treatment when simulating the propagation of an acoustic signal from a continuous pressure source. The computations are with the 2D Linearized Euler Equations (LEE) for both a uniform mean flow and a steady parallel jet. The numerical experiments are with algorithms that are third, fifth, seventh and ninth order accurate in space and time. The numerical domain is enclosed in a damping layer boundary treatment. The damping is implemented in a time accurate manner, with simple polynomial damping profiles of second, fourth, sixth and eighth power. At the outer boundaries of the damping layer the propagating solution is uniformly set to zero. The complete boundary treatment is remarkably simple and intrinsically independent of the dimension of the spatial domain. The reported results show the relative effect on the error from the boundary treatment by varying the damping layer width, damping profile power, damping amplitude, propagation time, grid resolution and algorithm order. The issue that is being addressed is not the accuracy of the numerical solution when compared to a mathematical solution, but the effect of the complete boundary treatment on the numerical solution, and to what degree the error in the numerical solution from the complete boundary treatment can be controlled. We report maximum relative absolute errors from just the boundary treatment that range from O[10^-2] to O[10^-7].
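A minimal sketch of the polynomial damping profile described above; the layer width, amplitude, and power are the tunable parameters the experiments vary:

```python
import numpy as np

def damping_profile(x, x0, width, amp=1.0, power=4):
    """Polynomial damping profile for an absorbing boundary layer:
    zero in the interior, rising as ((|x| - x0)/width)**power inside
    the layer.  power = 2, 4, 6, 8 matches the profiles varied in the
    study; amp and width are the other layer parameters."""
    s = np.clip((np.abs(x) - x0) / width, 0.0, 1.0)
    return amp * s**power

# Applied as an extra damping term each time step: u -= dt * sigma * u
x = np.linspace(-1.2, 1.2, 241)      # interior |x| <= 1, layer width 0.2
sigma = damping_profile(x, x0=1.0, width=0.2, power=4)
```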
Greenhouse Gas Concentration Data Recovery Algorithm for a Low Cost, Laser Heterodyne Radiometer
NASA Astrophysics Data System (ADS)
Miller, J. H.; Melroy, H.; Ott, L.; McLinden, M. L.; Holben, B. N.; Wilson, E. L.
2012-12-01
The goal of a coordinated effort between groups at GWU and NASA GSFC is the development of a low-cost, global, surface instrument network that continuously monitors three key carbon cycle gases in the atmospheric column: carbon dioxide (CO2), methane (CH4), and carbon monoxide (CO), as well as oxygen (O2) for atmospheric pressure profiles. The network will implement a low-cost, miniaturized, laser heterodyne radiometer (mini-LHR) that has recently been developed at NASA Goddard Space Flight Center. This mini-LHR is designed to operate in tandem with the passive aerosol sensor currently used in AERONET (a well-established network of more than 450 ground aerosol monitoring instruments worldwide), and could be rapidly deployed into this established global network. Laser heterodyne radiometry is a well-established technique for detecting weak signals that was adapted from radio receiver technology. Here, a weak light signal that has undergone absorption by atmospheric components is mixed with light from a distributed feedback (DFB) telecommunications laser on a single-mode optical fiber. The RF component of the signal is detected on a fast photoreceiver. Scanning the laser through an absorption feature in the infrared results in a scanned heterodyne signal in the RF. Deconvolution of this signal through the retrieval algorithm allows for the extraction of altitude contributions to the column signal. The retrieval algorithm is based on a spectral simulation program, SpecSyn, developed at GWU for high-resolution infrared spectroscopies. Variations in pressure, temperature, composition, and refractive index through the atmosphere, which are all functions of latitude, longitude, time of day, altitude, etc., are modeled using algorithms developed in the MODTRAN program, developed in part by the US Air Force Research Laboratory. In these calculations the atmosphere is modeled as a series of spherically symmetric shells with boundaries specified at defined altitudes. Temperature, pressure, and species mixing ratios are defined at these boundaries. Between the boundaries, temperature is assumed to vary linearly with altitude while pressure (and thus gas density) varies exponentially. The observed spectrum at the LHR instrument will be the integration of the contributions along this light path. For any absorption measurement the signal at a particular spectral frequency is a linear combination of spectral line contributions from several species. For each species that might absorb in a spectral region, we have pre-calculated its contribution as a function of temperature and pressure. The integrated path absorption spectrum can then be calculated using the initial sun angle (from location, date, and time) and assumptions about pressure and temperature profiles from an atmospheric model. The modeled spectrum is iterated to match the experimental observation using standard multilinear regression techniques. In addition to the layer concentrations, the numerical technique also provides uncertainty estimates for these quantities as well as dependencies on assumptions inherent in the atmospheric models.
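The per-iteration fitting step described above reduces to linear least squares over precomputed per-species absorption templates. A sketch with synthetic placeholder arrays (not real SpecSyn/MODTRAN output):

```python
import numpy as np

# The measured spectrum is modeled as a linear combination of
# precomputed per-species absorption templates; the scale factors
# recovered by least squares are the species contributions.
n_points, species = 400, ["CO2", "CH4", "CO", "O2"]
templates = np.random.rand(n_points, len(species))   # columns: one per gas
measured = templates @ np.array([0.9, 1.1, 0.5, 1.0]) \
           + 0.01 * np.random.randn(n_points)        # synthetic "observation"

coeffs, *_ = np.linalg.lstsq(templates, measured, rcond=None)
for name, c in zip(species, coeffs):
    print(f"{name}: scale factor {c:.3f}")
```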
Robust moving mesh algorithms for hybrid stretched meshes: Application to moving boundaries problems
NASA Astrophysics Data System (ADS)
Landry, Jonathan; Soulaïmani, Azzeddine; Luke, Edward; Ben Haj Ali, Amine
2016-12-01
A robust Mesh-Mover Algorithm (MMA) approach is designed to adapt the meshes of moving boundaries problems. A new methodology is developed from the best combination of well-known algorithms in order to preserve the quality of the initial meshes. In most situations, MMAs distribute mesh deformation while preserving a good mesh quality. However, invalid meshes are generated when the motion is complex and/or involves multiple bodies. After studying a few MMA limitations, we propose the following approach: use the Inverse Distance Weighting (IDW) function to produce the displacement field, then apply the Geometric Element Transformation Method (GETMe) smoothing algorithms to improve the resulting mesh quality, and use an untangler to revert negative elements. The proposed approach has proven efficient at adapting meshes for various realistic aerodynamic motions: a symmetric wing that has suffered large tip bending and twisting, and the high-lift components of a swept wing that has moved to different flight stages. Finally, the fluid flow problem has been solved on the moved meshes, producing results close to experimental ones. However, for situations where moving boundaries are too close to each other, further improvements are needed, or other approaches should be taken, such as an overset grid method.
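A minimal sketch of the IDW stage of the proposed MMA (the GETMe smoothing and untangling passes are not shown); nodes, boundary points, and boundary displacements are assumed to be NumPy arrays:

```python
import numpy as np

def idw_displacements(nodes, bnd_pts, bnd_disp, power=3.0):
    """Inverse Distance Weighting mesh motion: every interior node moves
    by a weighted average of the known boundary-point displacements,
    with weights 1/d**power."""
    moved = np.empty_like(nodes)
    for i, p in enumerate(nodes):
        d = np.linalg.norm(bnd_pts - p, axis=1)
        if d.min() < 1e-12:                     # node lies on the boundary
            moved[i] = p + bnd_disp[np.argmin(d)]
            continue
        w = 1.0 / d**power
        moved[i] = p + (w[:, None] * bnd_disp).sum(axis=0) / w.sum()
    return moved
```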
NASA Astrophysics Data System (ADS)
Yang, Fuqiang; Zhang, Dinghua; Huang, Kuidong; Gao, Zongzhao; Yang, YaFei
2018-02-01
Based on the discrete algebraic reconstruction technique (DART), this study aims to develop and test a new improved algorithm applied to incomplete projection data to generate high quality reconstruction images by reducing the artifacts and noise in computed tomography. For the incomplete projections, an augmented Lagrangian method based on compressed sensing is first used in the initial reconstruction for the segmentation step of DART, to obtain higher-contrast graphics for boundary and non-boundary pixels. Then, a block-matching 3D filtering operator is used to suppress the noise and to improve the gray distribution of the reconstructed image. Finally, simulation studies on a polychromatic spectrum were performed to test the performance of the new algorithm. The results show a significant improvement in the signal-to-noise ratios (SNRs) and average gradients (AGs) of the images reconstructed from incomplete data. The SNRs and AGs of the images reconstructed by DART-ALBM were on average 30%-40% and 10% higher, respectively, than those of the images reconstructed by the DART algorithm. Since the improved DART-ALBM algorithm is more robust in limited-view reconstruction, making the image edges clear while also improving the gray distribution of non-boundary pixels, it has the potential to improve image quality from incomplete or sparse projections.
NASA Astrophysics Data System (ADS)
Tanaka, Masayuki; Cardoso, Rui; Bahai, Hamid
2018-04-01
In this work, the Moving Particle Semi-implicit (MPS) method is enhanced for multi-resolution problems with different resolutions at different parts of the domain, utilising a particle splitting algorithm for the finer resolution and a particle merging algorithm for the coarser resolution. The Least Square MPS (LSMPS) method is used for higher stability and accuracy. Novel boundary conditions are developed for the treatment of wall and pressure boundaries for the Multi-Resolution LSMPS method. A wall is represented by polygons for effective simulation of fluid flows with complex wall geometries, and the pressure boundary condition allows arbitrary inflow and outflow, making the method easier to use in simulations of channel flows. By conducting simulations of channel flows and free surface flows, the accuracy of the proposed method was verified.
Improved cancer diagnostics by different image processing techniques on OCT images
NASA Astrophysics Data System (ADS)
Kanawade, Rajesh; Lengenfelder, Benjamin; Marini Menezes, Tassiana; Hohmann, Martin; Kopfinger, Stefan; Hohmann, Tim; Grabiec, Urszula; Klämpfl, Florian; Gonzales Menezes, Jean; Waldner, Maximilian; Schmidt, Michael
2015-07-01
Optical coherence tomography (OCT) is a promising non-invasive, high-resolution imaging modality which can be used for cancer diagnosis and its therapeutic assessment. However, speckle noise makes the detection of cancer boundaries and image segmentation problematic and unreliable. Therefore, to improve the image analysis for a precise cancer border detection, the performance of different image processing algorithms, such as the mean, median, and hybrid median filters and the rotational kernel transformation (RKT), is investigated for this task. This is done on OCT images acquired from ex-vivo human cancerous mucosa and, in vitro, from cultivated tumours applied to organotypic hippocampal slice cultures. The preliminary results confirm that the border between healthy and cancerous lesions can be identified precisely. The obtained results are verified with fluorescence microscopy. This research can improve cancer diagnosis and the detection of borders between healthy and cancerous tissue. Thus, it could also reduce the number of biopsies required during screening endoscopy by providing better guidance to the physician.
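A small sketch of two of the speckle filters compared here, using SciPy's median filter and a common cross/X-kernel construction of the hybrid median; the B-scan loader name is hypothetical:

```python
import numpy as np
from scipy.ndimage import median_filter

def hybrid_median(img, size=5):
    """Hybrid median filter: median of the '+'-neighbourhood median,
    the 'x'-neighbourhood median, and the original pixel."""
    c = np.zeros((size, size), bool)
    c[size // 2, :] = c[:, size // 2] = True                  # '+' kernel
    x = np.eye(size, dtype=bool) | np.fliplr(np.eye(size, dtype=bool))
    m1 = median_filter(img, footprint=c)
    m2 = median_filter(img, footprint=x)
    return np.median(np.stack([m1, m2, img]), axis=0)

# bscan = load_oct_bscan(...)        # hypothetical loader
# denoised = hybrid_median(bscan, size=5)
```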
NASA Technical Reports Server (NTRS)
Chiavassa, G.; Liandrat, J.
1996-01-01
We construct compactly supported wavelet bases satisfying homogeneous boundary conditions on the interval (0,1). The maximum features of multiresolution analysis on the line are retained, including polynomial approximation and tree algorithms. The case of H^1_0(0,1) is detailed, and numerical values, required for the implementation, are provided for the Neumann and Dirichlet boundary conditions.
Jan A. Henderson; Robin D. Lesher; David H. Peter; Chris D. Ringo
2011-01-01
A gradient-analysis-based model and grid-based map are presented that use the potential vegetation zone as the object of the model. Several new variables are presented that describe the environmental gradients of the landscape at different scales. Boundary algorithms are conceptualized, and then defined, that describe the environmental boundaries between vegetation...
Application of Bayesian Inversion for Multilayer Reservoir Mapping while Drilling Measurements
NASA Astrophysics Data System (ADS)
Wang, J.; Chen, H.; Wang, X.
2017-12-01
Real-time geosteering technology plays a key role in horizontal well development, keeping wellbore trajectories within target zones to maximize reservoir contact. The new generation of logging-while-drilling (LWD) resistivity tools has longer spacing and deeper investigation depth, but meanwhile brings a new challenge to the inversion of logging data: the formation model can no longer be restricted to a few possible numbers of layers, such as the typical three-layer model. If inappropriate starting models are adopted, deterministic and gradient-based methods may mislead geophysicists in the interpretation of subsurface structure. For this purpose, to take advantage of the richness of the measurements and the deep depth of investigation across multiple formation boundaries, a trans-dimensional Markov chain Monte Carlo (MCMC) inversion algorithm has been developed that combines phase and attenuation measurements at various frequencies and spacings. Unlike conventional gradient-based inversion approaches, the MCMC algorithm does not introduce bias from prior information or require any subjective choice of regularization parameter. A synthetic three-layer model example demonstrates how the algorithm can be used to image the subsurface using the LWD data. When the tool is far from the top boundary, the inversion clearly resolves the boundary position; that is where the boundary histogram shows a large peak. But the measurements cannot resolve the bottom boundary; the large spread between quantiles reflects the uncertainty associated with the bed resolution. As the tool moves closer to the top boundary, the middle and bottom layers are resolved, the retained models become more similar, and the uncertainty associated with these two beds decreases. From the spread observed between models, we can evaluate the actual depth of investigation, uncertainty, and sensitivity, which is more useful than just a single best model.
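For intuition, a fixed-dimension Metropolis sketch for sampling a single boundary depth from LWD data; the paper's sampler is trans-dimensional (the number of layers itself varies), which this sketch does not attempt, and the misfit function is an assumed placeholder:

```python
import numpy as np

def mcmc_boundary(data_misfit, z_init, n_steps=5000, step=2.0, seed=0):
    """Fixed-dimension Metropolis sampler for one boundary depth z.
    data_misfit(z) returns the negative log-likelihood of the LWD
    measurements given a boundary at depth z (placeholder)."""
    rng = np.random.default_rng(seed)
    z, ll = z_init, -data_misfit(z_init)
    samples = []
    for _ in range(n_steps):
        z_new = z + step * rng.standard_normal()   # random-walk proposal
        ll_new = -data_misfit(z_new)
        if np.log(rng.random()) < ll_new - ll:     # Metropolis acceptance
            z, ll = z_new, ll_new
        samples.append(z)
    return np.array(samples)   # histogram peaks where data resolve the bed
```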
Xia, Wenjun; Mita, Yoshio; Shibata, Tadashi
2016-05-01
Aiming at efficient data condensation and improved accuracy, this paper presents a hardware-friendly template reduction (TR) method for nearest neighbor (NN) classifiers by introducing the concept of critical boundary vectors. A hardware system is also implemented to demonstrate the feasibility of using a field-programmable gate array (FPGA) to accelerate the proposed method. Initially, k-means centers are used as substitutes for the entire template set. Then, to enhance the classification performance, critical boundary vectors are selected by a novel learning algorithm, which is completed within a single iteration. Moreover, to remove noisy boundary vectors that can mislead the classification in a generalized manner, a global categorization scheme has been explored and applied to the algorithm. The global characterization automatically categorizes each classification problem and rapidly selects the boundary vectors according to the nature of the problem. Finally, only the critical boundary vectors and k-means centers are used as the new template set for classification. Experimental results for 24 data sets show that the proposed algorithm can effectively reduce the number of template vectors for classification with a high learning speed. At the same time, it improves the accuracy by an average of 2.17% compared with traditional NN classifiers and also shows greater accuracy than seven other TR methods. We have shown the feasibility of using a proof-of-concept FPGA system of 256 64-D vectors to accelerate the proposed method in hardware. At a 50-MHz clock frequency, the proposed system achieves a 3.86 times higher learning speed than a 3.4-GHz PC, while consuming only 1% of the power used by the PC.
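A rough stand-in for the template-reduction idea: per-class k-means centers plus the training samples that a centers-only 1-NN misclassifies, taken here as "critical boundary vectors". The paper's single-iteration selection rule and global categorization scheme are more refined than this sketch:

```python
import numpy as np
from sklearn.cluster import KMeans

def condense_templates(X, y, k_per_class=8):
    """Return a condensed template set: per-class k-means centers plus
    samples lying near the decision boundary (those the centers alone
    would classify wrongly)."""
    centers, labels = [], []
    for c in np.unique(y):
        km = KMeans(n_clusters=k_per_class, n_init=10).fit(X[y == c])
        centers.append(km.cluster_centers_)
        labels.append(np.full(k_per_class, c))
    C, L = np.vstack(centers), np.concatenate(labels)

    d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
    misled = L[d.argmin(axis=1)] != y     # centers-only 1-NN gets these wrong
    return np.vstack([C, X[misled]]), np.concatenate([L, y[misled]])
```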
Advances in Numerical Boundary Conditions for Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.
1997-01-01
Advances in Computational Aeroacoustics (CAA) depend critically on the availability of accurate, nondispersive, minimally dissipative computation algorithms, as well as high quality numerical boundary treatments. This paper focuses on the recent developments of numerical boundary conditions. In a typical CAA problem, one often encounters two types of boundaries. Because a finite computation domain is used, there are external boundaries. On the external boundaries, boundary conditions simulating the solution outside the computation domain are to be imposed. Inside the computation domain, there may be internal boundaries. On these internal boundaries, boundary conditions simulating the presence of an object or surface with specific acoustic characteristics are to be applied. Numerical boundary conditions, both external and internal, developed for simple model problems are reviewed and examined. Numerical boundary conditions for real aeroacoustic problems are also discussed through specific examples. The paper concludes with a description of some much needed research in numerical boundary conditions for CAA.
Three-dimensional electrical impedance tomography: a topology optimization approach.
Mello, Luís Augusto Motta; de Lima, Cícero Ribeiro; Amato, Marcelo Britto Passos; Lima, Raul Gonzalez; Silva, Emílio Carlos Nelli
2008-02-01
Electrical impedance tomography is a technique to estimate the impedance distribution within a domain, based on measurements on its boundary. In other words, given the mathematical model of the domain, its geometry and boundary conditions, a nonlinear inverse problem of estimating the electric impedance distribution can be solved. Several impedance estimation algorithms have been proposed to solve this problem. In this paper, we present a three-dimensional algorithm, based on the topology optimization method, as an alternative. A sequence of linear programming problems, allowing for constraints, is solved utilizing this method. In each iteration, the finite element method provides the electric potential field within the model of the domain. An electrode model is also proposed (thus, increasing the accuracy of the finite element results). The algorithm is tested using numerically simulated data and also experimental data, and absolute resistivity values are obtained. These results, corresponding to phantoms with two different conductive materials, exhibit relatively well-defined boundaries between them, and show that this is a practical and potentially useful technique to be applied to monitor lung aeration, including the possibility of imaging a pneumothorax.
NASA Astrophysics Data System (ADS)
Kilcommons, Liam M.; Redmon, Robert J.; Knipp, Delores J.
2017-08-01
We have developed a method for reprocessing the multidecadal, multispacecraft Defense Meteorological Satellite Program Special Sensor Magnetometer (DMSP SSM) data set and have applied it to 15 spacecraft years of data (DMSP Flight 16-18, 2010-2014). This Level-2 data set improves on other available SSM data sets with recalculated spacecraft locations and magnetic perturbations, artifact signal removal, representations of the observations in geomagnetic coordinates, and in situ auroral boundaries. Spacecraft locations have been recalculated using ground-tracking information. Magnetic perturbations (measured field minus modeled main field) are recomputed. The updated locations ensure the appropriate model field is used. We characterize and remove a slow-varying signal in the magnetic field measurements. This signal is a combination of ring current and measurement artifacts. A final artifact remains after processing: step discontinuities in the baseline caused by activation/deactivation of spacecraft electronics. Using coincident data from the DMSP precipitating electrons and ions instrument (SSJ4/5), we detect the in situ auroral boundaries with an improvement to the Redmon et al. (2010) algorithm. We embed the location of the aurora and an accompanying figure of merit in the Level-2 SSM data product. Finally, we demonstrate the potential of this new data set by estimating field-aligned current (FAC) density using the Minimum Variance Analysis technique. The FAC estimates are then expressed in dynamic auroral boundary coordinates using the SSJ-derived boundaries, demonstrating a dawn-dusk asymmetry in average FAC location relative to the equatorward edge of the aurora. The new SSM data set is now available in several public repositories.
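A minimal sketch of the Minimum Variance Analysis step used for the field-aligned current estimates; dB is assumed to be an (N,3) array of magnetic perturbations sampled along the orbit track:

```python
import numpy as np

def minimum_variance_directions(dB):
    """Minimum Variance Analysis of magnetic perturbations dB (N,3):
    eigen-decompose the covariance of the field fluctuations.  The
    minimum-variance eigenvector estimates the current-sheet normal."""
    M = np.cov(dB, rowvar=False)             # 3x3 covariance matrix
    w, V = np.linalg.eigh(M)                 # eigenvalues in ascending order
    return V[:, 0], V[:, 2]                  # min- and max-variance axes

# For an infinite current sheet, j ~ (1/mu0) * d(B_max)/ds along the
# track, with B_max the component along the maximum-variance direction.
```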
NASA Astrophysics Data System (ADS)
Huang, Shieh-Kung; Loh, Kenneth J.
2015-04-01
The main goal of this study was to develop and validate the performance of a miniature and portable data acquisition (DAQ) system designed for interrogating carbon nanotube (CNT)-based thin films for real-time spatial structural sensing and damage detection. Previous research demonstrated that the electrical properties of CNT-based thin film strain sensors were linearly correlated with applied strains. When coupled with an electrical impedance tomography (EIT) algorithm, the detection and localization of damage was possible. In short, EIT required that the film or "sensing skin" be interrogated along its boundaries. Electrical current was injected across a pair of boundary electrodes, and voltage was simultaneously recorded along the remaining electrode pairs. This was performed multiple times to obtain a large dataset needed for solving the EIT spatial conductivity mapping inverse problem. However, one of the main limitations of this technique was the large amount of time required for data acquisition. In order to facilitate the adoption of this technology and for field implementation purposes, a miniature DAQ that could interrogate these CNT-based sensing skins at high sampling rates was designed and tested. The prototype DAQ featured a Howland current source that could generate stable and controlled direct current. Measurement of boundary electrode voltages and the switching of the input, output, and measurement channels were achieved using multiplexer units. The DAQ prototype was fabricated on a two-layer printed circuit board, and it was designed for integration with a prototype wireless sensing system, which is the next phase of this research.
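A sketch of the boundary interrogation pattern described above, enumerating adjacent-pair current injections and the differential voltage readings taken for each; the electrode count is a typical EIT assumption, not the prototype's specification:

```python
def adjacent_drive_patterns(n_electrodes=16):
    """Enumerate EIT boundary measurements: inject current across each
    adjacent electrode pair and read differential voltages on all
    remaining adjacent pairs (pairs touching the driven electrodes
    are skipped)."""
    patterns = []
    for drive in range(n_electrodes):
        src, snk = drive, (drive + 1) % n_electrodes
        for m in range(n_electrodes):
            a, b = m, (m + 1) % n_electrodes
            if {a, b} & {src, snk}:
                continue                   # skip pairs touching the drive
            patterns.append((src, snk, a, b))
    return patterns

print(len(adjacent_drive_patterns()))      # 16 * 13 = 208 measurements
```

The size of this measurement set is what drives the long acquisition times the study set out to reduce.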
Duan, Zhugeng; Zhao, Dan; Zeng, Yuan; Zhao, Yujin; Wu, Bingfang; Zhu, Jianjun
2015-01-01
Topography significantly affects forest canopy height retrieval based on airborne Light Detection and Ranging (LiDAR) data. This paper proposes a method for correcting deviations caused by topography based on individual tree crown segmentation. The point cloud of an individual tree is extracted according to the crown boundaries of isolated individual trees from digital orthophoto maps (DOMs), and normalized canopy height is calculated by subtracting the elevation of the centre of gravity from the elevation of the point cloud. First, individual tree crown boundaries are obtained by carrying out segmentation on the DOM. Second, point clouds of the individual trees are extracted based on the boundaries. Third, a precise DEM is derived from the point cloud, which is classified by a multi-scale curvature classification algorithm. Finally, a height-weighted correction method is applied to correct the topographic effects. The method is applied to LiDAR data acquired in South China, and its effectiveness is tested using 41 field survey plots. The results show that the terrain affects the canopy height of individual trees in that the downslope side of the tree trunk is elevated and the upslope side is depressed. This further affects the extraction of the location and crown of individual trees. A strong correlation was detected between the slope gradient and the proportions of returns with height differences greater than 0.3, 0.5 and 0.8 m in the total returns, with coefficients of determination R² of 0.83, 0.76, and 0.60 (n = 41), respectively. PMID:26016907
2D/3D fetal cardiac dataset segmentation using a deformable model.
Dindoyal, Irving; Lambrou, Tryphon; Deng, Jing; Todd-Pokropek, Andrew
2011-07-01
To segment the fetal heart in order to facilitate the 3D assessment of cardiac function and structure. Ultrasound acquisition typically results in drop-out artifacts of the chamber walls. The authors outline a level set deformable model to automatically delineate the small fetal cardiac chambers. The level set is penalized from growing into an adjacent cardiac compartment using a novel collision detection term. The region-based model allows simultaneous segmentation of all four cardiac chambers from a user-defined seed point placed in each chamber. The segmented boundaries are automatically penalized from intersecting at walls with signal dropout. Root mean square errors of the perpendicular distances between the algorithm's delineation and manual tracings are within 2 mm, which is less than 10% of the length of a typical fetal heart. The ejection fractions were determined from the 3D datasets. We validate the algorithm using a physical phantom, obtaining volumes comparable to physically determined means, with an error within 13%. Our original work in fetal cardiac segmentation compares automatic and manual tracings to a physical phantom and also measures inter-observer variation.
Cloud Properties and Radiative Heating Rates for TWP
Comstock, Jennifer
2013-11-07
A cloud properties and radiative heating rates dataset is presented in which cloud properties retrieved using lidar and radar observations are input into a radiative transfer model to compute radiative fluxes and heating rates at three ARM sites located in the Tropical Western Pacific (TWP) region. The cloud properties retrieval is a conditional retrieval that applies various retrieval techniques depending on the available data, that is, on whether lidar, radar, or both instruments detect cloud. This Combined Remote Sensor Retrieval Algorithm (CombRet) produces vertical profiles of liquid or ice water content (LWC or IWC), droplet effective radius (re), ice crystal generalized effective size (Dge), cloud phase, and cloud boundaries. The algorithm was compared with three other independent algorithms to help estimate the uncertainty in the cloud properties, fluxes, and heating rates (Comstock et al. 2013). The dataset is provided at 2 min temporal and 90 m vertical resolution. The current dataset is applied to time periods when the MMCR (Millimeter Cloud Radar) version of the ARSCL (Active Remotely-Sensed Cloud Locations) Value Added Product (VAP) is available. The MERGESONDE VAP is utilized where temperature and humidity profiles are required. Future additions to this dataset will utilize the new KAZR instrument and its associated VAPs.
Graph-based surface reconstruction from stereo pairs using image segmentation
NASA Astrophysics Data System (ADS)
Bleyer, Michael; Gelautz, Margrit
2005-01-01
This paper describes a novel stereo matching algorithm for epipolar rectified images. The method applies colour segmentation on the reference image. The use of segmentation makes the algorithm capable of handling large untextured regions, estimating precise depth boundaries and propagating disparity information to occluded regions, which are challenging tasks for conventional stereo methods. We model disparity inside a segment by a planar equation. Initial disparity segments are clustered to form a set of disparity layers, which are planar surfaces that are likely to occur in the scene. Assignments of segments to disparity layers are then derived by minimization of a global cost function via a robust optimization technique that employs graph cuts. The cost function is defined on the pixel level, as well as on the segment level. While the pixel level measures the data similarity based on the current disparity map and detects occlusions symmetrically in both views, the segment level propagates the segmentation information and incorporates a smoothness term. New planar models are then generated based on the disparity layers' spatial extents. Results obtained for benchmark and self-recorded image pairs indicate that the proposed method is able to compete with the best-performing state-of-the-art algorithms.
Zhu, Haitao; Demachi, Kazuyuki; Sekino, Masaki
2011-09-01
Positive contrast imaging methods produce enhanced signal at large magnetic field gradients in magnetic resonance imaging. Several postprocessing algorithms, such as susceptibility gradient mapping and phase gradient mapping methods, have been applied for positive contrast generation to detect cells targeted by superparamagnetic iron oxide nanoparticles. In the phase gradient mapping methods, a smoothness condition has to be satisfied to keep the phase gradient unwrapped. Moreover, there has been no discussion of the truncation artifact associated with the differentiation algorithm, which is performed in k-space by multiplication with the frequency value. In this work, phase gradient methods are discussed by considering the wrapping problem when the smoothness condition is not satisfied. A region-growing unwrapping algorithm is used in the phase gradient image to solve the problem. In order to reduce the truncation artifact, a cosine function is multiplied in k-space to eliminate the abrupt change at the boundaries. Simulation, phantom, and in vivo experimental results demonstrate that the modified phase gradient mapping methods may produce improved positive contrast effects by reducing truncation or wrapping artifacts. Copyright © 2011 Elsevier Inc. All rights reserved.
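A minimal numpy sketch of the cosine k-space weighting described above, assuming a 2D complex k-space array: the window equals one at the centre of k-space and falls smoothly toward zero at its edges, removing the abrupt truncation boundary that produces Gibbs ringing.

```python
import numpy as np

def cosine_apodize(kspace):
    """Taper a 2D k-space array with a separable cosine window."""
    ny, nx = kspace.shape
    wy = np.cos(np.pi * (np.arange(ny) - ny / 2) / ny)  # 1 at centre, ~0 at edges
    wx = np.cos(np.pi * (np.arange(nx) - nx / 2) / nx)
    return kspace * np.outer(wy, wx)
```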
Models of formation and some algorithms of hyperspectral image processing
NASA Astrophysics Data System (ADS)
Achmetov, R. N.; Stratilatov, N. R.; Yudakov, A. A.; Vezenov, V. I.; Eremeev, V. V.
2014-12-01
Algorithms and information technologies for processing Earth hyperspectral imagery are presented, and several new approaches are discussed. Peculiar properties of processing hyperspectral imagery, such as a multifold reduction in the signal-to-noise ratio, atmospheric distortions, access to the spectral characteristics of every image point, and the high dimensionality of the data, were studied. Different measures of similarity between individual hyperspectral image points and the effect of additive uncorrelated noise on these measures were analyzed. It was shown that these measures are substantially affected by noise, and a new measure free of this disadvantage was proposed. The problem of detecting the boundaries of observed scene objects, based on comparing the spectral characteristics of image points, is considered. It was shown that contours are detected much better when spectral characteristics are used instead of energy brightness. A statistical approach to the correction of atmospheric distortions was proposed, which makes it possible to solve the stated problem based on analysis of a distorted image, in contrast to analytical multiparametric models. Several algorithms used to integrate spectral zonal images with data from other survey systems, which make it possible to image observed scene objects with higher quality, are considered. Quality characteristics of hyperspectral data processing were proposed and studied.
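The abstract does not specify the new noise-robust similarity measure, but the flavour of point-to-point spectral comparison can be illustrated with the widely used spectral angle; the numpy sketch below is context only, not the authors' proposed measure.

```python
import numpy as np

def spectral_angle(s1, s2):
    """Angle (radians) between two pixel spectra; 0 means identical shape.
    Being a direction-only measure, it ignores overall brightness scaling."""
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return np.arccos(np.clip(cos, -1.0, 1.0))
```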
Formation flying design and applications in weak stability boundary regions.
Folta, David
2004-05-01
Weak stability regions serve as superior locations for interferometric scientific investigations. These regions are often selected to minimize environmental disturbances and maximize observation efficiency. Designs of formations in these regions are becoming ever more challenging as more complex missions are envisioned. Algorithms for formation design must be developed further to incorporate a better understanding of the weak stability boundary solution space; this development will improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple formation missions in weak stability boundary regions. This end-to-end support consists of mission operations, trajectory design, and control, and includes both algorithm and software development. The Constellation-X, Maxim, and Stellar Imager missions are examples of the use of improved numerical methods to attain constrained formation geometries and control their dynamical evolution. This paper presents a survey of formation missions in the weak stability boundary regions and a brief description of formation design using numerical and dynamical techniques.
A wideband FMBEM for 2D acoustic design sensitivity analysis based on direct differentiation method
NASA Astrophysics Data System (ADS)
Chen, Leilei; Zheng, Changjun; Chen, Haibo
2013-09-01
This paper presents a wideband fast multipole boundary element method (FMBEM) for two dimensional acoustic design sensitivity analysis based on the direct differentiation method. The wideband fast multipole method (FMM) formed by combining the original FMM and the diagonal form FMM is used to accelerate the matrix-vector products in the boundary element analysis. The Burton-Miller formulation is used to overcome the fictitious frequency problem when using a single Helmholtz boundary integral equation for exterior boundary-value problems. The strongly singular and hypersingular integrals in the sensitivity equations can be evaluated explicitly and directly by using the piecewise constant discretization. The iterative solver GMRES is applied to accelerate the solution of the linear system of equations. A set of optimal parameters for the wideband FMBEM design sensitivity analysis are obtained by observing the performances of the wideband FMM algorithm in terms of computing time and memory usage. Numerical examples are presented to demonstrate the efficiency and validity of the proposed algorithm.
Parabolized Navier-Stokes solutions of separation and trailing-edge flows
NASA Technical Reports Server (NTRS)
Brown, J. L.
1983-01-01
A robust, iterative solution procedure is presented for the parabolized Navier-Stokes or higher order boundary layer equations as applied to subsonic viscous-inviscid interaction flows. The robustness of the present procedure is due, in part, to an improved algorithmic formulation. The present formulation is based on a reinterpretation of stability requirements for this class of algorithms and requires only second order accurate backward or central differences for all streamwise derivatives. Upstream influence is provided for through the algorithmic formulation and iterative sweeps in x. The primary contribution to robustness, however, is the boundary condition treatment, which imposes global constraints to control the convergence path. Discussed are successful calculations of subsonic, strong viscous-inviscid interactions, including separation. These results are consistent with Navier-Stokes solutions and triple deck theory.
NASA Astrophysics Data System (ADS)
Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten
2018-06-01
This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamical selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
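To make the contrast with first-order gradient flows concrete, here is a toy numpy sketch of a second-order in time dissipative system, u'' + eta*u' = -grad J(u), integrated with a semi-implicit damped step. It only illustrates the class of dynamics; the authors' damped symplectic scheme and dynamically selected regularization parameter are not reproduced.

```python
import numpy as np

def damped_second_order_flow(grad_J, u0, eta=1.0, dt=0.1, n_steps=500):
    """Integrate u'' + eta*u' = -grad J(u); the damping dissipates energy
    so the trajectory settles into a minimizer of J."""
    u = np.asarray(u0, dtype=float)
    v = np.zeros_like(u)                             # initial velocity u'(0) = 0
    for _ in range(n_steps):
        v = (v - dt * grad_J(u)) / (1.0 + dt * eta)  # implicit damping term
        u = u + dt * v                               # position update
    return u
```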
An algorithm for automating the registration of USDA segment ground data to LANDSAT MSS data
NASA Technical Reports Server (NTRS)
Graham, M. H. (Principal Investigator)
1981-01-01
The algorithm is referred to as the Automatic Segment Matching Algorithm (ASMA). The ASMA uses control points or the annotation record of a P-format LANDSAT computer-compatible tape as the initial registration to relate latitude and longitude to LANDSAT rows and columns. It searches a given area of LANDSAT data with a 2x2 sliding window and computes gradient values for bands 5 and 7 to match the segment boundaries. The gradient values are held in memory during the shifting (or matching) process. The reconstructed segment array, containing ones (1's) for boundaries and zeros elsewhere, is then compared by computer to the LANDSAT array and the best match computed. Initial testing of the ASMA indicates that it has good potential for replacing the manual technique.
GPU based cloud system for high-performance arrhythmia detection with parallel k-NN algorithm.
Tae Joon Jun; Hyun Ji Park; Hyuk Yoo; Young-Hak Kim; Daeyoung Kim
2016-08-01
In this paper, we propose a GPU-based cloud system for high-performance arrhythmia detection. The Pan-Tompkins algorithm is used for QRS detection, and we optimized the beat classification algorithm with K-Nearest Neighbor (K-NN). To support high-performance beat classification on the system, we parallelized the beat classification algorithm with CUDA to execute it on virtualized GPU devices on the cloud system. The MIT-BIH Arrhythmia database is used for validation of the algorithm. The system achieved a detection rate of about 93.5%, which is comparable to previous studies, while our algorithm shows 2.5 times faster execution time compared to a CPU-only detection algorithm.
NASA Astrophysics Data System (ADS)
Gilani, S. A. N.; Awrangjeb, M.; Lu, G.
2015-03-01
Building detection in complex scenes is a non-trivial exercise due to building shape variability, irregular terrain, shadows, and occlusion by highly dense vegetation. In this research, we present a graph-based algorithm, which combines multispectral imagery and airborne LiDAR information to completely delineate building boundaries in urban and densely vegetated areas. In the first phase, LiDAR data is divided into two groups: ground and non-ground data, using ground height from a bare-earth DEM. A mask, known as the primary building mask, is generated from the non-ground LiDAR points, where the black region represents the elevated area (buildings and trees), while the white region describes the ground (earth). The second phase begins with Connected Component Analysis (CCA), in which the number of objects present in the test scene is identified, followed by initial boundary detection and labelling. Additionally, a graph is generated from the connected components, where each black pixel corresponds to a node. An edge of unit distance is defined between a black pixel and a neighbouring black pixel, if any; no edge exists from a black pixel to a neighbouring white pixel. This produces a disconnected-components graph, where each component represents a prospective building or dense vegetation (a contiguous block of black pixels from the primary mask). In the third phase, a clustering process clusters the segmented lines, extracted from multispectral imagery, around the graph components, if possible. In the fourth phase, NDVI, image entropy, and LiDAR data are utilised to discriminate among vegetation, buildings, and occluded parts of isolated buildings. Finally, the initially extracted building boundary is extended pixel-wise using NDVI, entropy, and LiDAR data to completely delineate the building and to maximise the boundary reach towards building edges. The proposed technique is evaluated using two Australian data sets, Aitkenvale and Hervey Bay, for object-based and pixel-based completeness, correctness, and quality. The proposed technique detects buildings larger than 50 m2 and 10 m2 in the Aitkenvale site with 100% and 91% accuracy, respectively, while in the Hervey Bay site it performs better, with 100% accuracy for buildings larger than 10 m2 in area.
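The second-phase connected-component construction is equivalent to labelling the primary mask: each contiguous block of black (elevated) pixels, connected by the unit-distance edges described above, becomes one candidate object. A toy scipy sketch:

```python
import numpy as np
from scipy import ndimage

mask = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 1],
                 [0, 0, 1, 1]])          # 1 = elevated pixel of the primary mask
labels, n_objects = ndimage.label(mask)  # 4-connectivity by default
print(n_objects)                         # -> 2 candidate building/vegetation blocks
```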
NASA Astrophysics Data System (ADS)
Kirschner, Matthias; Wesarg, Stefan
2011-03-01
Active Shape Models (ASMs) are a popular family of segmentation algorithms which combine local appearance models for boundary detection with a statistical shape model (SSM). They are especially popular in medical imaging due to their ability for fast and accurate segmentation of anatomical structures even in large and noisy 3D images. A well-known limitation of ASMs is that the shape constraints are over-restrictive, because the segmentations are bounded by the Principal Component Analysis (PCA) subspace learned from the training data. To overcome this limitation, we propose a new energy minimization approach which combines an external image energy with an internal shape model energy. Our shape energy uses the Distance From Feature Space (DFFS) concept to allow deviations from the PCA subspace in a theoretically sound and computationally fast way. In contrast to previous approaches, our model does not rely on post-processing with constrained free-form deformation or additional complex local energy models. In addition to the energy minimization approach, we propose a new method for liver detection, a new method for initializing an SSM and an improved k-Nearest Neighbour (kNN)-classifier for boundary detection. Our ASM is evaluated with leave-one-out tests on a data set with 34 tomographic CT scans of the liver and is compared to an ASM with standard shape constraints. The quantitative results of our experiments show that we achieve higher segmentation accuracy with our energy minimization approach than with standard shape constraints.
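The Distance From Feature Space term has a compact closed form: it is the norm of the part of a centred shape vector that the retained PCA basis cannot reconstruct. A hedged numpy sketch follows (P holds the retained eigenvectors as columns; the external image energy of the full model is omitted):

```python
import numpy as np

def dffs(x, mean, P):
    """Distance From Feature Space of shape vector x w.r.t. PCA basis P."""
    d = x - mean
    b = P.T @ d                        # coordinates inside the PCA subspace
    return np.linalg.norm(d - P @ b)   # residual outside the subspace
```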
Interaction between aerosol and the planetary boundary layer depth at sites in the US and China
NASA Astrophysics Data System (ADS)
Sawyer, V. R.
2015-12-01
The depth of the planetary boundary layer (PBL) defines a changing volume into which pollutants from the surface can disperse, which affects weather, surface air quality and radiative forcing in the lower troposphere. Model simulations have also shown that aerosol within the PBL heats the layer at the expense of the surface, changing the stability profile and therefore also the development of the PBL itself: aerosol radiative forcing within the PBL suppresses surface convection and causes shallower PBLs. However, the effect has been difficult to detect in observations. The most intensive radiosonde measurements have a temporal resolution too coarse to detect the full diurnal variability of the PBL, but remote sensing such as lidar can fill in the gaps. Using a method that combines two common PBL detection algorithms (wavelet covariance and iterative curve-fitting) PBL depth retrievals from micropulse lidar (MPL) at the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site are compared to MPL-derived PBL depths from a multiyear lidar deployment at the Hefei Radiation Observatory (HeRO). With aerosol optical depth (AOD) measurements from both sites, it can be shown that a weak inverse relationship exists between AOD and daytime PBL depth. This relationship is stronger at the more polluted HeRO site than at SGP. Figure: Mean daily AOD vs. mean daily PBL depth, with the Nadaraya-Watson estimator overlaid on the kernel density estimate. Left, SGP; right, HeRO.
A Novel Optical/digital Processing System for Pattern Recognition
NASA Technical Reports Server (NTRS)
Boone, Bradley G.; Shukla, Oodaye B.
1993-01-01
This paper describes two processing algorithms that can be implemented optically: the Radon transform and angular correlation. These two algorithms can be combined in one optical processor to extract all the basic geometric and amplitude features from objects embedded in video imagery. We show that the internal amplitude structure of objects is recovered by the Radon transform, which is a well-known result, but, in addition, we show simulation results that calculate angular correlation, a simple but unique algorithm that extracts object boundaries from suitably thresholded images, from which length, width, area, aspect ratio, and orientation can be derived. In addition to circumventing scale and rotation distortions, these simulations indicate that the features derived from the angular correlation algorithm are relatively insensitive to tracking shifts and image noise. Some optical architecture concepts, including one based on micro-optical lenslet arrays, have been developed to implement these algorithms. Simulation test and evaluation using simple synthetic object data is described, including results of a study that uses object boundaries (derivable from angular correlation) to classify simple objects using a neural network.
Attributed relational graphs for cell nucleus segmentation in fluorescence microscopy images.
Arslan, Salim; Ersahin, Tulin; Cetin-Atalay, Rengul; Gunduz-Demir, Cigdem
2013-06-01
More rapid and accurate high-throughput screening in molecular cellular biology research has become possible with the development of automated microscopy imaging, for which cell nucleus segmentation commonly constitutes the core step. Although several promising methods exist for segmenting the nuclei of monolayer isolated and less-confluent cells, it still remains an open problem to segment the nuclei of more-confluent cells, which tend to grow in overlayers. To address this problem, we propose a new model-based nucleus segmentation algorithm. This algorithm models how a human locates a nucleus by identifying the nucleus boundaries and piecing them together. In this algorithm, we define four types of primitives to represent nucleus boundaries at different orientations and construct an attributed relational graph on the primitives to represent their spatial relations. Then, we reduce the nucleus identification problem to finding predefined structural patterns in the constructed graph and also use the primitives in region growing to delineate the nucleus borders. Working with fluorescence microscopy images, our experiments demonstrate that the proposed algorithm identifies nuclei better than previous nucleus segmentation algorithms.
Automatic rock detection for in situ spectroscopy applications on Mars
NASA Astrophysics Data System (ADS)
Mahapatra, Pooja; Foing, Bernard H.
A novel algorithm for rock detection has been developed for effectively utilising Mars rovers, and enabling autonomous selection of target rocks that require close-contact spectroscopic measurements. The algorithm demarcates small rocks in terrain images as seen by cameras on a Mars rover during traverse. This information may be used by the rover for selection of geologically relevant sample rocks, and (in conjunction with a rangefinder) to pick up target samples using a robotic arm for automatic in situ determination of rock composition and mineralogy using, for example, a Raman spectrometer. Determining rock samples of specific interest within the region without physically approaching them significantly reduces time, power and risk. Input images in colour are converted to greyscale for intensity analysis. Bilateral filtering is used for texture removal while preserving rock boundaries. Unsharp masking is used for contrast enhancement. Sharp contrasts in intensities are detected using Canny edge detection, with thresholds that are calculated from the image obtained after contrast-limited adaptive histogram equalisation of the unsharp masked image. Scale-space representations are then generated by convolving this image with a Gaussian kernel. A scale-invariant blob detector (Laplacian of the Gaussian, LoG) detects blobs independently of their sizes, and therefore requires a multi-scale approach with automatic scale selection. The scale-space blob detector consists of convolution of the Canny edge-detected image with a scale-normalised LoG at several scales, and finding the maxima of the squared LoG response in scale-space. After the extraction of local intensity extrema, the intensity profiles along rays going out of the local extremum are investigated. An ellipse is fitted to the region determined by significant changes in the intensity profiles. The fitted ellipses are overlaid on the original Mars terrain image for a visual estimation of the rock detection accuracy, and the number of ellipses is counted. Since geometry and illumination have the least effect on small rocks, the proposed algorithm is effective in detecting small rocks (or bigger rocks at larger distances from the camera) that consist of a small fraction of image pixels. Acknowledgements: The first author would like to express her gratitude to the European Space Agency (ESA/ESTEC) and the International Lunar Exploration Working Group (ILEWG) for their support of this work.
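The multi-scale LoG blob detection stage maps directly onto scikit-image's `blob_log`; the sketch below is illustrative only (file name and parameter values are placeholders, and the bilateral-filter, unsharp-mask, and Canny pre-processing stages described above are omitted).

```python
import numpy as np
from skimage import io
from skimage.color import rgb2gray
from skimage.feature import blob_log

grey = rgb2gray(io.imread("mars_terrain.png"))   # hypothetical terrain image
# Scale-normalised LoG maxima across scales; small sigmas catch small rocks.
blobs = blob_log(grey, min_sigma=2, max_sigma=15, num_sigma=10, threshold=0.1)
blobs[:, 2] *= np.sqrt(2)                        # convert sigma to blob radius
print(f"{len(blobs)} candidate rocks detected")
```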
NASA Astrophysics Data System (ADS)
Wu, Yu-Xia; Zhang, Xi; Xu, Xiao-Pan; Liu, Yang; Zhang, Guo-Peng; Li, Bao-Juan; Chen, Hui-Jun; Lu, Hong-Bing
2017-02-01
Ischemic stroke is highly correlated with carotid atherosclerosis and is mostly caused by vulnerable plaques. It is particularly important to analyze the components of plaques for the detection of vulnerable plaques. Recently, plaque analysis based on multi-contrast magnetic resonance imaging has attracted great attention. Though multi-contrast MR imaging has potential for enhanced demonstration of the carotid wall, its performance is hampered by the misalignment of different imaging sequences. In this study, a coarse-to-fine registration strategy based on cross-sectional images and wall boundaries is proposed to solve the problem. It includes two steps: a rigid step using iterative closest points to register the centerlines of the carotid artery extracted from multi-contrast MR images, and a non-rigid step using the thin plate spline to register the lumen boundaries of the carotid artery. In the rigid step, the centerline was extracted by tracking the cross-sectional images along the vessel direction calculated by the Hessian matrix. In the non-rigid step, a shape context descriptor is introduced to find corresponding points of two similar boundaries. In addition, the deterministic annealing technique is used to find a globally optimized solution. The proposed strategy was evaluated on newly developed three-dimensional, fast and high-resolution multi-contrast black-blood MR imaging. Quantitative validation indicated that after registration, the overlap of two boundaries from different sequences is 95%, and their mean surface distance is 0.12 mm. In conclusion, the proposed algorithm has effectively improved the accuracy of registration for further component analysis of carotid plaques.
NASA Technical Reports Server (NTRS)
Van Dalsem, W. R.; Steger, J. L.
1983-01-01
A new, fast, direct-inverse, finite-difference boundary-layer code has been developed and coupled with a full-potential transonic airfoil analysis code via new inviscid-viscous interaction algorithms. The resulting code has been used to calculate transonic separated flows. The results are in good agreement with Navier-Stokes calculations and experimental data. Solutions are obtained in considerably less computer time than Navier-Stokes solutions of equal resolution. Because efficient inviscid and viscous algorithms are used, it is expected this code will also compare favorably with other codes of its type as they become available.
Yu, Kai; Shi, Fei; Gao, Enting; Zhu, Weifang; Chen, Haoyu; Chen, Xinjian
2018-01-01
Optic nerve head (ONH) is a crucial region for glaucoma detection and tracking based on spectral domain optical coherence tomography (SD-OCT) images. In this region, the existence of a “hole” structure makes retinal layer segmentation and analysis very challenging. To improve retinal layer segmentation, we propose a 3D method for ONH-centered SD-OCT image segmentation, which is based on a modified graph search algorithm with shared-hole and locally adaptive constraints. With the proposed method, both the optic disc boundary and nine retinal surfaces can be accurately segmented in SD-OCT images. An overall mean unsigned border positioning error of 7.27 ± 5.40 µm was achieved for layer segmentation, and a mean Dice coefficient of 0.925 ± 0.03 was achieved for optic disc region detection. PMID:29541497
Calculation of grain boundary normals directly from 3D microstructure images
Lieberman, E. J.; Rollett, A. D.; Lebensohn, R. A.; ...
2015-03-11
The determination of grain boundary normals is an integral part of the characterization of grain boundaries in polycrystalline materials. These normal vectors are difficult to quantify due to the discretized nature of available microstructure characterization techniques. The most common method to determine grain boundary normals is by generating a surface mesh from an image of the microstructure, but this process can be slow, and is subject to smoothing issues. A new technique is proposed, utilizing first order Cartesian moments of binary indicator functions, to determine grain boundary normals directly from a voxelized microstructure image. In order to validate the accuracy of this technique, the surface normals obtained by the proposed method are compared to those generated by a surface meshing algorithm. Specifically, the local divergence between the surface normals obtained by different variants of the proposed technique and those generated from a surface mesh of a synthetic microstructure constructed using a marching cubes algorithm followed by Laplacian smoothing is quantified. Next, surface normals obtained with the proposed method from a measured 3D microstructure image of a Ni polycrystal are used to generate grain boundary character distributions (GBCD) for Σ3 and Σ9 boundaries, and compared to the GBCD generated using a surface mesh obtained from the same image. Finally, the results show that the proposed technique is an efficient and accurate method to determine voxelized fields of grain boundary normals.
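As a hedged illustration of the moment idea (not necessarily the authors' exact estimator): summing coordinate offsets over the occupied voxels of a small neighbourhood gives a first-order Cartesian moment vector that points into the grain, and its negation, normalized, approximates the outward boundary normal.

```python
import numpy as np

def boundary_normal(indicator, voxel, r=2):
    """Estimate the outward normal at a boundary voxel of a binary grain
    indicator (1 inside the grain); assumes voxel is >= r from the array edge."""
    z, y, x = voxel
    nb = indicator[z-r:z+r+1, y-r:y+r+1, x-r:x+r+1].astype(float)
    offsets = np.mgrid[-r:r+1, -r:r+1, -r:r+1]       # coordinate offsets
    m = np.array([(g * nb).sum() for g in offsets])  # first-order moments
    return -m / np.linalg.norm(m)                    # points out of the grain
```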
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kempka, S.N.; Strickland, J.H.; Glass, M.W.
1995-04-01
A formulation to satisfy velocity boundary conditions for the vorticity form of the incompressible, viscous fluid momentum equations is presented. The tangential and normal components of the velocity boundary condition are satisfied simultaneously by creating vorticity adjacent to boundaries. The newly created vorticity is determined using a kinematical formulation which is a generalization of Helmholtz's decomposition of a vector field. Though it has not been generally recognized, these formulations resolve the over-specification issue associated with creating vorticity to satisfy velocity boundary conditions. The generalized decomposition has not been widely used, apparently due to a lack of a useful physical interpretation. An analysis is presented which shows that the generalized decomposition has a relatively simple physical interpretation which facilitates its numerical implementation. The implementation of the generalized decomposition is discussed in detail. As an example, the flow in a two-dimensional lid-driven cavity is simulated. The solution technique is based on a Lagrangian transport algorithm in the hydrocode ALEGRA. ALEGRA's Lagrangian transport algorithm has been modified to solve the vorticity transport equation and the generalized decomposition, thus providing a new, accurate method to simulate incompressible flows. This numerical implementation and the new boundary condition formulation allow vorticity-based formulations to be used in a wider range of engineering problems.
Multiple shooting algorithms for jump-discontinuous problems in optimal control and estimation
NASA Technical Reports Server (NTRS)
Mook, D. J.; Lew, Jiann-Shiun
1991-01-01
Multiple shooting algorithms are developed for jump-discontinuous two-point boundary value problems arising in optimal control and optimal estimation. Examples illustrating the origin of such problems are given to motivate the development of the solution algorithms. The algorithms convert the necessary conditions, consisting of differential equations and transversality conditions, into algebraic equations. The solution of the algebraic equations provides exact solutions for linear problems. The existence and uniqueness of the solution are proved.
The bilinear complexity and practical algorithms for matrix multiplication
NASA Astrophysics Data System (ADS)
Smirnov, A. V.
2013-12-01
A method for deriving bilinear algorithms for matrix multiplication is proposed. New estimates for the bilinear complexity of a number of problems of the exact and approximate multiplication of rectangular matrices are obtained. In particular, the estimate for the border rank of multiplying 3 × 3 matrices is improved, and a practical algorithm for the exact multiplication of square n × n matrices is proposed. The asymptotic arithmetic complexity of this algorithm is O(n^2.7743).
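For context, the prototypical bilinear algorithm is Strassen's: seven block multiplications instead of eight for a 2 × 2 partition, giving O(n^2.807) by recursion. The paper's estimates improve on such constructions with different bilinear identities, which this numpy sketch does not reproduce (n is assumed a power of two).

```python
import numpy as np

def strassen(A, B):
    """Strassen multiplication of n x n matrices, n a power of two."""
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    a, b, c, d = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    e, f, g, i = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    p1 = strassen(a + d, e + i)   # the 7 bilinear products
    p2 = strassen(c + d, e)
    p3 = strassen(a, f - i)
    p4 = strassen(d, g - e)
    p5 = strassen(a + b, i)
    p6 = strassen(c - a, e + f)
    p7 = strassen(b - d, g + i)
    top = np.hstack([p1 + p4 - p5 + p7, p3 + p5])
    bot = np.hstack([p2 + p4, p1 - p2 + p3 + p6])
    return np.vstack([top, bot])
```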
Classification with an edge: Improving semantic image segmentation with boundary detection
NASA Astrophysics Data System (ADS)
Marmanis, D.; Schindler, K.; Wegner, J. D.; Galliani, S.; Datcu, M.; Stilla, U.
2018-01-01
We present an end-to-end trainable deep convolutional neural network (DCNN) for semantic segmentation with built-in awareness of semantically meaningful boundaries. Semantic segmentation is a fundamental remote sensing task, and most state-of-the-art methods rely on DCNNs as their workhorse. A major reason for their success is that deep networks learn to accumulate contextual information over very large receptive fields. However, this success comes at a cost, since the associated loss of effective spatial resolution washes out high-frequency details and leads to blurry object boundaries. Here, we propose to counter this effect by combining semantic segmentation with semantically informed edge detection, thus making class boundaries explicit in the model. First, we construct a comparatively simple, memory-efficient model by adding boundary detection to the SEGNET encoder-decoder architecture. Second, we also include boundary detection in FCN-type models and set up a high-end classifier ensemble. We show that boundary detection significantly improves semantic segmentation with CNNs in an end-to-end training scheme. Our best model achieves >90% overall accuracy on the ISPRS Vaihingen benchmark.
Geometric identification and damage detection of structural elements by terrestrial laser scanner
NASA Astrophysics Data System (ADS)
Hou, Tsung-Chin; Liu, Yu-Wei; Su, Yu-Min
2016-04-01
In recent years, three-dimensional (3D) terrestrial laser scanning technologies with higher precision and higher capability have been developing rapidly. The growing maturity of laser scanning has gradually approached the precision provided by traditional structural monitoring technologies. Together with widely available fast computation for massive point cloud data processing, 3D laser scanning can serve as an efficient structural monitoring alternative for the civil engineering community. Currently, most research efforts have focused on integrating and calculating the measured multi-station point cloud data, as well as modeling and establishing the 3D meshes of the scanned objects. Very little attention has been paid to extracting information related to the health conditions and mechanical states of structures. In this study, an automated numerical approach that integrates various existing algorithms for geometric identification and damage detection of structural elements was established. Specifically, adaptive meshes were employed for classifying the point cloud data of the structural elements and detecting the associated damage from the calculated eigenvalues in each area of the structural element. Furthermore, a kd-tree was used to enhance the searching efficiency of the plane fitting, which was later used for identifying the boundaries of structural elements. The results of geometric identification were compared with the M3C2 algorithm provided by CloudCompare, as well as validated by LVDT measurements of full-scale reinforced concrete beams tested in the laboratory. The study shows that 3D laser scanning, through the established processing approaches for point cloud data, can offer a rapid, nondestructive, remote, and accurate solution for geometric identification and damage detection of structural elements.
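A small sketch of kd-tree-accelerated plane fitting of the kind described: for each point, the covariance of its k nearest neighbours is eigen-decomposed and the smallest-eigenvalue eigenvector taken as the local surface normal. Parameter values are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_plane_normals(points, k=20):
    """Per-point surface normals from PCA of k-nearest-neighbour patches."""
    tree = cKDTree(points)                      # fast neighbour queries
    _, idx = tree.query(points, k=k)
    normals = np.zeros((len(points), 3))
    for i, nb in enumerate(idx):
        patch = points[nb] - points[nb].mean(axis=0)
        w, v = np.linalg.eigh(patch.T @ patch)  # eigenvalues ascending
        normals[i] = v[:, 0]                    # direction of least spread
    return normals
```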
A tire contact solution technique
NASA Technical Reports Server (NTRS)
Tielking, J. T.
1983-01-01
An efficient method for calculating the contact boundary and interfacial pressure distribution was developed. This solution technique utilizes the discrete Fourier transform to establish an influence coefficient matrix for the portion of the pressurized tire surface that may be in the contact region. This matrix is used in a linear algebra algorithm to determine the contact boundary and the array of forces within the boundary that are necessary to hold the tire in equilibrium against a specified contact surface. The algorithm also determines the normal and tangential displacements of those points on the tire surface that are included in the influence coefficient matrix. Displacements within and outside the contact region are calculated. The solution technique is implemented with a finite-element tire model that is based on orthotropic, nonlinear shell of revolution elements which can respond to nonaxisymmetric loads. A sample contact solution is presented.
Linear feature detection algorithm for astronomical surveys - I. Algorithm description
NASA Astrophysics Data System (ADS)
Bektešević, Dino; Vinković, Dejan
2017-11-01
Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
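The final line-extraction step rests on the Hough transform; a hedged scikit-image sketch is shown below, assuming the star/galaxy removal and line enhancement described above have already produced a binary image.

```python
from skimage.transform import hough_line, hough_line_peaks

def detect_trails(binary_image):
    """Return (angle, distance) parameters of the strongest line candidates."""
    hspace, angles, dists = hough_line(binary_image)
    _, peak_angles, peak_dists = hough_line_peaks(hspace, angles, dists)
    return list(zip(peak_angles, peak_dists))
```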
Mathews, D M; Cirullo, P M; Struys, M M R F; De Smet, T; Malik, R J; Chang, C L; Neuman, G G
2007-06-01
Facial electromyography (FEMG) may have utility in the assessment of nociception during surgery. The difference between state entropy (SE) and response entropy (RE) is an indirect measure of FEMG. This study assesses an automated algorithm for remifentanil administration that is based on maintaining an entropy difference (ED) that is less than an upper boundary condition and greater than a lower boundary condition. The algorithm was constructed with a development set (n = 40), and then automated and studied with a validation set (n = 20) of patients undergoing anterior cruciate ligament repair. The percentage of time that the ED was maintained between the two boundary conditions was determined. Remifentanil and propofol predicted effect-site concentrations (Ce) were determined at surgical milestones and, after drug discontinuation, the time to response to verbal stimulation and orientation was measured. The median (25th-75th percentile) per cent of time that the ED was recorded between the boundary conditions was 99.3% (98.1-99.8%). Predicted propofol (µg ml(-1)) and remifentanil (ng ml(-1)) Ce (sd), respectively, were 3.5 and 4.0 at induction, 1.9 (0.8) and 7.2 (3.7) at the end of surgery, and 1.1 (0.5) and 3.2 (2.2) at eye opening. The median time to eye opening and orientation was 3.8 and 6.8 min, respectively. This feasibility study supports the concept that remifentanil may be delivered using an algorithm that maintains the difference between SE and RE between the upper and lower boundary condition.
NASA Astrophysics Data System (ADS)
Semenishchev, E. A.; Marchuk, V. I.; Fedosov, V. P.; Stradanchenko, S. G.; Ruslyakov, D. V.
2015-05-01
This work studies a computationally simple method of saliency map calculation. Research in this field has received increasing interest as complex techniques are adapted for use in portable devices. A saliency map allows increasing the speed of many subsequent algorithms and reducing their computational complexity. The proposed method of saliency map detection is based on analysis in both image and frequency space. Several examples of test images from the Kodak dataset with different levels of detail are considered in this paper and demonstrate the effectiveness of the proposed approach. We present experiments which show that the proposed method provides better results than the Salience Toolbox framework in terms of accuracy and speed.
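The abstract does not spell out its frequency-space analysis; as one classic example of frequency-domain saliency, the spectral-residual method of Hou and Zhang fits in a few lines of numpy (shown as context, not as the authors' method).

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(grey):
    """Spectral-residual saliency map of a 2D greyscale image."""
    F = np.fft.fft2(grey)
    log_amp = np.log(np.abs(F) + 1e-8)
    phase = np.angle(F)
    residual = log_amp - uniform_filter(log_amp, size=3)  # deviation from smooth spectrum
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(sal, sigma=3)                  # smooth the final map
```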
The "side" matters: how configurality is reflected in completion.
Kogo, Naoki; Wagemans, Johan
2013-01-01
The perception of figure-ground organization is a highly context-sensitive phenomenon. Accumulating evidence suggests that the so-called completion phenomenon is tightly linked to this figure-ground organization. While many computational models have applied borderline completion algorithms based on the detection of boundary alignments, we point out the problems of this approach. We hypothesize that completion is a result of computing the figure-ground organization. Specifically, the global interactions in the neural network activate the "border-ownership" sensitive neurons at the location where no luminance contrast is given and this activation corresponds to the perception of illusory contours. The implications of this result to the general property of emerging Gestalt percepts are discussed.
Intelligent screening of electrofusion-polyethylene joints based on a thermal NDT method
NASA Astrophysics Data System (ADS)
Doaei, Marjan; Tavallali, M. Sadegh
2018-05-01
The combination of infrared thermal images and artificial intelligence methods has opened new avenues for pushing the boundaries of available testing methods. Hence, in the current study, a novel thermal non-destructive testing method for polyethylene electrofusion joints was combined with k-means clustering algorithms as an intelligent screening tool. The experiments focused on ovality of pipes in the coupler, as well as misalignment of pipes and couplers in 25 mm diameter joints. The temperature responses of each joint to an internal heat pulse were recorded by an IR thermal camera and further processed to identify the faulty joints. The results showed a clustering accuracy of 92%, as well as an abnormality detection capability of more than 90%.
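A minimal scikit-learn sketch of the screening step, assuming each joint has been reduced to a small feature vector summarizing its thermal response (the feature definitions below are placeholders, not the authors'):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical features: one row per joint, e.g. peak temperature rise,
# decay time constant, and spatial spread of the thermal response.
features = np.random.rand(40, 3)                     # placeholder data
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
faulty = labels == np.argmin(np.bincount(labels))    # flag the smaller cluster
```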
Anderson, I M; Bezdek, J C
1984-01-01
This paper introduces a new theory for the tangential deflection and curvature of plane discrete curves. Our theory applies to discrete data in either rectangular boundary coordinate or chain coded formats; its rationale is drawn from the statistical and geometric properties associated with the eigenvalue-eigenvector structure of sample covariance matrices. Specifically, we prove that the nonzero entry of the commutator of a pair of scatter matrices constructed from discrete arcs is related to the angle between their eigenspaces. Further, we show that this entry is, in certain limiting cases, also proportional to the analytical curvature of the plane curve from which the discrete data are drawn. These results lend a sound theoretical basis to the notions of discrete curvature and tangential deflection; moreover, they provide a means for computationally efficient implementation of algorithms which use these ideas in various image processing contexts. As a concrete example, we develop the commutator vertex detection (CVD) algorithm, which identifies the location of vertices in shape data based on excessive cumulative tangential deflection, and we compare its performance to several well-established corner detectors that utilize the alternative strategy of finding (approximate) curvature extrema.
LobeFinder: A Convex Hull-Based Method for Quantitative Boundary Analyses of Lobed Plant Cells
Wu, Tzu-Ching; Belteton, Samuel A.; Szymanski, Daniel B.; Umulis, David M.
2016-01-01
Dicot leaves are composed of a heterogeneous mosaic of jigsaw puzzle piece-shaped pavement cells that vary greatly in size and the complexity of their shape. Given the importance of the epidermis and this particular cell type for leaf expansion, there is a strong need to understand how pavement cells morph from a simple polyhedral shape into highly lobed and interdigitated cells. At present, it is still unclear how and when the patterns of lobing are initiated in pavement cells, and one major technological bottleneck to addressing the problem is the lack of a robust and objective methodology to identify and track lobing events during the transition from simple cell geometry to lobed cells. We developed a convex hull-based algorithm termed LobeFinder to identify lobes, quantify geometric properties, and create a useful graphical output of cell coordinates for further analysis. The algorithm was validated against manually curated images of pavement cells of widely varying sizes and shapes. The ability to objectively count and detect new lobe initiation events provides an improved quantitative framework to analyze mutant phenotypes, detect symmetry-breaking events in time-lapse image data, and quantify the time-dependent correlation between cell shape change and intracellular factors that may play a role in the morphogenesis process. PMID:27288363
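A stripped-down illustration of the convex-hull idea, assuming the traced cell outline is available as an ordered array of (x, y) coordinates; the published LobeFinder algorithm adds point-spacing checks and graphical output not reproduced here.

```python
import numpy as np
from scipy.spatial import ConvexHull

def lobe_tip_candidates(outline_xy):
    """Outline points lying on the convex hull are candidate lobe tips;
    between consecutive hull vertices the outline dips inward at the
    indentations that separate lobes."""
    hull = ConvexHull(outline_xy)
    return outline_xy[hull.vertices]
```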
Finding intonational boundaries using acoustic cues related to the voice source
NASA Astrophysics Data System (ADS)
Choi, Jeung-Yoon; Hasegawa-Johnson, Mark; Cole, Jennifer
2005-10-01
Acoustic cues related to the voice source, including harmonic structure and spectral tilt, were examined for relevance to prosodic boundary detection. The measurements considered here comprise five categories: duration, pitch, harmonic structure, spectral tilt, and amplitude. Distributions of the measurements and statistical analysis show that the measurements may be used to differentiate between prosodic categories. Detection experiments on the Boston University Radio Speech Corpus show equal error detection rates around 70% for accent and boundary detection, using only the acoustic measurements described, without any lexical or syntactic information. Further investigation of the detection results shows that duration and amplitude measurements, and, to a lesser degree, pitch measurements, are useful for detecting accents, while all voice source measurements except pitch measurements are useful for boundary detection.
An efficient parallel termination detection algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, A. H.; Crivelli, S.; Jessup, E. R.
2004-05-27
Information local to any one processor is insufficient to monitor the overall progress of most distributed computations. Typically, a second distributed computation for detecting termination of the main computation is necessary. In order to be a useful computational tool, the termination detection routine must operate concurrently with the main computation, adding minimal overhead, and it must promptly and correctly detect termination when it occurs. In this paper, we present a new algorithm for detecting the termination of a parallel computation on distributed-memory MIMD computers that satisfies all of those criteria. A variety of termination detection algorithms have been devised. Of these, the algorithm presented by Sinha, Kale, and Ramkumar (henceforth, the SKR algorithm) is unique in its ability to adapt to the load conditions of the system on which it runs, thereby minimizing the impact of termination detection on performance. Because their algorithm also detects termination quickly, we consider it to be the most efficient practical algorithm presently available. The termination detection algorithm presented here was developed for use in the PMESC programming library for distributed-memory MIMD computers. Like the SKR algorithm, our algorithm adapts to system loads and imposes little overhead. Also like the SKR algorithm, ours is tree-based, and it does not depend on any assumptions about the physical interconnection topology of the processors or the specifics of the distributed computation. In addition, our algorithm is easier to implement and requires only half as many tree traverses as does the SKR algorithm. This paper is organized as follows. In section 2, we define our computational model. In section 3, we review the SKR algorithm. We introduce our new algorithm in section 4, and prove its correctness in section 5. We discuss its efficiency and present experimental results in section 6.
AN FDTD ALGORITHM WITH PERFECTLY MATCHED LAYERS FOR CONDUCTIVE MEDIA. (R825225)
We extend Berenger's perfectly matched layers (PML) to conductive media. A finite-difference-time-domain (FDTD) algorithm with PML as an absorbing boundary condition is developed for solutions of Maxwell's equations in inhomogeneous, conductive media. For a perfectly matched laye...
A PML-FDTD ALGORITHM FOR SIMULATING PLASMA-COVERED CAVITY-BACKED SLOT ANTENNAS. (R825225)
A three-dimensional frequency-dependent finite-difference time-domain (FDTD) algorithm with perfectly matched layer (PML) absorbing boundary condition (ABC) and recursive convolution approaches is developed to model plasma-covered open-ended waveguide or cavity-backed slot antenn...
The Linear Bicharacteristic Scheme for Computational Electromagnetics
NASA Technical Reports Server (NTRS)
Beggs, John H.; Chan, Siew-Loong
2000-01-01
The upwind leapfrog or Linear Bicharacteristic Scheme (LBS) has previously been implemented and demonstrated on electromagnetic wave propagation problems. This paper extends the Linear Bicharacteristic Scheme for computational electromagnetics to treat lossy dielectric and magnetic materials and perfect electrical conductors. This is accomplished by proper implementation of the LBS for homogeneous lossy dielectric and magnetic media, and treatment of perfect electrical conductors (PECs) are shown to follow directly in the limit of high conductivity. Heterogeneous media are treated through implementation of surface boundary conditions and no special extrapolations or interpolations at dielectric material boundaries are required. Results are presented for one-dimensional model problems on both uniform and nonuniform grids, and the FDTD algorithm is chosen as a convenient reference algorithm for comparison. The results demonstrate that the explicit LBS is a dissipation-free, second-order accurate algorithm which uses a smaller stencil than the FDTD algorithm, yet it has approximately one-third the phase velocity error. The LBS is also more accurate on nonuniform grids.
Reducing noise component on medical images
NASA Astrophysics Data System (ADS)
Semenishchev, Evgeny; Voronin, Viacheslav; Dub, Vladimir; Balabaeva, Oksana
2018-04-01
Medical visualization and analysis of medical data is an active research area. Medical images are used in microbiology, genetics, roentgenology, oncology, surgery, ophthalmology, etc. Initial data processing is a major step towards obtaining a good diagnostic result. This paper considers an approach that allows image filtering while preserving object boundaries. The proposed algorithm is based on sequential data processing. At the first stage, local areas are determined; for this purpose, threshold processing and the classical ICI algorithm are applied. The second stage uses a method based on two criteria, namely the L2 norm and the first-order square difference. To preserve the boundaries of objects, the transition boundary and its local neighborhood are processed with a fixed-coefficient filtering algorithm. Reconstructed images from CT, X-ray, and microbiological studies are shown as examples. The test images demonstrate the effectiveness of the proposed algorithm and its applicability to many medical imaging applications.
A Two-Dimensional Linear Bicharacteristic Scheme for Electromagnetics
NASA Technical Reports Server (NTRS)
Beggs, John H.
2002-01-01
The upwind leapfrog or Linear Bicharacteristic Scheme (LBS) has previously been implemented and demonstrated on one-dimensional electromagnetic wave propagation problems. This memorandum extends the Linear Bicharacteristic Scheme for computational electromagnetics to model lossy dielectric and magnetic materials and perfect electrical conductors in two dimensions. This is accomplished by proper implementation of the LBS for homogeneous lossy dielectric and magnetic media and for perfect electrical conductors. Both the Transverse Electric and Transverse Magnetic polarizations are considered. Computational requirements and a Fourier analysis are also discussed. Heterogeneous media are modeled through implementation of surface boundary conditions and no special extrapolations or interpolations at dielectric material boundaries are required. Results are presented for two-dimensional model problems on uniform grids, and the Finite Difference Time Domain (FDTD) algorithm is chosen as a convenient reference algorithm for comparison. The results demonstrate that the two-dimensional explicit LBS is a dissipation-free, second-order accurate algorithm which uses a smaller stencil than the FDTD algorithm, yet it has less phase velocity error.
Numerical algorithms for computations of feedback laws arising in control of flexible systems
NASA Technical Reports Server (NTRS)
Lasiecka, Irena
1989-01-01
Several continuous models are examined, which describe flexible structures with boundary or point control/observation. Issues related to the computation of feedback laws are examined (particularly stabilizing feedbacks) with sensors and actuators located either on the boundary or at specific point locations of the structure. One of the main difficulties is due to the great sensitivity of the system (hyperbolic systems with unbounded control actions) with respect to perturbations caused either by uncertainty in the model or by errors introduced in implementing numerical algorithms. Thus, special care must be taken in the choice of appropriate numerical schemes which eventually lead to implementable finite dimensional solutions. Finite dimensional algorithms are constructed on the basis of an a priori analysis of the properties of the original, continuous (infinite dimensional) systems, with the following criteria in mind: (1) convergence and stability of the algorithms and (2) robustness (reasonable insensitivity with respect to the unknown parameters of the systems). Examples with mixed finite element methods and spectral methods are provided.
Retinal layer segmentation of macular OCT images using boundary classification
Lang, Andrew; Carass, Aaron; Hauser, Matthew; Sotirchos, Elias S.; Calabresi, Peter A.; Ying, Howard S.; Prince, Jerry L.
2013-01-01
Optical coherence tomography (OCT) has proven to be an essential imaging modality for ophthalmology and is proving to be very important in neurology. OCT enables high resolution imaging of the retina, both at the optic nerve head and the macula. Macular retinal layer thicknesses provide useful diagnostic information and have been shown to correlate well with measures of disease severity in several diseases. Since manual segmentation of these layers is time consuming and prone to bias, automatic segmentation methods are critical for full utilization of this technology. In this work, we build a random forest classifier to segment eight retinal layers in macular cube images acquired by OCT. The random forest classifier learns the boundary pixels between layers, producing an accurate probability map for each boundary, which is then processed to finalize the boundaries. Using this algorithm, we can accurately segment the entire retina contained in the macular cube to an accuracy of at least 4.3 microns for any of the nine boundaries. Experiments were carried out on both healthy and multiple sclerosis subjects, with no difference in the accuracy of our algorithm found between the groups. PMID:23847738
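A hedged scikit-learn sketch of the boundary-classification idea, with placeholder feature vectors standing in for the paper's per-pixel features; the post-processing that turns probability maps into final boundaries is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# X: one feature vector per pixel (placeholders for intensity, gradient,
# spatial context); y: 1 if the pixel lies on a given layer boundary.
X_train = np.random.rand(1000, 6)
y_train = np.random.randint(0, 2, 1000)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
boundary_prob = rf.predict_proba(np.random.rand(500, 6))[:, 1]  # probability map
```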
Radar Detection of Marine Mammals
2011-09-30
BFT-BPT algorithm for use with our radar data. This track-before-detect algorithm had been effective in enhancing small but persistent signatures in... will be possible with the detect-before-track algorithm. We next evaluated the track-before-detect algorithm, the BFT-BPT, on the CEDAR data
An opposite view data replacement approach for reducing artifacts due to metallic dental objects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yazdi, Mehran; Lari, Meghdad Asadi; Bernier, Gaston
Purpose: To present a conceptually new method for metal artifact reduction (MAR) that can be used on patients with multiple objects within the scan plane that are also of small size along the longitudinal (scanning) direction, such as dental fillings. Methods: The proposed algorithm, named opposite view replacement, achieves MAR by first detecting the projection data affected by metal objects and then replacing the affected projections with the corresponding opposite view projections, which are not affected by metal objects. The authors also applied a fading process to avoid producing any discontinuities at the boundary of the affected projection areas in the sinogram. A skull phantom with and without a variety of dental metal inserts was made to extract the performance metric of the algorithm. A head and neck case, typical of IMRT planning, was also tested. Results: The reconstructed CT images based on this new replacement scheme show a significant improvement in image quality for patients with metallic dental objects compared to MAR algorithms based on the interpolation scheme. For the phantom, the authors showed that the artifact reduction algorithm can efficiently recover the CT numbers in the area next to the metallic objects. Conclusions: The authors presented a new and efficient method for artifact reduction due to multiple small metallic objects. The results obtained from phantoms and clinical cases fully validate the proposed approach.
A random forest algorithm for nowcasting of intense precipitation events
NASA Astrophysics Data System (ADS)
Das, Saurabh; Chakraborty, Rohit; Maitra, Animesh
2017-09-01
Automatic nowcasting of convective initiation and thunderstorms has potential applications in several sectors including aviation planning and disaster management. In this paper, a random forest based machine learning algorithm is tested for nowcasting of convective rain with a ground based radiometer. Brightness temperatures measured at 14 frequencies (7 frequencies in the 22-31 GHz band and 7 frequencies in the 51-58 GHz band) are utilized as the inputs of the model. The lower frequency band is associated with water vapor absorption, whereas the upper frequency band relates to oxygen absorption; together they provide information on the temperature and humidity of the atmosphere. The synthetic minority over-sampling technique is used to balance the data set, and 10-fold cross validation is used to assess the performance of the model. Results indicate that the random forest algorithm with a fixed alarm generation time of 30 min and 60 min performs quite well (probability of detection of all types of weather conditions ∼90%) with low false alarms. It is, however, also observed that reducing the alarm generation time improves the threat score significantly and also decreases false alarms. The proposed model is found to be very sensitive to boundary layer instability, as indicated by the variable importance measure. The study shows the suitability of a random forest algorithm for nowcasting applications utilizing a large number of input parameters from diverse sources, and it can be utilized in other forecasting problems.
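A minimal sketch of such an evaluation pipeline, assuming scikit-learn and imbalanced-learn: 14 brightness temperatures as inputs, SMOTE applied inside each training fold, and 10-fold cross-validation of a random forest. The synthetic data and model settings are illustrative only.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 14))            # stand-in for Tb at 14 frequencies
y = (rng.random(2000) < 0.1).astype(int)   # rare convective-rain class

pipe = Pipeline([
    ("smote", SMOTE(random_state=0)),      # balances only the training folds
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(pipe, X, y, cv=cv, scoring="recall")
print("mean probability of detection (recall):", scores.mean())
```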
Wireless sensor networks for heritage object deformation detection and tracking algorithm.
Xie, Zhijun; Huang, Guangyan; Zarei, Roozbeh; He, Jing; Zhang, Yanchun; Ye, Hongwu
2014-10-31
Deformation is the direct cause of heritage object collapse. It is important to monitor and signal early warnings of the deformation of heritage objects. However, traditional heritage object monitoring methods only roughly monitor a simple-shaped heritage object as a whole, and cannot monitor complicated heritage objects, which may have a large number of surfaces inside and outside. Wireless sensor networks, comprising many small-sized, low-cost, low-power intelligent sensor nodes, are better suited to detecting the deformation of every small part of a heritage object. Wireless sensor networks need an effective mechanism to reduce both the communication costs and energy consumption in order to monitor the heritage objects in real time. In this paper, we provide an effective heritage object deformation detection and tracking method using wireless sensor networks (EffeHDDT). In EffeHDDT, we discover a connected core set of sensor nodes to reduce the communication cost for transmitting and collecting the data of the sensor networks. In particular, we propose a heritage object boundary detecting and tracking mechanism. Both theoretical analysis and experimental results demonstrate that our EffeHDDT method outperforms the existing methods in terms of network traffic and the precision of the deformation detection. PMID:25365458
NASA Astrophysics Data System (ADS)
Qin, Xulei; Cong, Zhibin; Halig, Luma V.; Fei, Baowei
2013-03-01
An automatic framework is proposed to segment the right ventricle on ultrasound images. This method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining a sparse matrix transform (SMT), a training model, and a localized region based level set. First, the sparse matrix transform extracts the main motion regions of the myocardium as eigenimages by analyzing the statistical information of these images. Second, a training model of the right ventricle is registered to the extracted eigenimages in order to automatically detect the main location of the right ventricle and the corresponding transform relationship between the training model and the SMT-extracted results in the series. Third, the training model is then adjusted as an adapted initialization for the segmentation of each image in the series. Finally, based on the adapted initializations, a localized region based level set algorithm is applied to segment both epicardial and endocardial boundaries of the right ventricle from the whole series. Experimental results from real subject data validated the performance of the proposed framework in segmenting the right ventricle from echocardiography. The mean Dice scores for the epicardial and endocardial boundaries are 89.1% +/- 2.3% and 83.6% +/- 7.3%, respectively. The automatic segmentation method based on the sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.
Global Discrete Artificial Boundary Conditions for Time-Dependent Wave Propagation
NASA Technical Reports Server (NTRS)
Ryabenkii, V. S.; Tsynkov, S. V.; Turchaninov, V. I.; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
We construct global artificial boundary conditions (ABCs) for the numerical simulation of wave processes on unbounded domains using a special non-deteriorating algorithm that has been developed previously for the long-term computation of wave-radiation solutions. The ABCs are obtained directly for the discrete formulation of the problem; in so doing, neither a rational approximation of 'non-reflecting kernels,' nor discretization of the continuous boundary conditions is required. The extent of temporal nonlocality of the new ABCs appears fixed and limited; in addition, the ABCs can handle artificial boundaries of irregular shape on regular grids with no fitting/adaptation needed and no accuracy loss induced. The non-deteriorating algorithm, which is the core of the new ABCs, is inherently three-dimensional, it guarantees temporally uniform grid convergence of the solution driven by a continuously operating source on arbitrarily long time intervals, and provides unimprovable linear computational complexity with respect to the grid dimension. The algorithm is based on the presence of lacunae, i.e., aft fronts of the waves, in wave-type solutions in odd-dimensional spaces. It can, in fact, be built as a modification on top of any consistent and stable finite-difference scheme, making its grid convergence uniform in time and at the same time keeping the rate of convergence the same as that of the non-modified scheme. In the paper, we delineate the construction of the global lacunae-based ABCs in the framework of a discretized wave equation. The ABCs are obtained for the most general formulation of the problem that involves radiation of waves by moving sources (e.g., radiation of acoustic waves by a maneuvering aircraft). We also present systematic numerical results that corroborate the theoretical design properties of the ABCs' algorithm.
Global Discrete Artificial Boundary Conditions for Time-Dependent Wave Propagation
NASA Astrophysics Data System (ADS)
Ryaben'kii, V. S.; Tsynkov, S. V.; Turchaninov, V. I.
2001-12-01
We construct global artificial boundary conditions (ABCs) for the numerical simulation of wave processes on unbounded domains using a special nondeteriorating algorithm that has been developed previously for the long-term computation of wave-radiation solutions. The ABCs are obtained directly for the discrete formulation of the problem; in so doing, neither a rational approximation of “nonreflecting kernels” nor discretization of the continuous boundary conditions is required. The extent of temporal nonlocality of the new ABCs appears fixed and limited; in addition, the ABCs can handle artificial boundaries of irregular shape on regular grids with no fitting/adaptation needed and no accuracy loss induced. The nondeteriorating algorithm, which is the core of the new ABCs, is inherently three-dimensional, it guarantees temporally uniform grid convergence of the solution driven by a continuously operating source on arbitrarily long time intervals and provides unimprovable linear computational complexity with respect to the grid dimension. The algorithm is based on the presence of lacunae, i.e., aft fronts of the waves, in wave-type solutions in odd-dimensional spaces. It can, in fact, be built as a modification on top of any consistent and stable finite-difference scheme, making its grid convergence uniform in time and at the same time keeping the rate of convergence the same as that of the unmodified scheme. In this paper, we delineate the construction of the global lacunae-based ABCs in the framework of a discretized wave equation. The ABCs are obtained for the most general formulation of the problem that involves radiation of waves by moving sources (e.g., radiation of acoustic waves by a maneuvering aircraft). We also present systematic numerical results that corroborate the theoretical design properties of the ABC algorithm.
NASA Technical Reports Server (NTRS)
Vlahopoulos, Nickolas; Lyle, Karen H.; Burley, Casey L.
1998-01-01
An algorithm for generating appropriate velocity boundary conditions for an acoustic boundary element analysis from the kinematics of an operating propeller is presented. It constitutes the initial phase of integrating sophisticated rotorcraft models into a conventional boundary element analysis. Currently, the pressure field is computed by a linear approximation. An initial validation of the developed process was performed by comparing numerical results to test data for the external acoustic pressure on the surface of a tilt-rotor aircraft for one flight condition.
San, Phyo Phyo; Ling, Sai Ho; Nuryani; Nguyen, Hung
2014-08-01
This paper focuses on a hybridization technology using rough set concepts and neural computing for decision and classification purposes. Based on the rough set properties, the lower region and boundary region are defined to partition the input signal into a consistent (predictable) part and an inconsistent (random) part. In this way, the neural network is designed to deal only with the boundary region, which mainly consists of the inconsistent part of the applied input signal that causes inaccurate modeling of the data set. Owing to the different characteristics of neural network (NN) applications, the same structure of a conventional NN might not give the optimal solution. Based on knowledge of the application in this paper, a block-based neural network (BBNN) is selected as a suitable classifier due to its ability to evolve internal structures and its adaptability in dynamic environments. This architecture systematically incorporates the characteristics of the application into the structure of the hybrid rough-block-based neural network (R-BBNN). A global training algorithm, hybrid particle swarm optimization with wavelet mutation, is introduced for parameter optimization of the proposed R-BBNN. The performance of the proposed R-BBNN algorithm was evaluated by an application to the field of medical diagnosis using real hypoglycemia episodes in patients with Type 1 diabetes mellitus. The performance of the proposed hybrid system has been compared with some existing neural networks. The comparison results indicated that the proposed method has improved classification performance and results in early convergence of the network.
Ratcliff, Roger; Starns, Jeffrey J.
2014-01-01
Confidence in judgments is a fundamental aspect of decision making, and tasks that collect confidence judgments are an instantiation of multiple-choice decision making. We present a model for confidence judgments in recognition memory tasks that uses a multiple-choice diffusion decision process with separate accumulators of evidence for the different confidence choices. The accumulator that first reaches its decision boundary determines which choice is made. Five algorithms for accumulating evidence were compared, and one of them produced proportions of responses for each of the choices and full response time distributions for each choice that closely matched empirical data. With this algorithm, an increase in the evidence in one accumulator is accompanied by a decrease in the others so that the total amount of evidence in the system is constant. Application of the model to the data from an earlier experiment (Ratcliff, McKoon, & Tindall, 1994) uncovered a relationship between the shapes of z-transformed receiver operating characteristics and the behavior of response time distributions. Both are explained in the model by the behavior of the decision boundaries. For generality, we also applied the decision model to a 3-choice motion discrimination task and found it accounted for data better than a competing class of models. The confidence model presents a coherent account of confidence judgments and response time that cannot be explained with currently popular signal detection theory analyses or dual-process models of recognition. PMID:23915088
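For intuition, here is a toy, unfitted simulation of the winning accumulation rule described above: several accumulators race toward a common boundary, and every increment is mean-centred across accumulators so the total evidence in the system stays constant. Drift rates, noise level, and boundary are illustrative assumptions only.

```python
import numpy as np

def race_trial(drifts, boundary=1.0, dt=0.001, noise=0.1, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    drifts = np.asarray(drifts, dtype=float)
    x = np.zeros(drifts.size)
    t = 0.0
    while x.max() < boundary:
        inc = drifts * dt + noise * np.sqrt(dt) * rng.normal(size=drifts.size)
        inc -= inc.mean()   # conserve the total evidence in the system
        x += inc
        t += dt
    return int(x.argmax()), t  # chosen confidence category and response time

rng = np.random.default_rng(2)
choices = [race_trial([0.5, 0.2, 0.1, 0.1, 0.05, 0.05], rng=rng)[0]
           for _ in range(200)]
print(np.bincount(choices, minlength=6))
```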
Estimation of slipping organ motion by registration with direction-dependent regularization.
Schmidt-Richberg, Alexander; Werner, René; Handels, Heinz; Ehrhardt, Jan
2012-01-01
Accurate estimation of respiratory motion is essential for many applications in medical 4D imaging, for example for radiotherapy of thoracic and abdominal tumors. It is usually done by non-linear registration of image scans at different states of the breathing cycle but without further modeling of specific physiological motion properties. In this context, the accurate computation of respiration-driven lung motion is especially challenging because this organ is sliding along the surrounding tissue during the breathing cycle, leading to discontinuities in the motion field. Without considering this property in the registration model, common intensity-based algorithms cause incorrect estimation along the object boundaries. In this paper, we present a model for incorporating slipping motion in image registration. Extending the common diffusion registration by distinguishing between normal- and tangential-directed motion, we are able to estimate slipping motion at the organ boundaries while preventing gaps and ensuring smooth motion fields inside and outside. We further present an algorithm for a fully automatic detection of discontinuities in the motion field, which does not rely on a prior segmentation of the organ. We evaluate the approach for the estimation of lung motion based on 23 inspiration/expiration pairs of thoracic CT images. The results show a visually more plausible motion estimation. Moreover, the target registration error is quantified using manually defined landmarks and a significant improvement over the standard diffusion regularization is shown. Copyright © 2011 Elsevier B.V. All rights reserved.
Performances of the New Real Time Tsunami Detection Algorithm applied to tide gauges data
NASA Astrophysics Data System (ADS)
Chierici, F.; Embriaco, D.; Morucci, S.
2017-12-01
Real-time tsunami detection algorithms play a key role in any Tsunami Early Warning System. We have developed a new algorithm for tsunami detection (TDA) based on the real-time tide removal and real-time band-pass filtering of seabed pressure time series acquired by Bottom Pressure Recorders. The TDA algorithm greatly increases the tsunami detection probability, shortens the detection delay and enhances detection reliability with respect to the most widely used tsunami detection algorithm, while containing the computational cost. The algorithm is designed to be used also in autonomous early warning systems, with a set of input parameters and procedures which can be reconfigured in real time. We have also developed a methodology based on Monte Carlo simulations to test tsunami detection algorithms. The algorithm performance is estimated by defining and evaluating statistical parameters, namely the detection probability and the detection delay, which are functions of the tsunami amplitude and wavelength, and the rate of false alarms. In this work we present the performance of the TDA algorithm applied to tide gauge data. We have adapted the new tsunami detection algorithm and the Monte Carlo test methodology to tide gauges. Sea level data acquired by coastal tide gauges in different locations and environmental conditions have been used in order to consider real working scenarios in the test. We also present an application of the algorithm to the tsunami event generated by the Tohoku earthquake of March 11, 2011, using data recorded by several tide gauges scattered all over the Pacific area.
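A sketch of the two signal-processing ingredients named above, under assumed sampling and band settings (15 s sampling; pass band between 2 min and 2 h periods) that are not the published TDA parameters; a causal Butterworth filter keeps the scheme usable in real time.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 1.0 / 15.0                         # one sample every 15 s (assumed)
low, high = 1.0 / 7200.0, 1.0 / 120.0   # pass periods of ~2 h down to 2 min
b, a = butter(4, [low, high], btype="bandpass", fs=fs)

def detect(series, threshold=0.05):
    """Flag samples whose band-passed amplitude exceeds `threshold` (metres)."""
    filtered = lfilter(b, a, series)    # causal filter, usable in real time
    return np.abs(filtered) > threshold, filtered
```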
NASA Astrophysics Data System (ADS)
Yang, T.; Wang, Z.; Zhang, W.; Gbaguidi, A.; Sugimoto, N.; Matsui, I.; Wang, X.; Yele, S.
2017-12-01
Predicting air pollution events in the low atmosphere over megacities requires a thorough understanding of tropospheric dynamic and chemical processes, involving, notably, continuous and accurate determination of the boundary layer height (BLH). Through intensive observation experiments over Beijing (China) and an exhaustive evaluation of existing algorithms applied to BLH determination, persistent critical limitations are noticed, in particular over polluted episodes. Basically, under weak thermal convection with high aerosol loading, none of the retrieval algorithms is able to fully capture the diurnal cycle of the BLH, due to insufficient vertical mixing of pollutants in the boundary layer associated with the impact of gravity waves on the tropospheric structure. Subsequently, a new approach based on gravity wave theory (the cubic root gradient method: CRGM) is developed to overcome this weakness and accurately reproduce the fluctuations of the BLH under various atmospheric pollution conditions. A comprehensive evaluation of CRGM highlights its high performance in determining BLH from lidar. In comparison with the existing retrieval algorithms, CRGM substantially reduces the related computational uncertainties and errors in BLH determination (a strong increase of the correlation coefficient from 0.44 to 0.91 and a significant decrease of the root mean square error from 643 m to 142 m). This newly developed technique is expected to contribute to improving the accuracy of air quality modelling and forecasting systems.
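A hedged, schematic reading of the cubic root gradient idea (the published CRGM likely differs in detail): take the cube root of the range-corrected backscatter profile before differentiating, then place the BLH at the strongest negative gradient.

```python
import numpy as np

def blh_crgm(z, backscatter, zmin=100.0):
    """z: heights (m); backscatter: range-corrected lidar profile."""
    profile = np.cbrt(np.maximum(backscatter, 0.0))  # cube root first
    grad = np.gradient(profile, z)
    valid = z >= zmin            # skip the instrument overlap region (assumed)
    return z[valid][np.argmin(grad[valid])]  # strongest negative gradient
```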
NASA Astrophysics Data System (ADS)
Doha, Eid H.; Bhrawy, Ali H.; Abdelkawy, Mohammed A.
2014-09-01
In this paper, we propose an efficient spectral collocation algorithm to solve numerically wave type equations subject to initial, boundary and non-local conservation conditions. The shifted Jacobi pseudospectral approximation is investigated for the discretization of the spatial variable of such equations. It possesses spectral accuracy in the spatial variable. The shifted Jacobi-Gauss-Lobatto (SJ-GL) quadrature rule is established for treating the non-local conservation conditions, and then the problem with its initial and non-local boundary conditions is reduced to a system of second-order ordinary differential equations in the temporal variable. This system is solved by a two-stage fourth-order A-stable implicit Runge-Kutta scheme. Five numerical examples with comparisons are given. The computational results demonstrate that the proposed algorithm is more accurate than the finite difference method, the method of lines and the spline collocation approach.
Assimilation of Wave and Current Data for Prediction of Inlet and River Mouth Dynamics
2013-07-01
onto the Delft3D computational grid and the specification of Riemann-type boundary conditions for the boundary-normal velocity and surface elevation...conditions from time-history data from in situ tide gages. The corrections are applied to the surface-elevation contribution to the Riemann boundary...The algorithms described above are all of the strong-constraint variational variety, and make use of adjoint solvers corresponding to the various
NASA Astrophysics Data System (ADS)
Venkataraman, Sankar; Li, Wenjing
2008-03-01
Image analysis for automated diagnosis of cervical cancer has attained high prominence in the last decade. Automated image analysis at all levels requires a basic segmentation of the region of interest (ROI) within a given image. The precision of the diagnosis is often reflected by the precision in detecting the initial region of interest, especially when some features outside the ROI mimic the ones within it. Work described here discusses algorithms that are used to improve the cervical region of interest as a part of automated cervical image diagnosis. A vital visual aid in diagnosing cervical cancer is the aceto-whitening of the cervix after the application of acetic acid. Color and texture are used to segment acetowhite regions within the cervical ROI. Vaginal walls along with cotton swabs sometimes mimic these essential features, leading to several false positives. Work presented here is focused on detecting in-focus vaginal wall boundaries and then extrapolating them to exclude vaginal walls from the cervical ROI. In addition, a marker-controlled watershed segmentation is discussed that is used to detect cotton swabs in the cervical ROI. A dataset comprising 50 high resolution images of the cervix acquired after 60 seconds of acetic acid application was used to test the algorithm. Out of the 50 images, 27 benefited from a new cervical ROI. Significant improvement in overall diagnosis was observed in these images, as false positives caused by features outside the actual ROI mimicking the acetowhite region were eliminated.
Tian, Xiaochun; Chen, Jiabin; Han, Yongqiang; Shang, Jianyu; Li, Nan
2016-01-01
Zero velocity update (ZUPT) plays an important role in pedestrian navigation algorithms with the premise that the zero velocity interval (ZVI) should be detected accurately and effectively. A novel adaptive ZVI detection algorithm based on a smoothed pseudo Wigner–Ville distribution to remove multiple frequencies intelligently (SPWVD-RMFI) is proposed in this paper. The novel algorithm adopts the SPWVD-RMFI method to extract the pedestrian gait frequency and to calculate the optimal ZVI detection threshold in real time by establishing the function relationships between the thresholds and the gait frequency; then, the adaptive adjustment of thresholds with gait frequency is realized and improves the ZVI detection precision. To put it into practice, a ZVI detection experiment is carried out; the result shows that compared with the traditional fixed threshold ZVI detection method, the adaptive ZVI detection algorithm can effectively reduce the false and missed detection rate of ZVI; this indicates that the novel algorithm has high detection precision and good robustness. Furthermore, pedestrian trajectory positioning experiments at different walking speeds are carried out to evaluate the influence of the novel algorithm on positioning precision. The results show that the ZVI detected by the adaptive ZVI detection algorithm for pedestrian trajectory calculation can achieve better performance. PMID:27669266
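The following sketch conveys the flavour of a gait-adaptive detector: estimate the gait frequency from the spectrum of the acceleration magnitude, derive a threshold from it, and flag low-variance windows as zero-velocity intervals. The linear threshold-versus-frequency law is a placeholder assumption; the paper derives its thresholds from SPWVD analysis.

```python
import numpy as np

def detect_zvi(acc, fs, win=0.1):
    """acc: (N, 3) accelerometer samples; fs: sample rate in Hz."""
    mag = np.linalg.norm(acc, axis=1) - 9.81
    # crude gait-frequency estimate from the spectrum of the magnitude
    spec = np.abs(np.fft.rfft(mag - mag.mean()))
    freqs = np.fft.rfftfreq(mag.size, 1.0 / fs)
    band = (freqs > 0.5) & (freqs < 3.0)        # plausible walking band
    gait_f = freqs[band][np.argmax(spec[band])]
    threshold = 0.05 + 0.02 * gait_f            # assumed threshold-vs-frequency law
    n = max(int(win * fs), 1)
    mean = np.convolve(mag, np.ones(n) / n, "same")
    var = np.convolve((mag - mean) ** 2, np.ones(n) / n, "same")
    return var < threshold                      # True where velocity ~ zero

acc = np.random.default_rng(0).normal(0.0, 0.02, size=(2000, 3))
acc[:, 2] += 9.81                               # gravity on the z axis
print(detect_zvi(acc, fs=100.0).mean())
```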
Quantification and Reconstruction in Photoacoustic Tomography
NASA Astrophysics Data System (ADS)
Guo, Zijian
Optical absorption is closely associated with many physiologically important parameters, such as the concentration and oxygen saturation of hemoglobin. Conventionally, accurate quantification in PAT requires knowledge of the optical fluence attenuation, acoustic pressure attenuation, and detection bandwidth. We circumvent this requirement by quantifying the optical absorption coefficients from the acoustic spectra of PA signals acquired at multiple optical wavelengths. We demonstrate the method using optical-resolution photoacoustic microscopy (OR-PAM) in the optical ballistic regime and acoustic-resolution photoacoustic microscopy (AR-PAM) in the optical diffusive regime. The data acquisition speed in photoacoustic computed tomography (PACT) is limited by the laser repetition rate and the number of parallel ultrasound detecting channels. Reconstructing an image with fewer measurements can effectively accelerate the data acquisition and reduce the system cost. We adapted Compressed Sensing (CS) for the reconstruction in PACT. CS-based PACT was implemented as a non-linear conjugate gradient descent algorithm and tested with both phantom and in vivo experiments. Speckles have been considered ubiquitous in all scattering-based coherent imaging technologies. As a coherent imaging modality based on optical absorption, photoacoustic (PA) tomography (PAT) is generally devoid of speckles. PAT suppresses speckles by building up prominent boundary signals, via a mechanism similar to that of specular reflection. When imaging absorbing targets with smooth boundaries, the speckle visibility in PAT, which is defined as the ratio of the square root of the average power of speckles to that of boundaries, is inversely proportional to the square root of the absorber density. If the surfaces of the absorbing targets have uncorrelated height fluctuations, however, the boundary features may become fully developed speckles. The findings were validated by simulations and experiments. The first- and second-order statistics of PAT speckles were also studied experimentally. While the amplitude of the speckles follows a Gaussian distribution, the autocorrelation of the speckle patterns tracks that of the system point spread function.
NASA Astrophysics Data System (ADS)
Peirce, Anthony P.; Rabitz, Herschel
1988-08-01
The boundary element (BE) technique is used to analyze the effect of defects on one-dimensional chemically active surfaces. The standard BE algorithm for diffusion is modified to include the effects of bulk desorption by making use of an asymptotic expansion technique to evaluate influences near boundaries and defect sites. An explicit time evolution scheme is proposed to treat the non-linear equations associated with defect sites. The proposed BE algorithm is shown to provide an efficient and convergent algorithm for modelling localized non-linear behavior. Since it exploits the actual Green's function of the linear diffusion-desorption process that takes place on the surface, the BE algorithm is extremely stable. The BE algorithm is applied to a number of interesting physical problems in which non-linear reactions occur at localized defects. The Lotka-Volterra system is considered in which the source, sink and predator-prey interaction terms are distributed at different defect sites in the domain and in which the defects are coupled by diffusion. This example provides a stringent test of the stability of the numerical algorithm. Marginal stability oscillations are analyzed for the Prigogine-Lefever reaction that occurs on a lattice of defects. Dissipative effects are observed for large perturbations to the marginal stability state, and rapid spatial reorganization of uniformly distributed initial perturbations is seen to take place. In another series of examples the effect of defect locations on the balance between desorptive processes on chemically active surfaces is considered. The effect of dynamic pulsing at various time-scales is considered for a one species reactive trapping model. Similar competitive behavior between neighboring defects previously observed for static adsorption levels is shown to persist for dynamic loading of the surface. The analysis of a more complex three species reaction process also provides evidence of competitive behavior between neighboring defect sites. The proposed BE algorithm is shown to provide a useful technique for analyzing the effect of defect sites on chemically active surfaces.
Paglieroni, David W [Pleasanton, CA; Manay, Siddharth [Livermore, CA
2011-12-20
A stochastic method and system for detecting polygon structures in images, by detecting a set of best matching corners of predetermined acuteness α of a polygon model from a set of similarity scores based on GDM features of corners, and tracking polygon boundaries as particle tracks using a sequential Monte Carlo approach. The tracking involves initializing polygon boundary tracking by selecting pairs of corners from the set of best matching corners to define a first side of a corresponding polygon boundary; tracking all intermediate sides of the polygon boundaries using a particle filter; and terminating polygon boundary tracking by determining the last side of the tracked polygon boundaries to close the polygon boundaries. The particle tracks are then blended to determine polygon matches, which may be made available, such as to a user, for ranking and inspection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elmagarmid, A.K.
The availability of distributed data bases is directly affected by the timely detection and resolution of deadlocks. Consequently, mechanisms are needed to make deadlock detection algorithms resilient to failures. Presented first is a centralized algorithm that allows transactions to have multiple requests outstanding. Next, a new distributed deadlock detection algorithm (DDDA) is presented, using a global detector (GD) to detect global deadlocks and local detectors (LDs) to detect local deadlocks. This algorithm essentially identifies transaction-resource interactions that may cause global (multisite) deadlocks. Third, a deadlock detection algorithm utilizing a transaction-wait-for (TWF) graph is presented. It is a fully disjoint algorithm that allows multiple outstanding requests. The proposed algorithm can achieve improved overall performance by using multiple disjoint controllers coupled with the two-phase property while maintaining the simplicity of centralized schemes. Fourth, an algorithm that combines deadlock detection and avoidance is given. This algorithm uses concurrent transaction controllers and resource coordinators to achieve maximum distribution. The language of CSP is used to describe this algorithm. Finally, two efficient deadlock resolution protocols are given along with some guidelines to be used in choosing a transaction for abortion.
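The common core of wait-for-graph schemes is cycle detection: a cycle in the TWF graph signals a deadlock. The generic single-site sketch below (not the paper's distributed protocols) returns one cycle found by depth-first search; all names are illustrative.

```python
def find_deadlock(twf):
    """twf: dict mapping transaction -> set of transactions it waits for.
    Returns one deadlocked cycle as a list, or None."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in twf}
    stack = []

    def dfs(t):
        color[t] = GRAY
        stack.append(t)
        for u in twf.get(t, ()):
            if color.get(u, WHITE) == GRAY:       # back edge -> cycle found
                return stack[stack.index(u):] + [u]
            if color.get(u, WHITE) == WHITE:
                cycle = dfs(u)
                if cycle:
                    return cycle
        stack.pop()
        color[t] = BLACK
        return None

    for t in list(twf):
        if color[t] == WHITE:
            cycle = dfs(t)
            if cycle:
                return cycle
    return None

print(find_deadlock({"T1": {"T2"}, "T2": {"T3"}, "T3": {"T1"}}))
# ['T1', 'T2', 'T3', 'T1']
```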
Multispectral processing based on groups of resolution elements
NASA Technical Reports Server (NTRS)
Richardson, W.; Gleason, J. M.
1975-01-01
Several nine-point rules are defined and compared with previously studied rules. One of the rules performed well in boundary areas, but with reduced efficiency in field interiors; another combined best performance on field interiors with good sensitivity to boundary detail. The basic threshold gradient and some modifications were investigated as a means of boundary point detection. The hypothesis testing methods of closed-boundary formation were also tested and evaluated. An analysis of the boundary detection problem was initiated, employing statistical signal detection and parameter estimation techniques to analyze various formulations of the problem. These formulations permit the atmospheric and sensor system effects on the data to be thoroughly analyzed. Various boundary features and necessary assumptions can also be investigated in this manner.
Analysis and asynchronous detection of gradually unfolding errors during monitoring tasks
NASA Astrophysics Data System (ADS)
Omedes, Jason; Iturrate, Iñaki; Minguez, Javier; Montesano, Luis
2015-10-01
Human studies on cognitive control processes rely on tasks involving sudden-onset stimuli, which allow the analysis of these neural imprints to be time-locked and relative to the stimulus onset. Human perceptual decisions, however, comprise continuous processes where evidence accumulates until reaching a boundary. Surpassing the boundary leads to a decision where measured brain responses are associated with an internal, unknown onset. The lack of this onset for gradual stimuli hinders both the analyses of brain activity and the training of detectors. This paper studies electroencephalographic (EEG)-measurable signatures of human processing for sudden and gradual cognitive processes represented as a trajectory mismatch under a monitoring task. Time-locked potentials and brain-source analysis of the EEG of sudden mismatches revealed the typical components of event-related potentials and the involvement of brain structures related to cognitive control processing. For gradual mismatch events, time-locked analyses did not show any discernible EEG scalp pattern, despite related brain areas being, to a lesser extent, activated. However, and thanks to the use of non-linear pattern recognition algorithms, it is possible to train an asynchronous detector on sudden events and use it to detect gradual mismatches, as well as obtaining an estimate of their unknown onset. Post-hoc time-locked scalp and brain-source analyses revealed that the EEG patterns of detected gradual mismatches originated in brain areas related to cognitive control processing. This indicates that gradual events induce latency in the evaluation process but that similar brain mechanisms are present in sudden and gradual mismatch events. Furthermore, the proposed asynchronous detection model widens the scope of applications of brain-machine interfaces to other gradual processes.
NASA Astrophysics Data System (ADS)
Hofmann, Ulrich; Siedersberger, Karl-Heinz
2003-09-01
When driving cross-country, the detection of and state estimation for negative obstacles like ditches and creeks are mandatory for safe operation. Very often, ditches can be detected both by different photometric properties (soil vs. vegetation) and by range (disparity) discontinuities. Therefore, algorithms should make use of both the photometric and geometric properties to reliably detect obstacles. This has been achieved in UBM's EMS-Vision system (Expectation-based, Multifocal, Saccadic) for autonomous vehicles. The perception system uses Sarnoff's image processing hardware for real-time stereo vision. This sensor provides both gray value and disparity information for each pixel at high resolution and frame rates. In order to perform an autonomous jink, the boundaries of an obstacle have to be measured accurately for calculating a safe driving trajectory. Ditches in particular are often very extended, so, due to the restricted field of view of the cameras, active gaze control is necessary to explore the boundaries of an obstacle. For successful measurements of image features the system has to satisfy conditions defined by the perception expert. It has to deal with the time constraints of the active camera platform while performing saccades and to keep the geometric conditions defined by the locomotion expert for performing a jink. Therefore, the experts have to cooperate. This cooperation is controlled by a central decision unit (CD), which has knowledge about the mission, the capabilities available in the system, and their limitations. The central decision unit reacts depending on the result of situation assessment by starting, parameterizing or stopping actions (instances of capabilities). The approach has been tested with the 5-ton van VaMoRs. Experimental results will be shown for driving in a typical off-road scenario.
NASA Astrophysics Data System (ADS)
Kotthaus, Simone; O'Connor, Ewan; Münkel, Christoph; Charlton-Perez, Cristina; Haeffelin, Martial; Gabey, Andrew M.; Grimmond, C. Sue B.
2016-08-01
Ceilometer lidars are used for cloud base height detection, to probe aerosol layers in the atmosphere (e.g. detection of elevated layers of Saharan dust or volcanic ash), and to examine boundary layer dynamics. Sensor optics and acquisition algorithms can strongly influence the observed attenuated backscatter profiles; therefore, physical interpretation of the profiles requires careful application of corrections. This study addresses the widely deployed Vaisala CL31 ceilometer. Attenuated backscatter profiles are studied to evaluate the impact of both the hardware generation and firmware version. In response to this work and discussion within the CL31/TOPROF user community (TOPROF, European COST Action aiming to harmonise ground-based remote sensing networks across Europe), Vaisala released new firmware (versions 1.72 and 2.03) for the CL31 sensors. These firmware versions are tested against previous versions, showing that several artificial features introduced by the data processing have been removed. Hence, it is recommended to use this recent firmware for analysing attenuated backscatter profiles. To allow for consistent processing of historic data, correction procedures have been developed that account for artefacts detected in data collected with older firmware. Furthermore, a procedure is proposed to determine and account for the instrument-related background signal from electronic and optical components. This is necessary for using attenuated backscatter observations from any CL31 ceilometer. Recommendations are made for the processing of attenuated backscatter observed with Vaisala CL31 sensors, including the estimation of noise which is not provided in the standard CL31 output. After taking these aspects into account, attenuated backscatter profiles from Vaisala CL31 ceilometers are considered capable of providing valuable information for a range of applications including atmospheric boundary layer studies, detection of elevated aerosol layers, and model verification.
NASA Technical Reports Server (NTRS)
Jentink, Thomas Neil; Usab, William J., Jr.
1990-01-01
An explicit multigrid algorithm was written to solve the Euler and Navier-Stokes equations, with special consideration given to the coarse mesh boundary conditions. These are formulated in a manner consistent with the interior solution, utilizing forcing terms to prevent coarse-mesh truncation error from affecting the fine-mesh solution. A four-stage hybrid Runge-Kutta scheme is used to advance the solution in time, and multigrid convergence is further enhanced by using local time-stepping and implicit residual smoothing. Details of the algorithm are presented along with a description of Jameson's standard multigrid method and a new approach to formulating the multigrid equations.
Solution of internal ballistic problem for SRM with grain of complex shape during main firing phase
NASA Astrophysics Data System (ADS)
Kiryushkin, A. E.; Minkov, L. L.
2017-10-01
Solid rocket motor (SRM) internal ballistics problems belong to the class of problems with moving boundaries. An algorithm able to solve such problems in an axisymmetric formulation on a Cartesian mesh with an arbitrary order of accuracy is considered in this paper. The basis of this algorithm is ghost point extrapolation using the inverse Lax-Wendroff procedure. A level set method is used as an implicit representation of the domain boundary. As an example, the internal ballistics problem for an SRM with an umbrella-type grain was solved during the main firing phase. In addition, the flow parameter distribution in the combustion chamber was obtained for different time moments.
Three-dimensional zonal grids about arbitrary shapes by Poisson's equation
NASA Technical Reports Server (NTRS)
Sorenson, Reese L.
1988-01-01
A method for generating 3-D finite difference grids about or within arbitrary shapes is presented. The 3-D Poisson equations are solved numerically, with values for the inhomogeneous terms found automatically by the algorithm. Those inhomogeneous terms have the effect near boundaries of reducing cell skewness and imposing arbitrary cell height. The method allows the region of interest to be divided into zones (blocks), allowing the method to be applicable to almost any physical domain. A FORTRAN program called 3DGRAPE has been written to implement the algorithm. Lastly, a method for redistributing grid points along lines normal to boundaries will be described.
NASA Astrophysics Data System (ADS)
Geng, Weihua; Zhao, Shan
2017-12-01
We present a new Matched Interface and Boundary (MIB) regularization method for treating charge singularity in solvated biomolecules whose electrostatics are described by the Poisson-Boltzmann (PB) equation. In a regularization method, by decomposing the potential function into two or three components, the singular component can be analytically represented by the Green's function, while the other components possess a higher regularity. Our new regularization combines the efficiency of two-component schemes with the accuracy of the three-component schemes. Based on this regularization, a new MIB finite difference algorithm is developed for solving both linear and nonlinear PB equations, where the nonlinearity is handled by using the inexact-Newton's method. Compared with the existing MIB PB solver based on a three-component regularization, the present algorithm is simpler to implement by circumventing the work to solve a boundary value Poisson equation inside the molecular interface and to compute related interface jump conditions numerically. Moreover, the new MIB algorithm becomes computationally less expensive, while maintaining the same second order accuracy. This is numerically verified by calculating the electrostatic potential and solvation energy on the Kirkwood sphere, on which the analytical solutions are available, and on a series of proteins with various sizes.
NASA Astrophysics Data System (ADS)
Mola Ebrahimi, S.; Arefi, H.; Rasti Veis, H.
2017-09-01
Our paper aims to present a new approach to identify and extract building footprints using aerial images and LiDAR data. Employing an edge detector algorithm, our method first extracts the outer boundary of buildings, and then, by taking advantage of the Hough transform and extracting the boundaries of connected buildings in a building block, it extracts the building footprints located in each block. The proposed method first recognizes the predominant orientation of a building block using the Hough transform, and then rotates the block according to the inverted complement of the dominant line's angle, so that the block lies horizontally. Afterwards, with another Hough transform, vertical lines, which might be the building boundaries of interest, are extracted, and the final building footprints within the block are obtained. The proposed algorithm is implemented and tested on the urban area of Zeebruges, Belgium (IEEE Contest, 2015). The areas of the extracted footprints are compared to the corresponding areas in the reference data, and the mean error is 7.43 m². Moreover, qualitative and quantitative evaluations suggest that the proposed algorithm leads to acceptable results in automated precise extraction of building footprints.
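A sketch of the block-orientation step using scikit-image, assuming the edge map comes from a Canny detector: find the dominant line angle with a Hough transform and rotate the block so that direction becomes horizontal. The edge detector, angle resolution, and rotation convention are illustrative choices, not the authors' pipeline.

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_line, hough_line_peaks, rotate

def deskew_block(image):
    """Rotate `image` so its dominant Hough line direction is horizontal."""
    edges = canny(image)
    angles = np.linspace(-np.pi / 2, np.pi / 2, 360, endpoint=False)
    h, theta, d = hough_line(edges, theta=angles)
    _, peak_angles, _ = hough_line_peaks(h, theta, d, num_peaks=1)
    dominant = np.degrees(peak_angles[0])
    # Sign/offset of the rotation depends on the angle convention and may
    # need adjusting; skimage's Hough angle is that of the line normal.
    return rotate(image, dominant, resize=True), dominant
```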
Zhou, Jianyong; Luo, Zu; Li, Chunquan; Deng, Mi
2018-01-01
When the meshless method is used to establish a mathematical-mechanical model of human soft tissues, it is necessary to define the space occupied by the tissues as the problem domain and the boundary of the domain as the surface of those tissues. Nodes should be distributed both in the problem domain and on its boundaries. Under external force, the displacement of each node is computed by the meshless method to represent the deformation of biological soft tissues. However, computation by the meshless method consumes too much time, which affects the simulation of real-time deformation of human tissues in virtual surgery. In this article, Marquardt's algorithm is proposed to fit the nodal displacements at the problem domain's boundary and obtain the relationship between surface deformation and force. When different external forces are applied, the deformation of the soft tissues can then be obtained quickly from this relationship. The analysis and discussion show that the improved model equations with Marquardt's algorithm not only can simulate the deformation in real time but also preserve the authenticity of the deformation model's physical properties. Copyright © 2017 Elsevier B.V. All rights reserved.
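As an illustration of the fitting step, the sketch below uses the Levenberg-Marquardt solver in SciPy to fit an assumed parametric displacement-versus-force relation to precomputed (here synthetic) meshless solutions; the model form and all values are placeholders, not the article's.

```python
import numpy as np
from scipy.optimize import least_squares

forces = np.linspace(0.0, 5.0, 20)           # applied forces (N)
disp = 0.8 * forces / (1.0 + 0.3 * forces)   # stand-in for meshless output

def residuals(p, f, u):
    a, b = p
    return a * f / (1.0 + b * f) - u         # assumed saturating model form

fit = least_squares(residuals, x0=[1.0, 0.1], args=(forces, disp), method="lm")
a, b = fit.x
# At run time, the deformation for any new force is now a cheap evaluation
# of the fitted relation instead of a full meshless solve.
print(a * 2.5 / (1.0 + b * 2.5))
```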
Boundary identification and error analysis of shocked material images
NASA Astrophysics Data System (ADS)
Hock, Margaret; Howard, Marylesa; Cooper, Leora; Meehan, Bernard; Nelson, Keith
2017-06-01
To compute quantities such as pressure and velocity from laser-induced shock waves propagating through materials, high-speed images are captured and analyzed. Shock images typically display high noise and spatially-varying intensities, causing conventional analysis techniques to have difficulty identifying boundaries in the images without making significant assumptions about the data. We present a novel machine learning algorithm that efficiently segments, or partitions, images with high noise and spatially-varying intensities, and provides error maps that describe a level of uncertainty in the partitioning. The user trains the algorithm by providing locations of known materials within the image but no assumptions are made on the geometries in the image. The error maps are used to provide lower and upper bounds on quantities of interest, such as velocity and pressure, once boundaries have been identified and propagated through equations of state. This algorithm will be demonstrated on images of shock waves with noise and aberrations to quantify properties of the wave as it progresses. DOE/NV/25946-3126. This work was done by National Security Technologies, LLC, under Contract No. DE-AC52-06NA25946 with the U.S. Department of Energy and supported by the SDRD Program.
SU-F-J-113: Multi-Atlas Based Automatic Organ Segmentation for Lung Radiotherapy Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, J; Han, J; Ailawadi, S
Purpose: Normal organ segmentation is one time-consuming and labor-intensive step for lung radiotherapy treatment planning. The aim of this study is to evaluate the performance of a multi-atlas based segmentation approach for automatic organs at risk (OAR) delineation. Methods: Fifteen lung stereotactic body radiation therapy patients were randomly selected. Planning CT images and OAR contours of the heart - HT, aorta - AO, vena cava - VC, pulmonary trunk - PT, and esophagus - ES were exported and used as reference and atlas sets. For automatic organ delineation for a given target CT, 1) all atlas sets were deformably warped to the target CT, 2) the deformed sets were accumulated and normalized to produce organ probability density (OPD) maps, and 3) the OPD maps were converted to contours via image thresholding. The optimal threshold for each organ was empirically determined by comparing the auto-segmented contours against their respective reference contours. The delineated results were evaluated by measuring contour similarity metrics: DICE, mean distance (MD), and true detection rate (TD), where DICE = 2 x (intersection volume)/(sum of the two volumes) and TD = 1.0 - (false positive + false negative)/2.0. The Diffeomorphic Demons algorithm was employed for CT-CT deformable image registrations. Results: Optimal thresholds were determined to be 0.53 for HT, 0.38 for AO, 0.28 for PT, 0.43 for VC, and 0.31 for ES. The mean similarity metrics (DICE[%], MD[mm], TD[%]) were (88, 3.2, 89) for HT, (79, 3.2, 82) for AO, (75, 2.7, 77) for PT, (68, 3.4, 73) for VC, and (51, 2.7, 60) for ES. Conclusion: The investigated multi-atlas based approach produced reliable segmentations for the organs with large and relatively clear boundaries (HT and AO). However, the detection of small and narrow organs with diffuse boundaries (ES) was challenging. Sophisticated atlas selection and multi-atlas fusion algorithms may further improve the quality of segmentations.
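For reference, the two contour metrics quoted above can be computed directly from binary masks; the TD normalisation below is one plausible reading of the quoted definition, not a confirmed detail of the abstract.

```python
import numpy as np

def dice(auto, ref):
    auto, ref = auto.astype(bool), ref.astype(bool)
    return 2.0 * (auto & ref).sum() / (auto.sum() + ref.sum())

def true_detection(auto, ref):
    # False-positive and false-negative fractions normalised by the
    # reference volume (an assumed convention).
    auto, ref = auto.astype(bool), ref.astype(bool)
    fp = (auto & ~ref).sum() / ref.sum()
    fn = (~auto & ref).sum() / ref.sum()
    return 1.0 - (fp + fn) / 2.0
```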
Novel approach for image skeleton and distance transformation parallel algorithms
NASA Astrophysics Data System (ADS)
Qing, Kent P.; Means, Robert W.
1994-05-01
Image Understanding is more important in medical imaging than ever, particularly where real-time automatic inspection, screening and classification systems are installed. Skeleton and distance transformations are among the common operations that extract useful information from binary images and aid in Image Understanding. The distance transformation describes the objects in an image by labeling every pixel in each object with the distance to its nearest boundary. The skeleton algorithm starts from the distance transformation and finds the set of pixels that have a locally maximum label. The distance algorithm has to scan the entire image several times depending on the object width. For each pixel, the algorithm must access the neighboring pixels and find the maximum distance from the nearest boundary. It is a computational and memory access intensive procedure. In this paper, we propose a novel parallel approach to the distance transform and skeleton algorithms using the latest VLSI high- speed convolutional chips such as HNC's ViP. The algorithm speed is dependent on the object's width and takes (k + [(k-1)/3]) * 7 milliseconds for a 512 X 512 image with k being the maximum distance of the largest object. All objects in the image will be skeletonized at the same time in parallel.
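For comparison with the chip-parallel scheme, both operations can be reproduced serially in a few lines with SciPy: an exact Euclidean distance transform followed by a simple skeleton taken as the locally maximal distance labels, mirroring the definitions above but not the ViP convolutional implementation.

```python
import numpy as np
from scipy import ndimage

obj = np.zeros((64, 64), bool)
obj[16:48, 8:56] = True                      # toy rectangular object

dist = ndimage.distance_transform_edt(obj)   # distance to nearest boundary
local_max = ndimage.maximum_filter(dist, size=3) == dist
skeleton = local_max & obj                   # locally maximal labels only
print(dist.max(), skeleton.sum())
```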
Algorithm of composing the schedule of construction and installation works
NASA Astrophysics Data System (ADS)
Nehaj, Rustam; Molotkov, Georgij; Rudchenko, Ivan; Grinev, Anatolij; Sekisov, Aleksandr
2017-10-01
An algorithm for scheduling work is developed in which the priority of a job corresponds to the total weight of its subordinate jobs (the vertices of the graph), and it is proved that for tree-type graphs the algorithm is optimal. An algorithm is synthesized to reduce the search for solutions when drawing up schedules of construction and installation works, by allocating a minimum-size subset containing the optimal solution of the problem, which is determined by the structure of its initial data and numerical values. An algorithm for scheduling construction and installation work is developed that takes into account the schedule for the movement of brigades; it can efficiently minimize the work completion time with respect to the parameters of organizational and technological reliability through the use of the branch-and-bound method. The computational algorithm was implemented in MATLAB 2008. The initial data matrices were filled with random numbers uniformly distributed in the range from 1 to 100. Solving the problem takes 0.5, 2.5, 7.5, and 27 minutes. Thus, the proposed method for estimating the lower bound of the solution is sufficiently accurate and allows efficient solution of the minimax problem of scheduling construction and installation works.
Global Artificial Boundary Conditions for Computation of External Flow Problems with Propulsive Jets
NASA Technical Reports Server (NTRS)
Tsynkov, Semyon; Abarbanel, Saul; Nordstrom, Jan; Ryabenkii, Viktor; Vatsa, Veer
1998-01-01
We propose new global artificial boundary conditions (ABC's) for computation of flows with propulsive jets. The algorithm is based on application of the difference potentials method (DPM). Previously, similar boundary conditions have been implemented for calculation of external compressible viscous flows around finite bodies. The proposed modification substantially extends the applicability range of the DPM-based algorithm. In the paper, we present the general formulation of the problem, describe our numerical methodology, and discuss the corresponding computational results. The particular configuration that we analyze is a slender three-dimensional body with boat-tail geometry and supersonic jet exhaust in a subsonic external flow under zero angle of attack. Similarly to the results obtained earlier for the flows around airfoils and wings, current results for the jet flow case corroborate the superiority of the DPM-based ABC's over standard local methodologies from the standpoints of accuracy, overall numerical performance, and robustness.
A Human-in-the Loop Exploration of the Dynamic Airspace Configuration Concept
NASA Technical Reports Server (NTRS)
Homola, Jeffrey; Lee, Paul U.; Prevot, Thomas; Lee, Hwasoo; Kessell, Angela; Brasil, Connie; Smith, Nancy
2010-01-01
An exploratory human-in-the-loop study was conducted to better understand the impact of Dynamic Airspace Configuration (DAC) on air traffic controllers. To do so, a range of three progressively more aggressive algorithmic approaches to sectorizations were chosen. Sectorizations from these algorithms were used to test and quantify the range of impact on the controller and traffic. Results show that traffic count was more equitably distributed between the four test sectors and duration of counts over MAP were progressively lower as the magnitude of boundary change increased. However, taskload and workload were also shown to increase with the increase in aggressiveness and acceptability of the boundary changes decreased. Overall, simulated operations of the DAC concept did not appear to compromise safety. Feedback from the participants highlighted the importance of limiting some aspects of boundary changes such as amount of volume gained or lost and the extent of change relative to the initial airspace design.
First-Principle Construction of U(1) Symmetric Matrix Product States
NASA Astrophysics Data System (ADS)
Rakov, Mykhailo V.
2018-07-01
The algorithm to calculate the sets of symmetry sectors for virtual indices of U(1) symmetric matrix product states (MPS) is described. The principal differences between open (OBC) and periodic (PBC) boundary conditions are stressed, and the extension of PBC MPS algorithm to projected entangled pair states is outlined.
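For the OBC case, the admissible charge sectors on each virtual bond follow from intersecting the charges reachable from the left vacuum with those that can still complete the fixed total charge. The sketch below follows that standard construction and does not attempt the paper's PBC or PEPS extensions; names and the example are illustrative.

```python
def obc_bond_sectors(phys_charges, n_sites, total_charge):
    # forward pass: charges accumulated from the left vacuum
    left = [{0}]
    for _ in range(n_sites):
        left.append({q + c for q in left[-1] for c in phys_charges})
    # backward pass: charges that can still complete the total charge
    right = [{total_charge}]
    for _ in range(n_sites):
        right.append({q - c for q in right[-1] for c in phys_charges})
    right.reverse()
    # a bond sector must be reachable from both sides
    return [sorted(l & r) for l, r in zip(left, right)]

# spinless fermions (site charge 0 or 1), 4 sites, 2 particles in total
print(obc_bond_sectors([0, 1], 4, 2))
# [[0], [0, 1], [0, 1, 2], [1, 2], [2]]
```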
Calculating Shocks In Flows At Chemical Equilibrium
NASA Technical Reports Server (NTRS)
Eberhardt, Scott; Palmer, Grant
1988-01-01
Boundary conditions prove critical. Conference paper describes algorithm for calculation of shocks in hypersonic flows of gases at chemical equilibrium. Although algorithm represents intermediate stage in development of reliable, accurate computer code for two-dimensional flow, research leading up to it contributes to understanding of what is needed to complete task.
UAS Collision Avoidance Algorithm that Minimizes the Impact on Route Surveillance
2009-03-01
Front matter listed: Appendix A: Collision Avoidance Algorithm/Virtual Cockpit Interface; Appendix B: Collision Cone Boundary Rates; Figure 3-3: Collision Cone Approach in the Vertical Plane (single cone, split cone, multiple intruders) [27].
Comparison of public peak detection algorithms for MALDI mass spectrometry data analysis.
Yang, Chao; He, Zengyou; Yu, Weichuan
2009-01-06
In mass spectrometry (MS) based proteomic data analysis, peak detection is an essential step for subsequent analysis. Recently, there has been significant progress in the development of various peak detection algorithms. However, neither a comprehensive survey nor an experimental comparison of these algorithms is yet available. The main objective of this paper is to provide such a survey and to compare the performance of single spectrum based peak detection methods. In general, we can decompose a peak detection procedure into three consecutive parts: smoothing, baseline correction and peak finding. We first categorize existing peak detection algorithms according to the techniques used in the different phases. Such a categorization reveals the differences and similarities among existing peak detection algorithms. Then, we choose five typical peak detection algorithms to conduct a comprehensive experimental study using both simulation data and real MALDI MS data. The comparison results show that the continuous wavelet-based algorithm provides the best average performance.
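The three-stage decomposition can be illustrated with SciPy on a synthetic spectrum; the smoothing window, the crude percentile baseline, and the wavelet widths below are arbitrary illustrative choices, while the wavelet-based detector that performed best in the comparison is available directly as find_peaks_cwt.

```python
import numpy as np
from scipy.signal import find_peaks_cwt, savgol_filter

mz = np.linspace(1000, 2000, 4000)
spectrum = (np.exp(-0.5 * ((mz - 1200) / 1.5) ** 2)
            + 0.6 * np.exp(-0.5 * ((mz - 1500) / 2.0) ** 2)
            + 0.05 * np.random.default_rng(0).normal(size=mz.size))

smoothed = savgol_filter(spectrum, window_length=21, polyorder=3)  # smoothing
baseline = np.percentile(smoothed, 10)                   # crude baseline estimate
corrected = smoothed - baseline                          # baseline correction
peaks = find_peaks_cwt(corrected, widths=np.arange(2, 20))  # peak finding
print(mz[peaks])
```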
NASA Astrophysics Data System (ADS)
Sayevand, K.; Pichaghchi, K.
2018-04-01
In this paper, we are concerned with the description of singularly perturbed boundary value problems in the scope of fractional calculus. One of the main methods used to solve such problems in classical calculus is the matched asymptotic expansion method. However, this method is not directly applicable under the existing classical definitions of the fractional derivative, because they do not obey the chain rule, which is one of the key elements of the matched asymptotic expansion method. In order to adapt the method to fractional derivatives, we employ a relatively new derivative, the so-called local fractional derivative. Using the properties of the local fractional derivative, we extend the matched asymptotic expansion method to the scope of fractional calculus and introduce a reliable new algorithm for developing approximate solutions of singularly perturbed boundary value problems of fractional order. In the new method, the original problem is partitioned into inner and outer solution equations. The reduced equation is solved with suitable boundary conditions, which provide the terminal boundary conditions for the boundary layer correction. The inner solution problem is then solved as a solvable boundary value problem. The width of the boundary layer is approximated using an appropriate resemblance function. Some theoretical results are established and proved. Several illustrative examples are solved, and the results are compared with those of the matched asymptotic expansion method and the homotopy analysis method to demonstrate the accuracy and efficiency of the method. It can be observed that the proposed method approximates the exact solution very well, not only in the boundary layer but also away from it.
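For readers unfamiliar with the classical method being generalized, the following is a minimal integer-order example of matched asymptotic expansions (a textbook illustration, not the paper's fractional construction):

```latex
% Classical singularly perturbed boundary value problem:
\[
\varepsilon y'' + y' + y = 0,\qquad y(0)=0,\ y(1)=1,\qquad 0<\varepsilon\ll 1 .
\]
% Outer solution (drop the \varepsilon y'' term, keep y(1)=1):
\[
y_{\mathrm{out}}(x) = e^{\,1-x}.
\]
% Inner solution near x=0 with stretched variable X = x/\varepsilon:
\[
Y'' + Y' = 0 \;\Rightarrow\; Y(X) = A + B e^{-X};\qquad
Y(0)=0,\ \lim_{X\to\infty} Y(X) = e \;\Rightarrow\; Y(X) = e\,(1 - e^{-X}).
\]
% Composite expansion (outer + inner - common limit):
\[
y(x) \approx e^{\,1-x} - e^{\,1 - x/\varepsilon}.
\]
```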
A MULTI-STREAM MODEL FOR VERTICAL MIXING OF A PASSIVE TRACER IN THE CONVECTIVE BOUNDARY LAYER
We study a multi-stream model (MSM) for vertical mixing of a passive tracer in the convective boundary layer, in which the tracer is advected by many vertical streams with different probabilities and diffused by small scale turbulence. We test the MSM algorithm for investigatin...
NASA Astrophysics Data System (ADS)
Mobarakeh, Pouyan Shakeri; Grinchenko, Victor T.
2015-06-01
The majority of practical acoustics problems require solving boundary problems in non-canonical domains. The construction of analytical solutions of mathematical physics boundary problems for non-canonical domains is therefore both rewarding from the academic viewpoint and instrumental for the elaboration of efficient algorithms for quantitative estimation of the field characteristics under study. One of the main solution ideologies for such problems is based on the superposition method, which allows one to analyze a wide class of specific problems whose domains can be constructed as the union of canonically shaped subdomains. It is also assumed that an analytical solution (or quasi-solution) can be constructed for each subdomain in one form or another. However, this approach entails some difficulties in the construction of calculation algorithms, insofar as the boundary conditions are incompletely defined on the intervals where the functions appearing in the general solution are orthogonal to each other. We discuss several typical examples of problems with such difficulties, study their nature, and identify the optimal methods for overcoming them.
Yi, Chucai; Tian, Yingli
2012-09-01
In this paper, we propose a novel framework to extract text regions from scene images with complex backgrounds and multiple text appearances. This framework consists of three main steps: boundary clustering (BC), stroke segmentation, and string fragment classification. In BC, we propose a new bigram-color-uniformity-based method to model both text and attachment surface, and cluster edge pixels based on color pairs and spatial positions into boundary layers. Then, stroke segmentation is performed at each boundary layer by color assignment to extract character candidates. We propose two algorithms to combine the structural analysis of text stroke with color assignment and filter out background interferences. Further, we design a robust string fragment classification based on Gabor-based text features. The features are obtained from feature maps of gradient, stroke distribution, and stroke width. The proposed framework of text localization is evaluated on scene images, born-digital images, broadcast video images, and images of handheld objects captured by blind persons. Experimental results on respective datasets demonstrate that the framework outperforms state-of-the-art localization algorithms.
Reconstructing cortical current density by exploring sparseness in the transform domain
NASA Astrophysics Data System (ADS)
Ding, Lei
2009-05-01
In the present study, we have developed a novel electromagnetic source imaging approach to reconstruct extended cortical sources by means of cortical current density (CCD) modeling and a novel EEG imaging algorithm which explores sparseness in cortical source representations through the use of L1-norm in objective functions. The new sparse cortical current density (SCCD) imaging algorithm is unique since it reconstructs cortical sources by attaining sparseness in a transform domain (the variation map of cortical source distributions). While large variations are expected to occur along boundaries (sparseness) between active and inactive cortical regions, cortical sources can be reconstructed and their spatial extents can be estimated by locating these boundaries. We studied the SCCD algorithm using numerous simulations to investigate its capability in reconstructing cortical sources with different extents and in reconstructing multiple cortical sources with different extent contrasts. The SCCD algorithm was compared with two L2-norm solutions, i.e. weighted minimum norm estimate (wMNE) and cortical LORETA. Our simulation data from the comparison study show that the proposed sparse source imaging algorithm is able to accurately and efficiently recover extended cortical sources and is promising to provide high-accuracy estimation of cortical source extents.
Validating an Air Traffic Management Concept of Operation Using Statistical Modeling
NASA Technical Reports Server (NTRS)
He, Yuning; Davies, Misty Dawn
2013-01-01
Validating a concept of operation for a complex, safety-critical system (like the National Airspace System) is challenging because of the high dimensionality of the controllable parameters and the infinite number of states of the system. In this paper, we use statistical modeling techniques to explore the behavior of a conflict detection and resolution algorithm designed for the terminal airspace. These techniques predict the robustness of the system simulation to both nominal and off-nominal behaviors within the overall airspace. They can also be used to evaluate the output of the simulation against recorded airspace data. Additionally, the techniques carry with them a mathematical measure of the worth of each prediction: a statistical uncertainty for any robustness estimate. Uncertainty Quantification (UQ) is the process of quantitative characterization and, ultimately, reduction of uncertainties in complex systems. UQ is important for understanding the influence of uncertainties on the behavior of a system and is therefore valuable for design, analysis, and verification and validation. In this paper, we apply advanced statistical modeling methodologies and techniques to an advanced air traffic management system, namely the Terminal Tactical Separation Assured Flight Environment (T-TSAFE). We show initial results for a parameter analysis and for safety boundary (envelope) detection in the high-dimensional parameter space. For our boundary analysis, we developed a new sequential approach based upon the design of computer experiments, allowing us to incorporate knowledge from domain experts into our modeling and to determine the most likely boundary shapes and their parameters. We carried out the analysis on system parameters and describe an initial approach that will allow us to include time-series inputs, such as radar track data, into the analysis.
NASA Technical Reports Server (NTRS)
Cacio, Emanuela; Cohn, Stephen E.; Spigler, Renato
2011-01-01
A numerical method is devised to solve a class of linear boundary-value problems for one-dimensional parabolic equations degenerate at the boundaries. Feller theory, which classifies the nature of the boundary points, is used to decide whether boundary conditions are needed to ensure uniqueness, and, if so, which ones they are. The algorithm is based on a suitable preconditioned implicit finite-difference scheme, grid, and treatment of the boundary data. Second-order accuracy, unconditional stability, and unconditional convergence of solutions of the finite-difference scheme to a constant as the time-step index tends to infinity are further properties of the method. Several examples, pertaining to financial mathematics, physics, and genetics, are presented for the purpose of illustration.
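As a rough illustration of the kind of problem treated (not the authors' preconditioned scheme), the sketch below applies a plain backward-Euler finite-difference step to a Wright-Fisher-type equation u_t = x(1-x)u_xx that degenerates at both endpoints. Because the diffusion coefficient vanishes there, the PDE reduces to u_t = 0 at the endpoints and their values are simply frozen rather than given boundary conditions, consistent in spirit with the Feller classification the paper relies on. Grid sizes and initial data are made up.

```python
# Backward-Euler sketch for the degenerate parabolic equation
# u_t = x(1-x) u_xx on (0,1); endpoint values are frozen because the
# diffusion coefficient vanishes there. Illustrative parameters only.
import numpy as np
from scipy.linalg import solve_banded

n, dt, steps = 101, 1e-3, 200
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u = np.sin(np.pi * x)              # arbitrary example initial data

a = x[1:-1] * (1.0 - x[1:-1])      # degenerate diffusion coefficient
r = dt * a / h**2

# Tridiagonal system (I - dt*A) u^{n+1} = u^n in banded storage.
ab = np.zeros((3, n - 2))
ab[0, 1:] = -r[:-1]                # superdiagonal
ab[1, :] = 1.0 + 2.0 * r           # main diagonal
ab[2, :-1] = -r[1:]                # subdiagonal

for _ in range(steps):
    rhs = u[1:-1].copy()
    rhs[0] += r[0] * u[0]          # frozen endpoint values enter the RHS
    rhs[-1] += r[-1] * u[-1]
    u[1:-1] = solve_banded((1, 1), ab, rhs)

print(u[::20])                     # coarse sample of the solution
```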
NASA Astrophysics Data System (ADS)
Bal, A.; Alam, M. S.; Aslan, M. S.
2006-05-01
Often sensor ego-motion or fast target movement causes the target to temporarily leave the field of view, leading to the reappearing-target detection problem in target tracking applications. Since the target leaves the current frame and reenters in a later frame, the reentry location and the variations in rotation, scale, and other 3D orientations of the target are not known, which complicates detection. A detection algorithm has been developed using the Fukunaga-Koontz Transform (FKT) and a distance classifier correlation filter (DCCF). The detection algorithm uses target and background information, extracted from training samples, to detect possible candidate target images. The detected candidate target images are then introduced into the second algorithm, the DCCF-based clutter rejection module, to determine the target coordinates; once the coordinates are detected, the tracking algorithm is initiated. The performance of the proposed FKT-DCCF based target detection algorithm has been tested using real-world forward-looking infrared (FLIR) video sequences.
Adaboost multi-view face detection based on YCgCr skin color model
NASA Astrophysics Data System (ADS)
Lan, Qi; Xu, Zhiyong
2016-09-01
The traditional Adaboost face detection algorithm uses Haar-like features to train face classifiers, whose detection error rate is low within face regions. Under complex backgrounds, however, the classifiers easily misclassify background regions whose gray-level distribution resembles that of a face, so the error rate of the traditional Adaboost algorithm is high. As one of the most important features of a face, skin color clusters well in the YCgCr color space, so non-face areas can be quickly excluded using a skin color model. Combining the advantages of the Adaboost algorithm and skin color detection, this paper therefore proposes an Adaboost face detection method based on a YCgCr skin color model. Experiments show that, compared with the traditional algorithm, the proposed method significantly improves detection accuracy and reduces false detections.
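The two-stage idea (gate candidate regions with a skin-color mask, then run a conventional cascade detector) can be sketched as follows. OpenCV exposes YCrCb rather than the paper's YCgCr space, so YCrCb is used here as a stand-in, and the threshold ranges are common literature values, not the paper's model.

```python
# Hedged sketch: skin-color gating (YCrCb stand-in for YCgCr) followed
# by a stock Haar cascade face detector.
import cv2
import numpy as np

def detect_faces_skin_gated(bgr):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    # Widely used YCrCb skin thresholds (assumed, not from the paper).
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY))
    # Keep only detections whose window is mostly skin-colored.
    kept = []
    for (x, y, w, h) in faces:
        if mask[y:y + h, x:x + w].mean() > 0.4 * 255:
            kept.append((x, y, w, h))
    return kept
```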
NASA Astrophysics Data System (ADS)
Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan
2018-03-01
False alarm rate and detection rate are still two contradictory metrics for infrared small target detection in an infrared search and track (IRST) system, despite the development of new detection algorithms. In certain circumstances, not detecting true targets is more tolerable than detecting false items as true targets. Hence, considering background clutter and detector noise as the sources of false alarms in an IRST system, this paper presents a false-alarm-aware methodology that reduces the false alarm rate while leaving the detection rate undegraded. To this end, the advantages and disadvantages of each detection algorithm are investigated and the sources of its false alarms are determined. Two target detection algorithms having independent false alarm sources are chosen so that the disadvantages of one algorithm can be compensated by the advantages of the other. In this work, the multi-scale average absolute gray difference (AAGD) and the Laplacian of point spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed through just four different scales, the desired algorithm has good capability for real-time implementation. Simulation results in terms of signal-to-clutter ratio and background suppression factor on real and simulated images prove the effectiveness and performance of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, the proposed methodology is expandable to any pair of detection algorithms that have different false alarm sources.
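As a rough sketch of one of the two building blocks, the snippet below implements a multi-scale inner-window-versus-ring gray-difference filter in the spirit of AAGD. This is our reading of the idea, not the authors' exact formulation (the four scale sizes, the ring geometry, and the max-over-scales fusion are assumptions), and the LoPSF branch and the false-alarm-aware fusion are omitted.

```python
# Multi-scale inner-vs-ring gray-difference response for small targets.
# All structural choices here are assumptions for illustration.
import numpy as np
from scipy.ndimage import uniform_filter

def aagd_like_response(img, scales=(3, 5, 7, 9)):
    img = img.astype(np.float64)
    response = np.zeros_like(img)
    for k in scales:
        inner = uniform_filter(img, size=k)
        outer_box = uniform_filter(img, size=3 * k)
        # Ring mean = (outer box sum - inner sum) / ring area.
        ring = ((3 * k) ** 2 * outer_box - k ** 2 * inner) / ((3 * k) ** 2 - k ** 2)
        diff = np.abs(inner - ring)          # local contrast against surround
        response = np.maximum(response, diff)  # fuse scales by maximum
    return response
```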
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGhee, J.M.; Roberts, R.M.; Morel, J.E.
1997-06-01
A spherical harmonics research code (DANTE) has been developed which is compatible with parallel computer architectures. DANTE provides 3-D, multi-material, deterministic transport capabilities using an arbitrary finite element mesh. The linearized Boltzmann transport equation is solved in a second-order self-adjoint form utilizing a Galerkin finite element spatial differencing scheme. The core solver utilizes a preconditioned conjugate gradient algorithm. Other distinguishing features of the code include options for discrete-ordinates and simplified spherical harmonics angular differencing, an exact Marshak boundary treatment for arbitrarily oriented boundary faces, in-line matrix construction techniques to minimize memory consumption, and an effective diffusion-based preconditioner for scattering-dominated problems. Algorithm efficiency is demonstrated for a massively parallel SIMD architecture (CM-5), and compatibility with MPP multiprocessor platforms or workstation clusters is anticipated.
Application of Three Existing Stope Boundary Optimisation Methods in an Operating Underground Mine
NASA Astrophysics Data System (ADS)
Erdogan, Gamze; Yavuz, Mahmut
2017-12-01
The underground mine planning and design optimisation process has received little attention because of the complexity and variability of problems in underground mines. Although a number of optimisation studies and software tools are available, and some have been implemented effectively to determine ultimate pit limits in open pit mines, there is still a lack of studies on the optimisation of ultimate stope boundaries in underground mines. The proposed approaches for this purpose aim to maximize economic profit by selecting the best possible layout under operational, technical and physical constraints. In this paper, three existing heuristic techniques, the Floating Stope Algorithm, the Maximum Value Algorithm and the Mineable Shape Optimiser (MSO), are examined for optimisation of the stope layout in a case study. Each technique is assessed in terms of applicability, algorithm capabilities and limitations, considering the underground mine planning challenges. Finally, the results are evaluated and compared.
NASA Astrophysics Data System (ADS)
Diaz, Kristians; Castaneda, Benjamin
2008-03-01
This paper presents a semi-automated algorithm for prostate boundary segmentation from three-dimensional (3D) ultrasound (US) images. The US volume is sampled into 72 slices which pass through the center of the prostate gland and are separated by a uniform angular spacing of 2.5 degrees. The approach requires the user to select four points from slices (at 0, 45, 90 and 135 degrees) which are used to initialize a discrete dynamic contour (DDC) algorithm. Four Support Vector Machines (SVMs) are trained on the output of the DDC and classify the remaining slices. The output of the SVMs is refined using binary morphological operations and the DDC to produce the final result. The algorithm was tested on seven ex vivo 3D US images of prostate glands embedded in an agar mold. Results show good agreement with manual segmentation.
Electromagnetic MUSIC-type imaging of perfectly conducting, arc-like cracks at single frequency
NASA Astrophysics Data System (ADS)
Park, Won-Kwang; Lesselier, Dominique
2009-11-01
We propose a non-iterative MUSIC (MUltiple SIgnal Classification)-type algorithm for the time-harmonic electromagnetic imaging of one or more perfectly conducting, arc-like cracks found within a homogeneous space R2. The algorithm is based on a factorization of the Multi-Static Response (MSR) matrix collected in the far field at a single, nonzero frequency in either the Transverse Magnetic (TM) mode (Dirichlet boundary condition) or the Transverse Electric (TE) mode (Neumann boundary condition), followed by the calculation of a MUSIC cost functional expected to exhibit peaks along the crack curves at spacings of half a wavelength. Numerical experimentation with exact, noiseless and noisy data shows that this is indeed the case and that the proposed algorithm behaves in a robust manner, with better results in the TM mode than in the TE mode, for which one would have to estimate the normal to the crack to obtain optimal results.
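The imaging functional itself is compact enough to sketch. Below is a generic MUSIC image routine: SVD the MSR matrix, keep the noise subspace, and plot the reciprocal norm of the projection of a far-field test vector onto it. The test-vector form, the grid, and the signal-space dimension n_signal are assumptions for illustration.

```python
# Generic MUSIC-type imaging functional from a multi-static response
# (MSR) matrix; array geometry, frequency, and n_signal are assumed.
import numpy as np

def music_image(msr, directions, k, grid, n_signal):
    # Noise subspace from the SVD of the MSR matrix.
    _, _, vh = np.linalg.svd(msr)
    noise = vh[n_signal:].conj().T               # orthonormal noise basis
    image = np.zeros(len(grid))
    for i, z in enumerate(grid):
        # Standard 2-D far-field test vector exp(i k d_j . z).
        g = np.exp(1j * k * directions @ z)
        g = g / np.linalg.norm(g)
        # MUSIC functional: peaks where g is orthogonal to noise space.
        image[i] = 1.0 / np.linalg.norm(noise.conj().T @ g)
    return image
```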
Automatic Data Filter Customization Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Mandrake, Lukas
2013-01-01
This work predicts whether a retrieval algorithm will usefully determine CO2 concentration from an input spectrum of GOSAT (Greenhouse Gases Observing Satellite). This was done to eliminate needless runtime on atmospheric soundings that would never yield useful results. A space of 50 dimensions was examined for predictive power on the final CO2 results. Retrieval algorithms are frequently expensive to run, and wasted effort defeats requirements and expends needless resources. This algorithm could be used to help predict and filter unneeded runs in any computationally expensive regime. Traditional methods such as Fisher discriminant analysis and decision trees can attempt to predict whether a sounding will be properly processed. However, this work sought to detect a subsection of the dimensional space that can simply be filtered out to eliminate unwanted runs. LDAs (linear discriminant analyses) and other systems examine the entire dataset and judge a "best fit," giving equal weight to complex and problematic regions as well as simple, clear-cut regions. In this implementation, a genetic space of "left" and "right" thresholds, outside of which all data are rejected, was defined. These left/right pairs are created for each of the 50 input dimensions. A genetic algorithm then runs through countless potential filter settings using a JPL computer cluster, optimizing the yield of the tossed-out data (proper vs. improper run removal) and the number of points tossed. This solution is robust to an arbitrary decision boundary within the data and avoids the global optimization problem of whole-dataset fitting using LDA or decision trees. It filters out runs that would not have produced useful CO2 values, saving needless computation. This would be an algorithmic preprocessing improvement to any computationally expensive system.
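A toy version of the genetic threshold filter makes the chromosome layout concrete: each individual is a (left, right) interval per input dimension, and any sounding falling outside any interval is rejected. The population size, selection scheme (truncation plus mutation, no crossover), and fitness weighting are all illustrative assumptions, and the stand-in data replaces the real GOSAT features.

```python
# Toy genetic threshold filter: evolve per-dimension (left, right)
# acceptance intervals. All parameters and data are stand-ins.
import numpy as np

rng = np.random.default_rng(0)
DIMS = 50
X = rng.normal(size=(2000, DIMS))             # stand-in sounding features
good = rng.random(2000) > 0.3                 # stand-in run outcomes

def fitness(ind):
    lo, hi = ind[:, 0], ind[:, 1]
    accepted = np.all((X >= lo) & (X <= hi), axis=1)
    # Reward rejecting bad runs, penalize discarding good ones.
    return np.sum(~accepted & ~good) - 2.0 * np.sum(~accepted & good)

pop = np.stack([np.stack([rng.uniform(-3, 0, DIMS),
                          rng.uniform(0, 3, DIMS)], axis=1)
                for _ in range(40)])
for gen in range(50):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]   # truncation selection
    children = parents[rng.integers(0, 20, 40)].copy()
    mutate = rng.random(children.shape) < 0.05  # per-gene mutation
    children[mutate] += rng.normal(0, 0.2, mutate.sum())
    pop = children
print(max(fitness(ind) for ind in pop))
```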
A scalable approach for tree segmentation within small-footprint airborne LiDAR data
NASA Astrophysics Data System (ADS)
Hamraz, Hamid; Contreras, Marco A.; Zhang, Jun
2017-05-01
This paper presents a distributed approach that scales up to segment tree crowns within a LiDAR point cloud representing an arbitrarily large forested area. The approach uses a single-processor tree segmentation algorithm as a building block in order to process the data delivered in the shape of tiles in parallel. The distributed processing is performed in a master-slave manner, in which the master maintains the global map of the tiles and coordinates the slaves that segment tree crowns within and across the boundaries of the tiles. A minimal bias was introduced to the number of detected trees because of trees lying across the tile boundaries, which was quantified and adjusted for. Theoretical and experimental analyses of the runtime of the approach revealed a near linear speedup. The estimated number of trees categorized by crown class and the associated error margins as well as the height distribution of the detected trees aligned well with field estimations, verifying that the distributed approach works correctly. The approach enables providing information of individual tree locations and point cloud segments for a forest-level area in a timely manner, which can be used to create detailed remotely sensed forest inventories. Although the approach was presented for tree segmentation within LiDAR point clouds, the idea can also be generalized to scale up processing other big spatial datasets.
Moving charged particles in lattice Boltzmann-based electrokinetics
NASA Astrophysics Data System (ADS)
Kuron, Michael; Rempfer, Georg; Schornbaum, Florian; Bauer, Martin; Godenschwager, Christian; Holm, Christian; de Graaf, Joost
2016-12-01
The motion of ionic solutes and charged particles under the influence of an electric field and the ensuing hydrodynamic flow of the underlying solvent is ubiquitous in aqueous colloidal suspensions. The physics of such systems is described by a coupled set of differential equations, along with boundary conditions, collectively referred to as the electrokinetic equations. Capuani et al. [J. Chem. Phys. 121, 973 (2004)] introduced a lattice-based method for solving this system of equations, which builds upon the lattice Boltzmann algorithm for the simulation of hydrodynamic flow and exploits computational locality. However, thus far, a description of how to incorporate moving boundary conditions into the Capuani scheme has been lacking. Moving boundary conditions are needed to simulate multiple arbitrarily moving colloids. In this paper, we detail how to introduce such a particle coupling scheme, based on an analogue to the moving boundary method for the pure lattice Boltzmann solver. The key ingredients in our method are mass and charge conservation for the solute species and a partial-volume smoothing of the solute fluxes to minimize discretization artifacts. We demonstrate our algorithm's effectiveness by simulating the electrophoresis of charged spheres in an external field; for a single sphere we compare to the equivalent electro-osmotic (co-moving) problem. Our method's efficiency and ease of implementation should prove beneficial to future simulations of the dynamics in a wide range of complex nanoscopic and colloidal systems that were previously inaccessible to lattice-based continuum algorithms.
The asymptotic spectra of banded Toeplitz and quasi-Toeplitz matrices
NASA Technical Reports Server (NTRS)
Beam, Richard M.; Warming, Robert F.
1991-01-01
Toeplitz matrices occur in many mathematical as well as scientific and engineering investigations. This paper considers the spectra of banded Toeplitz and quasi-Toeplitz matrices, with emphasis on non-normal matrices of arbitrarily large order and relatively small bandwidth. These are the type of matrices that appear in the investigation of stability and convergence of difference approximations to partial differential equations. Quasi-Toeplitz matrices result from non-Dirichlet boundary conditions for the difference approximations. The eigenvalue problem for a banded Toeplitz or quasi-Toeplitz matrix of large order is, in general, analytically intractable and (for non-normal matrices) numerically unreliable. An asymptotic (matrix order approaches infinity) approach partitions the eigenvalue analysis of a quasi-Toeplitz matrix into two parts, namely the analysis for the boundary condition independent spectrum and the analysis for the boundary condition dependent spectrum. The boundary condition independent spectrum is the same as the pure Toeplitz matrix spectrum. Algorithms for computing both parts of the spectrum are presented. Examples are used to demonstrate the utility of the algorithms, to present some interesting spectra, and to point out some of the numerical difficulties encountered when conventional matrix eigenvalue routines are employed for non-normal matrices of large order. The analysis for the Toeplitz spectrum also leads to a diagonal similarity transformation that improves conventional numerical eigenvalue computations. Finally, the algorithm for the asymptotic spectrum is extended to the Toeplitz generalized eigenvalue problem which occurs, for example, in the stability of Pade type difference approximations to differential equations.
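The non-normal behavior the paper analyzes is easy to reproduce numerically. The sketch below builds a tridiagonal (banded) Toeplitz matrix with unequal off-diagonals and compares its computed eigenvalues with the symbol curve; the stencil values are arbitrary.

```python
# Eigenvalues of a non-normal banded Toeplitz matrix vs. its symbol curve.
import numpy as np
from scipy.linalg import toeplitz, eigvals

n = 200
c = np.zeros(n); r = np.zeros(n)
c[1] = 1.0                    # subdiagonal coefficient c_{-1}
r[1] = 0.25                   # superdiagonal coefficient c_{+1}
T = toeplitz(c, r)            # non-normal since c[1] != r[1]
theta = np.linspace(0, 2 * np.pi, 400)
symbol = c[1] * np.exp(-1j * theta) + r[1] * np.exp(1j * theta)  # an ellipse
ev = eigvals(T)
# For this tridiagonal case the exact eigenvalues are
# 2*sqrt(c[1]*r[1])*cos(k*pi/(n+1)): a real segment strictly inside the
# ellipse traced by the symbol, not on it.
print("eigenvalue range:", ev.real.min(), ev.real.max())
print("symbol max modulus:", np.abs(symbol).max())
```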
Jeppesen, J; Beniczky, S; Fuglsang Frederiksen, A; Sidenius, P; Johansen, P
2017-07-01
Earlier studies have shown that short-term heart rate variability (HRV) analysis of the ECG seems promising for detection of epileptic seizures. A precise and accurate automatic R-peak detection algorithm is a necessity for real-time, continuous measurement of HRV in a portable ECG device. We used the portable CE-marked ePatch® heart monitor to record the ECG of 14 patients who were enrolled in the video-EEG long-term monitoring unit for clinical workup of epilepsy. Recordings of the first 7 patients were used as the training set for the R-peak detection algorithm, and the recordings of the last 7 patients (467.6 recording hours) were used to test the performance of the algorithm. We aimed to modify an existing QRS-detection algorithm into a more precise R-peak detection algorithm to avoid the jitter that Q- and S-peaks can create in the tachogram, which causes errors in short-term HRV analysis. The proposed R-peak detection algorithm showed a high sensitivity (Se = 99.979%) and positive predictive value (P+ = 99.976%), which was comparable with a previously published QRS-detection algorithm for the ePatch® ECG device when tested on the same dataset. The novel R-peak detection algorithm, designed to avoid jitter, has very high sensitivity and specificity and thus is a suitable tool for robust, fast, real-time HRV analysis in patients with epilepsy, creating the possibility of real-time seizure detection for these patients.
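The jitter issue motivating the modification can be sketched generically: detect coarse QRS candidates on a filtered, squared signal, then refine each candidate to the local maximum of the raw ECG so the fiducial lands on the R-wave rather than on Q or S. The filter band, refractory distance, and window width below are common textbook choices, not the ePatch algorithm.

```python
# Generic QRS detection with R-peak refinement to avoid Q/S jitter.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_r_peaks(ecg, fs):
    # Band-pass around the QRS energy band (common 5-15 Hz choice).
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    energy = filtered ** 2
    # Coarse QRS candidates with a 250 ms refractory distance.
    cand, _ = find_peaks(energy, distance=int(0.25 * fs),
                         height=np.mean(energy))
    # Refine each candidate to the raw-signal maximum in a +/-50 ms window.
    w = int(0.05 * fs)
    r_peaks = [np.argmax(ecg[max(0, c - w):c + w]) + max(0, c - w)
               for c in cand]
    return np.array(r_peaks)

# RR intervals in seconds form the tachogram used for HRV analysis.
rr_intervals = lambda r_peaks, fs: np.diff(r_peaks) / fs
```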
Chan, Eugene; Rose, L R Francis; Wang, Chun H
2015-05-01
Existing damage imaging algorithms for detecting and quantifying structural defects, particularly those based on diffraction tomography, assume far-field conditions for the scattered field data. This paper presents a major extension of diffraction tomography that can overcome this limitation and utilises a near-field multi-static data matrix as the input data. This new algorithm, which employs numerical solutions of the dynamic Green's functions, makes it possible to quantitatively image laminar damage even in complex structures for which the dynamic Green's functions are not available analytically. To validate this new method, the numerical Green's functions and the multi-static data matrix for laminar damage in flat and stiffened isotropic plates are first determined using finite element models. Next, these results are time-gated to remove boundary reflections, followed by discrete Fourier transform to obtain the amplitude and phase information for both the baseline (damage-free) and the scattered wave fields. Using these computationally generated results and experimental verification, it is shown that the new imaging algorithm is capable of accurately determining the damage geometry, size and severity for a variety of damage sizes and shapes, including multi-site damage. Some aspects of minimal sensors requirement pertinent to image quality and practical implementation are also briefly discussed.
Changes to the COS Extraction Algorithm for Lifetime Position 3
NASA Astrophysics Data System (ADS)
Proffitt, Charles R.; Bostroem, K. Azalee; Ely, Justin; Foster, Deatrick; Hernandez, Svea; Hodge, Philip; Jedrzejewski, Robert I.; Lockwood, Sean A.; Massa, Derck; Peeples, Molly S.; Oliveira, Cristina M.; Penton, Steven V.; Plesha, Rachel; Roman-Duval, Julia; Sana, Hugues; Sahnow, David J.; Sonnentrucker, Paule; Taylor, Joanna M.
2015-09-01
The COS FUV Detector Lifetime Position 3 (LP3) has been placed only 2.5" below the original lifetime position (LP1). This is sufficiently close to gain-sagged regions at LP1 that a revised extraction algorithm is needed to ensure good spectral quality. We provide an overview of this new "TWOZONE" extraction algorithm, discuss its strengths and limitations, describe new output columns in the X1D files that show the boundaries of the new extraction regions, and provide some advice on how to manually tune the algorithm for specialized applications.
NASA Astrophysics Data System (ADS)
Ghulam Saber, Md; Arif Shahriar, Kh; Ahmed, Ashik; Hasan Sagor, Rakibul
2016-10-01
Particle swarm optimization (PSO) and invasive weed optimization (IWO) algorithms are used for extracting the modeling parameters of materials useful to the optics and photonics research community. To the best of our knowledge, these two bio-inspired algorithms are used here for the first time in this particular field. The algorithms are applied to the modeling of graphene oxide, and their performances are compared. Two objective functions are used for different boundary values. The root mean square (RMS) deviation is determined and compared.
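For readers unfamiliar with PSO, a minimal swarm loop of the kind used for such parameter extraction is sketched below; the inertia and acceleration constants are conventional defaults, and the objective is a placeholder, not the graphene oxide dispersion model.

```python
# Minimal particle swarm optimizer: particles move under inertia plus
# pulls toward personal and global bests, within boundary values.
import numpy as np

def pso(objective, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(1)
    lo, hi = bounds
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)            # respect the boundary values
        val = np.array([objective(p) for p in x])
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[np.argmin(pbest_val)]
    return g, pbest_val.min()

# Example: fit two parameters of a placeholder model via RMS deviation.
target = np.array([2.0, -0.5])
obj = lambda p: np.sqrt(np.mean((p - target) ** 2))
print(pso(obj, (np.array([-5., -5.]), np.array([5., 5.]))))
```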
Rooijakkers, Michiel; Rabotti, Chiara; Bennebroek, Martijn; van Meerbergen, Jef; Mischi, Massimo
2011-01-01
Non-invasive fetal health monitoring during pregnancy has become increasingly important. Recent advances in signal processing technology have enabled fetal monitoring during pregnancy, using abdominal ECG recordings. Ubiquitous ambulatory monitoring for continuous fetal health measurement is however still unfeasible due to the computational complexity of noise robust solutions. In this paper an ECG R-peak detection algorithm for ambulatory R-peak detection is proposed, as part of a fetal ECG detection algorithm. The proposed algorithm is optimized to reduce computational complexity, while increasing the R-peak detection quality compared to existing R-peak detection schemes. Validation of the algorithm is performed on two manually annotated datasets, the MIT/BIH Arrhythmia database and an in-house abdominal database. Both R-peak detection quality and computational complexity are compared to state-of-the-art algorithms as described in the literature. With a detection error rate of 0.22% and 0.12% on the MIT/BIH Arrhythmia and in-house databases, respectively, the quality of the proposed algorithm is comparable to the best state-of-the-art algorithms, at a reduced computational complexity.
NASA Technical Reports Server (NTRS)
Werner, Frank; Wind, Galina; Zhang, Zhibo; Platnick, Steven; Di Girolamo, Larry; Zhao, Guangyu; Amarasinghe, Nandana; Meyer, Kerry
2016-01-01
A research-level retrieval algorithm for cloud optical and microphysical properties is developed for the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) aboard the Terra satellite. It is based on the operational MODIS algorithm. This paper documents the technical details of this algorithm and evaluates the retrievals for selected marine boundary layer cloud scenes through comparisons with the operational MODIS Data Collection 6 (C6) cloud product. The newly developed, ASTER-specific cloud masking algorithm is evaluated through comparison with an independent algorithm reported in Zhao and Di Girolamo (2006). To validate and evaluate the cloud optical thickness (τ) and cloud effective radius (r_eff) from ASTER, the high-spatial-resolution ASTER observations are first aggregated to the same 1000 m resolution as MODIS. Subsequently, τ_aA and r_eff,aA retrieved from the aggregated ASTER radiances are compared with the collocated MODIS retrievals. For overcast pixels, the two data sets agree very well, with Pearson's product-moment correlation coefficients of R greater than 0.970. However, for partially cloudy pixels there are significant differences between r_eff,aA and the MODIS results, which can exceed 10 micrometers. Moreover, it is shown that the numerous delicate cloud structures in the example marine boundary layer scenes, resolved by the high-resolution ASTER retrievals, are smoothed by the MODIS observations. The overall good agreement between the research-level ASTER results and the operational MODIS C6 products proves the feasibility of MODIS-like retrievals from ASTER reflectance measurements and provides the basis for future studies concerning the scale dependency of satellite observations and three-dimensional radiative effects.
NASA Astrophysics Data System (ADS)
Joshi, Neha; Mitchard, Edward TA; Woo, Natalia; Torres, Jorge; Moll-Rocek, Julian; Ehammer, Andrea; Collins, Murray; Jepsen, Martin R.; Fensholt, Rasmus
2015-03-01
Mapping anthropogenic forest disturbances has largely been focused on distinct delineations of events of deforestation using optical satellite images. In the tropics, frequent cloud cover and the challenge of quantifying forest degradation remain problematic. In this study, we detect processes of deforestation, forest degradation and successional dynamics, using long-wavelength radar (L-band from ALOS PALSAR) backscatter. We present a detection algorithm that allows for repeated disturbances on the same land, and identifies areas with slow- and fast-recovering changes in backscatter in close spatial and temporal proximity. In the study area in Madre de Dios, Peru, 2.3% of land was found to be disturbed over three years, with a false positive rate of 0.3% of area. A low, but significant, detection rate of degradation from sparse and small-scale selective logging was achieved. Disturbances were most common along the tri-national Interoceanic Highway, as well as in mining areas and areas under no land use allocation. A continuous spatial gradient of disturbance was observed, highlighting artefacts arising from imposing discrete boundaries on deforestation events. The magnitude of initial radar backscatter, and backscatter decrease, suggested that large-scale deforestation was likely in areas with initially low biomass, either naturally or since already under anthropogenic use. Further, backscatter increases following disturbance suggested that radar can be used to characterize successional disturbance dynamics, such as biomass accumulation in lands post-abandonment. The presented radar-based detection algorithm is spatially and temporally scalable, and can support monitoring degradation and deforestation in tropical rainforests with the use of products from ALOS-2 and the future SAOCOM and BIOMASS missions.
Zhang, Yanju; Lameijer, Eric-Wubbo; 't Hoen, Peter A. C.; Ning, Zemin; Slagboom, P. Eline; Ye, Kai
2012-01-01
Motivation: RNA-seq is a powerful technology for the study of transcriptome profiles that uses deep-sequencing technologies. Moreover, it may be used for cellular phenotyping and help establish the etiology of diseases characterized by abnormal splicing patterns. In RNA-Seq, the exact nature of splicing events is buried in the reads that span exon-exon boundaries. The accurate and efficient mapping of these reads to the reference genome is a major challenge. Results: We developed PASSion, a pattern growth algorithm-based pipeline for splice site detection in paired-end RNA-Seq reads. Comparing the performance of PASSion to three existing RNA-Seq analysis pipelines, TopHat, MapSplice and HMMSplicer, revealed that PASSion is competitive with these packages. Moreover, the performance of PASSion is not affected by read length or coverage. It performs better than the other three approaches when detecting junctions in highly abundant transcripts. PASSion has the ability to detect junctions that do not have known splicing motifs, which cannot be found by the other tools. On the two public RNA-Seq datasets, PASSion predicted ∼137 000 and 173 000 splicing events, of which on average 82% are known junctions annotated in the Ensembl transcript database and 18% are novel. In addition, our package can discover differential and shared splicing patterns among multiple samples. Availability: The code and utilities can be freely downloaded from https://trac.nbic.nl/passion and ftp://ftp.sanger.ac.uk/pub/zn1/passion Contact: y.zhang@lumc.nl; k.ye@lumc.nl Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22219203
Fast Drawing of Traffic Sign Using Mobile Mapping System
NASA Astrophysics Data System (ADS)
Yao, Q.; Tan, B.; Huang, Y.
2016-06-01
Traffic signs provide road users with specified instructions and information to enhance traffic safety. Automatic detection of traffic signs is important for navigation, autonomous driving, transportation asset management, etc. With the advance of laser and imaging sensors, Mobile Mapping Systems (MMS) have become widely used by transportation agencies to map the transportation infrastructure. Although many traffic sign detection algorithms have been developed in the literature, they still trade off detection speed against accuracy, especially for large-scale mobile mapping of both rural and urban roads. This paper is motivated by the need to efficiently survey traffic signs while mapping the road network and the roadside landscape. Inspired by the manual delineation of traffic signs, a drawing strategy is proposed to quickly approximate the boundary of a traffic sign. Both the shape and color priors of the traffic sign are involved simultaneously during the drawing process. The most common speed-limit sign circle and a statistical color model of traffic signs are studied in this paper. Anchor points on the traffic sign edge are located at the local maxima of color and gradient difference. Starting from the anchor points, the contour of the traffic sign is drawn along the most significant direction of color and intensity consistency. The drawing process is also constrained by the curvature feature of the traffic sign circle: a linear growth is discarded immediately if it fails to form an arc over some steps. The Kalman filter principle is adopted to predict the temporal context of a traffic sign: based on the estimated point, we can predict and double-check the traffic sign in consecutive frames. The probability of observing a traffic sign over consecutive frames is compared with the null hypothesis of no perceptible traffic sign, and the temporally salient traffic sign is then detected statistically and automatically as the rare event of a traffic sign being present. The proposed algorithm is tested with a diverse set of images taken in Wuhan, China with the MMS of Wuhan University. Experimental results demonstrate that the proposed algorithm can detect traffic signs at a rate of over 80% in around 10 milliseconds, which is promising for large-scale traffic sign surveys and change detection using a mobile mapping system.
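The temporal-context step above is a standard constant-velocity Kalman prediction; the sketch below shows that step in isolation. The state layout and the noise covariances are illustrative assumptions, not values from the paper.

```python
# Constant-velocity Kalman filter for predicting where a detected sign
# should reappear in the next frame; state = (x, y, vx, vy).
import numpy as np

F = np.array([[1, 0, 1, 0],     # state transition: position += velocity
              [0, 1, 0, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],     # only the center position is measured
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)            # process noise (assumed)
R = 4.0 * np.eye(2)             # measurement noise (assumed)

def kf_step(x, P, z):
    # Predict the sign location in the next frame...
    x, P = F @ x, F @ P @ F.T + Q
    # ...then correct with the detector's measurement z, if available.
    if z is not None:
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
    return x, P
```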
Automatic segmentation of equine larynx for diagnosis of laryngeal hemiplegia
NASA Astrophysics Data System (ADS)
Salehin, Md. Musfequs; Zheng, Lihong; Gao, Junbin
2013-10-01
This paper presents an automatic segmentation method for delineating the clinically significant contours of the equine larynx from an endoscopic image. These contours are used to diagnose the most common disease of the equine larynx, laryngeal hemiplegia. In this study, a hierarchically structured contour map is obtained by the state-of-the-art segmentation algorithm gPb-OWT-UCM. The conic-shaped outer boundary of the equine larynx is extracted based on Pascal's theorem. Lastly, the Hough transform is applied to detect lines related to the edges of the vocal folds. The experimental results show that the proposed approach extracts the targeted contours of the equine larynx better than the gPb-OWT-UCM method alone.
[Remote sensing of atmospheric trace gas by airborne passive FTIR].
Gao, Min-quang; Liu, Wen-qing; Zhang, Tian-shu; Liu, Jian-guo; Lu, Yi-huai; Wang, Ya-ping; Xu, Liang; Zhu, Jun; Chen, Jun
2006-12-01
The present article describes the details of airborne measurements for remote sensing of atmospheric trace gases over various surface backgrounds with an airborne passive FTIR. The passive down-viewing remote sensing technique used in the experiment is discussed, along with the method of acquiring the infrared characteristic spectra of atmospheric trace gases against complicated backgrounds and the concentration retrieval algorithm. The concentrations of CO and N2O in the boundary-layer atmosphere below 1000 m in the experimental region are analyzed quantitatively. This measurement technique and data analysis method, which do not require a previously measured background spectrum, allow fast, mobile remote detection and identification of atmospheric trace gases over large areas, and can also be used for urgent monitoring of accidental pollution outbreaks.
Automatic video segmentation and indexing
NASA Astrophysics Data System (ADS)
Chahir, Youssef; Chen, Liming
1999-08-01
Indexing is an important aspect of video database management. Video indexing involves the analysis of video sequences, which is a computationally intensive process. However, effective management of digital video requires robust indexing techniques. The purpose of our proposed video segmentation is twofold. First, we develop an algorithm that identifies camera shot boundaries; the approach is based on a combination of color histograms and a block-based technique. Next, each temporal segment is represented by a color reference frame, which captures shot similarity and is used in the constitution of scenes. Experimental results using a variety of videos selected from the corpus of the French Audiovisual National Institute are presented to demonstrate the effectiveness of shot detection, the content characterization of shots, and scene constitution.
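A minimal sketch of the first stage, global color-histogram differencing for shot boundary detection, is given below. The Bhattacharyya distance, bin counts, and threshold are assumptions for illustration; the paper's block-based refinement is not shown.

```python
# Shot boundary detection by per-frame color-histogram differencing.
import cv2

def shot_boundaries(video_path, threshold=0.5):
    cap = cv2.VideoCapture(video_path)
    prev_hist, boundaries, idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            # Bhattacharyya distance: near 0 for similar frames, spikes
            # at hard cuts.
            d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
            if d > threshold:
                boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries
```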
Adaptive density trajectory cluster based on time and space distance
NASA Astrophysics Data System (ADS)
Liu, Fagui; Zhang, Zhijie
2017-10-01
Several open problems remain in trajectory clustering for discovering regularities in mobile behavior, such as computing the distance between sub-trajectories, setting the parameter values of the clustering algorithm, and handling the uncertainty/boundary problem of the data set. Based on time and space, this paper therefore defines a method for calculating the distance between sub-trajectories. The significance of this distance calculation is that it clearly reveals the differences between moving trajectories and improves the accuracy of the clustering algorithm. In addition, a novel adaptive density trajectory clustering algorithm is proposed, in which the cluster radius is computed from the density of the data distribution. Cluster centers and the number of clusters are selected automatically by a defined strategy, and the uncertainty/boundary problem of the data set is addressed by a weighted rough c-means scheme. Experimental results demonstrate that the proposed algorithm performs fuzzy trajectory clustering effectively on the basis of time and space distance, and adaptively obtains optimal cluster centers and rich clustering information for mining the features of mobile behavior in mobile and social networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKone, T.E.; Bennett, D.H.
2002-08-01
In multimedia mass-balance models, the soil compartment is an important sink as well as a conduit for transfers to vegetation and shallow groundwater. Here a novel approach for constructing soil transport algorithms for multimedia fate models is developed and evaluated. The resulting algorithms account for diffusion in gas and liquid components; advection in gas, liquid, or solid phases; and multiple transformation processes. They also provide an explicit quantification of the characteristic soil penetration depth. We construct a compartment model using three and four soil layers to replicate with high reliability the flux and mass distribution obtained from the exact analytical solution describing the transient dispersion, advection, and transformation of chemicals in soil with fixed properties and boundary conditions. Unlike the analytical solution, which requires fixed boundary conditions, the soil compartment algorithms can be dynamically linked to other compartments (air, vegetation, ground water, surface water) in multimedia fate models. We demonstrate and evaluate the performance of the algorithms in a model with applications to benzene, benzo(a)pyrene, MTBE, TCDD, and tritium.
Comparison of various contact algorithms for poroelastic tissues.
Galbusera, Fabio; Bashkuev, Maxim; Wilke, Hans-Joachim; Shirazi-Adl, Aboulfazl; Schmidt, Hendrik
2014-01-01
Capabilities of the commercial finite element package ABAQUS in simulating frictionless contact between two saturated porous structures were evaluated and compared with those of an open source code, FEBio. In ABAQUS, both the default contact implementation and another algorithm based on an iterative approach requiring script programming were considered. Test simulations included a patch test of two cylindrical slabs in a gapless contact and confined compression conditions; a confined compression test of a porous cylindrical slab with a spherical porous indenter; and finally two unconfined compression tests of soft tissues mimicking diarthrodial joints. The patch test showed almost identical results for all algorithms. On the contrary, the confined and unconfined compression tests demonstrated large differences related to distinct physical and boundary conditions considered in each of the three contact algorithms investigated in this study. In general, contact with non-uniform gaps between fluid-filled porous structures could be effectively simulated with either ABAQUS or FEBio. The user should be aware of the parameter definitions, assumptions and limitations in each case, and take into consideration the physics and boundary conditions of the problem of interest when searching for the most appropriate model.
Improving graph-based OCT segmentation for severe pathology in retinitis pigmentosa patients
NASA Astrophysics Data System (ADS)
Lang, Andrew; Carass, Aaron; Bittner, Ava K.; Ying, Howard S.; Prince, Jerry L.
2017-03-01
Three dimensional segmentation of macular optical coherence tomography (OCT) data of subjects with retinitis pigmentosa (RP) is a challenging problem due to the disappearance of the photoreceptor layers, which causes algorithms developed for segmentation of healthy data to perform poorly on RP patients. In this work, we present enhancements to a previously developed graph-based OCT segmentation pipeline to enable processing of RP data. The algorithm segments eight retinal layers in RP data by relaxing constraints on the thickness and smoothness of each layer learned from healthy data. Following from prior work, a random forest classifier is first trained on the RP data to estimate boundary probabilities, which are used by a graph search algorithm to find the optimal set of nine surfaces that fit the data. Due to the intensity disparity between normal layers of healthy controls and layers in various stages of degeneration in RP patients, an additional intensity normalization step is introduced. Leave-one-out validation on data acquired from nine subjects showed an average overall boundary error of 4.22 μm as compared to 6.02 μm using the original algorithm.
Bellaïche, Yohanns; Bosveld, Floris; Graner, François; Mikula, Karol; Remesíková, Mariana; Smísek, Michal
2011-01-01
In this paper, we present a novel algorithm for tracking cells in a time-lapse confocal microscopy movie of a Drosophila epithelial tissue during pupal morphogenesis. We consider a 2D + time video as a 3D static image, where frames are stacked atop each other, and using a spatio-temporal segmentation algorithm we obtain information about spatio-temporal 3D tubes representing the evolutions of cells. The main idea for tracking is the use of two distance functions: the first computed from the cells in the initial frame and the second from the segmented boundaries. We track the cells backwards in time. The first distance function attracts the subsequently constructed cell trajectories to the cells in the initial frame, and the second forces them to stay close to the centerlines of the segmented tubular structures. This makes our tracking algorithm robust against noise and missing spatio-temporal boundaries. The approach can be generalized to 3D + time video analysis, where spatio-temporal tubes are 4D objects.
NASA Technical Reports Server (NTRS)
Baker, A. J.; Orzechowski, J. A.
1980-01-01
A theoretical analysis is presented yielding sets of partial differential equations for the determination of turbulent aerodynamic flowfields in the vicinity of an airfoil trailing edge. A four-phase interaction algorithm is derived to complete the analysis. Following input, the first computational phase is an elementary viscous-corrected two-dimensional potential flow solution yielding an estimate of the inviscid-flow-induced pressure distribution. Phase C involves solution of the turbulent two-dimensional boundary layer equations over the trailing edge, with transition to a two-dimensional parabolic Navier-Stokes equation system describing the near-wake merging of the upper and lower surface boundary layers. An iteration provides refinement of the potential-flow-induced pressure coupling to the viscous flow solutions. The final phase is a complete two-dimensional Navier-Stokes analysis of the wake flow in the vicinity of a blunt-based airfoil. A finite element numerical algorithm is presented which is applicable to the solution of all partial differential equation sets of the inviscid-viscous aerodynamic interaction algorithm. Numerical results are discussed.
Motion estimation accuracy for visible-light/gamma-ray imaging fusion for portable portal monitoring
NASA Astrophysics Data System (ADS)
Karnowski, Thomas P.; Cunningham, Mark F.; Goddard, James S.; Cheriyadat, Anil M.; Hornback, Donald E.; Fabris, Lorenzo; Kerekes, Ryan A.; Ziock, Klaus-Peter; Gee, Timothy F.
2010-01-01
The use of radiation sensors as portal monitors is increasing due to heightened concerns over the smuggling of fissile material. Portable systems that can detect significant quantities of fissile material that might be present in vehicular traffic are of particular interest. We have constructed a prototype, rapid-deployment gamma-ray imaging portal monitor that uses machine vision and gamma-ray imaging to monitor multiple lanes of traffic. Vehicles are detected and tracked by using point detection and optical flow methods as implemented in the OpenCV software library. Points are clustered together, but imperfections in the detected points and tracks cause errors in the accuracy of the vehicle position estimates. The resulting errors cause a "blurring" effect in the gamma image of the vehicle. To minimize these errors, we have compared a variety of motion estimation techniques, including an estimate using the median of the clustered points, a "best-track" filtering algorithm, and a constant-velocity motion estimation model. The accuracies of these methods are contrasted and quantified by the root-mean-square differences in the times at which vehicles cross gamma-ray image pixel boundaries, relative to a manually verified ground-truth measurement.
Estimation of coefficients and boundary parameters in hyperbolic systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Murphy, K. A.
1984-01-01
Semi-discrete Galerkin approximation schemes are considered in connection with inverse problems for the estimation of spatially varying coefficients and boundary condition parameters in second order hyperbolic systems typical of those arising in 1-D surface seismic problems. Spline based algorithms are proposed for which theoretical convergence results along with a representative sample of numerical findings are given.
NASA Technical Reports Server (NTRS)
Britt, Charles L.; Bracalente, Emedio M.
1992-01-01
The algorithms used in the NASA experimental wind shear radar system for detection, characterization, and determination of windshear hazard are discussed. The performance of the algorithms in the detection of wet microbursts near Orlando is presented. Various suggested algorithms that are currently being evaluated using the flight test results from Denver and Orlando are reviewed.
Artificial Boundary Conditions for Finite Element Model Update and Damage Detection
2017-03-01
Master's thesis by Emmanouil Damanakis, March 2017; thesis advisor: Joshua H. Gordis. Approved for public release; distribution is unlimited. Abstract (truncated): In structural engineering, a finite element model is often
Beck, Cornelia; Ognibeni, Thilo; Neumann, Heiko
2008-01-01
Background Optic flow is an important cue for object detection. Humans are able to perceive objects in a scene using only kinetic boundaries, and can perform the task even when other shape cues are not provided. These kinetic boundaries are characterized by the presence of motion discontinuities in a local neighbourhood. In addition, temporal occlusions appear along the boundaries as the object in front covers the background and the objects that are spatially behind it. Methodology/Principal Findings From a technical point of view, the detection of motion boundaries for segmentation based on optic flow is a difficult task. This is due to the problem that flow detected along such boundaries is generally not reliable. We propose a model derived from mechanisms found in visual areas V1, MT, and MSTl of human and primate cortex that achieves robust detection along motion boundaries. It includes two separate mechanisms for both the detection of motion discontinuities and of occlusion regions based on how neurons respond to spatial and temporal contrast, respectively. The mechanisms are embedded in a biologically inspired architecture that integrates information of different model components of the visual processing due to feedback connections. In particular, mutual interactions between the detection of motion discontinuities and temporal occlusions allow a considerable improvement of the kinetic boundary detection. Conclusions/Significance A new model is proposed that uses optic flow cues to detect motion discontinuities and object occlusion. We suggest that by combining these results for motion discontinuities and object occlusion, object segmentation within the model can be improved. This idea could also be applied in other models for object segmentation. In addition, we discuss how this model is related to neurophysiological findings. The model was successfully tested both with artificial and real sequences including self and object motion. PMID:19043613
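As a simplified, non-neural stand-in for the motion-discontinuity mechanism, the sketch below computes dense optic flow between two frames and marks locations with a large local flow gradient as candidate kinetic boundaries. The occlusion mechanism and the feedback architecture of the model are not reproduced; the Farneback parameters and the threshold are assumptions.

```python
# Kinetic boundary candidates from the local gradient of dense optic flow.
import cv2
import numpy as np

def grad_mag_sq(channel):
    # Squared spatial gradient magnitude of one flow component.
    return (cv2.Sobel(channel, cv2.CV_64F, 1, 0) ** 2
            + cv2.Sobel(channel, cv2.CV_64F, 0, 1) ** 2)

def motion_boundaries(prev_gray, gray, grad_thresh=1.0):
    # Dense optic flow between consecutive 8-bit grayscale frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Large flow gradients mark pixels where neighbors move differently,
    # i.e., candidate motion discontinuities.
    mag = np.sqrt(grad_mag_sq(flow[..., 0]) + grad_mag_sq(flow[..., 1]))
    return mag > grad_thresh
```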
Inter-method Performance Study of Tumor Volumetry Assessment on Computed Tomography Test-retest Data
Buckler, Andrew J.; Danagoulian, Jovanna; Johnson, Kjell; Peskin, Adele; Gavrielides, Marios A.; Petrick, Nicholas; Obuchowski, Nancy A.; Beaumont, Hubert; Hadjiiski, Lubomir; Jarecha, Rudresh; Kuhnigk, Jan-Martin; Mantri, Ninad; McNitt-Gray, Michael; Moltz, Jan Hendrik; Nyiri, Gergely; Peterson, Sam; Tervé, Pierre; Tietjen, Christian; von Lavante, Etienne; Ma, Xiaonan; Pierre, Samantha St.; Athelogou, Maria
2015-01-01
Rationale and Objectives Tumor volume change has potential as a biomarker for diagnosis, therapy planning, and treatment response. Precision was evaluated and compared among semi-automated lung tumor volume measurement algorithms on clinical thoracic CT datasets. The results inform approaches and testing requirements for establishing conformance with the Quantitative Imaging Biomarker Alliance (QIBA) CT Volumetry Profile. Materials and Methods Industry and academic groups participated in a challenge study. Intra-algorithm repeatability and inter-algorithm reproducibility were estimated. The relative magnitudes of various sources of variability were estimated using a linear mixed effects model. Segmentation boundaries were compared to give developers a basis on which to optimize algorithm performance. Results Intra-algorithm repeatability ranged from 13% (best performing) to 100% (worst performing), with most algorithms demonstrating improved repeatability as tumor size increased. Inter-algorithm reproducibility was determined in three partitions and found to be 58% for the four best performing groups, 70% for the set of groups meeting repeatability requirements, and 84% when all groups but the worst performer were included. The best performing partition performed markedly better on tumors with equivalent diameters above 40 mm. Larger tumors benefitted from human editing, but smaller tumors did not. One-fifth to one-half of the total variability came from sources independent of the algorithms. Segmentation boundaries differed substantially, not just in overall volume but in detail. Conclusions Nine of the twelve participating algorithms pass precision requirements similar to those indicated in the QIBA Profile, with the caveat that the current study was not designed to explicitly evaluate algorithm Profile conformance. Change in tumor volume can be measured with confidence to within ±14% using any of these nine algorithms on tumor sizes above 10 mm. No partition of the algorithms was able to meet the QIBA requirements for interchangeability down to 10 mm, though the partition composed of the best performing algorithms did meet this requirement above a tumor size of approximately 40 mm. PMID:26376841
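For readers who want to reproduce this style of repeatability analysis, the sketch below computes a percentage repeatability coefficient from paired test-retest volumes using the standard 2.77 x within-subject-SD convention; this is a generic approximation, not necessarily the exact estimator used in the study or in the QIBA Profile:

```python
import numpy as np

def repeatability_coefficient(v1, v2):
    """Test-retest repeatability from paired volume measurements.

    v1, v2: arrays of tumor volumes from replicate scans.
    Returns the percentage repeatability coefficient (%RC): the
    smallest change detectable with ~95% confidence.
    """
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    mean = (v1 + v2) / 2.0
    pct_diff = 100.0 * (v1 - v2) / mean      # percent-scale differences
    wsd = np.sqrt(np.mean(pct_diff**2) / 2)  # within-subject SD
    return 2.77 * wsd                        # 2.77 = 1.96 * sqrt(2)
```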
Grid adaption based on modified anisotropic diffusion equations formulated in the parametric domain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagmeijer, R.
1994-11-01
A new grid-adaption algorithm for problems in computational fluid dynamics is presented. The basic equations are derived from a variational problem formulated in the parametric domain of the mapping that defines the existing grid. Modification of the basic equations provides desirable properties in boundary layers. The resulting modified anisotropic diffusion equations are solved for the computational coordinates as functions of the parametric coordinates, and these functions are numerically inverted. Numerical examples show that the algorithm is robust, that shocks and boundary layers are well-resolved on the adapted grid, and that the flow solution becomes a globally smooth function of the computational coordinates.
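The paper's 2-D anisotropic-diffusion formulation does not reduce to a few lines, but the underlying idea of redistributing grid points according to solution activity and then numerically inverting the mapping can be illustrated in 1-D with classical equidistribution; the sketch below is that simpler stand-in, not the authors' method:

```python
import numpy as np

def equidistribute(x, u, n_new=None):
    """Redistribute 1-D grid points so an arc-length-type weight
    based on the solution gradient is equidistributed."""
    n_new = n_new or len(x)
    w = np.sqrt(1.0 + np.gradient(u, x)**2)          # monitor function
    # cumulative weight (trapezoidal rule), playing the role of the
    # parametric coordinate of the adapted mapping
    s = np.concatenate(([0.0],
                        np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    s_new = np.linspace(0.0, s[-1], n_new)           # equal weight bins
    return np.interp(s_new, s, x)                    # invert the mapping
```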
Online Adaboost-Based Parameterized Methods for Dynamic Distributed Network Intrusion Detection.
Hu, Weiming; Gao, Jun; Wang, Yanguo; Wu, Ou; Maybank, Stephen
2014-01-01
Current network intrusion detection systems lack adaptability to frequently changing network environments. Furthermore, intrusion detection in the new distributed architectures is now a major requirement. In this paper, we propose two online Adaboost-based intrusion detection algorithms. In the first algorithm, a traditional online Adaboost process is used, with decision stumps as weak classifiers. In the second algorithm, an improved online Adaboost process is proposed, with online Gaussian mixture models (GMMs) as weak classifiers. We further propose a distributed intrusion detection framework, in which a local parameterized detection model is constructed in each node using the online Adaboost algorithm. A global detection model is then constructed in each node by combining the local parametric models using a small number of samples from the node; this combination is achieved using an algorithm based on particle swarm optimization (PSO) and support vector machines. The global model in each node is used to detect intrusions. Experimental results show that the improved online Adaboost process with GMMs obtains a higher detection rate and a lower false alarm rate than the traditional online Adaboost process using decision stumps. Both algorithms outperform existing intrusion detection algorithms. It is also shown that our PSO- and SVM-based algorithm effectively combines the local detection models into the global model in each node; the global model in a node can handle intrusion types that are found in other nodes, without sharing the samples of those intrusion types.
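A compact sketch of the first variant's ingredients: Oza-Russell-style online boosting over per-feature decision stumps, with Poisson sampling standing in for example weights. The stump update rule and all names here are simplifications for illustration, not the authors' exact procedure:

```python
import numpy as np

class Stump:
    """Decision stump on one feature: threshold at the midpoint of
    running class means, polarity set by which class sits higher."""
    def __init__(self, feature):
        self.f = feature
        self.mean = {1: 0.0, -1: 0.0}
        self.count = {1: 1e-9, -1: 1e-9}

    def fit_one(self, x, y, weight):
        self.count[y] += weight
        self.mean[y] += weight * (x[self.f] - self.mean[y]) / self.count[y]

    def predict(self, x):
        thr = 0.5 * (self.mean[1] + self.mean[-1])
        pol = 1 if self.mean[1] >= self.mean[-1] else -1
        return pol if x[self.f] > thr else -pol

class OnlineAdaboost:
    def __init__(self, n_features, rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.stumps = [Stump(i) for i in range(n_features)]
        self.sc = [1e-9] * n_features   # weight seen when correct
        self.sw = [1e-9] * n_features   # weight seen when wrong

    def update(self, x, y):             # y in {-1, +1}
        lam = 1.0
        for i, s in enumerate(self.stumps):
            k = self.rng.poisson(lam)   # Poisson draw ~ example weight
            if k > 0:
                s.fit_one(x, y, float(k))
            if s.predict(x) == y:       # Oza-Russell lambda update
                self.sc[i] += lam
                lam /= 2 * (1 - self._eps(i))
            else:
                self.sw[i] += lam
                lam /= 2 * self._eps(i)

    def _eps(self, i):
        e = self.sw[i] / (self.sc[i] + self.sw[i])
        return min(max(e, 1e-6), 1 - 1e-6)

    def predict(self, x):
        score = sum(np.log((1 - self._eps(i)) / self._eps(i)) * s.predict(x)
                    for i, s in enumerate(self.stumps))
        return 1 if score >= 0 else -1
```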
Machine Learning Methods for Attack Detection in the Smart Grid.
Ozay, Mete; Esnaola, Inaki; Yarman Vural, Fatos Tunay; Kulkarni, Sanjeev R; Poor, H Vincent
2016-08-01
Attack detection problems in the smart grid are posed as statistical learning problems for different attack scenarios in which the measurements are observed in batch or online settings. In this approach, machine learning algorithms are used to classify measurements as either secure or attacked. An attack detection framework is provided to exploit any available prior knowledge about the system and surmount constraints arising from the sparse structure of the problem. Well-known batch and online learning algorithms (supervised and semisupervised) are employed with decision- and feature-level fusion to model the attack detection problem. The relationships between the statistical and geometric properties of attack vectors employed in the attack scenarios and the learning algorithms are analyzed in order to detect unobservable attacks using statistical learning methods. The proposed algorithms are examined on various IEEE test systems. Experimental analyses show that machine learning algorithms can detect attacks with better performance than attack detection algorithms that employ state vector estimation methods in the proposed framework.
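As a minimal example of the supervised batch setting, the sketch below trains an RBF-kernel SVM to classify measurement snapshots as secure or attacked; the synthetic data with sparse injections is purely illustrative and does not model the IEEE test systems used in the paper:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d = 400, 30                       # snapshots x measurements
Z = rng.normal(0.0, 1.0, (n, d))     # nominal measurement vectors
y = rng.integers(0, 2, n)            # 1 marks an attacked snapshot
for i in np.where(y == 1)[0]:
    idx = rng.choice(d, size=3, replace=False)   # sparse injection
    Z[i, idx] += rng.normal(3.0, 0.5, 3)

Xtr, Xte, ytr, yte = train_test_split(Z, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(Xtr, ytr)
print(f"test accuracy: {clf.score(Xte, yte):.2f}")
```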
Low-complexity R-peak detection for ambulatory fetal monitoring.
Rooijakkers, Michael J; Rabotti, Chiara; Oei, S Guid; Mischi, Massimo
2012-07-01
Non-invasive fetal health monitoring during pregnancy is becoming increasingly important because of the growing number of high-risk pregnancies. Despite recent advances in signal-processing technology, which have enabled fetal monitoring during pregnancy using abdominal electrocardiogram (ECG) recordings, ubiquitous fetal health monitoring is still unfeasible due to the computational complexity of noise-robust solutions. In this paper, an ECG R-peak detection algorithm for ambulatory use is proposed as part of a fetal ECG detection algorithm. The proposed algorithm is optimized to reduce computational complexity without reducing R-peak detection performance relative to existing R-peak detection schemes. Validation of the algorithm is performed on three manually annotated datasets. With detection error rates of 0.23%, 1.32%, and 9.42% on the MIT/BIH Arrhythmia database and the in-house maternal and fetal databases, respectively, the detection rate of the proposed algorithm is comparable to that of the best state-of-the-art algorithms, at a reduced computational complexity.
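A generic low-complexity R-peak detector in the same spirit, band-pass to isolate QRS energy, square, then pick peaks above an adaptive threshold with a refractory distance, is sketched below; the band edges and threshold are common choices, not the authors' optimized parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_r_peaks(ecg, fs):
    """Simple R-peak detector: isolate the QRS band, square to
    emphasize peaks, then adaptive peak picking with a ~250 ms
    refractory period. A sketch, not the paper's algorithm."""
    b, a = butter(2, [8, 20], btype="band", fs=fs)
    qrs = filtfilt(b, a, ecg) ** 2
    thr = 0.3 * np.percentile(qrs, 99)          # adaptive threshold
    peaks, _ = find_peaks(qrs, height=thr, distance=int(0.25 * fs))
    return peaks
```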
A joint swarm intelligence algorithm for multi-user detection in MIMO-OFDM system
NASA Astrophysics Data System (ADS)
Hu, Fengye; Du, Dakun; Zhang, Peng; Wang, Zhijun
2014-11-01
In the multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) system, traditional multi-user detection (MUD) algorithms, which are usually used to suppress multiple access interference, have difficulty balancing detection performance against algorithmic complexity. To solve this problem, this paper proposes a joint swarm intelligence algorithm, called Ant Colony and Particle Swarm Optimisation (AC-PSO), that integrates particle swarm optimisation (PSO) and ant colony optimisation (ACO). Simulation results show that, at low computational complexity, MUD for the MIMO-OFDM system based on the AC-PSO algorithm achieves detection performance comparable to the maximum likelihood algorithm. Thus, the proposed AC-PSO algorithm provides a satisfactory trade-off between computational complexity and detection performance.
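To make the search concrete, the sketch below applies a binary-PSO variant to the maximum-likelihood MUD metric, searching for a bit vector s in {-1,+1}^K that minimizes ||y - Hs||^2. It illustrates only the PSO half, without the ant-colony coupling the paper proposes, and all parameters are illustrative:

```python
import numpy as np

def pso_mud(H, y, n_particles=30, iters=50, seed=0):
    """Binary-PSO sketch of ML multi-user detection for y = H s + n."""
    rng = np.random.default_rng(seed)
    K = H.shape[1]
    X = rng.choice([-1.0, 1.0], (n_particles, K))    # candidate bits
    V = rng.normal(0, 1, (n_particles, K))           # velocities
    cost = lambda S: np.sum(np.abs(y - S @ H.T) ** 2, axis=1)
    pbest, pcost = X.copy(), cost(X)
    g = pbest[np.argmin(pcost)]                      # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, K))
        V = np.clip(0.7 * V + 1.5 * r1 * (pbest - X)
                    + 1.5 * r2 * (g - X), -4, 4)
        prob = 1.0 / (1.0 + np.exp(-V))              # sigmoid mapping
        X = np.where(rng.random((n_particles, K)) < prob, 1.0, -1.0)
        c = cost(X)
        better = c < pcost
        pbest[better], pcost[better] = X[better], c[better]
        g = pbest[np.argmin(pcost)]
    return g
```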
A new real-time tsunami detection algorithm
NASA Astrophysics Data System (ADS)
Chierici, F.; Embriaco, D.; Pignagnoli, L.
2016-12-01
Real-time tsunami detection algorithms play a key role in any Tsunami Early Warning System. We have developed a new algorithm for tsunami detection based on real-time tide removal and real-time band-pass filtering of sea-bed pressure recordings. The algorithm greatly increases the tsunami detection probability, shortens the detection delay, and enhances detection reliability, at low computational cost. The algorithm is designed to be used also in autonomous early warning systems, with a set of input parameters and procedures that can be reconfigured in real time. We have also developed a methodology based on Monte Carlo simulations to test tsunami detection algorithms. Algorithm performance is estimated by defining and evaluating statistical parameters: the detection probability and detection delay, both functions of tsunami amplitude and wavelength, and the rate of false alarms. Pressure data sets acquired by Bottom Pressure Recorders in different locations and environmental conditions were used in order to represent real working scenarios in the tests. We also present an application of the algorithm to the tsunami event that occurred at Haida Gwaii on October 28th, 2012, using data recorded by the Bullseye underwater node of Ocean Networks Canada. The algorithm successfully ran for test purposes in year-long missions onboard the GEOSTAR stand-alone multidisciplinary abyssal observatory, deployed in the Gulf of Cadiz during the EC project NEAREST, and on the NEMO-SN1 cabled observatory deployed in the Western Ionian Sea, an operational node of the European research infrastructure EMSO.
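The detection chain described above, tide removal followed by band-pass filtering of the residual and an amplitude trigger, can be sketched offline as below; the window length, band edges, and threshold are assumed values, and the non-causal filters here stand in for the causal real-time filters the algorithm actually uses:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def tsunami_trigger(pressure, fs, thresh_pa=30.0):
    """Flag candidate tsunami samples in a bottom-pressure series.

    pressure: sea-bed pressure (Pa), sampled at fs Hz (e.g. 1 Hz).
    """
    win = int(2 * 3600 * fs)                       # ~2 h tide window
    tide = np.convolve(pressure, np.ones(win) / win, mode="same")
    residual = pressure - tide                     # de-tided signal
    # pass periods of roughly 2 min to 1 h (assumed tsunami band)
    sos = butter(4, [1 / 3600.0, 1 / 120.0], btype="band",
                 fs=fs, output="sos")
    band = sosfiltfilt(sos, residual)
    return np.abs(band) > thresh_pa                # trigger mask
```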
Raghunathan, Shriram; Gupta, Sumeet K; Markandeya, Himanshu S; Roy, Kaushik; Irazoqui, Pedro P
2010-10-30
Implantable neural prostheses that deliver focal electrical stimulation upon demand are rapidly emerging as an alternative therapy for the roughly one third of the epileptic patient population that is medically refractory. Seizure detection algorithms enable feedback mechanisms to provide focally and temporally specific intervention. Real-time feasibility and computational complexity often limit most reported detection algorithms to implementations using computers for bedside monitoring or external devices communicating with the implanted electrodes. A comparison of algorithms based on detection efficacy alone does not present a complete picture of an algorithm's feasibility under limited computational power, as is the case with most battery-powered applications. We present a two-dimensional design optimization approach that takes into account both detection efficacy and hardware cost in evaluating algorithms for their feasibility in an implantable application. Detection features are first compared for their ability to detect electrographic seizures from micro-electrode data recorded from kainate-treated rats. Circuit models are then used to estimate the dynamic and leakage power consumption of the compared features. A score based on detection efficacy and hardware cost is assigned to each feature and plotted in a two-dimensional design space. An optimal combination of the compared features is used to construct an algorithm that provides maximal detection efficacy per unit hardware cost. The methods presented in this paper would facilitate the development of a common platform to benchmark seizure detection algorithms for comparison and feasibility analysis in the next generation of implantable neuroprosthetic devices to treat epilepsy. Copyright © 2010 Elsevier B.V. All rights reserved.
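The scoring step lends itself to a very small sketch: rank candidate features by detection efficacy per unit hardware cost in the two-dimensional design space. All feature names and numbers below are hypothetical placeholders, not results from the paper:

```python
# Hypothetical feature candidates with detection efficacy (e.g., AUC)
# and estimated hardware power cost (uW); both axes are assumptions.
features = {
    "line_length":   {"efficacy": 0.90, "power_uw": 1.2},
    "energy":        {"efficacy": 0.85, "power_uw": 0.8},
    "coastline":     {"efficacy": 0.88, "power_uw": 1.0},
    "spectral_band": {"efficacy": 0.93, "power_uw": 4.5},
}

def score(f, alpha=1.0):
    # detection efficacy per unit hardware cost, the selection
    # criterion described in the abstract above
    return f["efficacy"] / (f["power_uw"] ** alpha)

best = max(features, key=lambda k: score(features[k]))
print(best, round(score(features[best]), 3))
```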
Automatic segmentation of multimodal brain tumor images based on classification of super-voxels.
Kadkhodaei, M; Samavi, S; Karimi, N; Mohaghegh, H; Soroushmehr, S M R; Ward, K; All, A; Najarian, K
2016-08-01
Despite the rapid growth in brain tumor segmentation approaches, there are still many challenges in this field. Automatic segmentation of brain images has a critical role in decreasing the burden of manual labeling and increasing the robustness of brain tumor diagnosis. We consider segmentation of glioma tumors, which have a wide variation in size, shape, and appearance properties. In this paper, images are enhanced and normalized to the same scale in a preprocessing step. The enhanced images are then segmented based on their intensities using 3D super-voxels. In images, a tumor region can usually be regarded as a salient object. Inspired by this observation, we propose a new feature which uses a saliency detection algorithm. An edge-aware filtering technique is employed to align the edges of the original image to the saliency map, which enhances the boundaries of the tumor. Then, for classification of tumors in brain images, a set of robust texture features is extracted from the super-voxels. Experimental results indicate that our proposed method outperforms a comparable state-of-the-art algorithm in terms of Dice score.
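The evaluation metric cited above is easy to state precisely; a minimal Dice-score implementation for binary tumor masks:

```python
import numpy as np

def dice(seg, gt):
    """Dice overlap between a predicted tumor mask and ground truth:
    2|A & B| / (|A| + |B|), in [0, 1], 1 meaning perfect overlap."""
    seg, gt = np.asarray(seg, bool), np.asarray(gt, bool)
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * inter / max(seg.sum() + gt.sum(), 1)
```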
An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform
Tom, Kwok F
2018-01-01
US Army Research Laboratory technical report ARL-TR-8270, January 2018, covering 1 October 2016-30 September 2017.
Fast localization of optic disc and fovea in retinal images for eye disease screening
NASA Astrophysics Data System (ADS)
Yu, H.; Barriga, S.; Agurto, C.; Echegaray, S.; Pattichis, M.; Zamora, G.; Bauman, W.; Soliz, P.
2011-03-01
Optic disc (OD) and fovea locations are two important anatomical landmarks in automated analysis of retinal disease in color fundus photographs. This paper presents a new, fast, fully automatic optic disc and fovea localization algorithm developed for diabetic retinopathy (DR) screening. The methodology consists of two steps. First, the OD location is identified using template matching and a directional matched filter. To reduce false positives due to bright areas of pathology, we exploit vessel characteristics inside the optic disc. The location of the fovea is estimated as the point of lowest matched filter response within a search area determined by the optic disc location. Second, optic disc segmentation is performed. Based on the detected optic disc location, a fast hybrid level-set algorithm, which combines region information and edge gradients to drive the curve evolution, is used to segment the optic disc boundary. Extensive evaluation was performed on 1200 images (Messidor) comprising 540 images of healthy retinas, 431 images with DR but no risk of macular edema (ME), and 229 images with DR and risk of ME. The OD localization methodology achieved a 98.3% success rate, and fovea localization a 95% success rate. The average mean absolute distance (MAD) between the OD segmentation algorithm and the "gold standard" is 10.5% of the estimated OD radius. Qualitatively, 97% of the images achieved Excellent to Fair performance for OD segmentation. The segmentation algorithm performs well even on blurred images.
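The first stage's template-matching idea can be sketched as mean-subtracted cross-correlation with an OD template, a simplified stand-in for the normalized matching, directional matched filter, and vessel-based verification the paper describes (function and argument names are illustrative):

```python
import numpy as np
from scipy.signal import fftconvolve

def locate_od(green_channel, template):
    """Coarse OD localization: correlate a mean-subtracted OD template
    with the (mean-subtracted) green channel and return the position
    of the strongest response as (row, col)."""
    img = green_channel - green_channel.mean()
    tmpl = template - template.mean()
    # correlation implemented as convolution with the flipped template
    resp = fftconvolve(img, tmpl[::-1, ::-1], mode="same")
    return np.unravel_index(np.argmax(resp), resp.shape)
```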
Improved obstacle avoidance and navigation for an autonomous ground vehicle
NASA Astrophysics Data System (ADS)
Giri, Binod; Cho, Hyunsu; Williams, Benjamin C.; Tann, Hokchhay; Shakya, Bicky; Bharam, Vishal; Ahlgren, David J.
2015-01-01
This paper presents improvements made to the intelligence algorithms employed on Q, an autonomous ground vehicle, for the 2014 Intelligent Ground Vehicle Competition (IGVC). In 2012, the IGVC committee combined the formerly separate autonomous and navigation challenges into a single AUT-NAV challenge. In this new challenge, the vehicle is required to navigate through a grassy obstacle course and stay within the course boundaries (a lane of two white painted lines) that guide it toward a given GPS waypoint. Once the vehicle reaches this waypoint, it enters an open course where it is required to navigate to another GPS waypoint while avoiding obstacles. After reaching the final waypoint, the vehicle is required to traverse another obstacle course before completing the run. Q uses a modular parallel software architecture in which image processing, navigation, and sensor control algorithms run concurrently. A tuned navigation algorithm allows Q to smoothly maneuver through obstacle fields. For the 2014 competition, most revisions occurred in the vision system, which detects white lines and informs the navigation component. Barrel obstacles of various colors presented a new challenge for image processing: the previous color plane extraction algorithm would not suffice. To overcome this difficulty, laser range sensor data were overlaid on visual data. Q also participates in the Joint Architecture for Unmanned Systems (JAUS) challenge at IGVC. For 2014, significant updates were implemented: the JAUS component accepted a greater variety of messages and showed better compliance with the JAUS technical standard. With these improvements, Q secured second place in the JAUS competition.
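One common way to implement the white-line detection described above is to threshold low-saturation, high-value pixels in HSV space; the sketch below is that generic approach, not necessarily Q's production pipeline, and the threshold values are assumptions:

```python
import cv2
import numpy as np

def white_line_mask(bgr):
    """Detect white painted lane lines on grass: keep pixels that are
    bright (high V) but nearly colorless (low S), then clean up small
    speckles with a morphological opening."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 0, 180), (180, 60, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            np.ones((5, 5), np.uint8))
    return mask
```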