Science.gov

Sample records for algorithm successfully detected

  1. An Ensemble Successive Project Algorithm for Liquor Detection Using Near Infrared Sensor

    PubMed Central

    Qu, Fangfang; Ren, Dong; Wang, Jihua; Zhang, Zhong; Lu, Na; Meng, Lei

    2016-01-01

    Spectral analysis based on near-infrared (NIR) sensors is a powerful tool for complex information processing and high-precision recognition, and it has been widely applied to quality analysis and online inspection of agricultural products. This paper proposes a new method to address the instability of the successive projections algorithm (SPA) under small sample sizes, as well as the lack of association between the selected variables and the analyte. The proposed method, an evaluated bootstrap ensemble SPA (EBSPA) based on a variable evaluation index (EI) for variable selection, is applied to the quantitative prediction of alcohol concentration in liquor from NIR sensor data. In the experiment, the proposed EBSPA is combined with three kinds of modeling methods to test its performance. In addition, EBSPA combined with partial least squares is compared with other state-of-the-art variable selection methods. The results show that the proposed method overcomes the defects of SPA and has the best generalization performance and stability. Furthermore, the physical meaning of the variables selected from the NIR sensor data is clear, which effectively reduces the number of variables and improves prediction accuracy. PMID:26761015
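    The bootstrap-ensemble idea behind EBSPA can be illustrated with a toy sketch (the scoring rule, parameter names, and data below are simplified stand-ins, not the paper's EI index or SPA projections): variables are scored on repeated bootstrap resamples, and those selected most consistently are ranked first.

```python
import numpy as np

def bootstrap_variable_selection(X, y, n_boot=50, top_k=5, seed=0):
    """Toy bootstrap-ensemble variable selection: score variables by
    |correlation| with the analyte on each bootstrap resample and rank
    them by how often they land in the per-resample top-k."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # bootstrap resample
        Xb, yb = X[idx], y[idx]
        r = np.array([abs(np.corrcoef(Xb[:, j], yb)[0, 1]) for j in range(p)])
        counts[np.argsort(r)[-top_k:]] += 1       # vote for the top-k variables
    return np.argsort(counts)[::-1]               # variables by vote count

# Synthetic "spectra": only variables 0 and 1 carry the analyte signal.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))
y = 2 * X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=100)
ranked = bootstrap_variable_selection(X, y)
print(ranked[:2])  # the two informative variables should come first
```

    Because votes are aggregated over many resamples, a variable that only looks informative on one lucky subset is filtered out, which is the stability property the abstract emphasizes.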

  2. An Ensemble Successive Project Algorithm for Liquor Detection Using Near Infrared Sensor.

    PubMed

    Qu, Fangfang; Ren, Dong; Wang, Jihua; Zhang, Zhong; Lu, Na; Meng, Lei

    2016-01-01

    Spectral analysis based on near-infrared (NIR) sensors is a powerful tool for complex information processing and high-precision recognition, and it has been widely applied to quality analysis and online inspection of agricultural products. This paper proposes a new method to address the instability of the successive projections algorithm (SPA) under small sample sizes, as well as the lack of association between the selected variables and the analyte. The proposed method, an evaluated bootstrap ensemble SPA (EBSPA) based on a variable evaluation index (EI) for variable selection, is applied to the quantitative prediction of alcohol concentration in liquor from NIR sensor data. In the experiment, the proposed EBSPA is combined with three kinds of modeling methods to test its performance. In addition, EBSPA combined with partial least squares is compared with other state-of-the-art variable selection methods. The results show that the proposed method overcomes the defects of SPA and has the best generalization performance and stability. Furthermore, the physical meaning of the variables selected from the NIR sensor data is clear, which effectively reduces the number of variables and improves prediction accuracy. PMID:26761015

  3. Measuring the success of video segmentation algorithms

    NASA Astrophysics Data System (ADS)

    Power, Gregory J.

    2001-12-01

    Appropriate segmentation of video is a key step for applications such as video surveillance, video composing, video compression, storage and retrieval, and automated target recognition. Video segmentation algorithms involve dissecting the video into scenes based on shot boundaries as well as local objects and events based on spatial shape and regional motions. Many algorithmic approaches to video segmentation have been reported recently, but many lack measures to quantify the success of the segmentation, especially in comparison to other algorithms. This paper suggests multiple bench-top measures for evaluating video segmentation. The paper suggests that the measures are most useful when 'truth' data about the video is available, such as precise frame-by-frame object shape. When precise 'truth' data is unavailable, this paper suggests using hand-segmented 'truth' data to measure the success of the video segmentation. Thereby, the ability of the video segmentation algorithm to achieve the same quality of segmentation as the human is obtained in the form of a variance in multiple measures. The paper introduces a suite of measures, each scaled from zero to one. A score of one on a particular measure is a perfect score for a singular segmentation measure. Measures are introduced to evaluate the ability of a segmentation algorithm to correctly detect shot boundaries, to correctly determine spatial shape, and to correctly determine temporal shape. The usefulness of the measures is demonstrated on a simple segmenter designed to detect and segment a ping pong ball from a table tennis image sequence.
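    A zero-to-one spatial-shape measure of the kind described can be sketched as intersection-over-union against truth data (an illustrative choice, not a reproduction of the paper's exact measure suite):

```python
import numpy as np

def shape_score(truth_mask, seg_mask):
    """A 0-to-1 spatial-shape measure: intersection-over-union of a
    truth mask and a segmented mask (1.0 means a perfect match)."""
    truth = truth_mask.astype(bool)
    seg = seg_mask.astype(bool)
    union = np.logical_or(truth, seg).sum()
    if union == 0:
        return 1.0  # both masks empty: trivially perfect
    return np.logical_and(truth, seg).sum() / union

truth = np.zeros((8, 8)); truth[2:6, 2:6] = 1   # truth shape (the "ball")
seg = np.zeros((8, 8));   seg[2:6, 2:5] = 1     # segmenter misses one column
print(shape_score(truth, seg))
```

    With hand-segmented truth, the same score can be computed against several human segmentations and its variance reported, as the abstract suggests.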

  4. GPU Accelerated Event Detection Algorithm

    2011-05-25

    Smart grid applications require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis: (i) event detection algorithms must scale with the size of the data; (ii) algorithms must not only handle the multi-dimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear; (iii) algorithms must operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. Therefore we propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve a lot of numerical operations and are highly data-parallelizable.

  5. GPU Accelerated Event Detection Algorithm

    SciTech Connect

    2011-05-25

    Smart grid applications require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis: (i) event detection algorithms must scale with the size of the data; (ii) algorithms must not only handle the multi-dimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear; (iii) algorithms must operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. Therefore we propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve a lot of numerical operations and are highly data-parallelizable.
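    Steps (a) and (b) can be sketched as follows (a simplified illustration of the windowed-SVD idea, not the GAEDA code itself; the z-score rule stands in for any univariate anomaly detector):

```python
import numpy as np

def window_change_series(X, w):
    """Slide a window of w rows over a multi-dimensional sequence, take the
    leading right singular vector of each window (SVD), and record the change
    between successive windows as a univariate series (0 when they agree)."""
    series, prev = [], None
    for start in range(0, X.shape[0] - w + 1, w):
        _, _, vt = np.linalg.svd(X[start:start + w], full_matrices=False)
        v = vt[0]
        if prev is not None:
            series.append(1.0 - abs(prev @ v))  # sign-invariant subspace change
        prev = v
    return np.array(series)

def flag_anomalies(series, k=3.0):
    """Plain z-score rule on the univariate series."""
    mu, sd = series.mean(), series.std() + 1e-12
    return np.where(np.abs(series - mu) > k * sd)[0]

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6)) + np.array([3, 0, 0, 0, 0, 0])   # stable direction
X[200:210] = rng.normal(size=(10, 6)) + np.array([0, 0, 0, 0, 0, 8])  # event
s = window_change_series(X, w=10)
print(flag_anomalies(s, k=3.0))
```

    The event window changes the dominant direction of the data, so the two series entries that straddle it spike while the rest stay near zero.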

  6. Global Optimality of the Successive Maxbet Algorithm.

    ERIC Educational Resources Information Center

    Hanafi, Mohamed; ten Berge, Jos M. F.

    2003-01-01

    It is known that the Maxbet algorithm, which is an alternative to the method of generalized canonical correlation analysis and Procrustes analysis, may converge to local maxima. Discusses an eigenvalue criterion that is sufficient, but not necessary, for global optimality of the successive Maxbet algorithm. (SLD)

  7. Audio detection algorithms

    NASA Astrophysics Data System (ADS)

    Neta, B.; Mansager, B.

    1992-08-01

    Audio information concerning targets generally includes direction, frequencies, and energy levels. One use of audio cueing is to use direction information to help determine where more sensitive visual detection and acquisition sensors should be directed. Generally, the use of audio cueing will shorten the time required for visual detection, although there could be circumstances where the audio information is misleading and degrades visual performance. Audio signatures can also be useful for helping classify the emanating platform, as well as for providing estimates of its velocity. The Janus combat simulation is the premier high-resolution model used by the Army and other agencies to conduct research. This model has a visual detection component that essentially incorporates the algorithms described by Hartman (1985). The model in its current form does not have any sound cueing capability. This report is part of a research effort to investigate the utility of developing such a capability.

  8. A universal symmetry detection algorithm.

    PubMed

    Maurer, Peter M

    2015-01-01

    Research on symmetry detection focuses on identifying and detecting new types of symmetry. This paper presents an algorithm that is capable of detecting any type of permutation-based symmetry, including many types for which there are no existing algorithms. General symmetry detection is library-based, but symmetries that can be parameterized (i.e., total, partial, rotational, and dihedral symmetry) can be detected without using libraries. In many cases it is faster than existing techniques. Furthermore, it is simpler than most existing techniques and can easily be incorporated into existing software. The algorithm can also be used with virtually any type of matrix-based symmetry, including conjugate symmetry.

  9. A swaying object detection algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Shidong; Rong, Jianzhong; Zhou, Dechuang; Wang, Jian

    2013-07-01

    Moving object detection is a crucial preliminary step in video analysis. Some moving objects, such as spitting steam, fire, and smoke, have a unique motion feature: their lower part stays basically unchanged while their upper part moves back and forth. Based on this unique motion feature, a swaying object detection algorithm is presented in this paper. Firstly, a fuzzy integral was adopted to integrate color features for extracting moving objects from video frames. Secondly, a swaying identification algorithm based on centroid calculation was used to distinguish swaying objects from other moving objects. Experiments show that the proposed method is effective at detecting swaying objects.
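    The centroid-based swaying test can be illustrated with a toy sketch (the split into upper and lower halves and the score threshold are illustrative assumptions, not the paper's exact criterion):

```python
import numpy as np

def sway_score(masks):
    """For each frame mask, compare how much the horizontal centroid of the
    object's upper half moves over time versus its lower half; a large ratio
    suggests a swaying object such as flame or smoke."""
    upper, lower = [], []
    for m in masks:
        ys, xs = np.nonzero(m)
        mid = (ys.min() + ys.max()) / 2
        upper.append(xs[ys < mid].mean())    # upper-half horizontal centroid
        lower.append(xs[ys >= mid].mean())   # lower-half horizontal centroid
    return np.std(upper) / (np.std(lower) + 1e-9)

# Synthetic "flame": base fixed at columns 10-14, tip swinging left and right.
masks = []
for t in range(8):
    m = np.zeros((20, 30), dtype=np.uint8)
    m[12:20, 10:15] = 1                       # lower part, static
    tip = 12 + 4 * (1 if t % 2 else -1)       # upper part sways
    m[4:12, tip:tip + 5] = 1
    masks.append(m)
print(sway_score(masks) > 5.0)
```

    A rigid moving object would shift both centroids together, giving a ratio near 1, while the synthetic flame's static base drives the ratio up.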

  10. MUSIC algorithms for rebar detection

    NASA Astrophysics Data System (ADS)

    Solimene, Raffaele; Leone, Giovanni; Dell'Aversano, Angela

    2013-12-01

    The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects which are small in size as compared to the wavelength. The ensemble of objects to be detected consists of both strong and weak scatterers. This represents a scattering environment challenging for detection purposes as strong scatterers tend to mask the weak ones. Consequently, the detection of more weakly scattering objects is not always guaranteed and can be completely impaired when the noise corrupting data is of a relatively high level. To overcome this drawback, here a new technique is proposed, starting from the idea of applying a two-stage MUSIC algorithm. In the first stage strong scatterers are detected. Then, information concerning their number and location is employed in the second stage focusing only on the weak scatterers. The role of an adequate scattering model is emphasized to improve drastically detection performance in realistic scenarios.

  11. A fast meteor detection algorithm

    NASA Astrophysics Data System (ADS)

    Gural, P.

    2016-01-01

    A low-latency meteor detection algorithm for use with fast steering mirrors was previously developed to track and telescopically follow meteors in real time (Gural, 2007). It has been rewritten as a generic clustering and tracking software module for meteor detection that meets the demanding throughput requirements of a Raspberry Pi while also maintaining a high probability of detection. The software interface is generalized to work with various forms of front-end video pre-processing and provides a rich product set of parameterized line detection metrics. The discussion includes the Maximum Temporal Pixel (MTP) compression technique as a fast thresholding option for feeding the detection module, the trade-offs made in the detection algorithm for maximum processing throughput, details of the clustering and tracking methodology, processing products, performance metrics, and a general interface description.
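    MTP compression reduces a stack of video frames to a single image by keeping each pixel's maximum over time, so a moving bright meteor leaves a streak in one frame that is cheap to threshold; a minimal sketch:

```python
import numpy as np

def max_temporal_pixel(frames):
    """Maximum Temporal Pixel (MTP) compression: collapse a block of frames
    into one image by keeping, per pixel, the maximum value seen over time."""
    return np.max(frames, axis=0)

# A dim 16x16 scene with a bright "meteor" moving along the diagonal.
frames = np.full((8, 16, 16), 10, dtype=np.uint8)
for t in range(8):
    frames[t, t, t] = 200          # one bright pixel per frame
mtp = max_temporal_pixel(frames)
print((mtp == 200).sum())          # streak pixels in the compressed frame
```

    The compressed frame can then be fed to a line detector once per block instead of once per frame, which is where the throughput saving comes from.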

  12. Differential Search Algorithm Based Edge Detection

    NASA Astrophysics Data System (ADS)

    Gunen, M. A.; Civicioglu, P.; Beşdok, E.

    2016-06-01

    In this paper, a new method is presented for extracting edge information using the Differential Search Optimization Algorithm. The proposed method is based on a new heuristic image thresholding method for edge detection. The success of the proposed method has been examined on the fusion of two remotely sensed images. The applicability of the proposed method to edge detection and image fusion problems has been analysed in detail, and the empirical results showed that the proposed method is useful for solving these problems.

  13. Wire Detection Algorithms for Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia I.

    2002-01-01

    In this research we addressed the problem of obstacle detection for low-altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcraft. Since they are very thin, detecting them early enough for the pilot to take evasive action is difficult, as their images can be less than one or two pixels wide. Two approaches were explored for this purpose. The first approach involved a technique for sub-pixel edge detection and subsequent post-processing to reduce false alarms. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential to solve the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer-generated wire images. The performance of the algorithm was evaluated at both the pixel and the wire levels. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post-processing is performed to remove false alarms due to clutter. The second approach involved the use of an example-based learning scheme, namely Support Vector Machines. The purpose of this approach was to explore the feasibility of an example-based learning approach for the task of detecting wires from their images. Support Vector Machines (SVMs) have emerged as a promising pattern classification tool and have been used in various applications. It was found that this approach is not suitable for very thin wires and, of course, not suitable at all for sub-pixel-thick wires. High dimensionality of the data as such does not present a major problem for SVMs. However, it is desirable to have a large number of training examples, especially for high-dimensional data. The main difficulty in using SVMs (or any other example-based learning

  14. A Vehicle Detection Algorithm Based on Deep Belief Network

    PubMed Central

    Cai, Yingfeng; Chen, Long

    2014-01-01

    Vision-based vehicle detection is a critical technology that plays an important role not only in vehicle active safety but also in road video surveillance applications. Traditional shallow-model-based vehicle detection algorithms still cannot meet the requirement of accurate vehicle detection in these applications. In this work, a novel deep-learning-based vehicle detection algorithm with a 2D deep belief network (2D-DBN) is proposed. In the algorithm, the proposed 2D-DBN architecture uses second-order planes instead of first-order vectors as input and uses bilinear projection to retain discriminative information and to determine the size of the deep architecture, which enhances the success rate of vehicle detection. On-road experimental results demonstrate that the algorithm performs better than state-of-the-art vehicle detection algorithms on the testing data sets. PMID:24959617

  15. Is there a best hyperspectral detection algorithm?

    NASA Astrophysics Data System (ADS)

    Manolakis, D.; Lockwood, R.; Cooley, T.; Jacobson, J.

    2009-05-01

    A large number of hyperspectral detection algorithms have been developed and used over the last two decades. Some algorithms are based on highly sophisticated mathematical models and methods; others are derived using intuition and simple geometrical concepts. The purpose of this paper is threefold. First, we discuss the key issues involved in the design and evaluation of detection algorithms for hyperspectral imaging data. Second, we present a critical review of existing detection algorithms for practical hyperspectral imaging applications. Finally, we argue that the "apparent" superiority of sophisticated algorithms with simulated data or in laboratory conditions does not necessarily translate to superiority in real-world applications.

  16. Orbital objects detection algorithm using faint streaks

    NASA Astrophysics Data System (ADS)

    Tagawa, Makoto; Yanagisawa, Toshifumi; Kurosaki, Hirohisa; Oda, Hiroshi; Hanada, Toshiya

    2016-02-01

    This study proposes an algorithm to detect orbital objects that are small or moving at high apparent velocities from optical images by utilizing their faint streaks. In the conventional object-detection algorithm, a high signal-to-noise ratio (e.g., 3 or more) is required, whereas in our proposed algorithm the signals are summed along the streak direction to improve object-detection sensitivity. Lower signal-to-noise-ratio objects were detected by applying the algorithm to a time series of images. The algorithm comprises the following steps: (1) image skewing, (2) image compression along the vertical axis, (3) detection and determination of streak position, (4) searching for object candidates using the time-series streak-position data, and (5) selecting the candidate with the best linearity and reliability. Our algorithm's ability to detect streaks with signals weaker than the background noise was confirmed using images from the Australia Remote Observatory.
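    Steps (1)-(2), skewing the image so the streak becomes vertical and then compressing along that axis so sub-noise signals accumulate into a detectable peak, can be sketched as follows (the row-wise circular shear below is deliberately simplistic, not the paper's implementation):

```python
import numpy as np

def streak_response(image, shear):
    """Skew the image so a streak of the given slope becomes vertical, then
    compress (sum) along the vertical axis; a streak then shows up as a peak
    in the resulting 1-D profile."""
    h, w = image.shape
    skewed = np.zeros_like(image, dtype=float)
    for row in range(h):
        shift = int(round(shear * row))
        skewed[row] = np.roll(image[row], -shift)   # shear row-by-row
    return skewed.sum(axis=0)

rng = np.random.default_rng(2)
img = rng.normal(0, 1.0, size=(64, 64))             # unit-sigma background noise
for row in range(64):                               # faint diagonal streak,
    img[row, (10 + row) % 64] += 0.8                # below the noise sigma
profile = streak_response(img, shear=1.0)
print(int(np.argmax(profile)))                      # peak at the streak column
```

    Per pixel the streak is buried (0.8 sigma), but summing 64 aligned pixels boosts the streak to roughly 6 sigma of the column-sum noise, which is the sensitivity gain the abstract describes.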

  17. An Intrusion Detection Algorithm Based On NFPA

    NASA Astrophysics Data System (ADS)

    Anming, Zhong

    A process-oriented intrusion detection algorithm based on a Probabilistic Automaton with No Final probabilities (NFPA) is introduced; the system call sequence of a process is used as the source data. By using information from the system call sequences of both normal and anomalous processes, anomaly detection and misuse detection are efficiently combined. Experiments show better performance of our algorithm compared to the classical algorithm in this field.

  18. Genetic optimization of the HSTAMIDS landmine detection algorithm

    NASA Astrophysics Data System (ADS)

    Konduri, Ravi K.; Solomon, Geoff Z.; DeJong, Keith; Duvoisin, Herbert A.; Bartosz, Elizabeth E.

    2004-09-01

    CyTerra's dual-sensor HSTAMIDS system has demonstrated exceptional landmine detection capabilities in extensive government-run field tests. Further optimization of the highly successful PentAD-class algorithms for Humanitarian Demining (HD) use, to enhance the probability of detection (Pd) and to lower the false alarm rate (FAR), may be possible. PentAD contains several input parameters, making such optimization computationally intensive. Genetic algorithm techniques, which formerly provided substantial improvement in the detection performance of the metal detector sensor algorithm alone, have been applied to optimize the numerical values of the dual-sensor algorithm parameters. Genetic algorithm techniques have also been applied to choose among several sub-models and fusion techniques to potentially train the HSTAMIDS HD system in new ways. In this presentation we discuss the performance of the resulting algorithm as applied to field data.

  19. An efficient parallel termination detection algorithm

    SciTech Connect

    Baker, A. H.; Crivelli, S.; Jessup, E. R.

    2004-05-27

    Information local to any one processor is insufficient to monitor the overall progress of most distributed computations. Typically, a second distributed computation for detecting termination of the main computation is necessary. In order to be a useful computational tool, the termination detection routine must operate concurrently with the main computation, adding minimal overhead, and it must promptly and correctly detect termination when it occurs. In this paper, we present a new algorithm for detecting the termination of a parallel computation on distributed-memory MIMD computers that satisfies all of those criteria. A variety of termination detection algorithms have been devised. Of these, the algorithm presented by Sinha, Kale, and Ramkumar (henceforth, the SKR algorithm) is unique in its ability to adapt to the load conditions of the system on which it runs, thereby minimizing the impact of termination detection on performance. Because their algorithm also detects termination quickly, we consider it to be the most efficient practical algorithm presently available. The termination detection algorithm presented here was developed for use in the PMESC programming library for distributed-memory MIMD computers. Like the SKR algorithm, our algorithm adapts to system loads and imposes little overhead. Also like the SKR algorithm, ours is tree-based, and it does not depend on any assumptions about the physical interconnection topology of the processors or the specifics of the distributed computation. In addition, our algorithm is easier to implement and requires only half as many tree traverses as does the SKR algorithm. This paper is organized as follows. In section 2, we define our computational model. In section 3, we review the SKR algorithm. We introduce our new algorithm in section 4, and prove its correctness in section 5. We discuss its efficiency and present experimental results in section 6.

  20. Automatic ionospheric layers detection: Algorithms analysis

    NASA Astrophysics Data System (ADS)

    Molina, María G.; Zuccheretti, Enrico; Cabrera, Miguel A.; Bianchi, Cesidio; Sciacca, Umberto; Baskaradas, James

    2016-03-01

    Vertical sounding is a widely used technique to obtain ionosphere measurements, such as an estimation of virtual height versus scanned frequency. It is performed by a high-frequency radar for geophysical applications called an "ionospheric sounder" (or "ionosonde"). Radar detection depends mainly on target characteristics. While the behavior of several kinds of targets and the corresponding echo detection algorithms have been studied, a survey to identify a suitable algorithm for the ionospheric sounder has yet to be carried out. This paper focuses on automatic echo detection algorithms implemented specifically for an ionospheric sounder; target-specific characteristics were studied as well. Adaptive threshold detection algorithms are proposed, compared to the currently implemented algorithm, and tested using actual data obtained from the Advanced Ionospheric Sounder (AIS-INGV) at the Rome Ionospheric Observatory. Different case studies have been selected according to typical ionospheric and detection conditions.

  1. CFAR detection algorithm for acoustic-seismic landmine detection

    NASA Astrophysics Data System (ADS)

    Matalkah, Ghaith M.; Matalgah, Mustafa M.; Sabatier, James M.

    2007-04-01

    Automating the detection process in acoustic-seismic landmine detection speeds up detection and eliminates the need for a human operator in the minefield. Previous automatic detection algorithms for acoustic landmine detection showed excellent results in detecting landmines in various environments. However, these algorithms use environment-specific noise-removal procedures that rely on training sets acquired over mine-free areas. In this work, we derive a new detection algorithm that adapts to varying conditions and employs environment-independent techniques. The algorithm is based on the generalized likelihood ratio (GLR) test and asymptotically achieves a constant false alarm rate (CFAR). The algorithm processes the magnitude and phase of the vibrational velocity and shows satisfactory results in detecting landmines in gravel and dirt lanes.
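    The CFAR principle, thresholding each cell against an estimate of its own local background so the false alarm rate stays constant as conditions vary, can be illustrated with a generic cell-averaging detector (a textbook CA-CFAR sketch with illustrative parameters, not the paper's GLR statistic):

```python
import numpy as np

def ca_cfar(x, guard=2, train=8, scale=8.0):
    """Cell-averaging CFAR: compare each cell with a threshold proportional
    to the mean of surrounding training cells (guard cells excluded), so the
    threshold adapts to the local background level."""
    n = len(x)
    hits = []
    for i in range(n):
        cells = [x[j] for j in range(i - guard - train, i + guard + train + 1)
                 if 0 <= j < n and abs(j - i) > guard]
        if x[i] > scale * np.mean(cells):
            hits.append(i)
    return hits

rng = np.random.default_rng(3)
background = rng.exponential(1.0, size=200)   # fluctuating clutter background
background[60] += 40.0                        # strong "target" responses
background[140] += 40.0
print(ca_cfar(background))
```

    Because the threshold scales with the local mean, a uniformly noisier environment raises the threshold everywhere instead of flooding the output with false alarms, which is the environment-independence property the abstract targets.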

  2. Smell Detection Agent Based Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Vinod Chandra, S. S.

    2016-09-01

    In this paper, a novel nature-inspired optimization algorithm is employed: the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves the creation of a surface with smell trails and subsequent iteration of the agents in resolving a path. The algorithm can be applied to different computational settings that involve path-based problems, and its implementation can be treated as a shortest-path problem for a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. The algorithm is useful for solving NP-hard problems related to path discovery, as well as many practical optimization problems, and it can be extended to general shortest-path problems.

  3. CCD Detects Two Images In Quick Succession

    NASA Technical Reports Server (NTRS)

    Janesick, James R.; Collins, Andy

    1996-01-01

    Prototype special-purpose charge-coupled device (CCD) designed to detect two 1,024 x 1,024-pixel images in rapid succession. Readout performed slowly to minimize noise. CCD operated in synchronism with pulsed laser, stroboscope, or other pulsed source of light to form pairs of images of rapidly moving objects.

  4. Edge detection applied to SST fields. [front detection algorithm for Sea Surface Temperature

    NASA Technical Reports Server (NTRS)

    Cayula, Jean-Francois; Cornillon, Peter

    1990-01-01

    An algorithm designed to detect fronts automatically in satellite-derived sea-surface temperature (SST) fields is presented. The algorithm is operated at different levels to detect and differentiate between false and true edges. For purposes of comparison, the algorithm is applied to a test set of 98 SST images to detect the northern edge of the Gulf Stream. The algorithm successfully detected valid temperature fronts and ignored false edges, and also produced statistics about the temperature fronts that are useful in the subsequent analysis of these fronts. It is assumed that the algorithm performs equally well on other SST fronts such as those associated with rings, the subtropical convergence, or the shelf/slope fronts.
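    The basic edge step underlying such front detection can be illustrated with a simple gradient threshold (far cruder than the multi-level algorithm described, and purely illustrative):

```python
import numpy as np

def front_pixels(sst, grad_thresh=0.5):
    """Minimal gradient-based front detector: flag pixels where the local SST
    gradient magnitude exceeds a threshold (degrees per pixel)."""
    gy, gx = np.gradient(sst.astype(float))
    return np.hypot(gx, gy) > grad_thresh

# Two water masses meeting along a column: 18 C inshore, 24 C offshore.
sst = np.where(np.arange(32)[None, :] < 16, 18.0, 24.0) * np.ones((32, 1))
mask = front_pixels(sst)
cols = np.unique(np.nonzero(mask)[1])
print(cols.tolist())   # the flagged columns straddle the temperature front
```

    The multi-level design in the paper adds exactly what this sketch lacks: a way to reject false edges (e.g., cloud artifacts) that a bare gradient threshold would pass.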

  5. Performance analysis of cone detection algorithms.

    PubMed

    Mariotti, Letizia; Devaney, Nicholas

    2015-04-01

    Many algorithms have been proposed to help clinicians evaluate cone density and spacing, as these may be related to the onset of retinal diseases. However, there has been no rigorous comparison of the performance of these algorithms. In addition, the performance of such algorithms is typically determined by comparison with human observers. Here we propose a technique to simulate realistic images of the cone mosaic. We use the simulated images to test the performance of three popular cone detection algorithms, and we introduce an algorithm which is used by astronomers to detect stars in astronomical images. We use Free Response Operating Characteristic (FROC) curves to evaluate and compare the performance of the four algorithms. This allows us to optimize the performance of each algorithm. We observe that performance is significantly enhanced by up-sampling the images. We investigate the effect of noise and image quality on cone mosaic parameters estimated using the different algorithms, finding that the estimated regularity is the most sensitive parameter. PMID:26366758

  6. Negative Selection Algorithm for Aircraft Fault Detection

    NASA Technical Reports Server (NTRS)

    Dasgupta, D.; KrishnaKumar, K.; Wong, D.; Berry, M.

    2004-01-01

    We investigated a real-valued Negative Selection Algorithm (NSA) for fault detection in man-in-the-loop aircraft operation. The detection algorithm uses body-axis angular rate sensor data exhibiting normal flight behavior patterns to probabilistically generate a set of fault detectors that can detect any abnormalities (including faults and damage) in the behavior pattern of the aircraft flight. We performed experiments with datasets (collected under normal and various simulated failure conditions) using the NASA Ames man-in-the-loop high-fidelity C-17 flight simulator. The paper provides results of experiments with different datasets representing various failure conditions.
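    The negative-selection idea, generating random detectors and discarding any that match normal "self" data so the survivors cover only abnormal regions, can be sketched in two dimensions (a toy illustration with made-up parameters, not the NASA implementation):

```python
import numpy as np

def train_detectors(self_data, n_detectors=200, radius=0.15, seed=0):
    """Toy real-valued negative selection: sample candidate detectors
    uniformly in [0,1]^d and keep only those farther than `radius` from
    every normal ("self") sample."""
    rng = np.random.default_rng(seed)
    detectors = []
    while len(detectors) < n_detectors:
        c = rng.uniform(0, 1, size=self_data.shape[1])
        if np.min(np.linalg.norm(self_data - c, axis=1)) > radius:
            detectors.append(c)
    return np.array(detectors)

def is_anomalous(x, detectors, radius=0.15):
    """A sample is flagged when any detector covers it."""
    return bool(np.min(np.linalg.norm(detectors - x, axis=1)) <= radius)

rng = np.random.default_rng(1)
normal = 0.5 + 0.05 * rng.normal(size=(300, 2))   # tight "normal flight" cluster
dets = train_detectors(normal)
print(is_anomalous(np.array([0.9, 0.9]), dets))   # far from self: flagged
print(is_anomalous(np.array([0.5, 0.5]), dets))   # inside self region: not flagged
```

    Only normal data is needed for training, which matches the abstract: detectors are bred away from normal behavior, so anything they cover is by construction abnormal.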

  7. Lightning detection and exposure algorithms for smartphones

    NASA Astrophysics Data System (ADS)

    Wang, Haixin; Shao, Xiaopeng; Wang, Lin; Su, Laili; Huang, Yining

    2015-05-01

    This study focuses on the key theory of lightning detection and exposure, together with supporting experiments. Firstly, an algorithm based on the differential operation between two adjacent frames is selected to remove the lightning background information and extract the lightning signal, and a threshold detection algorithm is applied to achieve precise detection of lightning. Secondly, an algorithm is proposed to obtain the scene exposure value, which can automatically detect the external illumination status. Subsequently, a look-up table can be built from the relationship between the exposure value and the average image brightness to achieve rapid automatic exposure. Finally, a hardware test platform is established around a USB 3.0 industrial camera with a CMOS imaging sensor, and experiments are carried out on this platform to verify the performance of the proposed algorithms. The algorithms can effectively and quickly capture clear lightning pictures, such as special nighttime scenes, which will provide beneficial support to the smartphone industry, since current exposure methods in smartphones often miss the capture or produce overexposed or underexposed pictures.
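    The frame-differencing detection step can be sketched in a few lines (the threshold and pixel values are illustrative):

```python
import numpy as np

def lightning_trigger(prev_frame, frame, thresh=50):
    """Difference two adjacent frames to suppress the static background, then
    threshold; the count of sharply brightened pixels serves as the trigger."""
    diff = frame.astype(int) - prev_frame.astype(int)   # signed difference
    return int((diff > thresh).sum())

night = np.full((120, 160), 12, dtype=np.uint8)   # dark static scene
flash = night.copy()
flash[20:60, 40:100] = 220                        # lightning-lit region
print(lightning_trigger(night, night))            # no change: count is 0
print(lightning_trigger(night, flash))            # many brightened pixels
```

    Because the static background cancels in the difference, the trigger responds only to sudden brightening, which is why the method works in otherwise dark nighttime scenes.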

  8. Staff line detection and revision algorithm based on subsection projection and correlation algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Yin-xian; Yang, Ding-li

    2013-03-01

    Staff line detection plays a key role in OMR technology and is a precondition for the subsequent segmentation and recognition of music sheets. To handle the horizontal inclination and curvature of staff lines and the vertical inclination of the image, which often occur in music scores, an improved approach based on subsection projection is put forward to detect the original staff lines and revise them, in an effort to implement staff line detection more successfully. Experimental results show the presented algorithm can detect and revise staff lines quickly and effectively.
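
    The projection core can be sketched on a single vertical subsection (the paper splits the image into such strips so that inclined or curved lines still project sharply within each one; the threshold ratio here is an assumption):

```python
def staff_line_rows(strip, ratio=0.8):
    """Horizontal projection on one vertical subsection: a row whose
    black-pixel count exceeds `ratio` of the strip width is a
    staff-line candidate."""
    width = len(strip[0])
    projection = [sum(row) for row in strip]  # 1 = black pixel
    return [i for i, p in enumerate(projection) if p >= ratio * width]

# Toy binary strip: rows 1 and 4 are near-full-width staff lines.
strip = [
    [0, 0, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 1, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
]
```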

  9. Obstacle Detection Algorithms for Rotorcraft Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia I.; Huang, Ying; Narasimhamurthy, Anand; Pande, Nitin; Ahumada, Albert (Technical Monitor)

    2001-01-01

    In this research we addressed the problem of obstacle detection for low-altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcraft: since they are very thin, with images that can be less than one or two pixels wide, detecting them early enough for the pilot to take evasive action is difficult. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential to solve the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer-generated wire images. The performance of the algorithm was evaluated both at the pixel and the wire levels. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post-processing is performed to remove false alarms due to clutter.

  10. Detecting Danger: The Dendritic Cell Algorithm

    NASA Astrophysics Data System (ADS)

    Greensmith, Julie; Aickelin, Uwe; Cayzer, Steve

    The "Dendritic Cell Algorithm" (DCA) is inspired by the function of the dendritic cells of the human immune system. In nature, dendritic cells are the intrusion detection agents of the human body, policing the tissue and organs for potential invaders in the form of pathogens. In this research, an abstract model of dendritic cell (DC) behavior is developed and subsequently used to form an algorithm—the DCA. The abstraction process was facilitated through close collaboration with laboratory-based immunologists, who performed bespoke experiments, the results of which are used as an integral part of this algorithm. The DCA is a population-based algorithm, with each agent in the system represented as an "artificial DC". Each DC has the ability to combine multiple data streams and can add context to data suspected as anomalous. In this chapter, the abstraction process and details of the resultant algorithm are given. The algorithm is applied to numerous intrusion detection problems in computer security including the detection of port scans and botnets, where it has produced impressive results with relatively low rates of false positives.

  11. Rare Event Detection Algorithm Of Water Quality

    NASA Astrophysics Data System (ADS)

    Ungs, M. J.

    2011-12-01

    A novel method is presented describing the development and implementation of an on-line water quality event detection algorithm. An algorithm was developed to distinguish between normal variation in water quality parameters and changes in these parameters triggered by the presence of contaminant spikes. Emphasis is placed on simultaneously limiting the number of false alarms (which are called false positives) that occur and the number of misses (called false negatives). The problem of excessive false alarms is common to existing change detection algorithms. EPA's standard measure of evaluation for event detection algorithms is to have a false alarm rate of less than 0.5 percent and a false positive rate less than 2 percent (EPA 817-R-07-002). A detailed description of the algorithm's development is presented. The algorithm is tested using historical water quality data collected by a public water supply agency at multiple locations and using spiking contaminants developed by the USEPA, Water Security Division. The water quality parameters of specific conductivity, chlorine residual, total organic carbon, pH, and oxidation reduction potential are considered. Abnormal data sets are generated by superimposing water quality changes on the historical or baseline data. Eddies-ET has defined reaction expressions which specify how the peak or spike concentration of a particular contaminant affects each water quality parameter. Nine default contaminants (Eddies-ET) were previously derived from pipe-loop tests performed at EPA's National Homeland Security Research Center (NHSRC) Test and Evaluation (T&E) Facility. A contaminant strength value of approximately 1.5 is considered to be a significant threat. The proposed algorithm has been able to achieve a combined false alarm rate of less than 0.03 percent for both false positives and for false negatives using contaminant spikes of strength 2 or more.

  12. A collision detection algorithm for telerobotic arms

    NASA Technical Reports Server (NTRS)

    Tran, Doan Minh; Bartholomew, Maureen Obrien

    1991-01-01

    The telerobotic manipulator's collision detection algorithm is described. The structural model of the world environment and the template representation of objects that it applies are evaluated. Functional issues that are required for the manipulator to operate in a more complex and realistic environment are discussed.

  13. Improved imaging algorithm for bridge crack detection

    NASA Astrophysics Data System (ADS)

    Lu, Jingxiao; Song, Pingli; Han, Kaihong

    2012-04-01

    This paper presents an improved imaging algorithm for bridge crack detection that optimizes the eight-direction Sobel edge detection operator, making the positioning of edge points more accurate and effectively reducing false edge information, so as to facilitate follow-up processing. In calculating crack geometry, we measure the length of a single crack by extracting its skeleton. To calculate crack area, we construct an area template by applying a logical bitwise AND operation to the crack image. Experimental results show that the errors between this crack detection method and actual manual measurement are within an acceptable range and meet the needs of engineering applications. The algorithm is fast and effective for automated crack measurement, and can provide more valid data for proper planning and appropriate performance of bridge maintenance and rehabilitation processes.
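
    The edge-detection core can be sketched with the two classic 3x3 Sobel kernels (the paper optimizes an eight-direction variant, whose additional kernels are not reproduced here):

```python
def sobel_magnitude(img):
    """Gradient magnitude from the horizontal and vertical Sobel kernels,
    using the L1 norm |gx| + |gy|; border pixels are left at zero."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = abs(gx) + abs(gy)
    return out

# A vertical step edge: a crack boundary in miniature.
img = [[0, 0, 100, 100, 100] for _ in range(5)]
mag = sobel_magnitude(img)
```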

  14. Optimal design of link systems using successive zooming genetic algorithm

    NASA Astrophysics Data System (ADS)

    Kwon, Young-Doo; Sohn, Chang-hyun; Kwon, Soon-Bum; Lim, Jae-gyoo

    2009-07-01

    Link systems have been around for a long time and are still used to control motion in diverse applications such as automobiles, robots and industrial machinery. This study presents a procedure involving the use of a genetic algorithm for the optimal design of single four-bar link systems and a double four-bar link system used in a diesel engine. We adopted the Successive Zooming Genetic Algorithm (SZGA), which has one of the most rapid convergence rates among global search algorithms. The results are verified by experiment and by the RecurDyn dynamic motion analysis package. During the optimal design of single four-bar link systems, we found that, when the input/output (I/O) angles are identical, the initial and final configurations show a certain symmetry. For the double link system, we introduced weighting factors for the multi-objective functions, which minimize the difference between output angles, providing balanced engine performance, as well as the difference between the final output angle and its desired magnitude. We adopted a graphical method to select a proper ratio between the weighting factors.
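
    The successive-zooming idea can be sketched in one dimension; plain random sampling stands in for the per-stage GA, and the zoom factor and stage counts are illustrative, not the paper's:

```python
import random

def szga_minimize(f, lo, hi, zoom=0.5, stages=6, pop=40, seed=1):
    """Successive-zooming search (simplified): each stage samples a
    population in the current interval, then shrinks the interval
    around the best candidate by `zoom`. A real SZGA runs a full GA
    per stage; random sampling keeps the sketch short."""
    rng = random.Random(seed)
    best = None
    for _ in range(stages):
        xs = [rng.uniform(lo, hi) for _ in range(pop)]
        best = min(xs, key=f)
        half = zoom * (hi - lo) / 2
        lo, hi = best - half, best + half  # zoom in around the best point
    return best

# Converges toward the minimum of (x - 3)^2 at x = 3.
best = szga_minimize(lambda x: (x - 3) ** 2, -10.0, 10.0)
```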

  15. On Dijkstra's Algorithm for Deadlock Detection

    NASA Astrophysics Data System (ADS)

    Li, Youming; Greca, Ardian; Harris, James

    We study a classical problem in operating systems concerning deadlock detection for systems with reusable resources. The elegant Dijkstra's algorithm utilizes simple data structures, but its cost grows quadratically with the number of processes. Our goal is to reduce the cost in an optimal way without losing the simplicity of the data structures. More specifically, we present a graph-free and almost optimal algorithm whose cost is linear in the number of processes, when the number of resources is fixed and the units of requests for resources are bounded by constants.
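
    For contrast, the classical matrix-based reduction, whose cost is quadratic in the number of processes, can be sketched as follows (the paper's linear-time refinement is not reproduced here):

```python
def detect_deadlock(available, allocation, request):
    """Classical deadlock detection by reduction: repeatedly find a
    process whose outstanding request can be met, assume it finishes
    and releases its allocation. Processes that can never finish are
    deadlocked. The repeated scan makes this O(n^2) in the number of
    processes n."""
    n, m = len(allocation), len(available)
    work = list(available)
    finished = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]  # release its resources
                finished[i] = True
                progress = True
    return [i for i in range(n) if not finished[i]]  # deadlocked processes
```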

  16. Detection of Cheating by Decimation Algorithm

    NASA Astrophysics Data System (ADS)

    Yamanaka, Shogo; Ohzeki, Masayuki; Decelle, Aurélien

    2015-02-01

    We expand item response theory to study the case of "cheating students" in a set of exams, trying to detect them by applying a greedy inference algorithm. This extended model is closely related to Boltzmann machine learning. In this paper we aim to infer the correct biases and interactions of our model by considering a relatively small number of sets of training data. Nevertheless, the greedy algorithm employed in the present study exhibits good performance with only a few training sets. The key point is the sparseness of the interactions in our problem in the context of Boltzmann machine learning: the existence of cheating students is expected to be very rare (possibly even in the real world). We compare a standard approach for inferring the sparse interactions in Boltzmann machine learning with our greedy algorithm, and find the latter to be superior in several aspects.

  17. A novel algorithm for notch detection

    NASA Astrophysics Data System (ADS)

    Acosta, C.; Salazar, D.; Morales, D.

    2013-06-01

    It is common knowledge that DFM guidelines require revisions to design data. These guidelines impose corrections inserted into areas within the design data flow. At times, this requires rather drastic modifications to the data, both during the layer derivation or DRC phase and especially within the RET phase, for example OPC. During such data transformations, several polygon geometry changes are introduced, which can substantially increase shot count and geometry complexity, and ultimately complicate conversion to mask writer machine formats. In this resulting complex data, notches may be found that do not significantly contribute to the final manufacturing results, but do contribute to the complexity of the surrounding geometry, and are therefore undesirable. Additionally, there are cases in which the overall figure count can be reduced with minimal impact on the quality of the corrected data if notches are detected and corrected. Similarly, in other cases data quality can be improved if specific valley notches are filled in, or peak notches are cut out. Such cases generally satisfy specific geometrical restrictions in order to be valid candidates for notch correction. Traditional notch detection has been done for rectilinear (Manhattan-style) data and only in axis-parallel directions. The traditional approaches employ dimensional measurement algorithms that measure edge distances along the outside of polygons. These approaches are in general adaptations, and therefore ill-suited for generalized detection of notches with unusual shapes and rotations. This paper covers a novel algorithm developed for the CATS MRCC tool that finds both valley and peak notches that are candidates for removal. The algorithm is generalized and invariant to data rotation, so that it can find notches in data rotated at any angle. It includes parameters to control the dimensions of detected notches, as well as algorithm tolerances.

  18. Parallelization of Edge Detection Algorithm using MPI on Beowulf Cluster

    NASA Astrophysics Data System (ADS)

    Haron, Nazleeni; Amir, Ruzaini; Aziz, Izzatdin A.; Jung, Low Tan; Shukri, Siti Rohkmah

    In this paper, we present the design of a parallel Sobel edge detection algorithm using Foster's methodology. The parallel algorithm is implemented using the MPI message-passing library and a master/slave scheme. Every processor performs the same sequential algorithm, but on a different part of the image. Experimental results conducted on a Beowulf cluster are presented to demonstrate the performance of the parallel algorithm.
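
    The master/slave decomposition can be sketched as follows; threads stand in for MPI ranks, and a simple horizontal gradient stands in for the full Sobel stencil, so this is only the partitioning pattern, not the paper's implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def filter_rows(img, y0, y1):
    """Edge response for rows y0..y1-1 (a horizontal gradient stands in
    for the Sobel stencil to keep the sketch short)."""
    w = len(img[0])
    return [(y, [abs(img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)])
                 for x in range(w)])
            for y in range(y0, y1)]

def parallel_edges(img, workers=2):
    """The master splits the image into horizontal strips, each worker
    filters its own strip, and the master reassembles the result."""
    h = len(img)
    chunk = (h + workers - 1) // workers
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(filter_rows, img, i * chunk,
                               min((i + 1) * chunk, h))
                   for i in range(workers)]
        pieces = [row for f in futures for row in f.result()]
    out = [None] * h
    for y, row in pieces:
        out[y] = row
    return out
```

    The result is independent of the worker count, since the strips partition the rows exactly.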

  19. Network Algorithms for Detection of Radiation Sources

    SciTech Connect

    Rao, Nageswara S; Brooks, Richard R; Wu, Qishi

    2014-01-01

    In support of national defense, the Domestic Nuclear Detection Office's (DNDO) Intelligent Radiation Sensor Systems (IRSS) program supported the development of networks of radiation counters for detecting, localizing and identifying low-level, hazardous radiation sources. Industry teams developed the first generation of such networks with tens of counters, and demonstrated several of their capabilities in indoor and outdoor characterization tests. Subsequently, these test measurements have been used in algorithm replays using various sub-networks of counters. Test measurements combined with algorithm outputs are used to extract Key Measurements and Benchmark (KMB) datasets. We present two selective analyses of these datasets: (a) a notional border monitoring scenario that highlights the benefits of a network of counters compared to individual detectors, and (b) new insights into the Sequential Probability Ratio Test (SPRT) detection method, which lead to its adaptations for improved detection. Using KMB datasets from an outdoor test, we construct a notional border monitoring scenario, wherein twelve 2x2 NaI detectors are deployed on the periphery of a 21x21 meter square region. A Cs-137 (175 uCi) source is moved across this region, starting several meters outside and finally moving away. The measurements from individual counters and the network were processed using replays of a particle filter algorithm developed under the IRSS program. The algorithm outputs from KMB datasets clearly illustrate the benefits of combining measurements from all networked counters: the source was detected before it entered the region, during its trajectory inside, and until it moved several meters away. When individual counters are used for detection, the source was detected for much shorter durations, and was sometimes missed in the interior region. The application of SPRT for detecting radiation sources requires choosing the detection threshold, which in turn requires a source strength
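
    A generic SPRT for Poisson count data can be sketched as follows, using Wald's boundaries; the rates and error targets are illustrative assumptions, not the program's values:

```python
import math

def sprt_poisson(counts, bg_rate, src_rate, alpha=0.01, beta=0.01):
    """Sequential Probability Ratio Test for Poisson counts: accumulate
    the log-likelihood ratio of 'background + source' vs 'background
    only' and stop when a Wald boundary is crossed."""
    upper = math.log((1 - beta) / alpha)   # accept "source present"
    lower = math.log(beta / (1 - alpha))   # accept "background only"
    llr = 0.0
    for t, n in enumerate(counts, 1):
        # log Poisson(n; src_rate) - log Poisson(n; bg_rate); factorials cancel
        llr += n * math.log(src_rate / bg_rate) - (src_rate - bg_rate)
        if llr >= upper:
            return "source", t
        if llr <= lower:
            return "background", t
    return "undecided", len(counts)
```

    The counts in each window are compared sequentially, so a strong source can be declared after very few samples while ambiguous data defers the decision.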

  20. Water quality change detection: multivariate algorithms

    NASA Astrophysics Data System (ADS)

    Klise, Katherine A.; McKenna, Sean A.

    2006-05-01

    In light of growing concern over the safety and security of our nation's drinking water, increased attention has been focused on advanced monitoring of water distribution systems. The key to these advanced monitoring systems lies in the combination of real-time data and robust statistical analysis. Currently available data streams from sensors provide near real-time information on water quality. Combining these data streams with change detection algorithms, this project aims to develop automated monitoring techniques that will classify real-time data and flag anomalous water types. Here, water quality data in 1-hour increments over 3000 hours at 4 locations are used to test multivariate algorithms for detecting anomalous water quality events. The algorithms use all available water quality sensors to measure deviation from expected water quality. Simulated anomalous water quality events are added to the measured data to test three approaches for measuring this deviation. These approaches use multivariate distance measures to 1) the previous observation, 2) the closest observation in multivariate space, and 3) the closest cluster of previous water quality observations. Clusters are established using k-means classification. Each approach uses a moving window of previous water quality measurements to classify the current measurement as normal or anomalous. Receiver Operating Characteristic (ROC) curves test the ability of each approach to discriminate between normal and anomalous water quality using a variety of thresholds and simulated anomalous events. These analyses result in a better understanding of the deviation from normal water quality that is necessary to sound an alarm.
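
    The second approach (distance to the closest observation in multivariate space) can be sketched as follows; the parameter vector layout and threshold are illustrative assumptions:

```python
def anomalous(history, sample, threshold):
    """Flag `sample` when its Euclidean distance to the *closest*
    observation in the moving window of past water-quality vectors
    exceeds `threshold`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(dist(sample, h) for h in history) > threshold

# Moving window of (specific conductivity, pH, chlorine residual) vectors.
window = [(0.50, 7.0, 2.0), (0.52, 7.1, 2.1), (0.49, 6.9, 2.0)]
normal_flag = anomalous(window, (0.51, 7.0, 2.05), threshold=0.5)  # small drift
spike_flag = anomalous(window, (0.90, 6.1, 3.5), threshold=0.5)    # contaminant spike
```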

  1. An improved algorithm for wildfire detection

    NASA Astrophysics Data System (ADS)

    Nakau, K.

    2010-12-01

    Society has a strong demand for satellite information on wildfire locations, so understanding those demands is important when considering how to improve a wildfire detection algorithm. Interviews and considerations imply that the most important improvements are the geographical resolution of the wildfire product and classification of the fire as smoldering or flaming. Discussions were held with fire service agencies in Alaska and fire service volunteer groups in Indonesia. The Alaska Fire Service (AFS) makes a 3D map overlaid with fire locations every morning; this 3D map is then examined by the leaders of fire service teams to decide their strategy for fighting the wildfire. In particular, firefighters of both agencies seek the best walking path to approach the fire. Because of the mountainous landscape, geospatial resolution is quite important for them: walking 1 km through bush, the size of a single pixel of the fire product, is very tough for firefighters. Also, in the case of remote wildfires, fire service agencies use satellite information to decide when to fly an observation flight to confirm the status: expanding, flaming, smoldering or out. Therefore, it is also quite important to provide the classification of fire as flaming or smoldering. Beyond disaster management, wildfires emit a huge amount of carbon into the atmosphere, as much as one quarter to one half of the CO2 from fuel combustion (IPCC AR4), so reducing CO2 emissions from human-caused wildfire is important. To estimate carbon emissions from wildfire, spatial resolution is again quite important. To improve the sensitivity of wildfire detection, the author adopts radiance-based wildfire detection. Unlike the existing brightness-temperature approach, this makes it easy to account for the reflectance of the background land cover. For GCOM-C1/SGLI in particular, the band for detecting fire at 250 m resolution is at the 1.6 μm wavelength, where there is much more reflected sunlight. Therefore, we need to

  2. Evaluation of hybrids algorithms for mass detection in digitalized mammograms

    NASA Astrophysics Data System (ADS)

    Cordero, José; Garzón Reyes, Johnson

    2011-01-01

    Breast cancer remains a significant public health problem, and early detection of lesions can increase the success of medical treatment. Mammography is an effective imaging modality for the early diagnosis of abnormalities, in which an image of the mammary gland is obtained with low-dose X-rays; it allows a tumor or circumscribed mass to be detected two to three years before it is clinically palpable, and is the only method that has so far reduced mortality from breast cancer. In this paper, three hybrid algorithms for circumscribed mass detection on digitized mammograms are evaluated. The first stage corresponds to a review of the enhancement and segmentation techniques used in processing mammographic images. A shape filter was then applied to the resulting regions, and the surviving regions were processed by means of a Bayesian filter, where the feature vector for the classifier was constructed with few measurements. The implemented algorithms were then evaluated with ROC curves, using 40 test images: 20 normal images and 20 images with circumscribed lesions. Finally, the advantages and disadvantages of each algorithm in correctly detecting a lesion are discussed.

  3. Photon Counting Using Edge-Detection Algorithm

    NASA Technical Reports Server (NTRS)

    Gin, Jonathan W.; Nguyen, Danh H.; Farr, William H.

    2010-01-01

    New applications such as high-data-rate, photon-starved, free-space optical communications require photon counting at flux rates into gigaphoton-per-second regimes, coupled with subnanosecond timing accuracy. Current single-photon detectors that are capable of handling such operating conditions are designed in an array format and produce output pulses that span multiple sample times. In order to discern one pulse from another and not overcount the number of incoming photons, a detection algorithm must be applied to the sampled detector output pulses. As flux rates increase, such a detection algorithm becomes difficult to implement within a digital processor that may reside within a field-programmable gate array (FPGA). Systems have been developed and implemented both to characterize gigahertz-bandwidth single-photon detectors and to process photon count signals at rates into gigaphotons per second, in order to implement communications links at SCPPM (serial concatenated pulse position modulation) encoded data rates exceeding 100 megabits per second with efficiencies greater than two bits per detected photon. A hardware edge-detection algorithm and corresponding signal combining and deserialization hardware were developed to meet these requirements at sample rates up to 10 GHz. The photon discriminator deserializer hardware board accepts four inputs, which allows it to take inputs from a quad photon counting detector, to support requirements for optical tracking with a reduced number of hardware components. The four inputs are hardware leading-edge detected independently. After leading-edge detection, the resultant samples are ORed together prior to deserialization. The deserialization is performed to reduce the rate at which data is passed to a digital signal processor, perhaps residing within an FPGA. The hardware implements four separate analog inputs that are connected through RF connectors. 
Each analog input is fed to a high-speed 1

  4. Comparison of hyperspectral change detection algorithms

    NASA Astrophysics Data System (ADS)

    Pieper, M.; Manolakis, D.; Truslow, E.; Cooley, T.; Brueggeman, M.; Weisner, A.; Jacobson, J.

    2015-09-01

    There are a multitude of civilian and military applications for the detection of anomalous changes in hyperspectral images. Anomalous changes occur when the material within a pixel is replaced. Environmental factors that change over time, such as illumination, will affect the radiance of all the pixels in a scene, despite the materials within remaining constant. The goal of an anomalous change detection algorithm is to suppress changes caused by the environment and detect pixels where the materials within have changed. Anomalous change detection is a two-step process. Two co-registered images of a scene are first transformed to maximize the overall correlation between the images, then an anomalous change detector (ACD) is applied to the transformed images. The transforms maximize the correlation between the two images to attenuate the environmental differences that distract from the anomalous changes of importance. Several categories of transforms with different optimization parameters are discussed and compared. One of two types of ACD is then applied to the transformed images. The first ACD uses the difference of the two transformed images. The second concatenates the spectra of the two images and uses an aggregated ACD. A comparison of the two ACD methods and their effectiveness with the different transforms is done for the first time.

  5. Climate Data Homogenization Using Edge Detection Algorithms

    NASA Astrophysics Data System (ADS)

    Hammann, A. C.; Rennermalm, A. K.

    2015-12-01

    The problem of climate data homogenization has predominantly been addressed by testing the likelihood of one or more breaks inserted into a given time series and modeling the mean to be stationary in between the breaks. We recast the same problem in a slightly different form: that of detecting step-like changes in noisy data, and observe that this problem has spawned a large number of approaches to its solution as the "edge detection" problem in image processing. With respect to climate data, we ask the question: How can we optimally separate step-like from smoothly-varying low-frequency signals? We study the hypothesis that the edge-detection approach makes better use of all information contained in the time series than the "traditional" approach (e.g. Caussinus and Mestre, 2004), which we base on several observations. 1) The traditional formulation of the problem reduces the available information from the outset to that contained in the test statistic. 2) The criterion of local steepness of the low-frequency variability, while at least hypothetically useful, is ignored. 3) The practice of using monthly data corresponds, mathematically, to applying a moving average filter (to reduce noise) and subsequent subsampling of the result; this subsampling reduces the amount of available information beyond what is necessary for noise reduction. Most importantly, the tradeoff between noise reduction (better with filters with wide support in the time domain) and localization of detected changes (better with filters with narrow support) is expressed in the well-known uncertainty principle and can be addressed optimally within a time-frequency framework. Unsurprisingly, a large number of edge-detection algorithms have been proposed that make use of wavelet decompositions and similar techniques. We are developing this framework in part to be applied to a particular set of climate data from Greenland; we will present results from this application as well as from tests with

  6. Boundary-detection algorithm for locating edges in digital imagery

    NASA Technical Reports Server (NTRS)

    Myers, V. I. (Principal Investigator); Russell, M. J.; Moore, D. G.; Nelson, G. D.

    1975-01-01

    The author has identified the following significant results. Initial development of a computer program which implements a boundary detection algorithm to detect edges in digital images is described. An evaluation of the boundary detection algorithm was conducted to locate boundaries of lakes from LANDSAT-1 imagery. The accuracy of the boundary detection algorithm was determined by comparing the area within boundaries of lakes located using digitized LANDSAT imagery with the area of the same lakes planimetered from imagery collected from an aircraft platform.

  7. Obstacle Detection Algorithms for Aircraft Navigation: Performance Characterization of Obstacle Detection Algorithms for Aircraft Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia; Coraor, Lee

    2000-01-01

    The research reported here is a part of NASA's Synthetic Vision System (SVS) project for the development of a High Speed Civil Transport Aircraft (HSCT). One of the components of the SVS is a module for detection of potential obstacles in the aircraft's flight path by analyzing the images captured by an on-board camera in real-time. Design of such a module includes the selection and characterization of robust, reliable, and fast techniques and their implementation for execution in real-time. This report describes the results of our research in realizing such a design. It is organized into three parts. Part I. Data modeling and camera characterization; Part II. Algorithms for detecting airborne obstacles; and Part III. Real time implementation of obstacle detection algorithms on the Datacube MaxPCI architecture. A list of publications resulting from this grant as well as a list of relevant publications resulting from prior NASA grants on this topic are presented.

  8. A Community Detection Algorithm Based on Topology Potential and Spectral Clustering

    PubMed Central

    Wang, Zhixiao; Chen, Zhaotong; Zhao, Ya; Chen, Shaoda

    2014-01-01

    Community detection is of great value for complex networks in understanding their inherent laws and predicting their behavior. Spectral clustering algorithms have been successfully applied in community detection. This kind of method has two inadequacies: one is that the input matrices they use cannot provide sufficient structural information for community detection, and the other is that they cannot necessarily derive the proper community number from the ladder distribution of eigenvector elements. In order to solve these problems, this paper puts forward a novel community detection algorithm based on topology potential and spectral clustering. The new algorithm constructs the normalized Laplacian matrix with nodes' topology potential, which contains rich structural information about the network. In addition, the new algorithm can automatically get the optimal community number from the local maximum potential nodes. Experimental results showed that the new algorithm gave excellent performance on artificial networks and real-world networks and outperformed other community detection methods. PMID:25147846

  9. Evaluation of RADAP II Severe-Storm-Detection Algorithms.

    NASA Astrophysics Data System (ADS)

    Winston, Herb A.; Ruthi, Larry J.

    1986-02-01

    Computer-generated volumetric radar algorithms have been available at a few operational National Weather Service sites since the mid-1970s under the Digitized Radar Experiment (D/RADEX) and Radar Data Processor (RADAP II) programs. The algorithms were first used extensively for severe-storm warnings at the Oklahoma City National Weather Service Forecast Office (WSFO OKC) in 1983. RADAP II performance in operational severe-weather forecasting was evaluated using objectively derived warnings based on computer-generated output. Statistical scores of probability of detection, false-alarm rate, and critical success index for the objective warnings were found to be significantly higher than the average statistical scores reported for National Weather Service warnings. Even higher statistical scores were achieved by experienced forecasters using RADAP II in addition to conventional data during the 1983 severe-storm season at WSFO OKC. This investigation lends further support to the suggestion that incorporating improved reflectivity-based algorithms with Doppler into the future Advanced Weather Interactive Processing System for the 1990s (AWIPS-90) or the Next Generation Weather Radar (NEXRAD) system should greatly enhance severe-storm-detection capabilities.

  10. An effective algorithm for radar dim moving target detection

    NASA Astrophysics Data System (ADS)

    Luo, Qian; Wang, Yanfei

    2009-10-01

    The detection and tracking of dim moving targets in very low signal-to-noise ratio (SNR) environments has been a difficult problem in radar signal processing. For detecting low-SNR moving targets, a new improved dynamic programming algorithm based on the track-before-detect method is presented. The new algorithm integrates energy along target moving tracks according to target motion parameter information, replacing exhaustive search with a feasible algorithm. Simulations confirm that this algorithm is computationally efficient and feasible, and can effectively estimate the trajectories of dim closing moving targets. The process has also been shown to improve detection performance.
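
    The energy-integration idea can be sketched in one dimension, with a crude one-cell-per-frame velocity constraint standing in for the paper's target motion parameters:

```python
def tbd_dp(frames, max_shift=1):
    """Track-before-detect by dynamic programming: each cell accumulates
    its own intensity plus the best accumulated energy among cells within
    `max_shift` positions in the previous frame, so energy is integrated
    along feasible tracks rather than single frames."""
    n = len(frames[0])
    acc = list(frames[0])
    for frame in frames[1:]:
        acc = [frame[i] + max(acc[max(0, i - max_shift):i + max_shift + 1])
               for i in range(n)]
    return acc  # the peak marks the most likely track end point

# 1-D frames: a dim target moves right one cell per frame.
frames = [
    [0, 3, 0, 0, 0],
    [0, 0, 3, 0, 0],
    [0, 0, 0, 3, 0],
]
energy = tbd_dp(frames)
```

    No single frame exceeds the per-frame intensity 3, but the integrated track energy at the final target cell sums to 9, which is what makes a threshold on the accumulated map more sensitive than per-frame detection.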

  11. Dual-Byte-Marker Algorithm for Detecting JFIF Header

    NASA Astrophysics Data System (ADS)

    Mohamad, Kamaruddin Malik; Herawan, Tutut; Deris, Mustafa Mat

    An efficient algorithm for detecting JPEG files is vital to reduce the time taken to analyze the ever-increasing data in hard drives and physical memory. In a previous paper, a single-byte-marker algorithm was proposed for header detection. In this paper, another novel header detection algorithm, called dual-byte-marker, is proposed. Experiments on images from hard disks, physical memory and the DFRWS 2006 Challenge data set show that the dual-byte-marker algorithm outperforms the single-byte-marker algorithm, with better execution time for header detection.
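    The marker bytes such a detector looks for are fixed by the JFIF specification: the SOI marker (FF D8) followed by an APP0 segment (FF E0) carrying the "JFIF" identifier. A minimal sketch of marker-based header scanning, not the paper's exact single- or dual-byte algorithm:

```python
def find_jfif_headers(buf: bytes):
    """Return byte offsets in buf where a JFIF header starts."""
    offsets = []
    pos = buf.find(b"\xff\xd8\xff\xe0")  # SOI + APP0 marker pair
    while pos != -1:
        # APP0 payload: 2-byte segment length, then the identifier "JFIF\x00"
        if buf[pos + 6:pos + 11] == b"JFIF\x00":
            offsets.append(pos)
        pos = buf.find(b"\xff\xd8\xff\xe0", pos + 1)
    return offsets
```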

  12. Development of a novel constellation based landmark detection algorithm

    NASA Astrophysics Data System (ADS)

    Ghayoor, Ali; Vaidya, Jatin G.; Johnson, Hans J.

    2013-03-01

    Anatomical landmarks such as the anterior commissure (AC) and posterior commissure (PC) are commonly used by researchers for co-registration of images. In this paper, we present a novel, automated approach for landmark detection that combines morphometric constraining and statistical shape models to provide accurate estimation of landmark points. The method is made robust to large rotations in initial head orientation by extracting additional information about the eye centers using a radial Hough transform and by exploiting the centroid of head mass (CM) using a novel estimation approach. To evaluate its effectiveness, the algorithm is trained on a set of 20 images with manually selected landmarks, and a test dataset is used to compare the automatically detected landmark locations of the AC, PC, midbrain-pons junction (MPJ), and fourth ventricle notch (VN4) against the manually selected ones. The results show that the proposed method is accurate, with an average error between the automatically and manually labeled landmark points of less than 1 mm. The algorithm is also highly robust, as it ran successfully on a large dataset comprising images with various orientations, spacings, and origins.

  13. Lane detection algorithm for an onboard camera

    NASA Astrophysics Data System (ADS)

    Bellino, Mario; Lopez de Meneses, Yuri; Ryser, Peter; Jacot, Jacques

    2005-02-01

    After analysing the major causes of injuries and death on roads, it is understandable that one of the main goals in the automotive industry is to increase vehicle safety. The European project SPARC (Secure Propulsion using Advanced Redundant Control) is developing the next generation of trucks that will fulfil these aims, and the main technologies used in SPARC to achieve the desired level of safety are presented. In order to avoid accidents in critical situations, a representation of the vehicle's environment is necessary, so several solutions using different sensors are described and analysed. In particular, one branch of the project aims to integrate cameras into automotive vehicles to increase safety and prevent driver mistakes. With this vision platform it is possible to extract the position of the lane with respect to the vehicle and thus help the driver follow the optimal trajectory. A definition of lane is proposed, and a lane detection algorithm is presented. Several criteria for improving the detection are explained in detail. Such an embedded camera is, however, subject to the vibration of the truck, and the resulting sequence of images is difficult to analyse, so we present different solutions to stabilize the images, in particular a new approach developed by the Laboratoire de Production Microtechnique. Previous work demonstrated that the presence of noise can be exploited through a phenomenon called stochastic resonance. Thus, instead of decreasing the influence of noise in industrial applications, which has non-negligible costs, it may be advantageous to use this phenomenon to reveal useful information, such as the contours of objects and lanes.

  14. Optimal multisensor decision fusion of mine detection algorithms

    NASA Astrophysics Data System (ADS)

    Liao, Yuwei; Nolte, Loren W.; Collins, Leslie M.

    2003-09-01

    Numerous detection algorithms, using various sensor modalities, have been developed for the detection of mines in cluttered and noisy backgrounds. The performance of each detection algorithm is typically reported in terms of the receiver operating characteristic (ROC), a plot of the probability of detection versus the probability of false alarm as a function of the threshold setting on the output decision variable of the algorithm. In this paper we present multi-sensor decision fusion algorithms that combine the local decisions of existing detection algorithms for different sensors. In certain situations this offers an expedient, attractive and much simpler alternative to "starting over" with the redesign of a new algorithm that fuses multiple sensors at the data level. The goal of our multi-sensor decision fusion approach is to exploit complementary strengths of existing multi-sensor algorithms so as to achieve performance (ROC) that exceeds the performance of any sensor algorithm operating in isolation. Our approach is based on optimal signal detection theory, using the likelihood ratio. We consider the optimal fusion of local decisions for two sensors, GPR (ground-penetrating radar) and MD (metal detector). A new robust algorithm for decision fusion is presented that addresses the problem that the statistics of the training data are not likely to exactly match the statistics of the test data. ROCs are presented and compared for real data.
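    Likelihood-ratio fusion of local binary decisions can be sketched with the classic Chair-Varshney rule, in which each sensor's decision is weighted by its detection and false-alarm probabilities. The Pd/Pfa figures in the test are illustrative assumptions, not values from the paper.

```python
import math

def fuse_decisions(decisions, pd, pfa, threshold=0.0):
    """decisions: local 0/1 decisions; pd/pfa: per-sensor detection and
    false-alarm probabilities. Declares a detection when the fused
    log-likelihood ratio exceeds the threshold."""
    llr = 0.0
    for u, d, f in zip(decisions, pd, pfa):
        # a confident sensor (high pd, low pfa) gets a large weight
        llr += math.log(d / f) if u else math.log((1 - d) / (1 - f))
    return llr > threshold
```

    Note the appeal of decision-level fusion here: only each sensor's binary output and its operating point are needed, not the raw sensor data.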

  15. A Formally Verified Conflict Detection Algorithm for Polynomial Trajectories

    NASA Technical Reports Server (NTRS)

    Narkawicz, Anthony; Munoz, Cesar

    2015-01-01

    In air traffic management, conflict detection algorithms are used to determine whether or not aircraft are predicted to lose horizontal and vertical separation minima within a time interval assuming a trajectory model. In the case of linear trajectories, conflict detection algorithms have been proposed that are both sound, i.e., they detect all conflicts, and complete, i.e., they do not present false alarms. In general, for arbitrary nonlinear trajectory models, it is possible to define detection algorithms that are either sound or complete, but not both. This paper considers the case of nonlinear aircraft trajectory models based on polynomial functions. In particular, it proposes a conflict detection algorithm that precisely determines whether, given a lookahead time, two aircraft flying polynomial trajectories are in conflict. That is, it has been formally verified that, assuming that the aircraft trajectories are modeled as polynomial functions, the proposed algorithm is both sound and complete.
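    A rough numeric illustration of the problem being solved (not the paper's formally verified, sound-and-complete procedure): given polynomial models of the horizontal relative position of two aircraft, dense sampling can flag a loss of separation within the lookahead time. Sampling alone can miss brief conflicts, which is exactly why an exact method is needed.

```python
def eval_poly(coeffs, t):
    """Evaluate a polynomial with coefficients in increasing order of degree."""
    return sum(c * t**k for k, c in enumerate(coeffs))

def in_conflict(rel_x, rel_y, D, T, steps=10_000):
    """rel_x, rel_y: polynomial coefficients of the horizontal relative
    position; returns True if the sampled separation drops below D before T."""
    for i in range(steps + 1):
        t = T * i / steps
        dx, dy = eval_poly(rel_x, t), eval_poly(rel_y, t)
        if dx * dx + dy * dy < D * D:   # squared-distance test avoids sqrt
            return True
    return False
```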

  16. Dynamic programming algorithm for detecting dim infrared moving targets

    NASA Astrophysics Data System (ADS)

    He, Lisha; Mao, Liangjing; Xie, Lijun

    2009-10-01

    Infrared (IR) target detection is a key part of airborne infrared weapon systems, especially the detection of dim moving IR targets embedded in complex backgrounds. This paper presents an improved dynamic programming (DP) algorithm for low signal-to-noise ratio (SNR) infrared dim moving targets in cluttered backgrounds. After suppressing background noise with a mathematical morphology preprocessor, the algorithm brings the dim target to prominence by accumulating pixel energy over the image sequence. By exploiting the continuity and stability of the target's energy and heading, the algorithm resolves the energy-scattering problem of the original DP algorithm. An effective energy segmentation threshold is given by a contrast-limited adaptive histogram equalization (CLAHE) filter with a regional peak extraction algorithm. Simulation results show that the improved DP tracking algorithm performs well in detecting dim targets.
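    The morphological preprocessing step can be sketched as a grayscale white top-hat (the image minus its opening), which flattens a smooth background so small bright targets stand out. The 3x3 structuring element is an assumption, not the paper's choice.

```python
def _window_op(img, op):
    """Apply min (erosion) or max (dilation) over each 3x3 neighbourhood."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = op(vals)
    return out

def white_tophat(img):
    """Image minus its morphological opening (erosion then dilation)."""
    opened = _window_op(_window_op(img, min), max)
    return [[p - o for p, o in zip(prow, orow)]
            for prow, orow in zip(img, opened)]
```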

  17. Evaluation schemes for video and image anomaly detection algorithms

    NASA Astrophysics Data System (ADS)

    Parameswaran, Shibin; Harguess, Josh; Barngrover, Christopher; Shafer, Scott; Reese, Michael

    2016-05-01

    Video anomaly detection is a critical research area in computer vision, and a natural first step before applying object recognition algorithms. Many algorithms that detect anomalies (outliers) in videos and images have been introduced in recent years. However, these algorithms behave and perform differently depending on the domains and tasks to which they are applied. In order to better understand the strengths and weaknesses of outlier algorithms and their applicability in a particular domain or task of interest, it is important to measure and quantify their performance using appropriate evaluation metrics. Many evaluation metrics have been used in the literature, such as precision curves, precision-recall curves, and receiver operating characteristic (ROC) curves. In order to construct these metrics, it is also important to choose an appropriate evaluation scheme that decides when a proposed detection is considered a true or a false detection. Choosing the right evaluation metric and the right scheme is critical, since the choice can introduce positive or negative bias in the measuring criterion and may favor (or work against) a particular algorithm or task. In this paper, we review evaluation metrics and popular evaluation schemes that are used to measure the performance of anomaly detection algorithms on videos and imagery with one or more anomalies, and we analyze the biases introduced by these schemes by measuring the performance of an existing anomaly detection algorithm.
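    As a concrete example of one metric named above, an ROC curve and its area can be computed directly from detector scores and ground-truth labels. This sketch assumes distinct scores; tied scores would need threshold grouping.

```python
def roc_auc(labels, scores):
    """labels: 1 for true anomalies, 0 otherwise; scores: detector outputs.
    Sweeps the threshold from high to low and sums trapezoids under the curve."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels) or 1
    neg = (len(labels) - sum(labels)) or 1
    tp = fp = 0
    auc = prev_fpr = prev_tpr = 0.0
    for _, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        fpr, tpr = fp / neg, tp / pos
        auc += (fpr - prev_fpr) * (tpr + prev_tpr) / 2  # trapezoid rule
        prev_fpr, prev_tpr = fpr, tpr
    return auc
```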

  18. Improvement and implementation for Canny edge detection algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Qiu, Yue-hong

    2015-07-01

    Edge detection is necessary for image segmentation and pattern recognition. In this paper, an improved Canny edge detection approach is proposed to address the defects of the traditional algorithm. A modified bilateral filter with a compensation function based on pixel-intensity similarity judgment is used to smooth the image instead of a Gaussian filter, which preserves edge features while removing noise effectively. To reduce sensitivity to noise in the gradient calculation, the algorithm uses gradient templates in four directions. Finally, the Otsu algorithm adaptively obtains the dual threshold. The algorithm was implemented with the OpenCV 2.4.0 library under Visual Studio 2010, and experimental analysis shows that the improved algorithm detects edge details more effectively and with greater adaptability.
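    The thresholding stage rests on Otsu's method, which picks the gray level that maximizes the between-class variance of the histogram. The paper derives a dual threshold from it; the single-threshold core can be sketched as follows:

```python
def otsu_threshold(pixels, levels=256):
    """Return the gray level maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w0 += hist[t]                      # weight of the "background" class
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0                     # background mean
        m1 = (total_sum - sum0) / (total - w0)  # foreground mean
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

    A dual-threshold Canny variant would then derive the low threshold as a fixed fraction of the Otsu value; that ratio is a design choice not shown here.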

  19. An Improved QRS Wave Group Detection Algorithm and Matlab Implementation

    NASA Astrophysics Data System (ADS)

    Zhang, Hongjun

    This paper presents an algorithm, implemented in Matlab, for detecting the QRS complex in the MIT-BIH ECG database. First, noise in the ECG is removed with a Butterworth filter; the signal is then analyzed using the wavelet transform, whose detection of singularities yields more accurate detection of the QRS complex.
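    A much-simplified stand-in for the pipeline above (thresholding the squared first difference in place of Butterworth filtering plus wavelet singularity analysis) illustrates the overall shape of a QRS detector; the threshold and refractory period are assumptions.

```python
def detect_r_peaks(ecg, threshold, refractory=20):
    """Return sample indices of R-peak candidates."""
    # the squared first difference emphasizes the steep QRS slopes
    diff_sq = [(b - a) ** 2 for a, b in zip(ecg, ecg[1:])]
    peaks, last = [], -refractory
    for i, v in enumerate(diff_sq):
        if v > threshold and i - last >= refractory:
            # refine to the local maximum of the raw signal nearby
            lo, hi = max(0, i - 5), min(len(ecg), i + 6)
            peaks.append(max(range(lo, hi), key=ecg.__getitem__))
            last = i
    return peaks
```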

  20. Identifying time measurement tampering in the traversal time and hop count analysis (TTHCA) wormhole detection algorithm.

    PubMed

    Karlsson, Jonny; Dooley, Laurence S; Pulkkis, Göran

    2013-01-01

    Traversal time and hop count analysis (TTHCA) is a recent wormhole detection algorithm for mobile ad hoc networks (MANET) which provides enhanced detection performance against all wormhole attack variants and network types. TTHCA involves each node measuring the processing time of routing packets during the route discovery process and then delivering the measurements to the source node. In a participation mode (PM) wormhole where malicious nodes appear in the routing tables as legitimate nodes, the time measurements can potentially be altered so preventing TTHCA from successfully detecting the wormhole. This paper analyses the prevailing conditions for time tampering attacks to succeed for PM wormholes, before introducing an extension to the TTHCA detection algorithm called ∆T Vector which is designed to identify time tampering, while preserving low false positive rates. Simulation results confirm that the ∆T Vector extension is able to effectively detect time tampering attacks, thereby providing an important security enhancement to the TTHCA algorithm. PMID:23686143
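    The intuition behind a ΔT-style consistency check can be illustrated as follows (this is an illustration of the idea, not the paper's formulation): if the per-hop processing times reported along a route do not add up to the measured end-to-end traversal time minus the expected propagation delay, some node has likely tampered with its measurement.

```python
def time_tampering_suspected(reported_hop_times, measured_total,
                             propagation_per_hop, tolerance):
    """Compare the sum of reported per-hop processing times (plus an assumed
    per-hop propagation delay) against the measured end-to-end time."""
    hops = len(reported_hop_times)
    expected = sum(reported_hop_times) + hops * propagation_per_hop
    return abs(measured_total - expected) > tolerance
```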

  1. Identifying Time Measurement Tampering in the Traversal Time and Hop Count Analysis (TTHCA) Wormhole Detection Algorithm

    PubMed Central

    Karlsson, Jonny; Dooley, Laurence S.; Pulkkis, Göran

    2013-01-01

    Traversal time and hop count analysis (TTHCA) is a recent wormhole detection algorithm for mobile ad hoc networks (MANET) which provides enhanced detection performance against all wormhole attack variants and network types. TTHCA involves each node measuring the processing time of routing packets during the route discovery process and then delivering the measurements to the source node. In a participation mode (PM) wormhole where malicious nodes appear in the routing tables as legitimate nodes, the time measurements can potentially be altered so preventing TTHCA from successfully detecting the wormhole. This paper analyses the prevailing conditions for time tampering attacks to succeed for PM wormholes, before introducing an extension to the TTHCA detection algorithm called ΔT Vector which is designed to identify time tampering, while preserving low false positive rates. Simulation results confirm that the ΔT Vector extension is able to effectively detect time tampering attacks, thereby providing an important security enhancement to the TTHCA algorithm. PMID:23686143

  2. Identifying time measurement tampering in the traversal time and hop count analysis (TTHCA) wormhole detection algorithm.

    PubMed

    Karlsson, Jonny; Dooley, Laurence S; Pulkkis, Göran

    2013-05-17

    Traversal time and hop count analysis (TTHCA) is a recent wormhole detection algorithm for mobile ad hoc networks (MANET) which provides enhanced detection performance against all wormhole attack variants and network types. TTHCA involves each node measuring the processing time of routing packets during the route discovery process and then delivering the measurements to the source node. In a participation mode (PM) wormhole where malicious nodes appear in the routing tables as legitimate nodes, the time measurements can potentially be altered so preventing TTHCA from successfully detecting the wormhole. This paper analyses the prevailing conditions for time tampering attacks to succeed for PM wormholes, before introducing an extension to the TTHCA detection algorithm called ∆T Vector which is designed to identify time tampering, while preserving low false positive rates. Simulation results confirm that the ∆T Vector extension is able to effectively detect time tampering attacks, thereby providing an important security enhancement to the TTHCA algorithm.

  3. A baseline algorithm for face detection and tracking in video

    NASA Astrophysics Data System (ADS)

    Manohar, Vasant; Soundararajan, Padmanabhan; Korzhova, Valentina; Boonstra, Matthew; Goldgof, Dmitry; Kasturi, Rangachar

    2007-10-01

    Establishing benchmark datasets, performance metrics and baseline algorithms has considerable research significance in gauging the progress in any application domain. These primarily allow both users and developers to compare the performance of various algorithms on a common platform. In our earlier works, we focused on developing performance metrics and establishing a substantial dataset with ground truth for object detection and tracking tasks (text and face) in two video domains -- broadcast news and meetings. In this paper, we present the results of a face detection and tracking algorithm on broadcast news videos with the objective of establishing a baseline performance for this task-domain pair. The detection algorithm uses a statistical approach originally developed by Viola and Jones and later extended by Lienhart, with Haar-like features and a cascade of boosted decision tree classifiers as the statistical model. In this work, we used the Intel Open Source Computer Vision Library (OpenCV) implementation of the Haar face detection algorithm. The optimal values for its tunable parameters were found through an experimental design strategy commonly used in statistical analyses of industrial processes. Tracking was accomplished as continuous detection, with the detected objects in two frames mapped using a greedy algorithm based on the distances between the centroids of bounding boxes. Results on the evaluation set of 50 sequences (~ 2.5 mins.), using the developed performance metrics, show good performance of the algorithm, reflecting the state of the art, which makes it an appropriate choice as the baseline algorithm for the problem.
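    The greedy centroid-matching step used for tracking can be sketched as follows; the bounding-box format (x, y, w, h) and the gating distance are assumptions, not details from the paper.

```python
def centroid(box):
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def greedy_match(boxes_a, boxes_b, max_dist=50.0):
    """Match detections in consecutive frames greedily by ascending
    centroid distance; returns (index_in_a, index_in_b) pairs."""
    pairs = []
    for i, a in enumerate(boxes_a):
        ax, ay = centroid(a)
        for j, b in enumerate(boxes_b):
            bx, by = centroid(b)
            d = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
            if d <= max_dist:          # gate out implausible jumps
                pairs.append((d, i, j))
    pairs.sort()
    matches, used_a, used_b = [], set(), set()
    for d, i, j in pairs:              # closest pairs claim matches first
        if i not in used_a and j not in used_b:
            matches.append((i, j))
            used_a.add(i)
            used_b.add(j)
    return matches
```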

  4. A computationally efficient QRS detection algorithm for wearable ECG sensors.

    PubMed

    Wang, Y; Deepu, C J; Lian, Y

    2011-01-01

    In this paper we present a novel Dual-Slope QRS detection algorithm with low computational complexity, suitable for wearable ECG devices. The Dual-Slope algorithm calculates the slopes on both sides of a peak in the ECG signal; based on these slopes, three criteria are developed for simultaneously checking (1) the steepness, (2) the shape and (3) the height of the signal in order to locate the QRS complex. Evaluated against the MIT-BIH Arrhythmia Database, the algorithm achieves a very high detection rate of 99.45%, a sensitivity of 99.82% and a positive predictive value of 99.63%. PMID:22255619
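    A hedged sketch of the dual-slope idea (window length and thresholds are illustrative, not the paper's values): at each local maximum, the steepest slope over a short window on each side is computed, and the steepness and height criteria must both hold.

```python
def dual_slope_qrs(ecg, win=8, steep=0.1, height=0.5):
    """Return indices of samples passing simplified dual-slope criteria."""
    peaks = []
    for i in range(win, len(ecg) - win):
        if ecg[i] != max(ecg[i - win:i + win + 1]):
            continue                       # shape: keep only local maxima
        left = max((ecg[i] - ecg[i - k]) / k for k in range(1, win + 1))
        right = max((ecg[i] - ecg[i + k]) / k for k in range(1, win + 1))
        if min(left, right) > steep and ecg[i] > height:
            peaks.append(i)                # steep on both sides, tall enough
    return peaks
```

    The appeal for wearables is that every operation here is a difference, comparison, or division by a small constant: no filtering or transform stages are required.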

  5. Detecting cosmic strings in the CMB with the Canny algorithm

    SciTech Connect

    Amsel, Stephen; Brandenberger, Robert H; Berger, Joshua E-mail: jb454@cornell.edu

    2008-04-15

    Line discontinuities in cosmic microwave background anisotropy maps are a distinctive prediction of models with cosmic strings. These signatures are visible in anisotropy maps with good angular resolution and should be identifiable using edge-detection algorithms. One such algorithm is the Canny algorithm. We study the potential of this algorithm to pick out the line discontinuities generated by cosmic strings. By applying the algorithm to small-scale microwave anisotropy maps generated from theoretical models with and without cosmic strings, we find that, given an angular resolution of several minutes of arc, cosmic strings can be detected down to a limit of the mass per unit length of the string which is one order of magnitude lower than the current upper bounds.

  6. AdaBoost-based algorithm for network intrusion detection.

    PubMed

    Hu, Weiming; Hu, Wei; Maybank, Steve

    2008-04-01

    Network intrusion detection aims at distinguishing the attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack fashions, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers. The decision rules are provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data. PMID:18348941
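    A minimal sketch of AdaBoost with decision stumps on continuous features (the paper's decision rules for categorical features, adaptive initial weights, and overfitting safeguards are omitted for brevity):

```python
import math

def train_stump(X, y, w):
    """Pick the (feature, threshold, polarity) minimizing weighted error."""
    best = (float("inf"), None)
    for f in range(len(X[0])):
        for thr in sorted({x[f] for x in X}):
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (pol if xi[f] >= thr else -pol) != yi)
                if err < best[0]:
                    best = (err, (f, thr, pol))
    return best

def adaboost(X, y, rounds=10):
    """y in {-1, +1}. Returns a list of (alpha, stump) weak classifiers."""
    n = len(X)
    w = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        err, stump = train_stump(X, y, w)
        if stump is None or err >= 0.5:
            break
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-10))
        model.append((alpha, stump))
        if err < 1e-12:
            break                       # weak learner is already perfect
        f, thr, pol = stump
        # reweight: boost the misclassified samples
        w = [wi * math.exp(-alpha * yi * (pol if xi[f] >= thr else -pol))
             for xi, yi, wi in zip(X, y, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return model

def predict(model, x):
    score = sum(a * (pol if x[f] >= thr else -pol)
                for a, (f, thr, pol) in model)
    return 1 if score >= 0 else -1
```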

  7. Algorithm for Detecting Significant Locations from Raw GPS Data

    NASA Astrophysics Data System (ADS)

    Kami, Nobuharu; Enomoto, Nobuyuki; Baba, Teruyuki; Yoshikawa, Takashi

    We present a fast algorithm for probabilistically extracting significant locations from raw GPS data based on data point density. Extracting significant locations from raw GPS data is the first essential step of algorithms designed for location-aware applications. Assuming that a location is significant if users spend a certain time around that area, most current algorithms compare spatial/temporal variables, such as stay duration and roaming diameter, with given fixed thresholds to extract significant locations. However, the appropriate threshold values are not clearly known a priori, and algorithms with fixed thresholds are inherently error-prone, especially under high noise levels. Moreover, for N data points they are generally O(N^2) algorithms, since distance computation is required. We developed a fast algorithm for selectively sampling data points around significant locations based on density information, constructing random histograms using locality-sensitive hashing. Evaluations show competitive performance in detecting significant locations even under high noise levels.
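    The density idea can be sketched with a plain grid histogram standing in for the paper's locality-sensitive-hashing random histograms: points are hashed into cells in linear time, and dense cells become candidate significant locations. The cell size and count threshold below are assumptions.

```python
from collections import Counter

def significant_cells(points, cell=0.001, min_count=5):
    """points: (lat, lon) pairs; returns centers of dense grid cells."""
    counts = Counter((round(lat / cell), round(lon / cell))
                     for lat, lon in points)
    return [(i * cell, j * cell)
            for (i, j), c in counts.items() if c >= min_count]
```

    This is O(N) in the number of points, which is the property the paper's hashing construction preserves while avoiding a fixed grid.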

  8. Vision-based algorithms for near-host object detection and multilane sensing

    NASA Astrophysics Data System (ADS)

    Kenue, Surender K.

    1995-01-01

    Vision-based sensing can be used for lane sensing, adaptive cruise control, collision warning, and driver performance monitoring functions of intelligent vehicles. Current computer vision algorithms are not robust for handling multiple vehicles in highway scenarios. Several new algorithms are proposed for multi-lane sensing, near-host object detection, vehicle cut-in situations, and specifying regions of interest for object tracking. These algorithms were tested successfully on more than 6000 images taken from real-highway scenes under different daytime lighting conditions.

  9. A Motion Detection Algorithm Using Local Phase Information.

    PubMed

    Lazar, Aurel A; Ukani, Nikul H; Zhou, Yiyin

    2016-01-01

    Previous research demonstrated that global phase alone can be used to faithfully represent visual scenes. Here we provide a reconstruction algorithm by using only local phase information. We also demonstrate that local phase alone can be effectively used to detect local motion. The local phase-based motion detector is akin to models employed to detect motion in biological vision, for example, the Reichardt detector. The local phase-based motion detection algorithm introduced here consists of two building blocks. The first building block measures/evaluates the temporal change of the local phase. The temporal derivative of the local phase is shown to exhibit the structure of a second order Volterra kernel with two normalized inputs. We provide an efficient, FFT-based algorithm for implementing the change of the local phase. The second processing building block implements the detector; it compares the maximum of the Radon transform of the local phase derivative with a chosen threshold. We demonstrate examples of applying the local phase-based motion detection algorithm on several video sequences. We also show how the locally detected motion can be used for segmenting moving objects in video scenes and compare our local phase-based algorithm to segmentation achieved with a widely used optic flow algorithm.

  10. A Motion Detection Algorithm Using Local Phase Information.

    PubMed

    Lazar, Aurel A; Ukani, Nikul H; Zhou, Yiyin

    2016-01-01

    Previous research demonstrated that global phase alone can be used to faithfully represent visual scenes. Here we provide a reconstruction algorithm by using only local phase information. We also demonstrate that local phase alone can be effectively used to detect local motion. The local phase-based motion detector is akin to models employed to detect motion in biological vision, for example, the Reichardt detector. The local phase-based motion detection algorithm introduced here consists of two building blocks. The first building block measures/evaluates the temporal change of the local phase. The temporal derivative of the local phase is shown to exhibit the structure of a second order Volterra kernel with two normalized inputs. We provide an efficient, FFT-based algorithm for implementing the change of the local phase. The second processing building block implements the detector; it compares the maximum of the Radon transform of the local phase derivative with a chosen threshold. We demonstrate examples of applying the local phase-based motion detection algorithm on several video sequences. We also show how the locally detected motion can be used for segmenting moving objects in video scenes and compare our local phase-based algorithm to segmentation achieved with a widely used optic flow algorithm. PMID:26880882

  11. Estimating the chance of success in IVF treatment using a ranking algorithm.

    PubMed

    Güvenir, H Altay; Misirli, Gizem; Dilbaz, Serdar; Ozdegirmenci, Ozlem; Demir, Berfu; Dilbaz, Berna

    2015-09-01

    In medicine, estimating the chance of success of a treatment is important in deciding whether to begin it. This paper focuses on the domain of in vitro fertilization (IVF), where estimating the outcome of a treatment is crucial in the decision to proceed, for both the clinicians and the infertile couples. IVF treatment is a costly process that is highly stressful for couples who want to have a baby, and if an initial evaluation indicates a low chance of pregnancy, a couple may decide not to start it. The aim of this study is twofold: first, to develop a technique that can be used to estimate the chance of success for a couple who want to have a baby, and second, to determine the attributes, and their particular values, affecting the outcome of IVF treatment. We propose a new technique, called success estimation using a ranking algorithm (SERA), for estimating the success of a treatment using a ranking-based algorithm; the particular ranking algorithm used here is RIMARC. The performance of the new algorithm is compared with two well-known algorithms that assign class probabilities to query instances, the Naïve Bayes classifier and Random Forest. The comparison is done in terms of area under the ROC curve, accuracy and execution time, using tenfold stratified cross-validation. The results indicate that the proposed SERA algorithm has the potential to be used successfully to estimate the probability of success in medical treatment.

  12. Line matching for automatic change detection algorithm

    NASA Astrophysics Data System (ADS)

    Dhollande, Jérôme; Monnin, David; Gond, Laetitia; Cudel, Christophe; Kohler, Sophie; Dieterlen, Alain

    2012-06-01

    During foreign operations, improvised explosive devices (IEDs) are one of the major threats that soldiers may unfortunately encounter along itineraries. Based on a vehicle-mounted camera, we propose an original approach, by image comparison, to detect significant changes on these roads. Classic 2D image registration techniques do not take parallax phenomena into account, so misregistration errors could be detected as changes. Following stereovision principles, our automatic method compares intensity profiles along corresponding epipolar lines by extrema matching. An adaptive space warping compensates for scale differences in the 3D scene. When the signals are matched, the signal difference highlights changes, which are marked in the current video.

  13. Detecting Community Structure by Using a Constrained Label Propagation Algorithm

    PubMed Central

    Ratnavelu, Kuru

    2016-01-01

    Community structure is considered one of the most interesting features in complex networks. Many real-world complex systems exhibit community structure, where individuals with similar properties form a community. The identification of communities in a network is important for understanding the structure of said network, in a specific perspective. Thus, community detection in complex networks has gained immense interest over the last decade, and many community detection methods have been proposed, one of them being the label propagation algorithm (LPA). The simplicity and time efficiency of the LPA make it a popular community detection method. However, the LPA suffers from unstable detection due to the randomness induced in the algorithm. The focus of this paper is to improve the stability and accuracy of the LPA while retaining its simplicity. Our proposed algorithm first detects the main communities in a network by using the number of mutual neighbouring nodes. Subsequently, nodes are added into communities by using a constrained LPA; the constraints are then gradually relaxed until all nodes are assigned to groups. In order to refine the quality of the detected communities, nodes can be switched to another community or removed from their current community at various stages of the algorithm. We evaluated our algorithm on three types of benchmark networks, namely the Lancichinetti-Fortunato-Radicchi (LFR), Relaxed Caveman (RC) and Girvan-Newman (GN) benchmarks, and also applied it to some real-world networks of various sizes. The current results show the promising potential of the proposed algorithm in terms of detecting communities accurately. Furthermore, our constrained LPA has robustness and stability that are significantly better than those of the simple LPA, as it is able to yield deterministic results. PMID:27176470
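    For contrast with the constrained variant proposed above, the plain LPA baseline can be sketched as follows. To keep the sketch reproducible it breaks ties deterministically (preferring the current label, then the smallest), whereas the random tie-breaking of the standard LPA is precisely the instability the paper addresses.

```python
import random

def label_propagation(adj, rounds=100, seed=0):
    """adj: {node: [neighbours]}; returns {node: community_label}.
    Each node repeatedly adopts the label most common among its neighbours."""
    rng = random.Random(seed)
    labels = {n: n for n in adj}
    nodes = list(adj)
    for _ in range(rounds):
        rng.shuffle(nodes)             # asynchronous updates in random order
        changed = False
        for n in nodes:
            counts = {}
            for m in adj[n]:
                counts[labels[m]] = counts.get(labels[m], 0) + 1
            if not counts:
                continue
            top = max(counts.values())
            cands = sorted(l for l, c in counts.items() if c == top)
            new = labels[n] if labels[n] in cands else cands[0]
            if new != labels[n]:
                labels[n] = new
                changed = True
        if not changed:
            break                      # fixed point reached
    return labels
```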

  14. A two-level detection algorithm for optical fiber vibration

    NASA Astrophysics Data System (ADS)

    Bi, Fukun; Ren, Xuecong; Qu, Hongquan; Jiang, Ruiqing

    2015-09-01

    Optical fiber vibration is detected by the coherent optical time-domain reflection technique. In addition to the vibration signals, the reflected signals include clutter and noise, which lead to a high false-alarm rate. The "cell averaging" constant false-alarm rate algorithm has a high computing speed, but its detection performance declines in nonhomogeneous environments such as multiple-target scenarios. The "order statistics" constant false-alarm rate algorithm has a distinct advantage in multiple-target environments, but a lower computing speed. An intelligent two-level detection algorithm is presented in which "cell averaging" and "order statistics" detectors work in series, preserving the detection speed of the former and the performance of the latter. Through adaptive selection, "cell averaging" is applied in homogeneous environments and the two-level detection algorithm is employed in nonhomogeneous environments. Our Monte Carlo simulation results demonstrate that, over a range of signal-to-noise ratios, the proposed algorithm gives a better detection probability than "order statistics" alone.
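    The "cell averaging" stage can be sketched as a one-dimensional CA-CFAR detector: the noise level at each cell under test is estimated as the mean of the surrounding training cells (guard cells excluded) and scaled by a factor that sets the false-alarm rate. The window sizes and scale factor below are illustrative assumptions.

```python
def ca_cfar(signal, train=8, guard=2, scale=3.0):
    """Return indices of cells declared as detections."""
    hits = []
    for i in range(train + guard, len(signal) - train - guard):
        # training cells on each side, skipping the guard cells
        leading = signal[i - train - guard:i - guard]
        lagging = signal[i + guard + 1:i + guard + 1 + train]
        noise = (sum(leading) + sum(lagging)) / (2 * train)
        if signal[i] > scale * noise:
            hits.append(i)
    return hits
```

    An OS-CFAR variant would replace the mean with an order statistic (e.g. a high percentile) of the training cells, trading speed for robustness when interfering targets sit in the training window.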

  15. Algorithms to detect multiprotein modularity conserved during evolution.

    PubMed

    Hodgkinson, Luqman; Karp, Richard M

    2012-01-01

    Detecting essential multiprotein modules that change infrequently during evolution is a challenging algorithmic task that is important for understanding the structure, function, and evolution of the biological cell. In this paper, we define a measure of modularity for interactomes and present a linear-time algorithm, Produles, for detecting multiprotein modularity conserved during evolution that improves on the running time of previous algorithms for related problems and offers desirable theoretical guarantees. We present a biologically motivated graph theoretic set of evaluation measures complementary to previous evaluation measures, demonstrate that Produles exhibits good performance by all measures, and describe certain recurrent anomalies in the performance of previous algorithms that are not detected by previous measures. Consideration of the newly defined measures and algorithm performance on these measures leads to useful insights on the nature of interactomics data and the goals of previous and current algorithms. Through randomization experiments, we demonstrate that conserved modularity is a defining characteristic of interactomes. Computational experiments on current experimentally derived interactomes for Homo sapiens and Drosophila melanogaster, combining results across algorithms, show that nearly 10 percent of current interactome proteins participate in multiprotein modules with good evidence in the protein interaction data of being conserved between human and Drosophila.

  16. A Comparative Analysis of Community Detection Algorithms on Artificial Networks

    PubMed Central

    Yang, Zhao; Algesheimer, René; Tessone, Claudio J.

    2016-01-01

    Many community detection algorithms have been developed to uncover the mesoscopic properties of complex networks. However, how good an algorithm is, in terms of accuracy and computing time, remains an open question. Testing algorithms on real-world networks has certain restrictions that can bias the resulting insights: the networks are usually small, and the underlying communities are not defined objectively. In this study, we employ the Lancichinetti-Fortunato-Radicchi benchmark graph to test eight state-of-the-art algorithms. We quantify accuracy using complementary measures, along with each algorithm’s computing time. Based on simple network properties and the aforementioned results, we provide guidelines that help to choose the most adequate community detection algorithm for a given network. Moreover, these rules allow uncovering limitations in the use of specific algorithms given macroscopic network properties. Our contribution is threefold: firstly, we provide actual techniques to determine which algorithm is most suited in most circumstances, based on observable properties of the network under consideration. Secondly, we use the mixing parameter as an easily measurable indicator for finding the ranges of reliability of the different algorithms. Finally, we study the dependency on network size, focusing on both the algorithm’s predictive power and the effective computing time. PMID:27476470
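    Since the guidelines hinge on the mixing parameter, here is a minimal sketch of how it can be computed for an undirected graph with known community labels: for each node, take the fraction of its edges that leave its community, then average over all nodes. The function name and edge-list representation are illustrative, not from the paper:

```python
# Mixing parameter mu: average over nodes of (inter-community degree / degree).
def mixing_parameter(edges, community):
    """edges: iterable of (u, v) pairs; community: dict node -> label."""
    out_deg, deg = {}, {}
    for u, v in edges:
        for a, b in ((u, v), (v, u)):          # count each endpoint once
            deg[a] = deg.get(a, 0) + 1
            if community[a] != community[b]:   # edge crosses community boundary
                out_deg[a] = out_deg.get(a, 0) + 1
    return sum(out_deg.get(n, 0) / deg[n] for n in deg) / len(deg)
```

    On two triangles joined by a single bridge edge, only the two bridge endpoints have any external edges (1 of 3 each), giving mu = (1/3 + 1/3) / 6 = 1/9, a strongly community-structured regime in LFR terms.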

  17. A Comparative Analysis of Community Detection Algorithms on Artificial Networks

    NASA Astrophysics Data System (ADS)

    Yang, Zhao; Algesheimer, René; Tessone, Claudio J.

    2016-08-01

    Many community detection algorithms have been developed to uncover the mesoscopic properties of complex networks. However, how good an algorithm is, in terms of accuracy and computing time, remains an open question. Testing algorithms on real-world networks has certain restrictions that can bias the resulting insights: the networks are usually small, and the underlying communities are not defined objectively. In this study, we employ the Lancichinetti-Fortunato-Radicchi benchmark graph to test eight state-of-the-art algorithms. We quantify accuracy using complementary measures, along with each algorithm’s computing time. Based on simple network properties and the aforementioned results, we provide guidelines that help to choose the most adequate community detection algorithm for a given network. Moreover, these rules allow uncovering limitations in the use of specific algorithms given macroscopic network properties. Our contribution is threefold: firstly, we provide actual techniques to determine which algorithm is most suited in most circumstances, based on observable properties of the network under consideration. Secondly, we use the mixing parameter as an easily measurable indicator for finding the ranges of reliability of the different algorithms. Finally, we study the dependency on network size, focusing on both the algorithm’s predictive power and the effective computing time.

  18. Novel automatic eye detection and tracking algorithm

    NASA Astrophysics Data System (ADS)

    Ghazali, Kamarul Hawari; Jadin, Mohd Shawal; Jie, Ma; Xiao, Rui

    2015-04-01

    The eye is not only one of the most complex but also the most important sensory organ of the human body. Eye detection and eye tracking are fundamental and active topics in image processing. Non-invasive eye location and tracking are promising for hands-off gaze-based human-computer interfaces, fatigue detection, instrument control by paraplegic patients, and so on. For this purpose, this paper proposes an innovative framework to detect and track eyes in video sequences. The contributions of this work are twofold. The first is that eye filters are trained that can locate eyes efficiently and accurately, without constraints on the background or skin colour. The second is that a tracking framework based on sparse representation and the Lucas-Kanade (LK) optical flow tracker is built that can track eyes regardless of eye state. The experimental results demonstrate the accuracy and real-time applicability of the proposed approach.

  19. Moving target detection algorithm based on Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Wang, Zhihua; Kai, Du; Zhang, Xiandong

    2013-07-01

    In real-time video surveillance systems, background noise and disturbances have a significant impact on the detection of moving objects. The traditional Gaussian mixture model (GMM) adapts well to various complex backgrounds, but it converges slowly and is vulnerable to illumination changes. This paper proposes an improved moving target detection algorithm based on the GMM that increases the convergence rate of the foreground-to-background model transformation and introduces the concept of changing factors; a three-frame differencing method handles sudden illumination changes. The results show that this algorithm improves the accuracy of moving object detection and has good stability and real-time performance.
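    The three-frame differencing step combined with the GMM above can be sketched as follows. This is a hedged toy on flat lists of gray levels (the names and threshold value are assumptions, not the paper's implementation): a pixel is declared foreground only when it differs from both the previous and the next frame, which suppresses the ghosting a single difference produces after a sudden illumination change.

```python
# Three-frame differencing: foreground requires change against BOTH neighbors.
def three_frame_mask(prev, curr, nxt, thresh=25):
    """Frames are flat lists of gray levels; returns a 0/1 foreground mask."""
    d1 = [abs(c - p) > thresh for c, p in zip(curr, prev)]  # change vs. previous frame
    d2 = [abs(c - n) > thresh for c, n in zip(curr, nxt)]   # change vs. next frame
    return [int(a and b) for a, b in zip(d1, d2)]
```

    A global brightness jump changes `curr` relative to `prev` but not relative to `nxt`, so the logical AND rejects it, while a genuinely moving object differs from both neighbors.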

  20. An ellipse detection algorithm based on edge classification

    NASA Astrophysics Data System (ADS)

    Yu, Liu; Chen, Feng; Huang, Jianming; Wei, Xiangquan

    2015-12-01

    In order to enhance the speed and accuracy of ellipse detection, an ellipse detection algorithm based on edge classification is proposed. Redundant edge points are removed by serializing edges into point sequences and applying a distance constraint between edge points. Effective classification is achieved using the angle between edge points as the criterion, which greatly increases the probability that randomly selected edge points fall on the same ellipse. Ellipse fitting accuracy is significantly improved by optimizing the RED algorithm, using the Euclidean distance from each edge point to the elliptical boundary. Experimental results show that the method detects ellipses well even when edges suffer interference or occlude each other, and that it achieves higher detection precision and lower time consumption than the RED algorithm.

  1. Texture orientation-based algorithm for detecting infrared maritime targets.

    PubMed

    Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Xu, Wenhai

    2015-05-20

    Infrared maritime target detection is a key technology for maritime target searching systems. However, in infrared maritime images (IMIs) taken under complicated sea conditions, background clutter, such as ocean waves, clouds or sea fog, usually has high intensity that can easily overwhelm the brightness of real targets, which is difficult for traditional target detection algorithms to deal with. To mitigate this problem, this paper proposes a novel target detection algorithm based on texture orientation. This algorithm first extracts suspected targets by analyzing the intersubband correlation between horizontal and vertical wavelet subbands of the original IMI on the first scale. Then self-adaptive wavelet threshold denoising and local singularity analysis of the original IMI are combined to further remove false alarms. Experiments show that, compared with traditional algorithms, this algorithm suppresses background clutter much better and achieves better single-frame detection of infrared maritime targets. In addition, to further guarantee accurate target extraction, a pipeline-filtering algorithm is adopted to eliminate residual false alarms. The high practical value and applicability of the proposed strategy are strongly supported by experimental data acquired under different environmental conditions.

  2. A fusion method of Gabor wavelet transform and unsupervised clustering algorithms for tissue edge detection.

    PubMed

    Ergen, Burhan

    2014-01-01

    This paper proposes two edge detection methods for medical images by integrating the advantages of Gabor wavelet transform (GWT) and unsupervised clustering algorithms. The GWT is used to enhance the edge information in an image while suppressing noise. Following this, the k-means and Fuzzy c-means (FCM) clustering algorithms are used to convert a gray level image into a binary image. The proposed methods are tested using medical images obtained through Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) devices, and a phantom image. The results prove that the proposed methods are successful for edge detection, even in noisy cases.
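    The clustering stage that converts a gray-level image to a binary one can be sketched with a two-cluster k-means on intensities. This is a simplified stand-in (the Gabor filtering and the FCM variant are omitted, and all names are illustrative), but it shows the gray-to-binary conversion the paper performs after enhancement:

```python
# Two-cluster k-means on gray levels: label each pixel by its nearer centroid.
def kmeans_binarize(pixels, iters=20):
    """pixels: flat list of gray levels; returns a 0/1 label per pixel."""
    c0, c1 = min(pixels), max(pixels)            # initialize centroids at the extremes
    for _ in range(iters):
        lo = [p for p in pixels if abs(p - c0) <= abs(p - c1)]
        hi = [p for p in pixels if abs(p - c0) > abs(p - c1)]
        if lo: c0 = sum(lo) / len(lo)            # recompute centroids
        if hi: c1 = sum(hi) / len(hi)
    return [int(abs(p - c0) > abs(p - c1)) for p in pixels]
```

    On a bimodal intensity distribution (dark background, bright edges) the two centroids settle on the two modes and the labels form the binary edge map.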

  3. Lidar detection algorithm for time and range anomalies.

    PubMed

    Ben-David, Avishai; Davidson, Charles E; Vanderbeek, Richard G

    2007-10-10

    A new detection algorithm for lidar applications has been developed. The detection is based on hyperspectral anomaly detection that is implemented for time anomaly where the question "is a target (aerosol cloud) present at range R within time t(1) to t(2)" is addressed, and for range anomaly where the question "is a target present at time t within ranges R(1) and R(2)" is addressed. A detection score significantly different in magnitude from the detection scores for background measurements suggests that an anomaly (interpreted as the presence of a target signal in space/time) exists. The algorithm employs an option for a preprocessing stage where undesired oscillations and artifacts are filtered out with a low-rank orthogonal projection technique. The filtering technique adaptively removes the one over range-squared dependence of the background contribution of the lidar signal and also aids visualization of features in the data when the signal-to-noise ratio is low. A Gaussian-mixture probability model for two hypotheses (anomaly present or absent) is computed with an expectation-maximization algorithm to produce a detection threshold and probabilities of detection and false alarm. Results of the algorithm for CO(2) lidar measurements of bioaerosol clouds Bacillus atrophaeus (formerly known as Bacillus subtilis niger, BG) and Pantoea agglomerans, Pa (formerly known as Erwinia herbicola, Eh) are shown and discussed. PMID:17932542
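    The Gaussian-mixture step can be sketched as a textbook 1-D EM fit to a set of detection scores, from which a threshold between the two components follows. The initialization, parameter names, and iteration count below are assumptions; the paper's preprocessing and probability-of-detection calculations are not reproduced:

```python
import math

# EM for a two-component 1-D Gaussian mixture fitted to detection scores.
def em_two_gaussians(x, iters=50):
    """Returns (mean0, sd0), (mean1, sd1), weight of the high component."""
    x = sorted(x)
    m0, m1 = x[len(x) // 4], x[3 * len(x) // 4]      # crude quartile initialization
    s0 = s1 = (x[-1] - x[0]) / 4 or 1.0
    w = 0.5
    pdf = lambda v, m, s: math.exp(-0.5 * ((v - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    for _ in range(iters):
        # E-step: responsibility of the high-mean component for each point
        r = [w * pdf(v, m1, s1) / (w * pdf(v, m1, s1) + (1 - w) * pdf(v, m0, s0))
             for v in x]
        # M-step: update weights, means, and standard deviations
        n1 = sum(r); n0 = len(x) - n1
        m1 = sum(ri * v for ri, v in zip(r, x)) / n1
        m0 = sum((1 - ri) * v for ri, v in zip(r, x)) / n0
        s1 = math.sqrt(sum(ri * (v - m1) ** 2 for ri, v in zip(r, x)) / n1) or 1e-6
        s0 = math.sqrt(sum((1 - ri) * (v - m0) ** 2 for ri, v in zip(r, x)) / n0) or 1e-6
        w = n1 / len(x)
    return (m0, s0), (m1, s1), w
```

    A detection threshold can then be placed where the weighted component densities cross, giving the anomaly-present/absent decision the abstract describes.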

  4. Lidar detection algorithm for time and range anomalies.

    PubMed

    Ben-David, Avishai; Davidson, Charles E; Vanderbeek, Richard G

    2007-10-10

    A new detection algorithm for lidar applications has been developed. The detection is based on hyperspectral anomaly detection that is implemented for time anomaly where the question "is a target (aerosol cloud) present at range R within time t(1) to t(2)" is addressed, and for range anomaly where the question "is a target present at time t within ranges R(1) and R(2)" is addressed. A detection score significantly different in magnitude from the detection scores for background measurements suggests that an anomaly (interpreted as the presence of a target signal in space/time) exists. The algorithm employs an option for a preprocessing stage where undesired oscillations and artifacts are filtered out with a low-rank orthogonal projection technique. The filtering technique adaptively removes the one over range-squared dependence of the background contribution of the lidar signal and also aids visualization of features in the data when the signal-to-noise ratio is low. A Gaussian-mixture probability model for two hypotheses (anomaly present or absent) is computed with an expectation-maximization algorithm to produce a detection threshold and probabilities of detection and false alarm. Results of the algorithm for CO(2) lidar measurements of bioaerosol clouds Bacillus atrophaeus (formerly known as Bacillus subtilis niger, BG) and Pantoea agglomerans, Pa (formerly known as Erwinia herbicola, Eh) are shown and discussed.

  5. Lidar detection algorithm for time and range anomalies

    NASA Astrophysics Data System (ADS)

    Ben-David, Avishai; Davidson, Charles E.; Vanderbeek, Richard G.

    2007-10-01

    A new detection algorithm for lidar applications has been developed. The detection is based on hyperspectral anomaly detection that is implemented for time anomaly where the question "is a target (aerosol cloud) present at range R within time t1 to t2" is addressed, and for range anomaly where the question "is a target present at time t within ranges R1 and R2" is addressed. A detection score significantly different in magnitude from the detection scores for background measurements suggests that an anomaly (interpreted as the presence of a target signal in space/time) exists. The algorithm employs an option for a preprocessing stage where undesired oscillations and artifacts are filtered out with a low-rank orthogonal projection technique. The filtering technique adaptively removes the one over range-squared dependence of the background contribution of the lidar signal and also aids visualization of features in the data when the signal-to-noise ratio is low. A Gaussian-mixture probability model for two hypotheses (anomaly present or absent) is computed with an expectation-maximization algorithm to produce a detection threshold and probabilities of detection and false alarm. Results of the algorithm for CO2 lidar measurements of bioaerosol clouds Bacillus atrophaeus (formerly known as Bacillus subtilis niger, BG) and Pantoea agglomerans, Pa (formerly known as Erwinia herbicola, Eh) are shown and discussed.

  6. Algorithms for airborne Doppler radar wind shear detection

    NASA Technical Reports Server (NTRS)

    Gillberg, Jeff; Pockrandt, Mitch; Symosek, Peter; Benser, Earl T.

    1992-01-01

    Honeywell has developed algorithms for the detection of wind shear/microburst using airborne Doppler radar. The Honeywell algorithms use three dimensional pattern recognition techniques and the selection of an associated scanning pattern forward of the aircraft. This 'volumetric scan' approach acquires reflectivity, velocity, and spectral width from a three dimensional volume, as opposed to the conventional use of a two dimensional azimuthal slice of data at a fixed elevation. The algorithm approach is based on detection and classification of velocity patterns which are indicative of microburst phenomena while minimizing false alarms due to ground clutter return. Simulation studies of microburst phenomena and X-band radar interaction with the microburst have been performed, and results of that study are presented. Algorithm performance in detection of both 'wet' and 'dry' microbursts is presented.

  7. Adaptive clustering algorithm for community detection in complex networks.

    PubMed

    Ye, Zhenqing; Hu, Songnian; Yu, Jun

    2008-10-01

    Community structure is common in various real-world networks; methods or algorithms for detecting such communities in complex networks have attracted great attention in recent years. We introduced a different adaptive clustering algorithm capable of extracting modules from complex networks with considerable accuracy and robustness. In this approach, each node in a network acts as an autonomous agent demonstrating flocking behavior where vertices always travel toward their preferable neighboring groups. An optimal modular structure can emerge from a collection of these active nodes during a self-organization process where vertices constantly regroup. In addition, we show that our algorithm appears advantageous over other competing methods (e.g., the Newman-fast algorithm) through intensive evaluation. The applications in three real-world networks demonstrate the superiority of our algorithm to find communities that are parallel with the appropriate organization in reality. PMID:18999501

  8. Novel algorithm of road vehicles detection

    NASA Astrophysics Data System (ADS)

    Liu, Guangyao; Ye, Xiuqing; Gu, Weikang

    2001-09-01

    This paper presents a vision-based traffic scene analysis system. We have developed a feature-based approach to vehicle tracking, in which the key task is to group features that come from the same vehicle. Feature motion is detected by straight-line matching, and motion vectors are reprojected onto the road to obtain reprojected velocities. Based on the vehicle model, a vehicle height is presumed in order to estimate the heights of the other features. The vehicle structure can then be rebuilt from the relationship between feature height and reprojected velocity, and the rebuilt structure is verified against 3D model rules.

  9. Rapid and reliable diagnostic algorithm for detection of Clostridium difficile.

    PubMed

    Fenner, Lukas; Widmer, Andreas F; Goy, Gisela; Rudin, Sonja; Frei, Reno

    2008-01-01

    We evaluated a two-step algorithm for detection of Clostridium difficile in 1,468 stool specimens. First, specimens were screened by an immunoassay for C. difficile glutamate dehydrogenase antigen (C.DIFF CHEK-60). Second, screen-positive specimens underwent toxin testing by a rapid toxin A/B assay (TOX A/B QUIK CHEK); toxin-negative specimens were subjected to stool culture. This algorithm allowed final results for 92% of specimens with a turnaround time of 4 h.
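    The reported two-step workflow reads naturally as a small decision function. This is a sketch with illustrative names (not clinical software): GDH antigen screen first; screen-positives go to the rapid toxin A/B assay; toxin-negative screen-positives are resolved by stool culture.

```python
# Decision sketch of the two-step C. difficile algorithm described above.
def c_difficile_workup(gdh_positive, toxin_positive=False, culture_positive=False):
    """Each argument is the boolean result of the corresponding assay."""
    if not gdh_positive:
        return "negative (screened out by GDH antigen)"
    if toxin_positive:
        return "positive (rapid toxin A/B assay)"
    # GDH-positive but toxin-negative: stool culture resolves the discrepancy
    return "positive (culture)" if culture_positive else "negative (culture)"
```

    The speed claim in the abstract follows from the structure: most specimens stop at step one or two, and only the discordant minority waits on culture.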

  10. Algorithm of detecting structural variations in DNA sequences

    NASA Astrophysics Data System (ADS)

    Nałecz-Charkiewicz, Katarzyna; Nowak, Robert

    2014-11-01

    Whole genome sequencing enables the use of the longest-common-subsequence algorithm to detect structural variations in the genome. We propose searching for the positions of short unique fragments, genetic markers, to achieve acceptable time and space complexity. The markers are generated by algorithms that search the genetic sequence or its Fourier transform. The presented methods are validated on structural variations generated in silico on bacterial genomes, giving results comparable to or better than other solutions.
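    The longest-common-subsequence core that the marker search makes tractable is the textbook dynamic program, sketched here in linear space (names are illustrative; the marker generation and Fourier screening are not reproduced):

```python
# Length of the longest common subsequence, O(len(a)*len(b)) time, O(len(b)) space.
def lcs_length(a, b):
    prev = [0] * (len(b) + 1)        # DP row for the previous character of a
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, 1):
            # match extends the diagonal; otherwise carry the best neighbor
            curr.append(prev[j - 1] + 1 if ca == cb else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]
```

    The quadratic cost is exactly why the paper anchors the comparison on short unique markers instead of running the dynamic program over whole genomes.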

  11. Subsurface biological activity zone detection using genetic search algorithms

    SciTech Connect

    Mahinthakumar, G.; Gwo, J.P.; Moline, G.R.; Webb, O.F.

    1999-12-01

    Use of genetic search algorithms for detection of subsurface biological activity zones (BAZ) is investigated through a series of hypothetical numerical biostimulation experiments. Continuous injection of dissolved oxygen and methane with periodically varying concentration stimulates the cometabolism of indigenous methanotrophic bacteria. The observed breakthroughs of methane are used to deduce possible BAZ in the subsurface. The numerical experiments are implemented in a parallel computing environment to accommodate the large number of simultaneous transport simulations required by the algorithm. The results show that genetic algorithms are very efficient in locating multiple activity zones, provided the observed signals adequately sample the BAZ.

  12. Improving Polyp Detection Algorithms for CT Colonography: Pareto Front Approach.

    PubMed

    Huang, Adam; Li, Jiang; Summers, Ronald M; Petrick, Nicholas; Hara, Amy K

    2010-03-21

    We investigated a Pareto front approach to improving polyp detection algorithms for CT colonography (CTC). A dataset of 56 CTC colon surfaces with 87 proven positive detections of 53 polyps sized 4 to 60 mm was used to evaluate the performance of a one-step and a two-step curvature-based region growing algorithm. The algorithmic performance was statistically evaluated and compared based on the Pareto optimal solutions from 20 experiments by evolutionary algorithms. The false positive rate was lower (p<0.05) by the two-step algorithm than by the one-step for 63% of all possible operating points. While operating at a suitable sensitivity level such as 90.8% (79/87) or 88.5% (77/87), the false positive rate was reduced by 24.4% (95% confidence intervals 17.9-31.0%) or 45.8% (95% confidence intervals 40.1-51.0%) respectively. We demonstrated that, with a proper experimental design, the Pareto optimization process can effectively help in fine-tuning and redesigning polyp detection algorithms.
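    The Pareto-front comparison underlying the analysis can be sketched as a non-dominated filter over (sensitivity, false-positive) operating points. This is an illustrative toy, not the authors' evolutionary optimization: a point survives unless some other point has sensitivity at least as high and a false-positive rate at least as low.

```python
# Keep operating points not dominated by any other point.
def pareto_front(points):
    """points: list of (sensitivity, fp_rate) tuples; returns the non-dominated set."""
    front = []
    for p in points:
        dominated = any(q[0] >= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front
```

    Comparing two algorithms then amounts to asking, at each shared sensitivity level on their fronts, which one sits at the lower false-positive rate, which is how the abstract's 63%-of-operating-points statement is framed.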

  13. A wavelet transform algorithm for peak detection and application to powder x-ray diffraction data

    SciTech Connect

    Gregoire, John M.; Dale, Darren; van Dover, R. Bruce

    2011-01-01

    Peak detection is ubiquitous in the analysis of spectral data. While many noise-filtering algorithms and peak identification algorithms have been developed, recent work [P. Du, W. Kibbe, and S. Lin, Bioinformatics 22, 2059 (2006); A. Wee, D. Grayden, Y. Zhu, K. Petkovic-Duran, and D. Smith, Electrophoresis 29, 4215 (2008)] has demonstrated that both of these tasks are efficiently performed through analysis of the wavelet transform of the data. In this paper, we present a wavelet-based peak detection algorithm with user-defined parameters that can be readily applied to any spectral data. Particular attention is given to the algorithm's resolution of overlapping peaks. The algorithm is implemented for the analysis of powder diffraction data, and successful detection of Bragg peaks is demonstrated for both low signal-to-noise data from theta-theta diffraction of nanoparticles and combinatorial x-ray diffraction data from a composition-spread thin film. These datasets have different types of background signals, which are effectively removed by the wavelet-based method, and the results demonstrate that the algorithm provides a robust method for automated peak detection.
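    A single-scale sketch of the wavelet-based idea is below: correlate the spectrum with a Mexican-hat (Ricker) wavelet, whose zero mean suppresses slowly varying background, then keep thresholded local maxima of the response. The width, support, and threshold values are assumptions for illustration; the paper's multi-scale treatment of overlapping peaks is not reproduced.

```python
import math

def ricker(width, support=5):
    """Discrete Mexican-hat (Ricker) wavelet sampled on [-support*width, support*width]."""
    ts = range(-support * width, support * width + 1)
    a = 2.0 / (math.sqrt(3 * width) * math.pi ** 0.25)
    return [a * (1 - (t / width) ** 2) * math.exp(-t * t / (2.0 * width ** 2)) for t in ts]

def wavelet_peaks(y, width=3, min_response=1.0):
    """Indices of local maxima of the wavelet response above min_response."""
    w = ricker(width)
    h = len(w) // 2
    # correlate (edges truncated); the zero-mean wavelet cancels linear background
    resp = [sum(w[k] * y[i + k - h] for k in range(len(w)) if 0 <= i + k - h < len(y))
            for i in range(len(y))]
    return [i for i in range(1, len(y) - 1)
            if resp[i] > min_response and resp[i] >= resp[i - 1] and resp[i] >= resp[i + 1]]
```

    On a Gaussian peak riding on a linear ramp, the ramp contributes essentially nothing to the interior response (it integrates to zero against the symmetric zero-mean wavelet), so only the true peak index survives the threshold.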

  14. Epidemic features affecting the performance of outbreak detection algorithms

    PubMed Central

    2012-01-01

    Background: Outbreak detection algorithms play an important role in effective automated surveillance. Although many algorithms have been designed to improve the performance of outbreak detection, few published studies have examined how epidemic features of infectious disease impact on the detection performance of algorithms. This study compared the performance of three outbreak detection algorithms stratified by epidemic features of infectious disease and examined the relationship between epidemic features and performance of outbreak detection algorithms. Methods: Exponentially weighted moving average (EWMA), cumulative sum (CUSUM) and moving percentile method (MPM) algorithms were applied. We inserted simulated outbreaks into notifiable infectious disease data in China Infectious Disease Automated-alert and Response System (CIDARS), and compared the performance of the three algorithms with optimized parameters at a fixed false alarm rate of 5% classified by epidemic features of infectious disease. Multiple linear regression was adopted to analyse the relationship of the algorithms’ sensitivity and timeliness with the epidemic features of infectious diseases. Results: The MPM had better detection performance than EWMA and CUSUM through all simulated outbreaks, with or without stratification by epidemic features (incubation period, baseline counts and outbreak magnitude). The epidemic features were associated with both sensitivity and timeliness. Compared with long incubation, short incubation had lower probability (β* = −0.13, P < 0.001) but needed shorter time to detect outbreaks (β* = −0.57, P < 0.001). Lower baseline counts were associated with higher probability (β* = −0.20, P < 0.001) and longer time (β* = 0.14, P < 0.001). The larger outbreak magnitude was correlated with higher probability (β* = 0.55, P < 0.001) and shorter time (β* = −0.23, P < 0.001). Conclusions: The results of this study suggest
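    Two of the compared detectors can be sketched directly from their standard control-chart definitions. Parameter values (lambda, L, k, h) are conventional illustrative choices, not the CIDARS-optimized settings, and the baseline mean and standard deviation are assumed known:

```python
import math

# EWMA chart: smooth the counts, alarm when the statistic exceeds its control limit.
def ewma_alarms(counts, mean, sd, lam=0.4, L=3.0):
    z, alarms = mean, []
    for t, c in enumerate(counts):
        z = lam * c + (1 - lam) * z
        # time-varying standard deviation of the EWMA statistic
        sigma_z = sd * math.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (t + 1))))
        if z > mean + L * sigma_z:
            alarms.append(t)
    return alarms

# One-sided CUSUM chart on standardized counts with reference value k and limit h.
def cusum_alarms(counts, mean, sd, k=0.5, h=4.0):
    s, alarms = 0.0, []
    for t, c in enumerate(counts):
        s = max(0.0, s + (c - mean) / sd - k)
        if s > h:
            alarms.append(t)
    return alarms
```

    Both accumulate evidence over time, which is why the abstract's finding that incubation period and outbreak magnitude shift the timeliness/sensitivity trade-off applies to each of them.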

  15. An Early Fire Detection Algorithm Using IP Cameras

    PubMed Central

    Millan-Garcia, Leonardo; Sanchez-Perez, Gabriel; Nakano, Mariko; Toscano-Medina, Karina; Perez-Meana, Hector; Rojas-Cardenas, Luis

    2012-01-01

    The presence of smoke is the first symptom of fire; therefore, to achieve early fire detection, accurate and quick estimation of the presence of smoke is very important. In this paper we propose an algorithm to detect the presence of smoke using video sequences captured by Internet Protocol (IP) cameras, in which important features of smoke, such as color, motion and growth properties, are employed. For efficient smoke detection on the IP camera platform, the detection algorithm must operate directly in the Discrete Cosine Transform (DCT) domain to reduce computational cost, avoiding the complete decoding process required by algorithms that operate in the spatial domain. In the proposed algorithm the DCT inter-transformation technique is used to increase detection accuracy without an inverse DCT operation. In the proposed scheme, the candidate smoke regions are first estimated using the motion and color properties of smoke; next, noise is reduced using morphological operations. Finally, the growth properties of the candidate smoke regions are further analyzed over time using the connected component labeling technique. Evaluation results show that a feasible smoke detection method with false negative and false positive error rates of approximately 4% and 2%, respectively, is obtained. PMID:22778607

  16. Anomalies detection in hyperspectral imagery using projection pursuit algorithm

    NASA Astrophysics Data System (ADS)

    Achard, Veronique; Landrevie, Anthony; Fort, Jean Claude

    2004-11-01

    Hyperspectral imagery provides detailed spectral information on the observed scene, which enhances detection possibilities, in particular for subpixel targets. In this context, we have developed and compared several anomaly detection algorithms based on a projection pursuit approach. The projection pursuit is performed either on the PCA or on the MNF (Minimum Noise Fraction) components. Depending on the method, the best axes of the eigenvector basis are selected directly, or a genetic algorithm is used to optimize the projections. Two projection indices (PIs) have been tested: the kurtosis and the skewness. These approaches have been tested on AVIRIS and HyMap hyperspectral images, into which subpixel targets have been inserted by simulation. The proportion of target in a pixel varies from 50% to 10% of the surface. The results are presented and discussed. The performance of our detection algorithm is very satisfactory for target proportions down to 10% of the pixel.
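    The projection-pursuit idea with kurtosis as the projection index can be sketched with a random direction search standing in for the paper's genetic algorithm (all names, the number of directions, and the seed are assumptions): anomalous pixels stretch the tails of the projected data, so the direction with the most heavy-tailed projection points toward them.

```python
import math, random

def kurtosis(v):
    """Population kurtosis (about 3 for Gaussian data; large when tails are heavy)."""
    n = len(v); m = sum(v) / n
    s2 = sum((x - m) ** 2 for x in v) / n
    return sum((x - m) ** 4 for x in v) / (n * s2 * s2)

def best_projection(data, n_dirs=200, seed=0):
    """Search random unit directions; return the one maximizing projected kurtosis."""
    rng = random.Random(seed)
    dim = len(data[0])
    best, best_k = None, -1.0
    for _ in range(n_dirs):
        d = [rng.gauss(0, 1) for _ in range(dim)]        # random direction
        norm = math.sqrt(sum(c * c for c in d))
        d = [c / norm for c in d]
        proj = [sum(c * x for c, x in zip(d, row)) for row in data]
        k = kurtosis(proj)
        if k > best_k:
            best, best_k = d, k
    return best, best_k
```

    When a few outlier pixels lie along one axis of otherwise Gaussian data, the winning direction aligns with that axis, which is the subpixel-anomaly behavior the abstract exploits.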

  17. Feature detection algorithms in computed images

    NASA Astrophysics Data System (ADS)

    Gurbuz, Ali Cafer

    2008-10-01

    The problem of sensing a medium by several sensors and retrieving interesting features is a very general one. The basic framework is generally the same for applications from MRI, tomography, Radar SAR imaging to subsurface imaging, even though the data acquisition processes, sensing geometries and sensed properties are different. In this thesis we introduced a new perspective to the problem of remote sensing and information retrieval by studying the problem of subsurface imaging using GPR and seismic sensors. We have shown that if the sensed medium is sparse in some domain then it can be imaged using many fewer measurements than required by the standard methods. This leads to much lower data acquisition times and better images. We have used the ideas from Compressive Sensing, which show that a small number of random measurements about a signal are sufficient to completely characterize it, if the signal is sparse or compressible in some domain. Although we have applied our ideas to the subsurface imaging problem, our results are general and can be extended to other remote sensing applications. A second objective in remote sensing is information retrieval which involves searching for important features in the computed image. In this thesis we focus on detecting buried structures like pipes, and tunnels in computed GPR or seismic images. The problem of finding these structures in high clutter and noise conditions, and finding them faster than the standard shape detecting methods is analyzed. One of the most important contributions of this thesis is where the sensing and the information retrieval stages are unified in a single framework using compressive sensing. Instead of taking lots of standard measurements to compute the image of the medium and search the necessary information in the computed image, only a small number of measurements as random projections are used to infer the information without generating the image of the medium.

  18. Detecting compact galactic binaries using a hybrid swarm-based algorithm

    NASA Astrophysics Data System (ADS)

    Bouffanais, Yann; Porter, Edward K.

    2016-03-01

    Compact binaries in our galaxy are expected to be one of the main sources of gravitational waves for the future eLISA mission. During the mission lifetime, many thousands of galactic binaries should be individually resolved. However, the identification of the sources and the extraction of the signal parameters in a noisy environment are real challenges for data analysis. So far, stochastic searches have proven to be the most successful for this problem. In this work, we present the first application of a hybrid swarm-based algorithm combining Particle Swarm Optimization and Differential Evolution. These algorithms have been shown to converge faster to global solutions on complicated likelihood surfaces than other stochastic methods. We first demonstrate the effectiveness of the algorithm for the case of a single binary in a 1-mHz search bandwidth. This interesting problem gave the algorithm plenty of opportunity to fail, as it can be easier to find a strong noise peak than the signal itself. After a successful detection of a fictitious low-frequency source, as well as the verification binary RX J0806.3+1527, we then applied the algorithm to the detection of multiple binaries, over different search bandwidths, in the cases of low and mild source confusion. In all cases, we show that we can successfully identify the sources and recover the true parameters within a 99% credible interval.

  19. A Decision Theoretic Approach to Evaluate Radiation Detection Algorithms

    SciTech Connect

    Nobles, Mallory A.; Sego, Landon H.; Cooley, Scott K.; Gosink, Luke J.; Anderson, Richard M.; Hays, Spencer E.; Tardiff, Mark F.

    2013-07-01

    There are a variety of sensor systems deployed at U.S. border crossings and ports of entry that scan for illicit nuclear material. In this work, we develop a framework for comparing the performance of detection algorithms that interpret the output of these scans and determine when secondary screening is needed. We optimize each algorithm to minimize its risk, or expected loss. We measure an algorithm's risk by considering its performance over a sample, the probability distribution of threat sources, and the consequences of detection errors. While it is common to optimize algorithms by fixing one error rate and minimizing another, our framework allows one to simultaneously consider multiple types of detection errors. Our framework is flexible and easily adapted to many different assumptions regarding the probability of a vehicle containing illicit material, and the relative consequences of false positive and false negative errors. Our methods can therefore inform decision makers of the algorithm family and parameter values which best reduce the threat from illicit nuclear material, given their understanding of the environment at any point in time. To illustrate the applicability of our methods, in this paper we compare the risk from two families of detection algorithms and discuss the policy implications of our results.
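    The expected-loss optimization can be sketched for a scalar alarm threshold. All numbers and names below are purely illustrative (the paper's loss constants, priors, and score distributions are not reproduced): risk weighs the miss rate by the prior probability and cost of a threat, and the false-alarm rate by the cost of unnecessary secondary screening.

```python
# Bayes-risk sketch: risk(t) = P(threat)*C_fn*P(miss|t) + P(benign)*C_fp*P(false alarm|t).
def risk(threshold, scores_threat, scores_benign, p_threat, c_fn, c_fp):
    p_miss = sum(s < threshold for s in scores_threat) / len(scores_threat)
    p_fa = sum(s >= threshold for s in scores_benign) / len(scores_benign)
    return p_threat * c_fn * p_miss + (1 - p_threat) * c_fp * p_fa

def best_threshold(scores_threat, scores_benign, p_threat=0.01, c_fn=100.0, c_fp=1.0):
    """Pick the candidate threshold minimizing expected loss over sample scores."""
    cands = sorted(set(scores_threat) | set(scores_benign))
    return min(cands, key=lambda t: risk(t, scores_threat, scores_benign,
                                         p_threat, c_fn, c_fp))
```

    Changing the prior or the cost ratio moves the optimal threshold, which is the mechanism by which the framework lets decision makers re-tune algorithms as their understanding of the environment changes.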

  20. The Study of Randomized Visual Saliency Detection Algorithm

    PubMed Central

    Xu, Weihong; Kuang, Fangjun; Gao, Shangbing

    2013-01-01

    High-quality image segmentation depends heavily on the visual saliency map, but existing visual saliency metrics mostly yield only a coarse saliency map, and a coarse map degrades the segmentation results. This paper presents a randomized visual saliency detection algorithm that can quickly generate a detailed saliency map of the same size as the original input image. The randomized method can meet real-time requirements for content-based image scaling, and for fast randomized detection of salient video regions it requires only a small amount of memory to produce a detailed, oriented visual saliency map. The presented results show that using this saliency map in the subsequent segmentation process yields ideal segmentation results. PMID:24382980

  1. The study of randomized visual saliency detection algorithm.

    PubMed

    Chen, Yuantao; Xu, Weihong; Kuang, Fangjun; Gao, Shangbing

    2013-01-01

    Producing a high-quality visual saliency map for image segmentation depends strongly on the saliency metric used. Most existing metrics yield only a coarse saliency map, and segmentation based on such a rough map suffers accordingly. This paper presents a randomized visual saliency detection algorithm that quickly generates a detailed saliency map of the same size as the original input image. The method meets real-time requirements and can be applied to content-based image scaling. Because the randomized detection of salient regions requires only a small amount of memory to produce a detailed, oriented saliency map, the approach is fast as well as accurate. The presented results show that segmenting images with the resulting saliency maps yields near-ideal segmentation results.

  2. The Successive Projection Algorithm (SPA), an Algorithm with a Spatial Constraint for the Automatic Search of Endmembers in Hyperspectral Data

    PubMed Central

    Zhang, Jinkai; Rivard, Benoit; Rogge, D.M.

    2008-01-01

    Spectral mixing is a problem inherent to remote sensing data and results in few image pixel spectra representing "pure" targets. Linear spectral mixture analysis is designed to address this problem and assumes that the pixel-to-pixel variability in a scene results from varying proportions of spectral endmembers. In this paper we present a different endmember-search algorithm called the Successive Projection Algorithm (SPA). SPA builds on the convex geometry and orthogonal projection common to other endmember-search algorithms by including a constraint on the spatial adjacency of endmember candidate pixels. Consequently it can reduce the susceptibility to outlier pixels and generates realistic endmembers. This is demonstrated using two case studies (the AVIRIS Cuprite cube and Probe-1 imagery for Baffin Island) where image endmembers can be validated with ground truth data. The SPA algorithm extracts endmembers from hyperspectral data without having to reduce the data dimensionality. It uses the spectral angle (like IEA) and the spatial adjacency of pixels in the image to constrain the selection of candidate pixels representing an endmember. We designed SPA based on the observation that many targets have spatial continuity in imagery (e.g. bedrock lithologies), so a spatial constraint is beneficial in the endmember search. An additional product of the SPA is data describing the change of the simplex volume ratio between successive iterations during the endmember extraction. It illustrates the influence of a new endmember on the data structure, provides information on the convergence of the algorithm, and can serve as a general guideline for constraining the total number of endmembers in a search.
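
    The orthogonal-projection core of SPA can be sketched as follows (the spatial-adjacency constraint the paper adds is omitted here, and variable names are illustrative): at each step the column with the largest residual norm is selected, and all columns are projected onto the orthogonal complement of the chosen one.

```python
import numpy as np

def spa(X, k):
    """Successive projections sketch: greedily pick k columns of X that are
    most orthogonal to those already chosen (no spatial constraint)."""
    X = X.astype(float)
    selected = []
    for _ in range(k):
        norms = np.linalg.norm(X, axis=0)
        j = int(np.argmax(norms))           # column with largest residual
        selected.append(j)
        u = X[:, j] / norms[j]
        X = X - np.outer(u, u @ X)          # project out the chosen direction
    return selected
```

    Each selected column is maximally independent of the previous selections, which is why SPA tends to pick extreme (endmember-like) pixels.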

  3. Automated choroidal neovascularization detection algorithm for optical coherence tomography angiography

    PubMed Central

    Liu, Li; Gao, Simon S.; Bailey, Steven T.; Huang, David; Li, Dengwang; Jia, Yali

    2015-01-01

    Optical coherence tomography angiography has recently been used to visualize choroidal neovascularization (CNV) in participants with age-related macular degeneration. Identification and quantification of CNV area is important clinically for disease assessment. An automated algorithm for CNV area detection is presented in this article. It relies on denoising and a saliency detection model to overcome issues such as projection artifacts and the heterogeneity of CNV. Qualitative and quantitative evaluations were performed on scans of 7 participants. Results from the algorithm agreed well with manual delineation of CNV area. PMID:26417524

  4. Vision-based vehicle detection and tracking algorithm design

    NASA Astrophysics Data System (ADS)

    Hwang, Junyeon; Huh, Kunsoo; Lee, Donghwi

    2009-12-01

    The vision-based vehicle detection in front of an ego-vehicle is regarded as promising for driver assistance as well as for autonomous vehicle guidance. The feasibility of vehicle detection in a passenger car requires accurate and robust sensing performance. A multivehicle detection system based on stereo vision has been developed for better accuracy and robustness. This system utilizes morphological filter, feature detector, template matching, and epipolar constraint techniques in order to detect the corresponding pairs of vehicles. After the initial detection, the system executes the tracking algorithm for the vehicles. The proposed system can detect front vehicles such as the leading vehicle and side-lane vehicles. The position parameters of the vehicles located in front are obtained based on the detection information. The proposed vehicle detection system is implemented on a passenger car, and its performance is verified experimentally.

  5. Star algorithm: detecting the ultrasonic endocardial boundary automatically.

    PubMed

    Yao, Wei; Tian, Jianming; Zhao, Baozhen; Chen, Ningning; Qian, Guozheng

    2004-07-01

    In clinical practice, interuser variability, high computational cost and low image quality are persistent problems that hinder the clinical application of computer-aided echocardiographic boundary detection. The star algorithm (StaA) is a new endocardial boundary detector designed to overcome these problems. The purpose of the paper is to present the details of the algorithm and evaluate its clinical value. The main elements of StaA are a radial search technique, a cost function-driven system and a self-designed edge detector. The algorithm has four main steps: image preprocessing, initial left ventricular chamber detection, left ventricular chamber center detection and left ventricular endocardial boundary detection. StaA was performed on 50 pairs of end-diastolic (ED) and end-systolic (ES) echocardiographic images, which were divided into a high image-quality group (HImQ) and a low image-quality group (LImQ). The results of the test were analyzed in two ways: 1. Compared with the manually-traced boundary, the mean relative radial error (MRRerr) of the computer-detected boundary was 12.07%, with no significant difference in MRRerr between HImQ and LImQ. 2. The two-dimensional (2-D) ejection fraction calculated from the computer-detected boundary (EFa) can be used interchangeably with that calculated from the manually-traced boundary (EFm). The test proves that simple and effective methods can also make an echocardiographic boundary detector automatic, quick and robust.

  6. A Survey of Successful Evaluations of Program Visualization and Algorithm Animation Systems

    ERIC Educational Resources Information Center

    Urquiza-Fuentes, Jaime; Velazquez-Iturbide, J. Angel

    2009-01-01

    This article reviews successful educational experiences in using program and algorithm visualizations (PAVs). First, we survey a total of 18 PAV systems that were subject to 33 evaluations. We found that half of the systems have only been tested for usability, and those were shallow inspections. The rest were evaluated with respect to their…

  7. A TCAS-II Resolution Advisory Detection Algorithm

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar; Narkawicz, Anthony; Chamberlain, James

    2013-01-01

    The Traffic Alert and Collision Avoidance System (TCAS) is a family of airborne systems designed to reduce the risk of mid-air collisions between aircraft. TCAS II, the current generation of TCAS devices, provides resolution advisories that direct pilots to maintain or increase vertical separation when aircraft distance and time parameters are beyond designed system thresholds. This paper presents a mathematical model of the TCAS II Resolution Advisory (RA) logic that assumes accurate aircraft state information. Based on this model, an algorithm for RA detection is also presented. This algorithm is analogous to a conflict detection algorithm, but instead of predicting loss of separation, it predicts resolution advisories. It has been formally verified that for a kinematic model of aircraft trajectories, this algorithm completely and correctly characterizes all encounter geometries between two aircraft that lead to a resolution advisory within a given lookahead time interval. The RA detection algorithm proposed in this paper is a fundamental component of a NASA sense and avoid concept for the integration of Unmanned Aircraft Systems in civil airspace.
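
    A minimal sketch of the conflict-detection analogy described above, assuming straight-line relative motion and made-up threshold values (the actual TCAS II RA logic, with its time-based tau criteria and altitude layers, is considerably more involved):

```python
def ra_predicted(rel_pos, rel_vel, lookahead, horiz_thresh, vert_thresh, dt=1.0):
    """Propagate the relative state linearly and flag any time within the
    lookahead at which both horizontal range and vertical separation drop
    below the (simplified, hypothetical) RA thresholds."""
    x, y, z = rel_pos
    vx, vy, vz = rel_vel
    t = 0.0
    while t <= lookahead:
        horiz = ((x + vx * t) ** 2 + (y + vy * t) ** 2) ** 0.5
        vert = abs(z + vz * t)
        if horiz < horiz_thresh and vert < vert_thresh:
            return True
        t += dt
    return False
```

    A converging encounter triggers a predicted advisory within the lookahead window; a diverging one does not.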

  8. A Generative Statistical Algorithm for Automatic Detection of Complex Postures

    PubMed Central

    Amit, Yali; Biron, David

    2015-01-01

    This paper presents a method for automated detection of complex (non-self-avoiding) postures of the nematode Caenorhabditis elegans and its application to analyses of locomotion defects. Our approach is based on progressively detailed statistical models that enable detection of the head and the body even in cases of severe coilers, where data from traditional trackers is limited. We restrict the input available to the algorithm to a single digitized frame, such that manual initialization is not required and the detection problem becomes embarrassingly parallel. Consequently, the proposed algorithm does not propagate detection errors and naturally integrates in a “big data” workflow used for large-scale analyses. Using this framework, we analyzed the dynamics of postures and locomotion of wild-type animals and mutants that exhibit severe coiling phenotypes. Our approach can readily be extended to additional automated tracking tasks such as tracking pairs of animals (e.g., for mating assays) or different species. PMID:26439258

  9. Improved algorithm for quantum separability and entanglement detection

    SciTech Connect

    Ioannou, L.M.; Ekert, A.K.; Travaglione, B.C.; Cheung, D.

    2004-12-01

    Determining whether a quantum state is separable or entangled is a problem of fundamental importance in quantum information science. It has recently been shown that this problem is NP-hard, suggesting that an efficient, general solution does not exist. There is a highly inefficient 'basic algorithm' for solving the quantum separability problem which follows from the definition of a separable state. By exploiting specific properties of the set of separable states, we introduce a classical algorithm that solves the problem significantly faster than the 'basic algorithm', allowing a feasible separability test where none previously existed, e.g., in 3x3-dimensional systems. Our algorithm also provides a unique tool in the experimental detection of entanglement.

  10. Comparison Between Four Detection Algorithms for GEO Objects

    NASA Astrophysics Data System (ADS)

    Yanagisawa, T.; Uetsuhara, M.; Banno, H.; Kurosaki, H.; Kinoshita, D.; Kitazawa, Y.; Hanada, T.

    2012-09-01

    Four detection algorithms for GEO objects are being developed in a collaboration between Kyushu University, IHI Corporation and JAXA. Each algorithm is designed to process CCD images to detect GEO objects. The first is the PC-based stacking method, developed at JAXA since 2000. Numerous CCD images are used to detect faint GEO objects below the limiting magnitude of a single CCD image: sub-images are cropped from many CCD images to follow the motion of the objects, and a median image of all the sub-images is then created. Although this method can detect faint objects, its analysis is time-consuming. The second is the line-identifying technique, which also uses many CCD frames and finds any series of objects arrayed on a straight line from the first frame to the last. It analyzes data faster than the stacking method, but cannot detect objects as faint as the stacking method can. The third is the robust stacking method developed by IHI Corporation, which uses the average instead of the median to reduce analysis time. It has the same analysis speed as the line-identifying technique and better detection capability for faint objects. The fourth is the FPGA-based stacking method, which uses binarized images and a new algorithm implemented on an FPGA board, reducing analysis time by roughly a factor of one thousand. All four algorithms analyzed the same data sets to evaluate their advantages and disadvantages. By comparing their analysis times and results, the optimal usage of these algorithms is considered.
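
    The core of the stacking method can be sketched as follows (a simplified illustration, assuming a known candidate velocity in pixels per frame; the real pipeline crops sub-images and searches over many candidate velocities): frames are shifted to cancel the assumed object motion and median-combined, so a faint moving object reinforces while stars and noise are rejected.

```python
import numpy as np

def stack_median(frames, velocity):
    """Shift each frame to cancel the assumed motion (vx, vy pixels/frame),
    then take the per-pixel median of the aligned stack."""
    vx, vy = velocity
    shifted = [np.roll(np.roll(f, -int(round(vy * i)), axis=0),
                       -int(round(vx * i)), axis=1)
               for i, f in enumerate(frames)]
    return np.median(shifted, axis=0)
```

    The robust stacking method described above would replace `np.median` with `np.mean` to trade some outlier rejection for speed.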

  11. An Adaptive Immune Genetic Algorithm for Edge Detection

    NASA Astrophysics Data System (ADS)

    Li, Ying; Bai, Bendu; Zhang, Yanning

    An adaptive immune genetic algorithm (AIGA) based on a cost minimization technique is proposed for edge detection. The proposed AIGA uses adaptive probabilities of crossover, mutation and immune operation, together with a geometric annealing schedule in the immune operator, to realize the twin goals of maintaining diversity in the population and sustaining a fast convergence rate when solving complex problems such as edge detection. Furthermore, AIGA can effectively exploit prior knowledge of the local edge structure in the edge image to make vaccines, which gives AIGA much better local search ability than the canonical genetic algorithm. Experimental results on gray-scale images show the proposed algorithm performs well in terms of the quality of the final edge image, rate of convergence and robustness to noise.
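
    The adaptive-probability idea can be sketched as follows (a common scheme in the spirit of the abstract; the constants and the exact scaling rule are illustrative, not from the paper): above-average individuals get reduced crossover/mutation rates so good solutions are preserved, while below-average individuals keep the maximum rates to maintain diversity.

```python
def adaptive_rates(f, f_avg, f_max, pc_max=0.9, pm_max=0.1):
    """Return (crossover, mutation) probabilities scaled by how far the
    individual's fitness f lies above the population average f_avg."""
    if f_max == f_avg:            # degenerate population: all identical
        return pc_max, pm_max
    if f >= f_avg:
        scale = (f_max - f) / (f_max - f_avg)
        return pc_max * scale, pm_max * scale
    return pc_max, pm_max
```

    The best individual gets rates of zero (it survives unchanged), while average and below-average individuals are perturbed at the full rates.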

  12. Plagiarism Detection Algorithm for Source Code in Computer Science Education

    ERIC Educational Resources Information Center

    Liu, Xin; Xu, Chan; Ouyang, Boyu

    2015-01-01

    Nowadays, computer programming is getting more necessary in the course of program design in college education. However, the trick of plagiarizing plus a little modification exists among some students' home works. It's not easy for teachers to judge if there's plagiarizing in source code or not. Traditional detection algorithms cannot fit this…

  13. Detecting Outliers in Factor Analysis Using the Forward Search Algorithm

    ERIC Educational Resources Information Center

    Mavridis, Dimitris; Moustaki, Irini

    2008-01-01

    In this article we extend and implement the forward search algorithm for identifying atypical subjects/observations in factor analysis models. The forward search has been mainly developed for detecting aberrant observations in regression models (Atkinson, 1994) and in multivariate methods such as cluster and discriminant analysis (Atkinson, Riani,…

  14. Robust algorithms for anatomic plane primitive detection in MR

    NASA Astrophysics Data System (ADS)

    Dewan, Maneesh; Zhan, Yiqiang; Peng, Zhigang; Zhou, Xiang Sean

    2009-02-01

    One of the primary challenges in medical image data analysis is the ability to handle abnormal, irregular and/or partial cases. In this paper, we present two robust algorithms toward the goal of automatic planar primitive detection in 3D volumes. The overall algorithm is a bottom-up approach starting with the detection of anatomic point primitives (or landmarks). Robustness in computing the planar primitives is built in through both a novel consensus-based voting approach and a random sampling-based weighted least squares regression method. Both approaches remove inconsistent landmarks and outliers detected in the landmark detection step. Unlike earlier approaches focused on a particular plane, the presented approach is generic and can easily be adapted to computing more complex primitives such as ROIs or surfaces. To demonstrate the robustness and accuracy of our approach, we present extensive results for automatic plane detection (mid-sagittal and optical triangle planes) in brain MR images. In comparison to ground truth, our approach has marginal errors on about 90 patients. The algorithm also performs well under adverse conditions such as arbitrary rotation and cropping of the 3D volume. To show the generality of the approach, we also present preliminary results on intervertebral-plane detection for a 3D spine MR application.

  15. Information dynamics algorithm for detecting communities in networks

    NASA Astrophysics Data System (ADS)

    Massaro, Emanuele; Bagnoli, Franco; Guazzini, Andrea; Lió, Pietro

    2012-11-01

    The problem of community detection is relevant in many scientific disciplines, from social science to statistical physics. Given the impact of community detection in many areas, such as psychology and the social sciences, we have addressed the issue of modifying existing well-performing algorithms by incorporating elements of the application domain, i.e. domain-inspired methods. We have focused on a psychology- and social network-inspired approach which may be useful for further strengthening the link between social network studies and the mathematics of community detection. Here we introduce a community-detection algorithm derived from van Dongen's Markov Cluster algorithm (MCL) [4] by considering a network's nodes as agents capable of taking decisions. In this framework we have introduced a memory factor to mimic a typical human behavior, the oblivion effect. The method is based on information diffusion and includes a non-linear processing phase. We test our method on two classical community benchmarks and on computer-generated networks with known community structure. Our approach has three important features: the capacity to detect overlapping communities, the capability to identify communities from an individual point of view, and fine tuning of community detectability with respect to prior knowledge of the data. Finally we discuss how to use a Shannon entropy measure for parameter estimation in complex networks.
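
    A minimal sketch of the MCL method the algorithm builds on (the paper's agent memory factor and non-linear processing are not included): MCL alternates expansion (spreading flow by a random-walk power) and inflation (sharpening strong flows), and at convergence the nonzero rows of the column-stochastic matrix identify the communities.

```python
import numpy as np

def mcl(adj, expansion=2, inflation=2.0, iters=50):
    """Basic Markov Cluster sketch on an adjacency matrix."""
    M = adj.astype(float) + np.eye(len(adj))      # self-loops aid convergence
    M /= M.sum(axis=0)                            # make columns stochastic
    for _ in range(iters):
        M = np.linalg.matrix_power(M, expansion)  # expansion step
        M = M ** inflation                        # inflation step
        M /= M.sum(axis=0)
    # at convergence, each attractor row is nonzero exactly on its community
    return [tuple(np.nonzero(row > 1e-6)[0]) for row in M if row.max() > 1e-6]
```

    On two triangles joined by a single bridge edge, this recovers the two triangles as communities.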

  16. A cloud detection algorithm using edge detection and information entropy over urban area

    NASA Astrophysics Data System (ADS)

    Zheng, Hong; Wen, Tianxiao; Li, Zhen

    2013-10-01

    Aiming at detecting cloud interference over urban areas, this paper proposes an algorithm that combines edge-information extraction with information entropy, focusing on distinguishing complex surface features accurately so that intact surface information is retained. First, image edge sharpening is applied. Second, a Canny edge detector and a morphological closing operation are used to extract and strengthen edge features. Third, information entropy extraction is adopted to ensure cloud positional accuracy. Compared with traditional cloud detection methods, this algorithm efficiently preserves the integrity of urban surface features, improving segmentation accuracy. Test results demonstrate the effectiveness of the algorithm.
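
    The entropy cue can be sketched as the Shannon entropy of a gray-level patch (bin count and gray range here are illustrative): cloud regions tend to be smooth and low-entropy, while edge-rich urban surfaces are high-entropy, which is why entropy complements the Canny edge map.

```python
import numpy as np

def patch_entropy(patch, bins=8):
    """Shannon entropy (bits) of the gray-level histogram of a patch,
    assuming 8-bit gray values in [0, 256)."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins: 0*log(0) -> 0
    return float(-(p * np.log2(p)).sum())
```

    A constant patch has zero entropy; a patch whose values spread evenly over all bins reaches the maximum, log2(bins).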

  17. Statistical iterative reconstruction using fast optimization transfer algorithm with successively increasing factor in Digital Breast Tomosynthesis

    NASA Astrophysics Data System (ADS)

    Xu, Shiyu; Zhang, Zhenxi; Chen, Ying

    2014-03-01

    Statistical iterative reconstruction is particularly promising because it provides the flexibility of accurate physical noise modeling and geometric system description in transmission tomography systems. However, solving the objective function is computationally intensive compared to analytical reconstruction methods, owing to the multiple iterations needed for convergence, each involving forward/back-projections with a complex geometric system model. Optimization transfer (OT) is a general algorithm that converts a high-dimensional optimization into parallel 1-D updates. An OT-based algorithm provides monotonic convergence and a parallel computing framework, but a slower convergence rate, especially near the global optimum. Based on an indirect estimate of the spectrum of the OT convergence rate matrix, we propose a successively increasing factor-scaled optimization transfer algorithm that seeks an optimal step size for a faster rate. Compared to a representative OT-based method, separable parabolic surrogates with pre-computed curvature (PC-SPS), our algorithm provides comparable image quality (IQ) with fewer iterations, while each iteration retains a similar computational cost to PC-SPS. An initial experiment with a simulated Digital Breast Tomosynthesis (DBT) system shows that the proposed algorithm saves 40% of the total computing time. In general, the successively increasing factor-scaled OT shows great potential as an iterative method with parallel computation and monotonic, global convergence at a fast rate.

  18. An artificial intelligent algorithm for tumor detection in screening mammogram.

    PubMed

    Zheng, L; Chan, A K

    2001-07-01

    Cancerous tumor mass is one of the major types of breast cancer. When cancerous masses are embedded in and camouflaged by varying densities of parenchymal tissue structures, they are very difficult to detect visually on mammograms. This paper presents an algorithm that combines several artificial intelligence techniques with the discrete wavelet transform (DWT) for detection of masses in mammograms. The AI techniques include fractal dimension analysis, multiresolution Markov random fields, the dogs-and-rabbits algorithm, and others. The fractal dimension analysis serves as a preprocessor to determine the approximate locations of the regions suspicious for cancer in the mammogram. The dogs-and-rabbits clustering algorithm is used to initiate the segmentation at the LL subband of a three-level DWT decomposition of the mammogram. A tree-type classification strategy is applied at the end to determine whether a given region is suspicious for cancer. We have verified the algorithm with 322 mammograms in the Mammographic Image Analysis Society database. The verification results show that the proposed algorithm has a sensitivity of 97.3% and a rate of 3.92 false positives per image.

  19. An Efficient Conflict Detection Algorithm for Packet Filters

    NASA Astrophysics Data System (ADS)

    Lee, Chun-Liang; Lin, Guan-Yu; Chen, Yaw-Chung

    Packet classification is essential for supporting advanced network services such as firewalls, quality-of-service (QoS), virtual private networks (VPN), and policy-based routing. The rules that routers use to classify packets are called packet filters. If two or more filters overlap, a conflict occurs and leads to ambiguity in packet classification. This study proposes an algorithm that can efficiently detect and resolve filter conflicts using tuple based search. The time complexity of the proposed algorithm is O(nW+s), and the space complexity is O(nW), where n is the number of filters, W is the number of bits in a header field, and s is the number of conflicts. This study uses the synthetic filter databases generated by ClassBench to evaluate the proposed algorithm. Simulation results show that the proposed algorithm can achieve better performance than existing conflict detection algorithms both in time and space, particularly for databases with large numbers of conflicts.
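
    The conflict condition described above can be sketched naively as follows (an O(n^2) pairwise check for illustration; the paper's tuple-based search is what makes detection efficient): two filters conflict when their ranges overlap in every header field but neither fully contains the other, so a packet matching both is classified ambiguously.

```python
def ranges_overlap(a, b):
    """Two [lo, hi] ranges overlap iff neither ends before the other starts."""
    return a[0] <= b[1] and b[0] <= a[1]

def conflicts(filters):
    """Return index pairs of conflicting filters; each filter is a list of
    per-field [lo, hi] ranges."""
    def overlap_all(f, g):
        return all(ranges_overlap(a, b) for a, b in zip(f, g))

    def contains(f, g):
        return all(a[0] <= b[0] and b[1] <= a[1] for a, b in zip(f, g))

    out = []
    for i in range(len(filters)):
        for j in range(i + 1, len(filters)):
            f, g = filters[i], filters[j]
            if overlap_all(f, g) and not contains(f, g) and not contains(g, f):
                out.append((i, j))
    return out
```

    A filter fully contained in another does not conflict with it, since priority ordering resolves that case deterministically.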

  20. Simple multiscale algorithm for layer detection with lidar.

    PubMed

    Mao, Feiyue; Gong, Wei; Zhu, Zhongmin

    2011-12-20

    Lidar is a powerful active remote sensing device used in the detection of the optical properties of aerosols and clouds. However, there are difficulties in layer detection and classification. Many previous methods are too complex for large dataset analysis or limited to data with too high a signal-to-noise ratio (SNR). In this study, a mechanism of multiscale detection and overdetection rejection is proposed based on a trend index function that we define. Finally, we classify layers based on connected layers employing a quantity known as the threshold of the peak-to-base ratio. We find good consistency between retrieved results employing our method and visual analysis. The testing of synthetic signals shows that our algorithm performs well with SNRs higher than 4. The results demonstrate that our algorithm is simple, practical, and suited to large dataset applications.

  1. Common Pharmacophore Identification Using Frequent Clique Detection Algorithm

    PubMed Central

    Podolyan, Yevgeniy; Karypis, George

    2008-01-01

    The knowledge of a pharmacophore, or the 3D arrangement of features in a biologically active molecule that is responsible for its pharmacological activity, can help in the search for and design of a new or better drug acting upon the same or a related target. In this paper we describe two new algorithms based on frequent clique detection in molecular graphs. The first algorithm mines all frequent cliques that are present in at least one of the conformers of each (or a portion of all) molecules. The second algorithm exploits the similarities among the different conformers of the same molecule and achieves an order-of-magnitude performance speedup compared to the first algorithm. Both algorithms are guaranteed to find all common pharmacophores in the dataset, which is confirmed by validation on a set of molecules for which pharmacophores have been determined experimentally. In addition, these algorithms are able to scale to datasets with an arbitrarily large number of conformers per molecule and identify multiple ligand binding modes or multiple binding sites of the target. PMID:19072298

  2. Algorithm for detecting human faces based on convex-hull

    NASA Astrophysics Data System (ADS)

    Park, Minsick; Park, Chang-Woo; Park, Mignon; Lee, Chang-Hoon

    2002-03-01

    In this paper, we propose a new method to detect faces in color images based on the convex hull. We detect two kinds of regions: skin-like and hair-like regions. After preprocessing, we apply the convex hull to these regions and find a face from their intersection relationship. The proposed algorithm can detect faces in images containing rotated and turned faces as well as multiple faces. To validate the effectiveness of the proposed method, we experimented with various cases.

  3. SETI Pulse Detection Algorithm: Analysis of False-alarm Rates

    NASA Technical Reports Server (NTRS)

    Levitt, B. K.

    1983-01-01

    Some earlier work by the Search for Extraterrestrial Intelligence (SETI) Science Working Group (SWG) on the derivation of spectrum analyzer thresholds for a pulse detection algorithm, based on an analysis of false alarm rates, is extended. The algorithm previously analyzed was intended to detect a finite sequence of i periodically spaced pulses that did not necessarily occupy the entire observation interval. This algorithm would recognize the presence of such a signal only if all i received pulse powers exceeded a threshold T(i); these thresholds were selected to achieve a desired false alarm rate, independent of i. To simplify the analysis, it was assumed that the pulses were synchronous with the spectrum sample times. This analysis extends the earlier effort to include infinite and/or asynchronous pulse trains. Furthermore, to decrease the possibility of missing an extraterrestrial intelligence signal, the algorithm was modified to detect a pulse train even if some of the received pulse powers fall below the threshold. The analysis employs geometrical arguments that make it conceptually easy to incorporate boundary conditions imposed on the derivation of the false alarm rates. While the exact results can be somewhat complex, simple closed-form approximations are derived that produce a negligible loss of accuracy.
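
    The threshold logic can be sketched under a simple noise assumption (exponentially distributed noise power with unit mean; the normalization constant `n_patterns` counting candidate pulse patterns is illustrative, not from the paper): a single bin exceeds T with probability exp(-T), so i independent bins all exceed T(i) with probability exp(-i*T(i)), and solving n_patterns * exp(-i*T(i)) = alpha gives thresholds with an i-independent false alarm rate.

```python
import math

def threshold(i, n_patterns, alpha):
    """T(i) such that the total false alarm rate over n_patterns candidate
    pulse patterns equals alpha, independent of the pulse count i."""
    return math.log(n_patterns / alpha) / i

def false_alarm_rate(i, T, n_patterns):
    """Expected false alarms: all i bins exceed T, per candidate pattern."""
    return n_patterns * math.exp(-i * T)
```

    Note that T(i) decreases as 1/i: a longer pulse train tolerates a lower per-pulse threshold for the same overall false alarm rate.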

  4. Clever eye algorithm for target detection of remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Geng, Xiurui; Ji, Luyan; Sun, Kang

    2016-04-01

    Target detection algorithms for hyperspectral remote sensing imagery, such as the two most commonly used remote sensing detection algorithms, constrained energy minimization (CEM) and the matched filter (MF), can usually be expressed as the inner product between a weight filter (or detector) and a pixel vector. CEM and MF have the same expression except that MF requires data centralization first. However, this difference leads to a difference in the target detection results. That is to say, the selection of the data origin directly affects the performance of the detector. Does there exist, then, another data origin beyond the zero and mean-vector points that yields better target detection performance? This is a very meaningful issue in the field of target detection, but it has not yet received enough attention. In this study, we propose a novel objective function by introducing the data origin as another variable; the solution of the function corresponds to the data origin with the minimal output energy. The process of finding the optimal solution can be vividly regarded as a clever eye automatically searching for the best observing position and direction in the feature space, corresponding to the largest separation between target and background. Therefore, this new algorithm is referred to as the clever eye algorithm (CE). Based on the Sherman-Morrison formula and the gradient ascent method, CE derives the optimal target detection result in terms of energy. Experiments with both synthetic and real hyperspectral data have verified the effectiveness of our method.
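
    The CEM baseline that CE generalizes can be sketched directly from its closed form (a standard result, stated here under the assumption of an invertible sample correlation matrix): the filter w = R^{-1}d / (d^T R^{-1} d) minimizes average output energy subject to the unit-gain constraint w^T d = 1.

```python
import numpy as np

def cem(X, d):
    """CEM detector output for each pixel. X is (pixels x bands), d is the
    target spectrum; the output for a pixel exactly matching d is 1."""
    R = X.T @ X / X.shape[0]          # sample correlation matrix
    Rinv_d = np.linalg.solve(R, d)    # R^{-1} d without explicit inversion
    w = Rinv_d / (d @ Rinv_d)         # enforce w @ d == 1
    return X @ w                      # inner product per pixel
```

    The matched filter differs only in subtracting the data mean before forming R, which is exactly the origin-selection question the paper raises.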

  5. Detection Systems and Algorithms for Multiplexed Quantum Dots

    NASA Astrophysics Data System (ADS)

    Goss, Kelly Christine

    Quantum Dots (QDs) are semiconductor nanocrystals that absorb light and re-emit at a wavelength dependent on their size and shape. A group of quantum dots can be designed to have a unique spectral emission by varying the size of the quantum dots (wavelength) and number of quantum dots (optical power) [1]. This technology is referred to as Multiplexed Quantum Dots (MxQD); when it was first proposed, MxQD tags were created with 6 optical power levels and one QD colour, or 3 QD colours and 2 optical power levels. It was hypothesized that a realistic limit to the number of tags would be a system of 6 optical power levels and 6 QD colours, resulting in 46655 unique tags. In recent work, the fabrication and detection of 9 unique tags [2] was demonstrated, which is still far from the predicted capability of the technology. The limitations on the number of unique tags are both the fabrication methods and the data detection algorithms used to read the spectral emissions. This thesis makes contributions toward improving the data detection algorithms for MxQD tags. To accomplish this, a communications system model is developed that includes the interference between QD colours, Inter-Symbol Interference (ISI), and additive noise. The model is developed for two optical detectors, namely a Charge-Coupled Device (CCD) spectrometer and photodiode detectors. The model also includes an analytical expression for the Signal-to-Noise Ratio (SNR) of the detectors. For the CCD spectrometer, this model is verified with an experimental prototype. With the models in place, communications systems tools are applied that overcome both ISI and noise. This is an improvement over previous work in the field that only considered algorithms to overcome the ISI or noise separately. Specifically, this thesis outlines the proposal of a matched filter to improve SNR, a Minimum Mean Square Error (MMSE) equalizer that mitigates ISI in the presence of noise and a Maximum Likelihood Sequence

  6. Algorithms for the detection of chewing behavior in dietary monitoring applications

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Helal, Abdelsalam; Mendez-Vasquez, Andres

    2009-08-01

    The detection of food consumption is key to the implementation of successful behavior modification in support of dietary monitoring and therapy, for example, during the course of controlling obesity, diabetes, or cardiovascular disease. Since the vast majority of humans consume food via mastication (chewing), we have designed an algorithm that automatically detects chewing behaviors in surveillance video of a person eating. Our algorithm first detects the mouth region, then computes the spatiotemporal frequency spectrum of a small perioral region (including the mouth). Spectral data are analyzed to determine the presence of periodic motion that characterizes chewing. A classifier is then applied to discriminate different types of chewing behaviors. Our algorithm was tested on seven volunteers, whose behaviors included chewing with mouth open, chewing with mouth closed, talking, static face presentation (control case), and moving face presentation. Early test results show that the chewing behaviors induce a temporal frequency peak at 0.5 Hz to 2.5 Hz, which is readily detected using a distance-based classifier. Computational cost is analyzed for implementation on embedded processing nodes, for example, in a healthcare sensor network. Complexity analysis emphasizes the relationship between the work and space estimates of the algorithm, and its estimated error. It is shown that chewing detection is possible within a computationally efficient, accurate, and subject-independent framework.
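
    The spectral test described above can be sketched as a one-dimensional check (a simplification: the paper analyzes a spatiotemporal spectrum of the perioral region and then applies a classifier; here a single motion signal stands in for that region):

```python
import numpy as np

def is_chewing(signal, fs, band=(0.5, 2.5)):
    """Return True if the dominant temporal frequency of the (mean-removed)
    motion signal falls in the chewing band of 0.5-2.5 Hz."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    peak = freqs[np.argmax(spectrum)]
    return bool(band[0] <= peak <= band[1])
```

    A 1.5 Hz oscillation (chewing-like) is flagged, while a 4 Hz oscillation (e.g. fast talking motion) is not.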

  7. Multi-Objective Community Detection Based on Memetic Algorithm

    PubMed Central

    2015-01-01

    Community detection has drawn a lot of attention as it can provide invaluable help in understanding the function and visualizing the structure of networks. Since single-objective optimization methods have intrinsic drawbacks for identifying multiple significant community structures, some methods formulate community detection as a multi-objective problem and adopt population-based evolutionary algorithms to obtain multiple community structures. Evolutionary algorithms have strong global search ability, but have difficulty in locating local optima efficiently. In this study, in order to identify multiple significant community structures more effectively, a multi-objective memetic algorithm for community detection is proposed by combining a multi-objective evolutionary algorithm with a local search procedure. The local search procedure is designed by addressing three issues. Firstly, nondominated solutions generated by evolutionary operations and solutions in the dominant population are set as the initial individuals for the local search procedure. Then, a new direction vector, named the pseudonormal vector, is proposed to integrate the two objective functions into a single fitness function. Finally, a network-specific local search strategy based on the label propagation rule is extended to search for local optimal solutions efficiently. Extensive experiments on both artificial and real-world networks evaluate the proposed method from three aspects. Firstly, experiments on the influence of the local search procedure demonstrate that it can speed up convergence to better partitions and make the algorithm more stable. Secondly, comparisons with a set of classic community detection methods illustrate that the proposed method can find single partitions effectively. Finally, the method is applied to identify hierarchical structures of networks, which is beneficial for analyzing networks at multiple resolution levels. PMID:25932646

  8. Artifact removal algorithms for stroke detection using a multistatic MIST beamforming algorithm.

    PubMed

    Ricci, E; Di Domenico, S; Cianca, E; Rossi, T

    2015-01-01

    Microwave imaging (MWI) has recently been shown to be a promising imaging modality for low-complexity, low-cost and fast brain imaging tools, which could play a fundamental role in efficiently managing emergencies related to stroke and hemorrhages. This paper focuses on the UWB radar imaging approach and in particular on the processing algorithms for the backscattered signals. Assuming the use of the multistatic version of the MIST (Microwave Imaging Space-Time) beamforming algorithm, developed by Hagness et al. for the early detection of breast cancer, the paper proposes and compares two artifact removal algorithms. Artifact removal is an essential step of any UWB radar imaging system, and currently considered artifact removal algorithms have been shown not to be effective in the specific scenario of brain imaging. First, the paper proposes modifications of a known artifact removal algorithm. These modifications are shown to be effective in achieving good localization accuracy and lower false positives. However, the main contribution is the proposal of an artifact removal algorithm based on statistical methods, which achieves even better performance with much lower computational complexity.

  9. Artifact removal algorithms for stroke detection using a multistatic MIST beamforming algorithm.

    PubMed

    Ricci, E; Di Domenico, S; Cianca, E; Rossi, T

    2015-01-01

    Microwave imaging (MWI) has recently been shown to be a promising imaging modality for low-complexity, low-cost and fast brain imaging tools, which could play a fundamental role in efficiently managing emergencies related to stroke and hemorrhages. This paper focuses on the UWB radar imaging approach and in particular on the processing algorithms for the backscattered signals. Assuming the use of the multistatic version of the MIST (Microwave Imaging Space-Time) beamforming algorithm, developed by Hagness et al. for the early detection of breast cancer, the paper proposes and compares two artifact removal algorithms. Artifact removal is an essential step of any UWB radar imaging system, and currently considered artifact removal algorithms have been shown not to be effective in the specific scenario of brain imaging. First, the paper proposes modifications of a known artifact removal algorithm. These modifications are shown to be effective in achieving good localization accuracy and lower false positives. However, the main contribution is the proposal of an artifact removal algorithm based on statistical methods, which achieves even better performance with much lower computational complexity. PMID:26736661

  10. Localization of tumors in various organs, using edge detection algorithms

    NASA Astrophysics Data System (ADS)

    López Vélez, Felipe

    2015-09-01

    The edge of an image is a set of points organized in a curved line, where at each of these points the brightness of the image changes abruptly or has discontinuities. In order to find these edges, five different mathematical methods are used and later compared with each other, with the aim of finding which of the methods can best find the edges of any given image. In this paper these five methods are used for medical purposes, in order to find which one is capable of finding the edges of a scanned image more accurately than the others. The problem consists in analyzing the following two biomedical images: one represents a brain tumor and the other a liver tumor. These images are analyzed with the help of the five methods described, and the results are compared in order to determine the best method to be used. It was decided to use different edge detection algorithms in order to obtain the results shown below: the Bessel algorithm, Morse algorithm, Hermite algorithm, Weibull algorithm and Sobel algorithm. After applying each of the methods to both images, it is impossible to determine the single most accurate method for tumor detection, due to the fact that in each case the best method changed; i.e., for the brain tumor image the Morse method was the best at finding the edges of the image, but for the liver tumor image it was the Hermite method. Making further observations, it is found that Hermite and Morse have, for these two cases, the lowest standard deviations, leading to the conclusion that these two are the most accurate methods for finding edges in the analysis of biomedical images.
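Of the five methods named, the Sobel operator is the standard, unambiguous one; a minimal gradient-magnitude edge map in its style can be sketched as below. The kernels and the relative threshold are the textbook choices, not taken from the paper.

```python
import numpy as np

def sobel_edges(img, threshold=0.5):
    """Binary edge map: Sobel gradient magnitude above a fraction of its max."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode='edge')
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    for i in range(3):                       # 3x3 cross-correlation
        for j in range(3):
            patch = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    mag = np.hypot(gx, gy)                   # gradient magnitude
    return mag > threshold * mag.max()

# A bright square on a dark background: edges appear on its border only
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
edges = sobel_edges(img)
```

On a tumor scan, the resulting boundary points would then be compared across methods, e.g. via the standard deviations the paper reports.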

  11. A new spectral variable selection pattern using competitive adaptive reweighted sampling combined with successive projections algorithm.

    PubMed

    Tang, Guo; Huang, Yue; Tian, Kuangda; Song, Xiangzhong; Yan, Hong; Hu, Jing; Xiong, Yanmei; Min, Shungeng

    2014-10-01

    The competitive adaptive reweighted sampling-successive projections algorithm (CARS-SPA) method was proposed as a novel variable selection approach for multivariate calibration. CARS was first used to select informative variables, and SPA was then used to refine the variables with minimal redundant information. The proposed method was applied to near-infrared (NIR) reflectance data of nicotine in tobacco lamina and NIR transmission data of an active ingredient in a pesticide formulation. As a result, fewer but more informative variables were selected by CARS-SPA than by CARS alone. In the pesticide formulation system, a multiple linear regression (MLR) model using variables selected by CARS-SPA provided better prediction than the full-range partial least-squares (PLS) model, the successive projections algorithm (SPA) model and the uninformative variable elimination-successive projections algorithm (UVE-SPA) model. The variable subsets selected by CARS-SPA included the spectral ranges with sufficient chemical information, whereas uninformative variables were hardly selected.
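The SPA refinement step, greedily choosing variables with minimal collinearity to those already chosen, can be sketched as below. This is the generic successive-projections idea under simplifying assumptions (a fixed starting column, no model-based stopping rule), not the paper's CARS-SPA pipeline.

```python
import numpy as np

def spa(X, k, start=0):
    """Successive Projections Algorithm (sketch): greedily select k columns
    of the calibration matrix X, each time picking the column with the
    largest norm after projecting out the span of those already chosen."""
    P = np.asarray(X, dtype=float).copy()
    selected = [start]
    for _ in range(k - 1):
        v = P[:, selected[-1]]
        P = P - np.outer(v, v @ P) / (v @ v)   # orthogonal-projection step
        norms = np.linalg.norm(P, axis=0)
        norms[selected] = -1.0                 # never re-select a column
        selected.append(int(np.argmax(norms)))
    return selected

# 50 "samples" x 200 "wavelengths": pick 5 minimally collinear variables
X = np.random.default_rng(0).normal(size=(50, 200))
chosen = spa(X, k=5)
```

In CARS-SPA, the candidate columns fed to this step would already be the informative subset retained by CARS.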

  12. Evaluation of Stereo Algorithms for Obstacle Detection with Fisheye Lenses

    NASA Astrophysics Data System (ADS)

    Krombach, N.; Droeschel, D.; Behnke, S.

    2015-08-01

    For autonomous navigation of micro aerial vehicles (MAVs), a robust detection of obstacles with onboard sensors is necessary in order to avoid collisions. Cameras have the potential to perceive the surroundings of MAVs for the reconstruction of their 3D structure. We equipped our MAV with two fisheye stereo camera pairs to achieve an omnidirectional field-of-view. Most stereo algorithms are designed for the standard pinhole camera model, though. Hence, the distortion effects of the fisheye lenses must be properly modeled and model parameters must be identified by suitable calibration procedures. In this work, we evaluate the use of real-time stereo algorithms for depth reconstruction from fisheye cameras together with different methods for calibration. In our experiments, we focus on obstacles occurring in urban environments that are hard to detect due to their low diameter or homogeneous texture.

  13. Automatic target detection and discrimination algorithm applicable to ground penetrating radar data

    NASA Astrophysics Data System (ADS)

    Abeynayake, Canicious; Tran, Minh D.

    2015-05-01

    Ground Penetrating Radar (GPR) is considered one of the promising technologies for addressing the challenges of detecting buried threat objects. However, the success rate of GPR systems is limited by operational conditions and by the robustness of the automatic target recognition (ATR) algorithms embedded in the systems. In this paper an alternative ATR algorithm applicable to GPR is developed by combining image pre-processing and machine learning techniques. The aim of this research was to design a potential solution for detecting threats in GPR data while reducing the number of false alarms through classification into one of the predefined categories of target types. The proposed ATR algorithm has been validated using a data set acquired by a vehicle-mounted GPR array. The data set utilized in this investigation consists of greyscale GPR images of threat objects (both conventional and improvised) commonly found in realistic operational scenarios. Target-based summaries of the algorithm performance are presented in terms of the probability of detection, false alarm rate, and confidence of allocating detections to a predefined target class.

  14. MUSIC Algorithms for Rebar Detection

    NASA Astrophysics Data System (ADS)

    Leone, G.; Solimene, R.

    2012-04-01

    In this contribution we consider the problem of detecting and localizing scatterers of small cross section, with respect to the wavelength, from their scattered field, once a known incident field has interrogated the scene where they reside. A pertinent applicative context is rebar detection within concrete pillars. For such a case, the scatterers to be detected are represented by the rebars themselves or by voids due to their absence. In both cases, as the scatterers have point-like support, a subspace projection method can be conveniently exploited [1]. However, as the field scattered by rebars is stronger than the one due to voids, it is expected that the latter can be difficult to detect. In order to circumvent this problem, in this contribution we adopt a two-step MUltiple SIgnal Classification (MUSIC) detection algorithm. In particular, the first stage aims at detecting rebars. Once the rebars are detected, their positions are exploited to update the Green's function, and then a further detection scheme is run to locate voids. However, in this second case, the background medium encompasses also the rebars. The analysis is conducted numerically for a simplified two-dimensional scalar scattering geometry. More in detail, as is usual in MUSIC algorithms, a multi-view/multi-static single-frequency configuration is considered [2]. Baratonia, G. Leone, R. Pierri, R. Solimene, "Fault Detection in Grid Scattering by a Time-Reversal MUSIC Approach," Proc. of ICEAA 2011, Turin, 2011. E. A. Marengo, F. K. Gruber, "Subspace-Based Localization and Inverse Scattering of Multiply Scattering Point Targets," EURASIP Journal on Advances in Signal Processing, 2007, Article ID 17342, 16 pages (2007).
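The subspace-projection step at the heart of MUSIC can be sketched for a toy one-dimensional, single-frequency configuration. The sensor layout, wavenumber, and simplified Green's function below are illustrative assumptions; the paper's 2-D scalar geometry and two-step void-detection scheme are not reproduced.

```python
import numpy as np

def music_spectrum(K, steering, n_targets):
    """MUSIC sketch: project steering vectors onto the noise subspace of the
    multistatic data matrix K; scatterers appear as pseudospectrum peaks."""
    U, s, Vh = np.linalg.svd(K)
    noise = U[:, n_targets:]                    # noise subspace
    proj = noise.conj().T @ steering            # projection of each grid point
    return 1.0 / (np.sum(np.abs(proj) ** 2, axis=0) + 1e-12)

# Toy example: 8 sensors on a line, one point scatterer at grid index 25
k = 2 * np.pi                                   # wavenumber (unit wavelength)
sensors = np.linspace(0.0, 3.5, 8)
grid = np.linspace(0.0, 5.0, 51)
# steering[m, n] = simplified field at sensor m from a source at grid[n]
dist = np.abs(sensors[:, None] - grid[None, :]) + 0.1
steering = np.exp(1j * k * dist) / np.sqrt(dist)
g_true = steering[:, 25]
K = np.outer(g_true, g_true)                    # Born-type multistatic matrix
spec = music_spectrum(K, steering, n_targets=1)
```

The scatterer location is recovered as the grid index maximizing the pseudospectrum.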

  15. Algorithm for Automated Detection of Edges of Clouds

    NASA Technical Reports Server (NTRS)

    Ward, Jennifer G.; Merceret, Francis J.

    2006-01-01

    An algorithm processes cloud-physics data gathered in situ by an aircraft, along with reflectivity data gathered by ground-based radar, to determine whether the aircraft is inside or outside a cloud at a given time. A cloud edge is deemed to be detected when the in/out state changes, subject to a hysteresis constraint. Such determinations are important in continuing research on relationships among lightning, electric charges in clouds, and decay of electric fields with distance from cloud edges.
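The in/out state change subject to a hysteresis constraint can be sketched as a two-threshold state machine. The thresholds and the scalar "cloud measure" are illustrative assumptions; the actual algorithm fuses in-situ cloud-physics data with radar reflectivity.

```python
def cloud_edges(samples, enter=1.0, leave=0.5):
    """Hysteresis-based in/out cloud state (sketch): declare 'in cloud' when
    the combined measure rises above `enter`, declare 'out' only after it
    falls below the lower `leave` threshold, and report each state change
    as a detected cloud edge."""
    in_cloud = False
    edges = []
    for i, x in enumerate(samples):
        if not in_cloud and x > enter:
            in_cloud = True
            edges.append((i, "entry"))
        elif in_cloud and x < leave:
            in_cloud = False
            edges.append((i, "exit"))
    return edges

# Fluctuations between the two thresholds do not toggle the state
trace = [0.1, 1.2, 0.8, 0.9, 1.1, 0.3, 0.2]
detected = cloud_edges(trace)
```

The gap between the two thresholds is what prevents spurious edge detections when the measure dithers near a single threshold.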

  16. Fusion algorithm for poultry skin tumor detection using hyperspectral data

    NASA Astrophysics Data System (ADS)

    Nakariyakul, Songyot; Casasent, David

    2007-01-01

    We consider a feature selection method to detect skin tumors on chicken carcasses using hyperspectral (HS) reflectance data. Detection of chicken tumors is difficult because the tumors vary in size and shape; some tumors are small, early-stage tumor spots. We make use of the fact that a chicken skin tumor consists of a lesion region surrounded by a region of thickened skin and that the spectral responses of the lesion and the thickened-skin regions of tumors are considerably different and train our feature selection algorithm to separately detect lesion regions and thickened-skin regions; we then fuse the two HS detection results to reduce false alarms. To the best of our knowledge, these techniques are new. Our forward selection and modified branch and bound algorithm is used to select a small number of λ spectral features that are useful for discrimination. Initial results show that our method offers promise for a good tumor detection rate and a low false alarm rate.

  17. A Study of Lane Detection Algorithm for Personal Vehicle

    NASA Astrophysics Data System (ADS)

    Kobayashi, Kazuyuki; Watanabe, Kajiro; Ohkubo, Tomoyuki; Kurihara, Yosuke

    By the word “Personal vehicle”, we mean a simple and lightweight vehicle expected to emerge as a personal ground transportation device. The motorcycle, electric wheelchair, motor-powered bicycle, etc. are examples of the personal vehicle and have been developed as useful personal transportation. Recently, a new type of intelligent personal vehicle called the Segway has been developed, which is controlled and stabilized using on-board intelligent multiple sensors. Demand for such personal vehicles is increasing, 1) to enhance human mobility, 2) to support mobility for elderly persons, 3) to reduce environmental burdens. With the rapidly growing personal vehicle market, the number of accidents caused by human error is also increasing. These accidents are related to driving ability; to enhance or support driving ability as well as to prevent accidents, intelligent assistance is necessary. One of the most important elemental functions for a personal vehicle is robust lane detection. In this paper, we develop a robust lane detection method for personal vehicles in outdoor environments. The proposed lane detection method employs a 360-degree omnidirectional camera and a unique robust image processing algorithm. In order to detect lanes, a combination of template matching and the Hough transform is employed. The validity of the proposed lane detection algorithm is confirmed with an actual developed vehicle under various sunlit outdoor conditions.
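The Hough-transform half of the lane detector can be sketched as below: each edge pixel votes for every line (rho, theta) passing through it, and the strongest accumulator cell is taken as the lane candidate. This is the textbook transform on a toy binary image, not the paper's omnidirectional-camera pipeline, and the grid resolution is an assumption.

```python
import numpy as np

def hough_best_line(edge_img, n_theta=180):
    """Minimal Hough transform: vote edge pixels into a (rho, theta)
    accumulator and return the strongest line as (rho, theta)."""
    h, w = edge_img.shape
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(edge_img)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1   # one vote per theta
    r, t = np.unravel_index(np.argmax(acc), acc.shape)
    return r - diag, thetas[t]

# A diagonal "lane marking" along y = x maps to rho = 0, theta = 3*pi/4
img = np.zeros((20, 20), dtype=bool)
idx = np.arange(20)
img[idx, idx] = True
rho, theta = hough_best_line(img)
```

In the paper, template matching would first produce the binary lane-marking evidence that such a transform then turns into line parameters.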

  18. Seismic detection algorithm and sensor deployment recommendations for perimeter security

    NASA Astrophysics Data System (ADS)

    Lacombe, James; Peck, Lindamae; Anderson, Thomas; Fisk, David

    2006-05-01

    Field studies were conducted in 2005 in Yuma, Arizona at the Yuma Proving Grounds (YPG) to document seismic signatures of walking humans. Walker-generated vertical ground vibrations were recorded using standard omni-directional 4.5 Hz peak-resonance geophones. Walker position and speed were measured using portable GPS equipment. Collected seismic data were processed and hypothetical sensor performance predictions were made using an algorithm developed for the detection and classification of a walking intruder. Sample results for the Yuma study are presented in the form of sensor detection/classification vs. range plots, and color-coded animations of seismic sensor alarm annunciations during walking intruder tests. A perimeter intrusion scenario for a Forward Operating Base is defined that involves a walker approaching a sensor picket-line along a path exactly halfway between two adjacent sensors. This is considered a conservative representation of the perimeter intrusion problem. Summary plots derived from a binomial probability based analysis define intruder detection probabilities for different sensor spacings. For a 215 lb intruder walking in the Yuma test environment, a 90% probability of at least two walker-classified sensor detections is achieved at a sensor spacing of 140 m. Preliminary investigations show the intruder classification component of the discussed detection/classification algorithm to perform well at rejecting signals associated with a nearby idling vehicle and normal background noise.
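The binomial-probability analysis behind the sensor-spacing curves reduces to the probability of at least k successes in n independent trials, which can be sketched directly. The per-sensor probability used below is illustrative, not a number from the study.

```python
from math import comb

def p_at_least(k, n, p):
    """Probability of at least k successes in n independent Bernoulli
    trials, each succeeding with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Probability that at least two of the sensors nearest the intruder's path
# classify the walker, assuming each does so independently with p = 0.95:
prob = p_at_least(2, 2, 0.95)   # = 0.95**2 = 0.9025
```

Sweeping the per-sensor probability (which falls with range, and hence with sensor spacing) through such a formula yields detection-probability-versus-spacing plots of the kind summarized in the abstract.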

  19. Sparsity-based algorithm for detecting faults in rotating machines

    NASA Astrophysics Data System (ADS)

    He, Wangpeng; Ding, Yin; Zi, Yanyang; Selesnick, Ivan W.

    2016-05-01

    This paper addresses the detection of periodic transients in vibration signals so as to detect faults in rotating machines. For this purpose, we present a method to estimate periodic-group-sparse signals in noise. The method is based on the formulation of a convex optimization problem. A fast iterative algorithm is given for its solution. A simulated signal is formulated to verify the performance of the proposed approach for periodic feature extraction. The detection performance of comparative methods is compared with that of the proposed approach via RMSE values and receiver operating characteristic (ROC) curves. Finally, the proposed approach is applied to single-fault diagnosis of a locomotive bearing and compound-fault diagnosis of motor bearings. The processed results show that the proposed approach can effectively detect and extract the useful features of bearing outer-race and inner-race defects.
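The elementary building block of such sparsity-based denoising is the soft-thresholding operator, the proximal operator of the l1 penalty. The paper's convex formulation uses a richer periodic-group-sparse penalty solved iteratively; the sketch below shows only the simplest special case, for intuition, with illustrative signal parameters.

```python
import numpy as np

def soft_threshold(y, lam):
    """Soft thresholding: shrink each sample toward zero by lam, zeroing
    anything whose magnitude is below lam."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

# Periodic fault impulses every 50 samples buried in noise
rng = np.random.default_rng(0)
y = rng.normal(0.0, 0.2, 500)
y[::50] += 3.0                     # transients from a localized defect
x = soft_threshold(y, lam=1.0)     # keeps the impulses, zeroes the noise
```

A periodic-group penalty additionally couples samples one fault period apart, so that weak transients sharing the period survive thresholding together.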

  20. Algorithm development for deeply buried threat detection in GPR data

    NASA Astrophysics Data System (ADS)

    Reichman, Daniël.; Malof, Jordan M.; Collins, Leslie M.

    2016-05-01

    Ground penetrating radar (GPR) is a popular remote sensing modality for buried threat detection. Many algorithms have been developed to detect buried threats using GPR data. One ongoing challenge with GPR is the detection of very deeply buried targets. In this work a detection approach is proposed that improves the detection of very deeply buried targets and, interestingly, shallow targets as well. First, it is shown that the signal of a target (the target "signature") is well localized in time and well correlated with the target's burial depth. This motivates the proposed approach, where GPR data is split into two disjoint subsets: an early and a late portion, corresponding to the times at which shallow and deep target signatures appear, respectively. Experiments are conducted on real GPR data using the previously published histogram of oriented gradients (HOG) prescreener: a fast supervised processing method that operates on HOG features. The results show substantial improvements in the detection of very deeply buried targets (4.1% to 17.2%) and in overall detection performance (81.1% to 83.9%). Further, it is shown that the performance of the proposed approach is relatively insensitive to the time at which the data is split. These results suggest that other detection methods may benefit from depth-based processing as well.
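The depth-based split itself is simple and can be sketched as below; the array sizes and the split index are illustrative assumptions (the paper reports that performance is relatively insensitive to the exact split time).

```python
import numpy as np

def split_early_late(bscan, t_split):
    """Split a set of GPR A-scans (rows) into disjoint early/late time
    windows, so that shallow and deep target signatures can be
    prescreened separately."""
    return bscan[:, :t_split], bscan[:, t_split:]

# 100 A-scans of 416 time samples each (sizes are illustrative)
scans = np.random.default_rng(0).normal(size=(100, 416))
early, late = split_early_late(scans, t_split=180)
```

A prescreener such as the HOG-based one would then be trained and run independently on each subset.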

  1. Algorithm for Detecting a Bright Spot in an Image

    NASA Technical Reports Server (NTRS)

    2009-01-01

    An algorithm processes the pixel intensities of a digitized image to detect and locate a circular bright spot, the approximate size of which is known in advance. The algorithm is used to find images of the Sun in cameras aboard the Mars Exploration Rovers. (The images are used in estimating orientations of the Rovers relative to the direction to the Sun.) The algorithm can also be adapted to tracking of circular bright targets in other diverse applications. The first step in the algorithm is to calculate a dark-current ramp, a correction necessitated by the scheme that governs the readout of pixel charges in the charge-coupled-device camera in the original Mars Exploration Rover application. In this scheme, the fraction of each frame period during which dark current is accumulated in a given pixel (and, hence, the dark-current contribution to the pixel image-intensity reading) is proportional to the pixel row number. For the purpose of the algorithm, the dark-current contribution to the intensity reading from each pixel is assumed to equal the average of intensity readings from all pixels in the same row, and the factor of proportionality is estimated on the basis of this assumption. Then the product of the row number and the factor of proportionality is subtracted from the reading from each pixel to obtain a dark-current-corrected intensity reading. The next step in the algorithm is to determine the best location, within the overall image, for a window of N × N pixels (where N is an odd number) large enough to contain the bright spot of interest plus a small margin. (In the original application, the overall image contains 1,024 by 1,024 pixels, the image of the Sun is about 22 pixels in diameter, and N is chosen to be 29.)
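The dark-current ramp correction described above can be sketched as follows. Estimating the factor of proportionality by a least-squares fit of per-row means against row number is one reasonable reading of the description, not necessarily the flight implementation.

```python
import numpy as np

def remove_dark_ramp(img):
    """Dark-current ramp correction (sketch): assume each pixel's dark-current
    contribution is proportional to its row number, estimate the factor of
    proportionality from the per-row mean intensities, and subtract the ramp."""
    img = np.asarray(img, dtype=float)
    rows = np.arange(img.shape[0])
    slope = np.polyfit(rows, img.mean(axis=1), 1)[0]  # least-squares factor
    return img - slope * rows[:, None]

# A flat scene of brightness 3.0 with a 0.05-per-row dark-current ramp
raw = 3.0 + 0.05 * np.arange(50)[:, None] * np.ones((50, 40))
corrected = remove_dark_ramp(raw)
```

After correction, the window search for the bright spot operates on intensities free of the row-dependent bias.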

  2. Fast automatic algorithm for bifurcation detection in vascular CTA scans

    NASA Astrophysics Data System (ADS)

    Brozio, Matthias; Gorbunova, Vladlena; Godenschwager, Christian; Beck, Thomas; Bernhardt, Dominik

    2012-02-01

    Endovascular imaging aims at identifying vessels and their branches. Automatic vessel segmentation and bifurcation detection ease both clinical research and routine work. In this article a state-of-the-art bifurcation detection algorithm is developed and applied to vascular computed tomography angiography (CTA) scans to mark the common iliac artery and its branches, the internal and external iliacs. In contrast to other methods, our algorithm does not rely on a complete segmentation of a vessel in the 3D volume, but evaluates the cross-sections of the vessel slice by slice. Candidates for vessels are obtained by thresholding, followed by 2D connected component labeling and prefiltering by size and position. The remaining candidates are connected in a squared-distance-weighted graph. The graph is traversed with Dijkstra's algorithm to obtain candidates for the arteries. We use another set of features, considering the length and shape of the paths, to determine the best candidate and detect the bifurcation. The method was tested on 119 datasets acquired with different CT scanners and varying protocols. Both easy-to-evaluate datasets with high resolution and no apparent clinical diseases and difficult ones with low resolution, major calcifications, stents or poor contrast between the vessel and surrounding tissue were included. The presented results are promising: in 75.7% of the cases the bifurcation was labeled correctly, and in 82.7% the common artery and one of its branches were assigned correctly. The computation time was on average 0.49 s ± 0.28 s, close to human interaction time, which makes the algorithm applicable for time-critical applications.

  3. Oscillation Detection Algorithm Development Summary Report and Test Plan

    SciTech Connect

    Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.; Jin, Shuangshuang

    2009-10-03

    -based modal analysis algorithms have been developed. They include Prony analysis, the Regularized Robust Recursive Least Square (R3LS) algorithm, the Yule-Walker algorithm, the Yule-Walker Spectrum algorithm, and the N4SID algorithm. Each has been shown to be effective for certain situations, but not as effective for some other situations. For example, the traditional Prony analysis works well for disturbance data but not for ambient data, while Yule-Walker is designed for ambient data only. Even in an algorithm that works for both disturbance data and ambient data, such as R3LS, latency resulting from the time window used in the algorithm is an issue in timely estimation of oscillation modes. For ambient data, the time window needs to be longer to accumulate information for a reasonably accurate estimation, while for disturbance data the time window can be significantly shorter, so the latency in estimation can be much less. In addition, adding a known input signal such as a noise probing signal can increase the knowledge of system oscillatory properties and thus improve the quality of mode estimation. System situations change over time. Disturbances can occur at any time, and probing signals can be added for a certain time period and then removed. All these observations point to the need to add intelligence to ModeMeter applications. That is, a ModeMeter needs to adaptively select different algorithms and adjust parameters for various situations. This project aims to develop systematic approaches for algorithm selection and parameter adjustment. The very first step is to detect the occurrence of oscillations so the algorithm and parameters can be changed accordingly. The proposed oscillation detection approach is based on the signal-noise ratio of measurements.

  4. Track-Before-Detect Algorithm for Faint Moving Objects based on Random Sampling and Consensus

    NASA Astrophysics Data System (ADS)

    Dao, P.; Rast, R.; Schlaegel, W.; Schmidt, V.; Dentamaro, A.

    2014-09-01

    There are many algorithms developed for tracking and detecting faint moving objects in congested backgrounds. One obvious application is detection of targets in images where each pixel corresponds to the received power in a particular location. In our application, a visible imager operated in stare mode observes geostationary objects as fixed, stars as moving and non-geostationary objects as drifting in the field of view. We would like to achieve high-sensitivity detection of the drifters. The ability to improve SNR with track-before-detect (TBD) processing, where target information is collected and collated before the detection decision is made, allows respectable performance against dim moving objects. Generally, a TBD algorithm consists of a pre-processing stage that highlights potential targets and a temporal filtering stage. However, the algorithms that have been successfully demonstrated, e.g. Viterbi-based and Bayesian-based, demand formidable processing power and memory. We propose an algorithm that exploits the quasi-constant velocity of objects, the predictability of the stellar clutter and the intrinsically low false alarm rate of detecting signature candidates in 3-D, based on an iterative method called "RANdom SAmple Consensus" (RANSAC), one that can run in real time on a typical PC. The technique is tailored for searching for objects with small telescopes in stare mode. Our RANSAC-MT (Moving Target) algorithm estimates the parameters of a mathematical model (e.g., linear motion) from a set of observed data which contains a significant number of outliers, while identifying inliers. In the pre-processing phase, candidate blobs are selected based on morphology and an intensity threshold that would normally generate an unacceptable level of false alarms. The RANSAC sampling rejects candidates that conform to the predictable motion of the stars. Data collected with a 17 inch telescope by AFRL/RH and a COTS lens/EM-CCD sensor by the AFRL/RD Satellite Assessment Center is
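The RANSAC step for the linear-motion model can be sketched as below: repeatedly fit a line through a random pair of detections and keep the hypothesis with the most inliers. The tolerance, iteration count, and synthetic track/clutter data are illustrative assumptions, not the RANSAC-MT parameters.

```python
import numpy as np

def ransac_line(pts, n_iter=200, tol=0.5, seed=0):
    """RANSAC sketch for a quasi-constant-velocity (linear) motion model:
    sample point pairs, score each candidate line by its inlier count,
    and return the inlier mask of the best hypothesis."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(pts, dtype=float)
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        norm = np.hypot(d[0], d[1])
        if norm == 0.0:
            continue
        # perpendicular distance of every point to the candidate line
        r = pts - pts[i]
        dist = np.abs(d[0] * r[:, 1] - d[1] * r[:, 0]) / norm
        inliers = dist < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

# 30 detections of a drifter on a straight track, plus 20 clutter points
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 30)
track = np.column_stack([t, 2.0 * t + 1.0]) + rng.normal(0.0, 0.1, (30, 2))
clutter = rng.uniform(0.0, 25.0, (20, 2))
mask = ransac_line(np.vstack([track, clutter]))
```

In RANSAC-MT the consensus set plays the role of the drifting object's track, while candidates consistent with the predictable stellar motion are rejected beforehand.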

  5. Detection of cracks in shafts with the Approximated Entropy algorithm

    NASA Astrophysics Data System (ADS)

    Sampaio, Diego Luchesi; Nicoletti, Rodrigo

    2016-05-01

    Approximate Entropy is a statistical measure used primarily in the fields of Medicine, Biology, and Telecommunication for classifying and identifying complex signal data. In this work, an Approximate Entropy algorithm is used to detect cracks in a rotating shaft. The signals of the cracked shaft are obtained from numerical simulations of a de Laval rotor with breathing cracks modelled by Fracture Mechanics. In this case, the vertical displacements of the rotor during run-up transients were analysed. The results show the feasibility of detecting cracks from 5% depth, irrespective of the unbalance of the rotating system and the crack orientation in the shaft. The results also show that the algorithm can differentiate between the occurrence of crack only, misalignment only, and crack + misalignment in the system. However, the algorithm is sensitive to the intrinsic parameters p (number of data points in a sample vector) and f (fraction of the standard deviation that defines the minimum distance between two sample vectors), and good results are only obtained by appropriately choosing their values according to the sampling rate of the signal.
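The standard Approximate Entropy computation can be sketched as below; the parameters m and r_frac correspond to the paper's p (sample-vector length) and f (tolerance fraction of the standard deviation). This is the textbook ApEn definition, not the paper's code, and the default parameter values are common conventions.

```python
import numpy as np

def approx_entropy(x, m=2, r_frac=0.2):
    """Approximate Entropy ApEn(m, r). Lower values indicate a more
    regular (more predictable) signal."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()                                # tolerance
    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])  # sample vectors
        # Chebyshev distance between every pair of sample vectors
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        return np.log((d <= r).mean(axis=1)).mean()
    return phi(m) - phi(m + 1)

# A periodic signal scores lower than white noise
t = np.arange(300)
apen_sine = approx_entropy(np.sin(0.3 * t))
apen_noise = approx_entropy(np.random.default_rng(0).normal(size=300))
```

For crack detection, the same comparison is made between displacement signals of healthy and cracked rotors rather than between a sinusoid and noise.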

  6. Efficient detection and recognition algorithm of reference points in photogrammetry

    NASA Astrophysics Data System (ADS)

    Li, Weimin; Liu, Gang; Zhu, Lichun; Li, Xiaofeng; Zhang, Yuhai; Shan, Siyu

    2016-04-01

    In photogrammetry, an approach for the automatic detection and recognition of reference points has been proposed to meet the requirements of reference point detection and matching. The reference points used here are CCTs (circular coded targets), which consist of two parts: a round target point in the central region and a circular encoding band in the surrounding region. Firstly, the contours of the image are extracted, after which noise and disturbances in the image are filtered out by means of a series of criteria, such as the area of the contours, the correlation coefficient between two regions of contours, etc. Secondly, cubic spline interpolation is adopted to process the central contour region of the CCT. The contours of the interpolated image are extracted again, then least-squares ellipse fitting is performed to calculate the center coordinates of the CCT. Finally, the encoded value is obtained from the angle information of the circular encoding band of the CCT. Experimental results show that the presented algorithm locates the CCT with sub-pixel precision, while the recognition accuracy is high even when the background of the image is complex and full of disturbances. In addition, the algorithm is robust, and its runtime is fast.

  7. Runway Safety Monitor Algorithm for Runway Incursion Detection and Alerting

    NASA Technical Reports Server (NTRS)

    Green, David F., Jr.; Jones, Denise R. (Technical Monitor)

    2002-01-01

    The Runway Safety Monitor (RSM) is an algorithm for runway incursion detection and alerting that was developed in support of NASA's Runway Incursion Prevention System (RIPS) research conducted under the NASA Aviation Safety Program's Synthetic Vision System element. The RSM algorithm provides pilots with enhanced situational awareness and warnings of runway incursions in sufficient time to take evasive action and avoid accidents during landings, takeoffs, or taxiing on the runway. The RSM currently runs as a component of the NASA Integrated Display System, an experimental avionics software system for terminal area and surface operations. However, the RSM algorithm can be implemented as a separate program to run on any aircraft with traffic data link capability. The report documents the RSM software and describes in detail how RSM performs runway incursion detection and alerting functions for NASA RIPS. The report also describes the RIPS flight tests conducted at the Dallas-Ft Worth International Airport (DFW) during September and October of 2000, and the RSM performance results and lessons learned from those flight tests.

  8. A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features.

    PubMed

    Amudha, P; Karthik, S; Sivakumari, S

    2015-01-01

    Intrusion detection has become a main part of network security due to the huge number of attacks that affect computers, a consequence of the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, this paper proposes a hybrid algorithm that integrates a Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) for the intrusion detection problem. The algorithms are combined to obtain better optimization results, and the classification accuracies are estimated by 10-fold cross-validation. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and to test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the KDDCup'99 intrusion detection benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with other machine learning algorithms and found to be significantly different. PMID:26221625

  9. A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features

    PubMed Central

    Amudha, P.; Karthik, S.; Sivakumari, S.

    2015-01-01

    Intrusion detection has become a main part of network security due to the huge number of attacks that affect computers, a consequence of the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, this paper proposes a hybrid algorithm that integrates a Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) for the intrusion detection problem. The algorithms are combined to obtain better optimization results, and the classification accuracies are estimated by 10-fold cross-validation. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and to test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the KDDCup'99 intrusion detection benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with other machine learning algorithms and found to be significantly different. PMID:26221625

  10. Jump point detection for real estate investment success

    NASA Astrophysics Data System (ADS)

    Hui, Eddie C. M.; Yu, Carisa K. W.; Ip, Wai-Cheung

    2010-03-01

    In the literature, studies of the real estate market have mainly concentrated on the relation between property price and some key factors. The trend of the real estate market is a major concern, and it is believed that changes in trend are signified by jump points in the property price series. Identifying such jump points reveals important findings that enable policy-makers to look forward. However, not all jump points are observable from a plot of the series. This paper looks into the trend and introduces a new approach to the framework for real estate investment success. Its main purpose is to detect jump points in the time series of some housing price indices and a stock price index in Hong Kong by applying wavelet analysis. The detected jump points correspond to significant political issues and economic collapses. Moreover, the relations among properties of different classes, and between stocks and properties, are examined. The empirical results show a lead-lag effect between the prices of large-size properties and those of small/medium-size properties. However, there is no apparent relation, or consistent lead in terms of the change-point measure, between property prices and stock prices. This may be because globalization has had more impact on stock prices than on property prices.

  11. Firefly Algorithm in detection of TEC seismo-ionospheric anomalies

    NASA Astrophysics Data System (ADS)

    Akhoondzadeh, Mehdi

    2015-07-01

    Anomaly detection in the time series of different earthquake precursors is an essential step toward an early warning system with acceptable uncertainty. Since these time series are often nonlinear, complex and massive, the applied predictor should be able to detect discordant patterns in a large amount of data in a short time. This study presents the Firefly Algorithm (FA) as a simple and robust predictor for detecting TEC (Total Electron Content) seismo-ionospheric anomalies around the times of some powerful earthquakes, including Chile (27 February 2010), Varzeghan (11 August 2012) and Saravan (16 April 2013). Outstanding anomalies were observed 7 and 5 days before the Chile and Varzeghan earthquakes, respectively, and 3 and 8 days prior to the Saravan earthquake.

  12. A morphological algorithm for improving radio-frequency interference detection

    NASA Astrophysics Data System (ADS)

    Offringa, A. R.; van de Gronde, J. J.; Roerdink, J. B. T. M.

    2012-03-01

    A technique is described that improves the detection of radio-frequency interference (RFI) in astronomical radio observatories. It is applied to a two-dimensional interference mask after regular detection in the time-frequency domain with existing techniques. The scale-invariant rank (SIR) operator is defined, a one-dimensional mathematical morphology technique that can be used to find adjacent intervals in the time or frequency domain that are likely to be affected by RFI. The technique might also be applicable in other areas in which morphological scale-invariant behaviour is desired, such as source detection. A new algorithm is described that is shown to perform well, has linear time complexity and is fast enough to be applied in modern high-resolution observatories. It is used in the default pipeline of the LOFAR observatory.
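
The SIR operator admits a neat prefix-sum formulation: a sample is kept in the output mask if it lies in any interval whose flagged fraction is at least 1 − η. A linear-time numpy sketch (a reading of the operator's definition, with η as the aggressiveness parameter; not the observatory's production code):

```python
import numpy as np

def sir_operator(flags, eta=0.2):
    """Scale-invariant rank operator on a 1-D boolean mask.
    Output sample s is flagged iff some interval [a, b) containing s
    has at least (1 - eta) * (b - a) input flags."""
    flags = np.asarray(flags, dtype=bool)
    # interval sum of w >= 0  <=>  the interval is "mostly flagged"
    w = np.where(flags, eta, eta - 1.0)
    m = np.concatenate(([0.0], np.cumsum(w)))              # prefix sums M(i)
    best_left = np.minimum.accumulate(m)[:-1]              # min over a <= s
    best_right = np.maximum.accumulate(m[::-1])[::-1][1:]  # max over b > s
    return best_right - best_left >= 0.0
```

Because the operator only looks at interval sums, a short unflagged gap inside a mostly flagged stretch is filled in, which is exactly the behaviour wanted for broadband RFI that dips below the detection threshold.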

  13. Successive approximation algorithm for beam-position-monitor-based LHC collimator alignment

    NASA Astrophysics Data System (ADS)

    Valentino, Gianluca; Nosych, Andriy A.; Bruce, Roderik; Gasior, Marek; Mirarchi, Daniele; Redaelli, Stefano; Salvachua, Belen; Wollmann, Daniel

    2014-02-01

    Collimators with embedded beam position monitor (BPM) button electrodes will be installed in the Large Hadron Collider (LHC) during the current long shutdown period. For the subsequent operation, BPMs will allow the collimator jaws to be kept centered around the beam orbit. In this manner, better beam cleaning efficiency and machine protection can be provided at unprecedented beam energies and intensities. A collimator alignment algorithm is proposed to center the jaws automatically around the beam. The algorithm is based on successive approximation and takes into account a correction of the nonlinear BPM sensitivity to beam displacement, as well as an asymmetry of the electronic channels processing the BPM electrode signals. A software implementation was tested with a prototype collimator in the Super Proton Synchrotron. This paper presents the test results along with some considerations for eventual operation in the LHC.

  14. A novel dynamical community detection algorithm based on weighting scheme

    NASA Astrophysics Data System (ADS)

    Li, Ju; Yu, Kai; Hu, Ke

    2015-12-01

    Network dynamics plays an important role in analyzing the correlation between functional properties and topological structure. In this paper, we propose a novel dynamical iteration (DI) algorithm, which combines the iterative updating of a membership vector with a weighting scheme, i.e. a weighting W and a tightness T. These new elements can be used to adjust the link strength and node compactness, improving the speed and accuracy of community structure detection. To estimate the optimal stopping time of the iteration, we utilize a new stability measure defined as the auto-covariance of a Markov random walk. The algorithm does not require the number of communities to be specified in advance, and it naturally supports overlapping communities by associating each node with a membership vector describing the node's involvement in each community. Theoretical analysis and experiments show that the algorithm can uncover communities effectively and efficiently.

  15. A Duffing oscillator algorithm to detect the weak chromatographic signal.

    PubMed

    Zhang, Wei; Xiang, Bing-Ren

    2007-02-28

    Based on the Duffing equation, a Duffing oscillator algorithm (DOA) for improving the signal-to-noise ratio (SNR) is presented. Using simulated and experimental data sets, it was shown that the SNR of a weak signal can be greatly enhanced by this method. Signal enhancement by DOA extends the SNR of low concentrations of methylbenzene from 2.662 to 29.90, so the method can be used for quantitative analysis of methylbenzene at levels below the detection limit of the analytical system. The DOA might therefore be a promising tool to extend an instrument's linear range and to improve the accuracy of trace analysis. This research enlarges the application scope of the Duffing equation to chromatographic signal processing.
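
For illustration, a Holmes-type Duffing oscillator of the kind used in such weak-signal detection can be integrated with a standard RK4 loop. The damping, drive amplitude and frequency below are arbitrary illustrative choices, not the paper's values:

```python
import numpy as np

def duffing_trajectory(drive_amp, weak_signal=0.0, delta=0.5, omega=1.0,
                       dt=0.01, n_steps=5000):
    """Integrate x'' + delta x' - x + x^3 = (drive_amp + weak_signal) cos(omega t)
    with classical RK4 and return the x(t) samples."""
    def deriv(state, t):
        x, v = state
        force = (drive_amp + weak_signal) * np.cos(omega * t)
        return np.array([v, -delta * v + x - x**3 + force])
    state = np.array([1.0, 0.0])          # start in one potential well
    xs = np.empty(n_steps)
    for i in range(n_steps):
        t = i * dt
        k1 = deriv(state, t)
        k2 = deriv(state + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = deriv(state + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = deriv(state + dt * k3, t + dt)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        xs[i] = state[0]
    return xs
```

In Duffing-based detection the drive is tuned near the chaotic threshold, and one monitors a trajectory statistic (variance, phase-space pattern) for the regime change produced when the weak chromatographic signal is added to the drive.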

  16. Comparing Several Algorithms for Change Detection of Wetland

    NASA Astrophysics Data System (ADS)

    Yan, F.; Zhang, S.; Chang, L.

    2015-12-01

    As "the kidneys of the landscape" and "ecological supermarkets", wetlands play an important role in ecological equilibrium and environmental protection. It is therefore of great significance to understand the dynamic changes of wetlands. Many indices and methods have been used for dynamic monitoring of wetlands, but no single method or index is suited to detecting wetland change everywhere in the world. In this paper, three digital change detection algorithms are applied to 2005 and 2010 Landsat Thematic Mapper (TM) images of a portion of Northeast China to detect wetland change between the two dates. The change vector analysis (CVA) method uses six bands of the TM images. The tasseled cap transformation is used to create three change images (change in brightness, greenness, and wetness). A third method, the Comprehensive Change Detection Method (CCDM), is introduced to detect forest dynamic change. The CCDM integrates spectral-based change detection algorithms, including a Multi-Index Integrated Change Analysis (MIICA) model and a change model called Zone, which extract change information from two Landsat image pairs. The MIICA model is the core module of the change detection strategy and uses four spectral indices (the differenced Normalized Burn Ratio (dNBR), the differenced Normalized Difference Vegetation Index (dNDVI), the Change Vector (CV) and a new index called the Relative Change Vector Maximum (RCVMAX)) to obtain the changes that occurred between the two image dates. The CCDM also includes a knowledge-based system, which uses critical information on historical and current land cover conditions and trends and the likelihood of land cover change, to combine the changes from MIICA and Zone. Related tests proved that the CCDM is simple, easy to operate, widely applicable, and capable of capturing a variety of natural and anthropogenic disturbances potentially associated with land cover changes on
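
Of the three methods, change vector analysis is the simplest to sketch: stack the band-wise differences of the two co-registered dates and threshold the magnitude of the change vector. The threshold is a placeholder the analyst must tune:

```python
import numpy as np

def change_vector_analysis(img_t1, img_t2, threshold):
    """img_t1, img_t2: (bands, rows, cols) co-registered images of two dates.
    Returns the change mask, the change magnitude, and a change direction."""
    diff = img_t2.astype(float) - img_t1.astype(float)
    magnitude = np.sqrt((diff ** 2).sum(axis=0))   # length of the change vector
    direction = np.arctan2(diff[1], diff[0])       # angle in the first two bands
    return magnitude > threshold, magnitude, direction
```

The magnitude flags where change occurred; the direction (here only in a two-band plane, for illustration) hints at what kind of change it was.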

  17. [Application of successive projections algorithm to nondestructive determination of total amino acids in oilseed rape leaves].

    PubMed

    Liu, Fei; Zhang, Fan; Fang, Hui; Jin, Zong-Lai; Zhou, Wei-Jun; He, Yong

    2009-11-01

    Near infrared (NIR) spectroscopy combined with the successive projections algorithm (SPA) was investigated for the fast and nondestructive determination of total amino acids (TAA) in oilseed rape leaves. Total amino acids are an important index of the growing status of oilseed rape. A total of 150 leaf samples were scanned; the calibration, validation and prediction sets comprised 80, 40 and 30 samples, respectively. The optimal partial least squares (PLS) model for predicting total amino acids was developed after comparing different pretreatments, including smoothing, standard normal variate (SNV), and the first and second derivatives. Simultaneously, the successive projections algorithm was applied to extract effective wavelengths (EWs), which are thought to have the least collinearity and redundancy in the spectral data. The selected effective wavelengths were used as the inputs of multiple linear regression (MLR), partial least squares (PLS) and least squares support vector machine (LS-SVM) models, and the resulting SPA-MLR, SPA-PLS and SPA-LS-SVM models were compared. The determination coefficient (R2) and root mean square error (RMSE) were used as model evaluation indices. The results indicated that both the SPA-MLR and SPA-PLS models were better than the full-spectrum PLS model, and the best performance was achieved by the SPA-LS-SVM model with R2 = 0.983 0 and RMSEP = 0.396 4, an excellent prediction precision. In conclusion, the successive projections algorithm is a powerful tool for effective wavelength selection, and it is feasible to determine the total amino acids in oilseed rape leaves using near infrared spectroscopy and SPA-LS-SVM. This study supplies a new and alternative approach to the further application of near infrared spectroscopy in the response of stress and on
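
The variable-selection core of SPA is a greedy sequence of orthogonal projections: starting from one wavelength, each step picks the wavelength whose spectrum has the largest component orthogonal to everything already chosen. A minimal numpy sketch (the model-building and validation steps of the paper are omitted):

```python
import numpy as np

def spa_select(X, n_select, start=0):
    """X: (samples, wavelengths) spectral matrix.
    Returns indices of n_select wavelengths with low collinearity."""
    P = X.astype(float).copy()
    selected = [start]
    for _ in range(n_select - 1):
        x = P[:, selected[-1]]
        # project every column onto the orthogonal complement of the last pick
        P = P - np.outer(x, (x @ P) / (x @ x))
        norms = np.linalg.norm(P, axis=0)
        norms[selected] = -1.0            # never re-pick a selected column
        selected.append(int(np.argmax(norms)))
    return selected
```

Because each projection removes the component along the previously selected wavelength, a redundant (collinear) wavelength collapses to near-zero norm and is never chosen, which is the property the abstract relies on.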

  18. A New Algorithm for 3D Dendritic Spine Detection

    NASA Astrophysics Data System (ADS)

    Zhou, Wengang; Li, Houqiang; Zhou, Xiaobo; Wong, Stephen

    2007-11-01

    It has been shown in recent research that there is a close relationship between the neurological functions of a neuron and its morphology. As manual analysis of large data sets is too tedious and may be subject to user bias, a computer-aided processing method is urgently desired. In this paper, we propose an automatic approach for 3D dendritic spine detection, which can greatly help neurobiologists obtain morphological information about a neuron and its spines. The work mainly consists of segmentation and spine component detection. The segmentation of dendrite and spine components is carried out by means of a 3D level set based on a local binary fitting model, which yields better results than a global threshold method. For spine component detection, an efficient approach is presented consisting of backbone extraction and the detection of detached and attached spine components. The detection is robust to noise, and the detected spines are well represented. We validate our algorithm on real 3D neuron images, and the results reveal that it works well.

  19. Improved Bat algorithm for the detection of myocardial infarction.

    PubMed

    Kora, Padmavathi; Kalva, Sri Ramakrishna

    2015-01-01

    Medical practitioners study the electrical activity of the human heart in order to detect heart diseases from the electrocardiogram (ECG) of heart patients. A myocardial infarction (MI), or heart attack, is a heart disease that occurs when there is a blockage (blood clot) in one or more of the coronary blood vessels (arteries) that supply blood to the heart muscle. Abnormalities in the heart can be identified by changes in the ECG signal. The first step in the detection of MI is preprocessing of the ECG, which removes noise using filters. Feature extraction is the next key step for detecting changes in the ECG signal. This paper presents a method for extracting key features from each cardiac beat using an Improved Bat algorithm. The best (reduced) features extracted by this algorithm are applied to the input of a neural network classifier. It has been observed that the performance of the classifier improves with the optimized features. PMID:26558169

  20. EEG seizure detection and prediction algorithms: a survey

    NASA Astrophysics Data System (ADS)

    Alotaiby, Turkey N.; Alshebeili, Saleh A.; Alshawi, Tariq; Ahmad, Ishtiaq; Abd El-Samie, Fathi E.

    2014-12-01

    Epilepsy patients experience challenges in daily life due to the precautions they have to take in order to cope with this condition. When a seizure occurs, it might cause injuries or endanger the life of the patient or others, especially when they are operating heavy machinery, e.g., driving cars. Studies of epilepsy often rely on electroencephalogram (EEG) signals to analyze the behavior of the brain during seizures. Locating the seizure period in EEG recordings manually is difficult and time-consuming; one often needs to skim through tens or even hundreds of hours of EEG recordings. Automatic detection of such activity is therefore of great importance. Another potential use of EEG signal analysis is the prediction of epileptic activity before it occurs, which would enable patients (and caregivers) to take appropriate precautions. In this paper, we first present an overview of the seizure detection and prediction problem and provide insights into the challenges in this area. Second, we cover some of the state-of-the-art seizure detection and prediction algorithms and provide a comparison between them. Finally, we conclude with future research directions and open problems in this topic.

  1. Efficient implementations of hyperspectral chemical-detection algorithms

    NASA Astrophysics Data System (ADS)

    Brett, Cory J. C.; DiPietro, Robert S.; Manolakis, Dimitris G.; Ingle, Vinay K.

    2013-10-01

    Many military and civilian applications depend on the ability to remotely sense chemical clouds using hyperspectral imagers, from detecting small but lethal concentrations of chemical warfare agents to mapping plumes in the aftermath of natural disasters. Real-time operation is critical in these applications but becomes difficult to achieve as the number of chemicals we search for increases. In this paper, we present efficient CPU and GPU implementations of matched-filter based algorithms so that real-time operation can be maintained at higher chemical-signature counts. The optimized C++ implementations show between 3x and 9x speedup over vectorized MATLAB implementations.
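
The classical spectral matched filter at the heart of such detectors is compact: whiten by the background covariance and correlate with the target signature. A numpy sketch of the generic formulation, not the paper's optimized CPU/GPU kernels:

```python
import numpy as np

def matched_filter_scores(X, s):
    """X: (pixels, bands) spectral data; s: (bands,) target signature.
    Score is ~0 for the mean background and ~1 for a pixel on the signature."""
    mu = X.mean(axis=0)
    C = np.cov(X, rowvar=False)
    Ci = np.linalg.inv(C + 1e-6 * np.eye(C.shape[0]))  # small ridge for stability
    d = s - mu
    w = Ci @ d / (d @ Ci @ d)                          # normalized filter vector
    return (X - mu) @ w
```

Per-signature cost is one matrix-vector product over the scene, which is why the signature count drives the real-time budget discussed in the abstract.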

  2. NASA airborne radar wind shear detection algorithm and the detection of wet microbursts in the vicinity of Orlando, Florida

    NASA Technical Reports Server (NTRS)

    Britt, Charles L.; Bracalente, Emedio M.

    1992-01-01

    The algorithms used in the NASA experimental wind shear radar system for detection, characterization, and determination of windshear hazard are discussed. The performance of the algorithms in the detection of wet microbursts near Orlando is presented. Various suggested algorithms that are currently being evaluated using the flight test results from Denver and Orlando are reviewed.

  3. Application of multistatic inversion algorithms to landmine detection

    NASA Astrophysics Data System (ADS)

    Gürbüz, Ali Cafer; Counts, Tegan; Kim, Kangwook; McClellan, James H.; Scott, Waymond R., Jr.

    2006-05-01

    Multi-static ground-penetrating radar (GPR) uses an array of antennas to conduct a number of bistatic operations simultaneously. The multi-static GPR is used to obtain more information on the target of interest using angular diversity. An entirely computer controlled, multi-static GPR consisting of a linear array of six resistively-loaded vee dipoles (RVDs), a network analyzer, and a microwave switch matrix was developed to investigate the potential of multi-static inversion algorithms. The performance of a multi-static inversion algorithm is evaluated for targets buried in clean sand, targets buried under the ground covered by rocks, and targets held above the ground (in the air) using styrofoam supports. A synthetic-aperture, multi-static, time-domain GPR imaging algorithm is extended from conventional mono-static back-projection techniques and used to process the data. Good results are obtained for the clean surface and air targets; however, for targets buried under rocks, only the deeply buried targets could be accurately detected and located.
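
The delay-and-sum back-projection that the imaging algorithm extends can be sketched directly: each image pixel sums the recorded traces at the travel time from transmitter to pixel to receiver. A simplified numpy version with assumed units (positions in metres, time in nanoseconds, and a placeholder propagation speed `c`):

```python
import numpy as np

def backproject(traces, t_axis, tx_xy, rx_xy, grid_x, grid_z, c=0.15):
    """traces: (n_pairs, n_samples) recorded signals; t_axis: (n_samples,) in ns.
    tx_xy, rx_xy: (n_pairs, 2) antenna (x, z) positions; c in m/ns (assumed)."""
    X, Z = np.meshgrid(grid_x, grid_z)
    image = np.zeros_like(X)
    for trace, tx, rx in zip(traces, tx_xy, rx_xy):
        # two-way travel time from tx to each pixel and back to rx
        delay = (np.hypot(X - tx[0], Z - tx[1])
                 + np.hypot(X - rx[0], Z - rx[1])) / c
        idx = np.clip(np.searchsorted(t_axis, delay), 0, len(t_axis) - 1)
        image += trace[idx]
    return image
```

Each trace smears its energy along an ellipse with the antennas as foci; the ellipses from different transmitter-receiver pairs intersect at the target, which is where angular diversity pays off.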

  4. Geolocation Assessment Algorithm for CALIPSO Using Coastline Detection

    NASA Technical Reports Server (NTRS)

    Currey, J. Chris

    2002-01-01

    Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) is a joint satellite mission between NASA and the French space agency CNES. The investigation will gather long-term, global cloud and aerosol optical and physical properties to improve climate models. The CALIPSO spacecraft is scheduled to launch in 2004 into a 98.2° inclination, 705 km circular orbit approximately 3 minutes behind the Aqua spacecraft. The payload consists of a two-wavelength polarization-sensitive lidar and two passive imagers operating in the visible (0.645 μm) and infrared (8.7 to 12.0 μm) spectral regions. The imagers are nadir-viewing and co-aligned with the lidar. Earth-viewing measurements are geolocated to the Earth-fixed coordinate system using satellite ephemeris, Earth rotation and geoid, and instrument pointing data. The coastline detection algorithm will assess the accuracy of the CALIPSO geolocation process by analyzing Wide Field Camera (WFC) visible ocean-land boundaries. Processing space-time coincident Moderate Resolution Imaging Spectroradiometer (MODIS) and WFC scenes with the coastline algorithm will help verify the co-registration requirement with MODIS data. This paper quantifies the accuracy of the coastline geolocation assessment algorithm.

  5. Algorithm of semicircular laser spot detection based on circle fitting

    NASA Astrophysics Data System (ADS)

    Wang, Zhengzhou; Xu, Ruihua; Hu, Bingliang

    2013-07-01

    In order to obtain the exact center of an asymmetrical, semicircular-aperture laser spot, a laser spot detection method based on circle fitting is proposed in this paper. The laser spot image is thresholded with a gray-morphology algorithm; the rough edge of the laser spot is detected in both the vertical and horizontal directions; short arcs and isolated edge points are deleted by contour growing; the best circle contour is obtained by iterative fitting; and the final standard circle is fitted at the end. The experimental results show that the precision of the method is clearly better than that of the centroid ("gravity model") method used in the traditional large laser automatic alignment system. The accuracy achieved for the center of an asymmetrical, semicircular laser spot meets the requirements of the system.
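
The final circle-fitting step can be sketched with the algebraic (Kåsa) least-squares fit, which works on partial arcs such as a semicircular spot edge; this is a generic fit, not necessarily the iterative variant described above:

```python
import numpy as np

def fit_circle(x, y):
    """Kasa fit: solve x^2 + y^2 + a x + b y + c = 0 in least squares.
    Center is (-a/2, -b/2); radius is sqrt(a^2/4 + b^2/4 - c)."""
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    cx, cy = -a / 2.0, -b / 2.0
    return cx, cy, np.sqrt(cx**2 + cy**2 - c)
```

Unlike centroiding, the fit recovers the true center even when only half the circular edge is visible, which is the motivation given in the abstract.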

  6. Vibration-based damage detection algorithm for WTT structures

    NASA Astrophysics Data System (ADS)

    Nguyen, Tuan-Cuong; Kim, Tae-Hwan; Choi, Sang-Hoon; Ryu, Joo-Young; Kim, Jeong-Tae

    2016-04-01

    In this paper, the integrity of a wind turbine tower (WTT) structure is nondestructively estimated from its vibration responses. Firstly, a damage detection algorithm using changes in modal characteristics to predict damage locations and severities in structures is outlined. Secondly, a finite element (FE) model of a real WTT structure is established using commercial software, Midas FEA. Thirdly, forced vibration tests are performed on the FE model of the WTT structure under various damage scenarios. The changes in modal parameters such as natural frequencies and mode shapes are examined for damage monitoring in the structure. Finally, the feasibility of the vibration-based damage detection method is numerically verified by predicting the locations and severities of damage in the FE model of the WTT structure.

  7. A new detection algorithm for microcalcification clusters in mammographic screening

    NASA Astrophysics Data System (ADS)

    Xie, Weiying; Ma, Yide; Li, Yunsong

    2015-05-01

    A novel approach for the detection of microcalcification clusters is proposed. We first briefly analyze mammographic images with microcalcification lesions and confirm that these lesions have much greater gray values than normal regions. After summarizing the specific features of microcalcification clusters in mammographic screening, we focus on the preprocessing step, which includes eliminating the background, image enhancement and eliminating the pectoral muscle. In detail, the Chan-Vese model is used for eliminating the background. Then we apply a combination of morphology and edge detection methods: after an AND operation and a Sobel filter, we use the Hough transform, which performs well at eliminating the pectoral muscle, whose gray level is close to that of microcalcifications. Additionally, the enhancement step is achieved by morphology. We put effort into mammographic image preprocessing to achieve lower computational complexity. As is well known, robust analysis of mammograms is difficult due to the low contrast between normal and lesion tissues and the considerable noise in such images. After this preprocessing, a method based on blob detection is applied to find microcalcification clusters according to their specific features. The proposed algorithm employs the Laplace operator to improve the Difference of Gaussians (DoG) function for low-contrast images. A preliminary evaluation of the proposed method is performed on a known public database, namely MIAS, rather than on synthetic images. The comparison experiments and Cohen's kappa coefficients demonstrate that our proposed approach can potentially obtain better microcalcification cluster detection results in terms of accuracy, sensitivity and specificity.
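
The blob-detection core (a thresholded Difference of Gaussians) can be sketched with a separable Gaussian blur in plain numpy. The sigmas and threshold are illustrative, and this omits the paper's Laplace-operator refinement:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur: convolve rows, then columns, with a 1-D kernel."""
    r = int(3 * sigma)
    k = np.exp(-np.arange(-r, r + 1) ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    out = np.apply_along_axis(np.convolve, 1, img.astype(float), k, mode='same')
    return np.apply_along_axis(np.convolve, 0, out, k, mode='same')

def dog_blob_mask(img, sigma1=1.0, sigma2=2.0, thresh=0.1):
    """Bright blobs respond where the narrow blur exceeds the wide one."""
    return (gaussian_blur(img, sigma1) - gaussian_blur(img, sigma2)) > thresh
```

The DoG acts as a band-pass filter tuned to spot-like structures of a few pixels, which matches the appearance of microcalcifications while suppressing both flat background and broad tissue gradients.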

  8. Para-GMRF: parallel algorithm for anomaly detection of hyperspectral image

    NASA Astrophysics Data System (ADS)

    Dong, Chao; Zhao, Huijie; Li, Na; Wang, Wei

    2007-12-01

    The hyperspectral imager is capable of simultaneously collecting hundreds of images corresponding to different wavelength channels for the observed area, which makes it possible to discriminate man-made objects from the natural background. However, the price paid for this wealth of information is an enormous amount of data, usually hundreds of gigabytes per day. Turning this huge volume of data into useful information and knowledge in real time is critical for geoscientists. The proposed parallel Gaussian-Markov random field (Para-GMRF) anomaly detection algorithm is an attempt to apply parallel computing technology to this problem. Based on the locality of the GMRF algorithm, we partition the 3-D hyperspectral image cube in the spatial domain and distribute data blocks to multiple computers for concurrent detection. Meanwhile, to achieve load balance, a work pool scheduler is designed for task assignment. The Para-GMRF algorithm is organized in a master-slave architecture, coded in the C programming language using the message passing interface (MPI) library, and tested on a Beowulf cluster. Experimental results show that the Para-GMRF algorithm successfully meets the challenge and can be used in time-sensitive areas, such as environmental monitoring and battlefield reconnaissance.

  9. The Successive Projections Algorithm for interval selection in trilinear partial least-squares with residual bilinearization.

    PubMed

    Gomes, Adriano de Araújo; Alcaraz, Mirta Raquel; Goicoechea, Hector C; Araújo, Mario Cesar U

    2014-02-01

    In this work the Successive Projections Algorithm is presented for interval selection in N-PLS for three-way data modeling. The proposed algorithm combines the noise-reduction properties of PLS with the possibility of discarding uninformative variables in SPA. In addition, the second-order advantage can be achieved by the residual bilinearization (RBL) procedure when an unexpected constituent is present in a test sample. For this purpose, SPA was modified to select intervals for use in trilinear PLS. The ability of the proposed algorithm, named iSPA-N-PLS, was evaluated on one simulated and two experimental data sets, comparing the results to those obtained by N-PLS. In the simulated system, two analytes were quantitated in two test sets, with and without an unexpected constituent. In the first experimental system, four fluorophores (l-phenylalanine, l-3,4-dihydroxyphenylalanine, 1,4-dihydroxybenzene and l-tryptophan) were determined from excitation-emission data matrices. In the second experimental system, ofloxacin was quantitated in water samples containing two other uncalibrated quinolones (ciprofloxacin and danofloxacin) by high performance liquid chromatography with a UV-vis diode array detector. For comparison, a GA algorithm coupled with N-PLS/RBL was also used in this work. In most of the studied cases iSPA-N-PLS proved to be a promising tool for the selection of variables in second-order calibration, generating models with smaller RMSEP than both the global model using all of the sensors in two dimensions and GA-NPLS/RBL. PMID:24456589

  10. Particle filter-based track before detect algorithms

    NASA Astrophysics Data System (ADS)

    Boers, Yvo; Driessen, Hans

    2003-12-01

    In this paper we give a general system setup that allows the formulation of a wide range of Track Before Detect (TBD) problems, along with a general basic particle filter algorithm for this system. TBD is a technique where tracks are produced directly from raw (radar) measurements, e.g. power or IQ data, without intermediate processing and decision making. The advantage over classical tracking is that the full information is integrated over time, leading to better detection and tracking performance, especially for weak targets. In this paper we look at the filtering and detection aspects of TBD. We formulate a detection result that allows the user to implement any optimal detector in terms of the weights of a running particle filter, and we give a theoretical as well as a numerical (experimental) justification for this. Furthermore, we show that the TBD setup chosen in this paper allows a straightforward extension to the multi-target case; this easy extension is also due to the fact that the solution is implemented by means of a particle filter.

  11. Particle filter-based track before detect algorithms

    NASA Astrophysics Data System (ADS)

    Boers, Yvo; Driessen, Hans

    2004-01-01

    In this paper we give a general system setup that allows the formulation of a wide range of Track Before Detect (TBD) problems, along with a general basic particle filter algorithm for this system. TBD is a technique where tracks are produced directly from raw (radar) measurements, e.g. power or IQ data, without intermediate processing and decision making. The advantage over classical tracking is that the full information is integrated over time, leading to better detection and tracking performance, especially for weak targets. In this paper we look at the filtering and detection aspects of TBD. We formulate a detection result that allows the user to implement any optimal detector in terms of the weights of a running particle filter, and we give a theoretical as well as a numerical (experimental) justification for this. Furthermore, we show that the TBD setup chosen in this paper allows a straightforward extension to the multi-target case; this easy extension is also due to the fact that the solution is implemented by means of a particle filter.
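
A minimal bootstrap particle filter for a one-dimensional TBD problem illustrates the idea: particles carry a kinematic state, and each raw frame re-weights them by the likelihood ratio of the measurement in the particle's cell under target-present versus noise-only hypotheses. All parameters below (amplitude, noise level, process noise) are illustrative assumptions, not values from the papers:

```python
import numpy as np

def tbd_particle_filter(frames, amp=3.0, sigma=1.0, n_particles=2000, seed=0):
    """frames: (T, n_cells) raw amplitude measurements (no thresholding).
    Returns the weighted position estimate for each frame."""
    rng = np.random.default_rng(seed)
    T, n_cells = frames.shape
    pos = rng.uniform(0.0, n_cells, n_particles)
    vel = rng.normal(0.0, 0.5, n_particles)
    estimates = []
    for z in frames:
        # predict: near-constant-velocity motion with process noise
        pos = pos + vel + rng.normal(0.0, 0.3, n_particles)
        vel = vel + rng.normal(0.0, 0.05, n_particles)
        cell = np.clip(pos.astype(int), 0, n_cells - 1)
        # update: Gaussian likelihood ratio N(z; amp, sigma) / N(z; 0, sigma)
        w = np.exp((z[cell] * amp - 0.5 * amp ** 2) / sigma ** 2)
        w /= w.sum()
        estimates.append(float(pos @ w))
        # systematic resampling back to uniform weights
        u = (rng.random() + np.arange(n_particles)) / n_particles
        idx = np.minimum(np.searchsorted(np.cumsum(w), u), n_particles - 1)
        pos, vel = pos[idx], vel[idx]
    return np.array(estimates)
```

Because the raw frame enters only through the per-cell likelihood ratio, no detection threshold is ever applied; weak targets accumulate evidence across frames through the particle weights, which is the integration-over-time advantage the abstract describes.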

  12. Algorithms for lineaments detection in processing of multispectral images

    NASA Astrophysics Data System (ADS)

    Borisova, D.; Jelev, G.; Atanassov, V.; Koprinkova-Hristova, Petia; Alexiev, K.

    2014-10-01

    Satellite remote sensing is a universal tool for investigating the different areas of Earth and environmental sciences. Advances in the optoelectronic devices mounted on board remote sensing platforms, long tested in the laboratory and the field, further improve the ability of instruments to acquire information about the Earth and its resources at global, regional and local scales. With the advent of new high spatial and spectral resolution satellite and aircraft imagery, new applications for large-scale mapping and monitoring become possible. Integration with Geographic Information Systems (GIS) allows synergistic processing of multi-source spatial and spectral data. Here we present the results of the joint project DFNI I01/8, funded by the Bulgarian Science Fund, focused on algorithms for preprocessing and processing spectral data using correction methods together with visual and automatic interpretation. The objects of this study are lineaments: line features on the Earth's surface that are a sign of geological structures. Geological lineaments usually appear in multispectral images as lines, edges or linear shapes resulting from color variations of the surface structures. The basic geometry of a line comprises orientation, length and curvature. The detection of geological lineaments is an important operation in the exploration for mineral deposits, the investigation of active fault patterns, the prospecting of water resources, the protection of people, etc. In this study an integrated approach for detecting lineaments is applied. It combines the visual interpretation of various geological and geographical indications in the multispectral satellite images, spatial analysis in GIS, and the automatic processing of the multispectral images by Canny

  13. Cable Damage Detection System and Algorithms Using Time Domain Reflectometry

    SciTech Connect

    Clark, G A; Robbins, C L; Wade, K A; Souza, P R

    2009-03-24

    This report describes the hardware system and the set of algorithms we have developed for detecting damage in cables for the Advanced Development and Process Technologies (ADAPT) Program, part of the W80 Life Extension Program (LEP). The system could be generalized for application to other systems in the future. Critical cables can undergo various types of damage (e.g. short circuits, open circuits, punctures, compression) that manifest as changes in the dielectric/impedance properties of the cables. For our specific problem, only one end of the cable is accessible, and no exemplars of actual damage are available. This work addresses the detection of dielectric/impedance anomalies in transient time domain reflectometry (TDR) measurements on the cables. The approach is to interrogate the cable using TDR techniques, in which a known pulse is inserted into the cable and reflections from the cable are measured. The key operating principle is that any important cable damage will manifest itself as an electrical impedance discontinuity that can be measured in the TDR response signal. Machine learning classification algorithms are effectively eliminated from consideration because only a small number of cables is available for testing, so a sufficient sample size is not attainable. Nonetheless, a key requirement is to achieve very high probability of detection and very low probability of false alarm. The approach is to compare TDR signals from possibly damaged cables to signals or an empirical model derived from reference cables that are known to be undamaged. This requires that the TDR signals be reasonably repeatable from test to test on the same cable, and from cable to cable. Empirical studies show that repeatability is the 'long pole in the tent' for damage detection, because it has been difficult to achieve reasonable repeatability. This one factor dominated the project.
The two-step model-based approach is

  14. A Novel Algorithm for Cycle Slip Detection and Repair

    NASA Astrophysics Data System (ADS)

    Sezen, U.; Arikan, F.

    2012-04-01

    Accurate and reliable estimation of ionospheric parameters is very important for the correct functioning of communication, navigation and positioning satellite systems. In recent years, dual-frequency GPS receivers have been widely used for estimation of Total Electron Content (TEC), which is defined as the line integral of the electron density along a ray path. Since both electron density and TEC are functions of solar, geomagnetic, gravitational and seismic activity, any disturbance along the ray path can be detected using GPS receiver observables. It is observed that, with the development of recent sophisticated receivers, disruptions due to the receiver antenna, hardware or outside obstructions are minimized. Most of the observed sudden disturbances are signal phase lock losses due to the ionosphere. These sudden phase shifts are called cycle slips and, if not corrected, they may lead to positioning errors or incorrect TEC estimates. There are many methods in the literature that deal with cycle slips and their repair, yet these methods are not mature enough to detect all kinds of cycle slips. Most algorithms require double differencing and/or complicated Kalman filters, wavelet transforms, neural network models, or the integration of external INS systems. In this study, we propose a fast and efficient algorithm for identifying cycle slips on individual observables, classifying them for future investigation, and finally repairing them for more accurate and reliable TEC estimates. The algorithm traces the pseudorange and phase observables and computes the geometry-free combinations L4 and P4. Sudden disturbances on L1, L2, P1, C1 and P2 are classified and noted for further use. In most cases the disruptions are on phase observables, yet on a few occasions a sudden disturbance is also observed on pseudorange observables. The algorithm then checks the epoch section where P4 exists continuously. When a disruption on L1 or L2 occurs, it becomes evident in L4.
When P4
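The geometry-free test sketched in this abstract, flagging a cycle slip when the L4 combination jumps between epochs, can be illustrated as follows. The jump threshold and the simple epoch-difference test are simplifying assumptions for the example; the paper's own classification and repair logic is more elaborate.

```python
import numpy as np

def detect_cycle_slips(L1, L2, threshold=0.1):
    """Flag epochs where the geometry-free phase combination
    L4 = L1 - L2 (both in meters) jumps by more than `threshold`.

    The ionospheric delay contained in L4 varies smoothly between
    epochs, so a sudden jump indicates a phase lock loss (cycle slip)
    on L1 or L2.  Returns the indices of the epochs at which a slip
    first appears.  The threshold is an illustrative choice.
    """
    L4 = np.asarray(L1, dtype=float) - np.asarray(L2, dtype=float)
    jumps = np.abs(np.diff(L4))           # epoch-to-epoch change in L4
    return np.where(jumps > threshold)[0] + 1
```

For example, a slowly drifting pair of phase series produces no flags, while adding roughly one L1 wavelength (about 0.19 m) to all epochs from some point onward is flagged at exactly that epoch.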

  15. Design of infrasound-detection system via adaptive LMSTDE algorithm

    NASA Technical Reports Server (NTRS)

    Khalaf, C. S.; Stoughton, J. W.

    1984-01-01

    A proposed solution to an aviation safety problem is based on passive detection of turbulent weather phenomena through their infrasonic emission. This thesis describes a system design that is adequate for the detection and bearing evaluation of infrasound. An array of four sensors, with the appropriate hardware, is used for the detection part. Bearing evaluation is based on estimates of time delays between sensor outputs. The generalized cross correlation (GCC), as the conventional time-delay estimation (TDE) method, is first reviewed. An adaptive TDE approach, using the least mean square (LMS) algorithm, is then discussed. A comparison between the two techniques is made and the advantages of the adaptive approach are listed. The behavior of the GCC, as a Roth processor, is examined for the anticipated signals. It is shown that the Roth processor has the desired effect of sharpening the peak of the correlation function. It is also shown that the LMSTDE technique is an equivalent implementation of the Roth processor in the time domain. An LMSTDE lead-lag model, with a variable stability coefficient and a convergence criterion, is designed.
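A minimal sketch of LMS-based time-delay estimation as described above: an adaptive FIR filter is trained so that, for a pure delay between two sensor signals, its converged weights peak at the delay in samples. The tap count, step size, and argmax read-out are illustrative choices, not the thesis's exact lead-lag design.

```python
import numpy as np

def lms_time_delay(x, y, n_taps=8, mu=0.01, epochs=5):
    """Estimate the delay of y relative to x with an adaptive LMS filter.

    An FIR filter w is adapted so that w applied to x predicts y; for a
    pure delay the converged weights approximate a unit impulse at the
    delay, so the argmax of |w| is returned as the estimate in samples.
    """
    w = np.zeros(n_taps)
    for _ in range(epochs):
        for n in range(n_taps - 1, len(x)):
            xv = x[n - n_taps + 1:n + 1][::-1]  # xv[k] = x[n - k]
            e = y[n] - w @ xv                   # prediction error
            w += 2.0 * mu * e * xv              # LMS weight update
    return int(np.argmax(np.abs(w)))
```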

  16. Jitter Estimation Algorithms for Detection of Pathological Voices

    NASA Astrophysics Data System (ADS)

    Silva, Dárcio G.; Oliveira, Luís C.; Andrea, Mário

    2009-12-01

    This work is focused on the evaluation of different methods to estimate the amount of jitter present in speech signals. The jitter value is a measure of the irregularity of a quasi-periodic signal and is a good indicator of the presence of pathologies in the larynx such as vocal fold nodules or a vocal fold polyp. Given the irregular nature of the speech signal, each jitter estimation algorithm relies on its own model, making a direct comparison of the results very difficult. For this reason, the evaluation of the different jitter estimation methods was targeted at their ability to detect pathological voices. Two databases were used for this evaluation: a subset of the MEEI database and a smaller database acquired in the scope of this work. The results showed that there were significant differences in the performance of the algorithms being evaluated. Surprisingly, in the larger database the best results were achieved not with the commonly used relative jitter, measured as a percentage of the glottal cycle, but with absolute jitter values measured in microseconds. Also, the newly proposed measure for jitter, LocJitt, performed in general equally to or better than the commonly used tools MDVP and Praat.
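The distinction drawn above between absolute jitter (in microseconds) and relative jitter (as a percentage of the glottal cycle) can be sketched directly from the standard definitions. This is a generic sketch; the paper's proposed LocJitt measure is not reproduced here.

```python
import numpy as np

def jitter_measures(periods_s):
    """Compute absolute and relative jitter from glottal periods.

    periods_s : sequence of consecutive glottal cycle durations in seconds.
    Returns (absolute jitter in microseconds, relative jitter in percent),
    using the standard mean-absolute-difference definitions.
    """
    p = np.asarray(periods_s, dtype=float)
    mean_abs_diff = np.mean(np.abs(np.diff(p)))
    jitta = mean_abs_diff * 1e6            # absolute jitter, microseconds
    jitt = 100.0 * mean_abs_diff / p.mean()  # relative jitter, percent
    return jitta, jitt
```

For a 100 Hz voice alternating between 10.0 ms and 10.2 ms cycles, the absolute jitter is 200 microseconds and the relative jitter is about 2%.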

  17. On the Formal Verification of Conflict Detection Algorithms

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar; Butler, Ricky W.; Carreno, Victor A.; Dowek, Gilles

    2001-01-01

    Safety assessment of new air traffic management systems is a main issue for civil aviation authorities. Standard techniques such as testing and simulation have serious limitations in new systems that are significantly more autonomous than the older ones. In this paper, we present an innovative approach, based on formal verification, for establishing the correctness of conflict detection systems. Fundamental to our approach is the concept of a trajectory, which is a continuous path in the x-y plane constrained by physical laws and operational requirements. From the model of trajectories, we extract, and formally prove, high-level properties that can serve as a framework to analyze conflict scenarios. We use the Airborne Information for Lateral Spacing (AILS) alerting algorithm as a case study of our approach.

  18. Automatic ultrasonic breast lesions detection using support vector machine based algorithm

    NASA Astrophysics Data System (ADS)

    Yeh, Chih-Kuang; Miao, Shan-Jung; Fan, Wei-Che; Chen, Yung-Sheng

    2007-03-01

    It is difficult to automatically detect tumors and extract lesion boundaries in ultrasound images due to the variance in shape, the interference from speckle noise, and the low contrast between objects and background. Enhancement of the ultrasonic image becomes a significant task before performing lesion classification, which was usually done with manual delineation of the tumor boundaries in previous works. In this study, a linear support vector machine (SVM) based algorithm is proposed for ultrasound breast image training and classification, and a disk expansion algorithm is then applied for automatically detecting lesion boundaries. A set of sub-images, including smooth and irregular boundaries in tumor objects and those in speckle-noised background, is trained by the SVM algorithm to produce an optimal classification function. Based on this classification model, each pixel within an ultrasound image is classified as either an object or a background pixel. The enhanced binary image highlights the object and suppresses the speckle noise, and can be regarded as a degraded paint character (DPC) image containing closure noise, a notion well known in the perceptual organization literature of psychology. An effective scheme for removing closure noise using an iterative disk expansion method was successfully demonstrated in our previous works, so the boundary detection of ultrasonic breast lesions can be treated as equivalent to the removal of speckle noise. By applying the disk expansion method to the binary image, we obtain a radius-based image in which the radius at each pixel represents the disk covering the corresponding object information. Finally, a signal transmission process is used for searching the complete breast lesion region, and thus the desired lesion boundary can be effectively and automatically determined. Our algorithm can be performed iteratively until all desired objects are detected. Simulations and clinical images were introduced to

  19. The Shortwave (SW) Clear-Sky Detection and Fitting Algorithm: Algorithm Operational Details and Explanations

    SciTech Connect

    Long, CN; Gaustad, KL

    2004-01-31

    This document describes some specifics of the algorithm for detecting clear skies and fitting clear-sky shortwave (SW) functions described in Long and Ackerman (2000). This algorithm forms the basis of the ARM SW FLUX ANAL 1Long VAP. In the Atmospheric Radiation Measurement (ARM) case, the value-added procedure (VAP) can be described as having three parts: a “front end,” a “black box,” and a “back end.” The “front end” handles the file management of the processing: what range of data files to process in the run, which configuration file to use for each site, extracting the data from the ARM NetCDF files into an ASCII format for the code to process, etc. The “back end” produces ARM-format NetCDF files of the output and performs other file management. The “black box” is the processing code(s), and is what is discussed in this document. Details on the “front” and “back” ends of the ARM VAP are presented elsewhere.

  20. Motion mode recognition and step detection algorithms for mobile phone users.

    PubMed

    Susi, Melania; Renaudin, Valérie; Lachapelle, Gérard

    2013-01-24

    Microelectromechanical Systems (MEMS) technology is playing a key role in the design of the new generation of smartphones. Thanks to their reduced size and power consumption, MEMS sensors can be embedded in such mobile devices to increase their functionality. However, MEMS sensors cannot provide accurate autonomous location without external updates, e.g., from GPS signals, since their signals are degraded by various errors. When these sensors are fixed on the user's foot, the stance phases of the foot can easily be determined and periodic Zero velocity UPdaTes (ZUPTs) are performed to bound the position error. When the sensor is in the hand, the situation becomes much more complex. First, the hand motion can be decoupled from the general motion of the user. Second, the characteristics of the inertial signals can differ depending on the carrying mode. Therefore, algorithms for characterizing the gait cycle of a pedestrian using a handheld device have been developed. A classifier able to detect the motion modes typical of mobile phone users has been designed and implemented, and adaptive step detection algorithms are applied according to the detected motion mode. The success rate of the step detection process is found to be higher than 97% in all motion modes.

  1. Adaptive switching detection algorithm for iterative-MIMO systems to enable power savings

    NASA Astrophysics Data System (ADS)

    Tadza, N.; Laurenson, D.; Thompson, J. S.

    2014-11-01

    This paper attempts to tackle one of the challenges faced in soft-input soft-output Multiple Input Multiple Output (MIMO) detection systems: achieving optimal error rate performance with minimal power consumption. This is realized by proposing a new algorithm design comprising multiple thresholds within the detector that, in real time, specify the receiver behavior according to the current channel in both slow and fast fading conditions, giving it adaptivity. This adaptivity enables energy savings within the system, since the receiver chooses whether to accept or to reject the transmission according to the success rate of the detection thresholds. The thresholds are calculated using the mutual information of the instantaneous channel conditions between the transmitting and receiving antennas of iterative-MIMO systems. In addition, the power saving technique Dynamic Voltage and Frequency Scaling helps to reduce the circuit power demands of the adaptive algorithm. This adaptivity has the potential to save up to 30% of the total energy when implemented on Xilinx® Virtex-5 simulation hardware. Results indicate the benefits of having this "intelligence" in the adaptive algorithm due to the promising performance-complexity tradeoff parameters in both software and hardware co-design simulation.

  2. A frameshift error detection algorithm for DNA sequencing projects.

    PubMed Central

    Fichant, G A; Quentin, Y

    1995-01-01

    During the determination of DNA sequences, frameshift errors are not the most frequent but they are the most bothersome as they corrupt the amino acid sequence over several residues. Detection of such errors by sequence alignment is only possible when related sequences are found in the databases. To avoid this limitation, we have developed a new tool based on the distribution of non-overlapping 3-tuples or 6-tuples in the three frames of an ORF. The method relies upon the result of a correspondence analysis. It has been extensively tested on Bacillus subtilis and Saccharomyces cerevisiae sequences and has also been examined with human sequences. The results indicate that it can detect frameshift errors affecting as few as 20 bp with a low rate of false positives (no more than 1.0/1000 bp scanned). The proposed algorithm can be used to scan a large collection of data, but it is mainly intended for laboratory practice as a tool for checking the quality of the sequences produced during a sequencing project. PMID:7659513
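The raw statistic underlying the method described above, the distribution of non-overlapping 3-tuples in each of the three reading frames of an ORF, can be sketched as follows. Only the counting step is shown; the correspondence analysis built on these counts is not reproduced.

```python
from collections import Counter

def frame_codon_counts(seq):
    """Count non-overlapping 3-tuples in each of the three reading frames.

    seq : a DNA sequence string, e.g. "ATGGCC".
    Returns a list of three Counters, one per frame offset (0, 1, 2);
    a frameshift error changes which frame carries the coding-like
    3-tuple distribution.
    """
    counts = []
    for frame in range(3):
        triplets = (seq[i:i + 3] for i in range(frame, len(seq) - 2, 3))
        counts.append(Counter(t for t in triplets if len(t) == 3))
    return counts
```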

  3. SURF IA Conflict Detection and Resolution Algorithm Evaluation

    NASA Technical Reports Server (NTRS)

    Jones, Denise R.; Chartrand, Ryan C.; Wilson, Sara R.; Commo, Sean A.; Barker, Glover D.

    2012-01-01

    The Enhanced Traffic Situational Awareness on the Airport Surface with Indications and Alerts (SURF IA) algorithm was evaluated in a fast-time batch simulation study at the National Aeronautics and Space Administration (NASA) Langley Research Center. SURF IA is designed to increase flight crew situation awareness of the runway environment and facilitate an appropriate and timely response to potential conflict situations. The purpose of the study was to evaluate the performance of the SURF IA algorithm under various runway scenarios, multiple levels of conflict detection and resolution (CD&R) system equipage, and various levels of horizontal position accuracy. This paper gives an overview of the SURF IA concept, simulation study, and results. Runway incursions are a serious aviation safety hazard. As such, the FAA is committed to reducing the severity, number, and rate of runway incursions by implementing a combination of guidance, education, outreach, training, technology, infrastructure, and risk identification and mitigation initiatives [1]. Progress has been made in reducing the number of serious incursions - from a high of 67 in Fiscal Year (FY) 2000 to 6 in FY2010. However, the rate of all incursions has risen steadily over recent years - from a rate of 12.3 incursions per million operations in FY2005 to a rate of 18.9 incursions per million operations in FY2010 [1, 2]. The National Transportation Safety Board (NTSB) also considers runway incursions to be a serious aviation safety hazard, listing runway incursion prevention as one of their most wanted transportation safety improvements [3]. The NTSB recommends that immediate warning of probable collisions/incursions be given directly to flight crews in the cockpit [4].

  4. The feasibility test of state-of-the-art face detection algorithms for vehicle occupant detection

    NASA Astrophysics Data System (ADS)

    Makrushin, Andrey; Dittmann, Jana; Vielhauer, Claus; Langnickel, Mirko; Kraetzer, Christian

    2010-01-01

    Vehicle seat occupancy detection systems are designed to prevent the deployment of airbags at unoccupied seats, thus avoiding the considerable cost imposed by the replacement of airbags. Occupancy detection can also improve passenger comfort, e.g. by activating air-conditioning systems. The most promising development perspectives are seen in optical sensing systems, which have become cheaper and smaller in recent years. The most plausible way to check seat occupancy is the detection of the presence and location of heads, or more precisely, faces. This paper compares the detection performance of the three most commonly used and widely available face detection algorithms: Viola-Jones, Kienzle et al. and Nilsson et al. The main objective of this work is to identify whether one of these systems is suitable for use in a vehicle environment with variable and mostly non-uniform illumination conditions, and whether any one face detection system can be sufficient for seat occupancy detection. The evaluation of detection performance is based on a large database comprising 53,928 video frames containing proprietary data collected from 39 persons of both sexes and of different ages and body heights, as well as different objects such as bags and rearward/forward-facing child restraint systems.

  5. How Small Can Impact Craters Be Detected at Large Scale by Automated Algorithms?

    NASA Astrophysics Data System (ADS)

    Bandeira, L.; Machado, M.; Pina, P.; Marques, J. S.

    2013-12-01

    intended to be detected: the lower this limit is, the higher the false detection rates are. A detailed evaluation is performed, with results broken down by crater dimension and image or surface type, showing that automated detection in large crater datasets in HiRISE imagery with 25 cm/pixel resolution can be done successfully (high correct and low false positive detection rates) down to a crater dimension of about 8-10 m, or 32-40 pixels. [1] Martins L, Pina P, Marques JS, Silveira M, 2009, Crater detection by a boosting approach. IEEE Geoscience and Remote Sensing Letters 6: 127-131. [2] Salamuniccar G, Loncaric S, Pina P, Bandeira L, Saraiva J, 2011, MA130301GT catalogue of Martian impact craters and advanced evaluation of crater detection algorithms using diverse topography and image datasets. Planetary and Space Science 59: 111-131. [3] Bandeira L, Ding W, Stepinski T, 2012, Detection of sub-kilometer craters in high resolution planetary images using shape and texture features. Advances in Space Research 49: 64-74.

  6. A stereo-vision hazard-detection algorithm to increase planetary lander autonomy

    NASA Astrophysics Data System (ADS)

    Woicke, Svenja; Mooij, Erwin

    2016-05-01

    For future landings on any celestial body, increasing lander autonomy and decreasing risk are primary objectives. Both can be achieved by including hazard detection and avoidance in the guidance, navigation, and control loop. One of the main challenges in hazard detection and avoidance is the reconstruction of accurate elevation models, as well as slope and roughness maps. Multiple methods for acquiring the inputs for hazard maps are available; the main distinction is between active and passive methods. Passive methods (cameras) have budgetary advantages compared to active sensors (radar, light detection and ranging), but it must be proven that they deliver sufficiently good maps. This paper therefore discusses hazard detection using stereo vision. To facilitate a successful landing, no more than 1% missed detections (hazards that are not identified) are allowed. Based on a sensitivity analysis, it was found that using a stereo set-up with a baseline of ≤ 2 m is feasible at altitudes of ≤ 200 m, with fewer than 1% false positives. It was thus shown that stereo-based hazard detection is an effective means to decrease the landing risk and increase lander autonomy. In conclusion, the proposed algorithm is a promising candidate for future landers.

  7. Detection and clustering of features in aerial images by neuron network-based algorithm

    NASA Astrophysics Data System (ADS)

    Vozenilek, Vit

    2015-12-01

    The paper presents an algorithm for the detection and clustering of features in aerial photographs based on artificial neural networks. The presented approach is not focused on the detection of specific topographic features, but on combining general feature analysis with clustering and the backward projection of clusters onto the aerial image. The basis of the algorithm is the calculation of the total error of the network and the adjustment of the network weights to minimize that error. A classic bipolar sigmoid was used as the activation function of the neurons, and basic backpropagation was used for learning. To verify that a set of features is able to represent the image content from the user's perspective, a web application was compiled (ASP.NET on the Microsoft .NET platform). The main findings include the observation that man-made objects in aerial images can be successfully identified by the detection of shapes and anomalies. It was also found that an appropriate combination of comprehensive features describing the colors and selected shapes of individual areas can be useful for image analysis.
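The learning rule mentioned above, gradient descent through a bipolar sigmoid activation, can be sketched for a single neuron. This is a toy illustration of the update direction only, not the paper's full network; the learning rate is an arbitrary choice.

```python
import numpy as np

def bipolar_sigmoid(x):
    """f(x) = 2 / (1 + e^-x) - 1, ranging over (-1, 1)."""
    return 2.0 / (1.0 + np.exp(-x)) - 1.0

def backprop_step(w, x, target, lr=0.1):
    """One gradient-descent update of a single bipolar-sigmoid neuron,
    minimizing the squared error (target - f(w.x))^2."""
    y = bipolar_sigmoid(w @ x)
    dfdx = 0.5 * (1.0 + y) * (1.0 - y)   # derivative of the bipolar sigmoid
    grad = -(target - y) * dfdx * x      # gradient of the squared error
    return w - lr * grad
```

One update from zero weights strictly reduces the squared error for a nonzero target, which is the error-minimization behavior the abstract describes at network scale.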

  8. The infrared moving object detection and security detection related algorithms based on W4 and frame difference

    NASA Astrophysics Data System (ADS)

    Yin, Jiale; Liu, Lei; Li, He; Liu, Qiankun

    2016-07-01

    This paper presents infrared moving object detection and related security detection algorithms for video surveillance based on the classical W4 and frame difference algorithms. The classical W4 algorithm is one of the most powerful background subtraction algorithms for infrared images and can detect moving objects accurately, completely and quickly. However, it can only cope with slight movement of the background; for a long-term surveillance system the error grows ever larger, since the background model is unchanged once established. In this paper, we present a detection algorithm based on the classical W4 and frame difference algorithms. It not only overcomes the false detections caused by state changes in the background, but also eliminates the holes caused by frame differencing. Based on these, we further design various security detection algorithms such as illegal intrusion alarm, illegal persistence alarm and illegal displacement alarm. We compare our method with the classical W4, frame difference, and other state-of-the-art methods. Experiments detailed in this paper show that the proposed method outperforms the classical W4 and frame difference and serves the security detection algorithms well.
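One plausible per-pixel fusion of the two masks described above is a simple intersection, which suppresses false detections from a stale background model; the paper's actual combination rule (including its hole-filling step) is more involved, and the thresholds here are invented for the example.

```python
import numpy as np

def moving_object_mask(frame, prev_frame, background, bg_thresh=25, fd_thresh=15):
    """Combine background subtraction with frame differencing.

    frame, prev_frame, background : 2-D grayscale arrays.
    The background-subtraction mask finds everything that differs from
    the background model; intersecting it with the frame-difference mask
    keeps only pixels that are also changing between frames, discarding
    stale-background false detections.
    """
    bg_mask = np.abs(frame.astype(int) - background.astype(int)) > bg_thresh
    fd_mask = np.abs(frame.astype(int) - prev_frame.astype(int)) > fd_thresh
    return bg_mask & fd_mask
```

With a static background model, an object that has moved to a new position is kept, while its old position (flagged only by frame differencing) is rejected.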

  9. Time series change detection: Algorithms for land cover change

    NASA Astrophysics Data System (ADS)

    Boriah, Shyam

    can be used for decision making and policy planning purposes. In particular, previous change detection studies have primarily relied on examining differences between two or more satellite images acquired on different dates. Thus, a technological solution that detects global land cover change using high temporal resolution time series data will represent a paradigm-shift in the field of land cover change studies. To realize these ambitious goals, a number of computational challenges in spatio-temporal data mining need to be addressed. Specifically, analysis and discovery approaches need to be cognizant of climate and ecosystem data characteristics such as seasonality, non-stationarity/inter-region variability, multi-scale nature, spatio-temporal autocorrelation, high-dimensionality and massive data size. This dissertation, a step in that direction, translates earth science challenges to computer science problems, and provides computational solutions to address these problems. In particular, three key technical capabilities are developed: (1) Algorithms for time series change detection that are effective and can scale up to handle the large size of earth science data; (2) Change detection algorithms that can handle large numbers of missing and noisy values present in satellite data sets; and (3) Spatio-temporal analysis techniques to identify the scale and scope of disturbance events.

  10. Airport Traffic Conflict Detection and Resolution Algorithm Evaluation

    NASA Technical Reports Server (NTRS)

    Jones, Denise R.; Chartrand, Ryan C.; Wilson, Sara R.; Commo, Sean A.; Otero, Sharon D.; Barker, Glover D.

    2012-01-01

    A conflict detection and resolution (CD&R) concept for the terminal maneuvering area (TMA) was evaluated in a fast-time batch simulation study at the National Aeronautics and Space Administration (NASA) Langley Research Center. The CD&R concept is being designed to enhance surface situation awareness and provide cockpit alerts of potential conflicts during runway, taxi, and low altitude air-to-air operations. The purpose of the study was to evaluate the performance of aircraft-based CD&R algorithms in the TMA, as a function of surveillance accuracy. This paper gives an overview of the CD&R concept, simulation study, and results. The Next Generation Air Transportation System (NextGen) concept for the year 2025 and beyond envisions the movement of large numbers of people and goods in a safe, efficient, and reliable manner [1]. NextGen will remove many of the constraints in the current air transportation system, support a wider range of operations, and provide an overall system capacity up to three times that of current operating levels. Emerging NextGen operational concepts [2], such as four-dimensional trajectory based airborne and surface operations, equivalent visual operations, and super density arrival and departure operations, require a different approach to air traffic management and as a result, a dramatic shift in the tasks, roles, and responsibilities for the flight deck and air traffic control (ATC) to ensure a safe, sustainable air transportation system.

  11. A coincidence detection algorithm for improving detection rates in coulomb explosion imaging

    NASA Astrophysics Data System (ADS)

    Wales, Benji; Bisson, Eric; Karimi, Reza; Kieffer, Jean-Claude; Légaré, Francois; Sanderson, Joseph

    2012-03-01

    A scheme for determining true coincidence events in Coulomb Explosion Imaging experiments is reported and compared with a simple design used in recently published work. The new scheme is able to identify any possible coincidence without the use of a priori knowledge of the fragmentation mechanism. Using experimental data from the triatomic molecule OCS, the advanced algorithm is shown to improve acquisition yield by a factor of between 2 and 6, depending on the amount of a priori knowledge included in the simple design search. Monte Carlo simulations for both systems suggest that detection yield can be improved by increasing the number of molecules in the laser focus from the standard ≤1 up to 3.5 and employing the advanced algorithm. Count rates for larger molecules would be preferentially improved, with the rate for six-atom molecules improved by a factor of up to five.
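A generic version of the coincidence test implied above is to search tuples of detected fragments for sets whose vector momentum sum is near zero, the signature of fragments originating from one exploding molecule. The tolerance and the exhaustive search over tuples are assumptions for illustration; the paper's selection criteria are more involved.

```python
import numpy as np
from itertools import combinations

def find_coincidences(momenta, n_frag=3, tol=1.0):
    """Find n_frag-tuples of fragments with near-zero total momentum.

    momenta : list of 3-vectors (numpy arrays) of fragment momenta.
    Returns the index tuples whose vector momentum sum has magnitude
    below `tol`, a simple momentum-conservation coincidence test.
    """
    hits = []
    for combo in combinations(range(len(momenta)), n_frag):
        total = sum(momenta[i] for i in combo)
        if np.linalg.norm(total) < tol:
            hits.append(combo)
    return hits
```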

  12. Asymmetric intimacy and algorithm for detecting communities in bipartite networks

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Qin, Xiaomeng

    2016-11-01

    In this paper, an algorithm for choosing a good partition in bipartite networks is proposed. Bipartite networks have considerable theoretical significance and broad application prospects. In view of the distinctive structure of bipartite networks, our method defines two parameters to capture the relationships between nodes of the same type and between heterogeneous nodes, respectively. Moreover, our algorithm employs a new method of finding and expanding core communities in bipartite networks. The two kinds of nodes are handled separately and merged to obtain sub-communities; objective communities are then found according to the merging rule. The proposed algorithm has been evaluated on real-world and artificial networks, and the results verify the accuracy and reliability of the intimacy parameters. Comparisons with similar algorithms show that the proposed algorithm performs better.

  13. Algorithms for Detecting Significantly Mutated Pathways in Cancer

    NASA Astrophysics Data System (ADS)

    Vandin, Fabio; Upfal, Eli; Raphael, Benjamin J.

    Recent genome sequencing studies have shown that the somatic mutations that drive cancer development are distributed across a large number of genes. This mutational heterogeneity complicates efforts to distinguish functional mutations from sporadic, passenger mutations. Since cancer mutations are hypothesized to target a relatively small number of cellular signaling and regulatory pathways, a common approach is to assess whether known pathways are enriched for mutated genes. However, restricting attention to known pathways will not reveal novel cancer genes or pathways. An alternative strategy is to examine mutated genes in the context of genome-scale interaction networks that include both well characterized pathways and additional gene interactions measured through various approaches. We introduce a computational framework for de novo identification of subnetworks in a large gene interaction network that are mutated in a significant number of patients. This framework includes two major features. First, we introduce a diffusion process on the interaction network to define a local neighborhood of "influence" for each mutated gene in the network. Second, we derive a two-stage multiple hypothesis test to bound the false discovery rate (FDR) associated with the identified subnetworks. We test these algorithms on a large human protein-protein interaction network using mutation data from two recent studies: glioblastoma samples from The Cancer Genome Atlas and lung adenocarcinoma samples from the Tumor Sequencing Project. We successfully recover pathways that are known to be important in these cancers, such as the p53 pathway. We also identify additional pathways, such as the Notch signaling pathway, that have been implicated in other cancers but not previously reported as mutated in these samples.
Our approach is the first, to our knowledge, to demonstrate a computationally efficient strategy for de novo identification of statistically significant mutated subnetworks.
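    The diffusion step can be illustrated with the standard random-walk-with-restart influence measure (the restart parameter `beta` and the column normalization below are one common formulation, not necessarily the paper's exact one):

```python
import numpy as np

def influence_matrix(adj, beta=0.3):
    """Random-walk-with-restart influence F = beta * (I - (1-beta) W)^-1,
    where W is the column-normalized adjacency matrix of a connected
    interaction network.  Column j gives how much 'heat' injected at
    gene j settles on every other gene."""
    W = adj / adj.sum(axis=0, keepdims=True)   # column-stochastic walk matrix
    n = adj.shape[0]
    return beta * np.linalg.inv(np.eye(n) - (1.0 - beta) * W)
```

Each column sums to one, so the influence of a mutated gene is a probability mass concentrated on its network neighborhood.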

  14. Evaluation of stereo vision obstacle detection algorithms for off-road autonomous navigation

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry

    2005-01-01

    Reliable detection of non-traversable hazards is a key requirement for off-road autonomous navigation. A detailed description of each obstacle detection algorithm and its performance on the surveyed obstacle course is presented in this paper.

  15. Algorithm of weak edge detection based on the Nilpotent minimum fusion

    NASA Astrophysics Data System (ADS)

    Sun, Genyun; Zhang, Aizhu; Han, Xujun

    2011-11-01

    To overcome shortcomings of traditional edge detection, such as the loss of weak edges and overly rough detected edges, a new edge detection method based on the Nilpotent minimum fusion is proposed in this paper. First, based on the spatial fuzzy relation of weak edges, the algorithm performs decision fusion with the Nilpotent minimum operator to improve the structure of weak edges. Second, edges are detected from the fusion results, so that the weak edges are recovered. Experiments on a variety of weak-edge images show that the new algorithm overcomes the shortcomings of traditional edge detection, with results much better than those of traditional methods: on the one hand, some weak edges of complex images, such as medical images, are detected; on the other hand, the edges detected by the new algorithm are thinner.
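    The Nilpotent minimum is a standard t-norm, and pixel-wise fusion of two edge-strength maps with it can be sketched as follows (the fusion layout is a minimal illustration, not the paper's full decision procedure):

```python
def nilpotent_min(a, b):
    """Nilpotent minimum t-norm: min(a, b) when a + b > 1, else 0."""
    return min(a, b) if a + b > 1 else 0.0

def fuse(edge_map1, edge_map2):
    """Pixel-wise fusion of two edge-strength maps valued in [0, 1]:
    only pixels that both maps jointly support (a + b > 1) survive."""
    return [[nilpotent_min(a, b) for a, b in zip(r1, r2)]
            for r1, r2 in zip(edge_map1, edge_map2)]
```

The cutoff at `a + b > 1` is what suppresses weakly and inconsistently supported responses while keeping mutually reinforced weak edges.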

  16. Comparative study of adaptive-noise-cancellation algorithms for intrusion detection systems

    SciTech Connect

    Claassen, J.P.; Patterson, M.M.

    1981-01-01

    Some intrusion detection systems are susceptible to nonstationary noise, resulting in frequent nuisance alarms and poor detection when the noise is present. Adaptive inverse filtering for single-channel systems and adaptive noise cancellation for two-channel systems have both demonstrated good potential in removing correlated noise components prior to detection. For such noise-susceptible systems, the suitability of a noise reduction algorithm must be established in a trade-off study weighing algorithm complexity against performance. The performance characteristics of several distinct classes of algorithms are established through comparative computer studies using real signals. The relative merits of the different algorithms are discussed in light of the nature of intruder and noise signals.
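    Two-channel adaptive noise cancellation is classically implemented with the LMS algorithm; a minimal sketch (tap count and step size are illustrative choices, not values from the study):

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=8, mu=0.01):
    """Two-channel adaptive noise cancellation with the LMS algorithm:
    primary = signal + correlated noise, reference = correlated noise only.
    Returns the error signal, i.e. the noise-reduced primary channel."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps - 1, len(primary)):
        x = reference[n - n_taps + 1:n + 1][::-1]  # lags 0..n_taps-1
        y = w @ x                                   # noise estimate
        e = primary[n] - y                          # cancelled output
        w += 2 * mu * e * x                         # LMS weight update
        out[n] = e
    return out
```

Because the intruder signal is uncorrelated with the reference channel, the filter converges toward cancelling only the correlated noise.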

  17. Research of adaptive threshold edge detection algorithm based on statistics canny operator

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Wang, Huaisuo; Huang, Hua

    2015-12-01

    The traditional Canny operator cannot obtain the optimal threshold in different scenes; on this foundation, an improved Canny edge detection algorithm based on an adaptive threshold is proposed. Experiments indicate that the improved algorithm obtains a reasonable threshold and achieves better accuracy and precision in edge detection.
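    The paper's statistical threshold rule is not given in the abstract; a widely used median-based heuristic for choosing adaptive Canny thresholds looks like this (the `sigma` fraction is a common convention, not the paper's method):

```python
import numpy as np

def auto_canny_thresholds(gray, sigma=0.33):
    """Median-based adaptive hysteresis thresholds for the Canny detector,
    a common replacement for hand-tuned fixed values."""
    v = np.median(gray)
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    return lower, upper
```

The pair `(lower, upper)` can then be passed to any Canny implementation (e.g. OpenCV's `cv2.Canny`, if available) in place of fixed thresholds.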

  18. Fault Detection of Roller-Bearings Using Signal Processing and Optimization Algorithms

    PubMed Central

    Kwak, Dae-Ho; Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan

    2014-01-01

    This study presents a fault detection method for roller bearings based on signal processing and optimization techniques. After the occurrence of scratch-type defects on the inner race of bearings, variations of kurtosis values are investigated in terms of two different data processing techniques: minimum entropy deconvolution (MED) and the Teager-Kaiser Energy Operator (TKEO). MED and the TKEO are employed to qualitatively enhance the discrimination of defect-induced repeating peaks in bearing vibration data with measurement noise. The study found that, depending on the execution sequence of MED and the TKEO, the kurtosis sensitivity to a bearing defect could be greatly improved. In addition, the vibration signals from both healthy and damaged bearings are decomposed into multiple intrinsic mode functions (IMFs) through empirical mode decomposition (EMD). The weight vector of the IMFs becomes the design variable for a genetic algorithm (GA), and the weight of each IMF is optimized to enhance the sensitivity of kurtosis on damaged-bearing signals. Experimental results show that the EMD-GA approach successfully improved the resolution of detectability between a roller bearing with a defect and an intact system. PMID:24368701
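    The TKEO and kurtosis used here are standard definitions and can be sketched directly:

```python
import numpy as np

def tkeo(x):
    """Teager-Kaiser Energy Operator: psi[n] = x[n]^2 - x[n-1]*x[n+1].
    Output is two samples shorter than the input."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def kurtosis(x):
    """Pearson kurtosis E[(x-mu)^4] / sigma^4 (Gaussian data gives ~3);
    impulsive defect signatures push this value up sharply."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    return np.mean((x - m) ** 4) / (np.mean((x - m) ** 2) ** 2)
```

For a pure sinusoid the TKEO output is exactly constant, which is why defect-induced impulses stand out so clearly after the operator is applied.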

  19. Successive smoothing algorithm for constructing the semiempirical model developed at ONERA to predict unsteady aerodynamic forces. [aeroelasticity in helicopters

    NASA Technical Reports Server (NTRS)

    Petot, D.; Loiseau, H.

    1982-01-01

    Unsteady aerodynamic methods adopted for the study of aeroelasticity in helicopters are considered with focus on the development of a semiempirical model of unsteady aerodynamic forces acting on an oscillating profile at high incidence. The successive smoothing algorithm described leads to the model's coefficients in a very satisfactory manner.

  20. Face detection in complex background based on Adaboost algorithm and YCbCr skin color model

    NASA Astrophysics Data System (ADS)

    Ge, Wei; Han, Chunling; Quan, Wei

    2015-12-01

    Face detection is a fundamental and important research theme in pattern recognition and computer vision, and remarkable progress has been achieved. Among existing methods, statistics-based methods hold a dominant position. In this paper, the Adaboost algorithm based on Haar-like features is used to detect faces in complex backgrounds. A method combining YCbCr skin color model detection with Adaboost is investigated, in which skin detection is used to validate the detection results obtained by the Adaboost algorithm, overcoming Adaboost's false detection problem. Experimental results show that nearly all non-face areas are removed and the detection rate is improved.
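    A YCbCr skin check commonly used for this kind of validation can be sketched as follows (the BT.601-style RGB-to-YCbCr conversion is standard; the Cb/Cr box bounds are a frequent choice in the skin-detection literature, not necessarily the paper's exact ranges):

```python
def is_skin(r, g, b):
    """Classify an 8-bit RGB pixel as skin using a rectangular region in
    the Cb/Cr chrominance plane (77 <= Cb <= 127, 133 <= Cr <= 173)."""
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return 77 <= cb <= 127 and 133 <= cr <= 173
```

A candidate face window from Adaboost would be accepted only if a sufficient fraction of its pixels pass this test.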

  1. Distributed learning automata-based algorithm for community detection in complex networks

    NASA Astrophysics Data System (ADS)

    Khomami, Mohammad Mehdi Daliri; Rezvanian, Alireza; Meybodi, Mohammad Reza

    2016-03-01

    Community structure is an important and universal topological property of many complex networks such as social and information networks. The detection of communities in a network is a significant technique for understanding the structure and function of networks. In this paper, we propose an algorithm based on distributed learning automata for community detection (DLACD) in complex networks. In the proposed algorithm, each vertex of the network is equipped with a learning automaton. Through cooperation among the network of learning automata and updating of each automaton's action probabilities, the algorithm iteratively tries to identify high-density local communities. The performance of the proposed algorithm is investigated through a number of simulations on popular synthetic and real networks. Experimental results, in comparison with popular community detection algorithms such as Walktrap, Danon greedy optimization, fuzzy community detection, multi-resolution community detection and label propagation, demonstrate the superiority of DLACD in terms of modularity, NMI, performance, min-max-cut and coverage.

  2. Low-Complexity Saliency Detection Algorithm for Fast Perceptual Video Coding

    PubMed Central

    Liu, Pengyu; Jia, Kebin

    2013-01-01

    A low-complexity saliency detection algorithm for perceptual video coding is proposed; low-level encoding information is adopted as the characteristics of visual perception analysis. Firstly, this algorithm employs motion vector (MV) to extract temporal saliency region through fast MV noise filtering and translational MV checking procedure. Secondly, spatial saliency region is detected based on optimal prediction mode distributions in I-frame and P-frame. Then, it combines the spatiotemporal saliency detection results to define the video region of interest (VROI). The simulation results validate that the proposed algorithm can avoid a large amount of computation work in the visual perception characteristics analysis processing compared with other existing algorithms; it also has better performance in saliency detection for videos and can realize fast saliency detection. It can be used as a part of the video standard codec at medium-to-low bit-rates or combined with other algorithms in fast video coding. PMID:24489495

  3. Contaminant detection on poultry carcasses using hyperspectral data: Part I. Algorithms for selection of individual wavebands

    NASA Astrophysics Data System (ADS)

    Nakariyakul, Songyot; Casasent, David P.

    2007-09-01

    Contaminant detection on chicken carcasses is an important product inspection application. The four contaminant types of interest comprise three types of feces from different gastrointestinal regions (duodenum, ceca, and colon) and ingesta (undigested food) from the gizzard. Use of automated or semi-automated inspection systems for detecting fecal contaminant regions is of great interest. Hyperspectral data provided by ARS (Athens, GA) were used to examine detection of contaminants on carcasses. We address quasi-optimal algorithms for selecting a set of spectral bands (wavelengths) in hyperspectral data for on-line contaminant detection (feature selection). We introduce our new improved forward floating selection (IFFS) algorithm and compare its performance to that of other state-of-the-art feature selection algorithms. Our initial results indicate that our method gives an excellent detection rate and performs better than other feature selection algorithms. We also show that combination feature selection algorithms perform worse.
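    The IFFS algorithm itself is not reproduced in the abstract, but its backbone, classical sequential forward floating selection (SFFS), can be sketched as follows (the `score` callback and the simplified floating/termination rule are illustrative):

```python
def sffs(features, score, k):
    """Sequential forward floating selection: greedily add the best
    feature, then conditionally drop an earlier one while doing so
    strictly improves the score.  `score` maps a frozenset of feature
    indices to a figure of merit (e.g. a band-subset detection rate)."""
    selected = []
    while len(selected) < k:
        # forward step: add the single best remaining feature
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(frozenset(selected + [f])))
        selected.append(best)
        # floating step: drop a previously chosen feature while the
        # reduced set scores strictly higher than the current one
        while len(selected) > 2:
            candidates = [f for f in selected if f != best]
            drop = max(candidates, key=lambda f:
                       score(frozenset(s for s in selected if s != f)))
            reduced = [s for s in selected if s != drop]
            if score(frozenset(reduced)) > score(frozenset(selected)):
                selected = reduced
            else:
                break
    return selected
```

The floating step is what lets the method escape the nesting problem of plain forward selection when two bands are individually strong but jointly redundant.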

  4. Parallel S{sub n} Sweeps on Unstructured Grids: Algorithms for Prioritization, Grid Partitioning, and Cycle Detection

    SciTech Connect

    Plimpton, Steven J.; Hendrickson, Bruce; Burns, Shawn P.; McLendon, William III; Rauchwerger, Lawrence

    2005-07-15

    The method of discrete ordinates is commonly used to solve the Boltzmann transport equation. The solution in each ordinate direction is most efficiently computed by sweeping the radiation flux across the computational grid. For unstructured grids this poses many challenges, particularly when implemented on distributed-memory parallel machines where the grid geometry is spread across processors. We present several algorithms relevant to this approach: (a) an asynchronous message-passing algorithm that performs sweeps simultaneously in multiple ordinate directions, (b) a simple geometric heuristic to prioritize the computational tasks that a processor works on, (c) a partitioning algorithm that creates columnar-style decompositions for unstructured grids, and (d) an algorithm for detecting and eliminating cycles that sometimes exist in unstructured grids and can prevent sweeps from successfully completing. Algorithms (a) and (d) are fully parallel; algorithms (b) and (c) can be used in conjunction with (a) to achieve higher parallel efficiencies. We describe our message-passing implementations of these algorithms within a radiation transport package. Performance and scalability results are given for unstructured grids with up to 3 million elements (500 million unknowns) running on thousands of processors of Sandia National Laboratories' Intel Tflops machine and DEC-Alpha CPlant cluster.
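    Cycle detection on the directed sweep-dependency graph (item d) is typically done with a three-color depth-first search; a minimal sketch (the adjacency-dict encoding is illustrative, and every vertex must appear as a key):

```python
def find_cycle(graph):
    """Detect a cycle in a directed graph given as {vertex: [successors]}
    via DFS with three-color marking; returns one cycle or None."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}
    parent = {}

    def dfs(u):
        color[u] = GRAY
        for v in graph.get(u, ()):
            if color[v] == GRAY:            # back edge: cycle found
                cycle, w = [v], u
                while w != v:
                    cycle.append(w)
                    w = parent[w]
                return cycle[::-1]
            if color[v] == WHITE:
                parent[v] = u
                found = dfs(v)
                if found:
                    return found
        color[u] = BLACK
        return None

    for v in graph:
        if color[v] == WHITE:
            found = dfs(v)
            if found:
                return found
    return None
```

Once a cycle is found, one of its edges can be cut (with an old-iterate value substituted) so that the sweep ordering becomes acyclic and can complete.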

  5. Detection of Local/Regional Events in Kuwait Using Next-Generation Detection Algorithms

    SciTech Connect

    Gok, M. Rengin; Al-Jerri, Farra; Dodge, Douglas; Al-Enezi, Abdullah; Hauk, Terri; Mellors, R.

    2014-12-10

    Seismic networks around the world use conventional triggering algorithms to detect seismic signals in order to locate local/regional seismic events. The Kuwait National Seismological Network (KNSN) of the Kuwait Institute for Scientific Research (KISR) operates seven broad-band and short-period three-component stations in Kuwait. The network is equipped with Nanometrics digitizers and uses Antelope and Guralp acquisition software for processing and archiving the data. In this study, we selected 10 days of archived, hourly-segmented continuous data from five stations (Figure 1) and 250 days of continuous recording at MIB. For the temporary deployment, our selection criteria were based on KNSN catalog intensity for the period of time over which we test the method. An autonomous event detection and clustering framework is employed to build a more complete catalog for this short period. The goal is to illustrate the effectiveness of the technique and to pursue the framework over a longer period of time.

  6. Characterizing interplanetary shocks for development and optimization of an automated solar wind shock detection algorithm

    NASA Astrophysics Data System (ADS)

    Cash, M. D.; Wrobel, J. S.; Cosentino, K. C.; Reinard, A. A.

    2014-06-01

    Human evaluation of solar wind data for interplanetary (IP) shock identification relies on both heuristics and pattern recognition, with the former lending itself to algorithmic representation and automation. Such detection algorithms can potentially alert forecasters of approaching shocks, providing increased warning of subsequent geomagnetic storms. However, capturing shocks with an algorithmic treatment alone is challenging, as past and present work demonstrates. We present a statistical analysis of 209 IP shocks observed at L1, and we use this information to optimize a set of shock identification criteria for use with an automated solar wind shock detection algorithm. In order to specify ranges for the threshold values used in our algorithm, we quantify discontinuities in the solar wind density, velocity, temperature, and magnetic field magnitude by analyzing 8 years of IP shocks detected by the SWEPAM and MAG instruments aboard the ACE spacecraft. Although automatic shock detection algorithms have previously been developed, in this paper we conduct a methodical optimization to refine shock identification criteria and present the optimal performance of this and similar approaches. We compute forecast skill scores for over 10,000 permutations of our shock detection criteria in order to identify the set of threshold values that yield optimal forecast skill scores. We then compare our results to previous automatic shock detection algorithms using a standard data set, and our optimized algorithm shows improvements in the reliability of automated shock detection.
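    A minimal sketch of the kind of multi-parameter jump test being optimized; the threshold values below are illustrative placeholders, not the optimized criteria derived in the study:

```python
def is_shock(up, down, min_jumps=None):
    """Fast-forward IP shock test: simultaneous jumps in density n,
    speed v, temperature T and field magnitude B across the
    discontinuity, given upstream/downstream window averages."""
    if min_jumps is None:
        # illustrative thresholds, not the study's optimized values
        min_jumps = {'n': 1.2, 'v': 20.0, 'T': 1.2, 'B': 1.2}
    return (down['n'] / up['n'] >= min_jumps['n'] and
            down['v'] - up['v'] >= min_jumps['v'] and
            down['T'] / up['T'] >= min_jumps['T'] and
            down['B'] / up['B'] >= min_jumps['B'])
```

The optimization described in the paper amounts to sweeping such threshold values over many permutations and scoring each candidate set against the 209-event catalog.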

  7. A biomimetic algorithm for the improved detection of microarray features

    NASA Astrophysics Data System (ADS)

    Nicolau, Dan V., Jr.; Nicolau, Dan V.; Maini, Philip K.

    2007-02-01

    One of the major difficulties of microarray technology relates to the processing of large and, importantly, error-loaded images of the dots on the chip surface. Whatever the source of these errors, those introduced in the first stage of data acquisition (segmentation) are passed down to the subsequent processes, with deleterious results. As it has been demonstrated recently that biological systems have evolved algorithms that are mathematically efficient, this contribution tests an algorithm that mimics a bacterial "patented" strategy for searching available space and nutrients in order to find, zero in on, and eventually delimit the features present on the microarray surface.

  8. Rocketdyne Safety Algorithm: Space Shuttle Main Engine Fault Detection

    NASA Technical Reports Server (NTRS)

    Norman, Arnold M., Jr.

    1994-01-01

    The Rocketdyne Safety Algorithm (RSA) has been developed to the point of use on the TTBE at MSFC on Task 4 of LeRC contract NAS3-25884. This document contains a description of the work performed, the results of the nominal test of the major anomaly test cases and a table of the resulting cutoff times, a plot of the RSA value vs. time for each anomaly case, a logic flow description of the algorithm, the algorithm code, and a development plan for future efforts.

  9. Parallelization of exoplanets detection algorithms based on field rotation; example of the MOODS algorithm for SPHERE

    NASA Astrophysics Data System (ADS)

    Mattei, D.; Smith, I.; Ferrari, A.; Carbillet, M.

    2010-10-01

    Post-processing for exoplanet detection using direct imaging requires large data cubes and/or sophisticated signal processing techniques. For alt-azimuthal mounts, a projection effect called field rotation makes a potential planet rotate in a known manner across the set of images. For ground-based telescopes that use extreme adaptive optics and advanced coronagraphy, techniques based on field rotation are already broadly used and still under development. In most such techniques, for a given initial position of the planet, the planet intensity estimate is a linear function of the set of images. However, due to field rotation, the modified instrumental response is not shift-invariant like usual linear filters, so testing all possible initial positions is very time-consuming. To reduce the processing time, we propose to handle each subset of initial positions on a different machine using parallel programming. In particular, the MOODS algorithm dedicated to the VLT-SPHERE instrument, which jointly estimates the light contributions of the star and the potential exoplanet, is parallelized on the Observatoire de la Cote d'Azur cluster. Different parallelization methods (OpenMP, MPI, job arrays) were developed for the initial MOODS code and compared to each other. The one finally chosen splits the initial positions across the available processors while best accounting for the constraints of the cluster structure: memory, job submission queues, number of available CPUs, and average cluster load. In the end, a standard set of images is satisfactorily processed in a few hours instead of a few days.

  10. A novel algorithm for detecting multiple covariance and clustering of biological sequences

    PubMed Central

    Shen, Wei; Li, Yan

    2016-01-01

    Single genetic mutations are always followed by a set of compensatory mutations. Thus, multiple changes commonly occur in biological sequences and play crucial roles in maintaining conformational and functional stability. Although many methods are available to detect single mutations or covariant pairs, detecting non-synchronous multiple changes at different sites in sequences remains challenging. Here, we develop a novel algorithm, named Fastcov, to identify multiple correlated changes in biological sequences using an independent pair model followed by a tandem model of site-residue elements based on inter-restriction thinking. Fastcov performed exceptionally well at harvesting co-pairs and detecting multiple covariant patterns. By 10-fold cross-validation using datasets of different scales, the characteristic patterns successfully classified the sequences into target groups with an accuracy of greater than 98%. Moreover, we demonstrated that the multiple covariant patterns represent co-evolutionary modes corresponding to the phylogenetic tree, and provide a new understanding of protein structural stability. In contrast to other methods, Fastcov provides not only a reliable and effective approach to identify covariant pairs but also more powerful functions, including multiple covariance detection and sequence classification, that are most useful for studying the point and compensatory mutations caused by natural selection, drug induction, environmental pressure, etc. PMID:27451921

  11. AUTOMATIC DETECTION ALGORITHM OF DYNAMIC PRESSURE PULSES IN THE SOLAR WIND

    SciTech Connect

    Zuo, Pingbing; Feng, Xueshang; Wang, Yi; Xie, Yanqiong; Li, Huijun; Xu, Xiaojun E-mail: fengx@spaceweather.ac.cn

    2015-04-20

    Dynamic pressure pulses (DPPs) in the solar wind are a significant phenomenon closely related to the solar-terrestrial connection and the physical processes of solar wind dynamics. In order to automatically identify DPPs from solar wind measurements, we develop a procedure with a three-step detection algorithm that is able to rapidly select DPPs from the plasma data stream, define the transition region where large dynamic pressure variations occur, and demarcate the upstream and downstream regions by selecting the relatively quiet intervals before and after the abrupt change in dynamic pressure. To demonstrate the usefulness, efficiency, and accuracy of this procedure, we have applied it to Wind observations from 1996 to 2008, successfully identifying the DPPs. The procedure can also be applied to data sets from other solar wind spacecraft with different time resolutions.
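    The three-step procedure is not specified in detail in the abstract, but a windowed scan of this flavor can be sketched (window length, jump ratio, and quiet-level thresholds below are illustrative, not the paper's values):

```python
import numpy as np

def detect_dpp(pdyn, w=30, ratio=2.0, quiet_cv=0.1):
    """Sketch of a dynamic-pressure-pulse detector: flag index i when the
    mean pressure in the window after i differs from the window before i
    by at least `ratio` (up or down), and both windows are relatively
    quiet (coefficient of variation below quiet_cv)."""
    events = []
    for i in range(w, len(pdyn) - w):
        pre, post = pdyn[i - w:i], pdyn[i:i + w]
        r = post.mean() / pre.mean()
        quiet = (pre.std() / pre.mean() < quiet_cv and
                 post.std() / post.mean() < quiet_cv)
        if quiet and (r >= ratio or r <= 1.0 / ratio):
            events.append(i)
    return events
```

The quiet-window requirement is what separates a genuine step-like pulse from generally turbulent intervals.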

  12. Stride search: A general algorithm for storm detection in high resolution climate data

    DOE PAGES

    Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.; Mundt, Miranda

    2015-09-08

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.

  13. Stride search: A general algorithm for storm detection in high resolution climate data

    SciTech Connect

    Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.; Mundt, Miranda

    2015-09-08

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.
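    The key idea, a longitude stride that grows as 1/cos(latitude) so every latitude row is searched with roughly constant physical spacing, can be sketched as follows (row spacing and sector layout are illustrative):

```python
import math

def stride_points(dlat_deg=2.0):
    """Generate search-sector centers with fixed arc length between
    longitudes: the longitude stride grows as 1/cos(lat), so polar rows
    get a few sectors instead of the redundant many of a uniform
    grid-point search."""
    points = []
    lat = -90.0 + dlat_deg / 2
    while lat < 90.0:
        # same physical east-west spacing at every latitude
        dlon = dlat_deg / max(math.cos(math.radians(lat)), 1e-6)
        n = max(1, int(360.0 / dlon))
        points.extend((lat, i * 360.0 / n) for i in range(n))
        lat += dlat_deg
    return points
```

Each center then anchors a fixed-radius search sector, so the per-sector work is latitude-independent, unlike a lat-lon grid whose cells shrink toward the poles.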

  14. A general-purpose contact detection algorithm for nonlinear structural analysis codes

    SciTech Connect

    Heinstein, M.W.; Attaway, S.W.; Swegle, J.W.; Mello, F.J.

    1993-05-01

    A new contact detection algorithm has been developed to address difficulties associated with the numerical simulation of contact in nonlinear finite element structural analysis codes. Problems including accurate and efficient detection of contact for self-contacting surfaces, tearing and eroding surfaces, and multi-body impact are addressed. The proposed algorithm is portable between dynamic and quasi-static codes and can efficiently model contact between a variety of finite element types including shells, bricks, beams and particles. The algorithm is composed of (1) a location strategy that uses a global search to decide which slave nodes are in proximity to a master surface and (2) an accurate detailed contact check that uses the projected motions of both master surface and slave node. In this report, currently used contact detection algorithms and their associated difficulties are discussed. Then the proposed algorithm and how it addresses these problems is described. Finally, the capability of the new algorithm is illustrated with several example problems.

  15. A novel algorithm for real-time adaptive signal detection and identification

    SciTech Connect

    Sleefe, G.E.; Ladd, M.D.; Gallegos, D.E.; Sicking, C.W.; Erteza, I.A.

    1998-04-01

    This paper describes a novel digital signal processing algorithm for adaptively detecting and identifying signals buried in noise. The algorithm continually computes and updates the long-term statistics and spectral characteristics of the background noise. Using this noise model, a set of adaptive thresholds and matched digital filters are implemented to enhance and detect signals that are buried in the noise. The algorithm furthermore automatically suppresses coherent noise sources and adapts to time-varying signal conditions. Signal detection is performed in both the time domain and the frequency domain, thereby permitting the detection of both broad-band transients and narrow-band signals. The detection algorithm also provides for the computation of important signal features such as amplitude, timing, and phase information. Signal identification is achieved through a combination of frequency-domain template matching and spectral peak picking. The algorithm described herein is well suited for real-time implementation on digital signal processing hardware. This paper presents the theory of the adaptive algorithm, provides an algorithmic block diagram, and demonstrates its implementation and performance with real-world data. The computational efficiency of the algorithm is demonstrated through benchmarks on specific DSP hardware. The applications for this algorithm, which range from vibration analysis to real-time image processing, are also discussed.
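    The time-domain part of such a detector can be sketched as a trailing-window adaptive threshold (window length and the k-sigma factor are illustrative, not the paper's design):

```python
import numpy as np

def adaptive_detect(x, w=100, k=5.0):
    """Flag samples exceeding mean + k*std of a trailing noise window:
    a minimal version of adaptive-threshold transient detection.  The
    noise model (window statistics) updates continuously as data arrive."""
    hits = []
    for n in range(w, len(x)):
        noise = x[n - w:n]
        if x[n] > noise.mean() + k * noise.std():
            hits.append(n)
    return hits
```

A full system would also exclude flagged samples from the noise window and run a matched-filter bank in parallel; this sketch shows only the adaptive-threshold core.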

  16. Real-time test of MOCS algorithm during Superflux 1980. [ocean color algorithm for remotely detecting suspended solids

    NASA Technical Reports Server (NTRS)

    Grew, G. W.

    1981-01-01

    A remote sensing experiment was conducted in which success depended upon the real-time use of an algorithm, generated from MOCS (multichannel ocean color sensor) data onboard the NASA P-3 aircraft, to direct the NOAA ship Kelez to oceanic stations where vitally needed sea truth could be collected. Remote data sets collected on two consecutive days of the mission were consistent with the sea truth for low concentrations of chlorophyll a. Two oceanic regions of special interest were located. The algorithm and the collected data are described.

  17. Distributed edge detection algorithm based on wavelet transform for wireless video sensor network

    NASA Astrophysics Data System (ADS)

    Li, Qiulin; Hao, Qun; Song, Yong; Wang, Dongsheng

    2010-12-01

    Edge detection algorithms are critical to image processing and computer vision, but traditional algorithms are not suitable for wireless video sensor networks (WVSN), in which nodes have limited computational capability and resources. In this paper, a distributed edge detection algorithm based on the wavelet transform is proposed for WVSN. The wavelet transform decomposes the image into several parts, which are assigned to different nodes over the wireless network. Each node runs the sub-image edge detection algorithm, and all results are sent to the sink node, where fusion and synthesis, including image binarization and edge linking, are executed to output the final edge image. A lifting scheme and a parallel distributed algorithm are adopted to improve efficiency while decreasing computational complexity. Experimental results show that this method achieves higher efficiency and better results.

  18. Distributed edge detection algorithm based on wavelet transform for wireless video sensor network

    NASA Astrophysics Data System (ADS)

    Li, Qiulin; Hao, Qun; Song, Yong; Wang, Dongsheng

    2011-05-01

    Edge detection algorithms are critical to image processing and computer vision, but traditional algorithms are not suitable for wireless video sensor networks (WVSN), in which nodes have limited computational capability and resources. In this paper, a distributed edge detection algorithm based on the wavelet transform is proposed for WVSN. The wavelet transform decomposes the image into several parts, which are assigned to different nodes over the wireless network. Each node runs the sub-image edge detection algorithm, and all results are sent to the sink node, where fusion and synthesis, including image binarization and edge linking, are executed to output the final edge image. A lifting scheme and a parallel distributed algorithm are adopted to improve efficiency while decreasing computational complexity. Experimental results show that this method achieves higher efficiency and better results.
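    The lifting scheme mentioned above can be illustrated with one level of the Haar lifting transform, which is exactly invertible and cheap enough for sensor nodes (a generic sketch, not the paper's full wavelet pipeline):

```python
def haar_lifting(signal):
    """One level of the lifting-scheme Haar transform: split a row into
    coarse (s) and detail (d) parts before distributing work to nodes."""
    even, odd = signal[0::2], signal[1::2]
    d = [o - e for e, o in zip(even, odd)]          # predict step
    s = [e + di / 2 for e, di in zip(even, d)]      # update step
    return s, d

def haar_inverse(s, d):
    """Undo the lifting steps exactly, in reverse order."""
    even = [si - di / 2 for si, di in zip(s, d)]
    odd = [di + e for di, e in zip(d, even)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out
```

Lifting needs only additions and halvings done in place, which is why it suits nodes with little memory and computing power.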

  19. An infrared small target detection algorithm based on high-speed local contrast method

    NASA Astrophysics Data System (ADS)

    Cui, Zheng; Yang, Jingli; Jiang, Shouda; Li, Junbao

    2016-05-01

    Small-target detection in infrared imagery with a complex background is an important task in remote sensing. It is desirable to improve detection capabilities such as detection rate, false alarm rate, and speed, but current algorithms usually improve one or two of these capabilities while sacrificing the others. In this letter, an infrared (IR) small target detection algorithm with two layers, inspired by the Human Visual System (HVS), is proposed to balance these capabilities. The first layer uses a high-speed simplified local contrast method to select significant information, and the second layer uses a machine learning classifier to separate targets from background clutter. Experimental results show the proposed algorithm achieves good performance in detection rate, false alarm rate, and speed simultaneously.
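    A local contrast measure of the kind used in the first layer can be sketched as follows (this is the common L_max²/max-neighbor-mean form from the literature; cell size and layout are illustrative, not the paper's simplified variant):

```python
import numpy as np

def local_contrast(img, cell=3):
    """For each cell, compute (max gray in cell)^2 divided by the largest
    mean of the 8 neighboring cells.  Bright small targets give values
    far above those of background cells."""
    h, w = img.shape
    nh, nw = h // cell, w // cell
    # carve the image into a (nh, nw) grid of cell x cell blocks
    cells = img[:nh * cell, :nw * cell].reshape(
        nh, cell, nw, cell).transpose(0, 2, 1, 3)
    maxes = cells.max(axis=(2, 3))
    means = cells.mean(axis=(2, 3))
    out = np.zeros((nh, nw))
    for i in range(nh):
        for j in range(nw):
            nb = [means[a, b]
                  for a in range(max(0, i - 1), min(nh, i + 2))
                  for b in range(max(0, j - 1), min(nw, j + 2))
                  if (a, b) != (i, j)]
            out[i, j] = maxes[i, j] ** 2 / max(nb)
    return out
```

Thresholding this map gives the candidate regions the second-layer classifier then examines.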

  20. Scale-space point spread function based framework to boost infrared target detection algorithms

    NASA Astrophysics Data System (ADS)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2016-07-01

    Small target detection is one of the major concerns in the development of infrared surveillance systems. Detection algorithms based on Gaussian target modeling have attracted the most attention from researchers in this field; however, the lack of accurate target modeling limits the performance of such infrared small target detection algorithms. In this paper, the signal-to-clutter ratio (SCR) improvement mechanism of the matched filter is described in detail, and the effect of the point spread function (PSF) on the intensity and spatial distribution of target pixels is clarified comprehensively. A new parametric model for small infrared targets is then developed based on the PSF of the imaging system, which can be considered a matched filter. Based on this model, a new framework to boost model-based infrared target detection algorithms is presented. To demonstrate its performance, the proposed model is adopted in the Laplacian scale-space algorithm, a well-known algorithm in the small infrared target detection field. Simulation results show that the proposed framework has better detection performance than the Gaussian one and improves the overall performance of the IRST system. A quantitative analysis of the proposed framework shows at least 20% improvement in output SCR values compared with the Laplacian of Gaussian (LoG) algorithm.
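    The SCR mechanism the paper analyzes can be illustrated with a toy 1-D version (hypothetical helper names; the paper works with a 2-D PSF-derived template): correlating the signal with a template matched to the target shape concentrates the target energy at one location, raising it above the clutter.

```python
def matched_filter(signal, template):
    """Correlate the signal with a known target template (the PSF-based
    model plays this role in the paper); peaks mark likely targets."""
    k = len(template)
    return [sum(signal[i + j] * template[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def scr(signal, target_idx, guard=1):
    """Signal-to-clutter ratio: target amplitude above the clutter mean,
    divided by the clutter standard deviation (target and guard cells
    around it are excluded from the clutter estimate)."""
    clutter = [v for i, v in enumerate(signal) if abs(i - target_idx) > guard]
    mean_c = sum(clutter) / len(clutter)
    var_c = sum((v - mean_c) ** 2 for v in clutter) / len(clutter)
    return (signal[target_idx] - mean_c) / (var_c ** 0.5)
```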

  1. Design considerations for flight test of a fault inferring nonlinear detection system algorithm for avionics sensors

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Godiwala, P. M.; Morrell, F. R.

    1986-01-01

    The modifications to the design of a fault inferring nonlinear detection system (FINDS) algorithm to accommodate flight computer constraints and the resulting impact on the algorithm performance are summarized. An overview of the flight data-driven FINDS algorithm is presented. This is followed by a brief analysis of the effects of modifications to the algorithm on program size and execution speed. Significant improvements in estimation performance for the aircraft states and normal operating sensor biases, which have resulted from improved noise design parameters and a new steady-state wind model, are documented. The aircraft state and sensor bias estimation performances of the algorithm's extended Kalman filter are presented as a function of update frequency of the piecewise constant filter gains. The results of a new detection system strategy and failure detection performance, as a function of gain update frequency, are also presented.

  2. [A Hyperspectral Imagery Anomaly Detection Algorithm Based on Gauss-Markov Model].

    PubMed

    Gao, Kun; Liu, Ying; Wang, Li-jing; Zhu, Zhen-yu; Cheng, Hao-bo

    2015-10-01

    With the development of spectral imaging technology, hyperspectral anomaly detection is increasingly widely used in remote sensing imagery processing. The traditional RX anomaly detection algorithm neglects the spatial correlation of images, and it does not effectively reduce the data dimension, which costs too much processing time and shows low validity on hyperspectral data. Hyperspectral images follow a Gauss-Markov Random Field (GMRF) in the spatial and spectral dimensions, and the inverse of the covariance matrix can be calculated directly from the Gauss-Markov parameters, which avoids the huge computation over hyperspectral data. This paper proposes an improved RX anomaly detection algorithm based on a three-dimensional GMRF. The hyperspectral imagery is simulated with the GMRF model, and the GMRF parameters are estimated with the approximated maximum likelihood method. The detection operator is constructed from the estimated GMRF parameters. Each pixel under test is taken as the centre of a local optimization window, called the GMRF detection window; the anomaly score is calculated from the mean vector and inverse covariance matrix computed within that window, and the image is processed pixel by pixel as the window moves. The traditional RX detection algorithm, a regional hypothesis detection algorithm based on GMRF, and the proposed algorithm are evaluated on AVIRIS hyperspectral data. Simulation results show that the proposed anomaly detection method improves detection efficiency and reduces the false alarm rate. Operation time statistics for the three algorithms in the same computing environment show that the proposed algorithm reduces operation time by 45.2%, demonstrating good computational efficiency.
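    For context, the baseline RX operator referenced above scores each pixel by the Mahalanobis distance of its spectrum from the background; the paper's contribution is to obtain the inverse covariance from GMRF parameters instead of inverting a large matrix directly. A two-band toy sketch of the baseline operator only:

```python
def inv2x2(m):
    """Inverse of a 2x2 covariance matrix (enough for this toy example)."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]

def rx_score(pixel, mean, cov_inv):
    """RX anomaly score: squared Mahalanobis distance
    (x - mu)^T Sigma^{-1} (x - mu) of a pixel spectrum from the background."""
    d = [p - m for p, m in zip(pixel, mean)]
    tmp = [sum(cov_inv[i][j] * d[j] for j in range(len(d)))
           for i in range(len(d))]
    return sum(di * ti for di, ti in zip(d, tmp))
```

    Pixels whose score exceeds a threshold are declared anomalous.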

  3. [A Hyperspectral Imagery Anomaly Detection Algorithm Based on Gauss-Markov Model].

    PubMed

    Gao, Kun; Liu, Ying; Wang, Li-jing; Zhu, Zhen-yu; Cheng, Hao-bo

    2015-10-01

    With the development of spectral imaging technology, hyperspectral anomaly detection is increasingly widely used in remote sensing imagery processing. The traditional RX anomaly detection algorithm neglects the spatial correlation of images, and it does not effectively reduce the data dimension, which costs too much processing time and shows low validity on hyperspectral data. Hyperspectral images follow a Gauss-Markov Random Field (GMRF) in the spatial and spectral dimensions, and the inverse of the covariance matrix can be calculated directly from the Gauss-Markov parameters, which avoids the huge computation over hyperspectral data. This paper proposes an improved RX anomaly detection algorithm based on a three-dimensional GMRF. The hyperspectral imagery is simulated with the GMRF model, and the GMRF parameters are estimated with the approximated maximum likelihood method. The detection operator is constructed from the estimated GMRF parameters. Each pixel under test is taken as the centre of a local optimization window, called the GMRF detection window; the anomaly score is calculated from the mean vector and inverse covariance matrix computed within that window, and the image is processed pixel by pixel as the window moves. The traditional RX detection algorithm, a regional hypothesis detection algorithm based on GMRF, and the proposed algorithm are evaluated on AVIRIS hyperspectral data. Simulation results show that the proposed anomaly detection method improves detection efficiency and reduces the false alarm rate. Operation time statistics for the three algorithms in the same computing environment show that the proposed algorithm reduces operation time by 45.2%, demonstrating good computational efficiency. PMID:26904830

  4. Multi-pattern string matching algorithms comparison for intrusion detection system

    NASA Astrophysics Data System (ADS)

    Hasan, Awsan A.; Rashid, Nur'Aini Abdul; Abdulrazzaq, Atheer A.

    2014-12-01

    Computer networks are developing exponentially and running at high speeds. With the increasing number of Internet users, computers have become the preferred target for complex attacks that require complex analyses to be detected. The intrusion detection system (IDS) has become an important part of any modern network, protecting it from attacks. The IDS relies on string matching algorithms to identify network attacks, but these algorithms consume a considerable amount of IDS processing time, thereby slowing down the IDS. A new algorithm that can overcome this weakness needs to be developed; improving the multi-pattern matching algorithm ensures that an IDS can work properly and its limitations can be overcome. In this paper, we compare our three multi-pattern matching algorithms, MP-KR, MPH-QS, and MPH-BMH, with their corresponding original algorithms KR, QS, and BMH, respectively. The experiments show that MPH-QS performs best among the proposed algorithms, followed by MPH-BMH, with MP-KR the slowest. MPH-QS detects a large number of signature patterns in a short time compared to the other two algorithms. This finding suggests that the multi-pattern matching algorithms are more efficient in high-speed networks.
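    Of the base algorithms compared above, Boyer-Moore-Horspool (BMH) is the easiest to sketch. The single-pattern version below shows the bad-character shift that a multi-pattern variant such as MPH-BMH builds on (the multi-pattern extension itself is not reproduced here):

```python
def horspool_search(text, pattern):
    """Boyer-Moore-Horspool: on a mismatch, shift the window by the
    distance from the last occurrence of the window's final character
    inside the pattern; return the first match index or -1."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return -1
    # Precomputed shift table over all pattern characters but the last.
    shift = {c: m - 1 - i for i, c in enumerate(pattern[:-1])}
    i = 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            return i
        i += shift.get(text[i + m - 1], m)  # full shift for unseen chars
    return -1
```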

  5. Enhanced Detection of Multivariate Outliers Using Algorithm-Based Visual Display Techniques.

    ERIC Educational Resources Information Center

    Dickinson, Wendy B.

    This study uses an algorithm-based visual display technique (FACES) to provide enhanced detection of multivariate outliers within large-scale data sets. The FACES computer graphing algorithm (H. Chernoff, 1973) constructs a cartoon-like face, using up to 18 variables for each case. A major advantage of FACES is the ability to store and show the…

  6. Multispectral fluorescence image algorithms for detection of frass on mature tomatoes

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet LED excitation was developed for the detection of frass contamination on mature tomatoes. The algorithm utilized the fluorescence intensities at five wavebands, 515 nm, 640 nm, 664 nm, 690 nm, and 724 nm...

  7. Parallel contact detection algorithm for transient solid dynamics simulations using PRONTO3D

    SciTech Connect

    Attaway, S.W.; Hendrickson, B.A.; Plimpton, S.J.

    1996-09-01

    An efficient, scalable, parallel algorithm for treating material surface contacts in solid mechanics finite element programs has been implemented in a modular way for MIMD parallel computers. The serial contact detection algorithm that was developed previously for the transient dynamics finite element code PRONTO3D has been extended for use in parallel computation by devising a dynamic (adaptive) processor load balancing scheme.

  8. A real-time implementation of an advanced sensor failure detection, isolation, and accommodation algorithm

    NASA Technical Reports Server (NTRS)

    Delaat, J. C.; Merrill, W. C.

    1983-01-01

    A sensor failure detection, isolation, and accommodation algorithm was developed which incorporates analytic sensor redundancy through software. This algorithm was implemented in a high level language on a microprocessor based controls computer. Parallel processing and state-of-the-art 16-bit microprocessors are used along with efficient programming practices to achieve real-time operation.

  9. Creation of an Accurate Algorithm to Detect Snellen Best Documented Visual Acuity from Ophthalmology Electronic Health Record Notes

    PubMed Central

    French, Dustin D; Gill, Manjot; Mitchell, Christopher; Jackson, Kathryn; Kho, Abel; Bryar, Paul J

    2016-01-01

    Background: Visual acuity is the primary measure used in ophthalmology to determine how well a patient can see. Visual acuity for a single eye may be recorded in multiple ways for a single patient visit (eg, Snellen vs. Jäger units vs. font print size), and may be recorded for either distance or near vision. Capturing the best documented visual acuity (BDVA) of each eye in an individual patient visit is an important step toward making electronic ophthalmology clinical notes useful in research. Objective: Currently, there is limited methodology for capturing BDVA in an efficient and accurate manner from electronic health record (EHR) notes. We developed an algorithm to detect BDVA for right and left eyes from defined fields within electronic ophthalmology clinical notes. Methods: We designed an algorithm to detect the BDVA from defined fields within 295,218 ophthalmology clinical notes with visual acuity data present. A total of 5668 unique responses were identified, and an algorithm was developed to map all of the unique responses to a structured list of Snellen visual acuities. Results: Visual acuity was captured from a total of 295,218 ophthalmology clinical notes during the study dates. The algorithm identified all visual acuities in the defined visual acuity section for each eye and returned a single BDVA for each eye. A clinician chart review of 100 random patient notes showed 99% accuracy in detecting BDVA from these records, with 1% observed error. Conclusions: Our algorithm successfully captures the best documented Snellen distance visual acuity from ophthalmology clinical notes and transforms a variety of inputs into a structured Snellen equivalent list. Our work, to the best of our knowledge, represents the first attempt at capturing visual acuity accurately from large numbers of electronic ophthalmology notes. Use of this algorithm can benefit research groups interested in assessing visual acuity for patient-centered outcomes. All codes used for this study are currently
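    The response-to-Snellen mapping step can be sketched as below. This is a hypothetical miniature, not the published code: the real algorithm maps 5668 observed strings from defined EHR fields, while this toy uses a small ordered table and regex normalization.

```python
import re

# Illustrative Snellen scale, ordered worst to best (toy subset only).
SNELLEN_ORDER = ["20/200", "20/100", "20/70", "20/50", "20/40",
                 "20/30", "20/25", "20/20", "20/15"]

def normalize(raw):
    """Pull a Snellen fraction out of one raw visual-acuity entry, if any."""
    m = re.search(r"20/\d+", raw.strip().upper())
    return m.group(0) if m else None

def best_documented_va(entries):
    """Best documented acuity for one eye: the highest-ranked Snellen
    value among all entries recorded for the visit, or None."""
    found = [v for v in map(normalize, entries) if v in SNELLEN_ORDER]
    return max(found, key=SNELLEN_ORDER.index) if found else None
```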

  10. [Tachycardia detection in implantable cardioverter-defibrillators by Sorin/LivaNova : Algorithms, pearls and pitfalls].

    PubMed

    Kolb, Christof; Ocklenburg, Rolf

    2016-09-01

    For physicians involved in the treatment of patients with implantable cardioverter-defibrillators (ICDs) the knowledge of tachycardia detection algorithms is of paramount importance. This knowledge is essential for adequate device selection during de-novo implantation, ICD replacement, and for troubleshooting during follow-up. This review describes tachycardia detection algorithms incorporated in ICDs by Sorin/LivaNova and analyses their strengths and weaknesses.

  11. [Tachycardia detection in implantable cardioverter-defibrillators by Sorin/LivaNova : Algorithms, pearls and pitfalls].

    PubMed

    Kolb, Christof; Ocklenburg, Rolf

    2016-09-01

    For physicians involved in the treatment of patients with implantable cardioverter-defibrillators (ICDs) the knowledge of tachycardia detection algorithms is of paramount importance. This knowledge is essential for adequate device selection during de-novo implantation, ICD replacement, and for troubleshooting during follow-up. This review describes tachycardia detection algorithms incorporated in ICDs by Sorin/LivaNova and analyses their strengths and weaknesses. PMID:27605232

  12. RANSAC-based EM algorithm for robust detection and segmentation of cylindrical fragments from calibrated C-arm images

    NASA Astrophysics Data System (ADS)

    Zheng, Guoyan; Dong, Xiao; Zhang, Xuan

    2006-03-01

    Automated identification, pose and size estimation of cylindrical fragments from registered C-arm images is highly desirable in various computer-assisted, fluoroscopy-based applications including long bone fracture reduction and intramedullary nailing, where the pose and size of the bone fragment need to be accurately estimated for better treatment. In this paper, a RANSAC-based EM algorithm for robust detection and segmentation of cylindrical fragments from calibrated C-arm images is presented. By detection, we mean that the axes and radii of the principal fragments are automatically determined; by segmentation, we mean that the contour of the fragment projection onto each image plane is automatically extracted. Benefiting from the cylindrical shape of the fragments, we formulate detection as an optimization process that fits a parameterized three-dimensional (3D) cylinder model to the images. A RANSAC-based EM algorithm is proposed to find the optimal solution by converting the fragment detection procedure to an iterative closest point (ICP) matching procedure. The outer projection boundary of the estimated cylinder model is then fed to a region-based active contour model to robustly extract the contour of the fragment projection. The proposed algorithm has been successfully applied to real patient data with and without external objects, yielding promising results.
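    The RANSAC half of the pipeline can be illustrated with a generic 2-D line fit. This is a stand-in sketch only: the paper fits a parameterized 3-D cylinder and refines it with an EM/ICP loop, which is not reproduced here.

```python
import random

def ransac_line(points, iters=200, tol=0.5, seed=0):
    """Generic 2-D RANSAC: repeatedly fit a line ax + by + c = 0 through
    two random points and keep the hypothesis with the most inliers."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2
        norm = (a * a + b * b) ** 0.5
        if norm == 0:
            continue  # degenerate sample (identical points)
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) / norm <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

    The consensus set (inliers) would then seed the model-refinement stage.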

  13. Comparison of human observer and algorithmic target detection in nonurban forward-looking infrared imagery

    NASA Astrophysics Data System (ADS)

    Weber, Bruce A.

    2005-07-01

    We have performed an experiment that compares the performance of human observers with that of a robust algorithm for the detection of targets in difficult, nonurban forward-looking infrared imagery. Our purpose was to benchmark the comparison and document performance differences for future algorithm improvement. The scale-insensitive detection algorithm, used as a benchmark by the Night Vision Electronic Sensors Directorate for algorithm evaluation, employed a combination of contrastlike features to locate targets. Detection receiver operating characteristic curves and observer-confidence analyses were used to compare human and algorithmic responses and to gain insight into differences. The test database contained ground targets, in natural clutter, whose detectability, as judged by human observers, ranged from easy to very difficult. In general, as compared with human observers, the algorithm detected most of the same targets, but correlated confidence with correct detections poorly and produced many more false alarms at any useful level of performance. Though characterizing human performance was not the intent of this study, results suggest that previous observational experience was not a strong predictor of human performance, and that combining individual human observations by majority vote significantly reduced false-alarm rates.

  14. Flight test results of failure detection and isolation algorithms for a redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Morrell, F. R.; Motyka, P. R.; Bailey, M. L.

    1990-01-01

    Flight test results for two sensor fault-tolerant algorithms developed for a redundant strapdown inertial measurement unit are presented. The inertial measurement unit (IMU) consists of four two-degrees-of-freedom gyros and accelerometers mounted on the faces of a semi-octahedron. Fault tolerance is provided by edge vector test and generalized likelihood test algorithms, each of which can provide dual fail-operational capability for the IMU. To detect the wide range of failure magnitudes in inertial sensors, which provide flight-crucial information for flight control and navigation, failure detection and isolation are developed in terms of a multi-level structure. Threshold compensation techniques, developed to enhance the sensitivity of the failure detection process to navigation-level failures, are presented. Four flight tests were conducted in a commercial transport-type environment to compare and determine the performance of the failure detection and isolation methods. Dual flight processors enabled concurrent tests for the algorithms. Failure signals such as hard-over, null, or bias shift were added to the sensor outputs as simple or multiple failures during the flights. Both algorithms provided timely detection and isolation of flight control level failures. The generalized likelihood test algorithm provided more timely detection of low-level sensor failures, but it produced one false isolation. Both algorithms demonstrated the capability to provide dual fail-operational performance for the skewed array of inertial sensors.

  15. The research of moving objects behavior detection and tracking algorithm in aerial video

    NASA Astrophysics Data System (ADS)

    Yang, Le-le; Li, Xin; Yang, Xiao-ping; Li, Dong-hui

    2015-12-01

    This article focuses on moving target detection and tracking algorithms for aerial video surveillance. The study covers moving target detection, moving target behavior analysis, and automatic target tracking. For moving target detection, considering the characteristics of background subtraction and the frame difference method, a background reconstruction method is used to accurately locate moving targets. For behavior analysis, the detection area is rendered as a binary image in MATLAB, and the algorithm determines whether a moving object is intruding and, if so, the direction of intrusion. For automatic tracking, a video tracking algorithm is proposed that predicts object centroids using Kalman filtering.
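    The centroid-prediction step can be sketched as a 1-D constant-velocity Kalman filter. This is a generic textbook form with hypothetical tuning values, not the authors' exact filter: predict the next centroid from the position-velocity state, then correct with the measured centroid.

```python
def kalman_track(measurements, q=1e-3, r=1.0):
    """Filter noisy 1-D centroid measurements with a constant-velocity
    model: state (x, v), F = [[1, 1], [0, 1]], H = [1, 0], process noise
    q, measurement noise r. Returns the filtered positions."""
    x, v = measurements[0], 0.0
    p = [[1.0, 0.0], [0.0, 1.0]]            # state covariance
    out = []
    for z in measurements:
        # Predict: x <- x + v, P <- F P F^T + Q.
        x = x + v
        p = [[p[0][0] + p[0][1] + p[1][0] + p[1][1] + q, p[0][1] + p[1][1]],
             [p[1][0] + p[1][1], p[1][1] + q]]
        # Update with the measured centroid z.
        s = p[0][0] + r                      # innovation variance
        k0, k1 = p[0][0] / s, p[1][0] / s    # Kalman gain
        y = z - x                            # innovation
        x, v = x + k0 * y, v + k1 * y
        p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
             [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
        out.append(x)
    return out
```

    In a 2-D tracker, the same filter is run separately on the x and y centroid coordinates.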

  16. An Automated Cloud-edge Detection Algorithm Using Cloud Physics and Radar Data

    NASA Technical Reports Server (NTRS)

    Ward, Jennifer G.; Merceret, Francis J.; Grainger, Cedric A.

    2003-01-01

    An automated cloud edge detection algorithm was developed and extensively tested. The algorithm uses in-situ cloud physics data measured by a research aircraft coupled with ground-based weather radar measurements to determine whether the aircraft is in or out of cloud. Cloud edges are determined when the in/out state changes, subject to a hysteresis constraint. The hysteresis constraint prevents isolated transient cloud puffs or data dropouts from being identified as cloud boundaries. The algorithm was verified by detailed manual examination of the data set in comparison to the results from application of the automated algorithm.
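    The state-change-with-hysteresis logic can be sketched as follows (hypothetical function and parameter names; the real algorithm derives the in/out flag from the cloud-physics and radar measurements): the state flips only after several consecutive disagreeing samples, so an isolated puff or data dropout never registers as a boundary.

```python
def cloud_edges(in_cloud, hold=3):
    """Return sample indices where a cloud boundary was crossed.

    The in/out state flips only after `hold` consecutive samples disagree
    with it; shorter runs are treated as transients and ignored."""
    state, run, edges = in_cloud[0], 0, []
    for i, flag in enumerate(in_cloud):
        if flag != state:
            run += 1
            if run >= hold:
                state = flag
                edges.append(i - hold + 1)  # boundary at the start of the run
                run = 0
        else:
            run = 0
    return edges
```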

  17. Competitive evaluation of failure detection algorithms for strapdown redundant inertial instruments

    NASA Technical Reports Server (NTRS)

    Wilcox, J. C.

    1973-01-01

    Algorithms for failure detection, isolation, and correction of redundant inertial instruments in the strapdown dodecahedron configuration are competitively evaluated in a digital computer simulation that subjects them to identical environments. Their performance is compared in terms of orientation and inertial velocity errors and in terms of missed and false alarms. The algorithms appear in the simulation program in modular form, so that they may be readily extracted for use elsewhere. The simulation program and its inputs and outputs are described. The algorithms, along with an eighth algorithm that was not simulated, are also compared analytically to show the relationships among them.

  18. A real-time FORTRAN implementation of a sensor failure detection, isolation and accommodation algorithm

    NASA Technical Reports Server (NTRS)

    Delaat, J. C.

    1984-01-01

    An advanced sensor failure detection, isolation, and accommodation algorithm has been developed by NASA for the F100 turbofan engine. The algorithm takes advantage of the analytical redundancy of the sensors to improve the reliability of the sensor set. The method requires the controls computer to determine when a sensor failure has occurred without the help of redundant hardware sensors in the control system. The controls computer then provides an estimate of the correct value of the output of the failed sensor. The algorithm has been programmed in FORTRAN on a real-time microprocessor-based controls computer. A detailed description of the algorithm and its implementation on a microprocessor is given.

  19. AsteroidZoo: A New Zooniverse project to detect asteroids and improve asteroid detection algorithms

    NASA Astrophysics Data System (ADS)

    Beasley, M.; Lewicki, C. A.; Smith, A.; Lintott, C.; Christensen, E.

    2013-12-01

    We present a new citizen science project: AsteroidZoo. A collaboration between Planetary Resources, Inc., the Zooniverse team, and the Catalina Sky Survey, it will bring the science of asteroid identification to citizen scientists. Volunteer astronomers have proved to be a critical asset in the identification and characterization of asteroids, especially potentially hazardous objects. These contributions, to date, have required that the volunteer possess a moderate telescope and the ability and willingness to be responsive to observing requests. Our new project will use data collected by the Catalina Sky Survey (CSS), currently the most productive asteroid survey, and can be used by anyone with sufficient interest and an internet connection. As previous work by the Zooniverse has demonstrated, citizen scientists are superb at classifying objects. Even the best automated searches require human intervention to identify new objects, and these searches are optimized to reduce false positive rates and to prevent a single operator from being overloaded with requests. With access to the large number of people in the Zooniverse, we will be able to avoid that problem and instead work to produce a complete detection list. Each frame from CSS will be searched in detail, generating a large number of new detections. We will be able to evaluate the completeness of the CSS data set and potentially provide improvements to the automated pipeline. The data corpus produced by AsteroidZoo will be used as a training environment for machine learning challenges in the future. Our goals include a more complete asteroid detection algorithm and a minimum-computation program that skims the cream of the data, suitable for implementation on small spacecraft. Our goal is to have the site go live in fall 2013.

  20. RS slope detection algorithm for extraction of heart rate from noisy, multimodal recordings.

    PubMed

    Gierałtowski, Jan; Ciuchciński, Kamil; Grzegorczyk, Iga; Kośna, Katarzyna; Soliński, Mateusz; Podziemski, Piotr

    2015-08-01

    Current gold-standard algorithms for heart beat detection do not work properly in the case of high noise levels and do not make use of multichannel data collected by modern patient monitors. The main idea behind the method presented in this paper is to detect the most prominent part of the QRS complex, i.e. the RS slope. We localize the RS slope based on the consistency of its characteristics, i.e. adequate, automatically determined amplitude and duration. It is a very simple and non-standard, yet very effective, solution. Minor data pre-processing and parameter adaptations make our algorithm fast and noise-resistant. As one of a few algorithms in the PhysioNet/Computing in Cardiology Challenge 2014, our algorithm uses more than two channels (i.e. ECG, BP, EEG, EOG and EMG). Simple fundamental working rules make the algorithm universal: it is able to work on all of these channels with no or only little changes. The final result of our algorithm in phase III of the Challenge was 86.38 (88.07 for a 200 record test set), which gave us fourth place. Our algorithm shows that current standards for heart beat detection could be improved significantly by taking a multichannel approach. This is an open-source algorithm available through the PhysioNet library.
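    A bare-bones version of the RS-slope idea (with illustrative fixed thresholds; the paper's amplitude and duration parameters are determined automatically and adapted per channel): scan for a steep monotonic downstroke of sufficient amplitude within a short window.

```python
def rs_slopes(signal, min_drop, max_width):
    """Find candidate RS slopes: indices where the signal falls by at
    least `min_drop` within at most `max_width` samples (the steep
    downstroke from the R peak to the S trough)."""
    beats = []
    i = 0
    while i < len(signal) - 1:
        j = i
        # Follow a strictly descending run of bounded width.
        while j + 1 < len(signal) and signal[j + 1] < signal[j] and j - i < max_width:
            j += 1
        if j > i and signal[i] - signal[j] >= min_drop:
            beats.append(i)      # slope start approximates the R-peak location
            i = j                # skip past this downstroke
        i += 1
    return beats
```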

  1. A low-power fall detection algorithm based on triaxial acceleration and barometric pressure.

    PubMed

    Wang, Changhong; Narayanan, Michael R; Lord, Stephen R; Redmond, Stephen J; Lovell, Nigel H

    2014-01-01

    This paper proposes a low-power fall detection algorithm based on triaxial accelerometry and barometric pressure signals. The algorithm dynamically adjusts the sampling rate of an accelerometer and manages data transmission between sensors and a controller to reduce power consumption. The results of simulation show that the sensitivity and specificity of the proposed fall detection algorithm are both above 96% when applied to a previously collected dataset comprising 20 young actors performing a combination of simulated falls and activities of daily living. This level of performance can be achieved despite a 10.9% reduction in power consumption.
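    A sketch of the sensor-fusion rule, simplified and with illustrative thresholds (the published algorithm also adapts the accelerometer sampling rate, which is omitted here): a fall is declared when an acceleration spike is followed by a barometric pressure rise consistent with a drop in height.

```python
def detect_fall(accel_g, pressure_hpa, impact_g=2.5, rise_hpa=0.05):
    """Return True if an impact-level acceleration sample is followed by
    a pressure rise (i.e. lower altitude) relative to the pre-impact
    baseline; thresholds are illustrative, not the paper's values."""
    for i, a in enumerate(accel_g):
        if a < impact_g:
            continue
        before = pressure_hpa[max(0, i - 5):i] or pressure_hpa[:1]
        after = pressure_hpa[i:i + 5]
        baseline = sum(before) / len(before)
        if after and max(after) - baseline >= rise_hpa:
            return True
    return False
```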

  2. A low-power fall detection algorithm based on triaxial acceleration and barometric pressure.

    PubMed

    Wang, Changhong; Narayanan, Michael R; Lord, Stephen R; Redmond, Stephen J; Lovell, Nigel H

    2014-01-01

    This paper proposes a low-power fall detection algorithm based on triaxial accelerometry and barometric pressure signals. The algorithm dynamically adjusts the sampling rate of an accelerometer and manages data transmission between sensors and a controller to reduce power consumption. The results of simulation show that the sensitivity and specificity of the proposed fall detection algorithm are both above 96% when applied to a previously collected dataset comprising 20 young actors performing a combination of simulated falls and activities of daily living. This level of performance can be achieved despite a 10.9% reduction in power consumption. PMID:25570023

  3. The EUSTACE break-detection algorithm for a global air temperature dataset

    NASA Astrophysics Data System (ADS)

    Brugnara, Yuri; Auchmann, Renate; Brönnimann, Stefan

    2016-04-01

    EUSTACE (EU Surface Temperature for All Corners of Earth) is an EU-funded project that started in 2015; its goal is to produce daily estimates of surface air temperature since 1850 across the globe for the first time by combining surface and satellite data using novel statistical techniques. For land surface air temperature (LSAT), we assembled a global dataset of ca. 35000 stations with daily maximum and minimum air temperature observations, taking advantage of the most recent data rescue initiatives. Besides quantity, data quality also plays an important role in the success of the project; in particular, assessing the homogeneity of the temperature series is crucial in order to obtain a product suitable for the study of climate change. This poster describes a fully automatic, state-of-the-art break-detection algorithm that we developed for the global LSAT dataset. We evaluate the performance of the method using artificial benchmarks and present various statistics related to the frequency and amplitude of the inhomogeneities detected in the real data. We show in particular that long-term temperature trends calculated from raw data are more often underestimated than overestimated, and that this behaviour is mostly related to inhomogeneities affecting maximum temperatures.

  4. Combined Dust Detection Algorithm by Using MODIS Infrared Channels over East Asia

    NASA Technical Reports Server (NTRS)

    Park, Sang Seo; Kim, Jhoon; Lee, Jaehwa; Lee, Sukjo; Kim, Jeong Soo; Chang, Lim Seok; Ou, Steve

    2014-01-01

    A new dust detection algorithm is developed by combining the results of multiple dust detection methods using IR channels onboard the MODerate resolution Imaging Spectroradiometer (MODIS). The Brightness Temperature Difference (BTD) between two wavelength channels has been used widely in previous dust detection methods. However, BTD methods have limitations in identifying the offset values of the BTD to discriminate clear-sky areas. The current algorithm overcomes the disadvantages of previous dust detection methods by considering the Brightness Temperature Ratio (BTR) values of the dual wavelength channels with a 30-day composite, the optical properties of the dust particles, the variability of surface properties, and cloud contamination. Therefore, the current algorithm shows improvements in detecting dust-loaded regions over land during daytime. Finally, the confidence index of the current dust algorithm is given for 10 × 10 pixel blocks of the MODIS observations. From January to June 2006, the results of the current algorithm agree to within 64 to 81% with those found using the fine mode fraction (FMF) and aerosol index (AI) from MODIS and the Ozone Monitoring Instrument (OMI). The agreement between the results of the current algorithm and the OMI AI over non-polluted land ranges from 60 to 67%, in order to avoid errors due to anthropogenic aerosol. In addition, the developed algorithm shows statistically significant results at four AErosol RObotic NETwork (AERONET) sites in East Asia.
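    The core split-window test underlying the combined algorithm can be sketched as below (a textbook BTD test only; the paper's contributions, such as the 30-day composite BTR and the surface-variability and cloud terms, are not reproduced): airborne mineral dust typically drives the 11 µm brightness temperature below the 12 µm one, so a BTD below an offset flags dust.

```python
def is_dust(bt11_k, bt12_k, offset_k=0.0):
    """Split-window brightness temperature difference (BTD) test:
    BTD = BT(11 um) - BT(12 um); a value below the offset suggests dust."""
    return (bt11_k - bt12_k) < offset_k

def dust_confidence(block):
    """Fraction of dust-flagged pixels in a block of (BT11, BT12) pairs,
    loosely mirroring the paper's per-block (10 x 10) confidence index."""
    flags = [is_dust(b11, b12) for b11, b12 in block]
    return sum(flags) / len(flags)
```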

  5. A change detection algorithm for man-made objects based on remote sensing images

    NASA Astrophysics Data System (ADS)

    Wang, Wenwu; Cao, Zhiguo

    2011-12-01

    Radiometric differences, misregistration errors, and the choice of classification threshold for the difference image seriously influence the detection accuracy of traditional pixel-level change detection algorithms, and it is difficult to extract the true changes of interest from the various kinds of detected changes. Therefore, a novel change detection algorithm is proposed to detect changes of man-made objects in remote sensing images. Large images are divided into overlapping, multi-scale sub-images, and multi-scale structural features (both interscale and intrascale), such as central-shift moments, gradient-magnitude features, gradient-orientation features, and line-length features, are extracted and classified with a support vector machine (SVM). Experimental results demonstrate the feasibility and effectiveness of the proposed algorithm.

  6. Evaluation of detection algorithms for perpendicular recording channels with intertrack interference

    NASA Astrophysics Data System (ADS)

    Tan, Weijun; Cruz, J. R.

    2005-02-01

    Channel detection algorithms for handling intertrack interference (ITI) in perpendicular magnetic recording channels are studied in this paper. The goal is to optimize channel detection to attain the best possible performance for a practical system. Two channel detection models, namely, the single-track model and the joint-track model are evaluated using information rate analysis as well as simulations. Numerical results show that joint-track detection may be needed when ITI is severe.

  7. The development of a line-scan imaging algorithm for the detection of fecal contamination on leafy greens

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Kim, Moon S.; Chuang, Yung-Kun; Lee, Hoyoung

    2013-05-01

    This paper reports the development of a multispectral algorithm, using the line-scan hyperspectral imaging system, to detect fecal contamination on leafy greens. Fresh bovine feces were applied to the surfaces of washed loose baby spinach leaves. A hyperspectral line-scan imaging system was used to acquire hyperspectral fluorescence images of the contaminated leaves. Hyperspectral image analysis resulted in the selection of the 666 nm and 688 nm wavebands for a multispectral algorithm to rapidly detect feces on leafy greens, by use of the ratio of fluorescence intensities measured at those two wavebands (666 nm over 688 nm). The algorithm successfully distinguished most of the lowly diluted fecal spots (0.05 g feces/ml water and 0.025 g feces/ml water) and some of the highly diluted spots (0.0125 g feces/ml water and 0.00625 g feces/ml water) from the clean spinach leaves. The results showed the potential of the multispectral algorithm with line-scan imaging system for application to automated food processing lines for food safety inspection of leafy green vegetables.
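
    The two-waveband ratio rule reduces to a few lines; the threshold below is a hypothetical placeholder rather than the paper's calibrated value:

```python
# Sketch of a two-waveband fluorescence ratio test (not the published
# calibration): flag a pixel when the 666 nm / 688 nm intensity ratio is
# high, as fecal spots fluoresce differently from clean leaf tissue.

def band_ratio_mask(f666, f688, thresh=1.1, eps=1e-9):
    """Per-pixel mask: True where the ratio exceeds the threshold."""
    return [a / (b + eps) > thresh for a, b in zip(f666, f688)]
```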

  8. Planet Detection Algorithms for the Terrestrial Planet Finder-C

    NASA Astrophysics Data System (ADS)

    Kasdin, N. J.; Braems, I.

    2005-12-01

    Critical to mission planning for the Terrestrial Planet Finder Coronagraph (TPF-C) is the ability to estimate integration times for planet detection. This detection is complicated by the presence of background noise due to local and exo-zodiacal dust, by residual speckle due to optical errors, and by the dependence of the PSF shape on the specific coronagraph. In this paper we examine in detail the use of PSF fitting (matched filtering) for planet detection, derive probabilistic bounds for the signal-to-noise ratio by balancing missed detection and false alarm rates, and demonstrate that this is close to the optimal linear detection technique. We then compare to a Bayesian detection approach and show that for very low background the Bayesian method offers integration time improvements, but rapidly approaches the PSF fitting result for reasonable levels of background noise. We confirm these results via Monte Carlo simulations. This work was supported under a grant from the Jet Propulsion Laboratory and by a fellowship from the Institut National de Recherche en Informatique et Automatique (INRIA).
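
    PSF fitting as described reduces to a matched filter; the normalized statistic below is the standard textbook form (not the paper's exact estimator), and thresholding it trades missed detections against false alarms:

```python
import math

# Matched-filter sketch: correlate the data with the known PSF template
# and normalize by the template energy.  Under white noise this is the
# optimal linear detection statistic for a source at the template location.

def matched_filter_statistic(data, psf):
    """Detection statistic for a planet at the template location."""
    num = sum(x * p for x, p in zip(data, psf))
    den = math.sqrt(sum(p * p for p in psf))
    return num / den
```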

  9. A group filter algorithm for sea mine detection

    NASA Astrophysics Data System (ADS)

    Cobb, J. Tory; An, Myoung; Tolimieri, Richard

    2005-06-01

    Automatic detection of sea mines in coastal regions is a difficult task due to the highly variable sea bottom conditions present in the underwater environment. Detection systems must be able to discriminate objects which vary in size, shape, and orientation from naturally occurring and man-made clutter. Additionally, these automated systems must be computationally efficient to be incorporated into unmanned underwater vehicle (UUV) sensor systems characterized by high sensor data rates and limited processing abilities. Using noncommutative group harmonic analysis, a fast, robust sea mine detection system is created. A family of unitary image transforms associated to noncommutative groups is generated and applied to side scan sonar image files supplied by Naval Surface Warfare Center Panama City (NSWC PC). These transforms project key image features, geometrically defined structures with orientations, and localized spectral information into distinct orthogonal components or feature subspaces of the image. The performance of the detection system is compared against the performance of an independent detection system in terms of probability of detection (Pd) and probability of false alarm (Pfa).

  10. MAP Support Detection for Greedy Sparse Signal Recovery Algorithms in Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Lee, Namyoon

    2016-10-01

    A reliable support detection is essential for a greedy algorithm to reconstruct a sparse signal accurately from compressed and noisy measurements. This paper proposes a novel support detection method for greedy algorithms, which is referred to as "maximum a posteriori (MAP) support detection". Unlike existing support detection methods that identify support indices with the largest correlation value in magnitude per iteration, the proposed method selects them with the largest likelihood ratios computed under the true and null support hypotheses by simultaneously exploiting the distributions of sensing matrix, sparse signal, and noise. Leveraging this technique, MAP-Matching Pursuit (MAP-MP) is first presented to show the advantages of exploiting the proposed support detection method, and a sufficient condition for perfect signal recovery is derived for the case when the sparse signal is binary. Subsequently, a set of iterative greedy algorithms, called MAP-generalized Orthogonal Matching Pursuit (MAP-gOMP), MAP-Compressive Sampling Matching Pursuit (MAP-CoSaMP), and MAP-Subspace Pursuit (MAP-SP) are presented to demonstrate the applicability of the proposed support detection method to existing greedy algorithms. From empirical results, it is shown that the proposed greedy algorithms with highly reliable support detection can be better, faster, and easier to implement than basis pursuit via linear programming.
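
    For context, plain Orthogonal Matching Pursuit (the baseline the MAP variants modify) can be sketched as follows; the line marked below is the correlation-based support selection that the paper replaces with a likelihood-ratio score, which is not reproduced here:

```python
import numpy as np

# Standard OMP sketch (a textbook algorithm, not the paper's contribution).

def omp(A, y, k):
    """Greedily recover a k-sparse x from y ~ A @ x."""
    residual = y.astype(float)
    support = []
    for _ in range(k):
        # MAP-MP would rescore this argmax with likelihood ratios instead
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x, sorted(support)
```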

  11. A small dim infrared maritime target detection algorithm based on local peak detection and pipeline-filtering

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Dong, Lili; Zhao, Ming; Xu, Wenhai

    2015-12-01

    In order to realize accurate detection of small dim infrared maritime targets, this paper proposes a target detection algorithm based on local peak detection and pipeline-filtering. The method first extracts suspected targets through local peak detection and removes most non-target peaks with a self-adaptive threshold process. Pipeline-filtering is then used to eliminate residual interference so that only real targets are retained. The experimental results prove that this method performs well in target detection, and its missed alarm rate and false alarm rate can basically meet practical requirements.
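
    The two stages can be sketched in one dimension (an illustration, not the authors' code): local peaks above a self-adaptive mean-plus-k-sigma threshold, then a pipeline filter that keeps only detections persisting across frames. The parameters k and min_frames are hypothetical placeholders:

```python
import statistics
from collections import Counter

def local_peaks(frame, k=2.0):
    """Indices that are local maxima and exceed the adaptive threshold."""
    thr = statistics.mean(frame) + k * statistics.pstdev(frame)
    return [i for i in range(1, len(frame) - 1)
            if frame[i] > frame[i - 1] and frame[i] > frame[i + 1]
            and frame[i] > thr]

def pipeline_filter(detections_per_frame, min_frames=3):
    """Keep positions detected in at least min_frames frames of the pipeline."""
    counts = Counter(p for frame in detections_per_frame for p in frame)
    return sorted(p for p, c in counts.items() if c >= min_frames)
```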

  12. An algorithm for image clusters detection and identification based on color for an autonomous mobile robot

    SciTech Connect

    Uy, D.L.

    1996-02-01

    An algorithm for detection and identification of image clusters or "blobs" based on color information for an autonomous mobile robot is developed. The input image data are first processed using a crisp color fuzzifier, a binary smoothing filter, and a median filter. The processed image data are then input to the image cluster detection and identification program. The program employs the concept of an "elastic rectangle" that stretches in such a way that the whole blob is finally enclosed in a rectangle. A C program was developed to test the algorithm. The algorithm was tested only on image data of 8x8 size with different numbers of blobs. The algorithm works very well in detecting and identifying image clusters.

  13. In-depth performance analysis of an EEG based neonatal seizure detection algorithm

    PubMed Central

    Mathieson, S.; Rennie, J.; Livingstone, V.; Temko, A.; Low, E.; Pressler, R.M.; Boylan, G.B.

    2016-01-01

    Objective To describe a novel neurophysiology based performance analysis of automated seizure detection algorithms for neonatal EEG to characterize features of detected and non-detected seizures and causes of false detections to identify areas for algorithmic improvement. Methods EEGs of 20 term neonates were recorded (10 seizure, 10 non-seizure). Seizures were annotated by an expert and characterized using a novel set of 10 criteria. ANSeR seizure detection algorithm (SDA) seizure annotations were compared to the expert to derive detected and non-detected seizures at three SDA sensitivity thresholds. Differences in seizure characteristics between groups were compared using univariate and multivariate analysis. False detections were characterized. Results The expert detected 421 seizures. The SDA at thresholds 0.4, 0.5, 0.6 detected 60%, 54% and 45% of seizures. At all thresholds, multivariate analyses demonstrated that the odds of detecting seizure increased with 4 criteria: seizure amplitude, duration, rhythmicity and number of EEG channels involved at seizure peak. Major causes of false detections included respiration and sweat artefacts or a highly rhythmic background, often during intermediate sleep. Conclusion This rigorous analysis allows estimation of how key seizure features are exploited by SDAs. Significance This study resulted in a beta version of ANSeR with significantly improved performance. PMID:27072097

  14. A novel algorithm for detection of precipitation in tropical regions using PMW radiometers

    NASA Astrophysics Data System (ADS)

    Casella, D.; Panegrossi, G.; Sanò, P.; Milani, L.; Petracca, M.; Dietrich, S.

    2015-03-01

    A novel algorithm for the detection of precipitation is described and tested. The algorithm is applicable to any modern passive microwave radiometer on board polar orbiting satellites independent of the observation geometry and channel frequency assortment. The algorithm is based on the application of canonical correlation analysis and on the definition of a threshold to be applied to the resulting linear combination of the brightness temperatures in all available channels. The algorithm has been developed using a 2-year data set of co-located Special Sensor Microwave Imager/Sounder (SSMIS) and Tropical Rainfall Measuring Mission precipitation radar (TRMM-PR) measurements and Advanced Microwave Sounding Unit (AMSU) Microwave Humidity Sounder and TRMM-PR measurements. This data set was partitioned into four classes depending on the background surface emissivity (vegetated land, arid land, ocean, and coast), with the same procedure applied for each surface class. In this paper we describe the procedure and evaluate the results in comparison with many well-known algorithms for the detection of precipitation. The algorithm shows a small rate of false alarms and superior detection capability; it can efficiently detect (probability of detection between 0.55 and 0.71) minimum rain rates varying from 0.14 mm h-1 (AMSU over ocean) to 0.41 mm h-1 (SSMIS over coast), with the remarkable result of 0.25 mm h-1 over arid land surfaces.

  15. A Region Tracking-Based Vehicle Detection Algorithm in Nighttime Traffic Scenes

    PubMed Central

    Wang, Jianqiang; Sun, Xiaoyan; Guo, Junbin

    2013-01-01

    Detecting preceding vehicles in nighttime traffic scenes is an important part of the advanced driver assistance system (ADAS). This paper proposes a region tracking-based vehicle detection algorithm based on image processing techniques. First, the brightness of the taillights at night is used as the typical feature, and an existing global detection algorithm is used to detect and pair the taillights. Once a vehicle is detected, a time series analysis model is introduced to predict the vehicle position and the possible region (PR) of the vehicle in the next frame; the vehicle is then detected only within the PR. This reduces the detection time and avoids false pairing between bright spots inside and outside the PR. Additionally, we present a threshold updating method to make the thresholds adaptive. Finally, experimental studies demonstrate the application and substantiate the superiority of the proposed algorithm. The results show that the proposed algorithm can simultaneously reduce both the false negative detection rate and the false positive detection rate.

  16. Robust pupil center detection using a curvature algorithm

    NASA Technical Reports Server (NTRS)

    Zhu, D.; Moore, S. T.; Raphan, T.; Wall, C. C. (Principal Investigator)

    1999-01-01

    Determining the pupil center is fundamental for calculating eye orientation in video-based systems. Existing techniques are error prone and not robust because eyelids, eyelashes, corneal reflections or shadows in many instances occlude the pupil. We have developed a new algorithm which utilizes curvature characteristics of the pupil boundary to eliminate these artifacts. The pupil center is computed based solely on points related to the pupil boundary. For each boundary point, a curvature value is computed. Occlusion of the boundary induces characteristic peaks in the curvature function. Curvature values for normal pupil sizes were determined, and a threshold was found which, together with heuristics, discriminated normal from abnormal curvature. The remaining boundary points were fit with an ellipse using a least-squares error criterion. The center of the ellipse is an estimate of the pupil center. This technique is robust and accurately estimates the pupil center even when less than 40% of the pupil boundary points are visible.
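
    A discrete version of the curvature screening can be sketched with turning angles (an illustration of the idea, not the published implementation; the angle threshold is a placeholder). Points where the boundary turns sharply, as at eyelid or reflection edges, are discarded before the ellipse fit:

```python
import math

def turning_angles(points):
    """Discrete curvature proxy: absolute turning angle at each boundary point.
    Occlusions (eyelid edges, corneal reflections) show up as sharp peaks."""
    n = len(points)
    angles = []
    for i in range(n):
        x0, y0 = points[i - 1]
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        d = a2 - a1
        while d > math.pi:
            d -= 2 * math.pi
        while d < -math.pi:
            d += 2 * math.pi
        angles.append(abs(d))
    return angles

def keep_smooth_points(points, max_angle=0.8):
    """Drop boundary points whose turning angle exceeds the threshold."""
    return [p for p, a in zip(points, turning_angles(points)) if a <= max_angle]
```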

  17. Polarization Lidar Liquid Cloud Detection Algorithm for Winter Mountain Storms

    NASA Technical Reports Server (NTRS)

    Sassen, Kenneth; Zhao, Hongjie

    1992-01-01

    We have collected an extensive polarization lidar dataset from elevated sites in the Tushar Mountains of Utah in support of winter storm cloud seeding research and experiments. Our truck-mounted ruby lidar collected zenith, dual-polarization lidar data through a roof window equipped with a wiper system to prevent snowfall accumulation. Lidar returns were collected at a rate of one shot every 1 to 5 min during declared storm periods over the mid-January to mid-March field seasons of 1985 and 1987. The mid-barrier remote sensor field site was located at 2.57 km MSL. Of chief interest to weather modification efforts are the heights of supercooled liquid water (SLW) clouds, which must be known to assess their 'seedability' (i.e., temperature and height suitability for artificially increasing snowfall). We are currently re-examining our entire dataset to determine the climatological properties of SLW clouds in winter storms using an autonomous computer algorithm.

  18. Stride Search: a general algorithm for storm detection in high-resolution climate data

    NASA Astrophysics Data System (ADS)

    Bosler, Peter A.; Roesler, Erika L.; Taylor, Mark A.; Mundt, Miranda R.

    2016-04-01

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared: the commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. The Stride Search algorithm is defined independently of the spatial discretization associated with a particular data set. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. Differences between the two algorithms arise for some storms due to their different definition of search regions in physical space. The physical space associated with each Stride Search region is constant, regardless of data resolution or latitude, and Stride Search is therefore capable of searching all regions of the globe in the same manner. Stride Search's ability to search high latitudes is demonstrated for the case of polar low detection. Wall clock time required for Stride Search is shown to be smaller than a grid point search of the same data, and the relative speed up associated with Stride Search increases as resolution increases.
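
    A simplified reading of the Stride Search spatial pass (not the authors' code) is to generate circular search-region centers spaced by a fixed physical stride, so that the longitude stride widens toward the poles and every region covers the same physical area regardless of latitude. Using the region radius itself as the stride is an illustrative choice:

```python
import math

def stride_centers(radius_km, earth_radius_km=6371.0):
    """Generate (lat, lon) search-region centers with constant physical spacing."""
    stride_deg = math.degrees(radius_km / earth_radius_km)
    centers, lat = [], -90.0
    while lat <= 90.0:
        # constant physical spacing => wider longitude stride at high latitude
        lon_stride = stride_deg / max(math.cos(math.radians(lat)), 1e-6)
        lon = 0.0
        while lon < 360.0:
            centers.append((round(lat, 4), round(lon, 4)))
            lon += lon_stride
        lat += stride_deg
    return centers
```

    In contrast, a grid point search visits every grid cell, so its cost grows with data resolution, which is consistent with the wall-clock comparison reported above.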

  19. A novel algorithm for automatic arrays detection in a layout

    NASA Astrophysics Data System (ADS)

    Shafee, Marwah; Park, Jea-Woo; Aslyan, Ara; Torres, Andres; Madkour, Kareem; ElManhawy, Wael

    2013-03-01

    Integrated circuits suffer from serious layout printability issues associated with the lithography manufacturing process. Regular layout designs are emerging as alternative solutions to help reduce these systematic sub-wavelength lithography variations. From a CAD point of view, regular layouts can be treated as repeated patterns arranged in arrays. In most modern mask synthesis and verification tools, cell-based hierarchical processing can identify repeating cells by analyzing the design's cell placement; however, some routing levels are not inside any cell and yet create array-like structures because of the underlying topologies. These can be exploited by detecting repeated patterns in the layout, reducing simulation run-time by simulating only the representative cells and then restoring the simulation results over their corresponding arrays. The challenge is to make array detection and restoration of the results a very lightweight operation to fully realize the benefits of the approach. A novel methodology for detecting repeated patterns in a layout is proposed. The main idea is to translate the layout patterns into a string of symbols and construct a "symbolic layout"; by finding repetitions in the symbolic layout, repeated patterns in the drawn layout are detected. A flow for layout reduction based on array detection followed by pattern matching is discussed. The run-time saving comes from performing all litho simulations on the base patterns only; pattern matching is then used to restore the simulation results over the arrays. The proposed flow shows a 1.4x to 2x run-time improvement over the regular litho simulation flow. An evaluation of the proposed flow in terms of coverage and run-time is drafted.
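
    The symbolic-layout idea can be sketched in miniature: encode each pattern as a symbol and scan the resulting string for long runs, which become candidate arrays. This is a simplified one-dimensional reading of the proposed methodology, with a placeholder minimum repeat count:

```python
def find_arrays(symbols, min_repeat=3):
    """Given a row of layout patterns encoded as symbols, report runs
    (start_index, symbol, count) with count >= min_repeat as candidate arrays."""
    runs, i = [], 0
    while i < len(symbols):
        j = i
        while j < len(symbols) and symbols[j] == symbols[i]:
            j += 1
        if j - i >= min_repeat:
            runs.append((i, symbols[i], j - i))
        i = j
    return runs
```

    Each reported run would then be simulated once on its base pattern and the results replicated over the array.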

  20. Intrusion-Aware Alert Validation Algorithm for Cooperative Distributed Intrusion Detection Schemes of Wireless Sensor Networks

    PubMed Central

    Shaikh, Riaz Ahmed; Jameel, Hassan; d’Auriol, Brian J.; Lee, Heejo; Lee, Sungyoung; Song, Young-Jae

    2009-01-01

    Existing anomaly and intrusion detection schemes for wireless sensor networks have mainly focused on the detection of intrusions. Once an intrusion is detected, alerts or claims will be generated. However, any unidentified malicious nodes in the network could send faulty anomaly and intrusion claims about legitimate nodes to the other nodes. Verifying the validity of such claims is a critical and challenging issue that is not considered in existing cooperative-based distributed anomaly and intrusion detection schemes for wireless sensor networks. In this paper, we propose a validation algorithm that addresses this problem. This algorithm utilizes the concept of intrusion-aware reliability, which helps to provide adequate reliability at a modest communication cost. We also provide a security resiliency analysis of the proposed intrusion-aware alert validation algorithm. PMID:22454568

  1. Development of a fire detection algorithm for the COMS (Communication Ocean and Meteorological Satellite)

    NASA Astrophysics Data System (ADS)

    Kim, Goo; Kim, Dae Sun; Lee, Yang-Won

    2013-10-01

    Forest fires cause great ecological and economic damage. South Korea is particularly liable to suffer from forest fires because mountainous terrain occupies more than half of its land. South Korea recently launched COMS (Communication Ocean and Meteorological Satellite), a geostationary satellite. In this paper, we developed a forest fire detection algorithm using COMS data. Generally, forest fire detection algorithms use characteristics of the 4 and 11 micrometer brightness temperatures; our algorithm additionally uses LST (Land Surface Temperature). We confirmed the result of our fire detection algorithm using statistical data from the Korea Forest Service and ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) images. We used data for South Korea on April 1 and 2, 2011, when both small and large forest fires occurred. The detection rate was 80% in terms of the frequency of the forest fires and 99% in terms of the damaged area. Considering the number of COMS's channels and its low resolution, this is a remarkable outcome. To provide users with the result of our algorithm, we developed a smartphone application using JSP (Java Server Pages); this application works regardless of the smartphone's operating system. Because we used just two days of data, the algorithm may be unsuitable for other areas and periods; to improve its accuracy, we plan to analyze long-term data as future work.

  2. An efficient contextual algorithm to detect subsurface fires with NOAA/AVHRR data

    SciTech Connect

    Gautam, R.S.; Singh, D.; Mittal, A.

    2008-07-15

    This paper deals with the potential application of National Oceanic and Atmospheric Administration (NOAA)/Advanced Very High Resolution Radiometer (AVHRR) data to detect subsurface fires (subsurface hotspots) by proposing an efficient contextual algorithm. Although a few algorithms based on fixed thresholding have been proposed for subsurface hotspot detection, their thresholds must be specifically tuned for each application to cope with unique environmental conditions. The main objective of this paper is to develop an instrument-independent adaptive method that avoids direct single or multiple thresholds. The proposed contextual algorithm is helpful for monitoring subsurface hotspots with operational satellite data, such as over the Jharia region of India, without making any region-specific guess in thresholding. The novelty of the proposed work lies in the fact that once the algorithmic model is developed for a particular region of interest and its parameters are optimized, there is no need to optimize those parameters again for further satellite images. Hence, the developed model can be used for optimized automated detection and monitoring of subsurface hotspots in future images of that region. The algorithm is adaptive in nature and uses a vegetation index and statistics from different NOAA/AVHRR channels to detect hotspots in the region of interest. The performance of the algorithm is assessed in terms of sensitivity and specificity and compared with other well-known thresholding techniques, such as Otsu's thresholding, entropy-based thresholding, and the existing contextual algorithm proposed by Flasse and Ceccato. The proposed algorithm is found to give better hotspot detection accuracy with a lower false alarm rate.
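
    For reference, Otsu's method, one of the baselines the proposed algorithm is compared against, can be implemented in a few lines for 8-bit data:

```python
def otsu_threshold(pixels):
    """Otsu's method: pick the 8-bit threshold maximizing between-class variance."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(256):
        w0 += hist[t]          # weight of the background class
        if w0 == 0:
            continue
        w1 = total - w0        # weight of the foreground class
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```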

  3. Performance Assessment Method for a Forged Fingerprint Detection Algorithm

    NASA Astrophysics Data System (ADS)

    Shin, Yong Nyuo; Jun, In-Kyung; Kim, Hyun; Shin, Woochang

    The threat of invasion of privacy and of the illegal appropriation of information both increase with the expansion of the biometrics service environment to open systems. However, while certificates or smart cards can easily be cancelled and reissued if found to be missing, there is no way to recover the unique biometric information of an individual following a security breach. With the recognition that this threat factor may disrupt the large-scale civil service operations approaching implementation, such as electronic ID cards and e-Government systems, many agencies and vendors around the world continue to develop forged fingerprint detection technology, but no objective performance assessment method has, to date, been reported. Therefore, in this paper, we propose a methodology designed to evaluate the objective performance of the forged fingerprint detection technology that is currently attracting a great deal of attention.

  4. Fall detection algorithm in energy efficient multistate sensor system.

    PubMed

    Korats, Gundars; Hofmanis, Janis; Skorodumovs, Aleksejs; Avots, Egils

    2015-01-01

    Health issues in elderly people may lead to injuries sustained during simple activities of daily living (ADL). Potentially the most dangerous are unintentional falls, which may be critical or even lethal to some patients due to the risk of heavy injury. Many fall detection systems have been proposed, but such health care systems have only recently become available. Nevertheless, sensor design, accuracy, and energy efficiency can still be improved. In this paper we present an energy-efficient sensor system based on a single 3-axial accelerometer. Power saving is achieved by selective event processing triggered by the fall detection procedure. Our simulations show 100% accuracy when the threshold parameters are chosen correctly, and the estimated energy consumption extends battery life significantly. PMID:26737408
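
    A common single-accelerometer fall test of the kind described, a free-fall dip followed by an impact spike in the acceleration magnitude, can be sketched as follows; the thresholds and window are illustrative placeholders, not the paper's tuned values:

```python
import math

def detect_fall(samples, free_fall_g=0.4, impact_g=2.5, window=10):
    """samples: list of (ax, ay, az) in units of g.  Flag a fall when a
    free-fall dip is followed by an impact spike within `window` samples."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    for i, m in enumerate(mags):
        if m < free_fall_g:
            if any(v > impact_g for v in mags[i + 1:i + 1 + window]):
                return True
    return False
```

    In an energy-efficient design, the cheap magnitude test would gate the more expensive event processing, matching the selective-processing idea above.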

  5. Practical comparison of aberration detection algorithms for biosurveillance systems.

    PubMed

    Zhou, Hong; Burkom, Howard; Winston, Carla A; Dey, Achintya; Ajani, Umed

    2015-10-01

    National syndromic surveillance systems require optimal anomaly detection methods. For method performance comparison, we injected multi-day signals stochastically drawn from lognormal distributions into time series of aggregated daily visit counts from the U.S. Centers for Disease Control and Prevention's BioSense syndromic surveillance system. The time series corresponded to three different syndrome groups: rash, upper respiratory infection, and gastrointestinal illness. We included a sample of facilities with data reported every day and with median daily syndromic counts ⩾1 over the entire study period. We compared anomaly detection methods of five control chart adaptations, a linear regression model and a Poisson regression model. We assessed sensitivity and timeliness of these methods for detection of multi-day signals. At a daily background alert rate of 1% and 2%, the sensitivities and timeliness ranged from 24 to 77% and 3.3 to 6.1 days, respectively. The overall sensitivity and timeliness increased substantially after stratification by weekday versus weekend and holiday. Adjusting the baseline syndromic count by the total number of facility visits gave consistently improved sensitivity and timeliness without stratification, and it provided better performance when combined with stratification. The daily syndrome/total-visit proportion method did not improve the performance. In general, alerting based on linear regression outperformed control chart based methods. A Poisson regression model obtained the best sensitivity in the series with high-count data. PMID:26334478
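
    As an illustration of one control-chart adaptation of the kind compared here (not the study's exact configuration), an EWMA-style chart with a trailing baseline window can be sketched; the smoothing weight, k-sigma limit, and baseline length are placeholders:

```python
import statistics

def ewma_alerts(counts, lam=0.4, k=3.0, baseline=28):
    """Return one alert flag per day after the baseline period: True when
    the exponentially smoothed count exceeds mean + k*sd of the trailing window."""
    alerts, s = [], None
    for t in range(baseline, len(counts)):
        hist = counts[t - baseline:t]
        mu = statistics.mean(hist)
        sd = statistics.pstdev(hist) or 1.0    # guard against zero variance
        s = counts[t] if s is None else lam * counts[t] + (1 - lam) * s
        alerts.append((s - mu) / sd > k)
    return alerts
```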

  6. Algorithms for computer detection of symmetry elements in molecular systems.

    PubMed

    Beruski, Otávio; Vidal, Luciano N

    2014-02-01

    Simple procedures for the location of proper and improper rotations and reflection planes are presented. The search is performed with a molecule divided into subsets of symmetrically equivalent atoms (SEA), which are analyzed separately as if they were a single molecule. This approach is advantageous in many aspects. For instance, in molecules that are symmetric rotors, the number of atoms and the inertia tensor of the SEA provide a straightforward way to find proper rotations of any order. The algorithms are invariant to molecular orientation and their computational cost is low, because the main information required to find symmetry elements is the interatomic distances and the principal moments of the SEA. For example, our Fortran implementation, running on a single processor, took only a few seconds to locate all 120 symmetry operations of the large and highly symmetrical fullerene C720, belonging to the Ih point group. Finally, we show how the interatomic distance matrix of a slightly unsymmetrical molecule is used to symmetrize its geometry. PMID:24403016

  7. An Optional Threshold with Svm Cloud Detection Algorithm and Dsp Implementation

    NASA Astrophysics Data System (ADS)

    Zhou, Guoqing; Zhou, Xiang; Yue, Tao; Liu, Yilong

    2016-06-01

    This paper presents a method that combines the traditional threshold method and the SVM method to detect cloud in Landsat-8 images. The proposed method is implemented on a DSP for real-time cloud detection; the DSP platform connects to an emulator and a personal computer. The threshold method is first applied to obtain a coarse cloud detection result, and then the SVM classifier is used to obtain high-accuracy cloud detection. More than 200 cloudy Landsat-8 images were used to test the proposed method. Comparing the proposed method with the SVM method alone, the cloud detection accuracy of each image using the proposed algorithm is higher than that of the SVM algorithm. The experiments demonstrate that the DSP implementation of the proposed method can effectively realize accurate real-time cloud detection.

  8. Deep learning algorithms for detecting explosive hazards in ground penetrating radar data

    NASA Astrophysics Data System (ADS)

    Besaw, Lance E.; Stimac, Philip J.

    2014-05-01

    Buried explosive hazards (BEHs) have been, and continue to be, one of the most deadly threats in modern conflicts. Current handheld sensors rely on a highly trained operator for them to be effective in detecting BEHs. New algorithms are needed to reduce the burden on the operator and improve the performance of handheld BEH detectors. Traditional anomaly detection and discrimination algorithms use "hand-engineered" feature extraction techniques to characterize and classify threats. In this work we use a Deep Belief Network (DBN) to transcend the traditional approaches of BEH detection (e.g., principal component analysis and real-time novelty detection techniques). DBNs are pretrained using an unsupervised learning algorithm to generate compressed representations of unlabeled input data and form feature detectors. They are then fine-tuned using a supervised learning algorithm to form a predictive model. Using ground penetrating radar (GPR) data collected by a robotic cart swinging a handheld detector, our research demonstrates that relatively small DBNs can learn to model GPR background signals and detect BEHs with an acceptable false alarm rate (FAR). In this work, our DBNs achieved 91% probability of detection (Pd) with 1.4 false alarms per square meter when evaluated on anti-tank and anti-personnel targets at temperate and arid test sites. This research demonstrates that DBNs are a viable approach to detect and classify BEHs.

  9. Lesion detection in magnetic resonance brain images by hyperspectral imaging algorithms

    NASA Astrophysics Data System (ADS)

    Xue, Bai; Wang, Lin; Li, Hsiao-Chi; Chen, Hsian Min; Chang, Chein-I.

    2016-05-01

    Magnetic Resonance (MR) images can be considered multispectral images, so MR imaging can be processed by multispectral imaging techniques such as maximum likelihood classification. Unfortunately, most multispectral imaging techniques are not particularly designed for target detection. On the other hand, hyperspectral imaging is primarily developed to address subpixel detection and mixed pixel classification, for which multispectral imaging is generally not effective. This paper takes advantage of hyperspectral imaging techniques to develop target detection algorithms to find lesions in MR brain images. Since MR images are collected by only three image sequences, T1, T2 and PD, a hyperspectral imaging technique applied to MR images suffers from the issue of insufficient dimensionality. To address this issue, two approaches to nonlinear dimensionality expansion are proposed: nonlinear correlation expansion and nonlinear band ratio expansion. Once dimensionality is expanded, hyperspectral imaging algorithms are readily applied. The hyperspectral detection algorithm investigated for lesion detection in MR brain images is the well-known subpixel target detection algorithm called Constrained Energy Minimization (CEM). In order to demonstrate the effectiveness of the proposed CEM in lesion detection, synthetic images provided by BrainWeb are used for experiments.
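
    The CEM detector itself has a well-known closed form: w = R^{-1} d / (d^T R^{-1} d), where R is the sample correlation matrix of the pixel vectors and d is the target signature; by construction w^T d = 1, so a pixel exactly matching d yields an output of 1. A minimal sketch (not the paper's full pipeline, which first applies the dimensionality expansion):

```python
import numpy as np

def cem_detector(pixels, target):
    """CEM filter output for every pixel vector (rows of `pixels`)."""
    X = np.asarray(pixels, dtype=float)        # shape (N, bands)
    d = np.asarray(target, dtype=float)
    R = (X.T @ X) / X.shape[0]                 # sample correlation matrix
    Rinv = np.linalg.inv(R)
    w = Rinv @ d / (d @ Rinv @ d)              # filter with w.T @ d == 1
    return X @ w                               # detector output per pixel
```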

  10. An algorithm for power line detection and warning based on a millimeter-wave radar video.

    PubMed

    Ma, Qirong; Goshi, Darren S; Shih, Yi-Chi; Sun, Ming-Ting

    2011-12-01

    Power-line-strike accidents are a major safety threat for low-flying aircraft such as helicopters, so an automatic power-line warning system is highly desirable. In this paper we propose an algorithm for detecting power lines in radar videos from an active millimeter-wave sensor. The Hough Transform is employed to detect candidate lines. The major challenge is that the radar videos are very noisy due to ground return. Noise points can fall along a common line, producing peaks after the Hough Transform similar to those of the actual cable lines. To differentiate the cable lines from the noise lines, we train a Support Vector Machine to perform the classification. We exploit the Bragg pattern, which arises from the diffraction of the electromagnetic wave on the periodic surface of power lines, and propose a set of features representing the Bragg pattern for the classifier. We also propose a slice-processing algorithm which supports parallel processing and improves the detection of cables in a cluttered background. Lastly, an adaptive algorithm is proposed to integrate the detection results from individual frames into a reliable video detection decision, in which the temporal correlation of the cable pattern across frames makes the detection more robust. Extensive experiments with real-world data validated the effectiveness of our cable detection algorithm.
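
    The line-candidate step can be sketched with a standard (ρ, θ) Hough transform over detected points: every point votes for all lines passing through it, and collinear points pile their votes into one accumulator cell. Whole-degree angles and the ρ bin size below are illustrative choices.

```python
import math

def hough_lines(points, rho_step=1.0):
    """Vote in (theta, rho) space; theta in whole degrees [0, 180).
    Returns the winning (theta, rho-bin) cell and the full accumulator."""
    acc = {}
    for x, y in points:
        for theta in range(180):
            t = math.radians(theta)
            rho = x * math.cos(t) + y * math.sin(t)
            key = (theta, round(rho / rho_step))
            acc[key] = acc.get(key, 0) + 1
    best = max(acc, key=acc.get)
    return best, acc
```

    In the paper's setting the accumulator peaks are only candidates; the SVM with Bragg-pattern features then decides which peaks are real cables.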

  11. Detection and Segmentation of Erythrocytes in Blood Smear Images Using a Line Operator and Watershed Algorithm

    PubMed Central

    Khajehpour, Hassan; Dehnavi, Alireza Mehri; Taghizad, Hossein; Khajehpour, Esmat; Naeemabadi, Mohammadreza

    2013-01-01

    Most erythrocyte-related diseases are detectable by analysis of hematology images. As a first step of this analysis, segmentation and detection of blood cells are indispensable. In this study, a novel method using a line operator and the watershed algorithm is presented for erythrocyte detection and segmentation in blood smear images; it also reduces the over-segmentation of the watershed algorithm, which is useful for segmenting different types of partially overlapping blood cells. The method uses the gray-scale structure of blood cells, obtained by applying the Euclidean distance transform to binary images. Under this transform, the gray intensity of cell images decreases gradually from the center of each cell to its margin. To detect this intensity-variation structure, a line operator measuring gray-level variations along several directional line segments is applied. The line segments with maximum and minimum gray-level variations form a special pattern that can be used to detect the central regions of cells. Intersecting these regions with the markers obtained by calculating local maxima in the watershed algorithm was used to detect cell centers, as well as to reduce the over-segmentation of the watershed algorithm. The method created 1300 markers in the segmentation of the 1274 erythrocytes available in 25 blood smear images. The accuracy and sensitivity of the proposed method are 95.9% and 97.99%, respectively. The results show the proposed method's capability to detect erythrocytes in blood smear images. PMID:24672764
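
    The marker-extraction idea can be sketched as follows: compute a distance transform of the binary cell mask, then take strict local maxima of the distance map as cell-center markers that seed the watershed. The sketch substitutes a two-pass city-block (chamfer) distance for the Euclidean transform to stay dependency-free; it is an illustration, not the paper's implementation.

```python
def distance_transform(img):
    """Two-pass city-block distance of each 1 pixel to the nearest 0 pixel."""
    h, w = len(img), len(img[0])
    INF = h + w
    d = [[0 if img[y][x] == 0 else INF for x in range(w)] for y in range(h)]
    for y in range(h):                      # forward pass
        for x in range(w):
            if y > 0: d[y][x] = min(d[y][x], d[y - 1][x] + 1)
            if x > 0: d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    for y in range(h - 1, -1, -1):          # backward pass
        for x in range(w - 1, -1, -1):
            if y < h - 1: d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1: d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d

def local_maxima(d):
    """Strict 8-neighbour local maxima of the distance map: one marker per cell."""
    h, w = len(d), len(d[0])
    marks = []
    for y in range(h):
        for x in range(w):
            if d[y][x] == 0:
                continue
            nb = [d[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                  if (dy or dx) and 0 <= y + dy < h and 0 <= x + dx < w]
            if all(d[y][x] > v for v in nb):
                marks.append((y, x))
    return marks
```

    On two separated blobs the sketch yields exactly one marker per blob, which is the property that suppresses watershed over-segmentation.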

  12. Application of edge detection algorithm for vision guided robotics assembly system

    NASA Astrophysics Data System (ADS)

    Balabantaray, Bunil Kumar; Jha, Panchanand; Biswal, Bibhuti Bhusan

    2013-12-01

    The machine vision system has a major role in making a robotic assembly system autonomous. Part detection and identification of the correct part are important tasks which need to be done carefully by the vision system to initiate the process. This process consists of many sub-processes wherein image capturing, digitizing and enhancing, etc. serve to reconstruct the part for subsequent operations. Edge detection of the grabbed image therefore plays an important role in the entire image processing activity, and one needs to choose the correct tool for the process with respect to the given environment. In this paper a comparative study of edge detection algorithms for grasping objects in a robotic assembly system is presented. The work is performed in MATLAB R2010a Simulink. Four edge detection algorithms are compared: Canny, Roberts, Prewitt and Sobel. An attempt has been made to find the best algorithm for the problem. It is found that the Canny edge detection algorithm gives the best results and minimum error for the intended task.
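
    Of the four operators compared, Sobel is the easiest to sketch: two 3×3 kernels estimate the horizontal and vertical intensity gradients, and their magnitude marks edges. A dependency-free version on a nested-list image (border pixels are simply left at zero):

```python
def sobel(img):
    """Gradient magnitude via the 3x3 Sobel kernels (borders left at 0)."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[i][j] * img[y - 1 + i][x - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(gy_k[i][j] * img[y - 1 + i][x - 1 + j]
                     for i in range(3) for j in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

    Canny builds on exactly this gradient, adding Gaussian smoothing, non-maximum suppression, and hysteresis thresholding, which is why it tends to win comparisons like the one above.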

  13. Detection of Human Impacts by an Adaptive Energy-Based Anisotropic Algorithm

    PubMed Central

    Prado-Velasco, Manuel; Ortiz Marín, Rafael; del Rio Cidoncha, Gloria

    2013-01-01

    Motivated by the health consequences and the cost of falls in the elderly, this work develops and tests a novel algorithm and methodology to detect human impacts that will act as triggers of a two-layer fall monitor. The two main requirements demanded by socio-healthcare providers—unobtrusiveness and reliability—defined the objectives of the research. We have demonstrated that a very agile, adaptive, and energy-based anisotropic algorithm can provide 100% sensitivity and 78% specificity in the task of detecting impacts under demanding laboratory conditions. The algorithm works together with an unsupervised real-time learning technique that provides the adaptive capability, which is also presented. The work demonstrates the robustness and reliability of the new algorithm, which will be the basis of a smart fall monitor. PMID:24157505

  14. Combining algorithms in automatic detection of QRS complexes in ECG signals.

    PubMed

    Meyer, Carsten; Fernández Gavela, José; Harris, Matthew

    2006-07-01

    QRS complex and specifically R-peak detection is the crucial first step in every automatic electrocardiogram analysis. Much work has been carried out in this field, using various methods ranging from filtering and threshold methods, through wavelet methods, to neural networks and others. Performance is generally good, but each method has situations where it fails. In this paper, we suggest an approach to automatically combine different QRS complex detection algorithms, here the Pan-Tompkins and wavelet algorithms, to benefit from the strengths of both methods. In particular, we introduce parameters that balance the contributions of the individual algorithms; these parameters are estimated in a data-driven way. Experimental results and analysis are provided on the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) Arrhythmia Database. We show that our combination approach outperforms both individual algorithms. PMID:16871713
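
    A much-simplified version of the combination idea: run both detectors, then keep any candidate peak whose weighted vote across the two detectors clears a threshold, merging detections that fall within a small tolerance window. The weights and tolerances here are fixed for illustration; in the paper they are estimated from data.

```python
def fuse_detections(peaks_a, peaks_b, w_a=0.6, w_b=0.4, tol=5, thresh=0.5):
    """Fuse two sorted lists of peak sample-indices. A candidate earns
    w_a if detector A fired within `tol` samples of it, plus w_b for B;
    it is kept if the total reaches `thresh`. Near-duplicates collapse."""
    def near(p, peaks):
        return any(abs(p - q) <= tol for q in peaks)
    fused, last = [], None
    for p in sorted(set(peaks_a) | set(peaks_b)):
        score = (w_a if near(p, peaks_a) else 0.0) + (w_b if near(p, peaks_b) else 0.0)
        if score >= thresh and (last is None or p - last > tol):
            fused.append(p)
            last = p
    return fused
```

    With these weights, a peak seen only by the stronger detector survives, a peak seen only by the weaker one is dropped, and agreement always wins.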

  15. Fault Detection of Aircraft System with Random Forest Algorithm and Similarity Measure

    PubMed Central

    Park, Wookje; Jung, Sikhang

    2014-01-01

    A fault detection algorithm was developed using a similarity measure and the random forest algorithm, and applied to an unmanned aerial vehicle (UAV) prepared by the authors. The similarity measure was designed with the help of distance information, and its usefulness was verified by proof. The fault decision was carried out by calculating a weighted similarity measure, using twelve available coefficients from the healthy- and faulty-status data groups. The similarity-measure weights were obtained through the random forest algorithm (RFA), which provides data priorities. To obtain a fast decision response, a limited number of coefficients was also considered. The relation between the detection rate and the amount of feature data was analyzed and illustrated, and the useful amount of data was obtained by repeated trials of the similarity calculation. PMID:25057508
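
    The decision step can be sketched as a weighted, distance-based similarity between a new observation and a healthy reference, with per-feature weights standing in for the priorities the paper derives from the random forest; every number below is illustrative.

```python
def weighted_similarity(x, ref, weights, scale=1.0):
    """Distance-based similarity in (0, 1]: equals 1 when x == ref and
    decays with the weight-averaged absolute distance to the reference."""
    total = sum(weights)
    d = sum(w * abs(a - b) for w, a, b in zip(weights, x, ref)) / total
    return 1.0 / (1.0 + d / scale)

def is_faulty(x, healthy_ref, weights, threshold=0.8):
    """Declare a fault when similarity to the healthy profile drops too low."""
    return weighted_similarity(x, healthy_ref, weights) < threshold
```

    Features with larger weights (higher random-forest priority) pull the similarity down faster when they deviate, so faults in important channels are detected from smaller excursions.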

  16. A linear modulation-based stochastic resonance algorithm applied to the detection of weak chromatographic peaks.

    PubMed

    Deng, Haishan; Xiang, Bingren; Liao, Xuewei; Xie, Shaofei

    2006-12-01

    A simple stochastic resonance algorithm based on linear modulation was developed to amplify and detect weak chromatographic peaks. The output chromatographic peak is often distorted when using the traditional stochastic resonance algorithm due to the presence of high levels of noise. In the new algorithm, a linear modulated double-well potential is introduced to correct for the distortion of the output peak. Method parameter selection is convenient and intuitive for linear modulation. In order to achieve a better signal-to-noise ratio for the output signal, the performance of two-layer stochastic resonance was evaluated by comparing it with wavelet-based stochastic resonance. The proposed algorithm was applied to the quantitative analysis of dimethyl sulfide and the determination of chloramphenicol residues in milk, and the good linearity of the method demonstrated that it is an effective tool for detecting weak chromatographic peaks.
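
    The bistable system underlying stochastic resonance is dx/dt = ax − bx³ + s(t): a particle in a double-well potential that hops between wells only when the input (signal plus noise) is strong enough to cross the barrier, which is what amplifies a weak periodic component. The sketch below integrates the well with deterministic forcing for reproducibility; the paper's linear modulation of the well parameters is not reproduced here.

```python
import math

def double_well(signal, a=1.0, b=1.0, dt=0.01, x0=1.0):
    """Euler integration of the bistable system dx/dt = a*x - b*x**3 + s(t).
    For a = b = 1 the maximum restoring force of a well is 2/(3*sqrt(3))
    ~ 0.385, so forcing below that cannot push the particle over the barrier."""
    x, out = x0, []
    for s in signal:
        x += dt * (a * x - b * x ** 3 + s)
        out.append(x)
    return out

def sine(amp, period_steps, n):
    """Deterministic sinusoidal drive used in place of signal + noise."""
    return [amp * math.sin(2 * math.pi * i / period_steps) for i in range(n)]
```

    With sub-barrier forcing the trajectory stays trapped in one well; with stronger forcing it hops between wells in step with the drive, which is the resonance effect the chromatographic-peak detector exploits.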

  17. An Individual Tree Detection Algorithm for Dense Deciduous Forests with Spreading Branches

    NASA Astrophysics Data System (ADS)

    Shao, G.

    2015-12-01

    Individual tree information derived from LiDAR may have the potential to assist forest inventory and improve the assessment of forest structure and composition for sustainable forest management. Algorithms developed for individual tree detection commonly focus on finding tree tops to locate tree positions. However, the spreading branches (cylindrical crowns) of deciduous forests make such algorithms work less effectively on dense canopy. This research applies a machine learning algorithm, mean shift, to position individual trees based on the density of the LiDAR point cloud instead of detecting tree tops. The study site is a dense oak forest in Indiana, US. The selection of mean shift kernels is discussed, and constant and dynamic bandwidths of the mean shift algorithm are applied and compared.
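
    The mean-shift idea is easy to sketch in 2D: each point is iteratively shifted to the mean of its neighbours within a bandwidth, and points that converge to the same density mode belong to the same cluster (here, the same crown). A flat-kernel toy version with made-up points standing in for LiDAR returns; the bandwidth is an assumption:

```python
def mean_shift(points, bandwidth=2.0, iters=50, tol=1e-3):
    """Shift each point to the mean of its neighbours within `bandwidth`
    (flat kernel) until it stops moving; the converged positions are
    density modes, one per cluster."""
    modes = []
    for px, py in points:
        x, y = float(px), float(py)
        for _ in range(iters):
            nb = [(qx, qy) for qx, qy in points
                  if (qx - x) ** 2 + (qy - y) ** 2 <= bandwidth ** 2]
            nx = sum(q[0] for q in nb) / len(nb)
            ny = sum(q[1] for q in nb) / len(nb)
            done = abs(nx - x) < tol and abs(ny - y) < tol
            x, y = nx, ny
            if done:
                break
        modes.append((round(x, 1), round(y, 1)))
    return modes
```

    The paper's constant-versus-dynamic bandwidth comparison corresponds to fixing `bandwidth` globally versus adapting it per point (e.g., to local crown size).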

  18. Label propagation algorithm based on edge clustering coefficient for community detection in complex networks

    NASA Astrophysics Data System (ADS)

    Zhang, Xian-Kun; Tian, Xue; Li, Ya-Nan; Song, Chen

    2014-08-01

    The label propagation algorithm (LPA) is a graph-based semi-supervised learning algorithm which can predict the labels of unlabeled nodes from a few labeled nodes. It is a community detection method in the field of complex networks that is easy to implement, has low complexity, produces remarkable results, and is widely applied in various fields. However, the randomness of label propagation leads to poor robustness, and the classification result is unstable. This paper proposes an LPA based on the edge clustering coefficient. Each node updates its label from the neighbor connected by the edge with the highest edge clustering coefficient, rather than from a random neighbor, which effectively restrains the random spread of labels. The experimental results show that the LPA based on the edge clustering coefficient improves the stability and accuracy of the algorithm.
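
    A toy sketch of the proposed update rule: each node adopts the label of the neighbour reached through the edge with the highest edge clustering coefficient. The coefficient is computed here as (triangles on the edge + 1) / min(deg(u)−1, deg(v)−1), one common variant; the paper's exact definition may differ. Labels are updated asynchronously in a fixed node order for reproducibility.

```python
def edge_cc(adj, u, v):
    """Edge clustering coefficient (a common variant; an assumption here):
    (#triangles containing the edge + 1) / min(deg(u)-1, deg(v)-1)."""
    tri = len(adj[u] & adj[v])
    denom = min(len(adj[u]) - 1, len(adj[v]) - 1)
    return (tri + 1) / denom if denom > 0 else 0.0

def lpa_ecc(edges, rounds=10):
    """Label propagation where each node copies the label of the neighbour
    across its highest-CC edge (ties broken by smallest node id)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    labels = {n: n for n in adj}
    for _ in range(rounds):
        changed = False
        for n in sorted(adj):  # fixed order: reproducible, no randomness
            best = min(adj[n], key=lambda m: (-edge_cc(adj, n, m), m))
            if labels[n] != labels[best]:
                labels[n] = labels[best]
                changed = True
        if not changed:
            break
    return labels
```

    Intra-community edges sit on triangles and therefore carry high coefficients, while bridges between communities carry low ones, so labels stop leaking across bridges.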

  19. Parameters for successful implant integration revisited part II: algorithm for immediate loading diagnostic factors.

    PubMed

    Bahat, Oded; Sullivan, Richard M

    2010-05-01

    Immediate loading of dental implants has become a widely reported practice with success rates ranging from 70.8% to 100%. Although most studies have considered implant survival to be the only measure of success, a better definition includes the long-term stability of the hard and soft tissues around the implant(s) and other adjacent structures, as well as the long-term stability of all the restorative components. The parameters identified in 1981 by Albrektsson and colleagues as influencing the establishment and maintenance of osseointegration have been reconsidered in relation to immediate loading to improve the chances of achieving such success. Two of the six parameters (status of the bone/implant site and implant loading conditions) have preoperative diagnostic implications, whereas three (implant design, surgical technique, and implant finish) may compensate for less-than-ideal site and loading conditions. Factors affecting the outcome of immediate loading are reviewed to assist clinicians attempting to assess its risks and benefits. PMID:20455906

  20. Target detection algorithm for airborne thermal hyperspectral data

    NASA Astrophysics Data System (ADS)

    Marwaha, R.; Kumar, A.; Raju, P. L. N.; Krishna Murthy, Y. V. N.

    2014-11-01

    Airborne hyperspectral imaging is routinely used for classification purposes, but airborne thermal hyperspectral imagery is usually a challenge for conventional classification approaches. The Telops Hyper-Cam sensor is an interferometer-based imaging system that supports the spatial and spectral analysis of targets using a single sensor. It is based on Fourier-transform technology, which yields high spectral resolution and enables high-accuracy radiometric calibration. The Hyper-Cam instrument has 84 spectral bands in the 868 cm-1 to 1280 cm-1 region (7.8 μm to 11.5 μm), at a spectral resolution of 6 cm-1 (full-width-half-maximum) for the LWIR (long-wave infrared) range. Due to the Hughes effect, only a few classifiers are able to handle high-dimensional classification tasks. MNF (Minimum Noise Fraction) rotation is a dimensionality-reduction approach that segregates noise in the data. Here, the component selection of the MNF rotation transformation was analyzed in terms of classification accuracy, using the constrained energy minimization (CEM) algorithm as the classifier, both for the airborne thermal hyperspectral image alone and for the combination of the airborne LWIR hyperspectral image and a color digital photograph. Comparing the accuracy of all the classified images, accuracy was highest with twenty MNF components, and it increased when the airborne LWIR hyperspectral image was combined with the color digital photograph instead of using the LWIR data alone.

  1. A new approach to optic disc detection in human retinal images using the firefly algorithm.

    PubMed

    Rahebi, Javad; Hardalaç, Fırat

    2016-03-01

    There are various methods and algorithms to detect the optic disc in retinal images. In recent years, much attention has been given to the utilization of intelligent algorithms. In this paper, we present a new automated method of optic disc detection in human retinal images using the firefly algorithm. The firefly algorithm is an emerging intelligent algorithm inspired by the social behavior of fireflies. The population in this algorithm consists of fireflies, each of which has a specific rate of lighting, or fitness. The insects are compared two by two, and the less attractive insects move toward the more attractive ones. Finally, one insect is selected as the most attractive, and this insect presents the optimum response to the problem in question. Here, we used the light intensity of the retinal image pixels in place of the fireflies' lighting. The movement of the insects due to local fluctuations produces different light intensity values in the images. Because the optic disc is the brightest area in a retinal image, all of the insects move toward the brightest area and thus specify the location of the optic disc in the image. The proposed algorithm achieved accuracy rates of 100% on the DRIVE dataset, 95% on the STARE dataset, and 94.38% on the DiaRetDB1 dataset, revealing its high capability and accuracy in detecting the optic disc in retinal images. The average time required to detect the optic disc was 2.13 s for the DRIVE dataset, 2.81 s for the STARE dataset, and 3.52 s for the DiaRetDB1 dataset.
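
    A one-dimensional sketch of the firefly update: brighter (fitter) fireflies attract dimmer ones, with attractiveness decaying with squared distance, while a small random step maintains exploration. All parameters are illustrative, and brightness here is an arbitrary function rather than the pixel intensity used in the paper.

```python
import math, random

def firefly_max(f, lo, hi, n=10, iters=60, beta0=1.0, gamma=0.5, alpha=0.1, seed=1):
    """Maximize f on [lo, hi] with a 1-D firefly algorithm (toy parameters)."""
    rng = random.Random(seed)
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]  # deterministic start
    best_x, best_val = max(((x, f(x)) for x in xs), key=lambda t: t[1])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f(xs[j]) > f(xs[i]):   # firefly j is brighter: i moves toward j
                    beta = beta0 * math.exp(-gamma * (xs[i] - xs[j]) ** 2)
                    xs[i] += beta * (xs[j] - xs[i]) + alpha * (rng.random() - 0.5)
                    xs[i] = min(max(xs[i], lo), hi)
        for x in xs:                      # track the best position seen so far
            if f(x) > best_val:
                best_x, best_val = x, f(x)
    return best_x
```

    For optic disc detection the swarm drifts toward the brightest image region, and the brightest firefly's final position marks the disc.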

  2. A Comparative Evaluation of Unsupervised Anomaly Detection Algorithms for Multivariate Data.

    PubMed

    Goldstein, Markus; Uchida, Seiichi

    2016-01-01

    Anomaly detection is the process of identifying unexpected items or events in datasets which differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied to unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and is addressed in many practical applications, for example in network intrusion detection and fraud detection, as well as in the life science and medical domains. Dozens of algorithms have been proposed in this area, but unfortunately the research community still lacks a comparative universal evaluation as well as common publicly available datasets. These shortcomings are addressed in this study, where 19 different unsupervised anomaly detection algorithms are evaluated on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to be a new well-founded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides the anomaly detection performance, the computational effort, the impact of parameter settings, and the global/local anomaly detection behavior are outlined. As a conclusion, we give advice on algorithm selection for typical real-world tasks.

  3. A Comparative Evaluation of Unsupervised Anomaly Detection Algorithms for Multivariate Data

    PubMed Central

    Goldstein, Markus; Uchida, Seiichi

    2016-01-01

    Anomaly detection is the process of identifying unexpected items or events in datasets which differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied to unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and is addressed in many practical applications, for example in network intrusion detection and fraud detection, as well as in the life science and medical domains. Dozens of algorithms have been proposed in this area, but unfortunately the research community still lacks a comparative universal evaluation as well as common publicly available datasets. These shortcomings are addressed in this study, where 19 different unsupervised anomaly detection algorithms are evaluated on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to be a new well-founded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides the anomaly detection performance, the computational effort, the impact of parameter settings, and the global/local anomaly detection behavior are outlined. As a conclusion, we give advice on algorithm selection for typical real-world tasks. PMID:27093601

  4. Simple Common Plane contact detection algorithm for FE/FD methods

    SciTech Connect

    Vorobiev, O

    2006-07-19

    The common-plane (CP) algorithm is widely used in the Discrete Element Method (DEM) to model contact forces between interacting particles or blocks. A new, simple contact detection algorithm similar to the CP algorithm is proposed to model contacts in FE/FD methods. Here the CP is defined as a plane separating interacting faces of the FE/FD mesh, instead of the blocks or particles of the original CP method. The method does not require iterations, is very robust, and is easy to implement in both the 2D and 3D cases.

  5. Combining the genetic algorithm and successive projection algorithm for the selection of feature wavelengths to evaluate exudative characteristics in frozen-thawed fish muscle.

    PubMed

    Cheng, Jun-Hu; Sun, Da-Wen; Pu, Hongbin

    2016-04-15

    The potential use of feature wavelengths for predicting drip loss in grass carp fish, as affected by being frozen at -20°C for 24 h and thawed at 4°C for 1, 2, 4, and 6 days, was investigated. Hyperspectral images of frozen-thawed fish were obtained and their corresponding spectra were extracted. Least-squares support vector machine and multiple linear regression (MLR) models were established using five key wavelengths, selected by combining a genetic algorithm and successive projections algorithm, and this showed satisfactory performance in drip loss prediction. The MLR model with a determination coefficient of prediction (R(2)P) of 0.9258, and lower root mean square error estimated by a prediction (RMSEP) of 1.12%, was applied to transfer each pixel of the image and generate the distribution maps of exudation changes. The results confirmed that it is feasible to identify the feature wavelengths using variable selection methods and chemometric analysis for developing on-line multispectral imaging.
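
    The successive projections algorithm used for the wavelength selection above is compact: start from the variable (column) with the largest norm, then repeatedly project the remaining columns onto the orthogonal complement of the last selection and pick the column with the largest residual norm, which keeps collinearity among the selected wavelengths low. A dependency-free sketch (the starting-column rule is one common choice; implementations vary):

```python
def spa_select(X, n_select):
    """Successive projections over the columns of a samples-by-variables
    matrix X (nested lists). Returns the indices of the selected columns."""
    ncols = len(X[0])
    cols = [[row[j] for row in X] for j in range(ncols)]
    norm2 = lambda v: sum(a * a for a in v)
    # start from the column with the largest norm (a common convention)
    selected = [max(range(ncols), key=lambda j: norm2(cols[j]))]
    residual = {j: cols[j][:] for j in range(ncols)}
    for _ in range(n_select - 1):
        p = residual[selected[-1]]
        for j in residual:
            if j in selected:
                continue
            # project out the component along the last selected residual
            coef = sum(a * b for a, b in zip(residual[j], p)) / norm2(p)
            residual[j] = [a - coef * b for a, b in zip(residual[j], p)]
        selected.append(max((j for j in residual if j not in selected),
                            key=lambda j: norm2(residual[j])))
    return selected
```

    In the paper a genetic algorithm first narrows the candidate pool before SPA picks the final five wavelengths; the sketch covers only the SPA half.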

  6. Combining the genetic algorithm and successive projection algorithm for the selection of feature wavelengths to evaluate exudative characteristics in frozen-thawed fish muscle.

    PubMed

    Cheng, Jun-Hu; Sun, Da-Wen; Pu, Hongbin

    2016-04-15

    The potential use of feature wavelengths for predicting drip loss in grass carp fish, as affected by being frozen at -20°C for 24 h and thawed at 4°C for 1, 2, 4, and 6 days, was investigated. Hyperspectral images of frozen-thawed fish were obtained and their corresponding spectra were extracted. Least-squares support vector machine and multiple linear regression (MLR) models were established using five key wavelengths, selected by combining a genetic algorithm and successive projections algorithm, and this showed satisfactory performance in drip loss prediction. The MLR model with a determination coefficient of prediction (R(2)P) of 0.9258, and lower root mean square error estimated by a prediction (RMSEP) of 1.12%, was applied to transfer each pixel of the image and generate the distribution maps of exudation changes. The results confirmed that it is feasible to identify the feature wavelengths using variable selection methods and chemometric analysis for developing on-line multispectral imaging. PMID:26617027

  7. Infrared small target detection based on bilateral filtering algorithm with similarity judgments

    NASA Astrophysics Data System (ADS)

    Li, Yanbei; Li, Yan

    2014-11-01

    Infrared small target detection is one of the key technologies in infrared precision-guidance and search-and-track systems. Because the distance between the infrared imaging system and the target is large, the target appears small, faint and obscure, and the interference of background clutter and system noise is intense. To solve the problem of infrared small target detection in a complex background, this paper proposes a bilateral filtering algorithm based on similarity judgments for infrared image background prediction. The algorithm introduces a gradient factor and a similarity-judgment factor into traditional bilateral filtering; these two factors enhance the accuracy of the algorithm in smooth regions. At the same time, the spatial proximity coefficients and the gray-level similarity coefficient in the bilateral filter are expressed by the first two terms of their Maclaurin expansions, which reduces the time overhead. Simulation results show that, compared with the improved bilateral filtering algorithm, the proposed algorithm can effectively suppress complex background clutter in the infrared image, enhance the target signal, and improve the signal-to-noise ratio (SNR) and contrast, while also reducing the computation time. In short, the algorithm has good background rejection performance.
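
    The principle of bilateral filtering, before the paper's gradient and similarity-judgment factors are added, is captured by a 1-D sketch: each sample is averaged with its neighbours weighted by both spatial closeness and gray-level similarity, so noise is smoothed while strong edges survive. Subtracting the filtered background from the input then leaves small bright targets. Parameters are illustrative.

```python
import math

def bilateral_1d(signal, sigma_s=2.0, sigma_r=1.0, radius=3):
    """1-D bilateral filter: spatial Gaussian times range (gray-level)
    Gaussian. Large gray differences get near-zero weight, preserving edges."""
    out = []
    n = len(signal)
    for i in range(n):
        wsum = vsum = 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))
                 * math.exp(-((signal[i] - signal[j]) ** 2) / (2 * sigma_r ** 2)))
            wsum += w
            vsum += w * signal[j]
        out.append(vsum / wsum)
    return out
```

    The paper's Maclaurin-expansion trick replaces the two exponentials with their first two series terms to cut the per-pixel cost; the sketch keeps the exact form for clarity.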

  8. A High-Order Statistical Tensor Based Algorithm for Anomaly Detection in Hyperspectral Imagery

    NASA Astrophysics Data System (ADS)

    Geng, Xiurui; Sun, Kang; Ji, Luyan; Zhao, Yongchao

    2014-11-01

    Recently, high-order statistics have received more and more interest in the field of hyperspectral anomaly detection. However, most existing high-order-statistics-based anomaly detection methods require stepwise iterations, since they are direct applications of blind source separation. Moreover, these methods usually produce multiple detection maps rather than a single anomaly distribution image. In this study, we exploit the concept of the coskewness tensor and propose a new anomaly detection method, called COSD (coskewness detector). COSD needs no iteration and produces a single detection map. Experiments based on both simulated and real hyperspectral data sets verify the effectiveness of our algorithm.

  9. A high-order statistical tensor based algorithm for anomaly detection in hyperspectral imagery.

    PubMed

    Geng, Xiurui; Sun, Kang; Ji, Luyan; Zhao, Yongchao

    2014-01-01

    Recently, high-order statistics have received more and more interest in the field of hyperspectral anomaly detection. However, most existing high-order-statistics-based anomaly detection methods require stepwise iterations, since they are direct applications of blind source separation. Moreover, these methods usually produce multiple detection maps rather than a single anomaly distribution image. In this study, we exploit the concept of the coskewness tensor and propose a new anomaly detection method, called COSD (coskewness detector). COSD needs no iteration and produces a single detection map. Experiments based on both simulated and real hyperspectral data sets verify the effectiveness of our algorithm. PMID:25366706

  10. Structured learning algorithm for detection of nonobstructive and obstructive coronary plaque lesions from computed tomography angiography

    PubMed Central

    Kang, Dongwoo; Dey, Damini; Slomka, Piotr J.; Arsanjani, Reza; Nakazato, Ryo; Ko, Hyunsuk; Berman, Daniel S.; Li, Debiao; Kuo, C.-C. Jay

    2015-01-01

    Abstract. Visual identification of coronary arterial lesions from three-dimensional coronary computed tomography angiography (CTA) remains challenging. We aimed to develop a robust automated algorithm for computer detection of coronary artery lesions by machine learning techniques. A structured learning technique is proposed to detect all coronary arterial lesions with stenosis ≥25%. Our algorithm consists of two stages: (1) two independent base decisions indicating the existence of lesions in each arterial segment and (2) a final decision made by combining the base decisions. One of the base decisions is a support vector machine (SVM) based learning algorithm, which divides each artery into small volume patches and integrates several quantitative geometric and shape features for arterial lesions in each small volume patch. The other base decision is a formula-based analytic method. In the second stage, SVM-based decision fusion combines the two base decisions from the first stage into the final decision. The proposed algorithm was applied to 42 CTA patient datasets, acquired with dual-source CT, where 21 datasets had 45 lesions with stenosis ≥25%. Visual identification of lesions with stenosis ≥25% by three expert readers, using consensus reading, was considered the reference standard. Our method performed with high sensitivity (93%), specificity (95%), and accuracy (94%), with a receiver operating characteristic area under the curve of 0.94. The proposed algorithm shows promising results in the automated detection of obstructive and nonobstructive lesions from CTA. PMID:26158081

  11. A Gaussian Process Based Online Change Detection Algorithm for Monitoring Periodic Time Series

    SciTech Connect

    Chandola, Varun; Vatsavai, Raju

    2011-01-01

    Online time series change detection is a critical component of many monitoring systems, such as space and air-borne remote sensing instruments, cardiac monitors, and network traffic profilers, which continuously analyze observations recorded by sensors. Data collected by such sensors typically has a periodic (seasonal) component. Most existing time series change detection methods are not directly applicable to handle such data, either because they are not designed to handle periodic time series or because they cannot operate in an online mode. We propose an online change detection algorithm which can handle periodic time series. The algorithm uses a Gaussian process based non-parametric time series prediction model and monitors the difference between the predictions and actual observations within a statistically principled control chart framework to identify changes. A key challenge in using Gaussian process in an online mode is the need to solve a large system of equations involving the associated covariance matrix which grows with every time step. The proposed algorithm exploits the special structure of the covariance matrix and can analyze a time series of length T in O(T^2) time while maintaining a O(T) memory footprint, compared to O(T^4) time and O(T^2) memory requirement of standard matrix manipulation methods. We experimentally demonstrate the superiority of the proposed algorithm over several existing time series change detection algorithms on a set of synthetic and real time series. Finally, we illustrate the effectiveness of the proposed algorithm for identifying land use land cover changes using Normalized Difference Vegetation Index (NDVI) data collected for an agricultural region in Iowa state, USA. Our algorithm is able to detect different types of changes in a NDVI validation data set (with ~80% accuracy) which occur due to crop type changes as well as disruptive changes (e.g., natural disasters).
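
    The control-chart part of the method can be sketched without the Gaussian process machinery: substitute a simple per-phase (seasonal) mean and standard deviation learned from an initial window, then flag observations whose residual z-score exceeds a threshold. This deliberately replaces the paper's GP predictor with a crude seasonal model; the period, training window, and threshold below are illustrative.

```python
def online_change_detect(series, period, train_cycles=3, z_thresh=4.0):
    """Learn a per-phase mean/std from the first `train_cycles` periods,
    then flag any later observation whose residual z-score exceeds
    `z_thresh`. Returns the indices of alarmed observations."""
    n_train = period * train_cycles
    mean, var = [0.0] * period, [0.0] * period
    for i in range(n_train):
        mean[i % period] += series[i] / train_cycles
    for i in range(n_train):
        var[i % period] += (series[i] - mean[i % period]) ** 2 / train_cycles
    std = [max(v ** 0.5, 1e-6) for v in var]   # floor avoids division by zero
    alarms = []
    for i in range(n_train, len(series)):
        z = abs(series[i] - mean[i % period]) / std[i % period]
        if z > z_thresh:
            alarms.append(i)
    return alarms
```

    The GP version makes the same prediction-versus-observation comparison but with a far richer predictive distribution, which is where the paper's O(T^2)-time, O(T)-memory covariance tricks come in.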

  12. Novel Ultrasound Sensor and Reconstruction Algorithm for Breast Cancer Detection

    SciTech Connect

    Kallman, J S; Ashby, A E; Ciarlo, D R; Thomas, G H

    2002-09-09

    Mammography is currently used to screen women over the age of 40 for breast cancer. It has not been used routinely on younger women because their breast composition is mostly glandular, or radiodense, meaning there is an increased radiation exposure risk as well as a high likelihood of poor image quality. For these younger women, the radiation exposure risk is calculated to be higher than the potential benefit from the screening. It is anticipated that transmission ultrasound will enable screening of much younger women and complement mammographic screening in women over 40. Ultrasonic transmission tomography holds out the hope of being a discriminating tool for breast cancer screening that is safe, comfortable, and inexpensive. From its inception, however, this imaging modality has been plagued by the problem of how to quickly and inexpensively obtain the data necessary for the tomographic reconstruction. The objectives of this project were: to adapt a new kind of sensor to data acquisition for ultrasonic transmission tomography of the breast, to collect phantom data, to devise new reconstruction algorithms to use that data, and to recommend improved methods for displaying the reconstructions. The ultrasound sensor images an acoustic pressure wave over an entire surface by converting sound pressure into an optical modulation. At the beginning of this project the sensor imaged an area of approximately 7 mm by 7 mm and was very fragile. During the first year of this research we improved the production and assembly process of the sensors so they now last indefinitely. Our goal for the second year was to enlarge the sensor aperture. Due to the unavailability of high-quality materials, we were not able to enlarge our original design. We created a phantom of materials similar to those used in manufacturing breast phantoms, and we used the sensors to collect data from this phantom.
We used both established (diffraction tomography) and new (paraxial adjoint method tomography

  13. A PD control-based QRS detection algorithm for wearable ECG applications.

    PubMed

    Choi, Changmok; Kim, Younho; Shin, Kunsoo

    2012-01-01

    We present a QRS detection algorithm for wearable ECG applications using proportional-derivative (PD) control. ECG data from arrhythmia patients have irregular intervals and magnitudes of QRS waves that impede correct QRS detection. To resolve this problem, PD control is applied to avoid missing a small QRS wave following a large QRS wave and to avoid falsely detecting noise as QRS waves when the interval between two adjacent QRS waves is large (e.g. bradycardia, pause, and atrioventricular block). ECG data were obtained from 78 patients with various cardiovascular diseases and used for the performance evaluation of the proposed algorithm. The overall sensitivity and positive predictive value were 99.28% and 99.26%, respectively. The proposed algorithm has low computational complexity, making it suitable for real-time mobile ECG monitoring systems. PMID:23367208
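    The abstract does not give the control law in detail; the following is a hypothetical sketch of how a PD-controlled detection threshold could track QRS amplitudes so that a small beat after a large one is not missed. All gains (kp, kd) and the half-peak target are invented for illustration, not the authors' values.

```python
# Hypothetical PD-controlled detection threshold (illustrative, not the paper's code).
# The proportional term pulls the threshold toward a fraction of the latest peak
# amplitude; the derivative term reacts quickly to amplitude changes.
def pd_threshold_trace(peak_amplitudes, kp=0.3, kd=0.1, init=0.5):
    thresholds = [init]
    prev_err = 0.0
    thr = init
    for amp in peak_amplitudes:
        err = 0.5 * amp - thr          # target: half the latest peak amplitude
        thr += kp * err + kd * (err - prev_err)
        prev_err = err
        thresholds.append(thr)
    return thresholds

# A run of large beats followed by a small beat: the threshold drops so the
# small QRS wave still exceeds it.
trace = pd_threshold_trace([1.0, 1.0, 0.3, 0.9])
```

The key behavior is that after the 0.3-amplitude beat the threshold falls below its steady-state value, then recovers as larger beats return.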

  14. Rain detection and removal algorithm using motion-compensated non-local mean filter

    NASA Astrophysics Data System (ADS)

    Song, B. C.; Seo, S. J.

    2015-03-01

    This paper proposes a novel rain detection and removal algorithm robust against camera motion. Detecting and removing rain in video with camera motion is very difficult, so most previous works assume a fixed camera; this assumption, however, limits their practical use. The proposed algorithm initially detects possible rain streaks using spatial properties such as the luminance and structure of rain streaks. The rain streak candidates are then selected based on a Gaussian distribution model. Next, a non-rain block matching algorithm is performed between adjacent frames to find blocks similar to each block containing rain pixels. If such similar blocks are found, the rain region of the block is reconstructed by non-local mean (NLM) filtering using those similar neighbors. Experimental results show that the proposed method outperforms previous works in terms of objective and subjective visual quality.
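    The reconstruction step above relies on non-local mean filtering. A minimal 1-D sketch of the NLM idea (toy data; the patch size and filtering parameter h are illustrative choices, and the paper operates on 2-D blocks across frames):

```python
import math

# Simplified 1-D non-local mean (NLM): a corrupted sample is replaced by a
# weighted average of other samples, weighted by how similar their local
# neighborhoods are to the target's neighborhood.
def nlm_restore(signal, idx, patch=1, h=10.0):
    def patch_at(i):
        # Neighborhood around i, clamped at the borders.
        return [signal[max(0, min(len(signal) - 1, i + d))] for d in range(-patch, patch + 1)]
    target = patch_at(idx)
    num, den = 0.0, 0.0
    for j in range(len(signal)):
        if j == idx:
            continue
        dist = sum((a - b) ** 2 for a, b in zip(patch_at(j), target))
        w = math.exp(-dist / (h * h))   # similar neighborhoods get larger weights
        num += w * signal[j]
        den += w
    return num / den

clean = [10, 10, 10, 80, 10, 10]       # index 3 plays the role of a bright rain streak
restored = nlm_restore(clean, 3)       # restored value returns to the background level
```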

  15. Ear feature region detection based on a combined image segmentation algorithm-KRM

    NASA Astrophysics Data System (ADS)

    Jiang, Jingying; Zhang, Hao; Zhang, Qi; Lu, Junsheng; Ma, Zhenhe; Xu, Kexin

    2014-02-01

    The Scale Invariant Feature Transform (SIFT) algorithm is widely used for ear feature matching and recognition. However, its application is usually hampered by non-target areas within the image, which interfere with the matching and recognition of ear features. To solve this problem, a combined image segmentation algorithm, KRM, is introduced in this paper as a pretreatment method for human ear recognition. First, the target ear areas are extracted by the KRM algorithm, and the SIFT algorithm is then applied to feature detection and matching. The KRM algorithm follows three steps: (1) the image is preliminarily segmented into foreground target and background areas using K-means clustering; (2) region growing is used to merge the over-segmented areas; (3) morphological erosion filtering is applied to obtain the final segmented regions. The experimental results showed that the KRM method can effectively improve the accuracy and robustness of SIFT-based ear feature matching and recognition.
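    A minimal sketch of step (1) of the pipeline described above: K-means clustering of pixel intensities into foreground and background (here reduced to 1-D intensities; region growing and morphological erosion, steps 2 and 3, would then operate on the resulting binary mask). All data and parameters are illustrative, not the paper's.

```python
# 1-D K-means with k = 2, initialized at the intensity extremes.
def kmeans_1d(values, iters=10):
    c0, c1 = min(values), max(values)
    for _ in range(iters):
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        c0 = sum(g0) / len(g0) if g0 else c0
        c1 = sum(g1) / len(g1) if g1 else c1
    return c0, c1

pixels = [12, 15, 10, 200, 210, 190, 14]     # dark background, bright ear region
bg, fg = kmeans_1d(pixels)
# Binary mask: 1 where the pixel is closer to the foreground centroid.
mask = [1 if abs(p - fg) < abs(p - bg) else 0 for p in pixels]
```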

  16. Creating a Successful Citizen Science Model to Detect and Report Invasive Species

    ERIC Educational Resources Information Center

    Gallo, Travis; Waitt, Damon

    2011-01-01

    The Invaders of Texas program is a successful citizen science program in which volunteers survey and monitor invasive plants throughout Texas. Invasive plants are being introduced at alarming rates, and our limited knowledge about their distribution is a major cause for concern. The Invaders of Texas program trains citizen scientists to detect the…

  17. A New Pivoting and Iterative Text Detection Algorithm for Biomedical Images

    SciTech Connect

    Xu, Songhua; Krauthammer, Prof. Michael

    2010-01-01

    There is interest to expand the reach of literature mining to include the analysis of biomedical images, which often contain a paper's key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating the performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. We demonstrate that our projection histogram-based text detection approach is well suited for text detection in biomedical images, and that the iterative application of the algorithm boosts performance to an F score of 0.60. We provide a C++ implementation of our algorithm freely available for academic use.
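    A sketch of the general projection-histogram technique the abstract builds on (the threshold and data are illustrative): summing "ink" pixels per row gives a profile whose above-threshold runs mark candidate text bands; applying the same idea per column inside each band, iteratively, refines the boxes.

```python
# Find horizontal text bands from a row-projection histogram of a binary image.
def row_bands(binary_img, min_ink=1):
    profile = [sum(row) for row in binary_img]
    bands, start = [], None
    for i, v in enumerate(profile + [0]):        # sentinel closes a trailing band
        if v >= min_ink and start is None:
            start = i
        elif v < min_ink and start is not None:
            bands.append((start, i - 1))
            start = None
    return bands

img = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],   # text line 1
    [0, 0, 0, 0],
    [1, 1, 0, 1],   # text line 2
]
bands = row_bands(img)   # two bands, one per text line
```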

  18. Peak detection in fiber Bragg grating using a fast phase correlation algorithm

    NASA Astrophysics Data System (ADS)

    Lamberti, A.; Vanlanduit, S.; De Pauw, B.; Berghmans, F.

    2014-05-01

    The fiber Bragg grating sensing principle is based on exact tracking of the peak wavelength location. Several peak detection techniques have already been proposed in the literature. Among these, conventional peak detection (CPD) methods such as the maximum detection algorithm (MDA) do not achieve very high precision and accuracy, especially when the signal-to-noise ratio (SNR) and the wavelength resolution are poor. On the other hand, recently proposed algorithms, like the cross-correlation demodulation algorithm (CCA), are more precise and accurate but require higher computational effort. To overcome these limitations, we developed a novel fast phase correlation (FPC) algorithm which performs as well as the CCA while being considerably faster. This paper presents the FPC technique and analyzes its performance for different SNRs and wavelength resolutions. Using simulations and experiments, we compared the FPC with the MDA and CCA algorithms. The FPC detection capabilities were as precise and accurate as those of the CCA and considerably better than those of the CPD methods. The FPC computational time was up to 50 times lower than that of the CCA, making the FPC a valid candidate for future implementation in real-time systems.
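    An illustrative sketch of the cross-correlation idea that the CCA uses and the FPC accelerates: the measured spectrum is correlated against a reference Bragg profile, and the lag with maximum correlation gives the wavelength shift in samples. The FPC replaces this spatial-domain search with a faster phase-based computation; the spectra below are toy data.

```python
# Brute-force cross-correlation shift estimator (the CCA principle).
def best_shift(reference, measured):
    n = len(reference)
    scores = {}
    for lag in range(-(n - 1), n):
        s = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                s += reference[i] * measured[j]
        scores[lag] = s
    return max(scores, key=scores.get)   # lag of maximum correlation

ref = [0, 1, 4, 1, 0, 0, 0]      # reference Bragg peak centered at index 2
meas = [0, 0, 0, 1, 4, 1, 0]     # same peak shifted right by 2 samples
shift = best_shift(ref, meas)
```

Sub-sample precision, as needed for real FBG interrogation, would require interpolating around the correlation maximum rather than returning an integer lag.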

  19. An ant colony based algorithm for overlapping community detection in complex networks

    NASA Astrophysics Data System (ADS)

    Zhou, Xu; Liu, Yanheng; Zhang, Jindong; Liu, Tuming; Zhang, Di

    2015-06-01

    Community detection is of great importance for understanding the structures and functions of networks. Overlap is a significant feature of networks, and overlapping community detection has attracted increasing attention. Many algorithms have been presented to detect overlapping communities. In this paper, we present an ant colony based overlapping community detection algorithm which mainly includes ants' location initialization, ants' movement and post-processing phases. An ants' location initialization strategy is designed to identify the initial locations of ants and initialize the label list stored in each node. During the ants' movement phase, all the ants move according to the transition probability matrix, and a new heuristic information computation approach is defined to measure the similarity between two nodes. Every node keeps a label list through the cooperation of the ants until a termination criterion is reached. A post-processing phase is executed on the label list to obtain the final overlapping community structure. We illustrate the capability of our algorithm with experiments on both synthetic and real-world networks. The results demonstrate that our algorithm has better performance in finding overlapping communities and overlapping nodes in synthetic and real-world datasets compared with state-of-the-art algorithms.
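    A toy sketch of the movement rule described above: an ant at a node moves to each neighbor with probability proportional to a heuristic similarity, i.e. one row of the transition probability matrix. The similarity values are invented; the paper defines its own heuristic.

```python
# Normalize per-neighbor similarities into one row of a transition probability matrix.
def transition_row(similarities):
    total = sum(similarities)
    return [s / total for s in similarities]

sims = [0.2, 0.6, 0.2]          # similarity of the current node to three neighbors
probs = transition_row(sims)    # the ant most likely moves to the second neighbor
```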

  20. Integration of a Self-Coherence Algorithm into DISAT for Forced Oscillation Detection

    SciTech Connect

    Follum, James D.; Tuffner, Francis K.; Amidan, Brett G.

    2015-03-03

    With the increasing number of phasor measurement units (PMUs) on the power system, behaviors typically not observable on the power system are becoming more apparent. Oscillatory behavior, notably forced oscillations, is one such behavior. However, the large amount of data coming from the PMUs makes manually detecting and locating these oscillations difficult. To automate portions of the process, an oscillation detection routine was coded into the Data Integrity and Situational Awareness Tool (DISAT) framework. Integration into the DISAT framework allows forced oscillations to be detected and information about the event provided to operational engineers. The oscillation detection algorithm integrates with the data handling and atypical data detection capabilities of DISAT, building on a standard library of functions. This report details that integration, with information on the algorithm, some implementation issues, and some sample results from the western United States' power grid.

  1. A generalized power-law detection algorithm for humpback whale vocalizations.

    PubMed

    Helble, Tyler A; Ierley, Glenn R; D'Spain, Gerald L; Roch, Marie A; Hildebrand, John A

    2012-04-01

    Conventional detection of humpback vocalizations is often based on frequency summation of band-limited spectrograms under the assumption that energy (the square of the Fourier amplitude) is the appropriate metric. Power-law detectors allow for a higher power of the Fourier amplitude, appropriate when the signal occupies a limited but unknown subset of these frequencies. Shipping noise is non-stationary and colored, and is problematic for many marine mammal detection algorithms. Modifications to the standard power-law form are introduced to minimize the effects of this noise. These same modifications also allow for a fixed detection threshold, applicable to broadly varying ocean acoustic environments. The detection algorithm is general enough to detect all types of humpback vocalizations. Tests presented in this paper show this algorithm matches human detection performance with an acceptably small probability of false alarms (P(FA) < 6%) for even the noisiest environments. The detector outperforms energy detection techniques, providing a probability of detection P(D) = 95% for P(FA) < 5% for three acoustic deployments, compared to P(FA) > 40% for two energy-based techniques. The generalized power-law detector can also be used for basic parameter estimation and can be adapted for other types of transient sounds.
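    A toy illustration of why raising noise-normalized spectral power to an exponent greater than one helps when a call concentrates in few frequency bins. The exponent and normalization are the generic power-law form, not the paper's specific modifications; the spectra are synthetic.

```python
# Generic power-law test statistic: sum over bins of (power / noise power)^gamma.
# gamma = 1 reduces to the plain energy detector.
def power_law_stat(spectrum_power, noise_power, gamma=2.0):
    return sum((p / n) ** gamma for p, n in zip(spectrum_power, noise_power))

noise = [1.0] * 8
call = [1.0] * 8
call[3] = 5.0                         # vocalization concentrated in one bin

# Contrast between signal-present and noise-only statistics:
energy_gap = sum(call) - sum(noise)                                      # gamma = 1
pl_gap = power_law_stat(call, noise) - power_law_stat(noise, noise)      # gamma = 2
```

The power-law statistic separates the call from noise by a much wider margin than plain energy summation, which is the advantage the abstract describes for narrowband calls in broadband noise.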

  2. Combining contour detection algorithms for the automatic extraction of the preparation line from a dental 3D measurement

    NASA Astrophysics Data System (ADS)

    Ahlers, Volker; Weigl, Paul; Schachtzabel, Hartmut

    2005-04-01

    Due to the increasing demand for high-quality ceramic crowns and bridges, the CAD/CAM-based production of dental restorations has been a subject of intensive research during the last fifteen years. A prerequisite for the efficient processing of the 3D measurement of prepared teeth with a minimal amount of user interaction is the automatic determination of the preparation line, which defines the sealing margin between the restoration and the prepared tooth. Current dental CAD/CAM systems mostly require the interactive definition of the preparation line by the user, at least by means of giving a number of start points. Previous approaches to the automatic extraction of the preparation line rely on single contour detection algorithms. In contrast, we use a combination of different contour detection algorithms to find several independent potential preparation lines from a height profile of the measured data. The different algorithms (gradient-based, contour-based, and region-based) show their strengths and weaknesses in different clinical situations. A classifier consisting of three stages (range check, decision tree, support vector machine), which is trained by human experts with real-world data, finally decides which is the correct preparation line. In a test with 101 clinical preparations, a success rate of 92.0% has been achieved. Thus the combination of different contour detection algorithms yields a reliable method for the automatic extraction of the preparation line, which enables the setup of a turn-key dental CAD/CAM process chain with a minimal amount of interactive screen work.

  3. A Genetic Algorithm and Fuzzy Logic Approach for Video Shot Boundary Detection

    PubMed Central

    Thounaojam, Dalton Meitei; Khelchandra, Thongam; Singh, Kh. Manglem; Roy, Sudipta

    2016-01-01

    This paper proposes a shot boundary detection approach using a Genetic Algorithm and Fuzzy Logic. The membership functions of the fuzzy system are calculated using the Genetic Algorithm by taking pre-observed actual values for shot boundaries. The classification of the types of shot transitions is done by the fuzzy system. Experimental results show that the accuracy of shot boundary detection increases with the number of iterations or generations of the GA optimization process. The proposed system is compared to the latest techniques and yields better results in terms of the F1 score parameter. PMID:27127500

  4. A Genetic Algorithm and Fuzzy Logic Approach for Video Shot Boundary Detection.

    PubMed

    Thounaojam, Dalton Meitei; Khelchandra, Thongam; Manglem Singh, Kh; Roy, Sudipta

    2016-01-01

    This paper proposes a shot boundary detection approach using a Genetic Algorithm and Fuzzy Logic. The membership functions of the fuzzy system are calculated using the Genetic Algorithm by taking pre-observed actual values for shot boundaries. The classification of the types of shot transitions is done by the fuzzy system. Experimental results show that the accuracy of shot boundary detection increases with the number of iterations or generations of the GA optimization process. The proposed system is compared to the latest techniques and yields better results in terms of the F1 score parameter. PMID:27127500

  5. A Genetic Algorithm and Fuzzy Logic Approach for Video Shot Boundary Detection.

    PubMed

    Thounaojam, Dalton Meitei; Khelchandra, Thongam; Manglem Singh, Kh; Roy, Sudipta

    2016-01-01

    This paper proposes a shot boundary detection approach using a Genetic Algorithm and Fuzzy Logic. The membership functions of the fuzzy system are calculated using the Genetic Algorithm by taking pre-observed actual values for shot boundaries. The classification of the types of shot transitions is done by the fuzzy system. Experimental results show that the accuracy of shot boundary detection increases with the number of iterations or generations of the GA optimization process. The proposed system is compared to the latest techniques and yields better results in terms of the F1 score parameter.
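    The abstracts above do not specify the GA operators. A generic elitist GA in the same spirit, evolving a single membership-function center toward a pre-observed value (the target, population size, crossover, and mutation settings are all illustrative stand-ins):

```python
import random

random.seed(0)
TARGET = 0.35                                   # "true" membership center from observed boundaries

def fitness(center):
    return -abs(center - TARGET)                # closer to the observed value is better

pop = [random.random() for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                          # elitism: keep the best half
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        child = (a + b) / 2                     # crossover: average of two parents
        child += random.gauss(0, 0.02)          # small Gaussian mutation
        children.append(min(1.0, max(0.0, child)))
    pop = parents + children

best = max(pop, key=fitness)                    # evolved membership center
```

As the abstract notes, accuracy improves with the number of generations; here, elitism guarantees the best candidate never gets worse between generations.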

  6. Detection of Carious Lesions and Restorations Using Particle Swarm Optimization Algorithm

    PubMed Central

    Naebi, Mohammad; Saberi, Eshaghali; Risbaf Fakour, Sirous; Naebi, Ahmad; Hosseini Tabatabaei, Somayeh; Ansari Moghadam, Somayeh; Bozorgmehr, Elham; Davtalab Behnam, Nasim; Azimi, Hamidreza

    2016-01-01

    Background/Purpose. Until now, no intelligent method has been used for detecting dental caries and restorations; dentists simply look at images and locate them based on their experience. Using new technologies, detection and repair of dental problems can be implemented intelligently. In this paper, we introduce an intelligent detection method using particle swarm optimization (PSO) and our mathematical formulation. The method was applied to 2D images; with further development, it can be extended to both 2D and 3D images. Materials and Methods. In recent years, high-efficiency optimization algorithms have made intelligent image processing possible in many applications, especially the detection of dental caries and restorations without human intervention. In the present work, we describe the PSO algorithm together with our detection formula for dental caries and restorations. Image processing was used to implement the method, applied to pictures taken by digital dental radiography systems. Results and Conclusion. We implemented a mathematical fitness formula for the PSO. Our results show that the method can detect dental caries and restorations in digital radiographs with good convergence. The error rate of the method was 8%, so it can be used for the detection of dental caries and restorations. With suitable parameters, the error rate may be reduced to below 0.5%. PMID:27212947

  7. Detection of Carious Lesions and Restorations Using Particle Swarm Optimization Algorithm.

    PubMed

    Naebi, Mohammad; Saberi, Eshaghali; Risbaf Fakour, Sirous; Naebi, Ahmad; Hosseini Tabatabaei, Somayeh; Ansari Moghadam, Somayeh; Bozorgmehr, Elham; Davtalab Behnam, Nasim; Azimi, Hamidreza

    2016-01-01

    Background/Purpose. Until now, no intelligent method has been used for detecting dental caries and restorations; dentists simply look at images and locate them based on their experience. Using new technologies, detection and repair of dental problems can be implemented intelligently. In this paper, we introduce an intelligent detection method using particle swarm optimization (PSO) and our mathematical formulation. The method was applied to 2D images; with further development, it can be extended to both 2D and 3D images. Materials and Methods. In recent years, high-efficiency optimization algorithms have made intelligent image processing possible in many applications, especially the detection of dental caries and restorations without human intervention. In the present work, we describe the PSO algorithm together with our detection formula for dental caries and restorations. Image processing was used to implement the method, applied to pictures taken by digital dental radiography systems. Results and Conclusion. We implemented a mathematical fitness formula for the PSO. Our results show that the method can detect dental caries and restorations in digital radiographs with good convergence. The error rate of the method was 8%, so it can be used for the detection of dental caries and restorations. With suitable parameters, the error rate may be reduced to below 0.5%. PMID:27212947
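    A generic PSO sketch of the optimizer named above. The fitness function here is a toy 1-D quadratic standing in for the authors' detection formula, and all constants (inertia, acceleration coefficients, swarm size) are textbook illustrative values, not the paper's.

```python
import random

random.seed(1)

def fitness(x):
    return (x - 3.0) ** 2                       # toy objective: minimum at x = 3

n, iters = 15, 60
pos = [random.uniform(-10, 10) for _ in range(n)]
vel = [0.0] * n
pbest = pos[:]                                  # each particle's best position so far
gbest = min(pos, key=fitness)                   # swarm-wide best position
for _ in range(iters):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        # Velocity update: inertia + pull toward personal best + pull toward global best.
        vel[i] = 0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i]) + 1.5 * r2 * (gbest - pos[i])
        pos[i] += vel[i]
        if fitness(pos[i]) < fitness(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=fitness)
```

In the paper's setting, the particle positions would encode candidate lesion locations in the radiograph and the fitness would come from the authors' detection formula.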

  8. Molecular scatology: how to improve prey DNA detection success in avian faeces?

    PubMed

    Oehm, Johannes; Juen, Anita; Nagiller, Karin; Neuhauser, Sigrid; Traugott, Michael

    2011-07-01

    The analysis of prey DNA in faeces is a non-invasive approach to examine the diet of birds. However, it is poorly known how gut transition time, environmental factors and laboratory treatments such as storage conditions or DNA extraction procedures affect the detection success of prey DNA. Here, we examined several of these factors using faeces from carrion crows fed with insect larvae. Faeces produced between 30 min and 4 h post-feeding tested positive for insect DNA, representing the gut transition time. Prey detection was not only possible in fresh but also in 5-day-old faeces. The type of surface the faeces were placed on for these 5 days, however, affected prey DNA detection success: samples placed on soil provided the lowest rate of positives compared to faeces left on leaves, on branches and within plastic tubes. Exposing faeces to sunlight and rain significantly lowered prey DNA detection rates (17% and 68% positives in exposed and protected samples, respectively). Storing faeces in ethanol or in the freezer did not affect molecular prey detection. Extracting DNA directly from larger pieces of faecal pellets resulted in significantly higher prey detection rates than when using small amounts of homogenized faeces. A cetyltrimethyl ammonium bromide-based DNA extraction protocol yielded significantly higher DNA detection rates (60%) than three commercial kits, however, for small amounts of homogenized faeces only. Our results suggest that collecting faeces from smooth, clean and non-absorbing surfaces, protected from sunlight and rain, improves DNA detection success in avian faeces. PMID:21676193

  9. Automated shock detection and analysis algorithm for space weather application

    NASA Astrophysics Data System (ADS)

    Vorotnikov, Vasiliy S.; Smith, Charles W.; Hu, Qiang; Szabo, Adam; Skoug, Ruth M.; Cohen, Christina M. S.

    2008-03-01

    Space weather applications have grown steadily as real-time data have become increasingly available. Numerous industrial applications have arisen, with safeguarding of the power distribution grids being a particular interest. NASA uses short-term and long-term space weather predictions in its launch facilities. Researchers studying ionospheric, auroral, and magnetospheric disturbances use real-time space weather services to determine launch times. Commercial airlines, communication companies, and the military use space weather measurements to manage their resources and activities. As the effects of solar transients upon the Earth's environment and society grow with the increasing complexity of technology, better tools are needed to monitor and evaluate the characteristics of incoming disturbances. In particular, automated shock detection and analysis methods applicable to in situ measurements upstream of the Earth are needed. Such tools can provide advance warning of approaching disturbances that have significant space weather impacts. Knowledge of the shock strength and speed can also provide insight into the nature of the approaching solar transient prior to arrival at the magnetopause. We report on efforts to develop a tool that can find and analyze shocks in interplanetary plasma data without operator intervention. This method runs with sufficient speed to be a practical space weather tool, providing useful shock information within 1 min of the necessary data reaching the ground. The ability to run without human intervention frees space weather operators to perform other vital services. We describe ways of handling upstream data that minimize the frequency of false positive alerts while providing the most complete description of approaching disturbances that is reasonably possible.

  10. Low power multi-camera system and algorithms for automated threat detection

    NASA Astrophysics Data System (ADS)

    Huber, David J.; Khosla, Deepak; Chen, Yang; Van Buer, Darrel J.; Martin, Kevin

    2013-05-01

    A key to any robust automated surveillance system is continuous, wide field-of-view sensor coverage and high-accuracy target detection algorithms. Newer systems typically employ an array of multiple fixed cameras that provide individual data streams, each of which is managed by its own processor. This array can continuously capture the entire field of view, but collecting all the data and running the back-end detection algorithm consumes additional power and increases the size, weight, and power (SWaP) of the package. This is often unacceptable, as many potential surveillance applications have strict system SWaP requirements. This paper describes a wide field-of-view video system that employs multiple fixed cameras and exhibits low SWaP without compromising the target detection rate. We cycle through the sensors, fetch a fixed number of frames, and process them through a modified target detection algorithm. During this time, the other sensors remain powered down, which reduces the required hardware and power consumption of the system. We show that the resulting gaps in coverage and irregular frame rate do not affect the detection accuracy of the underlying algorithms. This reduces the power of an N-camera system by up to approximately N-fold compared to baseline normal operation. This work was applied to Phase 2 of the DARPA Cognitive Technology Threat Warning System (CT2WS) program and used during field testing.

  11. Evaluation of the GPU architecture for the implementation of target detection algorithms for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Trigueros-Espinosa, Blas; Vélez-Reyes, Miguel; Santiago-Santiago, Nayda G.; Rosario-Torres, Samuel

    2011-06-01

    Hyperspectral sensors can collect hundreds of images taken at different narrow and contiguously spaced spectral bands. This high-resolution spectral information can be used to identify materials and objects within the field of view of the sensor by their spectral signature, but this process may be computationally intensive due to the large data sizes generated by hyperspectral sensors, typically hundreds of megabytes. This can be an important limitation for applications where the detection process must be performed in real time (surveillance, explosive detection, etc.). In this work, we developed a parallel implementation of three state-of-the-art target detection algorithms (the RX algorithm, matched filter, and adaptive matched subspace detector) using a graphics processing unit (GPU) based on the NVIDIA® CUDA™ architecture. In addition, a multi-core CPU-based implementation of each algorithm was developed to be used as a baseline for speedup estimation. We evaluated the performance of the GPU-based implementations using an NVIDIA® Tesla® C1060 GPU card, and the detection accuracy of the implemented algorithms was evaluated using a set of phantom images simulating traces of different materials on clothing. We achieved a maximum speedup in the GPU implementations of around 20x over a multi-core CPU-based implementation, which suggests that applications for real-time detection of targets in HSI can greatly benefit from the performance of GPUs as processing hardware.
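    A simplified sketch of the first algorithm named above, the RX anomaly detector: each pixel's spectrum is scored by its Mahalanobis-like distance from the background mean. For brevity this uses a diagonal covariance (per-band variances); the full RX detector uses the complete covariance matrix. Spectra are toy two-band data.

```python
import statistics

# Diagonal-covariance RX-style anomaly score for each pixel spectrum.
def rx_scores(pixels):
    bands = list(zip(*pixels))                       # transpose: per-band value lists
    means = [statistics.mean(b) for b in bands]
    varis = [statistics.pvariance(b) or 1e-9 for b in bands]
    return [sum((v - m) ** 2 / s for v, m, s in zip(px, means, varis)) for px in pixels]

background = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9], [1.0, 2.1]]
scene = background + [[5.0, 0.5]]    # last pixel is a spectrally anomalous target
scores = rx_scores(scene)            # the anomaly receives the highest score
```

On a GPU, the per-pixel scoring loop is the part that parallelizes trivially, which is why the paper reports large speedups for this family of detectors.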

  12. QRS detection using K-Nearest Neighbor algorithm (KNN) and evaluation on standard ECG databases.

    PubMed

    Saini, Indu; Singh, Dilbag; Khosla, Arun

    2013-07-01

    The performance of computer-aided ECG analysis depends on the precise and accurate delineation of QRS complexes. This paper presents an application of the K-Nearest Neighbor (KNN) algorithm as a classifier for the detection of QRS complexes in ECG. The proposed algorithm is evaluated on two manually annotated standard databases, the CSE and MIT-BIH Arrhythmia databases. In this work, a digital band-pass filter is used to reduce false detection caused by interference present in the ECG signal, and the gradient of the signal is used as a feature for QRS detection. The accuracy of a KNN-based classifier is largely dependent on the value of K and the type of distance metric; a value of K = 3 and the Euclidean distance metric were chosen for the KNN classifier using fivefold cross-validation. Detection rates of 99.89% and 99.81% are achieved for the CSE and MIT-BIH databases, respectively. The QRS detector obtained a sensitivity Se = 99.86% and specificity Sp = 99.86% for the CSE database, and Se = 99.81% and Sp = 99.86% for the MIT-BIH Arrhythmia database. A comparison is also made between the proposed algorithm and other published work using the CSE and MIT-BIH Arrhythmia databases. These results clearly establish the KNN algorithm as reliable and accurate for QRS detection.
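    A minimal KNN classifier in the configuration the paper proposes (K = 3, Euclidean metric). The feature vectors and labels below are toy gradient values standing in for the ECG-derived features.

```python
import math
from collections import Counter

# K-nearest-neighbor classification: majority vote among the K closest
# training samples under the Euclidean distance.
def knn_predict(train, query, k=3):
    nearest = sorted(train, key=lambda tl: math.dist(tl[0], query))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

train = [
    ([0.9, 0.8], "qrs"), ([1.0, 0.7], "qrs"), ([0.8, 0.9], "qrs"),
    ([0.1, 0.2], "noise"), ([0.0, 0.1], "noise"), ([0.2, 0.0], "noise"),
]
label = knn_predict(train, [0.85, 0.75])   # high-gradient sample: classified as QRS
```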

  13. An improved algorithm for polar cloud-base detection by ceilometer over the ice sheets

    NASA Astrophysics Data System (ADS)

    Van Tricht, K.; Gorodetskaya, I. V.; Lhermitte, S.; Turner, D. D.; Schween, J. H.; Van Lipzig, N. P. M.

    2014-05-01

    Optically thin ice and mixed-phase clouds play an important role in polar regions due to their radiative impact and effect on precipitation. Cloud-base heights can be detected by ceilometers, low-power backscatter lidars that run continuously and therefore have the potential to provide basic cloud statistics including cloud frequency, base height and vertical structure. The standard cloud-base detection algorithms of ceilometers are designed to detect optically thick liquid-containing clouds, while the detection of thin ice clouds requires an alternative approach. This paper presents the polar threshold (PT) algorithm that was developed to be sensitive to optically thin hydrometeor layers (minimum optical depth τ ≥ 0.01). The PT algorithm detects the first hydrometeor layer in a vertical attenuated backscatter profile exceeding a predefined threshold, in combination with noise reduction and averaging procedures. The optimal backscatter threshold of 3 × 10⁻⁴ km⁻¹ sr⁻¹ for cloud-base detection near the surface was derived from a sensitivity analysis using data from Princess Elisabeth, Antarctica and Summit, Greenland. At higher altitudes, where the average noise level is higher than the backscatter threshold, the PT algorithm becomes signal-to-noise ratio driven. The algorithm defines cloudy conditions as any atmospheric profile containing a hydrometeor layer at least 90 m thick. A comparison with relative humidity measurements from radiosondes at Summit illustrates the algorithm's ability to significantly discriminate between clear-sky and cloudy conditions. Analysis of the cloud statistics derived from the PT algorithm indicates a year-round monthly mean cloud cover fraction of 72% (±10%) at Summit without a seasonal cycle. The occurrence of optically thick layers, indicating the presence of supercooled liquid water droplets, shows a seasonal cycle at Summit with a monthly mean summer peak of 40% (±4%). The monthly mean cloud occurrence frequency
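    A sketch of the PT algorithm's core rule as described above: scan a vertical attenuated-backscatter profile upward and report the base of the first layer that exceeds the threshold over at least 90 m of contiguous bins. The threshold and the 90 m minimum thickness come from the abstract; the 30 m bin spacing and the profile values are assumptions for illustration.

```python
THRESH = 3e-4            # km^-1 sr^-1, from the paper's sensitivity analysis
BIN_M = 30               # assumed vertical bin spacing in metres
MIN_BINS = 90 // BIN_M   # a layer must span at least 90 m

def cloud_base(profile):
    """Return the base height (m) of the first sufficiently thick layer, or None."""
    run_start = None
    for i, b in enumerate(profile + [0.0]):       # sentinel closes a trailing run
        if b > THRESH and run_start is None:
            run_start = i
        elif b <= THRESH and run_start is not None:
            if i - run_start >= MIN_BINS:
                return run_start * BIN_M
            run_start = None                      # too thin: noise spike, discard
    return None

# A single-bin noise spike (rejected) followed by a 90 m thick layer (accepted).
profile = [1e-5, 1e-5, 4e-4, 1e-5, 5e-4, 6e-4, 4e-4, 1e-5]
base = cloud_base(profile)
```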

  14. Detection algorithm for glass bottle mouth defect by continuous wavelet transform based on machine vision

    NASA Astrophysics Data System (ADS)

    Qian, Jinfang; Zhang, Changjiang

    2014-11-01

    An efficient algorithm based on the continuous wavelet transform combined with prior knowledge, which can be used to detect defects of the glass bottle mouth, is proposed. Firstly, under a ball integral light source, a perfect glass bottle mouth image is obtained by a Japanese Computar camera through an IEEE-1394b interface. A single-threshold method based on the gray-level histogram is used to obtain a binary image of the glass bottle mouth. To efficiently suppress noise, a moving average filter is employed to smooth the histogram of the original glass bottle mouth image, and the continuous wavelet transform is then applied to accurately determine the segmentation threshold. Mathematical morphology operations are used to obtain the normal binary bottle mouth mask. A glass bottle to be inspected is moved to the detection zone by a conveyor belt, and both its mouth image and binary image are obtained by the above method. The binary image is multiplied with the normal bottle mask to obtain a region of interest, from which four parameters (number of connected regions, coordinate of the centroid position, diameter of the inner circle, and area of the annular region) can be computed. Glass bottle mouth detection rules are designed from these four parameters so as to accurately detect and identify defect conditions. Finally, glass bottles from the Coca-Cola Company are used to verify the proposed algorithm. The experimental results show that the proposed algorithm can accurately detect the defect conditions of the glass bottles with 98% detection accuracy.
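    A sketch of the first of the four decision parameters listed above: counting connected regions in the masked binary image. An intact mouth yields a single annular region, while a chipped mouth breaks it into several. The grid below is toy data; 4-connectivity is an illustrative choice.

```python
# Count 4-connected foreground regions with an iterative flood fill.
def count_regions(img):
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] and not seen[sy][sx]:
                count += 1
                stack = [(sy, sx)]
                seen[sy][sx] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count

broken = [
    [1, 1, 0, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
]
regions = count_regions(broken)   # the ring is broken into three fragments
```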

  15. Enhanced detectability of small objects in correlated clutter using an improved 2-D adaptive lattice algorithm.

    PubMed

    Ffrench, P A; Zeidler, J H; Ku, W H

    1997-01-01

    Two-dimensional (2-D) adaptive filtering is a technique that can be applied to many image processing applications. This paper focuses on the development of an improved 2-D adaptive lattice algorithm (2-D AL) and its application to the removal of correlated clutter to enhance the detectability of small objects in images. The two improvements proposed here are increased flexibility in the calculation of the reflection coefficients and a 2-D method to update the correlations used in the 2-D AL algorithm. The 2-D AL algorithm is shown to predict correlated clutter in image data, and the resulting filter is compared with an ideal Wiener-Hopf filter. The results of the clutter removal are compared with previously published results for a 2-D least mean square (LMS) algorithm. 2-D AL is better able to predict spatially varying clutter than the 2-D LMS algorithm, since it converges faster to new image properties. Examples of these improvements are shown for a spatially varying 2-D sinusoid in white noise and simulated clouds. The 2-D LMS and 2-D AL algorithms are also shown to enhance a mammogram image for the detection of small microcalcifications and stellate lesions.
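    A minimal 2-D LMS clutter predictor, the baseline the paper compares against, can be sketched as follows; the 2-D AL update itself is more involved, and the neighborhood and step size here are illustrative.

```python
import numpy as np

def lms_2d(img, mu=1e-3):
    """Minimal 2-D LMS predictor: each pixel is predicted from a 4-pixel
    causal neighborhood in raster order; the prediction error (the whitened
    residual) is returned as the clutter-removed image."""
    h, w = img.shape
    weights = np.zeros(4)
    err = np.zeros_like(img, dtype=float)
    for r in range(1, h):
        for c in range(1, w - 1):
            x = np.array([img[r, c - 1], img[r - 1, c - 1],
                          img[r - 1, c], img[r - 1, c + 1]], float)
            e = img[r, c] - weights @ x
            weights += mu * e * x                 # LMS weight update
            err[r, c] = e
    return err

# correlated clutter (smooth sinusoidal background) plus one small object
rr, cc = np.mgrid[:64, :64]
scene = np.sin(rr / 6.0) + np.cos(cc / 8.0)
scene[40, 40] += 3.0                              # small object buried in clutter
residual = lms_2d(scene)
print(residual[40, 40] > np.abs(residual).mean())   # True: the object stands out
```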

  16. A study of algorithm to detect wildfire with edge of smoke plumes

    NASA Astrophysics Data System (ADS)

    Mototani, I.; Kimura, K.; Honma, T.

    2008-12-01

    In recent years, huge wildfires have occurred in many parts of the world, and research has proceeded on improving wildfire detection with satellite imagery. Dozier (1981) developed a method that detects hotspot pixels by comparing each pixel with its adjacent pixels. Threshold methods based on Dozier's approach and contextual methods using relationships among neighboring pixels followed, but each of these algorithms needs further improvement in accuracy. In this study, we formulate a new algorithm using the edges of smoke plumes, based on the rule that fire pixels coincide with the origins of smoke plumes, and validate it against ground-truth data. In this algorithm, MODIS band 1 (visible red) is extracted and the smoke plumes are accentuated by histogram stretching. The edges of the smoke plumes are then extracted, and the edge pixels belonging to each plume are fitted by the least-squares method. Finally, the origins of the smoke plumes are determined and fire pixels are detected by the threshold approach. Our method, however, has the problem that the hotspot area often takes a rectangular shape when the threshold temperature is not set high. Applying the algorithm, we find that fires are easy to detect when clouds are not too thick and the smoke shape is clearly visible. On the other hand, false alarms are detected along coastlines and in highly reflective areas such as glaciers and cirrocumulus clouds. In addition, excessive detections increase at low latitudes because the brightness temperature is raised by reflected sunlight. The wildfires in Alaska were detected well with our method. To validate this result, it was compared with observational data and a common detection method. The Alaska Fire History Data (AFHD) are compiled frequently by the Alaska Fire Service and offered as GIS data, while MOD14 is one of the best-known and most common wildfire detection methods, computed readily from MODIS data.
Its accuracy rate to detect fire
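    The origin-finding step, fitting each plume edge by least squares and intersecting the lines, can be sketched as follows; the edge sets and the function are illustrative, not the paper's implementation.

```python
import numpy as np

def plume_origin(edge1, edge2):
    """Fit a straight line to each smoke-plume edge by least squares and
    return the intersection point, taken as the candidate fire pixel."""
    (x1, y1), (x2, y2) = edge1, edge2
    b1, a1 = np.polyfit(x1, y1, 1)     # slope, intercept of edge 1
    b2, a2 = np.polyfit(x2, y2, 1)     # slope, intercept of edge 2
    x0 = (a2 - a1) / (b1 - b2)         # intersection of the two fitted lines
    return x0, a1 + b1 * x0

# two noisy plume edges diverging from a fire at (10, 20)
x = np.linspace(11, 30, 20)
edge_a = (x, 20 + 0.5 * (x - 10) + np.random.default_rng(0).normal(0, 0.05, x.size))
edge_b = (x, 20 - 0.3 * (x - 10) + np.random.default_rng(1).normal(0, 0.05, x.size))
print(plume_origin(edge_a, edge_b))    # ≈ (10, 20), the true origin
```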

  17. A new method for mesoscale eddy detection based on watershed segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Qin, Lijuan; Dong, Qing; Xue, Cunjin; Hou, Xueyan; Song, Wanjiao

    2014-11-01

    Mesoscale eddies are widely found in the ocean. They play important roles in heat transport, momentum transport, ocean circulation and so on. The automatic detection of mesoscale eddies from satellite remote sensing images is an important research topic. Image processing methods such as the Canny operator and the Hough transform have been applied to identify mesoscale eddies, but their detection accuracy has not been ideal. This paper describes a new algorithm based on the watershed segmentation algorithm for automatic detection of mesoscale eddies from sea level anomaly (SLA) images. Watershed segmentation suffers from over-segmentation, so it is important to select appropriate markers. In this study, markers were selected from the reconstructed SLA image and used to modify the gradient image. Two parameters, eddy radius and amplitude, were then used to filter the segmentation results. The method was tested on the Northwest Pacific using TOPEX/Poseidon altimeter data. The results are encouraging, showing that the algorithm is applicable to mesoscale eddies with good accuracy. The algorithm responds well to weak edges, and the extracted eddies have complete and continuous boundaries. The eddy boundaries generally coincide with closed contours of SSH.

  18. Machine learning algorithms for the prediction of conception success to a given insemination in lactating dairy cows.

    PubMed

    Hempstalk, K; McParland, S; Berry, D P

    2015-08-01

    The ability to accurately predict the conception outcome for a future mating would be of considerable benefit for producers in deciding what mating plan (i.e., expensive semen or less expensive semen) to implement for a given cow. The objective of the present study was to use herd- and cow-level factors to predict the likelihood of conception success to a given insemination (i.e., conception outcome not including embryo loss); of particular interest in the present study was the usefulness of milk mid-infrared (MIR) spectral data in augmenting the accuracy of the prediction model. A total of 4,341 insemination records with conception outcome information from 2,874 lactations on 1,789 cows from 7 research herds for the years 2009 to 2014 were available. The data set was separated into a calibration data set and a validation data set using either of 2 approaches: (1) the calibration data set contained records from all 7 farms for the years 2009 to 2011, inclusive, and the validation data set included data from the 7 farms for the years 2012 to 2014, inclusive, or (2) the calibration data set contained records from 5 farms for all 6 yr and the validation data set contained information from the other 2 farms for all 6 yr. The prediction models were developed with 8 different machine learning algorithms in the calibration data set using standard 10-times 10-fold cross-validation and also by evaluating in the validation data set. The area under the curve (AUC) of the receiver operating characteristic varied from 0.487 to 0.675 across the different algorithms and scenarios investigated. Logistic regression was generally the best-performing algorithm. The AUC was generally inferior for the external validation data sets compared with the calibration data sets. The inclusion of milk MIR in the prediction model generally did not improve the accuracy of prediction. Despite the fair AUC for predicting conception outcome under the different scenarios investigated, the model provided a
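    The AUC figures quoted above can be computed directly from the rank-sum (Mann-Whitney) identity, without tracing the ROC curve; this is a generic sketch, not the study's pipeline.

```python
import numpy as np

def auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney) identity; ties in the scores are
    ignored for simplicity."""
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

labels = np.array([1, 1, 0, 0, 1, 0])
scores = np.array([0.9, 0.7, 0.6, 0.2, 0.8, 0.4])
print(auc(labels, scores))   # 1.0: every positive outranks every negative
```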

  19. [A cloud detection algorithm for MODIS images combining Kmeans clustering and multi-spectral threshold method].

    PubMed

    Wang, Wei; Song, Wei-Guo; Liu, Shi-Xing; Zhang, Yong-Ming; Zheng, Hong-Yang; Tian, Wei

    2011-04-01

    An improved method for cloud detection combining K-means clustering and a multi-spectral threshold approach is described. On the basis of landmark spectral analysis, MODIS data are first categorized into two major classes by the K-means method. The first class includes clouds, smoke and snow, and the second class includes vegetation, water and land. A multi-spectral threshold detection is then applied to the first class to eliminate interference such as smoke and snow. The method was tested with MODIS data at different times under different underlying surface conditions. Visual inspection of the performance shows that the algorithm can effectively detect small areas of cloud pixels and exclude interference from the underlying surface, which provides a good foundation for a subsequent fire detection approach.
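    The two-stage idea can be sketched as follows; the bands, thresholds and synthetic pixels are illustrative stand-ins for the MODIS channels used in the paper.

```python
import numpy as np

def kmeans2(X, iters=20):
    """Plain two-cluster k-means, a minimal stand-in for the first stage,
    initialised deterministically from the extreme brightness temperatures."""
    centers = np.stack([X[X[:, 1].argmin()], X[X[:, 1].argmax()]])
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for k in range(2):
            if (assign == k).any():
                centers[k] = X[assign == k].mean(axis=0)
    return assign, centers

# synthetic 2-band pixels (reflectance, brightness temperature in K):
# a bright/cold class (cloud, snow, smoke) and a dark/warm surface class
rng = np.random.default_rng(1)
bright = rng.normal([0.8, 240.0], [0.05, 5.0], (200, 2))
dark = rng.normal([0.2, 290.0], [0.05, 5.0], (200, 2))
X = np.vstack([bright, dark])
assign, centers = kmeans2(X)

# second stage: inside the bright/cold cluster, an illustrative
# brightness-temperature threshold separates cloud from warmer interference
cold = int(centers[:, 1].argmin())
cloud = (assign == cold) & (X[:, 1] < 260.0)
print(cloud.sum())   # ≈ 200: the synthetic cloud pixels
```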

  20. Automatic face detection and tracking based on Adaboost with camshift algorithm

    NASA Astrophysics Data System (ADS)

    Lin, Hui; Long, JianFeng

    2011-10-01

    With the development of information technology, video surveillance is widely used in security monitoring and identity recognition. Because most pure face-tracking algorithms find it hard to specify the initial location and scale of a face automatically, this paper proposes a fast and robust method to detect and track faces by combining the AdaBoost and camshift algorithms. First, the location and scale of the face are determined by the AdaBoost algorithm based on Haar-like features and conveyed automatically to the initial search window. Then, the camshift algorithm is applied to track the face. Experiments based on OpenCV software yield good results, even in special circumstances such as changing illumination and rapid face movement. Besides, by plotting the tracking trajectory of the face movement, some abnormal behavior events can be analyzed.

  1. A hybrid algorithm for multiple change-point detection in continuous measurements

    NASA Astrophysics Data System (ADS)

    Priyadarshana, W. J. R. M.; Polushina, T.; Sofronov, G.

    2013-10-01

    Array comparative genomic hybridization (aCGH) is one of the techniques that can be used to detect copy number variations in DNA sequences. It has been identified that abrupt changes in the human genome play a vital role in the progression and development of many diseases. We propose a hybrid algorithm that utilizes both sequential techniques and the Cross-Entropy method to estimate the number of change-points as well as their locations in aCGH data. We applied the proposed hybrid algorithm to both artificially generated data and real data to illustrate the usefulness of the methodology. Our results show that the proposed algorithm is an effective method to detect multiple change-points in continuous measurements.
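    As a simpler stand-in for the hybrid sequential/Cross-Entropy search, recursive binary segmentation illustrates multiple change-point estimation on the same kind of piecewise-constant data (the gain threshold is illustrative).

```python
import numpy as np

def best_split(x):
    """Best single change point: the split that most reduces the total
    within-segment sum of squares."""
    n, best, arg = len(x), 0.0, None
    total = ((x - x.mean()) ** 2).sum()
    for t in range(2, n - 1):
        cost = ((x[:t] - x[:t].mean()) ** 2).sum() + ((x[t:] - x[t:].mean()) ** 2).sum()
        if total - cost > best:
            best, arg = total - cost, t
    return arg, best

def binary_segmentation(x, min_gain=5.0, offset=0):
    """Recursive binary segmentation: keep splitting while the gain exceeds
    min_gain (an illustrative threshold), collecting change points."""
    t, gain = best_split(x)
    if t is None or gain < min_gain:
        return []
    return (binary_segmentation(x[:t], min_gain, offset)
            + [offset + t]
            + binary_segmentation(x[t:], min_gain, offset + t))

# piecewise-constant signal: mean 0 -> 2 -> -1 with changes at 50 and 90
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 0.3, 50), rng.normal(2, 0.3, 40), rng.normal(-1, 0.3, 60)])
print(binary_segmentation(x))   # change points close to [50, 90]
```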

  2. Minimal time change detection algorithm for reconfigurable control system and application to aerospace

    NASA Technical Reports Server (NTRS)

    Kim, Sungwan

    1994-01-01

    System parameters should be tracked on-line to build a reconfigurable control system, even when an abrupt change occurs. For this purpose, a new performance index that we are studying is the speed of adaptation: how quickly does the system determine that a change has occurred? In this paper, a new, robust algorithm is proposed that is optimized to minimize the time delay in detecting a change for a fixed false alarm probability. Simulation results for the aircraft lateral motion with a known or unknown change in control gain matrices, in the presence of a doublet input, indicate that the algorithm works fairly well. One of its distinguishing properties is that its detection delay is superior to that of the whiteness test.
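    The performance index described here, minimal detection delay at a fixed false-alarm probability, is the classical change-detection trade-off; Page's CUSUM test is the textbook procedure that optimizes it (a generic illustration, not the paper's algorithm).

```python
import numpy as np

def cusum(x, target=0.0, shift=1.0, h=5.0):
    """Page's CUSUM: alarm as soon as the one-sided cumulative score exceeds
    h. For a fixed false-alarm rate (set by h) it minimizes the worst-case
    delay in detecting a mean shift of the given size."""
    g = 0.0
    for n, v in enumerate(x):
        g = max(0.0, g + (v - target) - shift / 2.0)   # drift by half the expected shift
        if g > h:
            return n                                    # alarm sample index
    return None

signal = np.concatenate([np.zeros(100), np.full(20, 2.0)])  # abrupt change at index 100
alarm = cusum(signal)
print(alarm)   # 103: a few samples after the change
```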

  3. Effective Echo Detection and Accurate Orbit Estimation Algorithms for Space Debris Radar

    NASA Astrophysics Data System (ADS)

    Isoda, Kentaro; Sakamoto, Takuya; Sato, Toru

    Orbit estimation of space debris, objects of no inherent value orbiting the earth, is important for avoiding collisions with spacecraft. The Kamisaibara Spaceguard Center radar system was built in 2004 as the first radar facility in Japan devoted to the observation of space debris. In order to detect smaller debris, coherent integration is effective in improving the SNR (signal-to-noise ratio). However, it is difficult to apply coherent integration to real data because the motions of the targets are unknown. An effective algorithm, which exploits the characteristics of the evaluation function, is proposed for echo detection and orbit estimation of faint echoes from space debris. Experiments show the proposed algorithm improves SNR by 8.32 dB and enables orbital parameters to be estimated accurately enough to allow re-tracking with a single radar.
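    The generic principle behind the quoted gain, coherently averaging N pulses improves SNR by a factor of N (10·log10 N dB), can be demonstrated as follows; the paper's 8.32 dB figure is specific to its detection scheme.

```python
import numpy as np

def snr_db(signal, noisy):
    """SNR of a noisy observation against the known clean signal, in dB."""
    noise = noisy - signal
    return 10 * np.log10((np.abs(signal) ** 2).mean() / (np.abs(noise) ** 2).mean())

rng = np.random.default_rng(0)
n_pulses, n_gates = 16, 512
echo = 0.1 * np.exp(2j * np.pi * 0.31 * np.arange(n_gates))   # weak deterministic echo
noise = (rng.normal(0, 1, (n_pulses, n_gates))
         + 1j * rng.normal(0, 1, (n_pulses, n_gates))) / np.sqrt(2)
pulses = echo + noise                                          # 16 noisy returns

single = snr_db(echo, pulses[0])
integrated = snr_db(echo, pulses.mean(axis=0))                 # coherent average
print(round(integrated - single, 1))                           # ≈ 12 dB gain for 16 pulses
```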

  4. A New Algorithm for Detection of Cloudiness and Moon Affect Area

    NASA Astrophysics Data System (ADS)

    Dindar, Murat; Helhel, Selcuk; Ünal Akdemir, Kemal

    2016-07-01

    Cloud detection is a crucial issue both for observatories already operating and during the site selection phase. Sky Quality Meter (SQM) devices are commonly used to determine sky quality parameters such as cloudiness and light flux, but those parameters do not give exact information about cloudiness and moon effects. In this study we developed a new algorithm for detecting cloudiness and the area affected by the Moon. The algorithm is based on image processing methods, and different approaches are applied to daytime and nighttime images to calculate the sky coverage. The new algorithm was implemented in MATLAB using images taken by the all-sky camera located at the TÜBİTAK National Observatory, and results are presented.

  5. Application of the TRUFAS detection algorithm to the first two runs of CoRoT

    NASA Astrophysics Data System (ADS)

    Régulo, Clara; Almenara, Jose M.; Deeg, Hans J.

    2009-02-01

    TRUFAS is a wavelet-based algorithm developed for the rapid detection of planetary transits in the framework of the CoRoT space mission. We present the application of this algorithm to the first two observing fields of CoRoT data, in which CoRoT observed a total of about 20,000 stars. The first CoRoT observing run, IRa01, covers 2 months, February and March 2007, and was followed by the 5-month long run LRc01. TRUFAS is a very fast algorithm delivering reliable detections. Here we show the results of applying TRUFAS to these first two data sets. In the first run, IRa01, TRUFAS found 10 planet candidates and 143 eclipsing binaries, and in LRc01, 10 planet candidates and 124 binaries, with processing that lasted only one night.

  6. MEMS-based sensing and algorithm development for fall detection and gait analysis

    NASA Astrophysics Data System (ADS)

    Gupta, Piyush; Ramirez, Gabriel; Lie, Donald Y. C.; Dallas, Tim; Banister, Ron E.; Dentino, Andrew

    2010-02-01

    Falls by the elderly are highly detrimental to health, frequently resulting in injury, high medical costs, and even death. Using a MEMS-based sensing system, algorithms are being developed for detecting falls and monitoring the gait of elderly and disabled persons. In this study, wireless sensors utilizing ZigBee protocols were incorporated into planar shoe insoles and a waist-mounted device. The insole contains four sensors to measure the pressure applied by the foot. A MEMS-based tri-axial accelerometer is embedded in the insert, and a second one is utilized by the waist-mounted device. The primary fall detection algorithm is derived from the waist accelerometer. The differential acceleration is calculated from samples received in 1.5 s time intervals, and provides the quantification via an energy index. From this index one may characterize different gaits and identify fall events. Once a pre-determined index threshold is exceeded, the algorithm classifies an event as a fall or a stumble. The secondary algorithm is derived from frequency analysis techniques: wavelet transforms are conducted on the waist accelerometer data, and the insole pressure data are then used to highlight discrepancies in the transforms, providing more accurate data for classifying gait and/or detecting falls. The range of the transform amplitude in the fourth iteration of a Daubechies-6 transform was found sufficient to detect and classify fall events.
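    The windowed differential-acceleration index can be sketched as follows; the sampling rate, threshold and synthetic fall are illustrative, as the paper's exact quantification is not given here.

```python
import numpy as np

def energy_index(acc, fs=50, window_s=1.5):
    """Illustrative variant of the waist-accelerometer index described above:
    max-minus-min of the acceleration magnitude over each 1.5 s window."""
    n = int(fs * window_s)
    mags = np.linalg.norm(acc, axis=1)
    k = len(mags) // n
    windows = mags[:k * n].reshape(k, n)
    return windows.max(axis=1) - windows.min(axis=1)

fs = 50
t = np.arange(0, 6, 1 / fs)
acc = np.tile([0.0, 0.0, 1.0], (len(t), 1))       # quiet standing: ~1 g on the z-axis
acc[:, 2] += 0.05 * np.sin(2 * np.pi * 1.5 * t)   # gentle sway
acc[150:155] = [0.0, 0.0, 3.0]                    # brief ~3 g spike: simulated impact

idx = energy_index(acc, fs)
print(np.where(idx > 1.0)[0])   # the window containing the simulated fall
```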

  7. Credit card fraud detection: An application of the gene expression messy genetic algorithm

    SciTech Connect

    Kargupta, H.; Gattiker, J.R.; Buescher, K.

    1996-05-01

    This paper describes an application of the recently introduced gene expression messy genetic algorithm (GEMGA) (Kargupta, 1996) for detecting fraudulent transactions of credit cards. It also explains the fundamental concepts underlying the GEMGA in the light of the SEARCH (Search Envisioned As Relation and Class Hierarchizing) (Kargupta, 1995) framework.

  8. Robust Mokken Scale Analysis by Means of the Forward Search Algorithm for Outlier Detection

    ERIC Educational Resources Information Center

    Zijlstra, Wobbe P.; van der Ark, L. Andries; Sijtsma, Klaas

    2011-01-01

    Exploratory Mokken scale analysis (MSA) is a popular method for identifying scales from larger sets of items. As with any statistical method, in MSA the presence of outliers in the data may result in biased results and wrong conclusions. The forward search algorithm is a robust diagnostic method for outlier detection, which we adapt here to…

  9. Sideband Algorithm for Automatic Wind Turbine Gearbox Fault Detection and Diagnosis: Preprint

    SciTech Connect

    Zappala, D.; Tavner, P.; Crabtree, C.; Sheng, S.

    2013-01-01

    Improving the availability of wind turbines (WT) is critical to minimize the cost of wind energy, especially for offshore installations. As gearbox downtime has a significant impact on WT availability, the development of reliable and cost-effective gearbox condition monitoring systems (CMS) is of great concern to the wind industry. Timely detection and diagnosis of developing gear defects within a gearbox is an essential part of minimizing unplanned downtime of wind turbines. Monitoring signals from WT gearboxes are highly non-stationary as turbine load and speed vary continuously with time. Time-consuming and costly manual handling of large amounts of monitoring data represents one of the main limitations of most current CMSs, so automated algorithms are required. This paper presents a fault detection algorithm for incorporation into a commercial CMS for automatic gear fault detection and diagnosis. The algorithm allowed the assessment of gear fault severity by tracking progressive tooth gear damage during variable speed and load operating conditions of the test rig. Results show that the proposed technique proves efficient and reliable for detecting gear damage. Once implemented into WT CMSs, this algorithm can automate data interpretation reducing the quantity of information that WT operators must handle.

  10. Development of Outlier detection Algorithm Applicable to a Korean Surge-Gauge

    NASA Astrophysics Data System (ADS)

    Lee, Jun-Whan; Park, Sun-Cheon; Lee, Won-Jin; Lee, Duk Kee

    2016-04-01

    The Korea Meteorological Administration (KMA) operates a surge gauge (aerial ultrasonic type) at Ulleung-do to monitor tsunamis, and the National Institute of Meteorological Sciences (NIMS), KMA is developing a tsunami detection and observation system using this surge gauge. Outliers, caused by transmission problems or by extreme events that change the water level temporarily, are one of the most common problems in tsunami detection. Unlike a spike, multipoint outliers are difficult to detect clearly. Most previous studies used statistical measures or signal processing methods such as wavelet transforms and filters to detect multipoint outliers, and assumed a continuous dataset. However, as the focus moves to near real-time operation with a dataset that contains gaps, these methods are no longer tenable. In this study, we developed an outlier detection algorithm applicable to the Ulleung-do surge gauge, where both multipoint outliers and missing data exist. Although only 9-point data and two arithmetic operations (plus and minus) are used, thanks to the newly developed keeping method the algorithm is not only simple and fast but also effective on a non-continuous dataset. We calibrated 17 thresholds and conducted performance tests using three months of data from the Ulleung-do surge gauge. The results show that the newly developed despiking algorithm performs reliably in alleviating the outlier detection problem.
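    An illustrative 9-point despike in the spirit of the abstract can be written with additions and subtractions only; the operational algorithm, its keeping method and its 17 calibrated thresholds are not public, so the rule and threshold below are assumptions.

```python
import numpy as np

def despike(x, thresh, k=4):
    """Assumed rule: a sample is an outlier when it departs from all but at
    most one valid neighbour in its 9-point window by more than thresh,
    using additions and subtractions only. NaNs mark gaps in the record."""
    flags = np.zeros(len(x), bool)
    for i in range(len(x)):
        if np.isnan(x[i]):
            continue
        neigh = [x[j] for j in range(max(0, i - k), min(len(x), i + k + 1))
                 if j != i and not np.isnan(x[j])]
        if len(neigh) < 2:
            continue
        departs = sum(1 for v in neigh if abs(x[i] - v) > thresh)
        flags[i] = departs >= len(neigh) - 1
    return flags

level = np.full(60, 100.0)
level[20:22] = 180.0     # two-point outlier, e.g. a transmission error
level[40] = np.nan       # missing sample: the record contains gaps
print(np.where(despike(level, thresh=30.0))[0])   # [20 21]
```

Unlike a single-spike test, the all-but-one rule above still fires on a two-point outlier, since each outlier sample agrees with only one neighbour.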

  11. Detectability Thresholds and Optimal Algorithms for Community Structure in Dynamic Networks

    NASA Astrophysics Data System (ADS)

    Ghasemian, Amir; Zhang, Pan; Clauset, Aaron; Moore, Cristopher; Peel, Leto

    2016-07-01

    The detection of communities within a dynamic network is a common means for obtaining a coarse-grained view of a complex system and for investigating its underlying processes. While a number of methods have been proposed in the machine learning and physics literature, we lack a theoretical analysis of their strengths and weaknesses, or of the ultimate limits on when communities can be detected. Here, we study the fundamental limits of detecting community structure in dynamic networks. Specifically, we analyze the limits of detectability for a dynamic stochastic block model where nodes change their community memberships over time, but where edges are generated independently at each time step. Using the cavity method, we derive a precise detectability threshold as a function of the rate of change and the strength of the communities. Below this sharp threshold, we claim that no efficient algorithm can identify the communities better than chance. We then give two algorithms that are optimal in the sense that they succeed all the way down to this threshold. The first uses belief propagation, which gives asymptotically optimal accuracy, and the second is a fast spectral clustering algorithm, based on linearizing the belief propagation equations. These results extend our understanding of the limits of community detection in an important direction, and introduce new mathematical tools for similar extensions to networks with other types of auxiliary information.

  12. An improved algorithm for cloud base detection by ceilometer over the ice sheets

    NASA Astrophysics Data System (ADS)

    Van Tricht, K.; Gorodetskaya, I. V.; Lhermitte, S.; Turner, D. D.; Schween, J. H.; Van Lipzig, N. P. M.

    2013-11-01

    Optically thin ice clouds play an important role in polar regions due to their effect on cloud radiative impact and precipitation on the surface. Cloud bases can be detected by lidar-based ceilometers that run continuously and therefore have the potential to provide basic cloud statistics including cloud frequency, base height and vertical structure. Despite their importance, thin clouds are not well detected by the standard cloud base detection algorithm of most ceilometers operational at Arctic and Antarctic stations. This paper presents the Polar Threshold (PT) algorithm that was developed to detect optically thin hydrometeor layers (optical depth τ ≥ 0.01). The PT algorithm detects the first hydrometeor layer in a vertical attenuated backscatter profile exceeding a predefined threshold in combination with noise reduction and averaging procedures. The optimal backscatter threshold of 3 × 10-4 km-1 sr-1 for cloud base detection was objectively derived based on a sensitivity analysis using data from Princess Elisabeth, Antarctica and Summit, Greenland. The algorithm defines cloudy conditions as any atmospheric profile containing a hydrometeor layer at least 50 m thick. A comparison with relative humidity measurements from radiosondes at Summit illustrates the algorithm's ability to significantly differentiate between clear sky and cloudy conditions. Analysis of the cloud statistics derived from the PT algorithm indicates a year-round monthly mean cloud cover fraction of 72% at Summit without a seasonal cycle. The occurrence of optically thick layers, indicating the presence of supercooled liquid, shows a seasonal cycle at Summit with a monthly mean summer peak of 40%. The monthly mean cloud occurrence frequency in summer at Princess Elisabeth is 47%, which reduces to 14% for supercooled liquid cloud layers. Our analyses furthermore illustrate the importance of optically thin hydrometeor layers located near the surface for both sites, with 87% of all

  13. Classification and determination of alcohol in gasoline using NIR spectroscopy and the successive projections algorithm for variable selection

    NASA Astrophysics Data System (ADS)

    Ouyang, Aiguo; Liu, Jun

    2013-02-01

    A methodology for the classification and determination of alcohol (methanol/ethanol) in gasoline using near-infrared reflectance spectrometry and variable selection was proposed. Methanol gasoline and ethanol gasoline were prepared in the laboratory and gasoline (93#) was acquired from a local gas station. Partial least squares (PLS) multivariate calibrations were used to predict methanol/ethanol content. Principal component analysis was used for spectrum classification, obtaining a desirable classification accuracy. Using this strategy, it was feasible to classify alcohol gasoline rapidly. Concerning the multivariate calibration models, the results show that PLS, successive projections algorithm (SPA)-PLS and genetic algorithm (GA)-PLS models are good for predicting methanol and ethanol contents in gasoline; the respective root-mean-square errors of prediction were 0.216 (PLS), 0.163 (SPA-PLS) and 0.210 v/v% (GA-PLS) for methanol gasoline, corresponding to 0.348, 0.235 and 0.203 for ethanol gasoline. The results obtained in this investigation suggest that the proposed methodology is a promising alternative for the determination of alcohol content in gasoline.
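    The successive projections algorithm itself reduces to a short loop; this is a minimal sketch on synthetic columns, not the paper's calibration code.

```python
import numpy as np

def spa(X, k, start=0):
    """Minimal SPA: beginning at column `start`, repeatedly pick the column
    with the largest norm after projecting out the column picked last,
    yielding k variables with minimal collinearity."""
    W = X.astype(float)
    selected = [start]
    for _ in range(k - 1):
        ref = W[:, selected[-1]].copy()
        W = W - np.outer(ref, ref @ W) / (ref @ ref)   # orthogonal projection
        norms = np.linalg.norm(W, axis=0)
        norms[selected] = -1.0                         # never re-pick a column
        selected.append(int(norms.argmax()))
    return selected

# four synthetic "wavelength" columns: 1 nearly duplicates 0, 2 is
# independent, 3 mixes 0 and 2; SPA should avoid the near-duplicate
rng = np.random.default_rng(0)
a, b = rng.normal(size=50), rng.normal(size=50)
X = np.column_stack([a, a + 0.01 * rng.normal(size=50), b, a + b])
print(spa(X, 2))   # [0, 2] or [0, 3]: the collinear column 1 is never picked
```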

  14. A cloud detection algorithm using the downwelling infrared radiance measured by an infrared pyrometer of the ground-based microwave radiometer

    DOE PAGES

    Ahn, M. H.; Han, D.; Won, H. Y.; Morris, Victor R.

    2015-02-03

    For better utilization of the ground-based microwave radiometer, it is important to detect the cloud presence in the measured data. Here, we introduce a simple and fast cloud detection algorithm by using the optical characteristics of the clouds in the infrared atmospheric window region. The new algorithm utilizes the brightness temperature (Tb) measured by an infrared radiometer installed on top of a microwave radiometer. The two-step algorithm consists of a spectral test followed by a temporal test. The measured Tb is first compared with a predicted clear-sky Tb obtained by an empirical formula as a function of surface air temperature and water vapor pressure. For the temporal test, the temporal variability of the measured Tb during one minute is compared with a dynamic threshold value, representing the variability of clear-sky conditions. It is designated as cloud-free data only when both the spectral and temporal tests confirm cloud-free data. Overall, most of the thick and uniform clouds are successfully detected by the spectral test, while the broken and fast-varying clouds are detected by the temporal test. The algorithm is validated by comparison with the collocated ceilometer data for six months, from January to June 2013. The overall proportion of correctness is about 88.3% and the probability of detection is 90.8%, which are comparable with or better than those of previous similar approaches. Two thirds of discrepancies occur when the new algorithm detects clouds while the ceilometer does not, resulting in different values of the probability of detection with different cloud-base altitude, 93.8, 90.3, and 82.8% for low, mid, and high clouds, respectively. Finally, due to the characteristics of the spectral range, the new algorithm is found to be insensitive to the presence of inversion layers.
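    The two-step test can be sketched as follows; since the empirical clear-sky formula and the dynamic threshold are site-specific, a precomputed clear-sky Tb and fixed illustrative thresholds stand in for them.

```python
import numpy as np

def cloud_mask(tb, tb_clear, margin=5.0, var_threshold=0.2, n_per_min=6):
    """A sample is cloud-free only if it passes both the spectral test
    (measured Tb close to the clear-sky prediction) and the temporal test
    (low Tb variability within one minute). Thresholds are illustrative."""
    k = len(tb) // n_per_min
    tb = tb[:k * n_per_min]
    spectral_clear = tb < tb_clear[:k * n_per_min] + margin
    blocks = tb.reshape(k, n_per_min)
    variability = np.repeat(blocks.max(axis=1) - blocks.min(axis=1), n_per_min)
    temporal_clear = variability < var_threshold
    return ~(spectral_clear & temporal_clear)      # True where cloud is flagged

# synthetic record at 6 samples/min: clear sky (230 K), a warm uniform thick
# cloud (265 K), then broken cloud near clear-sky Tb but fast-varying
tb = np.concatenate([np.full(60, 230.0),
                     np.full(60, 265.0),
                     230.0 + 8.0 * (np.arange(60) % 2)])
tb_clear = np.full_like(tb, 230.0)
mask = cloud_mask(tb, tb_clear)
print(mask[:60].any(), mask[60:120].all(), mask[120:].all())   # False True True
```

The thick uniform cloud is caught by the spectral test alone, while the broken cloud, whose Tb repeatedly returns to the clear-sky value, is caught by the temporal test, mirroring the division of labor described in the abstract.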

  15. A cloud detection algorithm using the downwelling infrared radiance measured by an infrared pyrometer of the ground-based microwave radiometer

    NASA Astrophysics Data System (ADS)

    Ahn, M.-H.; Han, D.; Won, H. Y.; Morris, V.

    2015-02-01

    For better utilization of the ground-based microwave radiometer, it is important to detect the cloud presence in the measured data. Here, we introduce a simple and fast cloud detection algorithm by using the optical characteristics of the clouds in the infrared atmospheric window region. The new algorithm utilizes the brightness temperature (Tb) measured by an infrared radiometer installed on top of a microwave radiometer. The two-step algorithm consists of a spectral test followed by a temporal test. The measured Tb is first compared with a predicted clear-sky Tb obtained by an empirical formula as a function of surface air temperature and water vapor pressure. For the temporal test, the temporal variability of the measured Tb during one minute compares with a dynamic threshold value, representing the variability of clear-sky conditions. It is designated as cloud-free data only when both the spectral and temporal tests confirm cloud-free data. Overall, most of the thick and uniform clouds are successfully detected by the spectral test, while the broken and fast-varying clouds are detected by the temporal test. The algorithm is validated by comparison with the collocated ceilometer data for six months, from January to June 2013. The overall proportion of correctness is about 88.3% and the probability of detection is 90.8%, which are comparable with or better than those of previous similar approaches. Two thirds of discrepancies occur when the new algorithm detects clouds while the ceilometer does not, resulting in different values of the probability of detection with different cloud-base altitude, 93.8, 90.3, and 82.8% for low, mid, and high clouds, respectively. Finally, due to the characteristics of the spectral range, the new algorithm is found to be insensitive to the presence of inversion layers.

  16. A cloud detection algorithm using the downwelling infrared radiance measured by an infrared pyrometer of the ground-based microwave radiometer

    SciTech Connect

    Ahn, M. H.; Han, D.; Won, H. Y.; Morris, Victor R.

    2015-02-03

    For better utilization of the ground-based microwave radiometer, it is important to detect the cloud presence in the measured data. Here, we introduce a simple and fast cloud detection algorithm by using the optical characteristics of the clouds in the infrared atmospheric window region. The new algorithm utilizes the brightness temperature (Tb) measured by an infrared radiometer installed on top of a microwave radiometer. The two-step algorithm consists of a spectral test followed by a temporal test. The measured Tb is first compared with a predicted clear-sky Tb obtained by an empirical formula as a function of surface air temperature and water vapor pressure. For the temporal test, the temporal variability of the measured Tb during one minute compares with a dynamic threshold value, representing the variability of clear-sky conditions. It is designated as cloud-free data only when both the spectral and temporal tests confirm cloud-free data. Overall, most of the thick and uniform clouds are successfully detected by the spectral test, while the broken and fast-varying clouds are detected by the temporal test. The algorithm is validated by comparison with the collocated ceilometer data for six months, from January to June 2013. The overall proportion of correctness is about 88.3% and the probability of detection is 90.8%, which are comparable with or better than those of previous similar approaches. Two thirds of discrepancies occur when the new algorithm detects clouds while the ceilometer does not, resulting in different values of the probability of detection with different cloud-base altitude, 93.8, 90.3, and 82.8% for low, mid, and high clouds, respectively. Finally, due to the characteristics of the spectral range, the new algorithm is found to be insensitive to the presence of inversion layers.

  17. Effects of perceived efficacy and prospect of success on detection in the Guilty Actions Test.

    PubMed

    Zvi, Lisa; Nachson, Israel; Elaad, Eitan

    2015-01-01

    Two experiments were conducted in order to examine factors that might influence the motivation of guilty and informed innocent examinees to either cope or cooperate with the Guilty Actions Test. Guilty participants committed a mock-crime and informed innocent participants handled the critical items of the crime in an innocent context. In Experiment 1 the participants were led to believe that the prospects of being found innocent on the test were either high or low. In Experiment 2 the participants were led to believe that the test was either highly accurate or of questionable validity. Results indicated that for both guilty and informed innocent participants low prospects of success and low detection efficacy of the test were associated with enhanced physiological responses to the critical information, whereas high prospects of success and high detection efficacy were associated with attenuated physiological responses. Theoretical and practical implications of the results are discussed. PMID:25543067

  18. Fast multi-scale edge detection algorithm based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Zang, Jie; Song, Yanjun; Li, Shaojuan; Luo, Guoyun

    2011-11-01

    Traditional edge detection algorithms tend to amplify noise, introducing large errors and limiting their edge detection ability. Wavelet analysis adapts its resolution to the signal: for the low-frequency content of an image it reduces the time resolution, while for the high-frequency content it keeps high time resolution, reducing the frequency resolution to capture the transient characteristics of the signal. Because of this adaptivity, the wavelet transform can extract useful information from image edges. The transform operates at multiple scales, and each scale contributes its own edge information, hence the name multi-scale edge detection. In multi-scale edge detection, the original signal is first smoothed at different scales, and the mutations of the original signal are then detected through the first or second derivative of the smoothed signal; these mutations are the edges. Edge detection is thus equivalent to signal detection in different frequency bands after wavelet decomposition. This article uses this algorithm, which takes into account both the details and the profile of an image, to detect signal mutations at different scales, providing the edge information needed for image analysis, target recognition and machine vision, and achieving good results.
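
    A minimal numeric sketch of the multi-scale idea on a 1-D signal, using box smoothing as a stand-in for a true wavelet and a first-derivative mutation test; the scales and threshold here are illustrative assumptions, not values from the paper.

```python
def smooth(signal, scale):
    """Box-smooth a 1-D signal with half-window `scale` (edge-clamped)."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - scale), min(n, i + scale + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def multiscale_edges(signal, scales=(1, 2, 4), threshold=0.5):
    """Edges = positions where the first derivative of the smoothed signal
    exceeds `threshold` at EVERY scale (a simple stand-in for the
    wavelet-modulus criterion described above)."""
    edge_sets = []
    for s in scales:
        sm = smooth(signal, s)
        deriv = [abs(sm[i + 1] - sm[i]) for i in range(len(sm) - 1)]
        edge_sets.append({i for i, d in enumerate(deriv) if d > threshold})
    return sorted(set.intersection(*edge_sets))
```

    On a step signal, only positions around the jump survive at all scales; isolated single-sample noise fails the coarse-scale test and is rejected.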

  19. An accurate algorithm to match imperfectly matched images for lung tumor detection without markers.

    PubMed

    Rozario, Timothy; Bereg, Sergey; Yan, Yulong; Chiu, Tsuicheng; Liu, Honghuan; Kearney, Vasant; Jiang, Lan; Mao, Weihua

    2015-05-08

    In order to locate lung tumors on kV projection images without internal markers, digitally reconstructed radiographs (DRRs) are created and compared with projection images. However, lung tumors move constantly due to respiration, so their locations change on projection images while remaining static on DRRs. In addition, global image intensity discrepancies exist between DRRs and projections due to their different image orientations, scattering, and noise. This adversely affects comparison accuracy. A simple but efficient comparison algorithm is reported to match imperfectly matched projection images and DRRs. The kV projection images were matched with different DRRs in two steps. Preprocessing was performed in advance to generate two sets of DRRs. The tumors were removed from the planning 3D CT for a single phase of the planning 4D CT images using planning contours of tumors. DRRs of background and DRRs of tumors were generated separately for every projection angle. The first step was to match projection images with DRRs of background signals. This method divided global images into a matrix of small tiles, and similarities were evaluated by calculating the normalized cross-correlation (NCC) between corresponding tiles on projections and DRRs. The tile configuration (tile locations) was automatically optimized to keep the tumor within a single projection tile, the one that matched poorly with its corresponding DRR tile. A pixel-based linear transformation was determined by linear interpolation of the tile transformation results obtained during tile matching. The background DRRs were transformed to the projection image level and subtracted from it. The resulting subtracted image then contained only the tumor. The second step was to register DRRs of tumors to the subtracted image to locate the tumor. This method was successfully applied to kV fluoro images (about 1000 images) acquired on a Vero (BrainLAB) for dynamic tumor tracking in phantom studies. Radiation opaque markers were
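
    The per-tile similarity measure used above, normalized cross-correlation, can be written directly. This is the textbook NCC over flattened tile pixels, not the paper's full tile-configuration optimization; note that a constant intensity offset between the two tiles (as between DRRs and projections) does not change the score.

```python
def ncc(tile_a, tile_b):
    """Normalized cross-correlation between two equal-length pixel lists.

    Returns a value in [-1, 1]; values near 1 mean the tiles match well
    despite global intensity offsets or scalings."""
    n = len(tile_a)
    mean_a = sum(tile_a) / n
    mean_b = sum(tile_b) / n
    da = [a - mean_a for a in tile_a]
    db = [b - mean_b for b in tile_b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    return num / den if den else 0.0   # flat tiles score 0 by convention
```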

  20. A Novel Zero Velocity Interval Detection Algorithm for Self-Contained Pedestrian Navigation System with Inertial Sensors.

    PubMed

    Tian, Xiaochun; Chen, Jiabin; Han, Yongqiang; Shang, Jianyu; Li, Nan

    2016-01-01

    Zero velocity update (ZUPT) plays an important role in pedestrian navigation algorithms, with the premise that the zero velocity interval (ZVI) should be detected accurately and effectively. A novel adaptive ZVI detection algorithm based on a smoothed pseudo Wigner-Ville distribution to remove multiple frequencies intelligently (SPWVD-RMFI) is proposed in this paper. The algorithm adopts the SPWVD-RMFI method to extract the pedestrian gait frequency and to calculate the optimal ZVI detection threshold in real time by establishing the functional relationship between the thresholds and the gait frequency; the thresholds are then adjusted adaptively with gait frequency, improving ZVI detection precision. To put it into practice, a ZVI detection experiment is carried out; the results show that, compared with the traditional fixed-threshold ZVI detection method, the adaptive ZVI detection algorithm can effectively reduce the false and missed detection rates of ZVI, indicating that the novel algorithm has high detection precision and good robustness. Furthermore, pedestrian trajectory positioning experiments at different walking speeds are carried out to evaluate the influence of the novel algorithm on positioning precision. The results show that the ZVI detected by the adaptive ZVI detection algorithm achieves better performance in pedestrian trajectory calculation. PMID:27669266
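
    The core of any ZVI detector is flagging samples where inertial motion energy is low. The sketch below uses a sliding standard deviation of the acceleration magnitude against a threshold that depends linearly on gait frequency; the linear relation and all parameter values are assumptions standing in for the fitted threshold/gait-frequency function of the paper.

```python
def zero_velocity_intervals(accel_mag, window=5, base_thresh=0.3,
                            freq_gain=0.2, gait_freq=1.8):
    """Flag samples as zero-velocity when the local standard deviation of
    the acceleration magnitude falls below an adaptive threshold.

    base_thresh, freq_gain and gait_freq are illustrative placeholders."""
    thresh = base_thresh + freq_gain * gait_freq   # adaptive threshold
    flags = []
    half = window // 2
    for i in range(len(accel_mag)):
        lo, hi = max(0, i - half), min(len(accel_mag), i + half + 1)
        seg = accel_mag[lo:hi]
        mean = sum(seg) / len(seg)
        var = sum((a - mean) ** 2 for a in seg) / len(seg)
        flags.append(var ** 0.5 < thresh)
    return flags
```

    During stance the magnitude stays near gravity and is flagged as ZVI; during swing the oscillation pushes the local deviation above the threshold.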

  1. Unsupervised, low latency anomaly detection of algorithmically generated domain names by generative probabilistic modeling.

    PubMed

    Raghuram, Jayaram; Miller, David J; Kesidis, George

    2014-07-01

    We propose a method for detecting anomalous domain names, with focus on algorithmically generated domain names which are frequently associated with malicious activities such as fast flux service networks, particularly for bot networks (or botnets), malware, and phishing. Our method is based on learning a (null hypothesis) probability model based on a large set of domain names that have been white listed by some reliable authority. Since these names are mostly assigned by humans, they are pronounceable, and tend to have a distribution of characters, words, word lengths, and number of words that are typical of some language (mostly English), and often consist of words drawn from a known lexicon. On the other hand, in the present day scenario, algorithmically generated domain names typically have distributions that are quite different from that of human-created domain names. We propose a fully generative model for the probability distribution of benign (white listed) domain names which can be used in an anomaly detection setting for identifying putative algorithmically generated domain names. Unlike other methods, our approach can make detections without considering any additional (latency producing) information sources, often used to detect fast flux activity. Experiments on a publicly available, large data set of domain names associated with fast flux service networks show encouraging results, relative to several baseline methods, with higher detection rates and low false positive rates. PMID:25685511
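
    The flavor of such a generative null model can be illustrated with a character-bigram language model trained on a whitelist: human-chosen names reuse common letter transitions and score well, while algorithmically generated strings do not. This is a toy sketch (add-one smoothing, an assumed alphabet size of 40), far simpler than the full character/word/length model of the paper.

```python
import math

def train_bigram(names):
    """Fit a character-bigram model on a whitelist of benign domain names."""
    counts, totals = {}, {}
    for name in names:
        padded = "^" + name + "$"          # start/end markers
        for a, b in zip(padded, padded[1:]):
            counts[(a, b)] = counts.get((a, b), 0) + 1
            totals[a] = totals.get(a, 0) + 1
    return counts, totals

def log_likelihood_per_char(name, model, alphabet_size=40):
    """Average log-probability per transition under the null model;
    algorithmically generated names tend to score much lower."""
    counts, totals = model
    padded = "^" + name + "$"
    ll = 0.0
    for a, b in zip(padded, padded[1:]):
        c = counts.get((a, b), 0) + 1                  # add-one smoothing
        t = totals.get(a, 0) + alphabet_size
        ll += math.log(c / t)
    return ll / (len(padded) - 1)
```

    An anomaly detector would threshold this score: names far below the typical whitelist score are flagged as putative algorithmically generated domains.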

  2. A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors.

    PubMed

    Acevedo-Avila, Ricardo; Gonzalez-Mendoza, Miguel; Garcia-Garcia, Andres

    2016-01-01

    Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general purpose computers; while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data structure tree used to label blobs depending on their shape and node information. An example application showing the results of a blob detection co-processor has been built on a low-powered field programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general purpose application. As such, several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements; and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms. PMID:27240382
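
    For orientation, here is a generic single-raster-scan blob labeling routine using union-find over 4-connectivity. It is a stand-in for the paper's linked-list labeling scheme, not a reproduction of it; the memory behavior of the actual embedded design differs.

```python
def label_blobs(image):
    """Connected-component labeling (4-connectivity) of a binary image,
    scanning rows once and merging touching runs with union-find."""
    h, w = len(image), len(image[0])
    parent = {}

    def find(x):                       # path-halving union-find root lookup
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    labels = [[0] * w for _ in range(h)]
    next_label = 1
    for y in range(h):
        for x in range(w):
            if not image[y][x]:
                continue
            up = labels[y - 1][x] if y else 0
            left = labels[y][x - 1] if x else 0
            if up and left:
                labels[y][x] = find(up)
                parent[find(left)] = find(up)   # merge the two runs
            elif up or left:
                labels[y][x] = find(up or left)
            else:
                parent[next_label] = next_label
                labels[y][x] = next_label
                next_label += 1
    # Resolve merged labels to their final root.
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels

def count_blobs(image):
    labels = label_blobs(image)
    return len({l for row in labels for l in row if l})
```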

  3. A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors

    PubMed Central

    Acevedo-Avila, Ricardo; Gonzalez-Mendoza, Miguel; Garcia-Garcia, Andres

    2016-01-01

    Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general purpose computers; while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data structure tree used to label blobs depending on their shape and node information. An example application showing the results of a blob detection co-processor has been built on a low-powered field programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general purpose application. As such, several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements; and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms. PMID:27240382

  4. A hyperspectral imagery anomaly detection algorithm based on local three-dimensional orthogonal subspace projection

    NASA Astrophysics Data System (ADS)

    Zhang, Xing; Wen, Gongjian

    2015-10-01

    Anomaly detection (AD) becomes increasingly important in hyperspectral imagery analysis, with many practical applications. The local orthogonal subspace projection (LOSP) detector is a popular anomaly detector which exploits local endmembers/eigenvectors around the pixel under test (PUT) to construct a background subspace. However, this subspace only takes advantage of the spectral information, while the spatial correlation of the background clutter is neglected, which makes the anomaly detection result sensitive to the accuracy of the estimated subspace. In this paper, a local three-dimensional orthogonal subspace projection (3D-LOSP) algorithm is proposed. Firstly, using both spectral and spatial information jointly, three directional background subspaces are created along the image height direction, the image width direction and the spectral direction, respectively. Then, the three corresponding orthogonal subspaces are calculated. After that, each vector of the local cube along the three directions is projected onto the corresponding orthogonal subspace. Finally, a composite score is given through the three directional operators. In 3D-LOSP, anomalies are redefined as targets that are not only spectrally different from the background but also spatially distinct. Thanks to the addition of the spatial information, the robustness of the anomaly detection result is greatly improved by the proposed 3D-LOSP algorithm. It is noteworthy that the proposed algorithm is an expansion of LOSP, and this idea can inspire many other spectral-based anomaly detection methods. Experiments with real hyperspectral images have demonstrated the stability of the detection result.
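
    The basic building block, projection onto the orthogonal complement of a background subspace, is easy to show for the rank-one case: the residual left after removing the background direction from a pixel vector. Real LOSP/3D-LOSP detectors use several background vectors per direction; this single-vector version is only a sketch.

```python
def osp_residual(pixel, background):
    """Project a pixel spectrum onto the orthogonal complement of a
    rank-one background subspace; a large residual norm signals an anomaly."""
    dot_bb = sum(b * b for b in background)
    dot_xb = sum(x * b for x, b in zip(pixel, background))
    coef = dot_xb / dot_bb                 # least-squares background fit
    residual = [x - coef * b for x, b in zip(pixel, background)]
    return sum(r * r for r in residual) ** 0.5
```

    A pixel that is a scaled copy of the background leaves zero residual, while a spectrally distinct pixel keeps most of its energy after projection.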

  5. Unsupervised, low latency anomaly detection of algorithmically generated domain names by generative probabilistic modeling

    PubMed Central

    Raghuram, Jayaram; Miller, David J.; Kesidis, George

    2014-01-01

    We propose a method for detecting anomalous domain names, with focus on algorithmically generated domain names which are frequently associated with malicious activities such as fast flux service networks, particularly for bot networks (or botnets), malware, and phishing. Our method is based on learning a (null hypothesis) probability model based on a large set of domain names that have been white listed by some reliable authority. Since these names are mostly assigned by humans, they are pronounceable, and tend to have a distribution of characters, words, word lengths, and number of words that are typical of some language (mostly English), and often consist of words drawn from a known lexicon. On the other hand, in the present day scenario, algorithmically generated domain names typically have distributions that are quite different from that of human-created domain names. We propose a fully generative model for the probability distribution of benign (white listed) domain names which can be used in an anomaly detection setting for identifying putative algorithmically generated domain names. Unlike other methods, our approach can make detections without considering any additional (latency producing) information sources, often used to detect fast flux activity. Experiments on a publicly available, large data set of domain names associated with fast flux service networks show encouraging results, relative to several baseline methods, with higher detection rates and low false positive rates. PMID:25685511

  6. Unsupervised, low latency anomaly detection of algorithmically generated domain names by generative probabilistic modeling.

    PubMed

    Raghuram, Jayaram; Miller, David J; Kesidis, George

    2014-07-01

    We propose a method for detecting anomalous domain names, with focus on algorithmically generated domain names which are frequently associated with malicious activities such as fast flux service networks, particularly for bot networks (or botnets), malware, and phishing. Our method is based on learning a (null hypothesis) probability model based on a large set of domain names that have been white listed by some reliable authority. Since these names are mostly assigned by humans, they are pronounceable, and tend to have a distribution of characters, words, word lengths, and number of words that are typical of some language (mostly English), and often consist of words drawn from a known lexicon. On the other hand, in the present day scenario, algorithmically generated domain names typically have distributions that are quite different from that of human-created domain names. We propose a fully generative model for the probability distribution of benign (white listed) domain names which can be used in an anomaly detection setting for identifying putative algorithmically generated domain names. Unlike other methods, our approach can make detections without considering any additional (latency producing) information sources, often used to detect fast flux activity. Experiments on a publicly available, large data set of domain names associated with fast flux service networks show encouraging results, relative to several baseline methods, with higher detection rates and low false positive rates.

  7. Application of Artificial Bee Colony algorithm in TEC seismo-ionospheric anomalies detection

    NASA Astrophysics Data System (ADS)

    Akhoondzadeh, M.

    2015-09-01

    In this study, the efficiency of the Artificial Bee Colony (ABC) algorithm is investigated for detecting TEC (Total Electron Content) seismo-ionospheric anomalies around the time of several strong earthquakes, including Chile (27 February 2010; 01 April 2014), Varzeghan (11 August 2012), Saravan (16 April 2013) and Papua New Guinea (29 March 2015). In comparison with other anomaly detection algorithms, ABC has a number of advantages: (1) detection of discord patterns in large nonlinear data within a short time, (2) simplicity, (3) fewer control parameters, and (4) efficiency in solving multimodal and multidimensional optimization problems. The results of this study also support the TEC time series as a robust earthquake precursor.

  8. A blind detection scheme based on modified wavelet denoising algorithm for wireless optical communications

    NASA Astrophysics Data System (ADS)

    Li, Ruijie; Dang, Anhong

    2015-10-01

    This paper investigates a detection scheme without channel state information for wireless optical communication (WOC) systems in turbulence-induced fading channels. The proposed scheme can effectively diminish the additive noise caused by background radiation and the photodetector, as well as the intensity scintillation caused by turbulence. The additive noise can be mitigated significantly using the modified wavelet threshold denoising algorithm, and the intensity scintillation can then be attenuated by exploiting the temporal correlation of the WOC channel. Moreover, to improve the performance beyond that of the maximum likelihood decision, the maximum a posteriori probability (MAP) criterion is considered. Compared with the conventional blind detection algorithm, simulation results show that the proposed detection scheme can improve the signal-to-noise ratio (SNR) performance by about 4.38 dB when the bit error rate and scintillation index (SI) are 1×10-6 and 0.02, respectively.
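
    The unmodified baseline of wavelet threshold denoising is small enough to show in full: one Haar decomposition level, soft-thresholding of the detail coefficients, and reconstruction. The threshold value is illustrative, and the paper's "modified" rule replaces the standard soft rule shown here.

```python
def soft_threshold(coeffs, t):
    """Standard soft rule: shrink each coefficient toward zero by t."""
    return [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in coeffs]

def haar_denoise(signal, t=0.5):
    """One-level Haar decomposition, soft-threshold the details, rebuild.

    `signal` must have even length; `t` is an illustrative threshold."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = soft_threshold(detail, t)
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out
```

    Small sample-to-sample fluctuations (detail coefficients below the threshold) are removed entirely, while large transitions are kept, only slightly shrunk.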

  9. Optimization of a Distributed Genetic Algorithm on a Cluster of Workstations for the Detection of Microcalcifications

    NASA Astrophysics Data System (ADS)

    Bevilacqua, A.; Campanini, R.; Lanconelli, N.

    We have developed a method for the detection of clusters of microcalcifications in digital mammograms. Here, we present a genetic algorithm used to optimize the choice of the parameters in the detection scheme. The optimization has allowed the improvement of the performance, the detailed study of the influence of the various parameters on the performance and an accurate investigation of the behavior of the detection method on unknown cases. We reach a sensitivity of 96.2% with 0.7 false positive clusters per image on the Nijmegen database; we are also able to identify the most significant parameters. In addition, we have examined the feasibility of a distributed genetic algorithm implemented on a non-dedicated Cluster Of Workstations. We get very good results both in terms of quality and efficiency.
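
    A genetic parameter search of the kind described above can be sketched generically: a population of real-valued parameter vectors evolved by selection, blend crossover and Gaussian mutation against a fitness function. The toy fitness below (a smooth peak) merely stands in for the detector's sensitivity/false-positive trade-off; all GA settings are illustrative.

```python
import random

def genetic_optimize(fitness, bounds, pop_size=20, generations=40, seed=1):
    """Minimal real-valued GA maximizing `fitness`.

    `bounds` is a list of (lo, hi) per parameter; selection is truncation
    to the top half, with two-parent blend crossover and Gaussian mutation."""
    rng = random.Random(seed)

    def clamp(v, lo, hi):
        return min(max(v, lo), hi)

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        new_pop = scored[:2]                               # elitism
        while len(new_pop) < pop_size:
            a, b = rng.sample(scored[:pop_size // 2], 2)   # parents from top half
            child = [(x + y) / 2 + rng.gauss(0, 0.1) for x, y in zip(a, b)]
            child = [clamp(v, lo, hi) for v, (lo, hi) in zip(child, bounds)]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)
```

    Distributing this over a cluster of workstations, as in the paper, typically means farming out the fitness evaluations, which dominate the run time.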

  10. Algorithm for detecting seam cracks in steel plates using a Gabor filter combination method.

    PubMed

    Choi, Doo-Chul; Jeon, Yong-Ju; Lee, Sang Jun; Yun, Jong Pil; Kim, Sang Woo

    2014-08-01

    Presently, product inspection based on vision systems is an important part of the steel-manufacturing industry. In this work, we focus on the detection of seam cracks in the edge region of steel plates. Seam cracks are generated in the vertical direction, and their widths range from 0.2 to 0.6 mm. Moreover, the gray values of seam cracks are only 20-30 gray levels lower than those of the neighboring surface. Owing to these characteristics, we propose a new algorithm for detecting seam cracks using a Gabor filter combination method. To enhance the performance, we extracted features of seam cracks and employed a support vector machine classifier. The experimental results show that the proposed algorithm is suitable for detecting seam cracks.
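
    The Gabor filters being combined are just windowed sinusoids; one such kernel can be generated directly. The size, wavelength, sigma and orientation values below are illustrative, not the tuned bank the paper combines.

```python
import math

def gabor_kernel(size=9, wavelength=4.0, sigma=2.0, theta=0.0):
    """Real part of a 2-D Gabor filter: a cosine carrier at orientation
    `theta` under a Gaussian envelope. Returns a size x size matrix."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            carrier = math.cos(2 * math.pi * xr / wavelength)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel
```

    Convolving the plate image with several kernels at different wavelengths and orientations, then combining the responses, accentuates narrow vertical structures such as the 0.2-0.6 mm seam cracks while suppressing the slowly varying surface.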

  11. A new algorithm for evaluating 3D curvature and curvature gradient for improved fracture detection

    NASA Astrophysics Data System (ADS)

    Di, Haibin; Gao, Dengliang

    2014-09-01

    In 3D seismic interpretation, both curvature and curvature gradient are useful seismic attributes for structure characterization and fault detection in the subsurface. However, the existing algorithms are computationally intensive and limited in lateral resolution for steeply dipping formations. This study presents new and robust volume-based algorithms that evaluate both curvature and curvature gradient attributes more accurately and effectively. The algorithms first fit a local surface to the seismic data and then compute attributes from the spatial derivatives of the fitted surface. Specifically, the curvature algorithm constructs a quadratic surface using a rectangular 9-node grid cell, whereas the curvature gradient algorithm builds a cubic surface using a diamond 13-node grid cell. A dip-steering approach based on 3D complex seismic trace analysis is implemented to enhance the accuracy of surface construction and to reduce computational time. Applications to two 3D seismic surveys demonstrate the accuracy and efficiency of the new curvature and curvature gradient algorithms for characterizing faults and fractures in fractured reservoirs.
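
    The quadratic-surface step can be sketched with central differences on a 3x3 node cell: fitting z = ax² + by² + cxy + dx + ey + f and evaluating the standard mean-curvature expression from the coefficients. This is a simplified stand-in for the paper's 9-node fit (no dip steering, unit grid spacing assumed).

```python
def mean_curvature(grid, dx=1.0):
    """Mean curvature from a 3x3 cell of surface values grid[row][col]
    (row = y, col = x), via central-difference coefficients of the local
    quadratic z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f."""
    z = grid
    d = (z[1][2] - z[1][0]) / (2 * dx)                        # dz/dx
    e = (z[2][1] - z[0][1]) / (2 * dx)                        # dz/dy
    a = (z[1][2] - 2 * z[1][1] + z[1][0]) / (2 * dx * dx)     # 0.5 * d2z/dx2
    b = (z[2][1] - 2 * z[1][1] + z[0][1]) / (2 * dx * dx)     # 0.5 * d2z/dy2
    c = (z[2][2] - z[2][0] - z[0][2] + z[0][0]) / (4 * dx * dx)  # d2z/dxdy
    num = a * (1 + e * e) + b * (1 + d * d) - c * d * e
    return num / (1 + d * d + e * e) ** 1.5
```

    A flat surface yields zero curvature; a bowl-shaped cell yields a positive value, and in a real survey the sign and magnitude map anticlines, synclines and flexures associated with faulting.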

  12. Automatic, Real-Time Algorithms for Anomaly Detection in High Resolution Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Srivastava, A. N.; Nemani, R. R.; Votava, P.

    2008-12-01

    Earth observing satellites are generating data at an unprecedented rate, surpassing almost all other data-intensive applications. However, most of the data that arrive from the satellites are not analyzed directly. Rather, multiple scientific teams analyze only a small fraction of the total data available in the data stream. Although there are many reasons for this situation, one paramount concern is developing algorithms and methods that can analyze the vast, high-dimensional, streaming satellite images. This paper describes a new set of methods that are among the fastest available algorithms for real-time anomaly detection. These algorithms were built to maximize accuracy and speed for a variety of applications in fields outside the earth sciences. However, our studies indicate that, with appropriate modifications, these algorithms can be extremely valuable for identifying anomalies rapidly using only modest computational power. We review two algorithms used as benchmarks in the field, Orca and one-class support vector machines, and discuss the anomalies that are discovered in MODIS data taken over the Central California region. We are especially interested in automatic identification of disturbances within ecosystems (e.g., wildfires, droughts, floods, insect/pest damage, wind damage, logging). We show the scalability of the algorithms and demonstrate that, with appropriately adapted technology, the dream of real-time analysis can be made a reality.

  13. Validation of vision-based obstacle detection algorithms for low-altitude helicopter flight

    NASA Astrophysics Data System (ADS)

    Suorsa, Raymond E.; Sridhar, Banavar

    1991-03-01

    The automation of rotorcraft low-altitude flight presents challenging problems in control, imaging sensors, and image understanding. A critical element of this problem is the ability to detect and locate obstacles in a helicopter's intended flight path and, using on-board sensors, to modify the nominal vehicle trajectory to avoid contact with the detected obstacles. This paper describes the validation facility being used at Ames to test vision-based obstacle detection and range estimation algorithms suitable for low-level helicopter flight, and presents some early validation results.

  14. Context exploitation in intelligence, surveillance, and reconnaissance for detection and tracking algorithms

    NASA Astrophysics Data System (ADS)

    Tucker, Jonathan D.; Stanfill, S. Robert

    2015-05-01

    Intelligence, Surveillance, and Reconnaissance (ISR) missions involve complex analysis of sensor data that can benefit from the exploitation of geographically aligned context. In this paper we discuss our approach to utilizing geo-registered imagery and context for the purpose of aiding ISR detection and tracking applications. Specifically this includes rendering context masks on imagery, increasing the speed at which detection algorithms process data, providing a way to intelligently control detection density for given ground areas, identifying difficult traffic terrain, refining peak suppression for congested areas, reducing target center of mass location errors, and increasing track coverage and duration through track prediction error robustness.

  15. Algorithms for detecting and predicting influenza outbreaks: metanarrative review of prospective evaluations

    PubMed Central

    Spreco, A; Timpka, T

    2016-01-01

    Objectives Reliable monitoring of influenza seasons and pandemic outbreaks is essential for response planning, but compilations of reports on detection and prediction algorithm performance in influenza control practice are largely missing. The aim of this study is to perform a metanarrative review of prospective evaluations of influenza outbreak detection and prediction algorithms, restricted to settings where authentic surveillance data have been used. Design The study was performed as a metanarrative review. An electronic literature search was performed, papers were selected, and qualitative and semiquantitative content analyses were conducted. For data extraction and interpretation, researcher triangulation was used for quality assurance. Results Eight prospective evaluations were found that used authentic surveillance data: three studies evaluating detection and five studies evaluating prediction. The methodological perspectives and experiences from the evaluations were found to have been reported in narrative formats representing biodefence informatics and health policy research, respectively. The biodefence informatics narrative, with its emphasis on verification of technically and mathematically sound algorithms, constituted a large part of the reporting. Four evaluations were reported as health policy research narratives, thus formulated in a manner that allows the results to qualify as policy evidence. Conclusions Awareness of the narrative format in which results are reported is essential when interpreting algorithm evaluations from an infectious disease control practice perspective. PMID:27154479

  16. Algorithms for detection of objects in image sequences captured from an airborne imaging system

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia; Tang, Yuan-Liang; Devadiga, Sadashiva; Gandhi, Tarak

    1995-01-01

    This research was initiated as part of the effort at the NASA Ames Research Center to design a computer vision based system that can enhance the safety of navigation by aiding pilots in detecting various obstacles on the runway during critical sections of the flight, such as a landing maneuver. The primary goal is the development of algorithms for detection of moving objects from a sequence of images obtained from an on-board video camera. Image regions corresponding to independently moving objects are segmented from the background by applying constraint filtering on the optical flow computed from the initial few frames of the sequence. These detected regions are tracked over subsequent frames using a model-based tracking algorithm. Position and velocity of the moving objects in world coordinates are estimated using an extended Kalman filter. The algorithms are tested using the NASA line image sequence with six static trucks and a simulated moving truck, and experimental results are described. Various limitations of the currently implemented version of the above algorithm are identified and possible solutions to build a practical working system are investigated.
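
    The Kalman-filter step can be illustrated in one dimension with a constant-velocity model and position-only measurements; this scalar sketch is only a simplified stand-in for the extended Kalman filter over world coordinates used above, with illustrative process and measurement noise values.

```python
def kalman_track(measurements, dt=1.0, q=0.01, r=1.0):
    """Constant-velocity Kalman filter on 1-D position measurements.

    State is (position x, velocity v); q and r are the (assumed) process
    and measurement noise variances. Returns (x, v) after each update."""
    x, v = measurements[0], 0.0
    P = [[1.0, 0.0], [0.0, 1.0]]               # state covariance
    estimates = []
    for z in measurements[1:]:
        # Predict with the constant-velocity model x' = x + v*dt.
        x, v = x + v * dt, v
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # Update with the position measurement z (H = [1, 0]).
        S = P[0][0] + r
        k0, k1 = P[0][0] / S, P[1][0] / S      # Kalman gain
        innov = z - x
        x, v = x + k0 * innov, v + k1 * innov
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        estimates.append((x, v))
    return estimates
```

    Fed a target moving at a constant rate, the velocity estimate converges toward the true rate even though only positions are observed.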

  17. Automatic target detection algorithm for foliage-penetrating ultrawideband SAR data using split spectral analysis

    NASA Astrophysics Data System (ADS)

    Damarla, Thyagaraju; Kapoor, Ravinder; Ressler, Marc A.

    1999-07-01

    We present an automatic target detection (ATD) algorithm for foliage-penetrating (FOPEN) ultra-wideband (UWB) synthetic aperture radar (SAR) data using split spectral analysis. Split spectral analysis is commonly used in the ultrasonic, non-destructive evaluation of materials using wideband pulses for flaw detection. In this paper, we show the application of split spectral analysis for detecting obscured targets in foliage using UWB pulse returns. To discriminate targets from foliage, the data spectrum is split into several bands, namely 20 to 75, 75 to 150, ..., 825 to 900 MHz. An ATD algorithm is developed based on the relative energy levels in the various bands, the number of bands containing significant energy (spread of energy), and chip size (number of crossrange and range bins). The algorithm is tested on FOPEN UWB SAR data of foliage and vehicles obscured by foliage collected at Aberdeen Proving Ground, MD. The paper presents the various split spectral parameters used in the algorithm and discusses the rationale for their use.
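
    Splitting a return's spectrum into bands and comparing their energies can be sketched with a direct DFT; the band edges and sample rate below are illustrative, not the 20-900 MHz plan above, and a real implementation would use an FFT.

```python
import cmath

def band_energies(signal, sample_rate, band_edges):
    """Energy of `signal` in each frequency band delimited by `band_edges`
    (in Hz, ascending), computed via a direct DFT over the one-sided
    spectrum. Returns one energy value per band."""
    n = len(signal)
    spectrum = []
    for k in range(n // 2 + 1):
        s = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
        spectrum.append(abs(s) ** 2)
    energies = [0.0] * (len(band_edges) - 1)
    for k, power in enumerate(spectrum):
        freq = k * sample_rate / n
        for i in range(len(band_edges) - 1):
            if band_edges[i] <= freq < band_edges[i + 1]:
                energies[i] += power
    return energies
```

    A detector in this spirit would flag chips whose energy is concentrated in few bands or distributed across bands in a target-like pattern, rather than the spread typical of foliage clutter.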

  18. Evaluation of automatic feature detection algorithms in EEG: application to interburst intervals.

    PubMed

    Chauvet, Pierre E; Tich, Sylvie Nguyen The; Schang, Daniel; Clément, Alain

    2014-11-01

    In this paper, we present a new method to compare and improve algorithms for feature detection in neonatal EEG. The method is based on the algorithm's ability to compute accurate statistics to predict the results of EEG visual analysis. This method is implemented inside a Java software package called EEGDiag, as part of an e-health Web portal dedicated to neonatal EEG. EEGDiag encapsulates a component-based implementation of the detection algorithms, called analyzers. Each analyzer is defined by a list of modules executed sequentially. As the libraries of modules are intended to be enriched by their users, we developed a process to evaluate the performance of new modules and analyzers using a database of expertized and categorized EEGs. The evaluation is based on the Davies-Bouldin index (DBI), which measures the quality of cluster separation, so as to ease the building of classifiers on risk categories. As a first application we tested this method on the detection of interburst intervals (IBI) using a database of 394 EEGs acquired from premature newborns. We defined a class of IBI detectors based on a threshold on the standard deviation over contiguous short time windows, inspired by previous work. We then determined which detector and which threshold values are best with respect to the DBI, as well as the robustness of this choice. This method allows us to make counter-intuitive choices, such as removing the 50 Hz (power supply) filter to save time. PMID:25212119
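
    The class of IBI detectors described, a standard-deviation threshold over contiguous short windows, can be sketched directly; the window length, threshold, and minimum interval length below are illustrative placeholders, not the values selected via the DBI in the paper.

```python
def interburst_intervals(eeg, window=10, threshold=5.0, min_len=20):
    """Return (start, end) sample ranges where the standard deviation over
    contiguous windows of `window` samples stays below `threshold` for at
    least `min_len` samples. Parameter values are illustrative."""
    flags = []
    for start in range(0, len(eeg), window):
        seg = eeg[start:start + window]
        mean = sum(seg) / len(seg)
        std = (sum((v - mean) ** 2 for v in seg) / len(seg)) ** 0.5
        flags.extend([std < threshold] * len(seg))
    # Group consecutive low-variance samples into intervals.
    intervals, start = [], None
    for i, quiet in enumerate(flags + [False]):
        if quiet and start is None:
            start = i
        elif not quiet and start is not None:
            if i - start >= min_len:
                intervals.append((start, i))
            start = None
    return intervals
```

    High-amplitude burst activity keeps the windowed deviation above the threshold, while the low-amplitude interburst tracing falls below it and is reported as an interval.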

  19. Unsupervised algorithms for intrusion detection and identification in wireless ad hoc sensor networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2009-05-01

    In previous work by the author, parameters across network protocol layers were selected as features in supervised algorithms that detect and identify certain intrusion attacks on wireless ad hoc sensor networks (WSNs) carrying multisensor data. The algorithms improved the residual performance of the intrusion prevention measures provided by any dynamic key-management schemes and trust models implemented among network nodes. The approach of this paper does not train algorithms on the signature of known attack traffic, but, instead, the approach is based on unsupervised anomaly detection techniques that learn the signature of normal network traffic. Unsupervised learning does not require the data to be labeled or to be purely of one type, i.e., normal or attack traffic. The approach can be augmented to add any security attributes and quantified trust levels, established during data exchanges among nodes, to the set of cross-layer features from the WSN protocols. A two-stage framework is introduced for the security algorithms to overcome the problems of input size and resource constraints. The first stage is an unsupervised clustering algorithm which reduces the payload of network data packets to a tractable size. The second stage is a traditional anomaly detection algorithm based on a variation of support vector machines (SVMs), whose efficiency is improved by the availability of data in the packet payload. In the first stage, selected algorithms are adapted to WSN platforms to meet system requirements for simple parallel distributed computation, distributed storage and data robustness. A set of mobile software agents, acting like an ant colony in securing the WSN, are distributed at the nodes to implement the algorithms. The agents move among the layers involved in the network response to the intrusions at each active node and trustworthy neighborhood, collecting parametric values and executing assigned decision tasks. 
This minimizes the need to move large amounts
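    The two-stage framework described above can be sketched compactly. In this illustrative stand-in (not the paper's implementation), stage 1 is a plain k-means reduction of packet-feature vectors into centroids, and stage 2 scores anomalies by distance to the nearest centroid, a simplification of the SVM-based second stage:

```python
import math
import random

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, iters=20, seed=0):
    # Stage 1: cluster packet-feature vectors; the centroids form a
    # tractable summary of normal traffic.
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: dist(p, centroids[j]))
            clusters[nearest].append(p)
        for j, c in enumerate(clusters):
            if c:  # keep the old centroid if a cluster empties out
                centroids[j] = tuple(sum(col) / len(c) for col in zip(*c))
    return centroids

def anomaly_scores(points, centroids):
    # Stage 2 stand-in: distance to the nearest "normal" centroid plays
    # the role of the SVM decision value (larger = more anomalous).
    return [min(dist(p, c) for c in centroids) for p in points]
```

    A feature vector far from every centroid of normal traffic receives a high score and would be passed to the SVM-style stage.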

  20. A discrete artificial bee colony algorithm for detecting transcription factor binding sites in DNA sequences.

    PubMed

    Karaboga, D; Aslan, S

    2016-01-01

    The great majority of biological sequences share significant similarity with other sequences as a result of evolutionary processes, and identifying these sequence similarities is one of the most challenging problems in bioinformatics. In this paper, we present a discrete artificial bee colony (ABC) algorithm, which is inspired by the intelligent foraging behavior of real honey bees, for the detection of highly conserved residue patterns or motifs within sequences. Experimental studies on three different data sets showed that the proposed discrete model, by adhering to the fundamental scheme of the ABC algorithm, produced competitive or better results than other metaheuristic motif discovery techniques. PMID:27173272
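    As a toy illustration of discrete, bee-style search for motif positions (not the authors' operators or benchmark data), the sketch below perturbs candidate start positions like employed bees and restarts stagnant solutions like scouts, scoring candidates by a simple column-consensus count:

```python
import random

def consensus_score(seqs, starts, w):
    # Bases matching the per-column majority across the aligned motifs.
    cols = zip(*(s[p:p + w] for s, p in zip(seqs, starts)))
    return sum(max(col.count(b) for b in "ACGT") for col in cols)

def abc_motif(seqs, w, food_sources=10, cycles=200, limit=20, seed=1):
    rng = random.Random(seed)
    def random_solution():
        return [rng.randrange(len(s) - w + 1) for s in seqs]
    sols = [random_solution() for _ in range(food_sources)]
    trials = [0] * food_sources
    best = max(sols, key=lambda s: consensus_score(seqs, s, w))
    for _ in range(cycles):
        for i in range(food_sources):
            cand = list(sols[i])
            j = rng.randrange(len(seqs))
            cand[j] = rng.randrange(len(seqs[j]) - w + 1)   # employed-bee move
            if consensus_score(seqs, cand, w) > consensus_score(seqs, sols[i], w):
                sols[i], trials[i] = cand, 0
            else:
                trials[i] += 1
                if trials[i] > limit:                       # scout bee: restart
                    sols[i], trials[i] = random_solution(), 0
        cycle_best = max(sols, key=lambda s: consensus_score(seqs, s, w))
        if consensus_score(seqs, cycle_best, w) > consensus_score(seqs, best, w):
            best = list(cycle_best)
    return best
```

    On sequences with a planted motif the search typically converges to the planted alignment, though like any stochastic search it carries no guarantee.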

  1. Advanced Oil Spill Detection Algorithms For Satellite Based Maritime Environment Monitoring

    NASA Astrophysics Data System (ADS)

    Radius, Andrea; Azevedo, Rui; Sapage, Tania; Carmo, Paulo

    2013-12-01

    In recent years, increasing pollution and the alarming deterioration of the sea's environmental health have led to a need for global monitoring capabilities, namely for marine environment management in terms of oil spill detection and identification of the suspected polluter. The sensitivity of Synthetic Aperture Radar (SAR) to different phenomena on the sea, especially oil spills and vessels, makes it a key instrument for global pollution monitoring. The performance of SAR in maritime pollution monitoring is being operationally explored by a set of service providers on behalf of the European Maritime Safety Agency (EMSA), which launched the CleanSeaNet (CSN) project, a pan-European satellite-based oil monitoring service, in 2007. EDISOFT, a CSN service provider from the beginning, is continuously investing in R&D activities that will ultimately lead to better algorithms and better performance in oil spill detection from SAR imagery. This strategy is being pursued through EDISOFT's participation in the FP7 EC Sea-U project and in the Automatic Oil Spill Detection (AOSD) ESA project. The Sea-U project aims to improve the current state of oil spill detection algorithms through maximization of informative content obtained with data fusion, the exploitation of different types of data/sensors, and the development of advanced image processing, segmentation and classification techniques. The AOSD project is closely related to the operational segment, because it focuses on automating the oil spill detection processing chain, integrating auxiliary data, such as wind information, together with image and geometry analysis techniques.
The synergy between these different objectives (R&D versus operational) allowed EDISOFT to develop oil spill detection software, that combines the operational automatic aspect, obtained through dedicated integration of the processing chain in the existing open source NEST

  2. A multi-objective discrete cuckoo search algorithm with local search for community detection in complex networks

    NASA Astrophysics Data System (ADS)

    Zhou, Xu; Liu, Yanheng; Li, Bin

    2016-03-01

    Detecting communities is a challenging task in network analysis. Solving the community detection problem with evolutionary algorithms has been an active topic in recent years. In this paper, a multi-objective discrete cuckoo search algorithm with local search (MDCL) for community detection is proposed. To the best of our knowledge, this is the first time the cuckoo search algorithm has been applied to community detection. Two objective functions, negative ratio association and ratio cut, are minimized; together these two functions break through the limitation of modularity. In the proposed algorithm, the nest-location updating strategy and abandon operator of the cuckoo search are redefined in discrete form. A local search strategy and a clone operator are proposed to obtain an optimal initial population. Experimental results on synthetic and real-world networks show that the proposed algorithm outperforms other algorithms and can discover higher-quality community structure without prior information.
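    The two objectives can be written down from their standard definitions (here each internal edge is counted twice, one common convention; the adjacency-set graph representation is assumed for illustration):

```python
def partition_objectives(adj, communities):
    # Negative ratio association rewards internally dense communities;
    # ratio cut penalizes edges leaving a community. MDCL minimizes both.
    neg_ratio_assoc = 0.0
    ratio_cut = 0.0
    for comm in communities:
        inside = sum(1 for u in comm for v in adj[u] if v in comm)      # each internal edge counted twice
        outside = sum(1 for u in comm for v in adj[u] if v not in comm)
        neg_ratio_assoc -= inside / len(comm)
        ratio_cut += outside / len(comm)
    return neg_ratio_assoc, ratio_cut
```

    For two triangles joined by a single edge, the natural split gives negative ratio association -4.0 and ratio cut 2/3, while lumping all nodes into one community zeroes the cut but worsens the association term, which is why both objectives are kept.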

  3. A new morphological anomaly detection algorithm for hyperspectral images and its GPU implementation

    NASA Astrophysics Data System (ADS)

    Paz, Abel; Plaza, Antonio

    2011-10-01

    Anomaly detection is considered a very important task for hyperspectral data exploitation. It is now routinely applied in many application domains, including defence and intelligence, public safety, precision agriculture, geology, or forestry. Many of these applications require timely responses for swift decisions which depend upon high computing performance of algorithm analysis. However, with the recent explosion in the amount and dimensionality of hyperspectral imagery, this problem calls for the incorporation of parallel computing techniques. In the past, clusters of computers have offered an attractive solution for fast anomaly detection in hyperspectral data sets already transmitted to Earth. However, these systems are expensive and difficult to adapt to on-board data processing scenarios, in which low-weight and low-power integrated components are essential to reduce mission payload and obtain analysis results in (near) real-time, i.e., at the same time as the data is collected by the sensor. An exciting new development in the field of commodity computing is the emergence of commodity graphics processing units (GPUs), which can now bridge the gap towards on-board processing of remotely sensed hyperspectral data. In this paper, we develop a new morphological algorithm for anomaly detection in hyperspectral images along with an efficient GPU implementation of the algorithm. The algorithm is implemented on latest-generation GPU architectures, and evaluated with regards to other anomaly detection algorithms using hyperspectral data collected by NASA's Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) over the World Trade Center (WTC) in New York, five days after the terrorist attacks that collapsed the two main towers in the WTC complex. The proposed GPU implementation achieves real-time performance in the considered case study.

  4. Assessment of an Automated Touchdown Detection Algorithm for the Orion Crew Module

    NASA Technical Reports Server (NTRS)

    Gay, Robert S.

    2011-01-01

    Orion Crew Module (CM) touchdown detection is critical to activating the post-landing sequence that safes the Reaction Control System (RCS) jets, ensures that the vehicle remains upright, and establishes communication with recovery forces. In order to accommodate safe landing of an unmanned vehicle or incapacitated crew, an onboard automated detection system is required. An Orion-specific touchdown detection algorithm was developed and evaluated to differentiate landing events from in-flight events. The proposed method will be used to initiate post-landing cutting of the parachute riser lines, to prevent CM rollover, and to terminate RCS jet firing prior to submersion. The RCS jets continue to fire until touchdown to maintain proper CM orientation with respect to the flight path and to limit impact loads, but have potentially hazardous consequences if submerged while firing. The time available after impact to cut risers and initiate the CM Up-righting System (CMUS) is measured in minutes, whereas the time from touchdown to RCS jet submersion is a function of descent velocity and sea state conditions, and is often less than one second. Evaluation of the detection algorithms was performed for in-flight events (e.g. descent under chutes) using high-fidelity rigid body analyses in the Decelerator Systems Simulation (DSS), whereas water impacts were simulated using a rigid finite element model of the Orion CM in LS-DYNA. Two touchdown detection algorithms were evaluated with various thresholds: acceleration-magnitude spike detection, and accumulated velocity change (over a given time window) spike detection. Data for both detection methods are acquired from an onboard Inertial Measurement Unit (IMU) sensor. The detection algorithms were tested with analytically generated in-flight and landing IMU data simulations. The acceleration spike detection proved to be faster while maintaining the desired safety margin. Time to RCS jet submersion was predicted analytically across a series of
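    Both candidate triggers are simple to state; the thresholds, window length, and sample rate below are illustrative, not flight values:

```python
def accel_spike_trigger(accels, threshold):
    # Acceleration-magnitude spike detection: trigger on the first
    # sample whose |a| exceeds the threshold.
    for i, a in enumerate(accels):
        if abs(a) >= threshold:
            return i
    return None

def delta_v_trigger(accels, dt, window, threshold):
    # Accumulated velocity-change detection: trigger when the integral
    # of |a| over a sliding window exceeds the threshold.
    for i in range(len(accels)):
        lo = max(0, i - window + 1)
        dv = sum(abs(a) for a in accels[lo:i + 1]) * dt
        if dv >= threshold:
            return i
    return None
```

    With a steady 1 g descent followed by an impact spike, the magnitude test fires on the first spike sample, while the delta-v test fires only once the window has accumulated enough velocity change, consistent with the spike test being the faster of the two.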

  5. Facilitation of Third-party Development of Advanced Algorithms for Explosive Detection Using Workshops and Grand Challenges

    SciTech Connect

    Martz, H E; Crawford, C R; Beaty, J S; Castanon, D

    2011-02-15

    The US Department of Homeland Security (DHS) has requirements for future explosive detection scanners that include dealing with a larger number of threats, higher probability of detection, lower false alarm rates and lower operating costs. One tactic that DHS is pursuing to achieve these requirements is to augment the capabilities of the established security vendors with third-party algorithm developers. The purposes of this presentation are to review DHS's objectives for involving third parties in the development of advanced algorithms and then to discuss how these objectives are achieved using workshops and grand challenges. Terrorists are still trying and they are getting more sophisticated. There is a need to increase the number of smart people working on homeland security. Augmenting capabilities and capacities of system vendors with third-parties is one tactic. Third parties can be accessed via workshops and grand challenges. Successes have been achieved to date. There are issues that need to be resolved to further increase third party involvement.

  6. Penalty Dynamic Programming Algorithm for Dim Targets Detection in Sensor Systems

    PubMed Central

    Huang, Dayu; Xue, Anke; Guo, Yunfei

    2012-01-01

    In order to detect and track multiple maneuvering dim targets in sensor systems, an improved dynamic programming track-before-detect algorithm (DP-TBD), called penalty DP-TBD (PDP-TBD), is proposed. The performance of the tracking techniques is used as feedback to the detection part. The feedback is constructed as a penalty term in the merit function; the penalty term is a function of the possible target state estimate, which can be obtained by the tracking methods. With this feedback, the algorithm combines traditional tracking techniques with DP-TBD and can be applied to simultaneously detect and track maneuvering dim targets. Meanwhile, a reasonable constraint, that a sensor measurement can originate from one target or from clutter, is proposed to minimize track separation. Thus, the algorithm can be used in multi-target situations with unknown target numbers. The efficiency and advantages of PDP-TBD compared with two existing methods are demonstrated by several simulations. PMID:22666074
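    A minimal one-dimensional sketch of the dynamic-programming recursion with a penalty term; here the penalty is simply the distance to an externally predicted trajectory, standing in for the tracker feedback described above, and all parameters are illustrative:

```python
def dp_tbd(frames, vmax=1, penalty_weight=0.0, predicted=None):
    # frames: list of scans, each a list of per-cell intensities.
    n = len(frames[0])
    merit = list(frames[0])
    back = []
    for t in range(1, len(frames)):
        new, ptr = [], []
        for i in range(n):
            lo, hi = max(0, i - vmax), min(n, i + vmax + 1)
            j = max(range(lo, hi), key=lambda k: merit[k])      # best reachable predecessor
            pen = penalty_weight * abs(i - predicted[t]) if predicted else 0.0
            new.append(frames[t][i] + merit[j] - pen)           # merit with penalty feedback
            ptr.append(j)
        merit = new
        back.append(ptr)
    end = max(range(n), key=lambda k: merit[k])
    track = [end]
    for ptr in reversed(back):
        track.append(ptr[track[-1]])
    track.reverse()
    return merit[end], track
```

    For a noiseless target moving one cell per frame, the recursion recovers the diagonal track with a merit equal to the summed intensities.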

  7. Identification and detection of gaseous effluents from hyperspectral imagery using invariant algorithms

    NASA Astrophysics Data System (ADS)

    O'Donnell, Erin M.; Messinger, David W.; Salvaggio, Carl; Schott, John R.

    2004-08-01

    The ability to detect and identify effluent gases is, and will continue to be, of great importance. This would aid not only in the regulation of pollutants but also in treaty enforcement and in monitoring the production of weapons. Given these applications, finding a way to remotely investigate a gaseous emission is highly desirable. This research utilizes hyperspectral imagery in the infrared region of the electromagnetic spectrum to evaluate an invariant method of detecting and identifying gases within a scene. The image is evaluated on a pixel-by-pixel basis and studied at the subpixel level. A library of target gas spectra is generated using a simple slab radiance model. This results in a more robust description of gas spectra, representative of real-world observations. This library is the subspace utilized by the detection and identification algorithms, and it is evaluated for the set of basis vectors that best spans it. The set of basis vectors is determined with the Lee algorithm, which implements the Maximum Distance Method (MaxD). A Generalized Likelihood Ratio Test (GLRT) determines whether or not a pixel contains the target, which can be either a single species or a combination of gases. Synthetically generated scenes are used for this research. This work evaluates whether the Lee invariant algorithm is effective for the gas detection and identification problem.

  8. Improving lesion detectability in PET imaging with a penalized likelihood reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Wangerin, Kristen A.; Ahn, Sangtae; Ross, Steven G.; Kinahan, Paul E.; Manjeshwar, Ravindra M.

    2015-03-01

    Ordered Subset Expectation Maximization (OSEM) is currently the most widely used image reconstruction algorithm for clinical PET. However, OSEM does not necessarily provide optimal image quality, and a number of alternative algorithms have been explored. We have recently shown that a penalized likelihood image reconstruction algorithm using the relative difference penalty, block sequential regularized expectation maximization (BSREM), achieves more accurate lesion quantitation than OSEM, and importantly, maintains acceptable visual image quality in clinical whole-body PET. The goal of this work was to evaluate lesion detectability with BSREM versus OSEM. We performed a two-alternative forced choice study using 81 patient datasets with lesions of varying contrast inserted into the liver and lung. At matched imaging noise, BSREM and OSEM showed equivalent detectability in the lungs, and BSREM outperformed OSEM in the liver. These results suggest that BSREM provides not only improved quantitation and clinically acceptable visual image quality as previously shown but also improved lesion detectability compared to OSEM. We then modeled this detectability study, applying both nonprewhitening (NPW) and channelized Hotelling (CHO) model observers to the reconstructed images. The CHO model observer showed good agreement with the human observers, suggesting that we can apply this model to future studies with varying simulation and reconstruction parameters.

  9. TRUFAS, a wavelet-based algorithm for the rapid detection of planetary transits

    NASA Astrophysics Data System (ADS)

    Régulo, C.; Almenara, J. M.; Alonso, R.; Deeg, H.; Roca Cortés, T.

    2007-06-01

    Aims: We describe a fast, robust and automatic detection algorithm, TRUFAS, and apply it to data expected from the CoRoT mission. Methods: The procedure proposed for the detection of planetary transits in light curves works in two steps: 1) a continuous wavelet transformation of the detrended light curve with subsequent selection of the optimum scale for transit detection, and 2) a period search in that selected wavelet transformation. The detrending of the light curves is based on Fourier filtering or a discrete wavelet transformation. TRUFAS requires the presence of at least 3 transit events in the data. Results: The proposed algorithm is shown to identify reliably and quickly the transits that had been included in a standard set of 999 light curves that simulate CoRoT data. Variations in the pre-processing of the light curves and in the selection of the scale of the wavelet transform have little effect on TRUFAS' results. Conclusions: TRUFAS is a robust and quick transit detection algorithm, especially well suited for the analysis of very large volumes of data from space or ground-based experiments, with durations long enough for the target planets to produce multiple transit events.
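    The two steps can be imitated in miniature. Assuming an already detrended light curve, a sliding box average stands in for the single-scale wavelet response, and the period is read off the spacing of at least three detected events; thresholds are illustrative:

```python
import statistics

def find_transits(flux, width, depth_sigma=4.0):
    # Single-scale stand-in for the optimum-scale wavelet response:
    # a sliding box average highlights transit-shaped dips.
    resp = [sum(flux[i:i + width]) / width for i in range(len(flux) - width + 1)]
    mu, sd = statistics.mean(resp), statistics.pstdev(resp)
    events, i = [], 0
    while i < len(resp):
        if resp[i] < mu - depth_sigma * sd:
            events.append(i)
            i += width          # skip the remainder of this dip
        else:
            i += 1
    return events

def period_from_events(events, min_transits=3, tol=1):
    # TRUFAS requires at least three transit events; with evenly
    # spaced dips the period is their common spacing.
    if len(events) < min_transits:
        return None
    gaps = [b - a for a, b in zip(events, events[1:])]
    if max(gaps) - min(gaps) <= tol:
        return sum(gaps) / len(gaps)
    return None
```

    Three equal dips spaced 100 samples apart yield three events and a recovered period of 100 samples; fewer than three events returns no period.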

  10. Detecting a Singleton Attractor in a Boolean Network Utilizing SAT Algorithms

    NASA Astrophysics Data System (ADS)

    Tamura, Takeyuki; Akutsu, Tatsuya

    The Boolean network (BN) is a mathematical model of genetic networks. It is known that detecting a singleton attractor, which is also called a fixed point, is NP-hard even for AND/OR BNs (i.e., BNs consisting of AND/OR nodes), where singleton attractors correspond to steady states. Though a naive algorithm can detect a singleton attractor for an AND/OR BN in O(n·2^n) time, no O((2-ε)^n) (ε > 0) time algorithm was known even for an AND/OR BN with non-restricted indegree, where n is the number of nodes in a BN. In this paper, we present an O(1.787^n) time algorithm for detecting a singleton attractor of a given AND/OR BN, along with related results. We also show that detection of a singleton attractor in a BN with maximum indegree two is NP-hard and can be polynomially reduced to a satisfiability problem.
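    The naive fixed-point check is straightforward to state; a minimal sketch (the small AND/OR network used in the example is ours, for illustration):

```python
from itertools import product

def singleton_attractors(n, functions):
    # Enumerate all 2^n states and keep the fixed points: states x with
    # f_i(x) = x_i for every node i. This is the naive check that the
    # paper's faster algorithm improves on.
    return [state for state in product((0, 1), repeat=n)
            if all(f(state) == state[i] for i, f in enumerate(functions))]

# Example AND/OR BN: x0' = x1 AND x2, x1' = x0 OR x2, x2' = x0 AND x1
example = [lambda x: x[1] & x[2], lambda x: x[0] | x[2], lambda x: x[0] & x[1]]
```

    For the example network, the only singleton attractors are the all-zero and all-one states.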

  11. Automatic metal parts inspection: Use of thermographic images and anomaly detection algorithms

    NASA Astrophysics Data System (ADS)

    Benmoussat, M. S.; Guillaume, M.; Caulier, Y.; Spinnler, K.

    2013-11-01

    A fully automatic approach based on induction thermography and detection algorithms is proposed to inspect industrial metallic parts containing different surface and sub-surface anomalies such as open cracks and open and closed notches of different sizes and depths. A practical experimental setup is developed in which lock-in and pulsed thermography (LT and PT, respectively) techniques are used to establish a dataset of thermal images for three different mockups. Data cubes are constructed by stacking up the temporal sequence of thermogram images. After reducing the dimension of the data space by means of denoising and dimensionality-reduction methods, anomaly detection algorithms are applied to the reduced data cubes. The dimensions of the reduced data spaces are calculated automatically with an arbitrary criterion. The results show that, when reduced data cubes are used, anomaly detection algorithms originally developed for hyperspectral data, the well-known Reed and Xiaoli Yu detector (RX) and the regularized adaptive RX (RARX), give good detection performance for both surface and sub-surface defects in a non-supervised way.
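    For context, the classic global RX score is the Mahalanobis distance of each pixel vector from the scene mean and covariance; a two-band sketch with the 2x2 covariance inverted in closed form (illustrative data layout, not the paper's processing chain):

```python
def rx_scores(pixels):
    # Global RX anomaly detector for two-band pixel vectors:
    # Mahalanobis distance from the scene mean and covariance.
    n = len(pixels)
    mx = sum(p[0] for p in pixels) / n
    my = sum(p[1] for p in pixels) / n
    sxx = sum((p[0] - mx) ** 2 for p in pixels) / n
    syy = sum((p[1] - my) ** 2 for p in pixels) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pixels) / n
    det = sxx * syy - sxy * sxy
    scores = []
    for x, y in pixels:
        dx, dy = x - mx, y - my
        # d^T C^{-1} d with C^{-1} = (1/det) * [[syy, -sxy], [-sxy, sxx]]
        scores.append((syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det)
    return scores
```

    With one extreme pixel appended to a benign background, that pixel receives the largest score in the scene.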

  12. Scalable Algorithms for Unsupervised Classification and Anomaly Detection in Large Geospatiotemporal Data Sets

    NASA Astrophysics Data System (ADS)

    Mills, R. T.; Hoffman, F. M.; Kumar, J.

    2015-12-01

    The increasing availability of high-resolution geospatiotemporal datasets from sources such as observatory networks, remote sensing platforms, and computational Earth system models has opened new possibilities for knowledge discovery and mining of ecological data sets fused from disparate sources. Traditional algorithms and computing platforms are impractical for the analysis and synthesis of data sets of this size; however, new algorithmic approaches that can effectively utilize the complex memory hierarchies and the extremely high levels of available parallelism in state-of-the-art high-performance computing platforms can enable such analysis. We describe some unsupervised knowledge discovery and anomaly detection approaches based on highly scalable parallel algorithms for k-means clustering and singular value decomposition, consider a few practical applications thereof to the analysis of climatic and remotely-sensed vegetation phenology data sets, and speculate on some of the new applications that such scalable analysis methods may enable.

  13. A consensus algorithm for approximate string matching and its application to QRS complex detection

    NASA Astrophysics Data System (ADS)

    Alba, Alfonso; Mendez, Martin O.; Rubio-Rincon, Miguel E.; Arce-Santana, Edgar R.

    2016-08-01

    In this paper, a novel algorithm for approximate string matching (ASM) is proposed. The novelty resides in the fact that, unlike most other methods, the proposed algorithm is not based on the Hamming or Levenshtein distances, but instead computes a score for each symbol in the search text based on a consensus measure. Those symbols with sufficiently high scores will likely correspond to approximate instances of the pattern string. To demonstrate the usefulness of the proposed method, it has been applied to the detection of QRS complexes in electrocardiographic signals with competitive results when compared against the classic Pan-Tompkins (PT) algorithm. The proposed method outperformed PT in 72% of the test cases, with no extra computational cost.
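    For contrast with the consensus-based scoring, a plain Hamming-threshold matcher, one of the classic ASM formulations the paper explicitly departs from, together with a simple per-symbol match score, can be sketched as:

```python
def position_scores(text, pattern):
    # Score each text position by the best match count among the
    # pattern alignments covering it -- a simplified per-symbol score,
    # not the authors' consensus measure.
    m = len(pattern)
    scores = [0] * len(text)
    for start in range(len(text) - m + 1):
        window_matches = sum(text[start + j] == pattern[j] for j in range(m))
        for j in range(m):
            scores[start + j] = max(scores[start + j], window_matches)
    return scores

def approx_find(text, pattern, max_mismatches=1):
    # Classic Hamming-distance ASM: report starts within the mismatch budget.
    m = len(pattern)
    return [i for i in range(len(text) - m + 1)
            if sum(text[i + j] != pattern[j] for j in range(m)) <= max_mismatches]
```

    On "abcXbcdabcd" with pattern "abcd", starts 0, 3 and 7 lie within one mismatch.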

  14. Radiation anomaly detection algorithms for field-acquired gamma energy spectra

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Sanjoy; Maurer, Richard; Wolff, Ron; Guss, Paul; Mitchell, Stephen

    2015-08-01

    The Remote Sensing Laboratory (RSL) is developing a tactical, networked radiation detection system that will be agile, reconfigurable, and capable of rapid threat assessment with a high degree of fidelity and certainty. Our design is driven by the needs of users such as law enforcement personnel who must make decisions by evaluating threat signatures in urban settings. The most efficient tool available to identify the nature of a threat object is real-time gamma spectroscopic analysis, as it is fast and has a very low probability of producing false-positive alarm conditions. Urban radiological searches are inherently challenged by the rapid and large spatial variation of background gamma radiation, the presence of benign radioactive materials in the form of naturally occurring radioactive materials (NORM), and shielded and/or masked threat sources. Multiple spectral anomaly detection algorithms have been developed by national laboratories and commercial vendors. For example, the Gamma Detector Response and Analysis Software (GADRAS), a one-dimensional deterministic radiation transport code capable of calculating gamma-ray spectra using physics-based detector response functions, was developed at Sandia National Laboratories. The nuisance-rejection spectral comparison ratio anomaly detection algorithm (NSCRAD), developed at Pacific Northwest National Laboratory, uses spectral comparison ratios to detect deviations from benign medical and NORM radiation sources and can work despite a strong presence of NORM and/or medical sources. RSL has developed its own wavelet-based gamma energy spectral anomaly detection algorithm, called WAVRAD. Test results and the relative merits of these different algorithms will be discussed and demonstrated.

  15. Shadow Detection from Very High Resolution Satellite Image Using Grabcut Segmentation and Ratio-Band Algorithms

    NASA Astrophysics Data System (ADS)

    Kadhim, N. M. S. M.; Mourshed, M.; Bray, M. T.

    2015-03-01

    Very-High-Resolution (VHR) satellite imagery is a powerful source of data for detecting and extracting information about urban constructions. Shadow in VHR satellite imagery provides vital information on urban construction forms, illumination direction, and the spatial distribution of objects, which can further the understanding of the built environment. However, to exploit shadows, the automated detection of shadows from images must be accurate. This paper reviews current automatic approaches that have been used for shadow detection from VHR satellite images and comprises two main parts. In the first part, shadow concepts are presented in terms of shadow appearance in VHR satellite imagery, current shadow detection methods, and the usefulness of shadow detection in urban environments. In the second part, we adopt two approaches considered state-of-the-art for shadow detection and segmentation, applied to WorldView-3 and Quickbird images. In the first approach, the ratios between the NIR and visible bands were computed on a pixel-by-pixel basis, which allows for disambiguation between shadows and dark objects. To obtain an accurate shadow candidate map, we further refine the shadow map after applying the ratio algorithm to the Quickbird image. The second selected approach is the GrabCut segmentation approach, whose performance is examined in detecting the shadow regions of urban objects using the true-colour image from WorldView-3. Further refinement was applied to attain a segmented shadow map. Although the detection of shadow regions is a very difficult task when they are derived from a VHR satellite image that comprises only the visible spectrum range (RGB true colour), the results demonstrate that shadow regions in the WorldView-3 image are reasonably separated from other objects by applying the GrabCut algorithm.
In addition, the derived shadow map from the Quickbird image indicates significant performance of
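    The core per-pixel test of the ratio approach can be sketched as follows; the direction of the test follows the common observation that shadow suppresses the NIR band more than dark sunlit objects do, and the threshold value is illustrative:

```python
def shadow_mask(nir, visible, threshold=0.6):
    # Flag a pixel as a shadow candidate when its NIR-to-visible ratio
    # falls below the threshold; eps guards against division by zero.
    eps = 1e-6
    return [[(n / (v + eps)) < threshold for n, v in zip(nrow, vrow)]
            for nrow, vrow in zip(nir, visible)]
```

    Dark but sunlit objects keep a comparatively high NIR response, so their ratio stays above the threshold and they are rejected.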

  16. A combined algorithm for T-wave alternans qualitative detection and quantitative measurement

    PubMed Central

    2013-01-01

    Background T-wave alternans (TWA) provides a noninvasive and clinically useful marker for the risk of sudden cardiac death (SCD). The most widely used current TWA detection algorithms work in two different domains: time and frequency. The disadvantage of the spectral analytical techniques is that they treat the alternans signal as a stationary wave with constant amplitude and phase, so they cannot detect non-stationary characteristics of the signal. The temporal-domain methods are sensitive to the alignment of the T-waves. In this study, we sought to develop a robust combined algorithm (CA) for assessing T-wave alternans that can qualitatively detect and quantitatively measure TWA in the time domain. Methods The T-wave sequences were extracted and the total energy of each T-wave within the specified time-frequency region was calculated. The rank-sum test was applied to the ranked energy sequences of the T-waves to detect TWA qualitatively. The ECG containing TWA was then analyzed quantitatively with a correlation method. Results Simulation tests showed a mean sensitivity of 91.2% in detecting TWA, and for SNRs of at least 30 dB the detection accuracy reached 100%. The clinical data experiment showed that results from this method and the spectral method had a correlation coefficient of 0.96. Conclusions A novel TWA analysis algorithm utilizing the wavelet transform and correlation technique is presented in this paper. TWA is not only correctly detected qualitatively via the energy values of the T-waves, but the alternans frequency and amplitude are also measured quantitatively in the temporal domain. PMID:23311454
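    The qualitative step can be sketched with a hand-rolled rank-sum statistic: beat-wise T-wave energies are interleaved into even and odd groups, and a U statistic far from its null midpoint indicates an ABAB energy pattern. The decision threshold here is illustrative; a real test would use the p-value of the U distribution:

```python
def mann_whitney_u(a, b):
    # Rank-sum U statistic of sample `a` within the pooled data,
    # with average ranks for ties (1-based ranks).
    pooled = sorted(a + b)
    def rank(v):
        lo = pooled.index(v)
        hi = lo + pooled.count(v)
        return (lo + 1 + hi) / 2
    r_a = sum(rank(v) for v in a)
    return r_a - len(a) * (len(a) + 1) / 2

def twa_detect(t_wave_energies, frac=0.5):
    # Qualitative TWA test: compare even-beat and odd-beat energies;
    # U near 0 or near n1*n2 means one group is systematically larger.
    even, odd = t_wave_energies[0::2], t_wave_energies[1::2]
    u = mann_whitney_u(even, odd)
    mid = len(even) * len(odd) / 2
    return abs(u - mid) >= frac * mid, u
```

    A strict 1-2-1-2 energy sequence is flagged as alternans, while a constant sequence is not.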

  17. An adaptive algorithm for the detection of microcalcifications in simulated low-dose mammography

    NASA Astrophysics Data System (ADS)

    Treiber, O.; Wanninger, F.; Führ, H.; Panzer, W.; Regulla, D.; Winkler, G.

    2003-02-01

    This paper uses the task of microcalcification detection as a benchmark problem to assess the potential for dose reduction in x-ray mammography. We present the results of a newly developed algorithm for detection of microcalcifications as a case study for a typical commercial film-screen system (Kodak Min-R 2000/2190). The first part of the paper deals with the simulation of dose reduction for film-screen mammography based on a physical model of the imaging process. Use of a more sensitive film-screen system is expected to result in additional smoothing of the image. We introduce two different models of that behaviour, called moderate and strong smoothing. We then present an adaptive, model-based microcalcification detection algorithm. Comparing detection results with ground-truth images obtained under the supervision of an expert radiologist allows us to establish the soundness of the detection algorithm. We measure the performance on the dose-reduced images in order to assess the loss of information due to dose reduction. It turns out that the smoothing behaviour has a strong influence on detection rates. For moderate smoothing, a dose reduction by 25% has no serious influence on the detection results, whereas a dose reduction by 50% already entails a marked deterioration of the performance. Strong smoothing generally leads to an unacceptable loss of image quality. The test results emphasize the impact of the more sensitive film-screen system and its characteristics on the problem of assessing the potential for dose reduction in film-screen mammography. The general approach presented in the paper can be adapted to fully digital mammography.

  18. Parallel computation safety analysis irradiation targets fission product molybdenum in neutronic aspect using the successive over-relaxation algorithm

    NASA Astrophysics Data System (ADS)

    Susmikanti, Mike; Dewayatna, Winter; Sulistyo, Yos

    2014-09-01

    One of the research activities in support of the commercial radioisotope production program is safety research on FPM (Fission Product Molybdenum) target irradiation. FPM targets take the form of a stainless steel tube containing nuclear-grade high-enrichment uranium, and the irradiated tube is intended to yield fission products. Fission products such as Mo-99 are widely used in the form of kits in the medical world. The neutronics problem is solved using first-order perturbation theory derived from the four-group diffusion equation. Mo-99 has a comparatively long half-life of about 66 hours (nearly 3 days), so delivery of the radioisotope to consumer centers and storage is possible, though still limited. Production of this isotope potentially has significant economic value. The criticality and flux in a multigroup diffusion model were calculated for various irradiation positions and uranium contents. This model involves complex computation with a large, sparse matrix system, and several parallel algorithms have been developed for solving such systems. In this paper, a successive over-relaxation (SOR) algorithm was implemented so that the calculation of reactivity coefficients can be done in parallel. Previous work performed the reactivity calculations serially with Gauss-Seidel iterations. The parallel method can be used to solve the multigroup diffusion equation system and calculate the criticality and reactivity coefficients. In this research a computer code was developed that exploits parallel processing to perform the reactivity calculations used in the safety analysis. Parallel processing on a multicore computer system allows the calculation to be performed more quickly. The code was applied to the safety-limit calculation of irradiated FPM targets containing highly enriched uranium.
    The neutron calculation results show that for uranium contents of 1.7676 g and 6.1866 g (× 10^6 cm^-1) in a tube, the delta reactivities are still
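    The SOR update itself is compact; a minimal sketch for a generic linear system Ax = b (illustrative, not the multigroup diffusion code):

```python
def sor_solve(A, b, omega=1.25, tol=1e-10, max_iter=500):
    # Successive over-relaxation for Ax = b: a Gauss-Seidel sweep whose
    # update is over-relaxed by the factor omega (0 < omega < 2).
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        delta = 0.0
        for i in range(n):
            sigma = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i][i]
            delta = max(delta, abs(new - x[i]))
            x[i] = new
        if delta < tol:
            break
    return x
```

    With omega = 1 the sweep reduces to the plain Gauss-Seidel iteration used in the earlier serial work; a well-chosen omega > 1 accelerates convergence, and the independent per-row updates within a sweep are what parallel variants distribute across cores.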

  19. The evaluation of failure detection and isolation algorithms for restructurable control

    NASA Technical Reports Server (NTRS)

    Motyka, P.; Bonnice, W.; Hall, S.; Wagner, E.

    1984-01-01

    Three failure detection and identification techniques were compared to determine their usefulness in detecting and isolating failures in an aircraft flight control system, excluding sensor and flight control computer failures. The algorithms considered were the detection filter, the Generalized Likelihood Ratio test, and the Orthogonal Series Generalized Likelihood Ratio test. A modification to the basic detection filter is also considered, which uses secondary filtering of the residuals to produce unidirectional failure signals. The algorithms were evaluated by testing their ability to detect and isolate control surface failures in a nonlinear simulation of a C-130 aircraft. It was found that failures of some aircraft controls are difficult to distinguish because they have similar effects on the dynamics of the vehicle. Quantitative measures for evaluating the distinguishability of failures are considered. A system monitoring strategy for implementing the failure detection and identification techniques was also considered. This strategy identified the mix of direct measurement of failures versus the computation of failures necessary for implementation of the technology in an aircraft system.

  20. How Does Sampling Methodology Influence Molecular Detection and Isolation Success in Influenza A Virus Field Studies?

    PubMed

    Latorre-Margalef, Neus; Avril, Alexis; Tolf, Conny; Olsen, Björn; Waldenström, Jonas

    2016-02-01

    Wild waterfowl are important reservoir hosts for influenza A virus (IAV) and a potential source of spillover infections in other hosts, including poultry and swine. The emergence of highly pathogenic avian influenza (HPAI) viruses, such as H5N1 and H5N8, and subsequent spread along migratory flyways prompted the initiation of several programs in Europe, North America, and Africa to monitor circulation of HPAI and low-pathogenicity precursor viruses (low-pathogenicity avian influenza [LPAI] viruses). Given the costs of maintaining such programs, it is essential to establish best practice for field methodologies to provide robust data for epidemiological interpretation. Here, we use long-term surveillance data from a single site to evaluate the influence of a number of parameters on virus detection and isolation of LPAI viruses. A total of 26,586 samples (oropharyngeal, fecal, and cloacal) collected from wild mallards were screened by real-time PCR, and positive samples were subjected to isolation in embryonated chicken eggs. The LPAI virus detection rate was influenced by the sample type: cloacal/fecal samples showed a consistently higher detection rate and lower cycle threshold (Ct) value than oropharyngeal samples. Molecular detection was more sensitive than isolation, and virus isolation success was proportional to the number of RNA copies in the sample. Interestingly, for a given Ct value, the isolation success was lower in samples from adult birds than in those from juveniles. Comparing the results of specific real-time reverse transcriptase (RRT)-PCRs and of isolation, it was clear that coinfections were common in the investigated birds. The effects of sample type and detection methods warrant some caution in interpretation of the surveillance data. PMID:26655759

  1. How Does Sampling Methodology Influence Molecular Detection and Isolation Success in Influenza A Virus Field Studies?

    PubMed Central

    Avril, Alexis; Tolf, Conny; Olsen, Björn

    2015-01-01

    Wild waterfowl are important reservoir hosts for influenza A virus (IAV) and a potential source of spillover infections in other hosts, including poultry and swine. The emergence of highly pathogenic avian influenza (HPAI) viruses, such as H5N1 and H5N8, and subsequent spread along migratory flyways prompted the initiation of several programs in Europe, North America, and Africa to monitor circulation of HPAI and low-pathogenicity precursor viruses (low-pathogenicity avian influenza [LPAI] viruses). Given the costs of maintaining such programs, it is essential to establish best practice for field methodologies to provide robust data for epidemiological interpretation. Here, we use long-term surveillance data from a single site to evaluate the influence of a number of parameters on virus detection and isolation of LPAI viruses. A total of 26,586 samples (oropharyngeal, fecal, and cloacal) collected from wild mallards were screened by real-time PCR, and positive samples were subjected to isolation in embryonated chicken eggs. The LPAI virus detection rate was influenced by the sample type: cloacal/fecal samples showed a consistently higher detection rate and lower cycle threshold (Ct) value than oropharyngeal samples. Molecular detection was more sensitive than isolation, and virus isolation success was proportional to the number of RNA copies in the sample. Interestingly, for a given Ct value, the isolation success was lower in samples from adult birds than in those from juveniles. Comparing the results of specific real-time reverse transcriptase (RRT)-PCRs and of isolation, it was clear that coinfections were common in the investigated birds. The effects of sample type and detection methods warrant some caution in interpretation of the surveillance data. PMID:26655759

  2. Automatic detection of ECG electrode misplacement: a tale of two algorithms.

    PubMed

    Xia, Henian; Garcia, Gabriel A; Zhao, Xiaopeng

    2012-09-01

    Artifacts in an electrocardiogram (ECG) due to electrode misplacement can lead to wrong diagnoses. Various computer methods have been developed for automatic detection of electrode misplacement. Here we reviewed and compared the performance of two algorithms with the highest accuracies on several databases from PhysioNet. These algorithms were implemented into four models. For clean ECG records with clearly distinguishable waves, the best model produced excellent accuracies (> = 98.4%) for all misplacements except the LA/LL interchange (87.4%). However, the accuracies were significantly lower for records with noise and arrhythmias. Moreover, when the algorithms were tested on a database that was independent from the training database, the accuracies may be poor. For the worst scenario, the best accuracies for different types of misplacements ranged from 36.1% to 78.4%. A large number of ECGs of various qualities and pathological conditions are collected every day. To improve the quality of health care, the results of this paper call for more robust and accurate algorithms for automatic detection of electrode misplacement, which should be developed and tested using a database of extensive ECG records.

  3. Optimizing convergence rates of alternating minimization reconstruction algorithms for real-time explosive detection applications

    NASA Astrophysics Data System (ADS)

    Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.

    2016-05-01

    X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and more recently to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an AM reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15, when comparing images from accelerated and strictly convergent algorithms.
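
    To make the EM family concrete, here is the classic MLEM multiplicative update for emission tomography (a sketch with hypothetical names, illustrating the convergent baseline the paper accelerates, not the authors' alternating-minimization transmission algorithm): each voxel is scaled by the backprojected ratio of measured to predicted counts.

```python
def mlem_update(A, y, x):
    """One MLEM iteration: x_j <- x_j * backproj(y / Ax)_j / sensitivity_j.

    A: system matrix (detector rows x voxel columns), y: measured counts,
    x: current nonnegative image estimate.
    """
    n_det, n_vox = len(y), len(x)
    proj = [sum(A[i][j] * x[j] for j in range(n_vox)) for i in range(n_det)]
    ratio = [y[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(n_det)]
    sens = [sum(A[i][j] for i in range(n_det)) for j in range(n_vox)]
    return [
        x[j] * sum(A[i][j] * ratio[i] for i in range(n_det)) / sens[j]
        if sens[j] > 0 else 0.0
        for j in range(n_vox)
    ]
```

    Restricting each update to a subset of detector rows is the ordered-subsets acceleration cited above; it trades the strict convergence guarantee for fewer effective iterations, which is the tension the paper's accelerated-versus-strictly-convergent comparison quantifies.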

  4. Field demonstration of a scanning lidar and detection algorithm for spatially mapping honeybees for biological detection of land mines.

    PubMed

    Carlsten, Erik S; Wicks, Geoffrey R; Repasky, Kevin S; Carlsten, John L; Bromenshenk, Jerry J; Henderson, Colin B

    2011-05-10

    A biological detection scheme based on the natural foraging behavior of conditioned honeybees for detecting chemical vapor plumes associated with unexploded ordnance devices utilizes a scanning lidar instrument to provide spatial mapping of honeybee densities. The scanning light detection and ranging (lidar) instrument uses a frequency doubled Nd:YAG microchip laser to send out a series of pulses at a pulse repetition rate of 6.853 kHz. The scattered light is monitored to produce a discrete time series for each range. This discrete time series is then processed using an efficient algorithm that is able to isolate and identify the return signal from a honeybee in a cluttered environment, producing spatially mapped honeybee densities. Two field experiments were performed with the scanning lidar instrument that demonstrate good correlation between the honeybee density maps and the target locations. PMID:21556112

  5. Enhanced sensory processing accompanies successful detection of change for real-world sounds.

    PubMed

    Gregg, Melissa K; Snyder, Joel S

    2012-08-01

    Change deafness is the inability of listeners to detect changes occurring in their auditory environment. It is a matter of some debate whether change deafness occurs because of a failure of auditory-specific processes or a failure of more general semantic/verbal memory. To address this issue, we measured event-related potentials (ERPs) to pairs of scenes consisting of naturalistic auditory objects while listeners made a same/different judgment for scenes presented before and after an interruption. ERPs to the post-change scene revealed an enhanced early sensory response (N1) and an enhanced late positivity (P3) for detected changes. Change detection performance was better when there was a large acoustic spread among the objects within Scenes 1 and 2, suggesting that the deficits reflected by the ERP components during change deafness are related to successfully segregating the pre- and post-change objects. We also found that a separate sensory response (P2) reflects implicit, unconscious change detection. Overall, the results provide evidence that auditory-specific sensory processing is critical for both explicit and implicit change detection in natural auditory scenes.

  6. Oil Spill Detection by SAR Images: Dark Formation Detection, Feature Extraction and Classification Algorithms

    PubMed Central

    Topouzelis, Konstantinos N.

    2008-01-01

    This paper provides a comprehensive review of the use of Synthetic Aperture Radar (SAR) images for the detection of illegal discharges from ships. It summarizes the current state of the art, covering operational and research aspects of the application. Oil spills seriously affect the marine ecosystem and cause political and scientific concern, since they damage fragile marine and coastal ecosystems. The amount of pollutant discharges and the associated effects on the marine environment are important parameters in evaluating sea water quality. Satellite images can improve the possibilities for the detection of oil spills, as they cover large areas and offer an economical and easier way of continuously patrolling coastal areas. SAR images have been widely used for oil spill detection. The present paper gives an overview of the methodologies used to detect oil spills in radar images. In particular, we concentrate on the use of manual and automatic approaches to distinguish oil spills from other natural phenomena. We discuss the most common techniques for detecting dark formations in SAR images, the features extracted from the detected dark formations, and the most commonly used classifiers. Finally, we conclude with a discussion of suggestions for further research. The references throughout the review can serve as a starting point for more intensive studies on the subject.

  7. Decomposition-based multiobjective evolutionary algorithm for community detection in dynamic social networks.

    PubMed

    Ma, Jingjing; Liu, Jie; Ma, Wenping; Gong, Maoguo; Jiao, Licheng

    2014-01-01

    Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms.
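
    The snapshot-quality objective optimized here is modularity. A minimal sketch of Newman's modularity Q (hypothetical names; real implementations use sparse edge lists rather than a dense adjacency matrix) is:

```python
def modularity(adj, communities):
    """Newman modularity Q = (1/2m) * sum_ij (A_ij - k_i k_j / 2m) * [c_i == c_j].

    adj: symmetric 0/1 adjacency matrix (no self-loops).
    communities: community label per node.
    """
    n = len(adj)
    two_m = sum(sum(row) for row in adj)   # 2m: each undirected edge counted twice
    k = [sum(row) for row in adj]          # node degrees
    q = 0.0
    for i in range(n):
        for j in range(n):
            if communities[i] == communities[j]:
                q += adj[i][j] - k[i] * k[j] / two_m
    return q / two_m
```

    Q near 1 means far more intra-community edges than a random rewiring would predict; the decomposition-based algorithm trades this snapshot score against normalized mutual information with the previous time step's partition.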

  8. An efficient moving target detection algorithm based on sparsity-aware spectrum estimation.

    PubMed

    Shen, Mingwei; Wang, Jie; Wu, Di; Zhu, Daiyin

    2014-09-12

    In this paper, an efficient direct data domain space-time adaptive processing (STAP) algorithm for moving target detection is proposed, which is based on the distinct spectrum features of clutter and target signals in the angle-Doppler domain. To reduce the computational complexity, the high-resolution angle-Doppler spectrum is obtained by finding the sparsest coefficients in the angle domain using the reduced-dimension data within each Doppler bin. We then present a knowledge-aided block-size detection algorithm that can discriminate between moving targets and clutter based on the extracted spectrum features. The feasibility and effectiveness of the proposed method are validated through both numerical simulations and raw data processing results.

  9. An effective detection algorithm for region duplication forgery in digital images

    NASA Astrophysics Data System (ADS)

    Yavuz, Fatih; Bal, Abdullah; Cukur, Huseyin

    2016-04-01

    Powerful image editing tools are very common and easy to use these days. This situation may cause some forgeries by adding or removing some information on the digital images. In order to detect these types of forgeries such as region duplication, we present an effective algorithm based on fixed-size block computation and discrete wavelet transform (DWT). In this approach, the original image is divided into fixed-size blocks, and then wavelet transform is applied for dimension reduction. Each block is processed by Fourier Transform and represented by circle regions. Four features are extracted from each block. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are detected according to comparison metric results. The experimental results show that the proposed algorithm presents computational efficiency due to fixed-size circle block architecture.
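
    The block-sort idea in this approach can be illustrated with a deliberately simplified, exact-match variant (all names hypothetical): each fixed-size block is reduced to a coarse feature vector (here, sub-tile means standing in for the DWT low-pass band), the features are sorted lexicographically, and identical neighbours in the sorted order become duplicate candidates.

```python
def find_duplicate_blocks(img, bs=4):
    """Toy copy-move detector via lexicographic block sorting.

    img: 2-D list of grayscale values. Each bs x bs block is summarized by
    the means of its four half-size sub-tiles; equal adjacent features in
    the sorted list that are at least bs apart (to skip overlapping
    blocks) are reported as duplicate candidates.
    """
    h, w = len(img), len(img[0])
    half = bs // 2
    feats = []
    for y in range(0, h - bs + 1):
        for x in range(0, w - bs + 1):
            f = []
            for ty in (0, half):
                for tx in (0, half):
                    s = sum(img[y + ty + dy][x + tx + dx]
                            for dy in range(half) for dx in range(half))
                    f.append(s / (half * half))
            feats.append((tuple(f), (y, x)))
    feats.sort()                      # identical features become neighbours
    pairs = []
    for (fa, pa), (fb, pb) in zip(feats, feats[1:]):
        if fa == fb and abs(pa[0] - pb[0]) + abs(pa[1] - pb[1]) >= bs:
            pairs.append((pa, pb))
    return pairs
```

    The paper's algorithm additionally applies Fourier-transformed circle-region features and a comparison metric rather than exact equality, making it robust to noise and recompression.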

  10. A multi-split mapping algorithm for circular RNA, splicing, trans-splicing and fusion detection.

    PubMed

    Hoffmann, Steve; Otto, Christian; Doose, Gero; Tanzer, Andrea; Langenberger, David; Christ, Sabina; Kunz, Manfred; Holdt, Lesca M; Teupser, Daniel; Hackermüller, Jörg; Stadler, Peter F

    2014-02-10

    Numerous high-throughput sequencing studies have focused on detecting conventionally spliced mRNAs in RNA-seq data. However, non-standard RNAs arising through gene fusion, circularization or trans-splicing are often neglected. We introduce a novel, unbiased algorithm to detect splice junctions from single-end cDNA sequences. In contrast to other methods, our approach accommodates multi-junction structures. Our method compares favorably with competing tools for conventionally spliced mRNAs and, with a gain of up to 40% of recall, systematically outperforms them on reads with multiple splits, trans-splicing and circular products. The algorithm is integrated into our mapping tool segemehl (http://www.bioinf.uni-leipzig.de/Software/segemehl/).

  11. An efficient moving target detection algorithm based on sparsity-aware spectrum estimation.

    PubMed

    Shen, Mingwei; Wang, Jie; Wu, Di; Zhu, Daiyin

    2014-01-01

    In this paper, an efficient direct data domain space-time adaptive processing (STAP) algorithm for moving target detection is proposed, which is based on the distinct spectrum features of clutter and target signals in the angle-Doppler domain. To reduce the computational complexity, the high-resolution angle-Doppler spectrum is obtained by finding the sparsest coefficients in the angle domain using the reduced-dimension data within each Doppler bin. We then present a knowledge-aided block-size detection algorithm that can discriminate between moving targets and clutter based on the extracted spectrum features. The feasibility and effectiveness of the proposed method are validated through both numerical simulations and raw data processing results. PMID:25222035

  12. Optimal algorithm for automatic detection of microaneurysms based on receiver operating characteristic curve

    NASA Astrophysics Data System (ADS)

    Xu, Lili; Luo, Shuqian

    2010-11-01

    Microaneurysms (MAs) are the first manifestations of diabetic retinopathy (DR) as well as an indicator of its progression. Their automatic detection plays a key role in both mass screening and monitoring and is therefore at the core of any system for computer-assisted diagnosis of DR. The algorithm comprises the following stages: candidate detection, which extracts the patterns possibly corresponding to MAs using a mathematical-morphology black top-hat transform; feature extraction, which characterizes these candidates; and classification, based on a support vector machine (SVM), which validates the MAs. The selection of the feature vector and of the SVM kernel function is very important to the algorithm. We use the receiver operating characteristic (ROC) curve to evaluate the discriminating performance of different feature vectors and different SVM kernel functions. The ROC analysis indicates that the quadratic-polynomial SVM with a combination of features as the input shows the best discriminating performance.
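
    The black top-hat used for candidate detection is morphological closing minus the original image; small dark spots narrower than the structuring element respond strongly. A 1-D sketch (hypothetical names; retinal images need the 2-D version with a disk-shaped structuring element) is:

```python
def black_tophat_1d(signal, size=3):
    """1-D grayscale black top-hat: closing(signal) - signal.

    Closing = dilation (local max) followed by erosion (local min) with a
    window of `size` samples; dark dips narrower than the window are
    lifted out of the background, which is how small dark candidates such
    as microaneurysms are highlighted.
    """
    def local(op, s):
        n, h = len(s), size // 2
        # shrinking windows at the borders keep the output the same length
        return [op(s[max(0, i - h):min(n, i + h + 1)]) for i in range(n)]
    closed = local(min, local(max, signal))
    return [c - v for c, v in zip(closed, signal)]
```

    The response magnitude then feeds the feature vector scored by the SVM stage.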

  13. Optimized Swinging Door Algorithm for Wind Power Ramp Event Detection: Preprint

    SciTech Connect

    Cui, Mingjian; Zhang, Jie; Florita, Anthony R.; Hodge, Bri-Mathias; Ke, Deping; Sun, Yuanzhang

    2015-08-06

    Significant wind power ramp events (WPREs) are those that influence the integration of wind power, and they are a concern to the continued reliable operation of the power grid. As wind power penetration has increased in recent years, so has the importance of wind power ramps. In this paper, an optimized swinging door algorithm (SDA) is developed to improve ramp detection performance. Wind power time series data are segmented by the original SDA, and then all significant ramps are detected and merged through a dynamic programming algorithm. An application of the optimized SDA is provided to ascertain the optimal parameter of the original SDA. Measured wind power data from the Electric Reliability Council of Texas (ERCOT) are used to evaluate the proposed optimized SDA.
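
    The original SDA segments the power time series by keeping only points that a straight line from the last kept point cannot reach inside a tolerance corridor. A compact sketch (hypothetical names; the optimized version then merges and scores the resulting segments as ramps via dynamic programming) is:

```python
def swinging_door(times, values, eps):
    """Swinging door segmentation: keep indices where the +/-eps slope
    corridor from the last kept point collapses, so each kept-to-kept
    straight line approximates the samples in between to within ~eps.
    """
    kept = [0]
    i0 = 0
    up, lo = float('inf'), float('-inf')
    for i in range(1, len(times)):
        dt = times[i] - times[i0]
        up = min(up, (values[i] + eps - values[i0]) / dt)  # upper "door"
        lo = max(lo, (values[i] - eps - values[i0]) / dt)  # lower "door"
        if lo > up:                 # doors crossed: close the segment
            kept.append(i - 1)
            i0 = i - 1
            dt = times[i] - times[i0]
            up = (values[i] + eps - values[i0]) / dt
            lo = (values[i] - eps - values[i0]) / dt
    kept.append(len(times) - 1)
    return kept
```

    Consecutive kept indices delimit near-linear segments; steep segments exceeding a power-change threshold are the ramp candidates the optimized SDA merges.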

  14. Quantitative detection of defects based on Markov-PCA-BP algorithm using pulsed infrared thermography technology

    NASA Astrophysics Data System (ADS)

    Tang, Qingju; Dai, Jingmin; Liu, Junyan; Liu, Chunsheng; Liu, Yuanlin; Ren, Chunping

    2016-07-01

    Quantitative detection of the diameter and depth of debonding defects in thermal barrier coatings (TBCs) was carried out using pulsed infrared thermography. By combining principal component analysis with neural network theory, the Markov-PCA-BP algorithm is proposed, and its principle and realization process are described. In the prediction model, the principal components that reflect most characteristics of the thermal wave signal were set as the input, and the defect depth and diameter were set as the output. Experimental data from pulsed infrared thermography tests of TBCs with flat-bottom-hole defects were selected as the training and testing samples. A Markov-PCA-BP predictive system was obtained, with which both the defect depth and diameter were identified accurately, demonstrating the effectiveness of the proposed method for quantitative detection of debonding defects in TBCs.

  15. A new algorithm for epilepsy seizure onset detection and spread estimation from EEG signals

    NASA Astrophysics Data System (ADS)

    Quintero-Rincón, Antonio; Pereyra, Marcelo; D'Giano, Carlos; Batatia, Hadj; Risk, Marcelo

    2016-04-01

    Appropriate diagnosis and treatment of epilepsy is a major public health issue. Patients suffering from this disease often exhibit different physical characterizations, which result from the synchronous and excessive discharge of a group of neurons in the cerebral cortex. Extracting this information from EEG signals is an important problem in biomedical signal processing. In this work we propose a new algorithm for seizure onset detection and spread estimation in epilepsy patients. The algorithm is based on a multilevel 1-D wavelet decomposition that captures the physiological brain frequency bands, coupled with a generalized Gaussian model. Preliminary experiments with signals from 30 epileptic seizures in 11 subjects suggest that the proposed methodology is a powerful tool for detecting the onset of epileptic seizures and their spread across the brain.

  16. Design and evaluation of hyperspectral algorithms for chemical warfare agent detection

    NASA Astrophysics Data System (ADS)

    Manolakis, Dimitris; D'Amico, Francis M.

    2005-11-01

    Remote sensing of chemical warfare agents (CWA) with stand-off hyperspectral imaging sensors has a wide range of civilian and military applications. These sensors exploit the spectral changes in the ambient photon flux produced by either sunlight or the thermal emission of the earth after passage through a region containing the CWA cloud. The purpose of this paper is threefold. First, we discuss a simple phenomenological model for the radiance measured by the sensor in the case of optically thin clouds; this model provides the mathematical framework for the development of optimum algorithms and their analytical evaluation. Second, we identify the fundamental aspects of the data exploitation problem and develop detection algorithms that can be used by different sensors as long as they can provide the required measurements. Finally, we discuss performance metrics for detection, identification, and quantification, and investigate their dependence on CWA spectral signatures, sensor noise, and background spectral variability.

  17. Runway Safety Monitor Algorithm for Single and Crossing Runway Incursion Detection and Alerting

    NASA Technical Reports Server (NTRS)

    Green, David F., Jr.

    2006-01-01

    The Runway Safety Monitor (RSM) is an aircraft based algorithm for runway incursion detection and alerting that was developed in support of NASA's Runway Incursion Prevention System (RIPS) research conducted under the NASA Aviation Safety and Security Program's Synthetic Vision System project. The RSM algorithm provides warnings of runway incursions in sufficient time for pilots to take evasive action and avoid accidents during landings, takeoffs or when taxiing on the runway. The report documents the RSM software and describes in detail how RSM performs runway incursion detection and alerting functions for NASA RIPS. The report also describes the RIPS flight tests conducted at the Reno/Tahoe International Airport (RNO) and the Wallops Flight Facility (WAL) during July and August of 2004, and the RSM performance results and lessons learned from those flight tests.

  18. Detection of unusual trajectories using multi-objective evolutionary algorithms and rough sets

    NASA Astrophysics Data System (ADS)

    Smolinski, Tomasz G.; Newell, Trevor; McDaniel, Samantha; Pokrajac, David

    2013-09-01

    Detection of unusual trajectories of moving objects (e.g., people, automobiles, etc.) is an important problem in many civilian and military surveillance applications. In this work, we propose a multi-objective evolutionary algorithms and rough sets-based approach that breaks down 2-dimensional trajectories into a set of additive components, which then can be used to build a classifier capable of recognizing typical, but yet unseen trajectories, and identifying those that seem suspicious.

  19. A lake detection algorithm (LDA) using Landsat 8 data: A comparative approach in glacial environment

    NASA Astrophysics Data System (ADS)

    Bhardwaj, Anshuman; Singh, Mritunjay Kumar; Joshi, P. K.; Snehmani; Singh, Shaktiman; Sam, Lydia; Gupta, R. D.; Kumar, Rajesh

    2015-06-01

    Glacial lakes show a wide range of turbidity. Owing to this, the normalized difference water indices (NDWIs) proposed by many researchers do not give appropriate results for glacial lakes. In addition, the sub-pixel proportion of water and the use of different optical band combinations are also reported to produce varying results. In the wake of the changing climate and increasing GLOFs (glacial lake outburst floods), there is a need to utilize the wide optical and thermal capabilities of Landsat 8 data for the automated detection of glacial lakes. In the present study, the optical and thermal bandwidths of Landsat 8 data were explored, along with the terrain slope parameter derived from the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model Version 2 (ASTER GDEM V2), for detecting and mapping glacial lakes. The validation of the algorithm was performed using manually digitized and subsequently field-corrected lake boundaries. The pre-existing NDWIs were also evaluated to assess the superiority and stability of the proposed algorithm for glacial lake detection. Two new parameters, LDI (lake detection index) and LF (lake fraction), were proposed to quantify the performance of the indices. The lake detection algorithm (LDA) performed best for both mixed lake pixels and pure lake pixels, with no false detections (LDI = 0.98) and very little areal underestimation (LF = 0.73). The coefficient of determination (R2) between the areal extents of lake pixels extracted using the LDA and the actual lake area was very high (0.99). With an understanding of the terrain conditions and slight threshold adjustments, this work can be replicated for any mountainous region of the world.
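
    The baseline NDWI the study compares against is a simple band ratio. A per-pixel sketch (hypothetical function name; reflectance values are assumed already atmospherically corrected) shows why turbidity is a problem:

```python
def ndwi(green, nir):
    """Normalized Difference Water Index: (Green - NIR) / (Green + NIR).

    Clear water reflects green light and absorbs NIR, so NDWI > 0 suggests
    water; turbid glacial lakes reflect more NIR and push the index toward
    zero, which is why a fixed NDWI threshold struggles and the LDA adds
    thermal and terrain-slope information.
    """
    denom = green + nir
    return 0.0 if denom == 0 else (green - nir) / denom
```

    In practice the computation is applied to whole Landsat 8 band rasters and thresholded to produce a water mask.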

  20. An automated algorithm for online detection of fragmented QRS and identification of its various morphologies

    PubMed Central

    Maheshwari, Sidharth; Acharyya, Amit; Puddu, Paolo Emilio; Mazomenos, Evangelos B.; Leekha, Gourav; Maharatna, Koushik; Schiariti, Michele

    2013-01-01

    Fragmented QRS (f-QRS) has been proven to be an efficient biomarker for several diseases, including remote and acute myocardial infarction, cardiac sarcoidosis, non-ischaemic cardiomyopathy, etc. It has also been shown to have higher sensitivity and/or specificity values than the conventional markers (e.g. Q-wave, ST-elevation, etc.), which may even regress or disappear with time. Patients with such diseases have to undergo expensive and sometimes invasive tests for diagnosis. Automated detection of f-QRS, followed by identification of its various morphologies in addition to conventional ECG feature extraction (e.g. P, QRS, T amplitude and duration, etc.), will lead to more reliable diagnosis, therapy and disease prognosis than the state-of-the-art approaches, and will thereby be of significant clinical importance for hospital-based and emerging remote health monitoring environments as well as for implanted ICD devices. An automated algorithm for detection of f-QRS from the ECG and identification of its various morphologies is proposed in this work which, to the best of our knowledge, is the first of its kind. Using our recently proposed time-domain morphology and gradient-based ECG feature extraction algorithm, the QRS complex is extracted, and a discrete wavelet transform (DWT) with one level of decomposition, using the ‘Haar’ wavelet, is applied to it to detect the presence of fragmentation. The detail DWT coefficients were examined to formulate postulates for detecting all types of morphologies reported in the literature. To model and verify the algorithm, PhysioNet's PTB database was used. Forty patients were randomly selected from the database and their ECGs were examined by two experienced cardiologists, and the results were compared with those obtained from the algorithm. Out of 40 patients, 31 were considered appropriate for comparison by the two cardiologists, and it is shown that 334 out of 372 (89.8%) leads from the chosen 31 patients
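
    The one-level Haar DWT applied to the extracted QRS complex is easy to sketch (hypothetical names; the paper's postulates on the detail coefficients are not reproduced here): each sample pair yields one approximation and one detail coefficient, and fragmentation manifests as extra large-magnitude detail coefficients within the QRS.

```python
def haar_dwt_level1(x):
    """One level of the Haar DWT.

    Returns (approximation, detail): pairwise scaled sums and differences.
    A smooth QRS produces near-zero details; notched (fragmented) segments
    produce large ones.
    """
    s = 2 ** 0.5
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    return approx, detail
```

    Thresholding and counting sign changes in the detail band is one plausible way such coefficients can be turned into fragmentation flags.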

  1. Dramatyping: a generic algorithm for detecting reasonable temporal correlations between drug administration and lab value alterations

    PubMed Central

    2016-01-01

    According to the World Health Organization, one of the criteria for the standardized assessment of case causality in adverse drug reactions is the temporal relationship between the intake of a drug and the occurrence of a reaction or a laboratory test abnormality. This article presents and describes an algorithm for the detection of a reasonable temporal correlation between the administration of a drug and the alteration of a laboratory value course. The algorithm is designed to process normalized lab values and is therefore universally applicable. It has a sensitivity of 0.932 for the detection of lab value courses that show changes in temporal correlation with the administration of a drug and it has a specificity of 0.967 for the detection of lab value courses that show no changes. Therefore, the algorithm is appropriate to screen the data of electronic health records and to support human experts in revealing adverse drug reactions. A reference implementation in Python programming language is available. PMID:27042396
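
    The screening figures quoted above come straight from confusion-matrix counts. A minimal helper (hypothetical names and illustrative counts, not the study's actual data) makes the definitions explicit:

```python
def confusion_metrics(tp, fp, tn, fn):
    """Screening metrics from confusion-matrix counts.

    Sensitivity ~0.93 with specificity ~0.97, as reported for the
    dramatyping algorithm, means few truly correlated lab-value courses
    are missed while most unrelated courses are correctly ruled out.
    """
    return {
        "sensitivity": tp / (tp + fn),   # recall on truly correlated courses
        "specificity": tn / (tn + fp),   # rule-out rate on unrelated courses
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
```

    Note that in a screening setting with rare positives, even high specificity can leave the PPV low, so flagged courses still need expert review.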

  2. Dramatyping: a generic algorithm for detecting reasonable temporal correlations between drug administration and lab value alterations.

    PubMed

    Newe, Axel

    2016-01-01

    According to the World Health Organization, one of the criteria for the standardized assessment of case causality in adverse drug reactions is the temporal relationship between the intake of a drug and the occurrence of a reaction or a laboratory test abnormality. This article presents and describes an algorithm for the detection of a reasonable temporal correlation between the administration of a drug and the alteration of a laboratory value course. The algorithm is designed to process normalized lab values and is therefore universally applicable. It has a sensitivity of 0.932 for the detection of lab value courses that show changes in temporal correlation with the administration of a drug and it has a specificity of 0.967 for the detection of lab value courses that show no changes. Therefore, the algorithm is appropriate to screen the data of electronic health records and to support human experts in revealing adverse drug reactions. A reference implementation in Python programming language is available. PMID:27042396

  3. Dramatyping: a generic algorithm for detecting reasonable temporal correlations between drug administration and lab value alterations.

    PubMed

    Newe, Axel

    2016-01-01

    According to the World Health Organization, one of the criteria for the standardized assessment of case causality in adverse drug reactions is the temporal relationship between the intake of a drug and the occurrence of a reaction or a laboratory test abnormality. This article presents and describes an algorithm for the detection of a reasonable temporal correlation between the administration of a drug and the alteration of a laboratory value course. The algorithm is designed to process normalized lab values and is therefore universally applicable. It has a sensitivity of 0.932 for the detection of lab value courses that show changes in temporal correlation with the administration of a drug and it has a specificity of 0.967 for the detection of lab value courses that show no changes. Therefore, the algorithm is appropriate to screen the data of electronic health records and to support human experts in revealing adverse drug reactions. A reference implementation in Python programming language is available.

  4. Study of Host-Based Cyber Attack Precursor Symptom Detection Algorithm

    NASA Astrophysics Data System (ADS)

    Song, Jae-Gu; Kim, Jong Hyun; Seo, Dongil; Soh, Wooyoung; Kim, Seoksoo

    Botnet-based cyber attacks cause large-scale damage with increasingly intelligent tools, which has prompted varied research on bot detection. In this study, we developed a method of monitoring the behavior of host-based processes to detect cyber attack precursor symptoms from the moment a bot herder attempts to create zombie PCs. We designed an algorithm that identifies the characteristics of a botnet attempting malicious behavior by means of signature registration, covering process, reputation, network traffic, packet and source analysis together with a white list, as an end-point countermeasure against bots.

  5. Syndromic Algorithms for Detection of Gambiense Human African Trypanosomiasis in South Sudan

    PubMed Central

    Palmer, Jennifer J.; Surur, Elizeous I.; Goch, Garang W.; Mayen, Mangar A.; Lindner, Andreas K.; Pittet, Anne; Kasparian, Serena; Checchi, Francesco; Whitty, Christopher J. M.

    2013-01-01

    Background Active screening by mobile teams is considered the best method for detecting human African trypanosomiasis (HAT) caused by Trypanosoma brucei gambiense but the current funding context in many post-conflict countries limits this approach. As an alternative, non-specialist health care workers (HCWs) in peripheral health facilities could be trained to identify potential cases who need testing based on their symptoms. We explored the predictive value of syndromic referral algorithms to identify symptomatic cases of HAT among a treatment-seeking population in Nimule, South Sudan. Methodology/Principal Findings Symptom data from 462 patients (27 cases) presenting for a HAT test via passive screening over a 7-month period were collected to construct and evaluate over 14,000 four-item syndromic algorithms considered simple enough to be used by peripheral HCWs. For comparison, algorithms developed in other settings were also tested on our data, and a panel of expert HAT clinicians were asked to make referral decisions based on the symptom dataset. The best performing algorithms consisted of three core symptoms (sleep problems, neurological problems and weight loss), with or without a history of oedema, cervical adenopathy or proximity to livestock. They had a sensitivity of 88.9–92.6%, a negative predictive value of up to 98.8% and a positive predictive value in this context of 8.4–8.7%. In terms of sensitivity, these outperformed more complex algorithms identified in other studies, as well as the expert panel. The best-performing algorithm is predicted to identify about 9/10 treatment-seeking HAT cases, though only 1/10 patients referred would test positive. Conclusions/Significance In the absence of regular active screening, improving referrals of HAT patients through other means is essential. Systematic use of syndromic algorithms by peripheral HCWs has the potential to increase case detection and would increase their participation in HAT programmes. The
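
    In code, evaluating one candidate four-item algorithm against a patient's reported symptoms amounts to a simple set test. The referral rule below (refer on any matching item) and the exact item names are illustrative assumptions; the paper evaluates thousands of candidate rules against its dataset:

```python
def refer_for_hat_test(patient_symptoms, algorithm_items):
    """Return True if the patient should be referred for a HAT test.

    Refers if the patient reports at least one item of the syndromic
    algorithm (an assumed rule; the paper scores many four-item rules).
    """
    return bool(set(patient_symptoms) & set(algorithm_items))

# Core items reported for the best-performing algorithms, plus one optional item:
CORE_ITEMS = {"sleep problems", "neurological problems", "weight loss"}
ALGORITHM = CORE_ITEMS | {"history of oedema"}
```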

  6. Trace explosives detection using photo-thermal infrared imaging spectroscopy (PT-IRIS): theory, modeling, and detection algorithms

    NASA Astrophysics Data System (ADS)

    Furstenberg, Robert; Kendziora, Christopher A.; Papantonakis, Michael R.; Nguyen, Viet; Byers, Jeff; McGill, R. Andrew

    2015-05-01

    We are developing a technology for stand-off detection based on photo-thermal infrared imaging spectroscopy (PT-IRIS). In this approach, one or more infrared (IR) quantum cascade lasers are tuned to strong absorption bands in the analytes and directed at the sample while an IR focal plane array is used to image the subsequent thermal emissions. In this paper we present recent advances in the theory and numerical modeling of photo-thermal imaging and spectroscopy of particulates on flat substrates. We compare the theoretical models with experimental data taken on our mobile cart-based PT-IRIS system. Synthetic data of the photo-thermal response was calculated for a wide range of analytes, substrates, particle sizes, and analyte mass loadings using their known thermo-physical and optical properties. These synthetic data sets can now be generated quickly and were used to accelerate the development of detection algorithms. The performance of detection algorithms will also be discussed.

  7. Target detection in diagnostic ultrasound: Evaluation of a method based on the CLEAN algorithm.

    PubMed

    Masoom, Hassan; Adve, Raviraj S; Cobbold, Richard S C

    2013-02-01

    A technique is proposed for the detection of abnormalities (targets) in ultrasound images using little or no a priori information and requiring little operator intervention. The scheme is a combination of the CLEAN algorithm, originally proposed for radio astronomy, and constant false alarm rate (CFAR) processing, as developed for use in radar systems. The CLEAN algorithm identifies areas in the ultrasound image that stand out above a threshold in relation to the background; CFAR techniques allow for an adaptive, semi-automated, selection of the threshold. Neither appears to have been previously used for target detection in ultrasound images and never together in any context. As a first step towards assessing the potential of this method, we used the widely adopted Field II program to simulate B-mode images. We assumed the use of a 256 element linear array operating at 3.0 MHz into a water-like medium containing a density of point scatterers sufficient to simulate a background of fully developed speckle. Spherical targets with diameters ranging from 0.25 to 6.0 mm and contrasts ranging from 0 to 12 dB relative to the background were used as test objects. Using a contrast-detail analysis, the probability of detection curves indicate these targets can be consistently detected within a speckle background. Our results indicate that the method has considerable promise for the semi-automated detection of abnormalities with diameters greater than a few millimeters, depending on the contrast.
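
    The CFAR side of the scheme can be sketched as a standard one-dimensional cell-averaging CFAR: each sample is compared with a threshold scaled from the mean of the surrounding training cells, with guard cells excluded so a target does not contaminate its own background estimate. This is a generic CA-CFAR sketch, not the paper's two-dimensional implementation; the parameter values are illustrative:

```python
import numpy as np

def ca_cfar(x, num_train=8, num_guard=2, scale=4.0):
    """1-D cell-averaging CFAR: adaptive threshold from surrounding cells."""
    n = len(x)
    detections = np.zeros(n, dtype=bool)
    for i in range(n):
        lo = max(0, i - num_guard - num_train)
        hi = min(n, i + num_guard + num_train + 1)
        # Training cells on either side of the guard region around cell i.
        train = np.r_[x[lo:max(lo, i - num_guard)], x[min(n, i + num_guard + 1):hi]]
        if train.size == 0:
            continue
        detections[i] = x[i] > scale * train.mean()
    return detections
```

    On a flat unit background with a single strong sample, only that sample exceeds its locally estimated threshold.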

  8. Hyperspectral data collection for the assessment of target detection algorithms: the Viareggio 2013 trial

    NASA Astrophysics Data System (ADS)

    Rossi, Alessandro; Acito, Nicola; Diani, Marco; Corsini, Giovanni; De Ceglie, Sergio Ugo; Riccobono, Aldo; Chiarantini, Leandro

    2014-10-01

    Airborne hyperspectral imagery is valuable for military and civilian applications, such as target identification, detection of anomalies and changes within multiple acquisitions. In target detection (TD) applications, the performance assessment of different algorithms is an important and critical issue. In this context, the small amount of publicly available hyperspectral data motivated us to perform an extensive measurement campaign including various operating scenarios. The campaign was organized by CISAM in cooperation with University of Pisa, Selex ES and CSSN-ITE, and it was conducted in Viareggio, Italy in May 2013. The Selex ES airborne hyperspectral sensor SIM.GA was mounted on board an airplane to collect images over different sites in the morning and afternoon of two consecutive days. This paper describes the hyperspectral data collection of the trial. Four different sites were set up, representing a complex urban scenario, two parking lots and a rural area. Targets with dimensions comparable to the sensor ground resolution were deployed in the sites to reproduce different operating situations. An extensive ground truth documentation completes the data collection. Experiments to test anomalous change detection techniques were set up changing the position of the deployed targets. Search and rescue scenarios were simulated to evaluate the performance of anomaly detection algorithms. Moreover, the reflectance signatures of the targets were measured on the ground to perform spectral matching in varying atmospheric and illumination conditions. The paper presents some preliminary results that show the effectiveness of hyperspectral data exploitation for the object detection tasks of interest in this work.

  9. A Multi-Scale Algorithm for Graffito Advertisement Detection from Images of Real Estate

    NASA Astrophysics Data System (ADS)

    Yang, Jun; Zhu, Shi-Jiao

    There is a significant need to automatically detect and extract graffito advertisements embedded in housing images. However, separating the advertisement region is difficult because housing images generally have complex backgrounds. In this paper, a detection algorithm that uses multi-scale Gabor filters to identify graffito regions is proposed. First, multi-scale Gabor filters with different orientations are applied to the housing images; the approach then uses the resulting frequency data and the relationships between channels to find likely graffito regions, exploiting the complementary responses of the different filters to solve the detection problem with low computational effort. Finally, the method is tested on several real estate images containing embedded graffito advertisements to verify its robustness and efficiency. The experiments demonstrate that graffito regions can be detected quite well.

  10. A New MANET wormhole detection algorithm based on traversal time and hop count analysis.

    PubMed

    Karlsson, Jonny; Dooley, Laurence S; Pulkkis, Göran

    2011-01-01

    As demand increases for ubiquitous network facilities, infrastructure-less and self-configuring systems like Mobile Ad hoc Networks (MANET) are gaining popularity. MANET routing security however, is one of the most significant challenges to wide scale adoption, with wormhole attacks being an especially severe MANET routing threat. This is because wormholes are able to disrupt a major component of network traffic, while concomitantly being extremely difficult to detect. This paper introduces a new wormhole detection paradigm based upon Traversal Time and Hop Count Analysis (TTHCA), which in comparison to existing algorithms, consistently affords superior detection performance, allied with low false positive rates for all wormhole variants. Simulation results confirm that the TTHCA model exhibits robust wormhole route detection in various network scenarios, while incurring only a small network overhead. This feature makes TTHCA an attractive choice for MANET environments which generally comprise devices, such as wireless sensors, which possess a limited processing capability. PMID:22247657
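
    The underlying check — comparing the measured round-trip time against what the claimed hop count could plausibly produce — can be sketched as follows. A wormhole tunnel hides extra distance behind a single claimed hop, inflating the time actually spent per reported hop. The time budget and tolerance values are illustrative assumptions, not TTHCA's calibrated parameters:

```python
def wormhole_suspected(round_trip_time, hop_count, per_hop_budget=0.002, tolerance=1.5):
    """Flag a route whose measured per-hop traversal time is implausibly high.

    per_hop_budget: assumed worst-case one-hop traversal time (seconds).
    tolerance: slack factor before a route is considered suspicious.
    """
    if hop_count <= 0:
        raise ValueError("hop_count must be positive")
    per_hop = round_trip_time / (2 * hop_count)  # round trip covers the route twice
    return per_hop > tolerance * per_hop_budget
```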

  11. A New MANET Wormhole Detection Algorithm Based on Traversal Time and Hop Count Analysis

    PubMed Central

    Karlsson, Jonny; Dooley, Laurence S.; Pulkkis, Göran

    2011-01-01

    As demand increases for ubiquitous network facilities, infrastructure-less and self-configuring systems like Mobile Ad hoc Networks (MANET) are gaining popularity. MANET routing security however, is one of the most significant challenges to wide scale adoption, with wormhole attacks being an especially severe MANET routing threat. This is because wormholes are able to disrupt a major component of network traffic, while concomitantly being extremely difficult to detect. This paper introduces a new wormhole detection paradigm based upon Traversal Time and Hop Count Analysis (TTHCA), which in comparison to existing algorithms, consistently affords superior detection performance, allied with low false positive rates for all wormhole variants. Simulation results confirm that the TTHCA model exhibits robust wormhole route detection in various network scenarios, while incurring only a small network overhead. This feature makes TTHCA an attractive choice for MANET environments which generally comprise devices, such as wireless sensors, which possess a limited processing capability. PMID:22247657

  13. Design of Cyber Attack Precursor Symptom Detection Algorithm through System Base Behavior Analysis and Memory Monitoring

    NASA Astrophysics Data System (ADS)

    Jung, Sungmo; Kim, Jong Hyun; Cagalaban, Giovanni; Lim, Ji-Hoon; Kim, Seoksoo

    More recently, botnet-based cyber attacks, including spam mail and DDoS attacks, have sharply increased, which poses a fatal threat to Internet services. At present, antivirus businesses make it a top priority to detect malicious code in the shortest time possible (Lv.2), based on the graph showing the relation between the spread of malicious code and time, which means detection occurs only after malicious code appears. Despite early detection, however, it is not possible to prevent malicious code from occurring. Thus, we have developed an algorithm that can detect precursor symptoms at Lv.1 to prevent a cyber attack that uses the evasion method of an 'executing-environment-aware attack', by analyzing system behaviors and monitoring memory.

  14. Evaluating coverage changes in national parks using a hybrid change detection algorithm and remote sensing

    NASA Astrophysics Data System (ADS)

    Ghofrani, Zahra; Mokhtarzade, Mehdi; Reza Sahebi, Mahmod; Beykikhoshk, Adham

    2014-01-01

    Remote sensing is a useful tool for detecting change over time. We introduce a hybrid change-detection method for forest and protected-area vegetation and demonstrate its use with two satellite images of Golestan National Park in northern Iran (1998 and 2010). We report on the advantages and disadvantages of the hybrid method relative to the standard change-detection method. In the proposed hybrid algorithm, the change vector analysis technique was used to determine changes in vegetation. Following this, we used postclassification comparison to determine the nature of the changes observed and their accuracy and to evaluate the effects of different parameters on the performance of the proposed method. We determined 85% accuracy for the proposed hybrid change-detection method, thus demonstrating a method for discovering and assessing environmental threats to natural treasures.
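
    The change vector analysis step of the hybrid method can be sketched with NumPy: subtract the two co-registered dates band by band and threshold the per-pixel magnitude of the spectral difference vector. The threshold is an illustrative assumption, and the postclassification comparison stage that labels the nature of each change is omitted:

```python
import numpy as np

def change_vector_magnitude(img_t1, img_t2):
    """Per-pixel magnitude of the spectral change vector between two dates.

    img_t1, img_t2: (rows, cols, bands) co-registered images.
    """
    diff = img_t2.astype(float) - img_t1.astype(float)
    return np.sqrt((diff ** 2).sum(axis=-1))

def change_mask(img_t1, img_t2, threshold):
    """Binary change map: pixels whose change vector exceeds the threshold."""
    return change_vector_magnitude(img_t1, img_t2) > threshold
```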

  15. Enhancing nuclear quadrupole resonance (NQR) signature detection leveraging interference suppression algorithms

    NASA Astrophysics Data System (ADS)

    DeBardelaben, James A.; Miller, Jeremy K.; Myrick, Wilbur L.; Miller, Joel B.; Gilbreath, G. Charmaine; Bajramaj, Blerta

    2012-06-01

    Nuclear quadrupole resonance (NQR) is a radio frequency (RF) magnetic spectroscopic technique that has been shown to detect and identify a wide range of explosive materials containing quadrupolar nuclei. The NQR response signal provides a unique signature of the material of interest. The signal is, however, very weak and can be masked by non-stationary RF interference (RFI) and thermal noise, limiting detection distance. In this paper, we investigate the bounds on the NQR detection range for ammonium nitrate. We leverage a low-cost RFI data acquisition system composed of inexpensive B-field sensing and commercial-off-the-shelf (COTS) software-defined radios (SDR). Using collected data as RFI reference signals, we apply adaptive filtering algorithms to mitigate RFI and enable NQR detection techniques to approach theoretical range bounds in tactical environments.

  16. Improvement for detection of microcalcifications through clustering algorithms and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Quintanilla-Domínguez, Joel; Ojeda-Magaña, Benjamín; Marcano-Cedeño, Alexis; Cortina-Januchs, María G.; Vega-Corona, Antonio; Andina, Diego

    2011-12-01

    A new method for detecting microcalcifications in regions of interest (ROIs) extracted from digitized mammograms is proposed. The top-hat transform is a technique based on mathematical morphology operations and, in this paper, is used to perform contrast enhancement of the microcalcifications. To improve microcalcification detection, a novel image sub-segmentation approach based on the possibilistic fuzzy c-means algorithm is used. From the original ROIs, window-based features, such as the mean and standard deviation, were extracted; these features were used as an input vector in a classifier. The classifier is based on an artificial neural network to identify patterns belonging to microcalcifications and healthy tissue. Our results show that the proposed method is a good alternative for automatically detecting microcalcifications, because this stage is an important part of early breast cancer detection.
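
    The white top-hat transform used here for contrast enhancement subtracts the morphological opening (erosion followed by dilation) from the image, keeping only bright structures smaller than the structuring element — which is what suits it to small, bright microcalcifications. A minimal NumPy sketch with a flat square structuring element (the paper's element shape and size are not stated in the abstract, so these are assumptions):

```python
import numpy as np

def _erode(img, k):
    """Grayscale erosion with a flat k x k structuring element."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.full(img.shape, np.inf)
    for dy in range(k):
        for dx in range(k):
            out = np.minimum(out, p[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

def _dilate(img, k):
    """Grayscale dilation with a flat k x k structuring element."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.full(img.shape, -np.inf)
    for dy in range(k):
        for dx in range(k):
            out = np.maximum(out, p[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

def white_top_hat(img, k=5):
    """Image minus its opening: keeps bright structures smaller than k."""
    return img - _dilate(_erode(img, k), k)
```

    An isolated bright pixel survives the top-hat untouched, while a bright plateau larger than the element is removed entirely.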

  17. An Improved Topology-Potential-Based Community Detection Algorithm for Complex Network

    PubMed Central

    Wang, Zhixiao; Zhao, Ya; Chen, Zhaotong; Niu, Qiang

    2014-01-01

    Topology potential theory is a new community detection theory on complex network, which divides a network into communities by spreading outward from each local maximum potential node. At present, almost all topology-potential-based community detection methods ignore node difference and assume that all nodes have the same mass. This hypothesis leads to inaccuracy of topology potential calculation and then decreases the precision of community detection. Inspired by the idea of PageRank algorithm, this paper puts forward a novel mass calculation method for complex network nodes. A node's mass obtained by our method can effectively reflect its importance and influence in complex network. The more important the node is, the bigger its mass is. Simulation experiment results showed that, after taking node mass into consideration, the topology potential of node is more accurate, the distribution of topology potential is more reasonable, and the results of community detection are more precise. PMID:24600319
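
    The mass-weighted topology potential can be sketched directly: each node's potential is a mass-weighted Gaussian sum over its graph distances to all other nodes. The PageRank-style mass derivation the paper proposes is not reproduced here; masses are simply supplied by the caller:

```python
import math
from collections import deque

def bfs_distances(adj, source):
    """Hop distances from source to every reachable node."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def topology_potential(adj, mass, sigma=1.0):
    """Gaussian-kernel topology potential with per-node masses."""
    phi = {}
    for v in adj:
        dist = bfs_distances(adj, v)
        phi[v] = sum(mass[u] * math.exp(-(d / sigma) ** 2) for u, d in dist.items())
    return phi
```

    On a star graph with equal masses, the hub's potential exceeds every leaf's, as expected for a local maximum potential node.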

  18. Comparison of algorithms for blood stain detection applied to forensic hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Messinger, David W.; Mathew, Jobin J.; Dube, Roger R.

    2016-05-01

    Blood stains are among the most important types of evidence for forensic investigation. They contain valuable DNA information, and the pattern of the stains can suggest specifics about the nature of the violence that transpired at the scene. Early detection of blood stains is particularly important since the blood reacts physically and chemically with air and materials over time. Accurate identification of blood remnants, including regions that might have been intentionally cleaned, is an important aspect of forensic investigation. Hyperspectral imaging might be a potential method to detect blood stains because it is non-contact and provides substantial spectral information that can be used to identify regions in a scene with trace amounts of blood. Crime scenes can be highly complex, given the range of material types and conditions on which blood stains may be found. Some stains are hard to detect by the unaided eye, especially if a conscious effort to clean the scene has occurred (we refer to these as "latent" blood stains). In this paper we present the initial results of a study of the use of hyperspectral imaging algorithms for blood detection in complex scenes. We describe a hyperspectral imaging system which generates images covering the 400-700 nm visible range with a spectral resolution of 10 nm. Three image sets of 31 wavelength bands were generated using this camera for a simulated indoor crime scene in which blood stains were placed on a T-shirt and walls. To detect blood stains in the scene, Principal Component Analysis (PCA), Subspace Reed Xiaoli Detection (SRXD), and Topological Anomaly Detection (TAD) algorithms were used. Comparison of the three hyperspectral image analysis techniques shows that TAD is most suitable for detecting blood stains and discovering latent blood stains.
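
    Of the three detectors, the Reed-Xiaoli family is the most compact to sketch: score each pixel by its Mahalanobis distance from the scene's mean spectrum, so that spectrally unusual pixels (such as trace blood on a common background) stand out. This is the plain global RX statistic; the subspace variant (SRXD) used in the paper adds a background-subspace projection not shown here:

```python
import numpy as np

def rx_scores(cube):
    """Global RX anomaly score per pixel: Mahalanobis distance from the scene mean.

    cube: (rows, cols, bands) hyperspectral image.
    """
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    diff = X - mu
    scores = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    return scores.reshape(h, w)
```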

  19. DynPeak: An Algorithm for Pulse Detection and Frequency Analysis in Hormonal Time Series

    PubMed Central

    Vidal, Alexandre; Zhang, Qinghua; Médigue, Claire; Fabre, Stéphane; Clément, Frédérique

    2012-01-01

    The endocrine control of the reproductive function is often studied from the analysis of luteinizing hormone (LH) pulsatile secretion by the pituitary gland. Whereas measurements in the cavernous sinus cumulate anatomical and technical difficulties, LH levels can be easily assessed from jugular blood. However, plasma levels result from a convolution process due to clearance effects when LH enters the general circulation. Simultaneous measurements comparing LH levels in the cavernous sinus and jugular blood have revealed clear differences in the pulse shape, the amplitude and the baseline. Besides, experimental sampling occurs at a relatively low frequency (typically every 10 min) with respect to LH highest frequency release (one pulse per hour) and the resulting LH measurements are corrupted by both experimental and assay errors. As a result, the pattern of plasma LH may not be clearly pulsatile. Yet, reliable information on the InterPulse Intervals (IPI) is a prerequisite to study precisely the steroid feedback exerted on the pituitary level. Hence, there is a real need for robust IPI detection algorithms. In this article, we present an algorithm for the monitoring of LH pulse frequency, drawing both on the available endocrinological knowledge of the LH pulse (shape and duration with respect to the frequency regime) and on synthetic LH data generated by a simple model. We make use of synthetic data to make clear some basic notions underlying our algorithmic choices. We focus on explaining how the process of sampling affects drastically the original pattern of secretion, and especially the amplitude of the detectable pulses. We then describe the algorithm in detail and apply it to different sets of both synthetic and experimental LH time series. We further comment on how to diagnose possible outliers from the series of IPIs, which is the main output of the algorithm. PMID:22802933
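
    A toy version of the pulse-detection step illustrates the algorithm's main output: peak times and the series of InterPulse Intervals (IPIs). DynPeak itself adapts its detection criteria to the frequency regime; the fixed rise threshold below is an illustrative assumption:

```python
def detect_pulses(times, values, min_rise):
    """Peaks that rise at least `min_rise` above the preceding trough; returns
    peak times and the series of InterPulse Intervals (IPIs)."""
    peaks = []
    trough = values[0]
    for i in range(1, len(values) - 1):
        trough = min(trough, values[i])
        is_local_max = values[i] >= values[i - 1] and values[i] > values[i + 1]
        if is_local_max and values[i] - trough >= min_rise:
            peaks.append(times[i])
            trough = values[i]  # the next pulse must rise from a new trough
    ipis = [b - a for a, b in zip(peaks, peaks[1:])]
    return peaks, ipis
```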

  20. Breadth-First Search-Based Single-Phase Algorithms for Bridge Detection in Wireless Sensor Networks

    PubMed Central

    Akram, Vahid Khalilpour; Dagdeviren, Orhan

    2013-01-01

    Wireless sensor networks (WSNs) are promising technologies for exploring harsh environments, such as oceans, wild forests, volcanic regions and outer space. Since sensor nodes may have limited transmission range, application packets may be transmitted by multi-hop communication. Thus, connectivity is a very important issue. A bridge is a critical edge whose removal breaks the connectivity of the network. Hence, it is crucial to detect bridges and take preventions. Since sensor nodes are battery-powered, services running on nodes should consume low energy. In this paper, we propose energy-efficient and distributed bridge detection algorithms for WSNs. Our algorithms run in a single phase and they are integrated with the Breadth-First Search (BFS) algorithm, which is a popular routing algorithm. Our first algorithm is an extended version of Milic's algorithm, which is designed to reduce the message length. Our second algorithm is novel and uses ancestral knowledge to detect bridges. We explain the operation of the algorithms, prove their correctness, and analyze their message, time, space and computational complexities. To evaluate practical importance, we provide testbed experiments and extensive simulations. We show that our proposed algorithms provide less resource consumption, and the energy savings of our algorithms are up to 5.5-fold. PMID:23845930
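
    For reference, the set of bridges the distributed algorithms must find can be computed centrally with the classic low-link method: an edge (u, v) is a bridge exactly when no back edge from v's subtree reaches u or above. This sketch is sequential and DFS-based, unlike the single-phase, BFS-integrated distributed algorithms of the paper, but it detects the same edges:

```python
def find_bridges(adj):
    """Bridges of an undirected graph given as {node: [neighbors]}."""
    disc, low = {}, {}
    bridges = []
    timer = [0]

    def dfs(root):
        disc[root] = low[root] = timer[0]
        timer[0] += 1
        stack = [(root, None, iter(adj[root]))]  # iterative DFS, no recursion limit
        while stack:
            u, parent, neighbors = stack[-1]
            advanced = False
            for w in neighbors:
                if w == parent:
                    continue
                if w in disc:
                    low[u] = min(low[u], disc[w])  # back edge
                else:
                    disc[w] = low[w] = timer[0]
                    timer[0] += 1
                    stack.append((w, u, iter(adj[w])))
                    advanced = True
                    break
            if not advanced:
                stack.pop()
                if stack:
                    p = stack[-1][0]
                    low[p] = min(low[p], low[u])
                    if low[u] > disc[p]:  # nothing in u's subtree reaches above p
                        bridges.append((p, u))

    for v in adj:
        if v not in disc:
            dfs(v)
    return bridges
```

    In a triangle with one pendant node, only the pendant edge is a bridge.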

  1. Algorithm for detecting defects in wooden logs using ground penetrating radar

    NASA Astrophysics Data System (ADS)

    Devaru, Dayakar; Halabe, Udaya B.; Gopalakrishnan, B.; Agrawal, Sachin; Grushecky, Shawn

    2005-11-01

    Presently there are no suitable non-invasive methods for precisely detecting the subsurface defects in logs in real time. Internal defects such as knots, decays, and embedded metals are of greatest concern for lumber production. While defects such as knots and decays (rots) are of major concern related to productivity and yield of high value wood products, embedded metals can damage the saw blade and significantly increase the down time and maintenance costs of saw mills. Currently, a large number of logs end up being discarded by saw mills, or result in low value wood products since they include defects. Nondestructive scanning of logs using techniques such as Ground Penetrating Radar (GPR) prior to sawing can greatly increase the productivity and yield of high value lumber. In this research, the GPR scan data have been analyzed to differentiate the defective part of the wooden log from the good part. The location and size of each defect have been determined from the GPR scan data using a MATLAB algorithm. The output of this algorithm can be used as an input for generating operating instructions for a CNC sawing machine. This paper explains the advantages of the GPR technique, the experimental setup and parameters used, data processing using RADAN software for detection of subsurface defects in logs, GPR data processing and analysis using a MATLAB algorithm for automated defect detection, and a comparison of results between the two processing methods. The results show that GPR in conjunction with the proposed algorithm provides a very promising technique for future on-line implementation in saw mills.

  2. Flight test results of a vector-based failure detection and isolation algorithm for a redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Morrell, F. R.; Bailey, M. L.; Motyka, P. R.

    1988-01-01

    Flight test results of a vector-based fault-tolerant algorithm for a redundant strapdown inertial measurement unit are presented. Because the inertial sensors provide flight-critical information for flight control and navigation, failure detection and isolation is developed in terms of a multi-level structure. Threshold compensation techniques for gyros and accelerometers, developed to enhance the sensitivity of the failure detection process to low-level failures, are presented. Four flight tests, conducted in a commercial transport type environment, were used to determine the ability of the failure detection and isolation algorithm to detect failure signals such as hard-over, null, or bias-shift failures. The algorithm provided timely detection and correct isolation of flight-control and low-level failures. The flight tests of the vector-based algorithm demonstrated its capability to provide false-alarm-free dual fail-operational performance for the skewed array of inertial sensors.

  3. A novel seizure detection algorithm informed by hidden Markov model event states

    NASA Astrophysics Data System (ADS)

    Baldassano, Steven; Wulsin, Drausin; Ung, Hoameng; Blevins, Tyler; Brown, Mesha-Gay; Fox, Emily; Litt, Brian

    2016-06-01

    Objective. Recently the FDA approved the first responsive, closed-loop intracranial device to treat epilepsy. Because these devices must respond within seconds of seizure onset and not miss events, they are tuned to have high sensitivity, leading to frequent false positive stimulations and decreased battery life. In this work, we propose a more robust seizure detection model. Approach. We use a Bayesian nonparametric Markov switching process to parse intracranial EEG (iEEG) data into distinct dynamic event states. Each event state is then modeled as a multidimensional Gaussian distribution to allow for predictive state assignment. By detecting event states highly specific for seizure onset zones, the method can identify precise regions of iEEG data associated with the transition to seizure activity, reducing false positive detections associated with interictal bursts. The seizure detection algorithm was translated to a real-time application and validated in a small pilot study using 391 days of continuous iEEG data from two dogs with naturally occurring, multifocal epilepsy. A feature-based seizure detector modeled after the NeuroPace RNS System was developed as a control. Main results. Our novel seizure detection method demonstrated an improvement in false negative rate (0/55 seizures missed versus 2/55 seizures missed) as well as a significantly reduced false positive rate (0.0012 h-1 versus 0.058 h-1). All seizures were detected an average of 12.1 ± 6.9 s before the onset of unequivocal epileptic activity (unequivocal epileptic onset (UEO)). Significance. This algorithm represents a computationally inexpensive, individualized, real-time detection method suitable for implantable antiepileptic devices that may considerably reduce false positive rate relative to current industry standards.

  5. Infrared dim small target detection algorithm based on NSCT and SVD

    NASA Astrophysics Data System (ADS)

    Zhao, Ying; Liu, Gang; Zhou, Huixin; Qin, Hanlin; Li, Xiao; Wen, Zhigang; Ni, Man; Wang, Bingjian

    2015-10-01

    A novel dim small target detection algorithm based on the nonsubsampled contourlet transform (NSCT) and the singular value decomposition (SVD) is proposed in this paper to improve detection performance under complex sky-cloud backgrounds. First, the original infrared image is decomposed with the SVD, and reconstructions from different numbers of singular values are examined to analyze how the SVD applies to the image. The complex cloud background in the infrared target image is predicted by reconstructing the image from a chosen number of leading singular values, and this prediction is subtracted from the original image to suppress the background and enhance the target signal. Second, to exploit the scale and direction information of the image, the residual image is decomposed by the NSCT into several high-pass directional subbands and a low-pass subband. Third, SVD filtering is applied again to the directional subbands to eliminate noise and residual background, and the low-pass subband is modified by a local mean removal method. Finally, the refined subbands are reconstructed by the inverse NSCT to complete the dim small target detection. The experimental results demonstrate that the proposed algorithm achieves better subjective visual quality and objective numerical indicators, and yields improved target detection performance.
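    The background-prediction step can be sketched as reconstructing the image from its leading singular values and subtracting that low-rank estimate, leaving the small target in the residual. The rank and the synthetic scene below are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np

    def svd_background_suppress(img, rank=2):
        """Predict a smooth background from the leading singular values and subtract it.

        The leading singular components capture large-scale (cloud-like) structure;
        subtracting the rank-k reconstruction leaves small targets plus noise.
        """
        U, s, Vt = np.linalg.svd(img, full_matrices=False)
        background = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
        return img - background

    rng = np.random.default_rng(1)
    # Synthetic scene: a smooth cloud-like field plus one dim point target
    y, x = np.mgrid[0:64, 0:64]
    cloud = 0.5 * np.sin(x / 20.0) + 0.5 * np.cos(y / 15.0)
    scene = cloud + 0.02 * rng.standard_normal((64, 64))
    scene[32, 32] += 1.0                       # dim small target

    residual = svd_background_suppress(scene, rank=2)
    i, j = np.unravel_index(np.argmax(residual), residual.shape)
    print(i, j)  # → 32 32 (the target dominates the residual)
    ```
    
    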

  6. A new parallel algorithm for contact detection in finite element methods

    SciTech Connect

    Hendrickson, B.; Plimpton, S.; Attaway, S.; Vaughan, C.; Gardner, D.

    1996-03-01

    In finite-element, transient dynamics simulations, physical objects are typically modeled as Lagrangian meshes because the meshes can move and deform with the objects as they undergo stress. In many simulations, such as computations of impacts or explosions, portions of the deforming mesh come in contact with each other as the simulation progresses. These contacts must be detected and the forces they impart to the mesh must be computed at each timestep to accurately capture the physics of interest. While the finite-element portion of these computations is readily parallelized, the contact detection problem is difficult to implement efficiently on parallel computers and has been a bottleneck to achieving high performance on large parallel machines. In this paper we describe a new parallel algorithm for detecting contacts. Our approach differs from previous work in that we use two different parallel decompositions: a static one for the finite element analysis and a dynamic one for contact detection. We present results for this algorithm in a parallel version of the transient dynamics code PRONTO-3D running on a large Intel Paragon.

  7. An infrared thermal image processing framework based on superpixel algorithm to detect cracks on metal surface

    NASA Astrophysics Data System (ADS)

    Xu, Changhang; Xie, Jing; Chen, Guoming; Huang, Weiping

    2014-11-01

    Infrared thermography has been used increasingly as an effective non-destructive technique to detect cracks on metal surfaces. Due to many factors, infrared thermal images have low definition compared with visible images, and the contrast between cracks and sound areas varies greatly across the frames of a thermal video with recording time. An accurate detection can therefore only be obtained by reviewing the whole thermal video, which is laborious; moreover, the operator's experience has a great influence on the accuracy of the result. In this paper, an infrared thermal image processing framework based on a superpixel algorithm is proposed to perform crack detection automatically. Two popular superpixel algorithms are compared and one of them is selected to generate superpixels in this application. Combined features of each superpixel are extracted from both the raw gray-level image and a high-pass-filtered image. Fuzzy c-means clustering is then used to cluster the superpixels in order to segment the infrared thermal image. Experimental results show that the proposed framework can recognize cracks on metal surfaces in infrared thermal images automatically.
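    The fuzzy c-means step can be sketched as alternating membership and centroid updates on superpixel feature vectors. The two-feature toy data below (mean intensity and high-pass energy per superpixel) is an illustrative stand-in for the paper's combined features.

    ```python
    import numpy as np

    def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
        """Minimal fuzzy c-means: soft memberships U (n x c) and centroids V (c x d)."""
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)           # rows are soft assignments
        for _ in range(iters):
            W = U ** m
            V = (W.T @ X) / W.sum(axis=0)[:, None]  # fuzzily weighted centroids
            dist = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
            U = 1.0 / (dist ** (2.0 / (m - 1.0)))
            U /= U.sum(axis=1, keepdims=True)       # standard FCM membership update
        return U, V

    rng = np.random.default_rng(2)
    # Toy superpixel features: sound areas vs crack-like regions
    sound = rng.normal([0.2, 0.1], 0.05, size=(40, 2))
    crack = rng.normal([0.8, 0.7], 0.05, size=(40, 2))
    X = np.vstack([sound, crack])

    U, V = fuzzy_c_means(X, c=2)
    labels = U.argmax(axis=1)
    print(labels[0] != labels[40])  # the two groups fall into different clusters
    ```
    
    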

  8. MeV Gamma Ray Detection Algorithms for Stacked Silicon Detectors

    NASA Technical Reports Server (NTRS)

    McMurray, Robert E. Jr.; Hubbard, G. Scott; Wercinski, Paul F.; Keller, Robert G.

    1993-01-01

    By making use of the signature of a gamma ray event as it appears in N = 5 to 20 lithium-drifted silicon detectors and applying smart selection algorithms, gamma rays in the energy range of 1 to 8 MeV can be detected with good efficiency and selectivity. Examples of the types of algorithms used for different energy regions include the simple sum mode, the sum-coincidence mode used in segmented detectors, unique variations on sum-coincidence for an N-dimensional vector event, and a new and extremely useful mode for double escape peak spectroscopy at pair-production energies. The latter algorithm yields a spectrum similar to that of the pair spectrometer, but without the need of the dual external segments for double escape coincidence, and without the large loss in efficiency of double escape events. Background events due to Compton scattering are largely suppressed. Monte Carlo calculations were used to model the gamma ray interactions in the silicon, in order to enable testing of a wide array of different algorithms on the event N-vectors for a large-N stack.
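    The simplest of these selection modes can be sketched directly: an event is an N-vector of per-detector energy deposits, the sum mode accepts events whose total lands in the full-energy window, and a sum-coincidence mode additionally requires the deposit to be split across at least two detectors. The gamma energy, window width, and example events below are illustrative assumptions.

    ```python
    import numpy as np

    E_GAMMA = 4.0   # MeV, full-energy peak of interest (illustrative)
    WINDOW = 0.05   # MeV, acceptance half-width (illustrative)

    def sum_mode(event):
        """Accept if the summed deposit over the stack lies in the full-energy window."""
        return abs(np.sum(event) - E_GAMMA) < WINDOW

    def sum_coincidence(event, min_hits=2):
        """Sum mode plus a multiplicity requirement across detectors."""
        return sum_mode(event) and np.count_nonzero(event) >= min_hits

    full_in_one = np.array([4.0, 0, 0, 0, 0])     # full deposit in a single detector
    split_event = np.array([1.2, 2.8, 0, 0, 0])   # Compton scatter, then absorption
    compton_escape = np.array([2.5, 0, 0, 0, 0])  # partial deposit: rejected

    print(sum_mode(full_in_one), sum_coincidence(full_in_one))     # → True False
    print(sum_coincidence(split_event), sum_mode(compton_escape))  # → True False
    ```
    
    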

  9. Adaptive Fault Detection on Liquid Propulsion Systems with Virtual Sensors: Algorithms and Architectures

    NASA Technical Reports Server (NTRS)

    Matthews, Bryan L.; Srivastava, Ashok N.

    2010-01-01

    Prior to the launch of STS-119 NASA had completed a study of an issue in the flow control valve (FCV) in the Main Propulsion System of the Space Shuttle using an adaptive learning method known as Virtual Sensors. Virtual Sensors are a class of algorithms that estimate the value of a time series given other potentially nonlinearly correlated sensor readings. In the case presented here, the Virtual Sensors algorithm is based on an ensemble learning approach and takes sensor readings and control signals as input to estimate the pressure in a subsystem of the Main Propulsion System. Our results indicate that this method can detect faults in the FCV at the time when they occur. We use the standard deviation of the predictions of the ensemble as a measure of uncertainty in the estimate. This uncertainty estimate was crucial to understanding the nature and magnitude of transient characteristics during startup of the engine. This paper overviews the Virtual Sensors algorithm and discusses results on a comprehensive set of Shuttle missions and also discusses the architecture necessary for deploying such algorithms in a real-time, closed-loop system or a human-in-the-loop monitoring system. These results were presented at a Flight Readiness Review of the Space Shuttle in early 2009.
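    A virtual sensor of this kind can be sketched as an ensemble of regressors trained on bootstrap resamples, whose mean prediction estimates the hidden quantity and whose spread supplies the uncertainty estimate. The linear base models, synthetic data, and fault rule below are illustrative assumptions, not the NASA implementation.

    ```python
    import numpy as np

    def fit_linear(X, y):
        """Least-squares linear model with intercept."""
        A = np.hstack([X, np.ones((len(X), 1))])
        w, *_ = np.linalg.lstsq(A, y, rcond=None)
        return w

    def predict(w, X):
        return np.hstack([X, np.ones((len(X), 1))]) @ w

    def train_ensemble(X, y, n_models=20, seed=0):
        rng = np.random.default_rng(seed)
        models = []
        for _ in range(n_models):
            idx = rng.integers(0, len(X), size=len(X))  # bootstrap resample
            models.append(fit_linear(X[idx], y[idx]))
        return models

    def virtual_sensor(models, X):
        preds = np.stack([predict(w, X) for w in models])
        return preds.mean(axis=0), preds.std(axis=0)    # estimate and uncertainty

    rng = np.random.default_rng(3)
    X = rng.random((200, 2))                            # sensor readings / commands
    pressure = 3.0 * X[:, 0] - 1.5 * X[:, 1] + 0.01 * rng.standard_normal(200)
    models = train_ensemble(X, pressure)

    est, unc = virtual_sensor(models, X)
    # Flag a fault when the measured value departs from the ensemble estimate
    faulty = np.abs(pressure - est) > 3 * (unc + 0.05)
    print(faulty.sum())  # → 0 on nominal data
    ```
    
    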

  10. Development of AN Algorithmic Procedure for the Detection of Conjugate Fragments

    NASA Astrophysics Data System (ADS)

    Filippas, D.; Georgopoulo, A.

    2013-07-01

    The rapid development of Computer Vision has contributed to the widening of the techniques and methods utilized by archaeologists for the digitization and reconstruction of historic objects by automating the matching of fragments, small or large. This paper proposes a novel method for the detection of conjugate fragments, based mainly on their geometry. Subsequently the application of the Fragmatch algorithm is presented, with an extensive analysis of both of its parts; the global and the partial matching of surfaces. The method proposed is based on the comparison of vectors and surfaces, performed linearly, for simplicity and speed. A series of simulations have been performed in order to test the limits of the algorithm for the noise and the accuracy of scanning, for the number of scan points, as well as for the wear of the surfaces and the diversity of shapes. Problems that have been encountered during the application of these examples are interpreted and ways of dealing with them are being proposed. In addition a practical application is presented to test the algorithm in real conditions. Finally, the key points of this work are being mentioned, followed by an analysis of the advantages and disadvantages of the proposed Fragmatch algorithm along with proposals for future work.

  11. Loop closure detection by algorithmic information theory: implemented on range and camera image data.

    PubMed

    Ravari, Alireza Norouzzadeh; Taghirad, Hamid D

    2014-10-01

    In this paper the problem of loop closing from depth or camera image information in an unknown environment is investigated. A sparse model is constructed from a parametric dictionary for every range or camera image treated as a mobile robot observation. In contrast to high-dimensional feature-based representations, this model reduces the dimension of the sensor measurement representations. Although loop closure detection amounts to a clustering problem in a high-dimensional space, little attention has been paid to the curse of dimensionality in the existing state-of-the-art algorithms. In this paper, a representation is developed from a sparse model of images, with a lower dimension than the original sensor observations. Exploiting algorithmic information theory, the representation is developed such that it is invariant to geometric transformations in the sense of Kolmogorov complexity. A universal normalized metric is used to compare complexity-based representations of image models. Finally, a distinctive property of the normalized compression distance is exploited to detect similar places and reject incorrect loop closure candidates. Experimental results show the efficiency and accuracy of the proposed method in comparison with state-of-the-art algorithms and some recently proposed methods.
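    The normalized compression distance used for candidate rejection can be sketched with an off-the-shelf compressor standing in for Kolmogorov complexity: NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)). The byte strings and thresholds below are toy stand-ins for the paper's sparse image models.

    ```python
    import zlib

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized compression distance: near 0 for similar data, near 1 for unrelated."""
        cx = len(zlib.compress(x, 9))
        cy = len(zlib.compress(y, 9))
        cxy = len(zlib.compress(x + y, 9))
        return (cxy - min(cx, cy)) / max(cx, cy)

    place_a = b"corridor-wall-door-wall-corridor" * 30
    place_a_again = b"corridor-wall-door-wall-corridor" * 30  # revisiting the same place
    place_b = bytes(range(256)) * 4                           # unrelated observation

    # A loop closure candidate is accepted when the distance falls below a threshold
    print(ncd(place_a, place_a_again) < 0.3)   # similar places: small distance
    print(ncd(place_a, place_b) > 0.5)         # different places: large distance
    ```
    
    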

  12. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots.

    PubMed

    Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il Dan

    2016-03-01

    This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%.
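    Under the flat-floor assumption that IPM relies on, the distance to the point where an obstacle meets the floor can be recovered from its image row alone via pinhole geometry. The camera parameters below (focal length, principal point, mounting height) are illustrative assumptions, not the paper's calibration.

    ```python
    def floor_distance(v, f=500.0, cy=240.0, h=0.3):
        """Distance along the floor to a pixel on image row v (flat floor, level camera).

        Pinhole geometry: a floor point at depth Z, a height h below the camera,
        projects to image row v with (v - cy) = f * h / Z, so Z = f * h / (v - cy).
        """
        if v <= cy:
            raise ValueError("row is at or above the horizon; not a floor point")
        return f * h / (v - cy)

    # Bottom edge of a segmented obstacle at image row 290 (50 px below the horizon):
    print(floor_distance(290.0))  # → 3.0 metres with f=500 px, h=0.3 m
    ```

    Because the camera is low (small h), a given depth change moves the floor contact point by only a few pixel rows, which is why accurate segmentation of the obstacle-floor boundary matters more here than point tracking.
    
    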

  13. A hybrid real genetic algorithm to detect structural damage using modal properties

    NASA Astrophysics Data System (ADS)

    Meruane, V.; Heylen, W.

    2011-07-01

    A hybrid real-coded genetic algorithm with damage penalization is implemented to locate and quantify structural damage. Genetic algorithms provide a powerful tool for solving optimization problems: with an appropriate selection of their operators and parameters they can potentially explore the entire solution space and reach the global optimum. Here, the set-up of the genetic algorithm operators and parameters is addressed, providing guidelines for their selection in similar damage detection problems. The performance of five objective functions based on modal data is studied. In addition, this paper proposes the use of a damage penalization that satisfactorily avoids false damage detection due to experimental noise or numerical errors. A three-dimensional space-frame structure with single and multiple damage scenarios provides an experimental verification of the approach. The method is tested with different levels of incompleteness in the measured degrees of freedom. The results show that this approach reaches a much more precise solution than conventional optimization methods. A scenario with three simultaneous damage locations was correctly located and quantified while measuring only 6.3% of the total degrees of freedom.
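    A real-coded genetic algorithm with a damage-penalization term can be sketched as minimizing the misfit between "measured" and model modal data plus an l1 penalty that discourages spurious damage. The two-element stiffness model, operators, and penalty weight below are illustrative toys, not the paper's finite element model or tuned settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    k = np.array([4.0, 9.0])                   # element stiffnesses (toy model)
    true_damage = np.array([0.3, 0.0])         # damage ratio per element
    f_meas = np.sqrt(k * (1 - true_damage))    # "measured" modal frequencies

    def fitness(d, lam=0.01):
        """Modal misfit plus an l1 damage penalty against false detections."""
        f_model = np.sqrt(k * (1 - d))
        return np.sum((f_meas - f_model) ** 2) + lam * np.sum(d)

    def ga(pop_size=60, gens=200):
        pop = rng.random((pop_size, 2))        # candidate damage ratios in [0, 1)
        for _ in range(gens):
            fi = np.array([fitness(p) for p in pop])
            # tournament selection
            i, j = rng.integers(0, pop_size, (2, pop_size))
            parents = np.where((fi[i] < fi[j])[:, None], pop[i], pop[j])
            # blend crossover and Gaussian mutation
            a = rng.random((pop_size, 1))
            children = a * parents + (1 - a) * parents[::-1]
            children += rng.normal(0.0, 0.02, children.shape)
            children = np.clip(children, 0.0, 0.99)
            children[0] = pop[np.argmin(fi)]   # elitism: keep the current best
            pop = children
        fi = np.array([fitness(p) for p in pop])
        return pop[np.argmin(fi)]

    d_best = ga()
    print(np.round(d_best, 2))  # damage located and quantified, near [0.3, 0.0]
    ```

    The penalty term pulls the undamaged element toward exactly zero, which is what suppresses false detections caused by noise in the measured modal data.
    
    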

  14. Evaluation of a Pair-Wise Conflict Detection and Resolution Algorithm in a Multiple Aircraft Scenario

    NASA Technical Reports Server (NTRS)

    Carreno, Victor A.

    2002-01-01

    The KB3D algorithm is a pairwise conflict detection and resolution (CD&R) algorithm. It detects and generates trajectory vectoring for an aircraft that has been predicted to violate airspace separation minima within a given look-ahead time. It has been proven, using mechanized theorem proving techniques, that for a pair of aircraft KB3D produces at least one vectoring solution and that all solutions produced are correct. Although the solutions produced by the algorithm are mathematically correct, they might not be physically executable by an aircraft or might not solve multiple aircraft conflicts. This paper describes a simple solution selection method which assesses all solutions generated by KB3D and determines the solution to be executed. The solution selection method and KB3D are evaluated using a simulation in which N aircraft fly in a free-flight environment and each aircraft in the simulation uses KB3D to maintain separation. Specifically, the solution selection method filters out KB3D solutions which are procedurally undesirable or physically not executable and uses a predetermined criterion to select among the remainder.
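    The detection half of pairwise CD&R can be sketched via the closest point of approach: with constant velocities, relative position p and relative velocity v give the CPA time t* = -p·v/|v|², and a conflict is predicted when the separation at t* falls below the minimum within the look-ahead window. This is generic CD geometry under assumed units and thresholds, not the KB3D algorithm itself.

    ```python
    import numpy as np

    def detect_conflict(p1, v1, p2, v2, sep_min=5.0, lookahead=300.0):
        """Predict loss of separation between two aircraft under constant velocity.

        Returns (conflict, t_cpa, d_cpa): the time of closest point of approach,
        clipped to [0, lookahead], and the separation at that time.
        """
        p = np.asarray(p2, float) - np.asarray(p1, float)  # relative position
        v = np.asarray(v2, float) - np.asarray(v1, float)  # relative velocity
        vv = v @ v
        t_cpa = 0.0 if vv == 0 else np.clip(-(p @ v) / vv, 0.0, lookahead)
        d_cpa = np.linalg.norm(p + t_cpa * v)
        return d_cpa < sep_min, t_cpa, d_cpa

    # Head-on encounter, 20 units apart, closing at 0.2 units/s (conflict expected):
    conflict, t_star, d_min = detect_conflict([0, 0], [0.1, 0], [20, 0], [-0.1, 0])
    print(conflict, t_star, d_min)  # → True 100.0 0.0

    # Parallel same-direction traffic, laterally offset by 10 units (no conflict):
    print(*detect_conflict([0, 0], [0.1, 0], [0, 10], [0.1, 0]))  # → False 0.0 10.0
    ```
    
    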

  15. Loop closure detection by algorithmic information theory: implemented on range and camera image data.

    PubMed

    Ravari, Alireza Norouzzadeh; Taghirad, Hamid D

    2014-10-01

    In this paper the problem of loop closing from depth or camera image information in an unknown environment is investigated. A sparse model is constructed from a parametric dictionary for every range or camera image treated as a mobile robot observation. In contrast to high-dimensional feature-based representations, this model reduces the dimension of the sensor measurement representations. Although loop closure detection amounts to a clustering problem in a high-dimensional space, little attention has been paid to the curse of dimensionality in the existing state-of-the-art algorithms. In this paper, a representation is developed from a sparse model of images, with a lower dimension than the original sensor observations. Exploiting algorithmic information theory, the representation is developed such that it is invariant to geometric transformations in the sense of Kolmogorov complexity. A universal normalized metric is used to compare complexity-based representations of image models. Finally, a distinctive property of the normalized compression distance is exploited to detect similar places and reject incorrect loop closure candidates. Experimental results show the efficiency and accuracy of the proposed method in comparison with state-of-the-art algorithms and some recently proposed methods. PMID:24968363

  16. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots.

    PubMed

    Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il Dan

    2016-01-01

    This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%. PMID:26938540

  17. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots

    PubMed Central

    Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il “Dan”

    2016-01-01

    This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%. PMID:26938540

  18. A Universal Dynamic Threshold Cloud Detection Algorithm (UDTCDA) supported by a prior surface reflectance database

    NASA Astrophysics Data System (ADS)

    Sun, Lin; Wei, Jing; Wang, Jian; Mi, Xueting; Guo, Yamin; Lv, Yang; Yang, Yikun; Gan, Ping; Zhou, Xueying; Jia, Chen; Tian, Xinpeng

    2016-06-01

    Conventional cloud detection methods are easily affected by mixed pixels, complex surface structures, and atmospheric factors, resulting in poor cloud detection results. To minimize these problems, a new Universal Dynamic Threshold Cloud Detection Algorithm (UDTCDA) supported by an a priori surface reflectance database is proposed in this paper. A monthly surface reflectance database is constructed from the long-time-series MODerate resolution Imaging Spectroradiometer surface reflectance product (MOD09A1) to provide the surface reflectance of the underlying surfaces. The relationships between the apparent reflectance changes and the surface reflectance are simulated under different observation and atmospheric conditions with the 6S (Second Simulation of the Satellite Signal in the Solar Spectrum) model, and dynamic threshold cloud detection models are developed. Two typical remote sensing datasets with important application significance and different sensor parameters, MODIS and Landsat 8, are selected for cloud detection experiments. The results were validated against the visual interpretation of clouds and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation cloud measurements. They show that the UDTCDA achieves high precision in cloud detection, correctly identifying cloudy pixels and clear-sky pixels at rates greater than 80%, with error and missing rates of less than 20%. The UDTCDA cloud product overall shows less estimation uncertainty than the current MODIS cloud mask products. Moreover, the UDTCDA can effectively reduce the effects of atmospheric factors and mixed pixels and can be applied to different satellite sensors to realize long-term, large-scale cloud detection operations.

  19. An efficient technique for nuclei segmentation based on ellipse descriptor analysis and improved seed detection algorithm.

    PubMed

    Xu, Hongming; Lu, Cheng; Mandal, Mrinal

    2014-09-01

    In this paper, we propose an efficient method for segmenting cell nuclei in the skin histopathological images. The proposed technique consists of four modules. First, it separates the nuclei regions from the background with an adaptive threshold technique. Next, an elliptical descriptor is used to detect the isolated nuclei with elliptical shapes. This descriptor classifies the nuclei regions based on two ellipticity parameters. Nuclei clumps and nuclei with irregular shapes are then localized by an improved seed detection technique based on voting in the eroded nuclei regions. Finally, undivided nuclei regions are segmented by a marked watershed algorithm. Experimental results on 114 different image patches indicate that the proposed technique provides a superior performance in nuclei detection and segmentation.
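    The elliptical-descriptor idea can be sketched by measuring how well a binary region matches the ellipse implied by its second-order moments: a filled ellipse matches its moment ellipse almost exactly, while clumps and irregular shapes do not. The single overlap score and synthetic regions below are illustrative simplifications of the paper's two ellipticity parameters.

    ```python
    import numpy as np

    def ellipse_similarity(mask):
        """Score in [0, 1]: overlap between a region and the ellipse implied by
        its second-order moments (close to 1 for a filled elliptical nucleus)."""
        ys, xs = np.nonzero(mask)
        pts = np.stack([xs, ys], axis=1).astype(float)
        mu = pts.mean(axis=0)
        inv = np.linalg.inv(np.cov(pts, rowvar=False))
        # Mahalanobis distance squared <= 4 traces the moment-equivalent ellipse
        yy, xx = np.indices(mask.shape)
        d = np.stack([xx - mu[0], yy - mu[1]], axis=-1)
        m = np.einsum('...i,ij,...j', d, inv, d)
        ell = m <= 4.0
        inter = np.logical_and(mask, ell).sum()
        union = np.logical_or(mask, ell).sum()
        return inter / union

    # A filled ellipse scores high; a thin L-shaped region scores low.
    yy, xx = np.indices((80, 80))
    ellipse = ((xx - 40) / 25.0) ** 2 + ((yy - 40) / 12.0) ** 2 <= 1.0
    l_shape = np.zeros((80, 80), bool)
    l_shape[10:70, 10:16] = True
    l_shape[64:70, 10:70] = True

    print(round(ellipse_similarity(ellipse), 2), round(ellipse_similarity(l_shape), 2))
    ```

    Regions scoring high would be accepted as isolated nuclei, while low-scoring regions would be passed on to the seed-detection and watershed stages.
    
    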

  20. High effective algorithm of the detection and identification of substance using the noisy reflected THz pulse

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Varentsova, Svetlana A.; Trofimov, Vladislav V.; Tikhomirov, Vasily V.

    2015-08-01

    Principal limitations of the standard THz-TDS method for detection and identification are demonstrated under real conditions (at a long distance of about 3.5 m and at a relative humidity above 50%) using neutral substances: a thick paper bag, paper napkins, and chocolate. We also show that the THz-TDS method detects spectral features of dangerous substances even when the THz signals are measured in laboratory conditions (at a distance of 30-40 cm from the receiver and at a relative humidity below 2%); silicon-based semiconductors were used as the samples. However, the integral correlation criteria, based on the SDA method, allow us to confirm the absence of dangerous substances in the neutral substances. The discussed algorithm shows a high probability of substance identification and is practical to implement, especially for security applications and non-destructive testing.

  1. Detecting Blending End-Point Using Mean Squares Successive Difference Test and Near-Infrared Spectroscopy.

    PubMed

    Khorasani, Milad; Amigo, José M; Bertelsen, Poul; Van Den Berg, Frans; Rantanen, Jukka

    2015-08-01

    An algorithm based on the mean squares successive difference test, applied to near-infrared spectra and principal component analysis scores, was developed to monitor the blending profile and to determine the end-point within the statistically stable phase. Model formulations consisting of an active compound (acetylsalicylic acid) together with microcrystalline cellulose and two grades of calcium carbonate with dramatically different particle shapes were prepared. The formulation comprising angular-shaped calcium carbonate reached the blending end-point more slowly than the formulation comprising equant-shaped calcium carbonate. Utilizing the ring shear test, this distinction in end-point could be related to the difference in flowability of the formulations. On the basis of the two model formulations, a design of experiments was conducted to characterize the blending process by studying the effect of CaCO3 grade and bin fill level on the blending end-point. Calcium carbonate grade, fill level, and their interaction were shown to have a significant impact on the blending process. PMID:26094601
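    The end-point criterion can be sketched with the von Neumann form of the mean squares successive difference test: for a window of PCA scores, the ratio of the MSSD to the variance is close to 2 for a random (statistically stable) signal and well below 2 while the scores still trend. The window size, threshold, and simulated score trajectory below are illustrative assumptions.

    ```python
    import numpy as np

    def mssd_ratio(x):
        """von Neumann ratio: mean squared successive difference over variance.

        Approximately 2 for an uncorrelated (stable) signal; much less than 2
        while a systematic trend is still present.
        """
        x = np.asarray(x, float)
        mssd = np.mean(np.diff(x) ** 2)
        return mssd / np.var(x, ddof=1)

    def blend_endpoint(scores, window=30, threshold=1.5):
        """First index at which the trailing window of scores looks stable."""
        for t in range(window, len(scores) + 1):
            if mssd_ratio(scores[t - window:t]) > threshold:
                return t
        return None

    rng = np.random.default_rng(5)
    # Simulated PC1 score trajectory: exponential approach to homogeneity + noise
    t = np.arange(200)
    scores = np.exp(-t / 40.0) + 0.01 * rng.standard_normal(200)

    print(blend_endpoint(scores))  # end-point declared once the trend has died out
    ```
    
    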

  2. An ensemble of k-nearest neighbours algorithm for detection of Parkinson's disease

    NASA Astrophysics Data System (ADS)

    Gök, Murat

    2015-04-01

    Parkinson's disease is a disease of the central nervous system that leads to severe difficulties in motor functions. Developing computational tools for recognition of Parkinson's disease at the early stages is very desirable for alleviating the symptoms. In this paper, we developed a discriminative model based on a selected feature subset and applied several classifier algorithms in the context of disease detection. All classifier performances, both stand-alone and within a rotation-forest ensemble approach, were evaluated on a Parkinson's disease data-set according to a blind testing protocol. The new method outperforms previously reported state-of-the-art results in terms of both prediction accuracy (98.46%) and area under the receiver operating characteristic curve (0.99), using a rotation-forest ensemble k-nearest neighbour classifier.

  3. Using the sequential regression (SER) algorithm for long-term signal processing. [Intrusion detection

    SciTech Connect

    Soldan, D. L.; Ahmed, N.; Stearns, S. D.

    1980-01-01

    The use of the sequential regression (SER) algorithm (Electron. Lett., 14, 118(1978); 13, 446(1977)) for long-term processing applications is limited by two problems that can occur when an SER predictor has more weights than required to predict the input signal. First, computational difficulties related to updating the inverse of the autocorrelation matrix can arise, since no unique least-squares solution exists. Second, the predictor strives to remove very low-level components in the input, and hence can implement a gain function that is essentially zero over the entire passband; the predictor then tends to become a no-pass filter, which is undesirable in certain applications, e.g., intrusion detection (SAND--78-1032). Modifications to the SER algorithm that overcome these problems are presented, enabling its use for long-term signal processing applications. 3 figures.

  4. Change Detection from differential airborne LiDAR using a weighted Anisotropic Iterative Closest Point Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Kusari, A.; Glennie, C. L.; Oskin, M. E.; Hinojosa-Corona, A.; Borsa, A. A.; Arrowsmith, R.

    2013-12-01

    Differential LiDAR (Light Detection and Ranging) from repeated surveys has recently emerged as an effective tool to measure three-dimensional (3D) change for applications such as quantifying slip and spatially distributed warping associated with earthquake ruptures, and examining the spatial distribution of beach erosion after hurricane impact. Currently, the primary method for determining 3D change is the iterative closest point (ICP) algorithm and its variants. However, all current studies using ICP have assumed that all LiDAR points in the compared point clouds have uniform accuracy. This assumption is simplistic given that the error for each LiDAR point is variable, and dependent upon highly variable factors such as target range, angle of incidence, and aircraft trajectory accuracy. Therefore, to rigorously determine spatial change, it would be ideal to model the random error for every LiDAR observation in the differential point cloud, and use these error estimates as a priori weights in the ICP algorithm. To test this approach, we implemented a rigorous LiDAR observation error propagation method to generate an estimated random error for each point in a LiDAR point cloud, and then determined 3D displacements between two point clouds using an anisotropically weighted ICP algorithm. The algorithm was evaluated by qualitatively and quantitatively comparing post-earthquake slip estimates from the 2010 El Mayor-Cucapah earthquake between a uniformly weighted and an anisotropically weighted ICP algorithm, using pre-event LiDAR collected in 2006 by Instituto Nacional de Estadística y Geografía (INEGI), and post-event LiDAR collected by The National Center for Airborne Laser Mapping (NCALM).
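    The a priori weights enter ICP through its alignment step: each correspondence contributes to the cross-covariance in proportion to its estimated precision. A minimal isotropically weighted version of that step (per-point scalar weights rather than the full anisotropic covariances) can be sketched as a weighted Kabsch solve; the point cloud and weight model below are synthetic.

    ```python
    import numpy as np

    def weighted_rigid_align(src, dst, w):
        """Weighted least-squares rotation R and translation t mapping src onto dst.

        w holds per-point weights, e.g. inverse error variances propagated from
        target range, incidence angle, and trajectory accuracy.
        """
        w = w / w.sum()
        mu_s = w @ src                    # weighted centroids
        mu_d = w @ dst
        H = (src - mu_s).T @ np.diag(w) @ (dst - mu_d)  # weighted cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                # proper rotation (det = +1)
        t = mu_d - R @ mu_s
        return R, t

    rng = np.random.default_rng(6)
    src = rng.random((100, 3))
    t_true = np.array([0.5, -0.2, 0.1])   # pure displacement between survey epochs
    dst = src + t_true
    w = 1.0 / rng.uniform(0.01, 0.05, 100) ** 2  # heteroscedastic point weights

    R, t = weighted_rigid_align(src, dst, w)
    print(np.allclose(R, np.eye(3), atol=1e-8), np.round(t, 3))  # → True [ 0.5 -0.2  0.1]
    ```

    In a full weighted ICP, this solve would be iterated with nearest-neighbour correspondence updates until the transform converges.
    
    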

  5. The development of line-scan image recognition algorithms for the detection of frass on mature tomatoes

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In this research, a multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet LED excitation was developed for the detection of frass contamination on mature tomatoes. The algorithm utilized the fluorescence intensities at two wavebands, 664 nm and 690 nm, for co...

  6. Analysis of BWR OPRM plant data and detection algorithms with DSSPP

    SciTech Connect

    Yang, J.; Vedovi, J.; Chung, A. K.; Zino, J. F.

    2012-07-01

    All U.S. BWRs are required to have licensed stability solutions that satisfy General Design Criteria (GDC) 10 and 12 of 10 CFR 50 Appendix A. Implemented solutions are either detect-and-suppress or preventive in nature. Detection and suppression of power oscillations is accomplished by specialized hardware and software such as the Oscillation Power Range Monitor (OPRM) utilized in the Option III and Detect and Suppress Solution - Confirmation Density (DSS-CD) stability Long-Term Solutions (LTSs). The detection algorithms are designed to recognize a Thermal-Hydraulic Instability (THI) event and initiate control rod insertion before the power oscillations grow far above the noise level and threaten fuel integrity. Option III is the most widely used long-term stability solution in the US and has more than 200 reactor-years of operational history. DSS-CD represents an evolutionary step from the stability LTS Option III, and its licensed domain envelopes the Maximum Extended Load Line Limit Analysis Plus (MELLLA+) domain. In order to enhance the capability to investigate the sensitivity of key parameters of stability detection algorithms, GEH has developed a new engineering analysis code, namely DSSPP (Detect and Suppress Solution Post Processor), which is introduced in this paper. The DSSPP analysis tool represents a major advancement in the method for diagnosing the design of stability detection algorithms: it enables designers to perform parametric studies of the key parameters relevant to THI events and to fine-tune these system parameters so that a potential spurious scram might be avoided. Demonstrations of DSSPP's application are also presented in this paper utilizing actual plant THI data. A BWR/6 plant had a plant transient that included an unplanned recirculation pump transfer from fast to slow speed, resulting in a power decrease from about 100% to ~40% of rated and a core flow decrease from about 99% to ~30% of rated.
As the feedwater temperature

  7. Mapping of Planetary Surface Age Based on Crater Statistics Obtained by AN Automatic Detection Algorithm

    NASA Astrophysics Data System (ADS)

    Salih, A. L.; Mühlbauer, M.; Grumpe, A.; Pasckert, J. H.; Wöhler, C.; Hiesinger, H.

    2016-06-01

    The analysis of the impact crater size-frequency distribution (CSFD) is a well-established approach to the determination of the age of planetary surfaces. Classically, estimation of the CSFD is achieved by manual crater counting and size determination in spacecraft images, which, however, becomes very time-consuming for large surface areas and/or high image resolution. With increasing availability of high-resolution (nearly) global image mosaics of planetary surfaces, a variety of automated methods for the detection of craters based on image data and/or topographic data have been developed. In this contribution a template-based crater detection algorithm is used which analyses image data acquired under known illumination conditions. Its results are used to establish the CSFD for the examined area, which is then used to estimate the absolute model age of the surface. The detection threshold of the automatic crater detection algorithm is calibrated based on a region with available manually determined CSFD such that the age inferred from the manual crater counts corresponds to the age inferred from the automatic crater detection results. With this detection threshold, the automatic crater detection algorithm can be applied to a much larger surface region around the calibration area. The proposed age estimation method is demonstrated for a Kaguya Terrain Camera image mosaic of 7.4 m per pixel resolution of the floor region of the lunar crater Tsiolkovsky, which consists of dark and flat mare basalt and has an area of nearly 10,000 km2. The region used for calibration, for which manual crater counts are available, has an area of 100 km2. In order to obtain a spatially resolved age map, CSFDs and surface ages are computed for overlapping quadratic regions of about 4.4 x 4.4 km2 size offset by a step width of 74 m. 
Our constructed surface age map of the floor of Tsiolkovsky shows age values of typically 3.2-3.3 Ga, while for small regions lower (down to 2.9 Ga) and higher
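The template-matching step at the heart of such a crater detection algorithm can be illustrated in a few lines. This is a hedged sketch only: the paper's actual templates model illumination-dependent crater appearance, while here the synthetic image, the bowl-shaped template, and the detection threshold are all assumptions for illustration.

```python
import numpy as np

def ncc_map(image, template):
    """Normalized cross-correlation of a template at every valid position."""
    th, tw = template.shape
    t = (template - template.mean()) / template.std()
    out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[y:y + th, x:x + tw]
            std = patch.std()
            if std > 0:
                p = (patch - patch.mean()) / std
                out[y, x] = np.mean(p * t)   # score in [-1, 1]
    return out

# A bowl-shaped (dark-centered) template and an image with one "crater".
yy, xx = np.mgrid[-3:4, -3:4]
template = -np.exp(-(xx**2 + yy**2) / 4.0)
rng = np.random.default_rng(0)
image = rng.normal(0, 0.05, (40, 40))
image[10:17, 20:27] += 2.0 * template        # implant a crater at (10, 20)
scores = ncc_map(image, template)
peak = np.unravel_index(np.argmax(scores), scores.shape)
```

Positions whose score exceeds the calibrated detection threshold would be counted as craters and binned by size to build the CSFD.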

  8. The implementation of an automated tracking algorithm for the track detection of migratory anticyclones affecting the Mediterranean

    NASA Astrophysics Data System (ADS)

    Hatzaki, Maria; Flocas, Elena A.; Simmonds, Ian; Kouroutzoglou, John; Keay, Kevin; Rudeva, Irina

    2013-04-01

    Migratory cyclones and anticyclones mainly account for the short-term weather variations in extra-tropical regions. In contrast to cyclones, which have drawn major scientific attention due to their direct link to active weather and precipitation, climatological studies on anticyclones are limited, even though they are also associated with extreme weather phenomena and play an important role in global and regional climate. This is especially true for the Mediterranean, a region particularly vulnerable to climate change, and the little research that has been done is essentially confined to the manual analysis of synoptic charts. To construct a comprehensive climatology of migratory anticyclonic systems in the Mediterranean using an objective methodology, the Melbourne University automatic tracking algorithm is applied, based on the ERA-Interim reanalysis mean sea level pressure database. The algorithm's reliability in accurately capturing the weather patterns and synoptic climatology of transient activity has been widely proven. It has been extensively applied to cyclone studies worldwide and has also been successfully applied to the Mediterranean, though its use for anticyclone tracking has been limited to the Southern Hemisphere. In this study, the performance of the tracking algorithm under different data resolutions and different choices of parameter settings in the scheme is examined. Our focus is on the appropriate modification of the algorithm in order to efficiently capture the individual characteristics of the anticyclonic tracks in the Mediterranean, a closed basin with complex topography. We show that the number of detected anticyclonic centers and the resulting tracks depend largely on the data resolution and the search radius. We also find that different-scale anticyclones and secondary centers that lie within larger anticyclone structures can be adequately represented; this is important, since the extensions of major

  9. Detection of co-colonization with Streptococcus pneumoniae by algorithmic use of conventional and molecular methods.

    PubMed

    Saha, Sudipta; Modak, Joyanta K; Naziat, Hakka; Al-Emran, Hassan M; Chowdury, Mrittika; Islam, Maksuda; Hossain, Belal; Darmstadt, Gary L; Whitney, Cynthia G; Saha, Samir K

    2015-01-29

    Detection of pneumococcal carriage by multiple co-colonizing serotypes is important in assessing the benefits of pneumococcal conjugate vaccine (PCV). Various methods differing in sensitivity, cost and technical complexity have been employed to detect multiple serotypes of pneumococcus in respiratory specimens. We have developed an algorithmic method to detect all known serotypes that preserves the relative abundance of specific serotypes by using Quellung-guided molecular techniques. The method involves culturing respiratory swabs followed by serotyping of 100 colonies by either capsular (10 colonies) or PCR (90 colonies) reactions on 96-well plates. The method was evaluated using 102 nasal swabs from children carrying pneumococcus. Multiple serotypes were detected in 22% of carriers, compared to 3% by World Health Organization (WHO)-recommended morphology-based selection of 1 to 3 colonies. Our method, with a processing cost of $87, could detect subdominant strains making up as little as 1% of the population. The method is affordable, practical, and capable of detecting all known serotypes without false positive reactions or changes in the native distribution of multiple serotypes.
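A back-of-the-envelope check of why serotyping 100 colonies detects strains at 1% abundance, assuming colonies are sampled independently from the carried population (the binomial model is our assumption; the 100-colony and 1% figures come from the abstract):

```python
# Probability that at least one sampled colony belongs to a subdominant
# serotype present at a given fraction of the carried population.
def detection_probability(fraction, n_colonies):
    return 1.0 - (1.0 - fraction) ** n_colonies

# A strain at 1% abundance is picked up roughly 63% of the time with
# 100 colonies, versus about 3% with a 3-colony morphology-based selection.
p100 = detection_probability(0.01, 100)
p3 = detection_probability(0.01, 3)
```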

  10. Algorithm for automatic detection of the cardiovascular parameter PR-interval from LDV-velocity signals

    NASA Astrophysics Data System (ADS)

    Mignanelli, Laura; Rembe, Christian

    2016-06-01

    Laser-Doppler-vibrometry (LDV) is broadly employed in mechanical engineering, but several researchers have demonstrated that the technique also has large potential in biomedical applications. In particular, the detection of several vital parameters (heart rate, heart rate variability, respiration period) is known as optical vibrocardiography (VBCG). Recent studies have demonstrated the possibility of reliably detecting the PR-interval (the time between atrial and ventricular contractions) and classifying the different types of atrioventricular (AV) blocks from these velocity signals. In this work, an algorithm is presented for localizing the vibrations generated by atrial contraction in order to detect the PR-interval in VBCG signals acquired on the thorax. The time point of a heart beat can be extracted easily because it generates an unambiguous maximal velocity peak in the time data. Extracting the contraction of the atrium is more challenging because its characteristic signature has an amplitude on the order of the signal disturbances. We compare different cost functions for determining the time point of the atrial-contraction signature, as well as different optimization algorithms, to find the correct PR-time.

  11. A novel electrocardiogram parameterization algorithm and its application in myocardial infarction detection.

    PubMed

    Liu, Bin; Liu, Jikui; Wang, Guoqing; Huang, Kun; Li, Fan; Zheng, Yang; Luo, Youxi; Zhou, Fengfeng

    2015-06-01

    The electrocardiogram (ECG) is a biophysical electric signal generated by the heart muscle, and is one of the major measurements of how well a heart functions. Automatic ECG analysis algorithms usually extract the geometric or frequency-domain features of the ECG signals and have already significantly facilitated automatic ECG-based cardiac disease diagnosis. We propose a novel ECG feature, defined as PolyECG-S, obtained by fitting a given ECG signal with a 20th-order polynomial function. The PolyECG-S feature is almost identical to the fitted ECG curve, as measured by the Akaike information criterion (AIC), and achieved a 94.4% accuracy in detecting myocardial infarction (MI) on the test dataset. Currently, ST-segment elevation is one of the major ways to detect MI (ST-elevation myocardial infarction, STEMI). However, many ECG signals have weak or even undetectable ST segments. Since PolyECG-S does not rely on information from the ST waves, it can be used as an MI detection algorithm complementary to the STEMI strategy. Overall, our results suggest that the PolyECG-S feature may satisfactorily reconstruct the fitted ECG curve, and is complementary to the existing ECG features for automatic cardiac function analysis.
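The core of the PolyECG-S idea, a least-squares polynomial fit whose coefficients summarize the beat, can be sketched with `numpy.polyfit`. The synthetic Gaussian "beat" below is an assumption standing in for a real single-beat ECG segment:

```python
import numpy as np

def polyecg_s(signal, order=20):
    """Fit a 20th-order polynomial to one beat; coefficients are the feature."""
    t = np.linspace(-1.0, 1.0, len(signal))   # normalized time axis
    coeffs = np.polyfit(t, signal, order)     # least-squares fit
    fitted = np.polyval(coeffs, t)            # reconstructed curve
    return coeffs, fitted

t = np.linspace(-1.0, 1.0, 200)
beat = np.exp(-20 * t**2)                     # crude stand-in for a QRS peak
coeffs, fitted = polyecg_s(beat)
# The fitted curve tracks the beat closely; the 21 coefficients then serve
# as the input feature vector for a downstream MI classifier.
```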

  12. Diabetic retinopathy: a quadtree based blood vessel detection algorithm using RGB components in fundus images.

    PubMed

    Reza, Ahmed Wasif; Eswaran, C; Hati, Subhas

    2008-04-01

    Blood vessel detection in retinal images is a fundamental step for feature extraction and interpretation of image content. This paper proposes a novel computational paradigm for detection of blood vessels in fundus images based on RGB components and quadtree decomposition. The proposed algorithm employs median filtering, quadtree decomposition, post filtration of detected edges, and morphological reconstruction on retinal images. The application of the preprocessing algorithm helps in enhancing the image to make it a better fit for the subsequent analysis, and it is a vital phase before decomposing the image. Quadtree decomposition provides information on the different types of blocks and the intensities of the pixels within the blocks. The post filtration and morphological reconstruction assist in filling the edges of the blood vessels and removing the false alarms and unwanted objects from the background, while restoring the original shape of the connected vessels. The proposed method, which makes use of the three color components (RGB), is tested on various images from a publicly available database. The results are compared with those obtained by other known methods as well as with the results obtained by using the proposed method with the green color component only. It is shown that the proposed method can yield true positive fraction values as high as 0.77, which are comparable to or somewhat higher than the results obtained by other known methods. It is also shown that the effect of noise can be reduced if the proposed method is implemented using only the green color component.
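Quadtree decomposition, the core step named above, recursively splits an image into quadrants until each block is homogeneous, so block sizes reflect local detail. A minimal sketch (the homogeneity tolerance, minimum block size, and test image are assumptions for illustration):

```python
import numpy as np

def quadtree(img, y=0, x=0, size=None, tol=10, min_size=2):
    """Return homogeneous blocks as (y, x, size); assumes a square 2^n image."""
    if size is None:
        size = img.shape[0]
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.max() - block.min() <= tol:
        return [(y, x, size)]                  # homogeneous (or smallest) leaf
    half = size // 2
    blocks = []
    for dy in (0, half):
        for dx in (0, half):
            blocks += quadtree(img, y + dy, x + dx, half, tol, min_size)
    return blocks

img = np.zeros((16, 16))
img[:8, :8] = 200                              # one bright, uniform quadrant
img[12, 12] = 255                              # a single high-detail pixel
blocks = quadtree(img)
# The uniform quadrant survives as one 8x8 block, while the region around
# (12, 12) is subdivided down to 2x2 blocks.
```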

  13. A multi-agent genetic algorithm for community detection in complex networks

    NASA Astrophysics Data System (ADS)

    Li, Zhangtao; Liu, Jing

    2016-05-01

    Complex networks are popularly used to represent many practical systems in the domains of biology and sociology, and the community structure is one of the most important network attributes, which has received an enormous amount of attention. Community detection is the process of discovering the community structure hidden in complex networks, and modularity Q is one of the best known quality functions measuring the quality of communities of networks. In this paper, a multi-agent genetic algorithm, named MAGA-Net, is proposed to optimize the modularity value for community detection. An agent, coded by a division of a network, represents a candidate solution. All agents live in a lattice-like environment, with each agent fixed on a lattice point. A series of operators is designed to increase the modularity value, namely a split-and-merge-based neighborhood competition operator, hybrid neighborhood crossover, adaptive mutation, and a self-learning operator. In the experiments, the performance of MAGA-Net is validated on both well-known real-world benchmark networks and large-scale synthetic LFR networks with 5000 nodes. Systematic comparisons with GA-Net and Meme-Net show that MAGA-Net outperforms these two algorithms, and can detect communities with high speed, accuracy and stability.
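Modularity Q, the quality function MAGA-Net optimizes, follows the standard definition Q = (1/2m) Σᵢⱼ [Aᵢⱼ − kᵢkⱼ/2m] δ(cᵢ, cⱼ). A direct sketch of that formula on a toy graph (the two-triangle graph is our assumption for illustration):

```python
import numpy as np

def modularity(adj, communities):
    """Q for an undirected graph given as a 0/1 adjacency matrix."""
    k = adj.sum(axis=1)                 # node degrees
    two_m = k.sum()                     # 2m = total degree
    q = 0.0
    for i in range(len(adj)):
        for j in range(len(adj)):
            if communities[i] == communities[j]:
                q += adj[i, j] - k[i] * k[j] / two_m
    return q / two_m

# Two triangles {0,1,2} and {3,4,5} bridged by the single edge (2, 3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = np.zeros((6, 6))
for i, j in edges:
    adj[i, j] = adj[j, i] = 1.0
q_good = modularity(adj, [0, 0, 0, 1, 1, 1])   # natural split scores high
q_bad = modularity(adj, [0, 1, 0, 1, 0, 1])    # mixed split scores low
```

An optimizer such as MAGA-Net searches over the `communities` assignment to maximize this value.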

  14. Dynamic connectivity detection: an algorithm for determining functional connectivity change points in fMRI data.

    PubMed

    Xu, Yuting; Lindquist, Martin A

    2015-01-01

    Recently there has been an increased interest in using fMRI data to study the dynamic nature of brain connectivity. In this setting, the activity in a set of regions of interest (ROIs) is often modeled using a multivariate Gaussian distribution, with a mean vector and covariance matrix that are allowed to vary as the experiment progresses, representing changing brain states. In this work, we introduce the Dynamic Connectivity Detection (DCD) algorithm, which is a data-driven technique to detect temporal change points in functional connectivity, and estimate a graph between ROIs for data within each segment defined by the change points. DCD builds upon the framework of the recently developed Dynamic Connectivity Regression (DCR) algorithm, which has proven efficient at detecting changes in connectivity for problems consisting of a small to medium (< 50) number of regions, but which runs into computational problems as the number of regions becomes large (>100). The newly proposed DCD method is faster, requires less user input, and is better able to handle high-dimensional data. It overcomes the shortcomings of DCR by adopting a simplified sparse matrix estimation approach and a different hypothesis testing procedure to determine change points. The application of DCD to simulated data, as well as fMRI data, illustrates the efficacy of the proposed method.
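The single-split step underlying segmentation methods of this kind can be sketched as a scan over candidate change points that maximizes the summed Gaussian log-likelihood of the two resulting segments. This is a heavily simplified stand-in: DCD's sparse precision estimation and hypothesis testing are omitted, and the simulated signal, whose covariance structure changes at t = 100, is an assumption.

```python
import numpy as np

def gaussian_loglik(x):
    """Approximate log-likelihood of segment x under a fitted Gaussian."""
    n, d = x.shape
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(d)   # regularized covariance
    _, logdet = np.linalg.slogdet(2 * np.pi * cov)
    return -0.5 * n * (logdet + d)

def best_change_point(x, min_len=10):
    scores = [gaussian_loglik(x[:k]) + gaussian_loglik(x[k:])
              for k in range(min_len, len(x) - min_len)]
    return min_len + int(np.argmax(scores))

rng = np.random.default_rng(0)
a = rng.normal(0, 1, (100, 3))                     # three uncorrelated "ROIs"
b = rng.normal(0, 1, (100, 3))
b[:, 1] = 0.9 * b[:, 0] + np.sqrt(0.19) * b[:, 1]  # ROIs 0 and 1 couple
x = np.vstack([a, b])
cp = best_change_point(x)                          # estimated change point
```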

  15. Multiple Kernel Learning for Heterogeneous Anomaly Detection: Algorithm and Aviation Safety Case Study

    NASA Technical Reports Server (NTRS)

    Das, Santanu; Srivastava, Ashok N.; Matthews, Bryan L.; Oza, Nikunj C.

    2010-01-01

    The world-wide aviation system is one of the most complex dynamical systems ever developed and is generating data at an extremely rapid rate. Most modern commercial aircraft record several hundred flight parameters including information from the guidance, navigation, and control systems, the avionics and propulsion systems, and the pilot inputs into the aircraft. These parameters may be continuous measurements or binary or categorical measurements recorded in one-second intervals for the duration of the flight. Currently, most approaches to aviation safety are reactive, meaning that they are designed to react to an aviation safety incident or accident. In this paper, we discuss a novel approach based on the theory of multiple kernel learning to detect potential safety anomalies in very large databases of discrete and continuous data from world-wide operations of commercial fleets. We pose a general anomaly detection problem which includes both discrete and continuous data streams, where we assume that the discrete streams have a causal influence on the continuous streams. We also assume that atypical sequences of events in the discrete streams can lead to off-nominal system performance. We discuss the application domain, novel algorithms, and results on real-world data sets. Our algorithm uncovers operationally significant events in high-dimensional data streams in the aviation industry which are not detectable using state-of-the-art methods.

  16. Solar Power Ramp Events Detection Using an Optimized Swinging Door Algorithm

    SciTech Connect

    Cui, Mingjian; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Ke, Deping; Sun, Yuanzhang

    2015-08-05

    Solar power ramp events (SPREs) significantly influence the integration of solar power on non-clear days and threaten the reliable and economic operation of power systems. Accurately extracting solar power ramps becomes more important with increasing levels of solar power penetrations in power systems. In this paper, we develop an optimized swinging door algorithm (OpSDA) to enhance the state of the art in SPRE detection. First, the swinging door algorithm (SDA) is utilized to segregate measured solar power generation into consecutive segments in a piecewise linear fashion. Then we use a dynamic programming approach to combine adjacent segments into significant ramps when the decision thresholds are met. In addition, the expected SPREs occurring in clear-sky solar power conditions are removed. Measured solar power data from Tucson Electric Power is used to assess the performance of the proposed methodology. OpSDA is compared to two other ramp detection methods: the SDA and the L1-Ramp Detect with Sliding Window (L1-SW) method. The statistical results show the validity and effectiveness of the proposed method. OpSDA can significantly improve the performance of the SDA, and it can perform as well as or better than L1-SW with substantially less computation time.
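The swinging door stage of OpSDA can be sketched as follows. This is a simplified version: OpSDA's dynamic-programming merge of adjacent segments into significant ramps and its clear-sky filtering are omitted, and the tolerance value and toy power series are assumptions.

```python
def swinging_door_segments(values, eps):
    """Piecewise-linear segmentation: start a new segment when the upper and
    lower 'door' slopes (tolerance eps around the segment start) cross."""
    segments, start = [], 0
    up, down = float("-inf"), float("inf")
    i = 1
    while i < len(values):
        dt = i - start
        up = max(up, (values[i] - values[start] - eps) / dt)
        down = min(down, (values[i] - values[start] + eps) / dt)
        if up > down:                        # doors crossed: close segment at i-1
            segments.append((start, i - 1))
            start = i - 1
            up, down = float("-inf"), float("inf")
            continue                         # re-test point i against new segment
        i += 1
    segments.append((start, len(values) - 1))
    return segments

# A flat stretch, a steep ramp, then flat again collapses to three segments;
# the middle one is the candidate ramp event.
power = [0.0] * 10 + [float(i) for i in range(10)] + [9.0] * 10
segs = swinging_door_segments(power, eps=0.1)
```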

  17. Solar Power Ramp Events Detection Using an Optimized Swinging Door Algorithm: Preprint

    SciTech Connect

    Cui, Mingjian; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Ke, Deping; Sun, Yuanzhang

    2015-08-07

    Solar power ramp events (SPREs) are those that significantly influence the integration of solar power on non-clear days and threaten the reliable and economic operation of power systems. Accurately extracting solar power ramps becomes more important with increasing levels of solar power penetrations in power systems. In this paper, we develop an optimized swinging door algorithm (OpSDA) to enhance SPRE detection. First, the swinging door algorithm (SDA) is utilized to segregate measured solar power generation into consecutive segments in a piecewise linear fashion. Then we use a dynamic programming approach to combine adjacent segments into significant ramps when the decision thresholds are met. In addition, the expected SPREs occurring in clear-sky solar power conditions are removed. Measured solar power data from Tucson Electric Power is used to assess the performance of the proposed methodology. OpSDA is compared to two other ramp detection methods: the SDA and the L1-Ramp Detect with Sliding Window (L1-SW) method. The statistical results show the validity and effectiveness of the proposed method. OpSDA can significantly improve the performance of the SDA, and it can perform as well as or better than L1-SW with substantially less computation time.

  18. Orthogonal sensor suite and the signal-processing algorithm for human detection and discrimination

    NASA Astrophysics Data System (ADS)

    Ekimov, Alexander; Sabatier, James M.

    2009-05-01

    The focus of this paper is a review of methods and algorithms for human motion detection in the presence of nonstationary environmental background noise. Human footstep forces on the ground/floor generate periodic broadband seismic and sound signal envelopes with two characteristic times, T1 (the footstep repetition time, which is equal to the period of the whole-body vibrations) and T2 (the footstep duration time, which is equal to the time interval for a single footstep from "heel strike" to "toe slap and weight transfer"). Human body motions due to walking are periodic movements of a multiple-degrees-of-freedom mechanical system with a specific cadence frequency equal to 1/T1. For a walking human, the cadence frequencies for the appendages are the same and lie below 3 Hz. Simultaneously collecting footstep seismic, ultrasonic, and Doppler signals of human motion enhances the capability to detect humans in quiet and noisy environments. The common denominator in the use of these orthogonal sensors (seismic, ultrasonic, Doppler) is a signal-processing algorithm package that detects human-specific time-frequency signatures and discriminates them, via their distinct cadence frequency, from signals produced by other moving and stationary objects (e.g., vehicular and animal signatures). It has been experimentally shown that human cadence frequencies for seismic, passive ultrasonic, and Doppler motion signatures are equivalent and temporally stable.

  19. A novel through-wall respiration detection algorithm using UWB radar.

    PubMed

    Li, Xin; Qiao, Dengyu; Li, Ye; Dai, Huhe

    2013-01-01

    Through-wall respiration detection using ultra-wideband (UWB) impulse radar can be applied to post-disaster rescue, e.g., searching for living persons trapped in ruined buildings after an earthquake. Since strong interference signals, such as static clutter and noise, always exist in real-life scenarios while the respiratory signal is very weak, the signal to noise and clutter ratio (SNCR) is quite low. Therefore, through-wall respiration detection using UWB impulse radar under low SNCR is a challenging task in the search for survivors after a disaster. In this paper, an improved UWB respiratory signal model is built, for the first time, on an even power of a cosine function. This model reveals the harmonic structure of the respiratory signal, based on which a novel high-performance respiration detection algorithm is proposed. The algorithm is assessed by experimental verification and simulation, and shows an improvement of about 1.5 dB in SNR and SNCR.
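The harmonic structure implied by an even-power-of-cosine model is easy to verify numerically: cos²ⁿ expands into a DC term plus harmonics of the breathing rate, which is what a harmonic-aware detector can exploit. The sampling rate, base rate f0 = 0.25 Hz, and exponent n = 2 below are assumptions for illustration:

```python
import numpy as np

fs, f0, n = 100.0, 0.25, 2
t = np.arange(0, 40, 1 / fs)                  # 40 s record, 10 breath cycles
x = np.cos(np.pi * f0 * t) ** (2 * n)         # even-power-of-cosine model
spectrum = np.abs(np.fft.rfft(x)) / len(x)
freqs = np.fft.rfftfreq(len(x), 1 / fs)

# All spectral energy sits at 0, f0 and 2*f0 Hz: the breathing-rate harmonics.
peaks = freqs[spectrum > 0.01]
```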

  20. Defect-detection algorithm for noncontact acoustic inspection using spectrum entropy

    NASA Astrophysics Data System (ADS)

    Sugimoto, Kazuko; Akamatsu, Ryo; Sugimoto, Tsuneyoshi; Utagawa, Noriyuki; Kuroda, Chitose; Katakura, Kageyoshi

    2015-07-01

    In recent years, the detachment of concrete from bridges or tunnels and the degradation of concrete structures have become serious social problems. The importance of inspection, repair, and updating is recognized in measures against degradation. We have so far studied a noncontact acoustic inspection method using airborne sound and a laser Doppler vibrometer. In this method, depending on the surface state (reflectance, dirt, etc.), the quantity of returning laser light decreases and optical noise arises from deficient light reception. Influencing factors include the stability of the laser Doppler vibrometer output, the low reflectivity and diffuse reflection characteristics of the measurement surface, the measurement distance, and the laser irradiation angle. Since the frequency characteristic of the optical noise resembles white noise, defect detection based only on the vibrational energy ratio may misclassify points affected by optical noise as defective. Therefore, in this work, the combination of the vibrational energy ratio and spectrum entropy is used to judge whether a measured point is healthy, defective, or an abnormal measurement point. An algorithm that enables more vivid detection of a defective part is proposed. When our technique was applied in an experiment on real concrete structures, defective parts could be extracted more vividly, confirming the validity of the proposed algorithm.
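Spectrum entropy, the discriminator described above, treats the normalized power spectrum as a probability distribution and computes its Shannon entropy: a tonal (resonance-like) measurement gives low entropy, while white-noise-like optical noise gives entropy near the maximum log₂(N). The test signals below are assumptions for illustration:

```python
import numpy as np

def spectrum_entropy(signal):
    power = np.abs(np.fft.rfft(signal)) ** 2
    p = power / power.sum()                   # normalize to a distribution
    p = p[p > 0]                              # avoid log(0)
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
t = np.arange(0, 1, 1 / 4000.0)
tone = np.sin(2 * np.pi * 440 * t)            # resonance-like measurement
noise = rng.standard_normal(len(t))           # optical-noise-like measurement
# Combining this entropy with the vibrational energy ratio lets a point that
# is energetic but noise-like be flagged as an abnormal measurement rather
# than a defect.
```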

  1. CK-LPA: Efficient community detection algorithm based on label propagation with community kernel

    NASA Astrophysics Data System (ADS)

    Lin, Zhen; Zheng, Xiaolin; Xin, Nan; Chen, Deren

    2014-12-01

    With the rapid development of Web 2.0 and the rise of online social networks, finding community structures from user data has become a hot topic in network analysis. Although research achievements are numerous at present, most of them cannot be adopted in large-scale social networks because of their heavy computation. Previous studies have shown that label propagation is an efficient means to detect communities in social networks and is easy to implement; however, it has some drawbacks, such as low accuracy, high randomness, and the formation of a “monster” community. In this study, we propose an efficient community detection method based on the label propagation algorithm (LPA) with community kernel (CK-LPA). We assign a corresponding weight to each node according to node importance in the whole network and update node labels in sequence based on weight. Then, we discuss the composition of weights, the label updating strategy, the label propagation strategy, and the convergence conditions. CK-LPA resolves the drawbacks of the primitive LPA. Experiments and benchmarks reveal that our proposed method retains nearly linear time complexity and significantly improves the quality of static community detection. Hence, the algorithm can be applied to large-scale social networks.
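Plain label propagation, the mechanism CK-LPA refines, can be sketched in a few lines: each node starts with a unique label and repeatedly adopts the label most common among its neighbours until nothing changes. CK-LPA additionally weights nodes by importance and fixes the update order; this toy version updates in plain node order with a deterministic tie-break (real LPA breaks ties at random), and the two-triangle graph is an assumption for illustration.

```python
from collections import Counter

def label_propagation(neighbors, max_iters=50):
    labels = {v: v for v in neighbors}            # every node starts alone
    for _ in range(max_iters):
        changed = False
        for v in neighbors:
            counts = Counter(labels[u] for u in neighbors[v])
            best = max(counts, key=lambda l: (counts[l], l))  # deterministic
            if best != labels[v]:
                labels[v], changed = best, True
        if not changed:
            break
    return labels

# Two triangles {0,1,2} and {3,4,5} joined by the single edge (2, 3).
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
labels = label_propagation(neighbors)
# Each triangle converges to one shared label, distinct across the bridge.
```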

  2. Damage detection using successive parameter subset selections and multiple modal residuals

    NASA Astrophysics Data System (ADS)

    Titurus, B.; Friswell, M. I.

    2014-03-01

    The use of modal response residuals and parameterized models forms the framework for parameter subset selection based damage detection. This research explores the novel approach in this class of methods which is characterized by the successive application of the homogeneous modal response residuals. The motivation behind this approach is to restrict the use of unknown weighting factors which are employed in cases with mixed response residuals. Particular attention is given to the parameter-effect symmetry issues and large nonlinear changes in response residuals due to increasing damage observed across multiple damage levels. A case study involving a real aluminum three-dimensional frame structure with a loose joint connection is used to demonstrate the approach and its ability to localize the damaged area.

  3. Screening for sexually transmitted diseases in rural women in Papua New Guinea: are WHO therapeutic algorithms appropriate for case detection?

    PubMed Central

    Passey, M.; Mgone, C. S.; Lupiwa, S.; Tiwara, S.; Lupiwa, T.; Alpers, M. P.

    1998-01-01

    The presence of a large reservoir of untreated sexually transmitted diseases (STDs) in developing countries has prompted a number of suggestions for improving case detection, including the use of clinical algorithms and risk assessments to identify women likely to be infected when they present to clinics for other reasons. We used data from a community-based study of STDs to develop and evaluate algorithms for detection of cervical infection with Chlamydia trachomatis or Neisseria gonorrhoeae, and for detection of vaginal infection with Trichomonas vaginalis or bacterial vaginosis. The algorithms were derived using data from 192 randomly selected women, then evaluated on 200 self-selected women. We evaluated the WHO algorithm for vaginal discharge in both groups. The prevalences of cervical and vaginal infection in the randomly selected group were 27% and 50%, respectively, and 23% and 52%, respectively, in the self-selected group. The derived algorithms had high sensitivities in both groups, but poor specificities in the self-selected women, and the positive predictive values were unacceptably low. The WHO algorithms had extremely low sensitivity for detecting either vaginal or cervical infection because relatively few women reported vaginal discharge. Simple algorithms and risk assessments are not valid for case detection in this population. PMID:9803591
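Why the derived algorithms' positive predictive values collapse despite high sensitivity follows directly from Bayes' rule: PPV depends on specificity and prevalence. The sensitivity/specificity values below are illustrative assumptions, using the 27% cervical-infection prevalence reported above:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value from test characteristics and prevalence."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# High sensitivity with poor specificity at 27% prevalence: most positives
# are false. PPV recovers only as specificity rises.
low_spec = ppv(0.90, 0.40, 0.27)
high_spec = ppv(0.90, 0.90, 0.27)
```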

  4. New algorithm for detecting smaller retinal blood vessels in fundus images

    NASA Astrophysics Data System (ADS)

    LeAnder, Robert; Bidari, Praveen I.; Mohammed, Tauseef A.; Das, Moumita; Umbaugh, Scott E.

    2010-03-01

    About 4.1 million Americans suffer from diabetic retinopathy. To help automatically diagnose various stages of the disease, a new blood-vessel-segmentation algorithm based on spatial high-pass filtering was developed to automatically segment blood vessels, including the smaller ones, with low noise. Methods: Image database: Forty 584 x 565-pixel images were collected from the DRIVE image database. Preprocessing: Green-band extraction was used to obtain better contrast, which facilitated better visualization of retinal blood vessels. A spatial high-pass filter of mask size 11 was applied. A histogram stretch was performed to enhance contrast. A median filter was applied to mitigate noise. At this point, the gray-scale image was converted to a binary image using a binary thresholding operation. Then, a NOT operation was performed by inverting gray-level values between 0 and 255. Postprocessing: The resulting image was AND-ed with its corresponding ring mask to remove the outer-ring (lens-edge) artifact. At this point, the above algorithm steps had extracted most of the major and minor vessels, with some intersections and bifurcations missing. Vessel segments were reintegrated using the Hough transform. Results: After applying the Hough transform, both the average peak SNR and the RMS error improved by 10%. Pratt's Figure of Merit (PFM) decreased by 6%. Those averages were better than [1] by 10-30%. Conclusions: The new algorithm successfully preserved the details of smaller blood vessels and should prove successful as a segmentation step for automatically identifying diseases that affect retinal blood vessels.
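The core of the preprocessing chain (green band → spatial high-pass → threshold) can be sketched on a synthetic image: a thin dark "vessel" on a slowly varying background. The 11x11 mean-subtraction high-pass and the threshold value are assumptions standing in for the paper's exact filters:

```python
import numpy as np

def highpass(img, size=11):
    """Subtract a size x size local mean (edges handled by padding)."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    local_sum = np.zeros_like(img, dtype=float)
    for dy in range(size):
        for dx in range(size):
            local_sum += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return img - local_sum / size**2

h, w = 64, 64
y = np.linspace(0, 1, h)[:, None]
green = 100 + 40 * y * np.ones((1, w))          # smooth background gradient
green[:, 30:32] -= 30                           # a thin dark vessel
hp = highpass(green)
vessel_mask = hp < -10                          # dark, locally contrasting pixels
```

The high-pass cancels the smooth background so a fixed threshold isolates the vessel; the paper then cleans this mask and reconnects segments with the Hough transform.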

  5. GPU implementation of target and anomaly detection algorithms for remotely sensed hyperspectral image analysis

    NASA Astrophysics Data System (ADS)

    Paz, Abel; Plaza, Antonio

    2010-08-01

    Automatic target and anomaly detection are considered very important tasks for hyperspectral data exploitation. These techniques are now routinely applied in many application domains, including defence and intelligence, public safety, precision agriculture, geology, or forestry. Many of these applications require timely responses for swift decisions which depend upon high computing performance of algorithm analysis. However, with the recent explosion in the amount and dimensionality of hyperspectral imagery, this problem calls for the incorporation of parallel computing techniques. In the past, clusters of computers have offered an attractive solution for fast anomaly and target detection in hyperspectral data sets already transmitted to Earth. However, these systems are expensive and difficult to adapt to on-board data processing scenarios, in which low-weight and low-power integrated components are essential to reduce mission payload and obtain analysis results in (near) real-time, i.e., at the same time as the data is collected by the sensor. An exciting new development in the field of commodity computing is the emergence of commodity graphics processing units (GPUs), which can now bridge the gap towards on-board processing of remotely sensed hyperspectral data. In this paper, we describe several new GPU-based implementations of target and anomaly detection algorithms for hyperspectral data exploitation. The parallel algorithms are implemented on latest-generation Tesla C1060 GPU architectures, and quantitatively evaluated using hyperspectral data collected by NASA's AVIRIS system over the World Trade Center (WTC) in New York, five days after the terrorist attacks that collapsed the two main towers in the WTC complex.
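The abstract does not spell out its detectors, so as a stand-in this sketches the classical RX anomaly detector commonly used on hyperspectral cubes: score each pixel by its Mahalanobis distance from the global background statistics. Per-pixel independence is exactly what makes such detectors GPU-friendly. The synthetic 3-band cube is an assumption:

```python
import numpy as np

def rx_scores(cube):
    """cube: (rows, cols, bands) -> per-pixel RX anomaly score."""
    pixels = cube.reshape(-1, cube.shape[-1]).astype(float)
    mu = pixels.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False))
    centered = pixels - mu
    # Mahalanobis distance for every pixel at once.
    scores = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
    return scores.reshape(cube.shape[:2])

rng = np.random.default_rng(0)
cube = rng.normal(0, 1, (32, 32, 3))
cube[5, 7] += np.array([10.0, 0.0, 0.0])       # implant one anomalous pixel
scores = rx_scores(cube)
anomaly = np.unravel_index(np.argmax(scores), scores.shape)
```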

  6. Algorithms Performance Investigation of a Generalized Spreader-Bar Detection System

    SciTech Connect

    Robinson, Sean M.; Ashbaker, Eric D.; Hensley, Walter K.; Schweppe, John E.; Sandness, Gerald A.; Erikson, Luke E.; Ely, James H.

    2010-10-01

    A “generic” gantry-crane-mounted spreader bar detector has been simulated in the Monte-Carlo radiation transport code MCNP [1]. This model is intended to represent the largest realistically feasible number of detector crystals in a single gantry-crane model intended to sit atop an InterModal Cargo Container (IMCC). Detectors were chosen from among large, commonly available sodium iodide (NaI) crystal scintillators and spaced as evenly as is thought possible for a detector apparatus attached to a gantry crane. Several scenarios were simulated with this model, based on a single IMCC being moved between a ship’s deck or cargo hold and the dock. During measurement, the gantry crane carries the IMCC through the air and lowers it onto a receiving vehicle (e.g., a chassis or a bomb cart). The case of an IMCC being moved through the air from an unknown radiological environment to the ground is somewhat complex; for this initial study a single location was picked at which to simulate background. An HEU source based on earlier validated models was used, and placed at varying depths in a wood cargo. Many statistical realizations of these scenarios were constructed from simulations of the component spectra, which were simulated with high statistics. The resulting data were analyzed with several different algorithms, each with a threshold set to a statistics-only false alarm probability of 0.001, and the resulting minimum detectable amounts were generated for each cargo depth possible within the IMCC. Using GADRAS as an anomaly detector provided the greatest detection sensitivity, and it is expected that an algorithm similar to this will be of great use for the detection of highly shielded sources.

  7. Algorithms for detecting concealed knowledge among groups when the critical information is unavailable.

    PubMed

    Breska, Assaf; Ben-Shakhar, Gershon; Gronau, Nurit

    2012-09-01

    We examined whether the Concealed Information Test (CIT) may be used when the critical details are unavailable to investigators (the Searching CIT [SCIT]). This use may have important applications in criminal investigations (e.g., finding the location of a murder weapon) and in security-related threats (e.g., detecting individuals and groups suspected in planning a terror attack). Two classes of algorithms designed to detect the critical items and classify individuals in the SCIT were examined. The 1st class was based on averaging responses across subjects to identify critical items and on averaging responses across the identified critical items to identify knowledgeable subjects. The 2nd class used clustering methods based on the correlations between the response profiles of all subject pairs. We applied a principal component analysis to decompose the correlation matrix into its principal components and defined the detection score as the coefficient of each subject on the component that explained the largest portion of the variance. Reanalysis of 3 data sets from previous CIT studies demonstrated that in most cases the efficiency of differentiation between knowledgeable and unknowledgeable subjects in the SCIT (indexed by the area under the receiver operating characteristic curve) approached that of the standard CIT for both algorithms. We also examined the robustness of our results to variations in the number of knowledgeable and unknowledgeable subjects in the sample. This analysis demonstrated that the performance of our algorithms is relatively robust to changes in the number of individuals examined in each group, provided that at least 2 (but desirably 5 or more) knowledgeable examinees are included.
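The second class of algorithms described above can be sketched directly: correlate every pair of subjects' response profiles across items, decompose the correlation matrix into principal components, and score each subject by their loading on the dominant component. The simulated responses (5 "knowledgeable" subjects reacting to item 2, 5 unknowledgeable) are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_items = 6
knowledgeable = rng.normal(0, 1, (5, n_items))
knowledgeable[:, 2] += 3.0                     # elevated response to critical item
unknowledgeable = rng.normal(0, 1, (5, n_items))
profiles = np.vstack([knowledgeable, unknowledgeable])

corr = np.corrcoef(profiles)                   # subject-by-subject correlations
eigvals, eigvecs = np.linalg.eigh(corr)        # eigh returns ascending order
scores = eigvecs[:, -1]                        # loadings on the largest component
scores *= np.sign(scores[np.argmax(np.abs(scores))])  # fix the arbitrary sign

# Knowledgeable subjects share a response pattern, so their loadings cluster
# together and separate from the unknowledgeable group.
```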

  8. An algorithm to detect and communicate the differences in computational models describing biological systems

    PubMed Central

    Scharm, Martin; Wolkenhauer, Olaf; Waltemath, Dagmar

    2016-01-01

    Motivation: Repositories support the reuse of models and ensure transparency about results in publications linked to those models. With thousands of models available in repositories, such as the BioModels database or the Physiome Model Repository, a framework to track the differences between models and their versions is essential to compare and combine models. Difference detection not only allows users to study the history of models but also helps in the detection of errors and inconsistencies. Existing repositories lack algorithms to track a model’s development over time. Results: Focusing on SBML and CellML, we present an algorithm to accurately detect and describe differences between coexisting versions of a model with respect to (i) the models’ encoding, (ii) the structure of biological networks and (iii) mathematical expressions. This algorithm is implemented in a comprehensive and open source library called BiVeS. BiVeS helps to identify and characterize changes in computational models and thereby contributes to the documentation of a model’s history. Our work facilitates the reuse and extension of existing models and supports collaborative modelling. Finally, it contributes to better reproducibility of modelling results and to the challenge of model provenance. Availability and implementation: The workflow described in this article is implemented in BiVeS. BiVeS is freely available as source code and binary from sems.uni-rostock.de. The web interface BudHat demonstrates the capabilities of BiVeS at budhat.sems.uni-rostock.de. Contact: martin.scharm@uni-rostock.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26490504
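BiVeS operates on the models' XML encodings. As a toy illustration only (not BiVeS's actual algorithm), the following sketch reports species added to and removed from an SBML-like document using the standard library; the two document snippets are invented.

```python
import xml.etree.ElementTree as ET

OLD = """<model><listOfSpecies>
  <species id="A"/><species id="B"/>
</listOfSpecies></model>"""

NEW = """<model><listOfSpecies>
  <species id="B"/><species id="C"/>
</listOfSpecies></model>"""

def species_ids(xml_text):
    """Collect the species ids declared in an SBML-like document."""
    root = ET.fromstring(xml_text)
    return {s.get("id") for s in root.iter("species")}

old_ids, new_ids = species_ids(OLD), species_ids(NEW)
added, removed = new_ids - old_ids, old_ids - new_ids
```

A real diff tool must additionally match renamed entities, compare network structure, and diff the MathML expressions, which is what BiVeS implements.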

  9. Advances in multiplex PCR: balancing primer efficiencies and improving detection success

    PubMed Central

    Sint, Daniela; Raso, Lorna; Traugott, Michael

    2012-01-01

    1. Multiplex PCR is a valuable tool in many biological studies but it is a multifaceted procedure that has to be planned and optimised thoroughly to achieve robust and meaningful results. In particular, primer concentrations have to be adjusted to assure an even amplification of all targeted DNA fragments. Until now, total DNA extracts were used for balancing primer efficiencies; however, the applicability for comparisons between taxa or different multiple-copy genes was limited owing to the unknown number of template molecules present per total DNA. 2. Based on a multiplex system developed to track trophic interactions in high Alpine arthropods, we demonstrate a fast and easy way of generating standardised DNA templates. These were then used to balance the amplification success for the different targets and to subsequently determine the sensitivity of each primer pair in the multiplex PCR. 3. In the current multiplex assay, this approach led to an even amplification success for all seven targeted DNA fragments. Using this balanced multiplex PCR, methodological bias owing to variation in primer efficiency will be avoided when analysing field-derived samples. 4. The approach outlined here allows comparing multiplex PCR sensitivity, independent of the investigated species, genome size or the targeted genes. The application of standardised DNA templates not only makes it possible to optimise primer efficiency within a given multiplex PCR, but it also offers to adjust and/or to compare the sensitivity between different assays. Along with other factors that influence the success of multiplex reactions, and which we discuss here in relation to the presented detection system, the adoption of this approach will allow for direct comparison of multiplex PCR data between systems and studies, enhancing the utility of this assay type. PMID:23549328

  10. Detection of Atmospheric Rivers: An Algorithm for Global Climatology and Model Evaluation Studies

    NASA Astrophysics Data System (ADS)

    Guan, B.; Waliser, D. E.

    2015-12-01

    Atmospheric rivers (ARs) are narrow, elongated, synoptic jets of water vapor that play important roles in the global water cycle and regional weather and hydrology. Previous studies have developed techniques for the identification of ARs based on intensity and/or geometry thresholds indicative of AR conditions. Such techniques have facilitated the investigation of ARs on local to regional scales. Recent advancement in the understanding of AR's global signatures and impacts (including those in less explored areas such as Greenland and Antarctica), and the need for understanding the representation of key AR characteristics in global weather/climate models motivate the development and evaluation of AR detection techniques suitable for global climatological and model evaluation studies. In this work, an objective AR detection algorithm is developed based on thresholding global, 6-hourly fields of integrated water vapor transport (IVT) derived from ERA-Interim reanalysis. Long, narrow filaments of enhanced IVT are detected by applying a set of intensity and geometry criteria, along with other considerations. Key output of the algorithm includes the AR shape boundary, main axis, location of landfalls, and a tabulated list of the basic statistics such as length, width, and mean IVT strength/direction of each detected AR. Sensitivity of detection is examined for selected parameters, and the result is evaluated and compared with an independent database of landfalling ARs in the west coast of North America based on satellite images of integrated water vapor (Neiman et al. 2008). Global distribution of key AR characteristics, and examples of their modulation by climate variability, will be presented.
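A minimal sketch of the detection idea: threshold an IVT field and keep only long, narrow connected regions. The synthetic grid, the fixed threshold, and the bounding-box-diagonal length with an area-based width proxy are all assumptions; the actual algorithm uses percentile-based IVT thresholds and geometry criteria in kilometres.

```python
import numpy as np
from collections import deque

def label_regions(mask):
    """4-connected component labelling of a boolean mask via BFS."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                labels[i, j] = current
                queue = deque([(i, j)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            queue.append((ny, nx))
    return labels, current

def detect_ars(ivt, threshold, min_len=10, min_aspect=2.0):
    """Keep thresholded regions that are long and narrow (AR-like)."""
    labels, n = label_regions(ivt > threshold)
    keep = []
    for k in range(1, n + 1):
        ys, xs = np.nonzero(labels == k)
        length = np.hypot(ys.max() - ys.min(), xs.max() - xs.min())  # bbox diagonal
        width = len(ys) / max(length, 1.0)                           # area / length
        if length >= min_len and length / max(width, 1e-9) >= min_aspect:
            keep.append(k)
    return labels, keep

ivt = np.zeros((40, 40))
ivt[20, 5:35] = 500.0     # a long, narrow filament of enhanced IVT
ivt[5:8, 5:8] = 500.0     # a compact blob (not AR-like)
labels, ars = detect_ars(ivt, threshold=250.0)
```

Only the filament passes the geometry test; the compact blob is rejected despite exceeding the intensity threshold.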

  11. Development of an apnea detection algorithm based on temporal analysis of thoracic respiratory effort signal

    NASA Astrophysics Data System (ADS)

    Dell'Aquila, C. R.; Cañadas, G. E.; Correa, L. S.; Laciar, E.

    2016-04-01

    This work describes the design of an algorithm for detecting apnea episodes based on analysis of the thoracic respiratory effort signal. Inspiration and expiration times, and the amplitude range of each respiratory cycle, were evaluated. For the amplitude analysis, the standard deviation was computed over temporal windows of the respiratory signal. Performance was validated on 8 records of the Apnea-ECG database, which includes annotations of apnea episodes. The results are: sensitivity (Se) 73%, specificity (Sp) 83%. These values could be improved by eliminating artifacts from the signal records.
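The windowed standard-deviation test described above can be sketched as follows; the sampling rate, window length, and the 20% amplitude criterion are assumed for illustration.

```python
import numpy as np

def apnea_windows(resp, fs, win_s=10.0, frac=0.2):
    """Flag windows whose amplitude (standard deviation) drops below
    `frac` of the whole-record standard deviation."""
    win = int(win_s * fs)
    n = len(resp) // win
    stds = np.array([resp[i * win:(i + 1) * win].std() for i in range(n)])
    return stds < frac * resp.std()

fs = 10                                   # Hz (assumed)
t = np.arange(0, 120, 1 / fs)             # two minutes of signal
resp = np.sin(2 * np.pi * 0.25 * t)       # normal breathing, 15 breaths/min
resp[600:900] *= 0.02                     # a 30 s apnea: near-flat effort
flags = apnea_windows(resp, fs)
```

The three windows covering the flattened 60-90 s stretch are flagged; all others are not.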

  12. Understanding conflict-resolution taskload: Implementing advisory conflict-detection and resolution algorithms in an airspace

    NASA Astrophysics Data System (ADS)

    Vela, Adan Ernesto

    2011-12-01

    From 2010 to 2030, the number of instrument flight rules aircraft operations handled by Federal Aviation Administration en route traffic centers is predicted to increase from approximately 39 million flights to 64 million flights. The projected growth in air transportation demand is likely to result in traffic levels that exceed the abilities of the unaided air traffic controller in managing, separating, and providing services to aircraft. Consequently, the Federal Aviation Administration, and other air navigation service providers around the world, are making several efforts to improve the capacity and throughput of existing airspaces. Ultimately, the stated goal of the Federal Aviation Administration is to triple the available capacity of the National Airspace System by 2025. In an effort to satisfy air traffic demand through the increase of airspace capacity, air navigation service providers are considering the inclusion of advisory conflict-detection and resolution systems. In a human-in-the-loop framework, advisory conflict-detection and resolution decision-support tools identify potential conflicts and propose resolution commands for the air traffic controller to verify and issue to aircraft. A number of researchers and air navigation service providers hypothesize that the inclusion of combined conflict-detection and resolution tools into air traffic control systems will reduce or transform controller workload and enable the required increases in airspace capacity. In an effort to understand the potential workload implications of introducing advisory conflict-detection and resolution tools, this thesis provides a detailed study of the conflict event process and the implementation of conflict-detection and resolution algorithms. Specifically, the research presented here examines a metric of controller taskload: how many resolution commands an air traffic controller issues under the guidance of a conflict-detection and resolution decision-support tool. 

  13. It may be possible to use Speech Recognition Algorithms to sort through Particle Detection

    NASA Astrophysics Data System (ADS)

    Kriske, Richard

    There are some similarities between recognizing speech and written language and recognizing particle interactions and decays. In the Viterbi algorithm of speech recognition, a target word is recursively compared with the unknown utterance. Say one remembered the word "Motion" in a song and wanted to find that song. First the letter "M" is typed in and the most common words with "M" show up, say the word "Menards"; then an "O" is typed in and statistically the most common word is now "Movies"; next the "t" is typed in and the most common word is "Motley Crue"; finally all the letters are typed in and the song that matches is "Motion Lyrics". We all recognize the algorithm, and perhaps a few have realized that this algorithm could also be applied to decay chains in particle scattering and detection. There may also come a day when neutrinos are transmitted for the purpose of communication. One system would be to use a type of "Morse code", but another could be to use decay chains themselves. Perhaps the sender could tune the energy such that the information received would rely on the energy being transmitted, since it may be that only a few of the particles are received, too few for "Morse code" to work.
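The incremental narrowing the abstract walks through can be sketched as a frequency-ranked prefix lookup; the vocabulary and counts below are invented.

```python
# A toy frequency-ranked vocabulary (all words and counts are made up).
VOCAB = {"menards": 900, "movies": 700, "motley crue": 500,
         "motion lyrics": 300, "morse code": 200}

def best_match(prefix):
    """Most frequent vocabulary entry starting with the typed prefix,
    or None when nothing matches."""
    hits = {w: c for w, c in VOCAB.items() if w.startswith(prefix)}
    return max(hits, key=hits.get) if hits else None
```

Typing "m", "mo", "mot", "moti" successively narrows the best match from "menards" to "movies" to "motley crue" to "motion lyrics", exactly as in the example; for decay chains, observed decay products would play the role of the letters.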

  14. Detection of nasopharyngeal cancer using confocal Raman spectroscopy and genetic algorithm technique

    NASA Astrophysics Data System (ADS)

    Li, Shao-Xin; Chen, Qiu-Yan; Zhang, Yan-Jiao; Liu, Zhi-Ming; Xiong, Hong-Lian; Guo, Zhou-Yi; Mai, Hai-Qiang; Liu, Song-Hao

    2012-12-01

    Raman spectroscopy (RS) and a genetic algorithm (GA) were applied to distinguish nasopharyngeal cancer (NPC) from normal nasopharyngeal tissue. A total of 225 Raman spectra were acquired from 120 tissue sites of 63 nasopharyngeal patients: 56 Raman spectra from normal tissue and 169 Raman spectra from NPC tissue. The GA integrated with linear discriminant analysis (LDA) was developed to differentiate NPC and normal tissue according to spectral variables in the selected regions of 792-805, 867-880, 996-1009, 1086-1099, 1288-1304, 1663-1670, and 1742-1752 cm-1, related to the proteins, nucleic acids, and lipids of tissue. The GA-LDA algorithm with the leave-one-out cross-validation method provides a sensitivity of 69.2% and a specificity of 100%. These results are better than those of principal component analysis applied to the same Raman dataset of nasopharyngeal tissue, which gave a sensitivity of 63.3% and a specificity of 94.6%. This demonstrates that Raman spectroscopy combined with a GA-LDA diagnostic algorithm has enormous potential to detect and diagnose nasopharyngeal cancer.
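A simplified sketch of GA-driven wavenumber selection. For brevity the fitness below is a univariate Fisher ratio over the selected bands rather than the paper's LDA with leave-one-out cross-validation, and the data are synthetic two-class "spectra" with three informative bands; population size, mutation rate, and penalty are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

n_bands = 20
X0 = rng.normal(0.0, 1.0, (40, n_bands))          # class 0: normal tissue
X1 = rng.normal(0.0, 1.0, (40, n_bands))          # class 1: NPC tissue
X1[:, [3, 7, 11]] += 2.0                          # only three informative bands

def fitness(mask):
    """Sum of univariate Fisher ratios over the selected bands, with a small
    penalty per band to discourage carrying uninformative wavenumbers."""
    if not mask.any():
        return -1.0
    m0, m1 = X0[:, mask].mean(axis=0), X1[:, mask].mean(axis=0)
    v0, v1 = X0[:, mask].var(axis=0), X1[:, mask].var(axis=0)
    return float(np.sum((m0 - m1) ** 2 / (v0 + v1 + 1e-9))) - 0.1 * mask.sum()

pop = rng.random((30, n_bands)) < 0.3             # initial random band masks
for _ in range(60):                               # generations
    order = np.argsort([fitness(m) for m in pop])[::-1]
    parents = pop[order[:10]]                     # truncation selection
    children = [parents[0].copy()]                # elitism: keep the best mask
    while len(children) < len(pop):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, n_bands)            # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        children.append(child ^ (rng.random(n_bands) < 0.02))   # mutation
    pop = np.array(children)

best = max(pop, key=fitness)
```

With this penalised fitness, the surviving mask concentrates on the three informative bands, mirroring how the GA in the paper converges on diagnostically relevant spectral regions.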

  15. A diabetic retinopathy detection method using an improved pillar K-means algorithm.

    PubMed

    Gogula, Susmitha Valli; Divakar, Ch; Satyanarayana, Ch; Rao, Allam Appa

    2014-01-01

    The paper presents a new approach for medical image segmentation. Exudates are a visible sign of diabetic retinopathy, which is the major cause of vision loss in patients with diabetes. If the exudates extend into the macular area, blindness may occur. Automated detection of exudates will assist ophthalmologists in early diagnosis. This segmentation process includes a new mechanism for clustering the elements of high-resolution images in order to improve precision and reduce computation time. The system applies K-means clustering to the image segmentation after optimization by the Pillar algorithm: as pillars are positioned to withstand load, the algorithm places the initial centroids far apart from one another. The improved Pillar algorithm can optimize K-means clustering for image segmentation in terms of precision and computation time. The proposed approach is evaluated by comparison with K-means and Fuzzy C-means on medical images. Using this method, identification of dark spots in the retina becomes easier, and the proposed algorithm is applied to diabetic retinal images of all stages to identify hard and soft exudates, whereas the existing Pillar K-means is more appropriate for brain MRI images. The proposed system helps doctors identify the problem at an early stage and can suggest a better drug for preventing further retinal damage.
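The Pillar seeding idea (initial centroids placed far apart, the way pillars are positioned to bear a load) can be sketched as farthest-point initialization followed by ordinary K-means. This is an illustrative variant, not the published Pillar algorithm, and the data are synthetic blobs standing in for pixel features.

```python
import numpy as np

rng = np.random.default_rng(3)

def pillar_init(X, k):
    """Pillar-style seeding: start from the point farthest from the grand
    mean, then repeatedly take the point farthest from all chosen centroids."""
    centroids = [X[np.argmax(np.linalg.norm(X - X.mean(0), axis=1))]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[np.argmax(d)])
    return np.array(centroids)

def kmeans(X, k, iters=50):
    """Standard Lloyd iterations from the pillar-style seeds."""
    C = pillar_init(X, k)
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - C[None], axis=2), axis=1)
        C = np.array([X[labels == j].mean(0) for j in range(k)])
    return C, labels

# Three well-separated blobs standing in for exudate/vessel/background pixels.
X = np.vstack([rng.normal(c, 0.3, (50, 2)) for c in ((0, 0), (5, 0), (0, 5))])
C, labels = kmeans(X, 3)
```

Because the seeds land in different blobs, the clustering recovers all three groups without the empty-cluster or local-minimum problems random seeding can cause.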

  16. Detection and mapping vegetation cover based on the Spectral Angle Mapper algorithm using NOAA AVHRR data

    NASA Astrophysics Data System (ADS)

    Yagoub, Houria; Belbachir, Ahmed Hafid; Benabadji, Noureddine

    2014-06-01

    Satellite data from the National Oceanic and Atmospheric Administration (NOAA) have been used for the detection and cartography of vegetation cover in North Africa. The data were acquired at the Analysis and Application of Radiation Laboratory (LAAR) from the Advanced Very High Resolution Radiometer (AVHRR) sensor, which has a 1 km spatial resolution. The Spectral Angle Mapper (SAM) algorithm has been used for classification in many studies based on high-resolution satellite data. In the present paper, we propose to apply the SAM algorithm to the moderate-resolution NOAA AVHRR sensor data to classify vegetation cover. This study also explores the applicability of other classification methods at low resolution. First, the normalized difference vegetation index (NDVI) is extracted from channels 1 and 2 of the AVHRR sensor. To obtain an initial representation of the density distribution of vegetal formations, a methodology based on combining a threshold method with a decision tree is used. This combination is necessary owing to the lack of accurate data on the thresholds that delimit each class. Next, based on spectral behavior, a vegetation cover map is developed using the SAM algorithm. Finally, with the use of low-resolution satellite images (NOAA AVHRR) and only two channels, it is possible to identify the most dominant cover types in North Africa, such as cork oak (Liège oak) forests, other forests, cereal cultivation, steppes, and bare soil.
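The core of SAM is the angle between a pixel spectrum and a reference spectrum, which is invariant to illumination scaling. A minimal two-band sketch, with the two AVHRR-channel reference reflectances invented for illustration:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle (radians) between a pixel spectrum and a reference spectrum;
    invariant to scaling of either spectrum (illumination changes)."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify(pixel, references):
    """Assign the pixel to the reference class with the smallest angle."""
    angles = {name: spectral_angle(pixel, ref) for name, ref in references.items()}
    return min(angles, key=angles.get)

# Two-band (AVHRR channels 1 and 2) reference spectra -- illustrative values.
refs = {"vegetation": np.array([0.05, 0.40]),   # low red, high NIR
        "bare_soil":  np.array([0.25, 0.30])}
```

A brighter but spectrally identical pixel (e.g. the vegetation reference doubled) is still classified as vegetation, which is exactly why SAM is attractive for scenes with variable illumination.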

  17. A Novel Thresholding Based Algorithm for Detection of Vertical Root Fracture in Nonendodontically Treated Premolar Teeth

    PubMed Central

    Johari, Masume; Esmaeili, Farzad; Andalib, Alireza; Garjani, Shabnam; Saberkari, Hamidreza

    2016-01-01

    In this paper, an efficient algorithm is proposed for detection of vertical root fractures (VRFs) in periapical (PA) and cone-beam computed tomography (CBCT) radiographs of nonendodontically treated premolar teeth. PA and CBCT images are divided into subcategories based on the fracture space between the two fragments: small, medium, and large for PA, and large for CBCT. These images are first denoised using a combination of block-matching 3-D filtering and a principal component analysis model. Then, we propose an adaptive thresholding algorithm based on the modified Wellner model to segment the fracture and canal. Finally, VRFs are identified with high accuracy by applying the continuous wavelet transform to the segmented radiographs and choosing the optimal value for sub-images based on the lowest interclass variance. Performance of the proposed algorithm is evaluated using several criteria. Results show that the specificities for PA and CBCT radiographs are 99.69 ± 0.22 and 99.02 ± 0.77, respectively. Furthermore, the sensitivity ranges from 61.90 to 77.39 for PA and from 79.54 to 100 for CBCT. Based on our statistical evaluation, CBCT imaging performs better than PA, so this technique could be a useful tool for clinical applications in determining VRFs. PMID:27186535
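Wellner-style adaptive thresholding keeps a running average and flags samples that fall a fixed percentage below it. A one-dimensional sketch on a synthetic intensity profile; the window size and the 15% margin are assumptions, and the paper's modified Wellner model operates on full radiographs rather than a single profile.

```python
import numpy as np

def wellner_threshold(signal, s=8, t=15):
    """Wellner-style adaptive thresholding of a 1-D intensity profile:
    a sample is 'dark' (e.g. fracture or canal) if it falls more than
    t percent below the moving average of the preceding samples."""
    out = np.zeros(len(signal), dtype=bool)
    avg = signal[0]
    for i, v in enumerate(signal):
        avg += (v - avg) / s              # exponential moving average
        out[i] = v < avg * (1 - t / 100)
    return out

profile = np.full(100, 200.0)             # bright dentin
profile[40:45] = 60.0                     # a dark fracture line
mask = wellner_threshold(profile)
```

Because the threshold adapts locally, the five dark samples are flagged even though a single global threshold on a radiograph with uneven exposure might miss them.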

  18. Application of centerline detection and deformable contours algorithms to segmenting the carotid lumen

    NASA Astrophysics Data System (ADS)

    Hachaj, Tomasz; Ogiela, Marek R.

    2014-03-01

    The main contribution of this article is to evaluate the utility of different state-of-the-art deformable contour models for segmenting carotid lumen walls from computed tomography angiography images. We have also proposed and tested a new tracking-based lumen segmentation method based on our evaluation results. The deformable contour algorithm (snake) is used to detect the outer wall of the vessel. We have examined four different snakes: with a balloon force, a distance force, and a gradient vector flow force, plus the method of active contours without edges. The algorithms were evaluated on a set of 32 artery lumens (16 from the common carotid artery (CCA) to internal carotid artery section and 16 from the CCA to external carotid artery section) in order to find the optimum deformable contour model for this task. Later, we evaluated different values of the energy terms in the method of active contours without edges, which turned out to be the best for our dataset, in order to find the optimal values for this particular segmentation task. The choice of particular weights in the energy term was evaluated statistically. The final Dice's coefficient at the level of 0.939±0.049 puts our algorithm among the best state-of-the-art methods for this task.
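Dice's coefficient, used above to score the segmentations against a reference, is simply twice the overlap divided by the total size of the two masks; the toy masks below are illustrative.

```python
import numpy as np

def dice(a, b):
    """Dice's coefficient between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((8, 8), dtype=bool)
auto[2:6, 2:6] = True        # 16-pixel automatic segmentation
manual = np.zeros((8, 8), dtype=bool)
manual[3:7, 3:7] = True      # 16-pixel manual reference, shifted by one
```

Here the masks overlap in a 3x3 block, giving 2*9/32 = 0.5625; identical masks give 1.0.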

  19. Research on the filtering algorithm in speed and position detection of maglev trains.

    PubMed

    Dai, Chunhui; Long, Zhiqiang; Xie, Yunde; Xue, Song

    2011-01-01

    This paper briefly introduces the traction system of a permanent magnet electrodynamic suspension (EDS) train. The synchronous traction mode based on long stators and track cable is described. A speed and position detection system is recommended; it is installed on board and used as the feedback end. Restricted by the maglev train's structure, the permanent magnet EDS train uses a non-contact method to detect its position. Because of vibration and the track joints, the position signal sent by the position sensor is always aberrant and noisy. To solve this problem, a linear discrete tracking-differentiator filtering algorithm is proposed. The filtering characteristics of the tracking-differentiator (TD) and of a TD group are analyzed. Four TDs in series are used in the signal processing unit. The results show that the tracking-differentiator performs well and allows the traction system to run normally.

  20. Research on the Filtering Algorithm in Speed and Position Detection of Maglev Trains

    PubMed Central

    Dai, Chunhui; Long, Zhiqiang; Xie, Yunde; Xue, Song

    2011-01-01

    This paper briefly introduces the traction system of a permanent magnet electrodynamic suspension (EDS) train. The synchronous traction mode based on long stators and track cable is described. A speed and position detection system is recommended; it is installed on board and used as the feedback end. Restricted by the maglev train's structure, the permanent magnet EDS train uses a non-contact method to detect its position. Because of vibration and the track joints, the position signal sent by the position sensor is always aberrant and noisy. To solve this problem, a linear discrete tracking-differentiator filtering algorithm is proposed. The filtering characteristics of the tracking-differentiator (TD) and of a TD group are analyzed. Four TDs in series are used in the signal processing unit. The results show that the tracking-differentiator performs well and allows the traction system to run normally. PMID:22164012
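A minimal sketch of a linear discrete tracking-differentiator: a second-order system whose first state tracks the noisy position signal and whose second state provides a filtered estimate of its derivative (the speed). The gains, step size, and test signal are assumptions, and the paper cascades four such stages rather than the single one shown here.

```python
import numpy as np

def linear_td(v, h=0.01, r=10.0):
    """Second-order linear tracking differentiator: x1 tracks the input v,
    x2 tracks its derivative; r sets the tracking bandwidth."""
    x1, x2 = v[0], 0.0
    x1s, x2s = [], []
    for vk in v:
        x1, x2 = (x1 + h * x2,
                  x2 + h * (-r * r * (x1 - vk) - 2.0 * r * x2))
        x1s.append(x1)
        x2s.append(x2)
    return np.array(x1s), np.array(x2s)

h = 0.01
t = np.arange(0, 20, h)
rng = np.random.default_rng(4)
position = 2.0 * t + 0.05 * rng.normal(size=t.size)  # noisy ramp, speed 2 m/s
x1, x2 = linear_td(position, h)
```

Despite the sample-to-sample noise, x2 settles near the true speed of 2 m/s, which is the property that lets the traction loop use the sensor signal as feedback.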

  1. Computer simulation and evaluation of edge detection algorithms and their application to automatic path selection

    NASA Technical Reports Server (NTRS)

    Longendorfer, B. A.

    1976-01-01

    The construction of an autonomous roving vehicle requires the development of complex data-acquisition and processing systems, which determine the path along which the vehicle travels. Thus, a vehicle must possess algorithms which can (1) reliably detect obstacles by processing sensor data, (2) maintain a constantly updated model of its surroundings, and (3) direct its immediate actions to further a long range plan. The first function consisted of obstacle recognition. Obstacles may be identified by the use of edge detection techniques. Therefore, the Kalman Filter was implemented as part of a large scale computer simulation of the Mars Rover. The second function consisted of modeling the environment. The obstacle must be reconstructed from its edges, and the vast amount of data must be organized in a readily retrievable form. Therefore, a Terrain Modeller was developed which assembled and maintained a rectangular grid map of the planet. The third function consisted of directing the vehicle's actions.

  2. Develop algorithms to improve detectability of defects in Sonic IR imaging NDE

    NASA Astrophysics Data System (ADS)

    Obeidat, Omar; Yu, Qiuye; Han, Xiaoyan

    2016-02-01

    Sonic Infrared (IR) technology is relatively new in the NDE family. It is a fast, wide-area imaging method that combines ultrasound excitation with infrared imaging: the former applies ultrasound energy to induce frictional heating at defects, while the latter captures the IR emission from the target. This technology can detect both surface and subsurface defects, such as cracks and disbonds/delaminations, in various materials, whether metal/metal alloy or composite. However, certain defects may produce only a very small IR signature buried in noise or heating patterns. In such cases, effectively extracting the defect signals becomes critical to identifying the defects. In this paper, we will present algorithms developed to improve the detectability of defects in Sonic IR.

  3. A robust active contour edge detection algorithm based on local Gaussian statistical model for oil slick remote sensing image

    NASA Astrophysics Data System (ADS)

    Jing, Yu; Wang, Yaxuan; Liu, Jianxin; Liu, Zhaoxia

    2015-08-01

    Edge detection is a crucial method for the location and quantity estimation of oil slicks when oil spills at sea. In this paper, we present a robust active contour edge detection algorithm for oil spill remote sensing images. In the proposed algorithm, we define a local Gaussian data fitting energy term with spatially varying means and variances, and this data fitting energy term is introduced into a global minimization active contour (GMAC) framework. The energy function minimization is achieved quickly through a dual formulation of the weighted total variation norm. The proposed algorithm avoids local minima, does not require the definition of an initial contour, and is robust to the weak boundaries, high noise, and severe intensity inhomogeneity existing in oil slick remote sensing images. Furthermore, edge detection of the oil slick and correction of intensity inhomogeneity are achieved simultaneously via the proposed algorithm. The experimental results show superior performance of the proposed algorithm over state-of-the-art edge detection algorithms. In addition, the proposed algorithm can also deal with the special case of images in which object and background have the same intensity means but different variances.
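The spatially varying means and variances behind a local Gaussian data-fitting energy can be computed with a box filter. A numpy-only sketch using an integral image; the window radius and the toy edge image are assumptions, and the full energy and its dual-formulation minimization are beyond this snippet.

```python
import numpy as np

def box_filter(img, radius):
    """Local mean over a (2*radius+1)^2 window via an integral image,
    with edge padding at the image borders."""
    pad = np.pad(img, radius, mode="edge")
    s = pad.cumsum(0).cumsum(1)
    s = np.pad(s, ((1, 0), (1, 0)))           # zero row/col for clean differences
    w = 2 * radius + 1
    area = s[w:, w:] - s[:-w, w:] - s[w:, :-w] + s[:-w, :-w]
    return area / (w * w)

def local_gaussian_stats(img, radius=3):
    """Spatially varying mean and variance, the statistics that parameterise
    a local Gaussian data-fitting energy."""
    mu = box_filter(img, radius)
    var = box_filter(img * img, radius) - mu * mu
    return mu, np.maximum(var, 0.0)

img = np.zeros((10, 10))
img[:, 5:] = 10.0                              # a sharp intensity edge
mu, var = local_gaussian_stats(img)
```

The local variance spikes along the edge and vanishes in homogeneous regions, which is what lets the local model separate objects from background even when their global means coincide.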

  4. [The Change Detection of High Spatial Resolution Remotely Sensed Imagery Based on OB-HMAD Algorithm and Spectral Features].

    PubMed

    Chen, Qiang; Chen, Yun-hao; Jiang, Wei-guo

    2015-06-01

    High spatial resolution remotely sensed imagery contains abundant detailed information about the earth's surface, and multi-temporal change detection on such imagery can reveal the variations of a geographical unit. For high spatial resolution imagery, traditional remote sensing change detection algorithms have obvious defects. In this paper, drawing on the object-based image analysis idea, we propose a semi-automatic threshold selection algorithm named OB-HMAD (object-based-hybrid-MAD), built on object-based image analysis and the multivariate alteration detection (MAD) algorithm, which brings the spectral features of remotely sensed imagery into the field of object-based change detection. Additionally, the OB-HMAD algorithm has been compared with other threshold segmentation algorithms in a change detection experiment. First, we obtained the image objects by multi-resolution segmentation. Second, we derived the object-based difference image objects using MAD and minimum noise fraction (MNF) rotation to improve the SNR of the image objects. Then, the changed objects or areas are classified using the histogram curvature analysis (HCA) method for semi-automatic threshold selection, which determines the threshold by locating the maximum curvature of the histogram and therefore offers better automation than other threshold segmentation algorithms. Finally, the change detection results are validated using a confusion matrix with field sample data. Worldview-2 imagery of Beijing from 2012 and 2013 was used as a case study to validate the proposed OB-HMAD algorithm. The experimental results indicated that the OB-HMAD algorithm, which integrates multi-channel spectral information, can be effectively used in multi-temporal high resolution remotely sensed imagery change detection, and that it has basically solved the "salt and pepper" problem which always exists in pixel-based change detection.

  6. An algorithmic approach for breakage-fusion-bridge detection in tumor genomes.

    PubMed

    Zakov, Shay; Kinsella, Marcus; Bafna, Vineet

    2013-04-01

    Breakage-fusion-bridge (BFB) is a mechanism of genomic instability characterized by the joining and subsequent tearing apart of sister chromatids. When this process is repeated during multiple rounds of cell division, it leads to patterns of copy number increases of chromosomal segments as well as fold-back inversions where duplicated segments are arranged head-to-head. These structural variations can then drive tumorigenesis. BFB can be observed in progress using cytogenetic techniques, but generally BFB must be inferred from data such as microarrays or sequencing collected after BFB has ceased. Making correct inferences from this data is not straightforward, particularly given the complexity of some cancer genomes and BFB's ability to generate a wide range of rearrangement patterns. Here we present algorithms to aid the interpretation of evidence for BFB. We first pose the BFB count-vector problem: given a chromosome segmentation and segment copy numbers, decide whether BFB can yield a chromosome with the given segment counts. We present a linear time algorithm for the problem, in contrast to a previous exponential time algorithm. We then combine this algorithm with fold-back inversions to develop tests for BFB. We show that, contingent on assumptions about cancer genome evolution, count vectors and fold-back inversions are sufficient evidence for detecting BFB. We apply the presented techniques to paired-end sequencing data from pancreatic tumors and confirm a previous finding of BFB as well as identify a chromosomal region likely rearranged by BFB cycles, demonstrating the practicality of our approach.
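In the string model behind BFB, each cycle replaces the chromosome by a prefix concatenated with its own reversal (the duplicated head-to-head segments). The paper's contribution is a linear-time decision algorithm for the count-vector problem; the sketch below is only an exponential brute-force enumeration of reachable count vectors, useful for intuition on tiny instances.

```python
def bfb_count_vectors(n_segments, max_len=12):
    """Brute-force enumeration of BFB-reachable count vectors: a BFB cycle
    replaces the chromosome s by p + reverse(p) for some non-empty prefix p
    (the suffix is lost at breakage, the prefix fuses with its sister)."""
    start = tuple(range(n_segments))          # segments 0..n-1, one copy each
    seen, frontier = {start}, [start]
    vectors = set()
    while frontier:
        s = frontier.pop()
        vectors.add(tuple(s.count(seg) for seg in range(n_segments)))
        for cut in range(1, len(s) + 1):
            child = s[:cut] + s[:cut][::-1]   # prefix + its reversal
            if len(child) <= max_len and child not in seen:
                seen.add(child)
                frontier.append(child)
    return vectors

vecs = bfb_count_vectors(2)
```

For two segments, (2, 2) is reachable (one cycle over the whole chromosome) while (1, 2) is not: any cycle that duplicates the second segment must also duplicate the first, so its count can never stay at one. Deciding such reachability without enumeration is exactly the count-vector problem solved in linear time in the paper.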

  7. Development of a new time domain-based algorithm for train detection and axle counting

    NASA Astrophysics Data System (ADS)

    Allotta, B.; D'Adamio, P.; Meli, E.; Pugi, L.

    2015-12-01

    This paper presents an innovative train detection algorithm able to perform train localisation and, at the same time, to estimate the train's speed, its crossing times at a fixed point of the track, and the axle count. The proposed solution uses the same approach to evaluate all these quantities, starting from the knowledge of generic track inputs directly measured on the track (for example, the vertical forces on the sleepers, the rail deformation, and the rail stress). More particularly, all the inputs are processed through cross-correlation operations to extract the required information in terms of speed, crossing time instants, and axle count. This approach has the advantage of being simple and less invasive than standard ones (it requires less equipment) and represents a more reliable and robust solution against numerical noise because it exploits the whole shape of the input signal and not only the peak values. A suitable and accurate multibody model of a railway vehicle and flexible track has also been developed by the authors to test the algorithm when experimental data are not available and, in general, under any operating conditions (fundamental for verifying the algorithm's accuracy and robustness). The railway vehicle chosen as a benchmark is the Manchester Wagon, modelled in the Adams VI-Rail environment. The physical model of the flexible track has been implemented in the Matlab and Comsol Multiphysics environments. A simulation campaign has been performed to verify the performance and robustness of the proposed algorithm, and the results are quite promising. The research has been carried out in cooperation with Ansaldo STS and ECM Spa.
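The cross-correlation idea (the same signal observed at two points on the track, with the lag of the correlation peak giving the travel time and hence the speed) can be sketched with synthetic sleeper-force signals. The sensor spacing, sampling rate, pulse shapes, and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

fs = 1000.0                     # sampling rate of the track sensors, Hz
sensor_gap = 5.0                # distance between the two measuring points, m
speed_true = 25.0               # m/s
delay = sensor_gap / speed_true # 0.2 s -> 200 samples at 1 kHz

# A synthetic "vertical force on the sleeper" pulse train (axles passing).
n = 2000
sig = np.zeros(n)
for axle in (300, 420, 900, 1020):            # sample indices of axle peaks
    sig[axle - 10:axle + 10] = 1.0
upstream = sig + 0.05 * rng.normal(size=n)
downstream = np.roll(sig, int(delay * fs)) + 0.05 * rng.normal(size=n)

# Cross-correlate: the lag of the peak gives the travel time between sensors.
xc = np.correlate(downstream, upstream, mode="full")
lag = int(np.argmax(xc)) - (n - 1)
speed_est = sensor_gap / (lag / fs)
```

Because the estimate uses the whole shape of the pulse train rather than individual peak values, it stays accurate in the presence of measurement noise, which is the robustness argument made in the abstract.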

  8. Evaluation of algorithms for registry-based detection of acute myocardial infarction following percutaneous coronary intervention

    PubMed Central

    Egholm, Gro; Madsen, Morten; Thim, Troels; Schmidt, Morten; Christiansen, Evald Høj; Bøtker, Hans Erik; Maeng, Michael

    2016-01-01

    Background Registry-based monitoring of the safety and efficacy of interventions in patients with ischemic heart disease requires validated algorithms. Objective We aimed to evaluate algorithms to identify acute myocardial infarction (AMI) in the Danish National Patient Registry following percutaneous coronary intervention (PCI). Methods Patients enrolled in clinical drug-eluting stent studies at the Department of Cardiology, Aarhus University Hospital, Denmark, from January 2006 to August 2012 were included. These patients were evaluated for ischemic events, including AMI, during follow-up, using end point committee adjudication of AMI as the reference standard. Results Of 5,719 included patients, 285 suffered AMI within a mean follow-up time of 3 years after stent implantation. An AMI discharge diagnosis (primary or secondary) from any acute or elective admission had a sensitivity of 95%, a specificity of 93%, and a positive predictive value of 42%. Restriction to acute admissions decreased the sensitivity to 94% but increased the specificity to 98% and the positive predictive value to 73%. Further restriction to include only AMI as primary diagnosis from acute admissions decreased the sensitivity further to 82%, but increased the specificity to 99% and the positive predictive value to 81%. Restriction to patients admitted to hospitals with a coronary angiography catheterization laboratory increased the positive predictive value to 87%. Conclusion Algorithms utilizing additional information from the Danish National Patient Registry yield different sensitivities, specificities, and predictive values in registry-based detection of AMI following PCI. We were able to identify AMI following PCI with moderate-to-high validity. However, the choice of algorithm will depend on the specific study purpose. PMID:27799822
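
    The three validation metrics reported above all come from a standard 2x2 confusion table. The sketch below shows the computation; the counts used are illustrative reconstructions chosen to roughly match the reported 95%/93%/42% figures for the broadest algorithm, not the study's actual table.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and positive predictive value from 2x2 counts."""
    sensitivity = tp / (tp + fn)  # true positives among all adjudicated AMI cases
    specificity = tn / (tn + fp)  # true negatives among all non-AMI patients
    ppv = tp / (tp + fp)          # adjudicated AMI among registry-flagged cases
    return sensitivity, specificity, ppv

# Illustrative counts only: 285 adjudicated AMI among 5,719 patients.
sens, spec, ppv = diagnostic_metrics(tp=271, fp=374, tn=5060, fn=14)
print(f"{sens:.0%} {spec:.0%} {ppv:.0%}")  # 95% 93% 42%
```

    The trade-off described in the Results follows directly from these definitions: restricting the algorithm removes false positives faster than true positives, raising specificity and PPV at some cost to sensitivity.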

  9. Algorithms for Spectral Decomposition with Applications to Optical Plume Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Srivastava, Askok N.; Matthews, Bryan; Das, Santanu

    2008-01-01

    The analysis of spectral signals for features that represent physical phenomena is ubiquitous in the science and engineering communities. There are two main approaches to extracting relevant features from these high-dimensional data streams. The first relies on extracting features using a physics-based paradigm, where the underlying physical mechanism that generates the spectra is used to infer the most important features in the data stream. We focus on a complementary methodology: a data-driven technique that is informed by the underlying physics but also has the ability to adapt to unmodeled system attributes and dynamics. We discuss the following four algorithms: the Spectral Decomposition Algorithm (SDA), Non-Negative Matrix Factorization (NMF), Independent Component Analysis (ICA) and Principal Components Analysis (PCA), and compare their performance on a spectral emulator which we use to generate artificial data with known statistical properties. This spectral emulator mimics the real-world phenomena arising from the plume of the space shuttle main engine; it can be used to validate the results of various spectral decomposition algorithms and is very useful for situations where real-world systems have very low probabilities of fault or failure. Our results indicate that methods like SDA and NMF provide a straightforward way of incorporating prior physical knowledge, while NMF with a tuning mechanism can give superior performance on some tests. We demonstrate these algorithms by detecting potential system-health issues in data from a spectral emulator with tunable health parameters.
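
    Of the four methods compared, NMF is the one with a nonnegativity constraint matching physical spectra. A minimal multiplicative-update sketch, run on toy artificial "spectra" in the spirit of the emulator data, looks like this (the basis shapes and dimensions are invented for the example):

```python
import numpy as np

def nmf(V, rank, n_iter=500, eps=1e-9):
    """Basic multiplicative-update NMF: factor nonnegative V ≈ W @ H, W, H >= 0."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update basis-spectrum loadings
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update mixing weights
    return W, H

# Toy data: 30 artificial spectra, each a nonnegative mix of two basis shapes.
basis = np.array([[1.0, 0.1, 0.0, 0.5],
                  [0.0, 0.8, 1.0, 0.1]])
weights = np.random.default_rng(1).random((30, 2))
V = weights @ basis
W, H = nmf(V, rank=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(rel_err < 0.05)  # the rank-2 structure is recovered
```

    The nonnegativity of W and H is what makes the recovered components interpretable as additive emission features, which is the "incorporating prior physical knowledge" point made above.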

  10. A Prototype Hail Detection Algorithm and Hail Climatology Developed with the Advanced Microwave Sounding Unit (AMSU)

    NASA Technical Reports Server (NTRS)

    Ferraro, Ralph; Beauchamp, James; Cecil, Dan; Heymsfeld, Gerald

    2015-01-01

    In previous studies published in the open literature, a strong relationship between the occurrence of hail and microwave brightness temperatures (primarily at 37 and 85 GHz) was documented. These studies were performed with the Nimbus-7 SMMR, the TRMM Microwave Imager (TMI) and, most recently, the Aqua AMSR-E sensor. This led to climatologies of hail frequency from TMI and AMSR-E; however, limitations include the geographical domain of the TMI sensor (35 S to 35 N) and the overpass time of the Aqua satellite (1:30 am/pm local time), both of which hinder an accurate mapping of hail events over the global domain and the full diurnal cycle. Nonetheless, these studies presented exciting new applications for passive microwave sensors. Since 1998, NOAA and EUMETSAT have been operating the AMSU-A/B and the MHS on several operational satellites: NOAA-15 through NOAA-19, and MetOp-A and -B. With multiple satellites in operation since 2000, the AMSU/MHS sensors provide near-global coverage every 4 hours, thus offering much greater spatial and temporal sampling than TRMM or AMSR-E. With similar observation frequencies near 30 and 85 GHz, and three additional channels in the 183 GHz water vapor band, the potential exists to detect strong convection associated with severe storms on a more comprehensive time and space scale. In this study, we develop a prototype AMSU-based hail detection algorithm through the use of collocated satellite and surface hail reports over the continental U.S. for a 12-year period (2000-2011). Compared with the surface observations, the algorithm detects approximately 40 percent of hail occurrences. The simple threshold algorithm is then used to generate a hail climatology based on all available AMSU observations during 2000-11, stratified in several ways, including total hail occurrence by month (March through September), total annual, and over the diurnal cycle. Independent comparisons are made to similar data sets derived from other
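
    The abstract describes a "simple threshold algorithm" on scattering-channel brightness temperatures but does not give the thresholds. The toy below shows only the shape of such a test; the channel choice and threshold values are illustrative placeholders, not the values derived in the study.

```python
def hail_candidate(tb_89_k, tb_183_k, thresh_89=200.0, thresh_183=210.0):
    """Flag possible hail when scattering-channel brightness temperatures (K)
    are strongly depressed by ice scattering in deep convection.
    Thresholds here are invented for illustration."""
    return tb_89_k < thresh_89 and tb_183_k < thresh_183

print(hail_candidate(180.0, 190.0))  # depressed TBs -> True
print(hail_candidate(260.0, 255.0))  # warm scene    -> False
```

    A real algorithm would be tuned against the collocated surface hail reports, trading detection rate against false alarms, which is how the roughly 40 percent detection figure above arises.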

  11. Simple and Robust Realtime QRS Detection Algorithm Based on Spatiotemporal Characteristic of the QRS Complex.

    PubMed

    Kim, Jinkwon; Shin, Hangsik

    2016-01-01

    The purpose of this research is to develop an intuitive and robust realtime QRS detection algorithm based on the physiological characteristics of the electrocardiogram waveform. The proposed algorithm finds the QRS complex based on the dual criteria of the amplitude and duration of the QRS complex. It consists of simple operations, such as finite impulse response filtering, differentiation and thresholding, without complex and computationally expensive operations like a wavelet transformation. The QRS detection performance is evaluated using both the MIT-BIH arrhythmia database and the AHA ECG database (a total of 435,700 beats). The sensitivity (SE) and positive predictive value (PPV) were 99.85% and 99.86%, respectively. Broken down by database, the SE and PPV were 99.90% and 99.91% for the MIT-BIH database and 99.84% and 99.84% for the AHA database, respectively. The result of the noisy environment test using record 119 from the MIT-BIH database indicated that the proposed method was scarcely affected by noise above 5 dB SNR (SE = 100%, PPV > 98%) without the need for an additional de-noising or back-searching process. PMID:26943949
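
    The dual amplitude-plus-duration criterion can be illustrated with a toy detector: a candidate is accepted only if the signal exceeds an amplitude threshold for a QRS-like duration. This is a sketch of the idea only, not the authors' filtering pipeline, and all signal parameters below are invented.

```python
import numpy as np

def detect_qrs(ecg, fs, amp_thresh, min_dur=0.01, max_dur=0.12):
    """Accept a suprathreshold excursion as a QRS complex only if its
    duration lies in a physiologically plausible range."""
    above = ecg > amp_thresh
    d = np.diff(above.astype(int))
    starts = np.flatnonzero(d == 1) + 1
    ends = np.flatnonzero(d == -1) + 1
    peaks = []
    for s, e in zip(starts, ends):
        if min_dur <= (e - s) / fs <= max_dur:
            peaks.append(s + int(np.argmax(ecg[s:e])))  # R-peak sample index
    return peaks

# Synthetic trace: three narrow "QRS" spikes plus one broad, low T-wave-like bump.
fs = 250.0
t = np.arange(0, 3.0, 1 / fs)
ecg = sum(np.exp(-((t - c) ** 2) / (2 * 0.01 ** 2)) for c in (0.5, 1.5, 2.5))
ecg += 0.3 * np.exp(-((t - 1.0) ** 2) / (2 * 0.08 ** 2))
peaks = detect_qrs(ecg, fs, amp_thresh=0.5)
print(len(peaks))  # 3: the broad bump fails the amplitude criterion
```

    Requiring both criteria is what rejects tall single-sample noise spikes (too short) and broad baseline or T-wave excursions (too low or too long) without a separate de-noising stage.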

  12. Detectability limitations with 3-D point reconstruction algorithms using digital radiography

    SciTech Connect

    Lindgren, Erik

    2015-03-31

    The estimated impact of pores in clusters on component fatigue will be highly conservative when based on 2-D rather than 3-D pore positions. Positioning and sizing defects in 3-D using digital radiography and 3-D point reconstruction algorithms generally requires less inspection time than X-ray computed tomography and in some cases works better with planar geometries. However, the increase in prior assumptions about the object and the defects increases the intrinsic uncertainty in the resulting nondestructive evaluation output. In this paper, the uncertainty arising when detecting pore defect clusters with point reconstruction algorithms is quantified using simulations. The simulation model is compared to and mapped to experimental data. The main issue with the uncertainty is the possible masking (zero detectability) of smaller defects around a slightly larger defect. In addition, the uncertainty is explored in connection with the expected effects on component fatigue life and for different amounts of prior object-defect assumptions.

  13. Implementing a shadow detection algorithm for synthetic vision systems in reconfigurable hardware

    NASA Astrophysics Data System (ADS)

    Ladeji-Osias, Jumoke; Theobalds, Andre; Nare, Otsebele; Wandji, Theirry; Scott, Craig; Nyarko, Kofi

    2006-05-01

    The integrity monitor for synthetic vision systems provides pilots with a consistency check between stored Digital Elevation Models (DEM) and real-time sensor data. This paper discusses the implementation of the Shadow Detection and Extraction (SHADE) algorithm in reconfigurable hardware to increase the efficiency of the design. The SHADE algorithm correlates data from a weather radar and a DEM to determine occluded regions of the flight path terrain. This correlation of the weather radar and DEM data occurs in two parallel threads, which are then fed into a disparity checker. The DEM thread is broken into four main sub-functions: 1) synchronization and translation of the aircraft's GPS coordinates to the weather radar, 2) mapping range bins to coordinates and computing depression angles, 3) mapping state assignments to range bins, and 4) shadow region edge detection. This correlation must be done in real time; therefore, a hardware implementation is ideal due to the amount of data to be processed. The hardware of choice is the field programmable gate array because of its programmability, reusability, and computational ability. Assigning states to each range bin is the most computationally intensive process, and it is implemented as a finite state machine (FSM). Results of this work focus on the implementation of the FSM.

  14. Small sample training and test selection method for optimized anomaly detection algorithms in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Mindrup, Frank M.; Friend, Mark A.; Bauer, Kenneth W.

    2012-01-01

    There are numerous anomaly detection algorithms proposed for hyperspectral imagery (HSI). Robust parameter design (RPD) techniques provide an avenue to select robust settings capable of operating consistently across a large variety of image scenes. Many researchers in this area are faced with a paucity of data. Unfortunately, there are no data splitting methods for model validation of datasets with small sample sizes. Typically, training and test sets of hyperspectral images are chosen randomly. Previous research has developed a framework for optimizing anomaly detection in HSI by considering specific image characteristics as noise variables within the context of RPD; these characteristics include the Fisher's score, the ratio of target pixels and the number of clusters. We have developed a method for selecting hyperspectral image training and test subsets that yields consistent RPD results based on these noise features. These subsets are not necessarily orthogonal, but still provide improvements over random training and test subset assignments by maximizing the volume and average distance between image noise characteristics. The small-sample training and test selection method is contrasted with randomly selected training sets as well as training sets chosen by the CADEX and DUPLEX algorithms for the well-known Reed-Xiaoli anomaly detector.
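
    The CADEX comparison method mentioned above (also known as Kennard-Stone selection) is a distance-based alternative to random splitting: it greedily picks the samples farthest from those already chosen. A sketch, with the feature matrix dimensions invented for the example:

```python
import numpy as np

def kennard_stone(X, n_select):
    """CADEX / Kennard-Stone subset selection: repeatedly choose the sample
    that maximizes the minimum distance to the already-selected set."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Seed with the two most mutually distant samples.
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    chosen = [int(i), int(j)]
    while len(chosen) < n_select:
        remaining = [k for k in range(len(X)) if k not in chosen]
        # For each candidate, the distance to its nearest chosen sample.
        min_d = dist[np.ix_(remaining, chosen)].min(axis=1)
        chosen.append(remaining[int(np.argmax(min_d))])
    return chosen

rng = np.random.default_rng(0)
X = rng.random((50, 3))  # e.g. 50 images described by 3 noise-feature values
train_idx = kennard_stone(X, 10)
print(len(train_idx), len(set(train_idx)))  # 10 distinct training samples
```

    Selecting by coverage of the noise-feature space rather than at random is the same intuition as the proposed method's maximization of volume and average distance between image characteristics.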

  15. Object detection utilizing a linear retrieval algorithm for thermal infrared imagery

    SciTech Connect

    Ramsey, M.S.

    1996-11-01

    Thermal infrared (TIR) spectroscopy and remote sensing have proven to be extremely valuable tools for mineralogic discrimination. One technique for sub-pixel detection and data reduction, known as a spectral retrieval or unmixing algorithm, will prove useful in the analysis of data from scheduled TIR orbital instruments. This study represents the first quantitative attempt to identify the limits of the model, specifically concentrating on the TIR. The algorithm was written and applied to laboratory data, testing the effects of particle size, noise, and multiple endmembers, then adapted to operate on airborne Thermal Infrared Multispectral Scanner data of the Kelso Dunes, CA, Meteor Crater, AZ, and Medicine Lake Volcano, CA. Results indicate that linear spectral unmixing can produce accurate endmember detection to within an average of 5%. In addition, the effects of vitrification and textural variations were modeled. The ability to predict mineral or rock abundances becomes extremely useful in tracking sediment transport, desertification, and potential hazard assessment in remote volcanic regions. 26 refs., 3 figs.
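
    Linear spectral unmixing models a mixed pixel as a nonnegative combination of endmember spectra and inverts for the abundances. A minimal sketch using nonnegative least squares; the endmember matrix and abundances here are synthetic placeholders, not the study's laboratory spectra:

```python
import numpy as np
from scipy.optimize import nnls

# Columns of E stand in for laboratory endmember spectra (6 bands, 3 endmembers);
# a mixed pixel is modeled as a nonnegative linear combination of the columns.
rng = np.random.default_rng(0)
E = rng.random((6, 3))
true_abund = np.array([0.6, 0.3, 0.1])
pixel = E @ true_abund                # noise-free mixed-pixel spectrum

est, residual = nnls(E, pixel)        # nonnegative least-squares inversion
est = est / est.sum()                 # normalize to fractional abundances
print(np.round(est, 3))               # ≈ [0.6 0.3 0.1]
```

    With noisy multispectral data and imperfect endmember libraries the inversion degrades, which is what the reported average 5% abundance accuracy quantifies.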

  16. Development of Algorithms and Error Analyses for the Short Baseline Lightning Detection and Ranging System

    NASA Technical Reports Server (NTRS)

    Starr, Stanley O.

    1998-01-01

    NASA, at the John F. Kennedy Space Center (KSC), developed and operates a unique high-precision lightning location system to provide lightning-related weather warnings. These warnings are used to stop lightning-sensitive operations such as space vehicle launches and ground operations where equipment and personnel are at risk. The data are provided to Range Weather Operations (45th Weather Squadron, U.S. Air Force), where they are used with other meteorological data to issue weather advisories and warnings for Cape Canaveral Air Station and KSC operations. This system, called Lightning Detection and Ranging (LDAR), provides users with a three-dimensional graphical display of 66 megahertz radio frequency events generated by lightning processes. The locations of these events provide a sound basis for the prediction of lightning hazards. This document provides the basis for the design approach and data analysis for a system of radio frequency receivers that provides azimuth and elevation data for lightning pulses detected simultaneously by the LDAR system. The intent is for this direction-finding system to correct and augment the data provided by LDAR and, thereby, increase the rate of valid data and correct or discard any invalid data. This document develops the necessary equations and algorithms, identifies sources of systematic errors and means to correct them, and analyzes the algorithms for random error. This data analysis approach is not found in the existing literature and was developed to facilitate the operation of this Short Baseline LDAR (SBLDAR). These algorithms may also be useful for other direction-finding systems using radio pulses or ultrasonic pulse data.
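
    The core geometric relation in short-baseline direction finding is that the time-difference-of-arrival of a plane wave across two receivers fixes the angle of arrival. The sketch below shows only this textbook relation, not SBLDAR's actual equations or error corrections; the 100 m baseline is an invented example value.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def bearing_from_tdoa(dt_s, baseline_m, c=C):
    """Angle of arrival (degrees from broadside) for a plane wave crossing
    a short two-receiver baseline: sin(theta) = c * dt / d."""
    s = c * dt_s / baseline_m
    return math.degrees(math.asin(max(-1.0, min(1.0, s))))

# A lightning pulse arriving 30 degrees off broadside on a 100 m baseline:
dt = 100.0 * math.sin(math.radians(30.0)) / C
print(round(bearing_from_tdoa(dt, 100.0), 1))  # 30.0
```

    Because the usable timing differences are only hundreds of nanoseconds on such a baseline, systematic timing errors map directly into angle errors, which motivates the error analysis the document develops.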

  17. A multilevel system of algorithms for detecting and isolating signals in a background of noise

    NASA Technical Reports Server (NTRS)

    Gurin, L. S.; Tsoy, K. A.

    1978-01-01

    Signal information is processed with the help of algorithms, and then on the basis of such processing, a part of the information is subjected to further processing with the help of more precise algorithms. Such a system of algorithms is studied, a comparative evaluation of a series of lower level algorithms is given, and the corresponding algorithms of higher level are characterized.

  18. Development of novel algorithm and real-time monitoring ambulatory system using Bluetooth module for fall detection in the elderly.

    PubMed

    Hwang, J Y; Kang, J M; Jang, Y W; Kim, H

    2004-01-01

    A novel algorithm and a real-time ambulatory monitoring system for fall detection in elderly people are described. Our system comprises an accelerometer, a tilt sensor and a gyroscope; for real-time monitoring, we used Bluetooth. The accelerometer measures kinetic force, while the tilt sensor and gyroscope estimate body posture. We also propose an algorithm for fall detection using signals obtained from the system attached to the chest. To evaluate our system and algorithm, we experimented on three people aged over 26 years. The experiment covering four cases, forward fall, backward fall, side fall and sit-stand, was repeated ten times, and an experiment in daily life activity was performed once for each subject. These experiments showed that our system and algorithm could distinguish between falling and daily life activity, with a fall detection accuracy of 96.7%. Our system is especially adapted for long-term, real-time ambulatory monitoring of elderly people in emergency situations.
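
    A common way to combine these sensors, sketched below, is an impact spike in acceleration magnitude followed by a sustained near-horizontal posture. The rule and all thresholds are illustrative assumptions for the example, not the paper's calibrated algorithm.

```python
import numpy as np

def detect_fall(accel_g, tilt_deg, impact_g=2.5, lying_deg=60.0):
    """Toy fall rule: an acceleration impact spike followed by a sustained
    near-horizontal trunk angle. Thresholds are invented for illustration."""
    impacts = np.flatnonzero(accel_g > impact_g)
    if impacts.size == 0:
        return False                           # no impact -> daily activity
    post = tilt_deg[impacts[0]:]
    return bool(np.median(post) > lying_deg)   # lying after impact -> fall

# Simulated forward fall: impact spike, then trunk near horizontal.
fall_acc = np.array([1.0, 1.1, 3.4, 1.2, 1.0, 1.0])
fall_tilt = np.array([10, 12, 40, 80, 85, 85])
# Simulated sit-stand: moderate acceleration, posture returns upright.
sit_acc = np.array([1.0, 1.6, 1.8, 1.3, 1.0, 1.0])
sit_tilt = np.array([10, 30, 45, 30, 15, 10])
print(detect_fall(fall_acc, fall_tilt), detect_fall(sit_acc, sit_tilt))  # True False
```

    Requiring both the impact and the posture change is what separates falls from vigorous daily activities such as sitting down quickly, which produce the spike but not the lying posture.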

  19. Advanced Algorithms and High-Performance Testbed for Large-Scale Site Characterization and Subsurface Target Detecting Using Airborne Ground Penetrating SAR

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Collier, James B.; Citak, Ari

    1997-01-01

    A team comprising the US Army Corps of Engineers (Omaha District and Engineering and Support Center, Huntsville), the Jet Propulsion Laboratory (JPL), Stanford Research Institute (SRI), and Montgomery Watson is currently in the process of planning and conducting the largest ever survey at the Former Buckley Field (60,000 acres), in Colorado, using SRI's airborne, ground penetrating, Synthetic Aperture Radar (SAR). The purpose of this survey is the detection of surface and subsurface Unexploded Ordnance (UXO) and, in a broader sense, site characterization for identification of contaminated as well as clear areas. In preparation for such a large-scale survey, JPL has been developing advanced algorithms and a high-performance testbed for processing the massive amount of SAR data expected from this site. Two key requirements of this project are the accuracy (in terms of UXO detection) and speed of SAR data processing. The first key feature of this testbed is a large degree of automation, minimizing the need for human perception in the processing, to achieve an acceptable processing rate of several hundred acres per day. For accurate UXO detection, novel algorithms have been developed and implemented. These algorithms analyze dual-polarized (HH and VV) SAR data. They are based on the correlation of HH and VV SAR data and involve a rather large set of parameters for accurate detection of UXO. For each specific site, this set of parameters can be optimized by using ground truth data (i.e., known surface and subsurface UXOs). In this paper, we discuss these algorithms and their successful application to the detection of surface and subsurface anti-tank mines using a data set from Yuma Proving Ground, AZ, acquired by the SRI SAR.
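
    The HH/VV correlation idea can be sketched as a per-window Pearson correlation of the two polarization channels, with high local correlation flagged as a candidate target. This illustrates only the correlation feature; the window size, threshold, and synthetic imagery are invented, and JPL's full parameterized algorithm is not reproduced here.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_hh_vv_corr(hh, vv, win=5):
    """Per-window Pearson correlation of HH and VV magnitudes; coherent
    scatterers tend to show high local HH/VV correlation against clutter."""
    Hw = sliding_window_view(hh, (win, win)).astype(float)
    Vw = sliding_window_view(vv, (win, win)).astype(float)
    Hm = Hw.mean(axis=(-2, -1), keepdims=True)
    Vm = Vw.mean(axis=(-2, -1), keepdims=True)
    num = ((Hw - Hm) * (Vw - Vm)).sum(axis=(-2, -1))
    den = np.sqrt(((Hw - Hm) ** 2).sum(axis=(-2, -1)) *
                  ((Vw - Vm) ** 2).sum(axis=(-2, -1)))
    return num / (den + 1e-12)

rng = np.random.default_rng(0)
hh = rng.random((40, 40))
vv = rng.random((40, 40))             # uncorrelated clutter background
vv[10:20, 10:20] = hh[10:20, 10:20]   # a "target" region with matched returns
corr = local_hh_vv_corr(hh, vv)
print(corr[12, 12] > 0.99)            # window inside the target region -> True
```

    Thresholding this correlation map, plus site-specific tuning of the window and decision parameters against ground truth, corresponds to the parameter optimization described above.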

  20. Performing target specific band reduction using artificial neural networks and assessment of its efficacy using various target detection algorithms

    NASA Astrophysics Data System (ADS)

    Yadav, Deepti; Arora, M. K.; Tiwari, K. C.; Ghosh, J. K.

    2016-04-01

    Hyperspectral imaging is a powerful tool in the field of remote sensing and has been used for many applications, such as mineral detection, detection of landmines, and target detection. Major issues in target detection using hyperspectral imagery (HSI) are spectral variability, noise, small target size, huge data dimensions, high computation cost, and complex backgrounds. Many of the popular detection algorithms do not work for difficult targets (e.g., small or camouflaged ones) and may produce high false alarm rates. Thus, target/background discrimination is a key issue, and analyzing a target's behaviour in realistic environments is therefore crucial for the accurate interpretation of hyperspectral imagery. Using standard spectral libraries to study a target's behaviour has the limitation that the targets were measured in different environmental conditions than those of the application. This study uses spectral data of the same target that was used during collection of the HSI image. This paper analyzes the spectra of targets in such a way that each target can be spectrally distinguished from a mixture of spectral data. An artificial neural network (ANN) has been used to identify the spectral range for reducing data and further its efficacy for improving target