Science.gov

Sample records for aberration detection algorithms

  1. Recombinant Temporal Aberration Detection Algorithms for Enhanced Biosurveillance

    PubMed Central

    Murphy, Sean Patrick; Burkom, Howard

    2008-01-01

    Objective Broadly, this research aims to improve the outbreak detection performance and, therefore, the cost effectiveness of automated syndromic surveillance systems by building novel, recombinant temporal aberration detection algorithms from components of previously developed detectors. Methods This study decomposes existing temporal aberration detection algorithms into two sequential stages and investigates the individual impact of each stage on outbreak detection performance. The data forecasting stage (Stage 1) generates predictions of time series values a certain number of time steps in the future based on historical data. The anomaly measure stage (Stage 2) compares features of this prediction to corresponding features of the actual time series to compute a statistical anomaly measure. A Monte Carlo simulation procedure is then used to examine the recombinant algorithms’ ability to detect synthetic aberrations injected into authentic syndromic time series. Results New methods obtained with procedural components of published, sometimes widely used, algorithms were compared to the known methods using authentic datasets with plausible stochastic injected signals. Performance improvements were found for some of the recombinant methods, and these improvements were consistent over a range of data types, outbreak types, and outbreak sizes. For gradual outbreaks, the WEWD MovAvg7+WEWD Z-Score recombinant algorithm performed best; for sudden outbreaks, the HW+WEWD Z-Score performed best. Conclusion This decomposition was found not only to yield valuable insight into the effects of the aberration detection algorithms but also to produce novel combinations of data forecasters and anomaly measures with enhanced detection performance. PMID:17947614
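
The two-stage decomposition described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: Stage 1 is a 7-day moving-average forecaster (in the spirit of the MovAvg7 component) and Stage 2 a Z-score anomaly measure; the function names and sample counts are invented for the example.

```python
def moving_average_forecast(series, window=7):
    """Stage 1: forecast the next value as the mean of the last `window` points."""
    recent = series[-window:]
    return sum(recent) / len(recent)

def z_score_anomaly(series, observed, window=7):
    """Stage 2: standardize the forecast residual by the recent sample std."""
    recent = series[-window:]
    mean = sum(recent) / len(recent)
    var = sum((x - mean) ** 2 for x in recent) / (len(recent) - 1)
    std = var ** 0.5 if var > 0 else 1.0
    return (observed - moving_average_forecast(series, window)) / std

history = [12, 10, 11, 13, 12, 11, 12]   # baseline daily syndromic counts
spike = z_score_anomaly(history, 30)     # injected outbreak-like spike
quiet = z_score_anomaly(history, 12)     # ordinary day
print(spike, quiet)
```

Swapping in a different Stage 1 forecaster (e.g., Holt-Winters) while keeping the same Stage 2 measure is exactly the kind of recombination the study evaluates.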

  2. Aberration features in directional dark matter detection

    SciTech Connect

    Bozorgnia, Nassim; Gelmini, Graciela B.; Gondolo, Paolo E-mail: gelmini@physics.ucla.edu

    2012-08-01

    The motion of the Earth around the Sun causes an annual change in the magnitude and direction of the arrival velocity of dark matter particles on Earth, in a way analogous to aberration of stellar light. In directional detectors, aberration of weakly interacting massive particles (WIMPs) modulates the pattern of nuclear recoil directions in a way that depends on the orbital velocity of the Earth and the local galactic distribution of WIMP velocities. Knowing the former, WIMP aberration can give information on the latter, besides being a curious way of confirming the revolution of the Earth and the extraterrestrial provenance of WIMPs. While observing the full aberration pattern requires extremely large exposures, we claim that the annual variation of the mean recoil direction or of the event counts over specific solid angles may be detectable with moderately large exposures. For example, integrated counts over Galactic hemispheres separated by planes perpendicular to Earth's orbit would modulate annually, resulting in Galactic Hemisphere Annual Modulations (GHAM) with amplitudes larger than the usual non-directional annual modulation.

  3. Optimizing chromatic aberration calibration using a novel genetic algorithm

    NASA Astrophysics Data System (ADS)

    Fang, Yi-Chin; Liu, Tung-Kuan; MacDonald, John; Chou, Jyh-Horng; Wu, Bo-Wen; Tsai, Hsien-Lin; Chang, En-Hao

    2006-10-01

    Advances in digitalized image optics has increased the importance of chromatic aberration. The axial and lateral chromatic aberrations of an optical lens depends on the choice of optical glass. Based on statistics from glass companies worldwide, more than 300 optical glasses have been developed for commercial purposes. However, the complexity of optical systems makes it extremely difficult to obtain the right solution to eliminate small chromatic aberration. Even the damped least-squares technique, which is a ray-tracing-based method, is limited owing to its inability to identify an enhanced optical system configuration. Alternatively, this study instead attempts to eliminate even negligible axial and lateral colour aberration by using algorithms involving the theories of geometric optics in triplet lens, binary and real encoding, multiple dynamic crossover and random gene mutation techniques.

  4. Detecting independent and recurrent copy number aberrations using interval graphs

    PubMed Central

    Wu, Hsin-Ta; Hajirasouliha, Iman; Raphael, Benjamin J.

    2014-01-01

    Motivation: Somatic copy number aberrations (SCNAs) are frequent in cancer genomes, but many of these are random, passenger events. A common strategy to distinguish functional aberrations from passengers is to identify those aberrations that are recurrent across multiple samples. However, the extensive variability in the length and position of SCNAs makes the problem of identifying recurrent aberrations notoriously difficult. Results: We introduce a combinatorial approach to the problem of identifying independent and recurrent SCNAs, focusing on the key challenge of separating the overlaps in aberrations across individuals into independent events. We derive independent and recurrent SCNAs as maximal cliques in an interval graph constructed from overlaps between aberrations. We efficiently enumerate all such cliques, and derive a dynamic programming algorithm to find an optimal selection of non-overlapping cliques, resulting in a very fast algorithm, which we call RAIG (Recurrent Aberrations from Interval Graphs). We show that RAIG outperforms other methods on simulated data and also performs well on data from three cancer types from The Cancer Genome Atlas (TCGA). In contrast to existing approaches that employ various heuristics to select independent aberrations, RAIG optimizes a well-defined objective function. We show that this allows RAIG to identify rare aberrations that are likely functional, but are obscured by overlaps with larger passenger aberrations. Availability: http://compbio.cs.brown.edu/software. Contact: braphael@brown.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24931984
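
The interval-graph idea can be illustrated with a small sketch (not the authors' RAIG implementation): in an interval graph, every maximal clique of mutually overlapping aberrations is realized at some interval endpoint, so a sweep over endpoints enumerates candidate recurrent regions. The interval coordinates and helper name below are hypothetical.

```python
def maximal_overlap_cliques(intervals):
    """Return maximal sets of mutually overlapping intervals, as index sets."""
    events = sorted({p for s, e in intervals for p in (s, e)})
    cliques, seen = [], set()
    for point in events:
        active = frozenset(i for i, (s, e) in enumerate(intervals)
                           if s <= point <= e)
        if active and active not in seen:
            # keep only maximal sets: drop any earlier clique strictly inside this one
            cliques = [c for c in cliques if not c < active]
            if not any(active < c for c in cliques):
                cliques.append(active)
            seen.add(active)
    return cliques

# Four aberrations sharing one core region, plus a long "passenger" interval
# that also spans a second, independent event.
scna = [(10, 50), (20, 60), (30, 55), (5, 200), (100, 150)]
found = maximal_overlap_cliques(scna)
print(found)
```

Here the long interval (5, 200) participates in two independent maximal cliques, which is precisely the overlap-separation problem the abstract describes.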

  5. Corneal wavefront aberration measurements to detect keratoconus patients

    PubMed

    Gobbe, Marine; Guillon, Michel

    2005-06-01

    Distortions of the cornea associated with keratoconus manifest as levels of higher-order aberrations that are elevated compared with normal corneas, so corneal aberrations are highly relevant descriptors of the optical quality of the cornea. The aim of the current study was to develop a keratoconus detection scheme based on Zernike coefficients. The results showed that the best discriminator between suspected keratoconus and normal corneas was vertical coma (Z(3)(-1)) (specificity 71.9%; sensitivity 89.3%). The results demonstrate an improved use of the videokeratoscope as a diagnostic tool for keratoconus detection that can be applied to all types of videokeratoscopes. PMID:16318836

  6. A shifting level model algorithm that identifies aberrations in array-CGH data.

    PubMed

    Magi, Alberto; Benelli, Matteo; Marseglia, Giuseppina; Nannetti, Genni; Scordo, Maria Rosaria; Torricelli, Francesca

    2010-04-01

    Array comparative genomic hybridization (aCGH) is a microarray technology that allows one to detect and map genomic alterations. The goal of aCGH analysis is to identify the boundaries of the regions where the number of DNA copies changes (breakpoint identification) and then to label each region as loss, neutral, or gain (calling). In this paper, we introduce a new algorithm, based on the shifting level model (SLM), with the aim of locating regions with different means of the log(2) ratio in genomic profiles obtained from aCGH data. We combine the SLM algorithm with the CGHcall calling procedure and compare their performances with 5 state-of-the-art methods. When dealing with synthetic data, our method outperforms the other 5 algorithms in detecting the change in the number of DNA copies in the most challenging situations. For real aCGH data, SLM is able to locate all the cytogenetically mapped aberrations giving a smaller number of false-positive breakpoints than the compared methods. The application of the SLM algorithm is not limited to aCGH data. Our approach can also be used for the analysis of several emerging experimental strategies such as high-resolution tiling array. PMID:19948744
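
The underlying task, locating a shift in the mean of the log2 ratio along the genome, can be illustrated without the full SLM machinery. The sketch below, with invented probe values, simply scores each position by the difference between adjacent window means; the actual SLM algorithm is a probabilistic model, not this heuristic.

```python
def locate_breakpoint(log2_ratios, window=5):
    """Return the index where the shift between adjacent window means is largest."""
    best_i, best_shift = None, 0.0
    for i in range(window, len(log2_ratios) - window + 1):
        left = sum(log2_ratios[i - window:i]) / window
        right = sum(log2_ratios[i:i + window]) / window
        if abs(right - left) > best_shift:
            best_i, best_shift = i, abs(right - left)
    return best_i, best_shift

# 20 neutral probes followed by a single-copy gain: log2(3/2) ~ 0.58
profile = [0.0] * 20 + [0.58] * 15
print(locate_breakpoint(profile))
```

The calling step (labeling the segment as gain, neutral, or loss) would then compare the segment mean against thresholds, as CGHcall does probabilistically.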

  7. Wavelet periodicity detection algorithms

    NASA Astrophysics Data System (ADS)

    Benedetto, John J.; Pfander, Goetz E.

    1998-10-01

    This paper deals with the analysis of time series with respect to certain known periodicities. In particular, we present a fast method aimed at detecting periodic behavior hidden in noisy data. The method is composed of three steps: (1) non-noisy data are analyzed through spectral and wavelet methods to extract specific periodic patterns of interest; (2) using these patterns, we construct an optimal piecewise constant wavelet designed to detect the underlying periodicities; (3) we introduce a fast discretized version of the continuous wavelet transform, as well as waveletgram averaging techniques, to detect the occurrence and period of these periodicities. The algorithm is formulated for real-time implementation. Our procedure is generally applicable to detect locally periodic components in signals s which can be modeled as s(t) = A(t)F(h(t)) + N(t) for t in I, where F is a periodic signal, A is a non-negative slowly varying function, h is strictly increasing with h' slowly varying, and N denotes background activity. For example, the method can be applied to epileptic seizure detection, where we try to detect seizure periodicities in EEG and ECoG data. In the case of ECoG data, N is essentially 1/f noise; in the case of EEG data and for t in I, N includes noise due to cranial geometry and densities. In both cases N also includes standard low-frequency rhythms. Periodicity detection has other applications including ocean wave prediction, cockpit motion sickness prediction, and minefield detection.
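
The signal model s(t) = A(t)F(h(t)) + N(t) can be made concrete with synthetic data. The sketch below substitutes a plain autocorrelation estimator for the paper's piecewise constant wavelet, so it illustrates only the model and the goal (recovering the period of F), not the authors' method; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(2000)
A = 1.0 + 0.2 * np.sin(2 * np.pi * t / 2000)     # slowly varying amplitude A(t)
F = np.sin(2 * np.pi * t / 50)                   # periodic component F, period 50
s = A * F + 0.2 * rng.standard_normal(t.size)    # s(t) = A(t) F(h(t)) + N(t), h(t) = t

def dominant_period(x, min_lag=10, max_lag=80):
    """Estimate the period as the lag maximizing the raw autocorrelation."""
    x = x - x.mean()
    acf = [float(np.dot(x[:-lag], x[lag:])) for lag in range(min_lag, max_lag)]
    return min_lag + int(np.argmax(acf))

print(dominant_period(s))
```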

  8. Chromatic aberration elimination for digital rear projection television L-type lens by genetic algorithms

    NASA Astrophysics Data System (ADS)

    Fang, Yi-Chin; Liu, Tung-Kuan; Wu, Bo-Wen; Chou, Jyh-Horng; MacDonald, John

    2008-05-01

    With the development of digital image optics systems, chromatic aberration has become increasingly important, especially lateral color aberration. For a rear projection television L-type lens, chromatic aberration plays a significant role because it is easily seen against a bright screen. Basically, the elimination of axial chromatic and lateral color aberration in an optical lens depends on the choice of optical glass. DLS (damped least squares), a ray-tracing-based method, is limited owing to its inability to identify an enhanced optical system configuration. Genetic algorithms have been applied to so-called global optimization, but so far the results show little success. Additionally, L-type optics with an aspherical surface may complicate optimization due to their nonlinear response during optimization. As an alternative, this research proposes a new, feasible chromatic aberration optimization process using algorithms that combine the theory of geometric optics in a lens with real encoding, multiple dynamic crossover, and random gene mutation techniques. In this research, a rear projection television lens with an aspherical surface and an L-type lens are mainly discussed. Results and conclusions show that attempts to eliminate difficult axial and lateral color aberration are successful.

  9. Detecting Outliers in Factor Analysis Using the Forward Search Algorithm

    ERIC Educational Resources Information Center

    Mavridis, Dimitris; Moustaki, Irini

    2008-01-01

    In this article we extend and implement the forward search algorithm for identifying atypical subjects/observations in factor analysis models. The forward search has been mainly developed for detecting aberrant observations in regression models (Atkinson, 1994) and in multivariate methods such as cluster and discriminant analysis (Atkinson, Riani,…

  10. Surface contribution to high-order aberrations using the Aldis theorem and Andersen's algorithms

    NASA Astrophysics Data System (ADS)

    Ortiz-Estardante, A.; Cornejo-Rodriguez, Alejandro

    1990-07-01

    Formulae and computer programs were developed for surface contributions to high-order aberration coefficients using the Aldis theorem and Andersen's algorithms for a symmetrical optical system. 2. THEORY Using the algorithms developed by T. B. Andersen, which allow the high-order aberration coefficients of an optical system to be calculated, we obtained a set of equations for the contribution of each surface of a centered optical system to those aberration coefficients, by combining Andersen's equations with the so-called Aldis theorem. 3. COMPUTER PROGRAMS AND EXAMPLES The study for the case of an object at infinity has been completed, and more recently the case of an object at a finite distance has been finished as well. The equations have been properly programmed for the two above-mentioned situations. Some typical optical system designs will be presented, and some advantages and disadvantages of the developed formulae and method will be discussed. 4. CONCLUSIONS The algorithm developed by Andersen has a compact notation and structure well suited to computers. Using Andersen's results together with the Aldis theorem, a set of equations was derived and programmed for the surface contributions of a centered optical system to high-order aberrations. 5. REFERENCES 1. T. B. Andersen, Appl. Opt., 3800 (1980). 2. A. Cox, A System of Optical Design, Focal Press, 1964.

  11. A fast meteor detection algorithm

    NASA Astrophysics Data System (ADS)

    Gural, P.

    2016-01-01

    A low latency meteor detection algorithm for use with fast steering mirrors had been previously developed to track and telescopically follow meteors in real-time (Gural, 2007). It has been rewritten as a generic clustering and tracking software module for meteor detection that meets both the demanding throughput requirements of a Raspberry Pi while also maintaining a high probability of detection. The software interface is generalized to work with various forms of front-end video pre-processing approaches and provides a rich product set of parameterized line detection metrics. Discussion will include the Maximum Temporal Pixel (MTP) compression technique as a fast thresholding option for feeding the detection module, the detection algorithm trade for maximum processing throughput, details on the clustering and tracking methodology, processing products, performance metrics, and a general interface description.
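
The Maximum Temporal Pixel compression mentioned above, collapsing a frame stack into a single image of per-pixel temporal maxima, can be sketched as follows. The frame sizes, injected target, and threshold are illustrative, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
frames = rng.normal(10.0, 1.0, size=(8, 64, 64))   # 8 noisy background frames
for t in range(8):                                  # inject a moving point target
    frames[t, 30, 20 + 4 * t] = 40.0

mtp = frames.max(axis=0)                            # collapse time into one image
threshold = mtp.mean() + 5 * mtp.std()
streak = np.argwhere(mtp > threshold)               # pixels left along the track
print(streak)
```

A line detector (e.g., a Hough transform) would then be run on the single MTP image instead of on every frame, which is where the throughput savings come from.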

  12. MUSIC algorithms for rebar detection

    NASA Astrophysics Data System (ADS)

    Solimene, Raffaele; Leone, Giovanni; Dell'Aversano, Angela

    2013-12-01

    The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects which are small in size as compared to the wavelength. The ensemble of objects to be detected consists of both strong and weak scatterers. This represents a scattering environment challenging for detection purposes as strong scatterers tend to mask the weak ones. Consequently, the detection of more weakly scattering objects is not always guaranteed and can be completely impaired when the noise corrupting data is of a relatively high level. To overcome this drawback, here a new technique is proposed, starting from the idea of applying a two-stage MUSIC algorithm. In the first stage strong scatterers are detected. Then, information concerning their number and location is employed in the second stage focusing only on the weak scatterers. The role of an adequate scattering model is emphasized to improve drastically detection performance in realistic scenarios.
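
A single-stage MUSIC localization can be sketched for a uniform linear array (the paper's two-stage strong/weak scheme is not reproduced here): eigen-decompose the sample covariance, take the noise subspace, and read target directions off the pseudospectrum peaks. The array geometry, source angles, and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
M, d, snapshots = 8, 0.5, 200                    # sensors, spacing (wavelengths), samples
true_deg = np.array([-20.0, 30.0])               # two point scatterers

def steering(theta_rad):
    """Steering vector of a uniform linear array."""
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta_rad))

A = np.column_stack([steering(np.deg2rad(a)) for a in true_deg])
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
noise = 0.1 * (rng.standard_normal((M, snapshots))
               + 1j * rng.standard_normal((M, snapshots)))
X = A @ S + noise                                # array snapshots

R = X @ X.conj().T / snapshots                   # sample covariance
eigvals, eigvecs = np.linalg.eigh(R)             # eigenvalues in ascending order
En = eigvecs[:, :M - 2]                          # noise subspace (M minus #sources)

def pseudospectrum(theta_rad):
    a = steering(theta_rad)
    return 1.0 / np.linalg.norm(En.conj().T @ a) ** 2

grid = np.linspace(-90.0, 90.0, 361)             # 0.5-degree angular grid
p = np.array([pseudospectrum(np.deg2rad(g)) for g in grid])
peak_idx = [i for i in range(1, len(p) - 1) if p[i] > p[i - 1] and p[i] > p[i + 1]]
top_two = sorted(peak_idx, key=lambda i: p[i])[-2:]
print(np.sort(grid[top_two]))
```

In the two-stage variant described above, the strong scatterers found by a first pass like this would be subtracted from the model before re-running MUSIC for the weak ones.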

  13. GPU Accelerated Event Detection Algorithm

    2011-05-25

    Smart grids require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. State-of-the-art algorithms are not suited to the demands of streaming data analysis: (i) event detection algorithms are needed that can scale with the size of the data; (ii) algorithms are needed that can not only handle the multi-dimensional nature of the data but also model both the spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear; (iii) algorithms are needed that can operate in an online fashion on streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence, using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. We therefore propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve many numerical operations and are highly data-parallel.
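
Step (a), reducing a multi-dimensional stream to a univariate change series via windowed SVD, can be sketched as follows. The window size, the synthetic regime change, and the 1 - |cos| distance between leading singular vectors are illustrative choices, not details from the GAEDA code.

```python
import numpy as np

rng = np.random.default_rng(3)
T, dims, win = 300, 5, 30
stream = rng.normal(0.0, 1.0, (T, dims))       # 5-channel measurement stream
stream[:210, 0] *= 3.0                          # dominant variance on channel 0 ...
stream[210:, 1] *= 3.0                          # ... abruptly switching to channel 1

def change_series(X, win):
    """Score successive windows by 1 - |cos angle| of leading singular vectors."""
    scores, prev = [], None
    for start in range(0, X.shape[0] - win + 1, win):
        window = X[start:start + win]
        _, _, vt = np.linalg.svd(window - window.mean(axis=0))
        v = vt[0]                               # leading principal direction
        if prev is not None:
            scores.append(1.0 - abs(float(np.dot(prev, v))))
        prev = v
    return np.array(scores)

scores = change_series(stream, win)
print(np.argmax(scores))                        # window pair spanning the change
```

Any univariate detector (e.g., the Z-score measures used in syndromic surveillance) can then be applied to `scores`.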

  14. Aberrance Detection Powers of the BW and Person-Fit Indices

    ERIC Educational Resources Information Center

    Huang, Tsai-Wei

    2012-01-01

    The study compared the aberrance detection powers of the BW person-fit indices with other group-based indices (SCI, MCI, NCI, and Wc&Bs) and item response theory based (IRT-based) indices (OUTFITz, INFITz, ECI2z, ECI4z, and lz). Four kinds of comparative conditions, including content category (CC), types of aberrance (AT), severity of aberrance…

  15. Detection Algorithms: FFT vs. KLT

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    Given the vast distances between the stars, we can anticipate that any received SETI signal will be exceedingly weak. How can we hope to extract (or even recognize) such signals buried well beneath the natural background noise with which they must compete? This chapter analyzes, compares, and contrasts the two dominant signal detection algorithms used by SETI scientists to recognize extremely weak candidate signals.
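
As a minimal illustration of the FFT side of that comparison, the sketch below pulls a sinusoid far below the per-sample noise floor out of white noise by concentrating its power into a single spectral bin; the tone frequency and SNR are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4096
f0 = 512 / n                                    # tone frequency, cycles per sample
t = np.arange(n)
x = 0.2 * np.sin(2 * np.pi * f0 * t) + rng.standard_normal(n)  # per-sample SNR << 1

spectrum = np.abs(np.fft.rfft(x)) ** 2          # tone power piles into one bin
peak_bin = int(np.argmax(spectrum[1:])) + 1     # skip the DC bin
print(peak_bin)
```

The KLT, by contrast, expands the data in eigenvectors of its own covariance rather than fixed sinusoids, which is what lets it capture non-sinusoidal signals at the cost of far more computation.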

  16. An algorithm for classifying tumors based on genomic aberrations and selecting representative tumor models

    PubMed Central

    2010-01-01

    Background Cancer is a heterogeneous disease caused by genomic aberrations and characterized by significant variability in clinical outcomes and response to therapies. Several subtypes of common cancers have been identified based on alterations of individual cancer genes, such as HER2, EGFR, and others. However, cancer is a complex disease driven by the interaction of multiple genes, so the copy number status of individual genes is not sufficient to define cancer subtypes and predict responses to treatments. A classification based on genome-wide copy number patterns would be better suited for this purpose. Method To develop a more comprehensive cancer taxonomy based on genome-wide patterns of copy number abnormalities, we designed an unsupervised classification algorithm that identifies genomic subgroups of tumors. This algorithm is based on a modified genomic Non-negative Matrix Factorization (gNMF) algorithm and includes several additional components, namely a pilot hierarchical clustering procedure to determine the number of clusters, a multiple random initiation scheme, a new stop criterion for the core gNMF, as well as a 10-fold cross-validation stability test for quality assessment. Result We applied our algorithm to identify genomic subgroups of three major cancer types: non-small cell lung carcinoma (NSCLC), colorectal cancer (CRC), and malignant melanoma. High-density SNP array datasets for patient tumors and established cell lines were used to define genomic subclasses of the diseases and identify cell lines representative of each genomic subtype. The algorithm was compared with several traditional clustering methods and showed improved performance. To validate our genomic taxonomy of NSCLC, we correlated the genomic classification with disease outcomes. Overall survival time and time to recurrence were shown to differ significantly between the genomic subtypes. 
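
The core gNMF step can be illustrated with a plain multiplicative-update NMF on synthetic copy-number data (the pilot hierarchical clustering, stop criterion, and cross-validation components are omitted, and this is not the authors' code): each sample is assigned to the basis component with the largest coefficient.

```python
import numpy as np

rng = np.random.default_rng(6)
# Two synthetic genomic subgroups with distinct copy-number gain patterns.
group_a = rng.normal(1.0, 0.1, (30, 10)); group_a[:, :5] += 2.0   # gains, first 5 loci
group_b = rng.normal(1.0, 0.1, (30, 10)); group_b[:, 5:] += 2.0   # gains, last 5 loci
V = np.vstack([group_a, group_b]).T             # nonnegative matrix: loci x samples

def nmf_cluster(V, k, iters=300):
    """Multiplicative-update NMF; each sample joins its dominant basis component."""
    m, n = V.shape
    W = rng.uniform(0.1, 1.0, (m, k))
    H = rng.uniform(0.1, 1.0, (k, n))
    for _ in range(iters):                      # standard Lee-Seung updates
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return np.argmax(H, axis=0)

labels = nmf_cluster(V, 2)
print(labels)
```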
Conclusions We developed an algorithm for cancer classification based on genome-wide patterns

  17. Wire Detection Algorithms for Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia I.

    2002-01-01

    In this research we addressed the problem of obstacle detection for low altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcraft. Since they are very thin, detecting them early enough for the pilot to take evasive action is difficult, as their images can be less than one or two pixels wide. Two approaches were explored for this purpose. The first approach involved a technique for sub-pixel edge detection and subsequent post-processing to reduce false alarms. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential to solve the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer-generated wire images. The performance of the algorithm was evaluated both at the pixel and the wire levels. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post-processing is performed to remove false alarms due to clutter. The second approach involved the use of an example-based learning scheme, namely Support Vector Machines. The purpose of this approach was to explore the feasibility of an example-based learning approach for the task of detecting wires from their images. Support Vector Machines (SVMs) have emerged as a promising pattern classification tool and have been used in various applications. It was found that this approach is not suitable for very thin wires and, of course, not suitable at all for sub-pixel-thick wires. High dimensionality of the data as such does not present a major problem for SVMs. However, it is desirable to have a large number of training examples, especially for high-dimensional data. The main difficulty in using SVMs (or any other example-based learning

  18. Reconstruction of the wavefront aberration from real interferometric data using a hybrid evolutionary optimization algorithm with Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Sánchez-Escobar, Juan Jaime; Barbosa Santillán, Liliana Ibeth

    2015-09-01

    This paper describes the use of a hybrid evolutionary optimization algorithm (HEOA) for computing the wavefront aberration from real interferometric data. By finding the near-optimal solution to an optimization problem, this algorithm calculates the Zernike polynomial expansion coefficients from a Fizeau interferogram, showing the validity for the reconstruction of the wavefront aberration. The proposed HEOA incorporates the advantages of both a multimember evolution strategy and locally weighted linear regression in order to minimize an objective function while avoiding premature convergence to a local minimum. The numerical results demonstrate that our HEOA is robust for analyzing real interferograms degraded by noise.
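
The representation underlying this approach, a wavefront expanded in Zernike polynomials whose coefficients must be recovered, can be sketched with ordinary least squares standing in for the HEOA (for a linear expansion and dense, clean sampling, least squares suffices; the evolutionary search matters in harder, noisier settings). The polynomial subset, sample count, and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4000
r = np.sqrt(rng.uniform(0.0, 1.0, n))         # uniform sampling over the unit disk
phi = rng.uniform(0.0, 2 * np.pi, n)

def zernike_basis(r, phi):
    """A few low-order (unnormalized) Zernike terms."""
    return np.column_stack([
        np.ones_like(r),                       # Z(0,0)  piston
        r * np.cos(phi),                       # Z(1,1)  x tilt
        r * np.sin(phi),                       # Z(1,-1) y tilt
        2.0 * r**2 - 1.0,                      # Z(2,0)  defocus
        (3.0 * r**3 - 2.0 * r) * np.sin(phi),  # Z(3,-1) vertical coma
    ])

true_coeffs = np.array([0.0, 0.1, -0.2, 0.5, 0.3])
B = zernike_basis(r, phi)
wavefront = B @ true_coeffs + 0.01 * rng.standard_normal(n)  # noisy measured phase

fit, *_ = np.linalg.lstsq(B, wavefront, rcond=None)
print(np.round(fit, 3))
```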

  19. Detection of recurrent cytogenetic aberrations in multiple myeloma: A comparison between MLPA and iFISH

    PubMed Central

    Yu, Zhen; Li, Fei; Yi, Shuhua; Ai, Xiaofei; Qin, Xiaoqi; Feng, Xiaoyan; Zhou, Wen; Xu, Yan; Li, Zengjun; Hao, Mu; Sui, Weiwei; Deng, Shuhui; Acharya, Chirag; Zhao, Yaozhong; Ru, Kun; Qiu, Lugui; An, Gang

    2015-01-01

    Multiple myeloma (MM) is a genetically heterogeneous disease with diverse clinical characteristics and outcomes. Recently, multiplex ligation-dependent probe amplification (MLPA) has emerged as an effective and robust method for the detection of cytogenetic aberrations in MM patients. In the present study, MLPA analysis was applied to analyze the cytogenetics of CD138 tumor cells from 59 MM samples, and its results were compared retrospectively with the interphase fluorescence in situ hybridization (iFISH) data. We first established the normal range of each of the 42 diagnostic probes using healthy donor samples. A total of 151 aberrations were detected in the 59 patient samples, and 49/59 cases (83.1%) harbored at least one copy number variation. Overall, 0–7 aberrations were detected per case using MLPA, indicating the heterogeneity and complexity of MM cytogenetics. We showed the high efficiency of MLPA and the high congruency of the two methods in assessing cytogenetic aberrations. Considering that MLPA analysis is not reliable when the aberration exists only in a small population of tumor cells, it is essential to use both MLPA and iFISH as complementary techniques for the diagnosis of MM. PMID:26416457

  20. Aberrant crypt foci: detection, gene abnormalities, and clinical usefulness.

    PubMed

    Takayama, Tetsuji; Miyanishi, Koji; Hayashi, Tsuyoshi; Kukitsu, Takehiro; Takanashi, Kunihiro; Ishiwatari, Hirotoshi; Kogawa, Takahiro; Abe, Tomoyuki; Niitsu, Yoshiro

    2005-07-01

    Human aberrant crypt foci (ACF) were first identified as lesions consisting of large thick crypts in the colonic mucosa of surgical specimens after staining with methylene blue. Previously we succeeded in identifying ACF by using magnifying endoscopy and analyzed the number, size, and dysplastic features of ACF in normal controls, patients with adenoma, and patients with cancer. On the basis of these analyses, we strongly suggested that ACF, particularly dysplastic ACF, are precursor lesions of the adenoma-carcinoma sequence in humans. In most sporadic ACF, K-ras mutations were positive but APC mutations were negative, irrespective of nondysplastic or dysplastic features. Conversely, in most ACF from familial adenomatous polyposis patients, APC mutations were positive but K-ras mutations were negative. These results may suggest that the molecular mechanism of sporadic colon carcinogenesis is not necessarily the same as that of familial adenomatous polyposis. It was shown that ACF acquire resistance to apoptosis induced by bile salts, whereas normal colonic epithelial cells turn over continuously by apoptosis. This apoptosis resistance was closely associated with glutathione S-transferase P1-1 expression. One of the most important clinical applications of ACF observation with magnifying endoscopy is its use as a target lesion for chemoprevention. Because ACF are tiny lesions, they should be eradicable within a short time by administration of chemopreventive agents. In fact, we performed an open chemopreventive trial of sulindac and found that the number of ACF was reduced markedly in 2 months. We currently are proceeding with a randomized double-blind trial targeting ACF. PMID:16012995

  1. Is there a best hyperspectral detection algorithm?

    NASA Astrophysics Data System (ADS)

    Manolakis, D.; Lockwood, R.; Cooley, T.; Jacobson, J.

    2009-05-01

    A large number of hyperspectral detection algorithms have been developed and used over the last two decades. Some algorithms are based on highly sophisticated mathematical models and methods; others are derived using intuition and simple geometrical concepts. The purpose of this paper is threefold. First, we discuss the key issues involved in the design and evaluation of detection algorithms for hyperspectral imaging data. Second, we present a critical review of existing detection algorithms for practical hyperspectral imaging applications. Finally, we argue that the "apparent" superiority of sophisticated algorithms with simulated data or in laboratory conditions does not necessarily translate to superiority in real-world applications.

  2. Orbital objects detection algorithm using faint streaks

    NASA Astrophysics Data System (ADS)

    Tagawa, Makoto; Yanagisawa, Toshifumi; Kurosaki, Hirohisa; Oda, Hiroshi; Hanada, Toshiya

    2016-02-01

    This study proposes an algorithm to detect orbital objects that are small or moving at high apparent velocities from optical images by utilizing their faint streaks. In the conventional object-detection algorithm, a high signal-to-noise ratio (e.g., 3 or more) is required, whereas in our proposed algorithm, the signals are summed along the streak direction to improve object-detection sensitivity. Lower signal-to-noise-ratio objects were detected by applying the algorithm to a time series of images. The algorithm comprises the following steps: (1) image skewing, (2) image compression along the vertical axis, (3) detection and determination of streak position, (4) searching for object candidates using the time-series streak-position data, and (5) selecting the candidate with the best linearity and reliability. Our algorithm's ability to detect streaks with signals weaker than the background noise was confirmed using images from the Australia Remote Observatory.
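
The core summing step can be sketched as follows: shear each row so that a streak of assumed slope becomes vertical, then sum along columns so that a per-pixel signal below the noise floor integrates into a clear peak. The image size, slope, and streak amplitude are illustrative, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
img = rng.normal(0.0, 1.0, (64, 64))          # sigma = 1 background noise
for row in range(64):                         # faint streak, sub-noise per pixel
    img[row, (10 + row) % 64] += 0.8

slope = 1                                     # assumed streak direction: 1 px/row
sheared = np.empty_like(img)
for row in range(64):                         # shear so the streak becomes vertical
    sheared[row] = np.roll(img[row], -slope * row)

profile = sheared.sum(axis=0)                 # sum of 64 pixels along the streak
col = int(np.argmax(profile))
print(col, profile[col])
```

In practice the streak direction is unknown, so the full algorithm repeats this over a set of candidate skews and keeps the most reliable, most linear candidate track.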

  3. An efficient parallel termination detection algorithm

    SciTech Connect

    Baker, A. H.; Crivelli, S.; Jessup, E. R.

    2004-05-27

    Information local to any one processor is insufficient to monitor the overall progress of most distributed computations. Typically, a second distributed computation for detecting termination of the main computation is necessary. In order to be a useful computational tool, the termination detection routine must operate concurrently with the main computation, adding minimal overhead, and it must promptly and correctly detect termination when it occurs. In this paper, we present a new algorithm for detecting the termination of a parallel computation on distributed-memory MIMD computers that satisfies all of those criteria. A variety of termination detection algorithms have been devised. Of these, the algorithm presented by Sinha, Kale, and Ramkumar (henceforth, the SKR algorithm) is unique in its ability to adapt to the load conditions of the system on which it runs, thereby minimizing the impact of termination detection on performance. Because their algorithm also detects termination quickly, we consider it to be the most efficient practical algorithm presently available. The termination detection algorithm presented here was developed for use in the PMESC programming library for distributed-memory MIMD computers. Like the SKR algorithm, our algorithm adapts to system loads and imposes little overhead. Also like the SKR algorithm, ours is tree-based, and it does not depend on any assumptions about the physical interconnection topology of the processors or the specifics of the distributed computation. In addition, our algorithm is easier to implement and requires only half as many tree traverses as does the SKR algorithm. This paper is organized as follows. In section 2, we define our computational model. In section 3, we review the SKR algorithm. We introduce our new algorithm in section 4, and prove its correctness in section 5. We discuss its efficiency and present experimental results in section 6.

  4. Aberrant Learning Achievement Detection Based on Person-Fit Statistics in Personalized e-Learning Systems

    ERIC Educational Resources Information Center

    Liu, Ming-Tsung; Yu, Pao-Ta

    2011-01-01

    A personalized e-learning service provides learning content to fit learners' individual differences. Learning achievements are influenced by cognitive as well as non-cognitive factors such as mood, motivation, interest, and personal styles. This paper proposes the Learning Caution Indexes (LCI) to detect aberrant learning patterns. The philosophy…

  5. Robust Detection of Examinees with Aberrant Answer Changes

    ERIC Educational Resources Information Center

    Belov, Dmitry I.

    2015-01-01

    The statistical analysis of answer changes (ACs) has uncovered multiple testing irregularities on large-scale assessments and is now routinely performed at testing organizations. However, AC data has an uncertainty caused by technological or human factors. Therefore, existing statistics (e.g., number of wrong-to-right ACs) used to detect examinees…

  6. Automatic ionospheric layers detection: Algorithms analysis

    NASA Astrophysics Data System (ADS)

    Molina, María G.; Zuccheretti, Enrico; Cabrera, Miguel A.; Bianchi, Cesidio; Sciacca, Umberto; Baskaradas, James

    2016-03-01

    Vertical sounding is a widely used technique for obtaining ionospheric measurements, such as an estimate of virtual height as a function of the scanned frequency. It is performed by a high-frequency radar for geophysical applications called an "ionospheric sounder" (or "ionosonde"). Radar detection depends mainly on the characteristics of the target. While echo detection algorithms have been studied for several kinds of target behavior, a survey to identify a suitable algorithm for the ionospheric sounder has yet to be carried out. This paper focuses on automatic echo detection algorithms implemented specifically for an ionospheric sounder; target-specific characteristics were studied as well. Adaptive threshold detection algorithms are proposed, compared with the currently implemented algorithm, and tested using actual data obtained from the Advanced Ionospheric Sounder (AIS-INGV) at the Rome Ionospheric Observatory. Different case studies have been selected according to typical ionospheric and detection conditions.
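
    An adaptive threshold detector of the kind the abstract proposes can be sketched as follows. This is a hypothetical cell-averaging scheme, not the AIS-INGV implementation: the noise level is estimated from reference cells around the cell under test, excluding a guard band, and an echo is declared where the signal exceeds that local estimate by a fixed factor.

```python
def adaptive_threshold_detect(signal, window=8, guard=2, factor=3.0):
    """Return indices where the signal exceeds an adaptive local threshold."""
    n = len(signal)
    hits = []
    for i in range(n):
        # Reference cells around i, excluding a guard band near the cell
        # under test so the echo itself does not inflate the noise estimate.
        ref = [signal[j] for j in range(max(0, i - window), min(n, i + window + 1))
               if abs(j - i) > guard]
        noise = sum(ref) / len(ref)
        if signal[i] > factor * noise:
            hits.append(i)
    return hits
```

    Unlike a fixed threshold, this adapts to a noise floor that varies with range or frequency, which is the property that matters for ionogram echoes.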

  7. Using Automated Image Analysis Algorithms to Distinguish Normal, Aberrant, and Degenerate Mitotic Figures Induced by Eg5 Inhibition.

    PubMed

    Bigley, Alison L; Klein, Stephanie K; Davies, Barry; Williams, Leigh; Rudmann, Daniel G

    2016-07-01

    Modulation of the cell cycle may underlie the toxicologic or pharmacologic responses of a potential therapeutic agent and contributes to decisions on its preclinical and clinical safety and efficacy. The descriptive and quantitative assessment of normal, aberrant, and degenerate mitotic figures in tissue sections is an important end point characterizing the effect of xenobiotics on the cell cycle. Historically, pathologists used manual counting and special staining visualization techniques such as immunohistochemistry for quantification of normal, aberrant, and degenerate mitotic figures. We designed an automated image analysis algorithm for measuring these mitotic figures in hematoxylin and eosin (H&E)-stained sections. Algorithm validation methods used data generated from a subcutaneous human transitional cell carcinoma xenograft model in nude rats treated with an inhibitor of the cell cycle protein Eg5. In these studies, we scanned and digitized H&E-stained xenografts and applied a complex ruleset of sequential mathematical filters and shape discriminators for classification of cell populations demonstrating normal, aberrant, or degenerate mitotic figures. The resultant classification system enabled the representation of three identifiable degrees of morphological change associated with tumor differentiation and compound effects. The mitotic figure variant counts and mitotic index data generated corresponded to a manual assessment by a pathologist, supporting verification of the automated algorithm and its application in both efficacy and toxicity studies. PMID:26936079

  8. Exoplanet detection with simultaneous spectral differential imaging: effects of out-of-pupil-plane optical aberrations

    SciTech Connect

    Marois, C; Phillion, D W; Macintosh, B

    2006-05-02

    Imaging faint companions (exoplanets and brown dwarfs) around nearby stars is currently limited by speckle noise. To efficiently attenuate this noise, a technique called simultaneous spectral differential imaging (SSDI) can be used. This technique consists of simultaneously acquiring images of the field of view in several adjacent narrow bands and combining these images to suppress speckles. Simulations predict that SSDI can achieve, with the acquisition of three wavelengths, speckle noise attenuation of several thousand. These simulations are usually performed using the Fraunhofer approximation, i.e. considering that all aberrations are located in the pupil plane. We have performed wavefront propagation simulations to evaluate how out-of-pupil-plane aberrations affect SSDI speckle noise attenuation performance. The Talbot formalism is used to give physical insight into the problem; results are confirmed using a proper wavefront propagation algorithm. We show that near-focal-plane aberrations can significantly reduce SSDI speckle noise attenuation performance at separations of several λ/D. It is also shown that the Talbot effect correctly predicts the PSF chromaticity. Both differential atmospheric refraction effects and the use of a coronagraph are discussed.

  9. Image change detection algorithms: a systematic survey.

    PubMed

    Radke, Richard J; Andra, Srinivas; Al-Kofahi, Omar; Roysam, Badrinath

    2005-03-01

    Detecting regions of change in multiple images of the same scene taken at different times is of widespread interest due to a large number of applications in diverse disciplines, including remote sensing, surveillance, medical diagnosis and treatment, civil infrastructure, and underwater sensing. This paper presents a systematic survey of the common processing steps and core decision rules in modern change detection algorithms, including significance and hypothesis testing, predictive models, the shading model, and background modeling. We also discuss important preprocessing methods, approaches to enforcing the consistency of the change mask, and principles for evaluating and comparing the performance of change detection algorithms. It is hoped that our classification of algorithms into a relatively small number of categories will provide useful guidance to the algorithm designer. PMID:15762326

  10. WISECONDOR: detection of fetal aberrations from shallow sequencing maternal plasma based on a within-sample comparison scheme

    PubMed Central

    Straver, Roy; Sistermans, Erik A.; Holstege, Henne; Visser, Allerdien; Oudejans, Cees B. M.; Reinders, Marcel J. T.

    2014-01-01

    Genetic disorders can be detected prenatally using chorionic villus sampling, but the roughly 1:100 risk of miscarriage restricts its use to fetuses already suspected of having an aberration. Noninvasive detection of trisomy 21 is now possible owing to the rise of next-generation sequencing (NGS), because a small percentage of fetal DNA is present in maternal plasma. However, detecting other trisomies and smaller aberrations could previously be realized only with high-coverage NGS, making it too expensive for routine practice. We present a method, WISECONDOR (WIthin-SamplE COpy Number aberration DetectOR), which detects small aberrations using low-coverage NGS. The increased detection resolution was achieved by comparing read counts within the tested sample: each genomic region is compared with regions on other chromosomes that behave similarly in control samples. This within-sample comparison avoids the need to re-sequence control samples. WISECONDOR correctly identified all T13, T18 and T21 cases at coverages as low as 0.15–1.66, with no false positives. Moreover, WISECONDOR also identified smaller aberrations, down to 20 Mb, such as del(13)(q12.3q14.3), +i(12)(p10) and i(18)(q10). This shows that prevalent fetal copy number aberrations can be detected accurately and affordably by shallow sequencing of maternal plasma. WISECONDOR is available at bioinformatics.tudelft.nl/wisecondor. PMID:24170809
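
    The within-sample comparison can be illustrated with a toy sketch. Bin names, counts and the z-score cutoff here are assumptions for illustration, not the published WISECONDOR code: each bin's read count is turned into a z-score against its behave-alike reference bins from elsewhere in the genome.

```python
from statistics import mean, stdev

def bin_zscore(test_count, reference_counts):
    """z-score of one bin's read count against its reference bins."""
    mu, sd = mean(reference_counts), stdev(reference_counts)
    return (test_count - mu) / sd if sd > 0 else 0.0

def flag_aberrant_bins(counts, reference_map, z_cut=3.0):
    """counts: bin -> reads; reference_map: bin -> list of its reference bins
    (the bins that behaved similarly in control samples)."""
    flagged = []
    for b, refs in reference_map.items():
        z = bin_zscore(counts[b], [counts[r] for r in refs])
        if abs(z) >= z_cut:
            flagged.append((b, z))
    return flagged
```

    Because both the test bin and its references come from the same sequencing run, run-to-run biases largely cancel, which is the point of the within-sample design.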

  11. Detection of suspicious activity using incremental outlier detection algorithms

    NASA Astrophysics Data System (ADS)

    Pokrajac, D.; Reljin, N.; Pejcic, N.; Vance, T.; McDaniel, S.; Lazarevic, A.; Chang, H. J.; Choi, J. Y.; Miezianko, R.

    2009-08-01

    Detection of unusual trajectories of moving objects can help identify suspicious activity on convoy routes and thus reduce casualties caused by improvised explosive devices. In this paper, using video imagery, we compare the efficiency of various incremental outlier detection techniques at detecting unusual trajectories in simulated and real-life data obtained from the SENSIAC database. The incremental outlier detection algorithms considered are the incremental Support Vector Classifier (incSVC), the incremental Local Outlier Factor (incLOF) algorithm and the incremental Connectivity Outlier Factor (incCOF) algorithm. Our experiments on ground-truth trajectory data indicate that the incremental LOF algorithm provides better detection of unusual trajectories than the other examined techniques.
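
    For reference, the batch Local Outlier Factor that incLOF incrementalizes can be computed from its textbook definition as below. This is a plain-Python sketch; the paper's incremental update logic (maintaining these quantities as points arrive) is omitted.

```python
def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def knn(points, i, k):
    """The k nearest neighbours of point i as (distance, index) pairs."""
    return sorted((dist(points[i], points[j]), j)
                  for j in range(len(points)) if j != i)[:k]

def lof(points, k=3):
    kdist = [knn(points, i, k)[-1][0] for i in range(len(points))]
    def lrd(i):  # local reachability density
        nbrs = knn(points, i, k)
        # Reachability distance to neighbour j is max(k-dist(j), d(i, j)).
        reach = [max(kdist[j], d) for d, j in nbrs]
        return k / sum(reach)
    lrds = [lrd(i) for i in range(len(points))]
    return [sum(lrds[j] for _, j in knn(points, i, k)) / (k * lrds[i])
            for i in range(len(points))]
```

    Scores near 1 indicate density comparable to the neighbourhood; scores well above 1 mark outliers, here playing the role of unusual trajectory features.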

  12. Negative Selection Algorithm for Aircraft Fault Detection

    NASA Technical Reports Server (NTRS)

    Dasgupta, D.; KrishnaKumar, K.; Wong, D.; Berry, M.

    2004-01-01

    We investigated a real-valued Negative Selection Algorithm (NSA) for fault detection in man-in-the-loop aircraft operation. The detection algorithm uses body-axis angular rate sensor data exhibiting normal flight behavior patterns to probabilistically generate a set of fault detectors that can detect abnormalities (including faults and damage) in the behavior of the aircraft in flight. We performed experiments with datasets (collected under normal and various simulated failure conditions) using the NASA Ames man-in-the-loop high-fidelity C-17 flight simulator. The paper provides results of experiments with different datasets representing various failure conditions.
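
    Real-valued negative selection can be sketched under simple assumptions (random candidate detectors in the unit hypercube with a fixed matching radius; not NASA's exact detector generation scheme): a candidate detector survives only if it matches no normal ("self") sample, and at run time a sample matching any surviving detector is flagged as anomalous.

```python
import random

def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def generate_detectors(self_samples, n, radius, dim, seed=0):
    """Keep random candidates that lie outside the self region."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n:
        cand = [rng.random() for _ in range(dim)]
        if all(euclid(cand, s) > radius for s in self_samples):
            detectors.append(cand)
    return detectors

def is_anomalous(sample, detectors, radius):
    return any(euclid(sample, d) <= radius for d in detectors)
```

    In the paper's setting, `self_samples` would be normalized angular-rate patterns from normal flight, and detector matches indicate off-nominal behavior.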

  13. Performance analysis of cone detection algorithms.

    PubMed

    Mariotti, Letizia; Devaney, Nicholas

    2015-04-01

    Many algorithms have been proposed to help clinicians evaluate cone density and spacing, as these may be related to the onset of retinal diseases. However, there has been no rigorous comparison of the performance of these algorithms. In addition, the performance of such algorithms is typically determined by comparison with human observers. Here we propose a technique to simulate realistic images of the cone mosaic. We use the simulated images to test the performance of three popular cone detection algorithms, and we introduce an algorithm which is used by astronomers to detect stars in astronomical images. We use Free Response Operating Characteristic (FROC) curves to evaluate and compare the performance of the four algorithms. This allows us to optimize the performance of each algorithm. We observe that performance is significantly enhanced by up-sampling the images. We investigate the effect of noise and image quality on cone mosaic parameters estimated using the different algorithms, finding that the estimated regularity is the most sensitive parameter. PMID:26366758

  14. Memetic algorithm for community detection in networks.

    PubMed

    Gong, Maoguo; Fu, Bao; Jiao, Licheng; Du, Haifeng

    2011-11-01

    Community structure is one of the most important properties in networks, and community detection has received an enormous amount of attention in recent years. Modularity is by far the most used and best known quality function for measuring the quality of a partition of a network, and many community detection algorithms are developed to optimize it. However, there is a resolution limit problem in modularity optimization methods. In this study, a memetic algorithm, named Meme-Net, is proposed to optimize another quality function, modularity density, which includes a tunable parameter that allows one to explore the network at different resolutions. Our proposed algorithm is a synergy of a genetic algorithm with a hill-climbing strategy as the local search procedure. Experiments on computer-generated and real-world networks show the effectiveness and the multiresolution ability of the proposed method. PMID:22181467
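
    Newman's modularity Q, whose resolution limit motivates the paper's switch to modularity density, can be computed directly from its definition. This is a generic sketch of the quality function itself, unrelated to Meme-Net's internals.

```python
def modularity(adj, communities):
    """adj: dict node -> set of neighbours (undirected, no self-loops);
    communities: list of disjoint node sets covering all nodes.
    Q = (1/2m) * sum over same-community pairs of (A_uv - k_u*k_v / 2m)."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2  # number of edges
    q = 0.0
    for comm in communities:
        for u in comm:
            for v in comm:
                a = 1.0 if v in adj[u] else 0.0
                q += a - len(adj[u]) * len(adj[v]) / (2 * m)
    return q / (2 * m)
```

    Putting the whole network into one community always gives Q = 0, which is why maximizing Q favours genuine internal edge density.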

  15. Lightning detection and exposure algorithms for smartphones

    NASA Astrophysics Data System (ADS)

    Wang, Haixin; Shao, Xiaopeng; Wang, Lin; Su, Laili; Huang, Yining

    2015-05-01

    This study focuses on the key theory of lightning detection and exposure, together with experiments. First, an algorithm based on the differential operation between two adjacent frames is selected to remove the background and extract the lightning signal, and a threshold detection algorithm is applied to achieve precise detection of lightning. Second, an algorithm is proposed to obtain the scene exposure value, which can automatically detect the external illumination status. A look-up table is then built from the relationship between exposure value and average image brightness to achieve rapid automatic exposure. Finally, a hardware test platform is established around a USB 3.0 industrial camera with a CMOS imaging sensor, and experiments are carried out on this platform to verify the performance of the proposed algorithms. The algorithms can quickly and effectively capture clear lightning pictures, including special nighttime scenes, which should benefit the smartphone industry, since current exposure methods in smartphones often miss the capture or produce overexposed or underexposed pictures.
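
    The first stage, differencing adjacent frames and thresholding the result, can be sketched as below. The threshold and pixel-fraction values are illustrative assumptions, not the paper's calibrated parameters.

```python
def detect_lightning(prev_frame, frame, diff_thresh=40, pixel_frac=0.05):
    """Frames are equal-length flat lists of 8-bit grey levels.
    Declare a lightning event when enough pixels change sharply between
    two adjacent frames (the differential operation removes the static
    background, leaving only the transient flash)."""
    changed = sum(1 for a, b in zip(prev_frame, frame) if abs(a - b) > diff_thresh)
    return changed / len(frame) >= pixel_frac
```

    In the full pipeline this trigger would then drive the exposure look-up table so the flash frame is captured at a usable brightness.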

  16. An Analytic Framework for Space–Time Aberrancy Detection in Public Health Surveillance Data

    PubMed Central

    Buckeridge, David L; Musen, Mark A; Switzer, Paul; Crubézy, Monica

    2003-01-01

    Public health surveillance is changing in response to concerns about bioterrorism, which have increased the pressure for early detection of epidemics. Rapid detection necessitates following multiple non-specific indicators and accounting for spatial structure. No single analytic method can meet all of these requirements for all data sources and all surveillance goals. Analytic methods must be selected and configured to meet a surveillance goal, but there are no uniform criteria to guide the selection and configuration process. In this paper, we describe work towards the development of an analytic framework for space–time aberrancy detection in public health surveillance data. The framework decomposes surveillance analysis into sub-tasks and identifies knowledge that can facilitate selection of methods to accomplish sub-tasks. PMID:14728146

  17. Obstacle Detection Algorithms for Rotorcraft Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia I.; Huang, Ying; Narasimhamurthy, Anand; Pande, Nitin; Ahumada, Albert (Technical Monitor)

    2001-01-01

    In this research we addressed the problem of obstacle detection for low-altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcraft. Since they are very thin, detecting them early enough for the pilot to take evasive action is difficult, as their images can be less than one or two pixels wide. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential to solve the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer-generated wire images. The performance of the algorithm was evaluated both at the pixel and the wire levels. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post-processing is performed to remove false alarms due to clutter.

  18. Detecting Danger: The Dendritic Cell Algorithm

    NASA Astrophysics Data System (ADS)

    Greensmith, Julie; Aickelin, Uwe; Cayzer, Steve

    The "Dendritic Cell Algorithm" (DCA) is inspired by the function of the dendritic cells of the human immune system. In nature, dendritic cells are the intrusion detection agents of the human body, policing the tissue and organs for potential invaders in the form of pathogens. In this research, an abstract model of dendritic cell (DC) behavior is developed and subsequently used to form an algorithm—the DCA. The abstraction process was facilitated through close collaboration with laboratory-based immunologists, who performed bespoke experiments, the results of which are used as an integral part of this algorithm. The DCA is a population-based algorithm, with each agent in the system represented as an "artificial DC". Each DC has the ability to combine multiple data streams and can add context to data suspected as anomalous. In this chapter, the abstraction process and details of the resultant algorithm are given. The algorithm is applied to numerous intrusion detection problems in computer security including the detection of port scans and botnets, where it has produced impressive results with relatively low rates of false positives.

  19. Rare Event Detection Algorithm Of Water Quality

    NASA Astrophysics Data System (ADS)

    Ungs, M. J.

    2011-12-01

    A novel method is presented describing the development and implementation of an on-line water quality event detection algorithm. The algorithm distinguishes between normal variation in water quality parameters and changes triggered by the presence of contaminant spikes. Emphasis is placed on simultaneously limiting the number of false alarms (false positives) and the number of misses (false negatives). The problem of excessive false alarms is common to existing change detection algorithms; EPA's standard measure of evaluation for event detection algorithms is a false alarm rate of less than 0.5 percent and a false positive rate of less than 2 percent (EPA 817-R-07-002). A detailed description of the algorithm's development is presented. The algorithm is tested using historical water quality data collected by a public water supply agency at multiple locations, with spiking contaminants developed by the USEPA Water Security Division. The water quality parameters considered are specific conductivity, chlorine residual, total organic carbon, pH, and oxidation-reduction potential. Abnormal data sets are generated by superimposing water quality changes on the historical or baseline data. Eddies-ET defines reaction expressions which specify how the peak or spike concentration of a particular contaminant affects each water quality parameter. Nine default contaminants (Eddies-ET) were previously derived from pipe-loop tests performed at EPA's National Homeland Security Research Center (NHSRC) Test and Evaluation (T&E) Facility. A contaminant strength value of approximately 1.5 is considered a significant threat. The proposed algorithm achieves a combined rate of less than 0.03 percent for both false positives and false negatives using contaminant spikes of strength 2 or more.
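
    The overall structure, per-parameter baselines fused into a single alarm decision, can be sketched as follows. This is a heavily simplified stand-in, not Eddies-ET; the parameter names, z-score cutoff and fusion rule are illustrative assumptions.

```python
from statistics import mean, stdev

def parameter_z(history, value):
    """Deviation of one reading from its rolling baseline, in standard units."""
    mu, sd = mean(history), stdev(history)
    return abs(value - mu) / sd if sd > 0 else 0.0

def event_alarm(histories, sample, z_cut=4.0, min_params=2):
    """histories/sample: dicts keyed by parameter name (e.g. 'pH', 'TOC').
    Raise an alarm only when several parameters deviate at once, which
    suppresses false positives from single-sensor noise."""
    deviating = [p for p, v in sample.items()
                 if parameter_z(histories[p], v) >= z_cut]
    return len(deviating) >= min_params, deviating
```

    Requiring agreement across parameters is one simple way to trade false positives against false negatives, the balance the abstract emphasizes.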

  20. A collision detection algorithm for telerobotic arms

    NASA Technical Reports Server (NTRS)

    Tran, Doan Minh; Bartholomew, Maureen Obrien

    1991-01-01

    The telerobotic manipulator's collision detection algorithm is described. Its applied structural model of the world environment and template representation of objects is evaluated. Functional issues that are required for the manipulator to operate in a more complex and realistic environment are discussed.

  1. Improved imaging algorithm for bridge crack detection

    NASA Astrophysics Data System (ADS)

    Lu, Jingxiao; Song, Pingli; Han, Kaihong

    2012-04-01

    This paper presents an improved imaging algorithm for bridge crack detection. By optimizing the eight-direction Sobel edge detection operator, the positioning of edge points becomes more accurate than without the optimization, and false edge information is effectively reduced, facilitating follow-up processing. In calculating crack geometry, the length of a single crack is obtained by skeleton extraction. To calculate crack area, a template of the area is constructed through a logical bitwise AND operation on the crack image. Experiments show that the errors between this crack detection method and actual manual measurement are within an acceptable range, meeting the needs of engineering applications. The algorithm is fast and effective for automated crack measurement and can provide more valid data for proper planning and performance of bridge maintenance and rehabilitation processes.
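
    The starting point of such an approach, the plain two-direction Sobel operator, looks like this; the paper's optimized eight-direction variant is not reproduced here.

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """img: 2D list of grey levels; returns gradient magnitudes (borders
    left at zero). Crack pixels show up as high-magnitude edge responses."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

    Eight-direction operators add diagonal (and intermediate-angle) kernels so that cracks running at arbitrary orientations respond as strongly as horizontal and vertical ones.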

  2. Detection algorithm for multiple rice seeds images

    NASA Astrophysics Data System (ADS)

    Cheng, F.; Ying, Y. B.

    2006-10-01

    The objective of this research is to develop a digital image analysis algorithm for the detection of images containing multiple rice seeds. The rice seeds used for this study were of a hybrid rice seed variety. Images of multiple rice seeds were acquired with a machine vision system for quality inspection of bulk rice seeds, which is designed to inspect rice seeds on a rotating disk with a CCD camera. Combining morphological operations with parallel processing improved accuracy and reduced computation time. Using image features selected for their classification ability, a highly acceptable defect classification was achieved when the algorithm was applied to all of the samples to test its adaptability.

  3. On Dijkstra's Algorithm for Deadlock Detection

    NASA Astrophysics Data System (ADS)

    Li, Youming; Greca, Ardian; Harris, James

    We study a classical problem in operating systems concerning deadlock detection for systems with reusable resources. The elegant Dijkstra algorithm uses simple data structures, but its cost grows quadratically with the number of processes. Our goal is to reduce this cost optimally without losing the simplicity of the data structures. More specifically, we present a graph-free and almost optimal algorithm whose cost grows linearly with the number of processes, when the number of resources is fixed and the units requested are bounded by constants.
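
    A classical reduction-style detector for reusable resources, the quadratic baseline such work improves on, can be sketched as below; this is the generic scheme, not the authors' linear-time construction.

```python
def detect_deadlock(available, allocation, request):
    """available: free units per resource; allocation/request: per-process
    lists of the same length. Repeatedly 'finish' any process whose request
    can be met from the free units and release its allocation; the processes
    that can never finish are deadlocked."""
    free = list(available)
    unfinished = set(range(len(allocation)))
    progress = True
    while progress:
        progress = False
        for p in list(unfinished):
            if all(request[p][r] <= free[r] for r in range(len(free))):
                for r in range(len(free)):  # p can finish: release its units
                    free[r] += allocation[p][r]
                unfinished.remove(p)
                progress = True
    return sorted(unfinished)  # empty list means no deadlock
```

    Each outer pass may finish only one process, giving the quadratic worst case; the paper's contribution is avoiding that rescanning when resources and request sizes are bounded.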

  4. Comparative study of skew detection algorithms

    NASA Astrophysics Data System (ADS)

    Amin, Adnan; Fischer, Stephen; Parkinson, Anthony F.; Shiu, Ricky

    1996-10-01

    Document image processing has become an increasingly important technology in the automation of office documentation tasks. Automatic document scanners such as text readers and optical character recognition systems are an essential component of systems capable of those tasks. One of the problems in this field is that the document to be read is not always placed correctly on a flat-bed scanner. This means that the document may be skewed on the scanner bed, resulting in a skewed image. This skew has a detrimental effect on document analysis, document understanding, and character segmentation and recognition. Consequently, detecting the skew of a document image and correcting it are important issues in realizing a practical document reader. We describe a new algorithm for skew detection. We then compare the performance and results of this skew detection algorithm to other published methods from O'Gorman, Hinds, Le, Baird, Postl, and Akiyama. Finally, we discuss the theory of skew detection and the different approaches taken to solve the problem of skew in documents. The skew correction algorithm we propose has been shown to be extremely fast, with run times averaging under 0.25 CPU seconds to calculate the angle on a DEC 5000/20 workstation.
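
    One classical family covered by such comparisons, projection-profile skew detection in the spirit of Postl's method, can be sketched as below. The histogram size, search range and whole-degree resolution are assumptions for illustration, not the authors' algorithm.

```python
import math

def projection_variance(points, angle_deg, bins=64):
    """Project ink pixels onto the axis normal to the candidate text
    direction; the histogram is 'peakiest' (highest variance) when the
    candidate angle undoes the document's skew and text lines align."""
    theta = math.radians(angle_deg)
    proj = [x * math.sin(theta) + y * math.cos(theta) for x, y in points]
    lo, hi = min(proj), max(proj)
    hist = [0] * bins
    for p in proj:
        hist[min(bins - 1, int((p - lo) / (hi - lo + 1e-9) * bins))] += 1
    mu = len(points) / bins
    return sum((h - mu) ** 2 for h in hist) / bins

def estimate_skew(points, search=range(-10, 11)):
    """Whole-degree search; real implementations refine with finer steps."""
    return max(search, key=lambda a: projection_variance(points, a))
```

    The exhaustive angle search is what makes naive projection methods slow, motivating the fast alternatives the paper benchmarks.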

  5. Genome aberrations in canine mammary carcinomas and their detection in cell-free plasma DNA.

    PubMed

    Beck, Julia; Hennecke, Silvia; Bornemann-Kolatzki, Kirsten; Urnovitz, Howard B; Neumann, Stephan; Ströbel, Philipp; Kaup, Franz-Josef; Brenig, Bertram; Schütz, Ekkehard

    2013-01-01

    Mammary tumors are the most frequent cancers in female dogs and exhibit a variety of histopathological differences. There is a lack of knowledge about the genomes of these common dog tumors. Five tumors of three different histological subtypes were evaluated. Massive parallel sequencing (MPS) was performed in comparison to the respective somatic genome of each animal, and copy number and structural aberrations were validated using droplet digital PCR (ddPCR). Using mate-pair sequencing, chromosomal aneuploidies were found in two tumors, frequent smaller deletions in one, and inter-chromosomal fusions in another, whereas one tumor was almost normal. These aberrations affect several known cancer-associated genes such as cMYC and KIT. One common deletion of the proximal end of CFA27, harboring the tumor suppressor gene PFDN5, was detected in four tumors. Using ddPCR, this deletion was validated and detected in 50% of tumors (N = 20). Breakpoint-specific dPCRs were established for four tumors, and tumor-specific cell-free DNA (cfDNA) was detected in the plasma. In one animal, tumor-specific cfDNA was found more than a year after surgery, attributable to a lung metastasis. Paired-end sequencing proved that copy-number imbalances of the tumor are reflected in the cfDNA. This report on the chromosomal instability of canine mammary cancers reveals similarities to human breast cancers as well as canine-specific alterations. This animal model provides a framework for using MPS to screen for individual cancer biomarkers, with cost-effective confirmation and monitoring using ddPCR. The possibility exists that ddPCR can be expanded to screening for common cancer-related variants. PMID:24098698

  7. Detection of Cheating by Decimation Algorithm

    NASA Astrophysics Data System (ADS)

    Yamanaka, Shogo; Ohzeki, Masayuki; Decelle, Aurélien

    2015-02-01

    We extend item response theory to the case of "cheating students" on a set of exams, trying to detect them by applying a greedy inference algorithm. The extended model is closely related to Boltzmann machine learning. In this paper we aim to infer the correct biases and interactions of our model from a relatively small number of sets of training data. The greedy algorithm employed in the present study exhibits good performance even with few training data. The key point is the sparseness of the interactions in our problem in the context of Boltzmann machine learning: cheating students are expected to be very rare (possibly even in the real world). We compare a standard approach for inferring sparse interactions in Boltzmann machine learning to our greedy algorithm and find the latter to be superior in several respects.

  8. Differential Search Algorithm Based Edge Detection

    NASA Astrophysics Data System (ADS)

    Gunen, M. A.; Civicioglu, P.; Beşdok, E.

    2016-06-01

    In this paper, a new method is presented for the extraction of edge information using the Differential Search optimization algorithm. The proposed method is based on a new heuristic image thresholding method for edge detection. Its success has been examined on the fusion of two remotely sensed images. The applicability of the proposed method to edge detection and image fusion problems has been analysed in detail, and the empirical results show that the proposed method is useful for solving these problems.

  9. Network Algorithms for Detection of Radiation Sources

    SciTech Connect

    Rao, Nageswara S; Brooks, Richard R; Wu, Qishi

    2014-01-01

    In support of national defense, the Domestic Nuclear Detection Office's (DNDO) Intelligent Radiation Sensor Systems (IRSS) program supported the development of networks of radiation counters for detecting, localizing and identifying low-level, hazardous radiation sources. Industry teams developed the first generation of such networks with tens of counters, and demonstrated several of their capabilities in indoor and outdoor characterization tests. Subsequently, these test measurements have been used in algorithm replays using various sub-networks of counters. Test measurements combined with algorithm outputs are used to extract Key Measurements and Benchmark (KMB) datasets. We present two selective analyses of these datasets: (a) a notional border monitoring scenario that highlights the benefits of a network of counters compared to individual detectors, and (b) new insights into the Sequential Probability Ratio Test (SPRT) detection method, which lead to its adaptations for improved detection. Using KMB datasets from an outdoor test, we construct a notional border monitoring scenario, wherein twelve 2×2 NaI detectors are deployed on the periphery of a 21×21 meter square region. A Cs-137 (175 uCi) source is moved across this region, starting several meters outside and finally moving away. The measurements from individual counters and the network were processed using replays of a particle filter algorithm developed under the IRSS program. The algorithm outputs from KMB datasets clearly illustrate the benefits of combining measurements from all networked counters: the source was detected before it entered the region, during its trajectory inside, and until it moved several meters away. When individual counters are used for detection, the source was detected for much shorter durations, and was sometimes missed in the interior region. The application of SPRT for detecting radiation sources requires choosing the detection threshold, which in turn requires a source strength
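
    The generic Wald SPRT for Poisson count data, the textbook form of the method the abstract analyses, can be sketched as below; the rates and error levels here are illustrative, not taken from the IRSS tests.

```python
import math

def sprt_poisson(counts, bg_rate, src_rate, alpha=0.01, beta=0.01):
    """Return 'source', 'background', or 'continue' after the given counts.
    Accumulate the Poisson log-likelihood ratio per counting interval and
    compare it to Wald's thresholds for error rates alpha (false alarm)
    and beta (miss)."""
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    for c in counts:
        # Per-interval Poisson log-likelihood ratio (factorial terms cancel).
        llr += c * math.log(src_rate / bg_rate) - (src_rate - bg_rate)
        if llr >= upper:
            return "source"
        if llr <= lower:
            return "background"
    return "continue"
```

    The dependence of the test on an assumed source rate is exactly the threshold-selection issue the abstract raises: a mis-specified `src_rate` shifts every per-interval increment.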

  10. Benchmark graphs for testing community detection algorithms

    NASA Astrophysics Data System (ADS)

    Lancichinetti, Andrea; Fortunato, Santo; Radicchi, Filippo

    2008-10-01

    Community structure is one of the most important features of real networks and reveals the internal organization of the nodes. Many algorithms have been proposed, but the crucial issue of testing, i.e., the question of how good an algorithm is with respect to others, is still open. Standard tests include the analysis of simple artificial graphs with a built-in community structure that the algorithm has to recover. However, the special graphs adopted in actual tests have a structure that does not reflect the real properties of nodes and communities found in real networks. Here we introduce a class of benchmark graphs that account for the heterogeneity in the distributions of node degrees and of community sizes. We use this benchmark to test two popular methods of community detection: modularity optimization and Potts model clustering. The results show that the benchmark poses a much more severe test to algorithms than standard benchmarks, revealing limits that may not be apparent at first analysis.
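
    The heterogeneity this benchmark introduces, power-law distributions of node degrees and community sizes, can be illustrated with a small truncated power-law sampler. This is a sketch under assumed exponent and size bounds, not the authors' generator:

```python
import random

def sample_power_law(exponent, xmin, xmax, rng):
    """Draw one integer from a truncated discrete power law P(x) ~ x**(-exponent)."""
    weights = [x ** -exponent for x in range(xmin, xmax + 1)]
    r = rng.random() * sum(weights)
    for x, w in zip(range(xmin, xmax + 1), weights):
        r -= w
        if r <= 0:
            return x
    return xmax

def community_sizes(n_nodes, exponent=1.5, smin=10, smax=50, seed=0):
    """Sample community sizes until they cover n_nodes, trimming the overshoot."""
    rng = random.Random(seed)
    sizes = []
    while sum(sizes) < n_nodes:
        sizes.append(sample_power_law(exponent, smin, smax, rng))
    sizes[-1] -= sum(sizes) - n_nodes   # trim the last community to fit exactly
    return sizes

sizes = community_sizes(200)
```

    Node degrees would be drawn the same way; the benchmark then wires within- and between-community edges according to a mixing parameter.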

  11. Simple probabilistic algorithm for detecting community structure

    NASA Astrophysics Data System (ADS)

    Ren, Wei; Yan, Guiying; Liao, Xiaoping; Xiao, Lan

    2009-03-01

    With the growing number of available social and biological networks, detecting network community structure is becoming increasingly important, as it is the first step in analyzing these data. Community structure is generally understood to mean that nodes in the same community tend to share more edges than nodes in different communities. We propose a simple probabilistic algorithm for detecting community structure which employs expectation-maximization (SPAEM). We also give a criterion based on the minimum description length to identify the optimal number of communities. SPAEM can detect overlapping nodes and handle weighted networks. Tests on simulated data and several widely known data sets show it to be powerful and effective.

  12. An improved algorithm for wildfire detection

    NASA Astrophysics Data System (ADS)

    Nakau, K.

    2010-12-01

    Satellite information on wildfire locations is in strong demand from society, and understanding these demands is essential when considering how to improve a wildfire detection algorithm. Interviews and subsequent analysis imply that the most important improvements are the geographical resolution of the wildfire product and the classification of fire as smoldering or flaming. Discussions were held with fire service agencies in Alaska and fire service volunteer groups in Indonesia. The Alaska Fire Service (AFS) produces a 3D map overlaid with fire locations every morning. This 3D map is then examined by the leaders of fire service teams to decide their strategy for fighting the wildfire. In particular, firefighters of both agencies seek the best walking path to approach the fire. Because of the mountainous landscape, geospatial resolution is quite important to them: walking through bush for 1 km, the extent of a single pixel of the fire product, is very tough for firefighters. Also, in the case of remote wildfires, fire service agencies use satellite information to decide when to fly an observation mission to confirm the fire's status: expanding, flaming, smoldering or out. It is therefore also important to classify the fire as flaming or smoldering. Beyond disaster management, wildfires emit a huge amount of carbon into the atmosphere, as much as one quarter to one half of the CO2 emitted by fuel combustion (IPCC AR4), so reducing CO2 emissions from human-caused wildfire is important; estimating carbon emission from wildfire again makes spatial resolution critical. To improve the sensitivity of wildfire detection, the author adopts radiance-based wildfire detection. Unlike the existing brightness-temperature approach, this makes it easy to account for the reflectance of the background land cover. For GCOM-C1/SGLI in particular, the band used to detect fire at 250 m resolution is at a 1.6 μm wavelength, where there is much more reflected sunlight. Therefore, we need to
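
    A contextual radiance test of the general kind described, flagging pixels that stand out against their local background, can be sketched as follows. The window size and the k-sigma margin are illustrative choices, not values from this work:

```python
def detect_fire_pixels(radiance, window=2, k=4.0):
    """Flag pixels whose radiance exceeds the local background mean by
    k standard deviations (background = surrounding window, pixel excluded)."""
    rows, cols = len(radiance), len(radiance[0])
    flags = [[False] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            background = [radiance[r][c]
                          for r in range(max(0, i - window), min(rows, i + window + 1))
                          for c in range(max(0, j - window), min(cols, j + window + 1))
                          if (r, c) != (i, j)]
            mean = sum(background) / len(background)
            var = sum((v - mean) ** 2 for v in background) / len(background)
            flags[i][j] = radiance[i][j] > mean + k * var ** 0.5
    return flags

# A single hot pixel in an otherwise uniform radiance field.
field = [[1.0] * 5 for _ in range(5)]
field[2][2] = 10.0
flags = detect_fire_pixels(field)
```

    A real 1.6 μm algorithm would also correct for the background reflectance term mentioned in the abstract before thresholding.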

  13. Photon Counting Using Edge-Detection Algorithm

    NASA Technical Reports Server (NTRS)

    Gin, Jonathan W.; Nguyen, Danh H.; Farr, William H.

    2010-01-01

    New applications such as high-data-rate, photon-starved, free-space optical communications require photon counting at flux rates into gigaphoton-per-second regimes coupled with subnanosecond timing accuracy. Current single-photon detectors that are capable of handling such operating conditions are designed in an array format and produce output pulses that span multiple sample times. In order to discern one pulse from another and avoid overcounting the incoming photons, a detection algorithm must be applied to the sampled detector output pulses. As flux rates increase, implementing such a detection algorithm becomes difficult within a digital processor that may reside within a field-programmable gate array (FPGA). Systems have been developed and implemented both to characterize gigahertz-bandwidth single-photon detectors and to process photon count signals at rates into gigaphotons per second, in order to implement communications links at SCPPM (serial concatenated pulse position modulation) encoded data rates exceeding 100 megabits per second with efficiencies greater than two bits per detected photon. A hardware edge-detection algorithm and corresponding signal combining and deserialization hardware were developed to meet these requirements at sample rates up to 10 GHz. The photon discriminator deserializer hardware board accepts four inputs, which allows it to take inputs from a quad photon-counting detector to support requirements for optical tracking with a reduced number of hardware components. The four inputs are hardware leading-edge detected independently. After leading-edge detection, the resultant samples are ORed together prior to deserialization. The deserialization is performed to reduce the rate at which data is passed to a digital signal processor, perhaps residing within an FPGA. The hardware implements four separate analog inputs that are connected through RF connectors.
Each analog input is fed to a high-speed 1
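
    The hardware leading-edge rule described above, counting a pulse that spans several sample times exactly once, has a direct software analogue. A minimal sketch with hypothetical sample values and threshold:

```python
def count_photons(samples, threshold):
    """Count threshold crossings on the rising (leading) edge only, so a
    detector pulse spanning multiple samples is counted as one photon."""
    count = 0
    above = False
    for s in samples:
        if s >= threshold and not above:
            count += 1          # new leading edge: one photon
        above = s >= threshold
    return count

# Two pulses, each several samples wide, count as exactly two photons.
n = count_photons([0, 1, 5, 6, 5, 1, 0, 0, 4, 7, 3, 0], threshold=3)
```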

  14. Interpreting Chromosome Aberration Spectra

    NASA Technical Reports Server (NTRS)

    Levy, Dan; Reeder, Christopher; Loucas, Bradford; Hlatky, Lynn; Chen, Allen; Cornforth, Michael; Sachs, Rainer

    2007-01-01

    Ionizing radiation can damage cells by breaking both strands of DNA in multiple locations, essentially cutting chromosomes into pieces. The cell has enzymatic mechanisms to repair such breaks; however, these mechanisms are imperfect and, in an exchange process, may produce a large-scale rearrangement of the genome, called a chromosome aberration. Chromosome aberrations are important in killing cells, during carcinogenesis, in characterizing repair/misrepair pathways, in retrospective radiation biodosimetry, and in a number of other ways. DNA staining techniques such as mFISH (multicolor fluorescence in situ hybridization) provide a means for analyzing aberration spectra by examining observed final patterns. Unfortunately, an mFISH observed final pattern often does not uniquely determine the underlying exchange process. Further, resolution limitations in the painting protocol sometimes lead to apparently incomplete final patterns. Here we describe an algorithm for systematically finding exchange processes consistent with any observed final pattern. This algorithm uses aberration multigraphs, a mathematical formalism that links the various aspects of aberration formation. By applying a measure to the space of consistent multigraphs, we will show how to generate model-specific distributions of aberration processes from mFISH experimental data. The approach is implemented by software freely available over the internet. As a sample application, we apply these algorithms to an aberration data set, obtaining a distribution of exchange cycle sizes, which serves to measure aberration complexity. Estimating complexity, in turn, helps indicate how damaging the aberrations are and may facilitate identification of radiation type in retrospective biodosimetry.

  15. Performance Characterization of Obstacle Detection Algorithms for Aircraft Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia; Coraor, Lee

    2000-01-01

    The research reported here is a part of NASA's Synthetic Vision System (SVS) project for the development of a High Speed Civil Transport Aircraft (HSCT). One of the components of the SVS is a module for detection of potential obstacles in the aircraft's flight path by analyzing the images captured by an on-board camera in real-time. Design of such a module includes the selection and characterization of robust, reliable, and fast techniques and their implementation for execution in real-time. This report describes the results of our research in realizing such a design. It is organized into three parts. Part I. Data modeling and camera characterization; Part II. Algorithms for detecting airborne obstacles; and Part III. Real time implementation of obstacle detection algorithms on the Datacube MaxPCI architecture. A list of publications resulting from this grant as well as a list of relevant publications resulting from prior NASA grants on this topic are presented.

  16. Algorithm for parametric community detection in networks.

    PubMed

    Bettinelli, Andrea; Hansen, Pierre; Liberti, Leo

    2012-07-01

    Modularity maximization is extensively used to detect communities in complex networks. It has been shown, however, that this method suffers from a resolution limit: Small communities may be undetectable in the presence of larger ones even if they are very dense. To alleviate this defect, various modifications of the modularity function have been proposed as well as multiresolution methods. In this paper we systematically study a simple model (proposed by Pons and Latapy [Theor. Comput. Sci. 412, 892 (2011)] and similar to the parametric model of Reichardt and Bornholdt [Phys. Rev. E 74, 016110 (2006)]) with a single parameter α that balances the fraction of within community edges and the expected fraction of edges according to the configuration model. An exact algorithm is proposed to find optimal solutions for all values of α as well as the corresponding successive intervals of α values for which they are optimal. This algorithm relies upon a routine for exact modularity maximization and is limited to moderate size instances. An agglomerative hierarchical heuristic is therefore proposed to address parametric modularity detection in large networks. At each iteration the smallest value of α for which it is worthwhile to merge two communities of the current partition is found. Then merging is performed and the data are updated accordingly. An implementation is proposed with the same time and space complexity as the well-known Clauset-Newman-Moore (CNM) heuristic [Phys. Rev. E 70, 066111 (2004)]. Experimental results on artificial and real world problems show that (i) communities are detected by both exact and heuristic methods for all values of the parameter α; (ii) the dendrogram summarizing the results of the heuristic method provides a useful tool for substantive analysis, as illustrated particularly on a Les Misérables data set; (iii) the difference between the parametric modularity values given by the exact method and those given by the heuristic is
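
    In one common convention, the parametric objective is Q_alpha = sum over communities c of e_c/m - alpha*(d_c/2m)^2, which reduces to standard modularity at alpha = 1. A minimal sketch under that convention (the paper's exact formulation may differ):

```python
def parametric_modularity(edges, membership, alpha=1.0):
    """Q_alpha = sum_c [ e_c/m - alpha * (d_c / 2m)**2 ] with the
    configuration-model null. edges: (u, v) pairs; membership: node -> community."""
    m = len(edges)
    within = {}    # e_c: number of edges inside community c
    degree = {}    # d_c: total degree of community c
    for u, v in edges:
        cu, cv = membership[u], membership[v]
        degree[cu] = degree.get(cu, 0) + 1
        degree[cv] = degree.get(cv, 0) + 1
        if cu == cv:
            within[cu] = within.get(cu, 0) + 1
    return sum(within.get(c, 0) / m - alpha * (degree[c] / (2 * m)) ** 2
               for c in degree)

# Two triangles joined by a single bridge edge, split into two communities.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
membership = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
q = parametric_modularity(edges, membership)
```

    Sweeping alpha and recording where the optimal partition changes is the kind of parametric analysis the exact algorithm performs.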

  17. Algorithmic sensor failure detection on passive antenna arrays

    NASA Astrophysics Data System (ADS)

    Chun, Joohwan; Luk, Franklin T.

    1991-12-01

    We present an algorithm that can detect and isolate a single passive antenna failure under the assumption of slowly time varying signal sources. Our failure detection algorithm recursively computes an eigenvalue decomposition of the covariance of the "syndrome" vector. The sensor failure is detected using the largest eigenvalue, and the faulty sensor is located using the corresponding eigenvector. The algorithm can also be used in conjunction with existing singular value decomposition or orthogonal triangularization based recursive antenna array processing methods.
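
    The detect-then-locate logic above, largest eigenvalue of the syndrome covariance for detection and its eigenvector for isolation, can be sketched with a simple power iteration. The covariance values and threshold below are hypothetical:

```python
def largest_eig(cov, iters=200):
    """Dominant eigenpair of a symmetric positive semidefinite matrix
    via power iteration."""
    n = len(cov)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = sum(x * x for x in w) ** 0.5
        v = [x / lam for x in w]
    return lam, v

def locate_faulty_sensor(cov, threshold):
    """Declare a failure if the dominant eigenvalue exceeds the threshold;
    the faulty sensor index is the largest eigenvector component."""
    lam, v = largest_eig(cov)
    if lam <= threshold:
        return None
    return max(range(len(v)), key=lambda i: abs(v[i]))

# Sensor 2's syndrome variance has blown up relative to the others.
cov = [[0.1, 0.0, 0.0], [0.0, 0.1, 0.0], [0.0, 0.0, 5.0]]
sensor = locate_faulty_sensor(cov, threshold=1.0)
```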

  18. Dual-Byte-Marker Algorithm for Detecting JFIF Header

    NASA Astrophysics Data System (ADS)

    Mohamad, Kamaruddin Malik; Herawan, Tutut; Deris, Mustafa Mat

    The use of an efficient algorithm to detect JPEG files is vital to reduce the time taken to analyze the ever-increasing data in hard drives or physical memory. In a previous paper, a single-byte-marker algorithm was proposed for header detection. In this paper, another novel header detection algorithm called dual-byte-marker is proposed. Based on experiments done on images from hard disks, physical memory and the data set from the DFRWS 2006 Challenge, results showed that the dual-byte-marker algorithm gives better performance, with better execution time for header detection, than the single-byte-marker algorithm.
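
    The marker layout such algorithms scan for can be sketched directly: a JFIF file starts with the SOI marker (FF D8) immediately followed by the APP0 marker (FF E0), a two-byte segment length, and the identifier "JFIF\0". A minimal byte-pair scan, illustrating the structure rather than the authors' exact algorithm:

```python
def find_jfif_headers(data: bytes):
    """Return offsets of JFIF headers: SOI (FF D8) followed by APP0 (FF E0)
    whose identifier field reads 'JFIF\\0'."""
    offsets = []
    i = 0
    while True:
        i = data.find(b"\xff\xd8", i)
        if i < 0:
            break
        # Bytes i+4..i+5 hold the APP0 segment length; i+6..i+10 the identifier.
        if data[i + 2:i + 4] == b"\xff\xe0" and data[i + 6:i + 11] == b"JFIF\x00":
            offsets.append(i)
        i += 2
    return offsets

blob = b"junk" + b"\xff\xd8\xff\xe0\x00\x10JFIF\x00" + b"\x00" * 8
hits = find_jfif_headers(blob)
```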

  19. An effective algorithm for radar dim moving target detection

    NASA Astrophysics Data System (ADS)

    Luo, Qian; Wang, Yanfei

    2009-10-01

    The detection and tracking of dim moving targets in very low signal-to-noise ratio (SNR) environments has been a difficult problem in radar signal processing. For low-SNR moving-target detection, a new improved dynamic programming algorithm based on the track-before-detect method is presented. The new algorithm integrates energy along target moving tracks according to target motion parameter information, replacing exhaustive search with a feasible algorithm. Simulation confirms that this algorithm is feasible, has high computational efficiency, and can effectively estimate the trajectories of dim closing moving targets. The process has also been shown to improve detection.
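
    The core idea, integrating energy over every motion-consistent track by dynamic programming rather than enumerating tracks exhaustively, can be sketched on a 1-D sensor. The per-frame velocity bound `max_step` and the toy frames are hypothetical:

```python
def track_before_detect(frames, max_step=1):
    """Dynamic-programming track-before-detect on a 1-D sensor: integrate
    energy over all tracks whose position moves at most max_step cells per
    frame, then read off the best-scoring track."""
    n = len(frames[0])
    score = list(frames[0])
    parents = []
    for frame in frames[1:]:
        prev = []
        new_score = []
        for j in range(n):
            candidates = range(max(0, j - max_step), min(n, j + max_step + 1))
            best = max(candidates, key=lambda p: score[p])
            prev.append(best)
            new_score.append(score[best] + frame[j])
        score = new_score
        parents.append(prev)
    end = max(range(n), key=lambda j: score[j])
    track = [end]
    for prev in reversed(parents):
        track.append(prev[track[-1]])
    track.reverse()
    return score[end], track

# A dim target drifting right one cell per frame in noise-free data.
frames = [[0, 5, 0, 0], [0, 0, 5, 0], [0, 0, 0, 5]]
best, track = track_before_detect(frames)
```

    Thresholding the integrated score (rather than any single frame) is what lets the method pull dim targets out of noise.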

  20. A vehicle detection algorithm based on deep belief network.

    PubMed

    Wang, Hai; Cai, Yingfeng; Chen, Long

    2014-01-01

    Vision-based vehicle detection is a critical technology that plays an important role not only in vehicle active safety but also in road video surveillance applications. Traditional shallow-model-based vehicle detection algorithms still cannot meet the requirement of accurate vehicle detection in these applications. In this work, a novel deep-learning-based vehicle detection algorithm with a 2D deep belief network (2D-DBN) is proposed. The proposed 2D-DBN architecture uses second-order planes instead of first-order vectors as input and uses bilinear projection to retain discriminative information when determining the size of the deep architecture, which enhances the success rate of vehicle detection. On-road experimental results demonstrate that the algorithm performs better than state-of-the-art vehicle detection algorithms on the test data sets. PMID:24959617

  1. Comparison of 3-D Multi-Lag Cross-Correlation and Speckle Brightness Aberration Correction Algorithms on Static and Moving Targets

    PubMed Central

    Ivancevich, Nikolas M.; Dahl, Jeremy J.; Smith, Stephen W.

    2010-01-01

    Phase correction has the potential to increase the image quality of 3-D ultrasound, especially transcranial ultrasound. We implemented and compared 2 algorithms for aberration correction, multi-lag cross-correlation and speckle brightness, using static and moving targets. We corrected three 75-ns rms electronic aberrators with full-width at half-maximum (FWHM) auto-correlation lengths of 1.35, 2.7, and 5.4 mm. Cross-correlation proved the better algorithm at 2.7 and 5.4 mm correlation lengths (P < 0.05). Static cross-correlation performed better than moving-target cross-correlation at the 2.7 mm correlation length (P < 0.05). Finally, we compared the static and moving-target cross-correlation on a flow phantom with a skull casting aberrator. Using signal from static targets, the correction resulted in an average contrast increase of 22.2%, compared with 13.2% using signal from moving targets. The contrast-to-noise ratio (CNR) increased by 20.5% and 12.8% using static and moving targets, respectively. Doppler signal strength increased by 5.6% and 4.9% for the static and moving-targets methods, respectively. PMID:19942503
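
    The multi-lag cross-correlation approach estimates, for each pair of array elements, the arrival-time offset that best aligns their signals; those offsets form the aberration profile to correct. A single-pair sketch with hypothetical waveforms:

```python
def estimate_delay(ref, sig, max_lag):
    """Estimate the arrival-time offset (in samples) between two element
    signals as the lag maximizing their normalized cross-correlation."""
    def xcorr(lag):
        pairs = [(ref[i], sig[i + lag]) for i in range(len(ref))
                 if 0 <= i + lag < len(sig)]
        num = sum(a * b for a, b in pairs)
        den = (sum(a * a for a, _ in pairs) * sum(b * b for _, b in pairs)) ** 0.5
        return num / den if den else 0.0
    return max(range(-max_lag, max_lag + 1), key=xcorr)

# The same pulse arrives 3 samples later on the second element.
ref = [0, 0, 1, 3, 1, 0, 0, 0, 0, 0]
sig = [0, 0, 0, 0, 0, 1, 3, 1, 0, 0]
lag = estimate_delay(ref, sig, max_lag=5)
```

    The "multi-lag" variant improves robustness by combining delay estimates between elements at several separations, not just nearest neighbors.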

  2. Effect of chromosome size on aberration levels caused by gamma radiation as detected by fluorescence in situ hybridization.

    PubMed

    Pandita, T K; Gregoire, V; Dhingra, K; Hittelman, W N

    1994-01-01

    Fluorescence in situ hybridization (FISH) is a powerful technique for detecting genomic alterations at the chromosome level. To study the effect of chromosome size on aberration formation, we used FISH to detect initial damage in individual prematurely condensed chromosomes (PCC) of gamma-irradiated G0 human cells. A linear dose response for breaks and a nonlinear dose response for exchanges were obtained using a chromosome 1-specific probe. FISH detected more chromosome 1 breaks than expected from DNA-based extrapolation of Giemsa-stained PCC preparations. The discrepancy in the number of breaks detected by the two techniques raised the question of whether Giemsa staining and FISH differ in their sensitivities for detecting breaks, or whether chromosome 1 is uniquely sensitive to gamma radiation. To address the question of technique sensitivity, we determined total chromosome damage by FISH using a total genomic painting probe; the results obtained from Giemsa staining and FISH were nearly identical. To determine whether chromosome 1 was uniquely sensitive, we selected four different-sized chromosomes for paint probes and scored them for gamma-ray-induced aberrations. In these studies the number of chromosome breaks per unit DNA increased linearly with the DNA content of the chromosomes. However, the number of exchanges per unit of DNA did not increase with chromosome size. This suggests that chromosome size may influence the levels of aberrations observed. Extrapolation from measurements of a single chromosome's damage to the whole genome requires that the relative DNA content of the measured chromosome be considered. PMID:8039428

  3. Health Monitoring System for the SSME-fault detection algorithms

    NASA Technical Reports Server (NTRS)

    Tulpule, S.; Galinaitis, W. S.

    1990-01-01

    A Health Monitoring System (HMS) framework for the Space Shuttle Main Engine (SSME) has been developed by United Technologies Corporation (UTC) for the NASA Lewis Research Center. As part of this effort, fault detection algorithms have been developed to detect SSME faults with sufficient time to shut down the engine. These algorithms have been designed to provide monitoring coverage during the startup, mainstage and shutdown phases of SSME operation. The algorithms have the capability to detect multiple SSME faults, and are based on time series, regression and clustering techniques. This paper presents a discussion of candidate algorithms suitable for fault detection, followed by a description of the algorithms selected for implementation in the HMS and the results of testing these algorithms with SSME test stand data.

  4. Parallel algorithms for line detection on a mesh

    SciTech Connect

    Guerra, C.; Hambrusch, S. . Dept. of Computer Science)

    1989-02-01

    The authors consider the problem of detecting lines in an n x n image on an n x n mesh of processors. They present two new and efficient parallel algorithms which detect lines by performing a Hough transform. Both algorithms perform only simple data movement operations over relatively short distances.
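
    The Hough transform both algorithms parallelize can be sketched sequentially (the mesh versions distribute exactly this accumulation): each feature point votes for every (theta, rho) line passing through it, and detected lines appear as accumulator peaks. The angular resolution below is an arbitrary choice:

```python
import math

def hough_lines(points, n_theta=180, top=1):
    """Vote in (theta, rho) space for every line through each point;
    return the `top` accumulator peaks as ((theta_index, rho), votes)."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return sorted(acc.items(), key=lambda kv: -kv[1])[:top]

# Eight points on the vertical line x = 2 all vote for the bin (theta=0, rho=2).
peaks = hough_lines([(2, y) for y in range(8)])
```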

  5. A Formally Verified Conflict Detection Algorithm for Polynomial Trajectories

    NASA Technical Reports Server (NTRS)

    Narkawicz, Anthony; Munoz, Cesar

    2015-01-01

    In air traffic management, conflict detection algorithms are used to determine whether or not aircraft are predicted to lose horizontal and vertical separation minima within a time interval assuming a trajectory model. In the case of linear trajectories, conflict detection algorithms have been proposed that are both sound, i.e., they detect all conflicts, and complete, i.e., they do not present false alarms. In general, for arbitrary nonlinear trajectory models, it is possible to define detection algorithms that are either sound or complete, but not both. This paper considers the case of nonlinear aircraft trajectory models based on polynomial functions. In particular, it proposes a conflict detection algorithm that precisely determines whether, given a lookahead time, two aircraft flying polynomial trajectories are in conflict. That is, it has been formally verified that, assuming that the aircraft trajectories are modeled as polynomial functions, the proposed algorithm is both sound and complete.
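
    For intuition only, the conflict question can be approximated by sampling the squared-separation polynomial over the lookahead interval. Unlike the formally verified algorithm of the paper, which decides this exactly, dense sampling is neither sound nor complete; the separation function and bounds below are hypothetical:

```python
def in_conflict(sep2, lookahead, d2, samples=10_000):
    """Approximate conflict check: sep2(t) is the squared horizontal
    separation (a polynomial in t); report a conflict if it ever falls
    below the squared minimum d2 within the lookahead time. Dense
    sampling only; the verified algorithm decides this exactly."""
    return any(sep2(lookahead * k / samples) < d2 for k in range(samples + 1))

# Two aircraft closing head-on: separation (10 - 2t), minimum allowed 5.
conflict = in_conflict(lambda t: (10 - 2 * t) ** 2, lookahead=6, d2=25)
```

    The paper's contribution is precisely to avoid such approximation by exploiting the polynomial structure of the trajectories.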

  6. Characterising a holographic modal phase mask for the detection of ocular aberrations

    NASA Astrophysics Data System (ADS)

    Corbett, A. D.; Leyva, D. Gil; Diaz-Santana, L.; Wilkinson, T. D.; Zhong, J. J.

    2005-12-01

    The accurate measurement of the double-pass ocular wave front has been shown to have a broad range of applications, from LASIK surgery to adaptively corrected retinal imaging. The ocular wave front can be accurately described by a small number of Zernike circle polynomials. The modal wave front sensor was first proposed by Neil et al. and allows the coefficients of the individual Zernike modes to be measured directly. Typically the aberrations measured with the modal sensor are smaller than those seen in the ocular wave front. In this work, we investigated a technique for adapting a modal phase mask for sensing the ocular wave front. This involved extending the dynamic range of the sensor by increasing the pinhole size to 2.4 mm and optimising the mask bias to 0.75λ. This was found to decrease the RMS error by up to a factor of three for eye-like aberrations with amplitudes up to 0.2 μm. For aberrations taken from a sample of real-eye measurements, a 20% decrease in the RMS error was observed.

  7. Comparison of chromosomal aberrations detected by fluorescence in situ hybridization with clinical parameters, DNA ploidy and Ki 67 expression in renal cell carcinoma.

    PubMed Central

    Wada, Y.; Igawa, M.; Shiina, H.; Shigeno, K.; Yokogi, H.; Urakami, S.; Yoneda, T.; Maruyama, R.

    1998-01-01

    To evaluate the significance of chromosomal aberrations in renal cell carcinoma, fluorescence in situ hybridization (FISH) was used to determine their prevalence and correlation with clinical parameters of malignancy. In addition, the correlation of chromosomal aberration with Ki 67 expression was analysed. We performed FISH with chromosome-specific DNA probes, and the signal number of pericentromeric sequences on chromosomes 3, 7, 9 and 17 was detected within interphase nuclei in touch preparations from tumour specimens. The incidence of loss of chromosome 3 was significantly higher than those of chromosomes 7, 9 and 17 (P < 0.001, P = 0.03 and P < 0.001 respectively). Hyperdiploid aberration of chromosomes 3 and 17 was significantly correlated with tumour stage (P = 0.03, P = 0.02 respectively), whereas hyperdiploid aberration of chromosome 9 was associated with nuclear grade (P = 0.04). Disomy of chromosome 7 was correlated with venous involvement (P = 0.04). Ki 67 expression was significantly associated with hyperdiploid aberration of chromosome 17 (P = 0.01), but not with aberration of chromosome 3. There was a significant relationship between hyperdiploid aberration of chromosome 7 and Ki 67 expression (P = 0.01). In conclusion, gain of chromosome 17 may reflect tumour development, and aberration of chromosome 7 may affect the metastatic potential of the malignancy, whereas loss of chromosome 3 may be associated with an early stage of tumour development in renal cell carcinoma. PMID:9667682

  8. Genetic optimization of the HSTAMIDS landmine detection algorithm

    NASA Astrophysics Data System (ADS)

    Konduri, Ravi K.; Solomon, Geoff Z.; DeJong, Keith; Duvoisin, Herbert A.; Bartosz, Elizabeth E.

    2004-09-01

    CyTerra's dual-sensor HSTAMIDS system has demonstrated exceptional landmine detection capabilities in extensive government-run field tests. Further optimization of the highly successful PentAD-class algorithms for Humanitarian Demining (HD) use, to enhance the probability of detection (Pd) and lower the false alarm rate (FAR), may be possible. PentAD contains several input parameters, making such optimization computationally intensive. Genetic algorithm techniques, which formerly provided substantial improvement in the detection performance of the metal detector sensor algorithm alone, have been applied to optimize the numerical values of the dual-sensor algorithm parameters. Genetic algorithm techniques have also been applied to choose among several sub-models and fusion techniques to potentially train the HSTAMIDS HD system in new ways. In this presentation we discuss the performance of the resulting algorithm as applied to field data.

  9. Community detection based on modularity and an improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Shang, Ronghua; Bai, Jing; Jiao, Licheng; Jin, Chao

    2013-03-01

    Complex networks are widely used to model every aspect of human society, and community detection is a research hotspot in complex networks. Many algorithms use modularity as the objective function, which simplifies the algorithm. In this paper, a community detection method based on modularity and an improved genetic algorithm (MIGA) is put forward. MIGA takes the modularity Q as the objective function and uses prior information (the number of community structures), which makes the algorithm more targeted and improves the stability and accuracy of community detection. Meanwhile, MIGA uses simulated annealing as its local search method, which improves the local search ability through parameter adjustment. Compared with state-of-the-art algorithms, simulation results on computer-generated and four real-world networks reflect the effectiveness of MIGA.

  10. A new algorithmic approach for fingers detection and identification

    NASA Astrophysics Data System (ADS)

    Mubashar Khan, Arslan; Umar, Waqas; Choudhary, Taimoor; Hussain, Fawad; Haroon Yousaf, Muhammad

    2013-03-01

    Gesture recognition is concerned with interpreting human gestures through mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Hand gesture detection in a real-time environment, where time and memory are important constraints, is a critical operation. Hand gesture recognition largely depends on the accurate detection of the fingers. This paper presents a new algorithmic approach to detect and identify the fingers of the human hand. The proposed algorithm does not depend upon prior knowledge of the scene. It detects the active fingers and the metacarpophalangeal (MCP) joints of the inactive fingers from an already detected hand. A dynamic thresholding technique and a connected component labeling scheme are employed for background elimination and hand detection, respectively. The proposed approach identifies fingers in a real-time environment while keeping the memory and time requirements as low as possible.
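
    Connected component labeling, one of the two preprocessing steps named above, can be sketched on a binary mask with a breadth-first flood fill. The mask below stands in for a hypothetical thresholded hand image:

```python
from collections import deque

def label_components(mask):
    """4-connected component labeling of a binary image via BFS; returns a
    label grid (0 = background) and the number of components found."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for i in range(rows):
        for j in range(cols):
            if mask[i][j] and not labels[i][j]:
                current += 1
                queue = deque([(i, j)])
                labels[i][j] = current
                while queue:
                    r, c = queue.popleft()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and mask[nr][nc] and not labels[nr][nc]):
                            labels[nr][nc] = current
                            queue.append((nr, nc))
    return labels, current

mask = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1]]
labels, n = label_components(mask)
```

    In a finger detector, the largest foreground component would typically be taken as the hand region before finger-level analysis.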

  11. Machine learning algorithms for damage detection: Kernel-based approaches

    NASA Astrophysics Data System (ADS)

    Santos, Adam; Figueiredo, Eloi; Silva, M. F. M.; Sales, C. S.; Costa, J. C. W. A.

    2016-02-01

    This paper presents four kernel-based algorithms for damage detection under varying operational and environmental conditions, based on the one-class support vector machine, support vector data description, kernel principal component analysis and greedy kernel principal component analysis. Acceleration time series from an array of accelerometers were obtained from a laboratory structure and used for performance comparison. The main contribution of this study is the demonstration of the proposed algorithms' applicability to damage detection, together with a comparison of their classification performance against four other algorithms already considered reliable approaches in the literature. All proposed algorithms proved to have better classification performance than the previous ones.
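
    The flavor of these kernel methods can be conveyed by a simple novelty score: the squared distance from a test point to the mean of the baseline (undamaged) data in an RBF feature space. This is a toy stand-in for the four algorithms compared, with hypothetical data and kernel width:

```python
import math

def rbf(x, y, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def novelty_score(x, train, gamma=0.5):
    """Squared distance from x to the training mean in RBF feature space:
    k(x,x) - (2/n) sum_i k(x,xi) + (1/n^2) sum_ij k(xi,xj).
    Larger scores suggest damage/anomaly."""
    n = len(train)
    kxx = rbf(x, x, gamma)                              # always 1 for RBF
    kxt = sum(rbf(x, t, gamma) for t in train) / n
    ktt = sum(rbf(a, b, gamma) for a in train for b in train) / n ** 2
    return kxx - 2 * kxt + ktt

baseline = [(0.0, 0.1), (0.1, 0.0), (-0.1, 0.1), (0.0, -0.1)]
healthy = novelty_score((0.05, 0.0), baseline)
damaged = novelty_score((3.0, 3.0), baseline)
```

    The algorithms in the paper refine this basic idea with learned boundaries (one-class SVM, SVDD) or learned subspaces (kernel PCA variants).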

  12. An improved edge detection algorithm for depth map inpainting

    NASA Astrophysics Data System (ADS)

    Chen, Weihai; Yue, Haosong; Wang, Jianhua; Wu, Xingming

    2014-04-01

    Three-dimensional (3D) measurement technology has been widely used in many scientific and engineering areas. The emergence of the Kinect sensor makes 3D measurement much easier. However, the depth map captured by a Kinect sensor has some invalid regions, especially at object boundaries, and these missing regions must be filled first. This paper proposes a depth-assisted edge detection algorithm and improves an existing depth map inpainting algorithm using the extracted edges. In the proposed algorithm, both the color image and the raw depth data are used to extract initial edges. The edges are then optimized and used to assist depth map inpainting. Comparative experiments demonstrate that the proposed edge detection algorithm can extract object boundaries and inhibit non-boundary edges caused by textures on object surfaces. The proposed depth inpainting algorithm predicts missing depth values successfully and performs better than the existing algorithm around object boundaries.

  13. Evaluation schemes for video and image anomaly detection algorithms

    NASA Astrophysics Data System (ADS)

    Parameswaran, Shibin; Harguess, Josh; Barngrover, Christopher; Shafer, Scott; Reese, Michael

    2016-05-01

    Video anomaly detection is a critical research area in computer vision and a natural first step before applying object recognition algorithms. Many algorithms that detect anomalies (outliers) in videos and images have been introduced in recent years. However, these algorithms behave and perform differently depending on the domains and tasks to which they are applied. In order to better understand the strengths and weaknesses of outlier algorithms and their applicability to a particular domain or task of interest, it is important to measure and quantify their performance using appropriate evaluation metrics. Many evaluation metrics have been used in the literature, such as precision curves, precision-recall curves, and receiver operating characteristic (ROC) curves. In order to construct these metrics, it is also important to choose an appropriate evaluation scheme that decides when a proposed detection is considered a true or a false detection. Choosing the right evaluation metric and the right scheme is critical, since the choice can introduce positive or negative bias in the measuring criterion and may favor (or work against) a particular algorithm or task. In this paper, we review evaluation metrics and popular evaluation schemes that are used to measure the performance of anomaly detection algorithms on videos and imagery with one or more anomalies. We analyze the biases introduced by these choices by measuring the performance of an existing anomaly detection algorithm.
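
    The evaluation-scheme choice discussed above, deciding when a detection counts as true, can be made concrete with a greedy one-to-one matcher feeding precision and recall. The tolerance-based `match` predicate below is one possible scheme among many:

```python
def precision_recall(detections, truths, match):
    """Greedily match detections to ground-truth anomalies one-to-one under
    a user-supplied `match` predicate; unmatched detections are false
    positives, unmatched truths are misses."""
    unmatched = list(truths)
    tp = 0
    for d in detections:
        for t in unmatched:
            if match(d, t):
                unmatched.remove(t)
                tp += 1
                break
    fp = len(detections) - tp
    fn = len(unmatched)
    precision = tp / (tp + fp) if detections else 1.0
    recall = tp / (tp + fn) if truths else 1.0
    return precision, recall

# Frame-index detections matched to truth frames within a tolerance of 2.
p, r = precision_recall([10, 40, 90], [11, 39],
                        match=lambda d, t: abs(d - t) <= 2)
```

    Changing the predicate (e.g. tightening the tolerance, or requiring spatial overlap) changes precision and recall without any change to the detector, which is exactly the bias the paper analyzes.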

  14. Research on the filtering algorithm in speed and position detection of maglev trains.

    PubMed

    Dai, Chunhui; Long, Zhiqiang; Xie, Yunde; Xue, Song

    2011-01-01

    This paper introduces in brief the traction system of a permanent magnet electrodynamic suspension (EDS) train. The synchronous traction mode based on long stators and track cable is described, and a speed and position detection system is recommended; it is installed on board and used as the feedback end. Restricted by the maglev train's structure, the permanent magnet EDS train uses a non-contact method to detect its position. Because of vibration and the track joints, the position signal sent by the position sensor is always aberrant and noisy. To solve this problem, a linear discrete track-differentiator filtering algorithm is proposed. The filtering characteristics of the track-differentiator (TD) and of a TD group are analyzed, and four TDs in series are used in the signal processing unit. The results show that the track-differentiator performs well and allows the traction system to run normally. PMID:22164012
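
    A linear discrete tracking differentiator of the general second-order form described can be sketched as follows; the speed factor `r` and step size `h` are conventional names with illustrative values, not parameters from the paper:

```python
def tracking_differentiator(signal, h=0.01, r=50.0):
    """Linear discrete tracking differentiator: x1 tracks the noisy input,
    x2 tracks its derivative; r sets the tracking speed, h the step size."""
    x1, x2 = signal[0], 0.0
    states = []
    for v in signal:
        x1, x2 = (x1 + h * x2,
                  x2 + h * (-r * r * (x1 - v) - 2.0 * r * x2))
        states.append((x1, x2))
    return states

# On a clean ramp of slope 2, the derivative estimate x2 settles near 2.
ramp = [2.0 * k * 0.01 for k in range(500)]
states = tracking_differentiator(ramp)
```

    Cascading several such stages (the TD group of the abstract) trades additional lag for stronger noise suppression.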

  16. Improvement and implementation for Canny edge detection algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Qiu, Yue-hong

    2015-07-01

    Edge detection is necessary for image segmentation and pattern recognition. In this paper, an improved Canny edge detection approach is proposed to address defects of the traditional algorithm. A modified bilateral filter with a compensation function based on pixel intensity similarity is used to smooth the image instead of a Gaussian filter, which preserves edge features while removing noise effectively. To reduce sensitivity to noise in the gradient calculation, the algorithm uses gradient templates in four directions. Finally, the Otsu algorithm adaptively obtains the dual thresholds. The algorithm was implemented with the OpenCV 2.4.0 library in Visual Studio 2010, and experimental analysis shows that the improved algorithm detects edge details more effectively and with greater adaptability.
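
    The adaptive dual-threshold step can be sketched with a plain NumPy implementation of Otsu's method; deriving the low threshold as half the high one is a common Canny heuristic, not necessarily the paper's rule.

    ```python
    import numpy as np

    def otsu_threshold(gray):
        """Otsu's method on an 8-bit image: choose the threshold that
        maximizes the between-class variance of the two pixel classes."""
        hist = np.bincount(gray.ravel(), minlength=256).astype(float)
        p = hist / hist.sum()
        omega = np.cumsum(p)                   # class-0 probability
        mu = np.cumsum(p * np.arange(256))     # class-0 cumulative mean
        mu_t = mu[-1]
        with np.errstate(divide="ignore", invalid="ignore"):
            sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
        return int(np.nanargmax(sigma_b))

    # Bimodal test image: dark background plus a bright block.
    img = np.full((64, 64), 40, dtype=np.uint8)
    img[16:48, 16:48] = 200
    t_high = otsu_threshold(img)
    t_low = t_high // 2      # common heuristic for Canny's low threshold
    ```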

  17. An Improved QRS Wave Group Detection Algorithm and Matlab Implementation

    NASA Astrophysics Data System (ADS)

    Zhang, Hongjun

    This paper presents an algorithm using Matlab software to detect the QRS wave group in the MIT-BIH ECG database. First, noise in the ECG is removed with a Butterworth filter; the signal is then analyzed with a wavelet transform to detect singular points, achieving more accurate detection of the QRS wave group.
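
    The Butterworth prefiltering stage can be sketched in Python with SciPy; the 5-15 Hz passband below is a common choice for concentrating QRS energy while rejecting baseline wander, not necessarily the paper's setting.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def bandpass(sig, fs, lo=5.0, hi=15.0, order=2):
        """Zero-phase Butterworth band-pass filter (illustrative band)."""
        b, a = butter(order, [lo / (fs / 2.0), hi / (fs / 2.0)], btype="band")
        return filtfilt(b, a, sig)

    fs = 360                                    # MIT-BIH sampling rate
    t = np.arange(0.0, 10.0, 1.0 / fs)
    ecg = 0.5 * np.sin(2 * np.pi * 1.0 * t)     # baseline wander surrogate
    ecg[(np.arange(t.size) % fs) == 0] += 2.0   # crude R-peak impulses, 1/s
    filtered = bandpass(ecg, fs)
    wander_residual = bandpass(0.5 * np.sin(2 * np.pi * 1.0 * t), fs)
    ```

    After filtering, the 1 Hz wander is strongly attenuated while the impulse-like QRS surrogates still dominate the output, which is what makes the subsequent singularity analysis reliable.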

  18. A Comparative Analysis of Community Detection Algorithms on Artificial Networks.

    PubMed

    Yang, Zhao; Algesheimer, René; Tessone, Claudio J

    2016-01-01

    Many community detection algorithms have been developed to uncover the mesoscopic properties of complex networks. However, how good an algorithm is, in terms of accuracy and computing time, remains an open question. Testing algorithms on real-world networks has certain restrictions which make the resulting insights potentially biased: the networks are usually small, and the underlying communities are not defined objectively. In this study, we employ the Lancichinetti-Fortunato-Radicchi benchmark graph to test eight state-of-the-art algorithms. We quantify accuracy using complementary measures and record each algorithm's computing time. Based on simple network properties and these results, we provide guidelines that help to choose the most adequate community detection algorithm for a given network. Moreover, these rules allow uncovering limitations in the use of specific algorithms given macroscopic network properties. Our contribution is threefold: firstly, we provide techniques to determine the most suited algorithm in most circumstances based on observable properties of the network under consideration. Secondly, we use the mixing parameter as an easily measurable indicator for finding the ranges of reliability of the different algorithms. Finally, we study the dependence on network size, focusing on both the algorithms' predictive power and the effective computing time. PMID:27476470
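
    Accuracy on such benchmarks is commonly quantified with normalized mutual information (NMI) between the detected and the planted partitions. A small self-contained version (one of several NMI normalizations used in the literature) looks like this:

    ```python
    import numpy as np
    from math import log

    def nmi(a, b):
        """Normalized mutual information between two partitions: 1.0 for
        identical groupings (up to relabelling), near 0 for unrelated ones."""
        a, b = np.asarray(a), np.asarray(b)
        mi, ha, hb = 0.0, 0.0, 0.0
        for x in np.unique(a):
            px = float(np.mean(a == x))
            ha -= px * log(px)
            for y in np.unique(b):
                pxy = float(np.mean((a == x) & (b == y)))
                if pxy > 0:
                    py = float(np.mean(b == y))
                    mi += pxy * log(pxy / (px * py))
        for y in np.unique(b):
            py = float(np.mean(b == y))
            hb -= py * log(py)
        return 2.0 * mi / (ha + hb) if ha + hb > 0 else 1.0

    truth = [0, 0, 0, 1, 1, 1]      # planted communities
    detected = [1, 1, 1, 0, 0, 0]   # same split, labels swapped
    ```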

  19. Aquarius RFI Detection and Mitigation Algorithm: Assessment and Examples

    NASA Technical Reports Server (NTRS)

    Le Vine, David M.; De Matthaeis, P.; Ruf, Christopher S.; Chen, D. D.

    2013-01-01

    Aquarius is an L-band radiometer system designed to map sea surface salinity from space. This is a sensitive measurement, and protection from radio frequency interference (RFI) is important for success. An initial look at the performance of the Aquarius RFI detection and mitigation algorithm is reported together with examples of the global distribution of RFI at the L-band. To protect against RFI, Aquarius employs rapid sampling (10 ms) and a "glitch" detection algorithm that looks for outliers among the samples. Samples identified as RFI are removed, and the remainder is averaged to produce an RFI-free signal for the salinity retrieval algorithm. The RFI detection algorithm appears to work well over the ocean with modest rates for false alarms (5%) and missed detection. The global distribution of RFI coincides well with population centers and is consistent with observations reported by the Soil Moisture and Ocean Salinity mission.
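
    The spirit of the rapid-sampling "glitch" screen can be illustrated with a simple robust outlier filter: flag samples far from the median (in MAD-scaled units), discard them, and average the survivors. The threshold and the data below are illustrative, not the Aquarius flight parameters.

    ```python
    import numpy as np

    def glitch_filter(samples, k=4.0):
        """Flag samples deviating from the median by more than k robust
        standard deviations (MAD-based), then average the survivors."""
        x = np.asarray(samples, dtype=float)
        med = np.median(x)
        sigma = 1.4826 * np.median(np.abs(x - med))   # MAD -> std (Gaussian)
        keep = np.abs(x - med) <= k * sigma
        return x[keep].mean(), keep

    # 10 ms radiometer samples: thermal noise around 100 K plus two RFI spikes.
    rng = np.random.default_rng(1)
    samples = 100.0 + rng.standard_normal(200)
    samples[[50, 120]] += 40.0                        # injected interference
    clean_mean, keep = glitch_filter(samples)
    ```

    The averaged survivors give an essentially RFI-free estimate, while averaging all samples would carry the spike bias into the salinity retrieval.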

  20. A SAR ATR algorithm based on coherent change detection

    SciTech Connect

    Harmony, D.W.

    2000-12-01

    This report discusses an automatic target recognition (ATR) algorithm for synthetic aperture radar (SAR) imagery that is based on coherent change detection techniques. The algorithm relies on templates created from training data to identify targets. Objects are identified or rejected as targets by comparing their SAR signatures with templates using the same complex correlation scheme developed for coherent change detection. Preliminary results are presented in addition to future recommendations.
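
    The complex correlation at the heart of coherent change detection can be sketched as the sample coherence of two co-registered complex patches: a magnitude near 1 indicates an unchanged scene, near 0 indicates decorrelation. The speckle patches here are synthetic.

    ```python
    import numpy as np

    def coherence(img1, img2):
        """Sample complex coherence of two co-registered complex patches."""
        num = np.mean(img1 * np.conj(img2))
        den = np.sqrt(np.mean(np.abs(img1) ** 2) * np.mean(np.abs(img2) ** 2))
        return float(np.abs(num) / den)

    rng = np.random.default_rng(2)
    def speckle(shape, r):
        """Circular complex Gaussian speckle, a standard SAR clutter model."""
        return r.standard_normal(shape) + 1j * r.standard_normal(shape)

    scene = speckle((32, 32), rng)
    unchanged = coherence(scene, scene + 0.05 * speckle((32, 32), rng))
    changed = coherence(scene, speckle((32, 32), rng))   # independent scene
    ```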

  1. ETD: an extended time delay algorithm for ventricular fibrillation detection.

    PubMed

    Kim, Jungyoon; Chu, Chao-Hsien

    2014-01-01

    Ventricular fibrillation (VF) is the most serious type of cardiac arrhythmia, requiring quick detection and first aid to improve patients' survival rates. To be most effective in using wearable devices for VF detection, it is vital that the detection algorithms be accurate, robust, reliable and computationally efficient. Previous studies and our experiments both indicate that the time-delay (TD) algorithm has high reliability for separating sinus rhythm (SR) from VF and is resistant to variable factors, such as window size and filtering method. However, it fails to detect some VF cases. In this paper, we propose an extended time-delay (ETD) algorithm for VF detection and conduct experiments comparing the performance of ETD against five good VF detection algorithms, including TD, using the popular Creighton University (CU) database. Our study shows that (1) TD and ETD outperform the other four algorithms considered, and (2) with the same sensitivity setting, ETD improves upon TD in three other quality measures by up to 7.64%, and in terms of aggregate accuracy the ETD algorithm shows an improvement of 2.6% in the area under the curve (AUC) compared to TD. PMID:25571480
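
    The time-delay family of detectors embeds the signal in a delay plot (x(t), x(t+τ)) and measures how much of the plane the trajectory fills: organized sinus rhythm traces a thin loop, while disorganized VF scatters over many cells. A sketch with synthetic surrogate signals (τ, grid size, and the surrogates themselves are illustrative):

    ```python
    import numpy as np

    def td_box_count(sig, tau, grid=40):
        """Fraction of occupied cells in the delay plot (x(t), x(t+tau))
        on a grid x grid lattice."""
        x, y = sig[:-tau], sig[tau:]
        def to_bins(a):
            a = (a - a.min()) / (np.ptp(a) + 1e-12)   # normalize to [0, 1]
            return np.minimum((a * grid).astype(int), grid - 1)
        cells = set(zip(to_bins(x).tolist(), to_bins(y).tolist()))
        return len(cells) / float(grid * grid)

    fs = 250
    t = np.arange(0.0, 8.0, 1.0 / fs)
    sr_like = np.sin(2 * np.pi * 1.2 * t)                # organized surrogate
    rng = np.random.default_rng(3)
    vf_like = np.convolve(rng.standard_normal(t.size),   # disorganized surrogate
                          np.ones(10) / 10.0, mode="same")
    ```

    Thresholding the occupancy fraction then separates the two classes; the ETD extension adds further measures on top of this basic idea.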

  2. A baseline algorithm for face detection and tracking in video

    NASA Astrophysics Data System (ADS)

    Manohar, Vasant; Soundararajan, Padmanabhan; Korzhova, Valentina; Boonstra, Matthew; Goldgof, Dmitry; Kasturi, Rangachar

    2007-10-01

    Establishing benchmark datasets, performance metrics and baseline algorithms has considerable research significance in gauging the progress in any application domain. These primarily allow both users and developers to compare the performance of various algorithms on a common platform. In our earlier works, we focused on developing performance metrics and establishing a substantial dataset with ground truth for object detection and tracking tasks (text and face) in two video domains -- broadcast news and meetings. In this paper, we present the results of a face detection and tracking algorithm on broadcast news videos with the objective of establishing a baseline performance for this task-domain pair. The detection algorithm uses a statistical approach originally developed by Viola and Jones and later extended by Lienhart, with a Haar-like feature set and a cascade of boosted decision tree classifiers as the statistical model. In this work, we used the Intel Open Source Computer Vision Library (OpenCV) implementation of the Haar face detection algorithm. The optimal values for the tunable parameters of this implementation were found through an experimental design strategy commonly used in statistical analyses of industrial processes. Tracking was accomplished as continuous detection, with the detected objects in two frames mapped using a greedy algorithm based on the distances between the centroids of bounding boxes. Results on the evaluation set containing 50 sequences (~2.5 min) using the developed performance metrics show good performance of the algorithm, reflecting the state of the art, which makes it an appropriate choice as the baseline algorithm for the problem.
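
    The frame-to-frame association step (continuous detection plus greedy centroid matching) can be sketched as follows; the box format and gating distance are illustrative, not the paper's settings.

    ```python
    def greedy_match(boxes_prev, boxes_curr, max_dist=50.0):
        """Associate detections in consecutive frames by greedily linking
        the closest centroid pairs first. Boxes are (x, y, w, h)."""
        def centroid(b):
            x, y, w, h = b
            return (x + w / 2.0, y + h / 2.0)
        pairs = []
        for i, bp in enumerate(boxes_prev):
            for j, bc in enumerate(boxes_curr):
                (px, py), (cx, cy) = centroid(bp), centroid(bc)
                d = ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5
                pairs.append((d, i, j))
        pairs.sort()                      # closest pairs considered first
        used_i, used_j, links = set(), set(), {}
        for d, i, j in pairs:
            if d <= max_dist and i not in used_i and j not in used_j:
                links[i] = j
                used_i.add(i)
                used_j.add(j)
        return links

    prev = [(10, 10, 20, 20), (100, 100, 20, 20)]
    curr = [(104, 98, 20, 20), (12, 11, 20, 20)]   # same faces, slightly moved
    links = greedy_match(prev, curr)
    ```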

  3. Fast algorithm for detecting community structure in networks

    NASA Astrophysics Data System (ADS)

    Newman, M. E.

    2004-06-01

    Many networks display community structure—groups of vertices within which connections are dense but between which they are sparser—and sensitive computer algorithms have in recent years been developed for detecting this structure. These algorithms, however, are computationally demanding, which limits their application to small networks. Here we describe an algorithm which gives excellent results when tested on both computer-generated and real-world networks and is much faster, typically thousands of times faster, than previous algorithms. We give several example applications, including one to a collaboration network of more than 50 000 physicists.

  4. Detecting cosmic strings in the CMB with the Canny algorithm

    SciTech Connect

    Amsel, Stephen; Brandenberger, Robert H.; Berger, Joshua

    2008-04-15

    Line discontinuities in cosmic microwave background anisotropy maps are a distinctive prediction of models with cosmic strings. These signatures are visible in anisotropy maps with good angular resolution and should be identifiable using edge-detection algorithms. One such algorithm is the Canny algorithm. We study the potential of this algorithm to pick out the line discontinuities generated by cosmic strings. By applying the algorithm to small-scale microwave anisotropy maps generated from theoretical models with and without cosmic strings, we find that, given an angular resolution of several minutes of arc, cosmic strings can be detected down to a limit of the mass per unit length of the string which is one order of magnitude lower than the current upper bounds.

  5. Community detection algorithms: A comparative analysis

    NASA Astrophysics Data System (ADS)

    Lancichinetti, Andrea; Fortunato, Santo

    2009-11-01

    Uncovering the community structure exhibited by real networks is a crucial step toward an understanding of complex systems that goes beyond the local organization of their constituents. Many algorithms have been proposed so far, but none of them has been subjected to strict tests to evaluate their performance. Most of the sporadic tests performed so far involved small networks with known community structure and/or artificial graphs with a simplified structure, which is very uncommon in real systems. Here we test several methods against a recently introduced class of benchmark graphs, with heterogeneous distributions of degree and community size. The methods are also tested against the benchmark by Girvan and Newman [Proc. Natl. Acad. Sci. U.S.A. 99, 7821 (2002)] and on random graphs. As a result of our analysis, three recent algorithms introduced by Rosvall and Bergstrom [Proc. Natl. Acad. Sci. U.S.A. 104, 7327 (2007); Proc. Natl. Acad. Sci. U.S.A. 105, 1118 (2008)], Blondel [J. Stat. Mech.: Theory Exp. (2008), P10008], and Ronhovde and Nussinov [Phys. Rev. E 80, 016109 (2009)] have an excellent performance, with the additional advantage of low computational complexity, which enables one to analyze large systems.

  6. Algorithm for Detecting Significant Locations from Raw GPS Data

    NASA Astrophysics Data System (ADS)

    Kami, Nobuharu; Enomoto, Nobuyuki; Baba, Teruyuki; Yoshikawa, Takashi

    We present a fast algorithm for probabilistically extracting significant locations from raw GPS data based on data point density. Extracting significant locations from raw GPS data is the first essential step of algorithms designed for location-aware applications. Assuming that a location is significant if users spend a certain time around that area, most current algorithms compare spatial/temporal variables, such as stay duration and roaming diameter, with given fixed thresholds to extract significant locations. However, the appropriate threshold values are not clearly known a priori, and algorithms with fixed thresholds are inherently error-prone, especially under high noise levels. Moreover, for N data points, they are generally O(N^2) algorithms, since distance computation is required. We developed a fast algorithm for selectively sampling data points around significant locations based on density information, by constructing random histograms using locality-sensitive hashing. Evaluations show competitive performance in detecting significant locations even under high noise levels.

  7. A Motion Detection Algorithm Using Local Phase Information

    PubMed Central

    Lazar, Aurel A.; Ukani, Nikul H.; Zhou, Yiyin

    2016-01-01

    Previous research demonstrated that global phase alone can be used to faithfully represent visual scenes. Here we provide a reconstruction algorithm by using only local phase information. We also demonstrate that local phase alone can be effectively used to detect local motion. The local phase-based motion detector is akin to models employed to detect motion in biological vision, for example, the Reichardt detector. The local phase-based motion detection algorithm introduced here consists of two building blocks. The first building block measures/evaluates the temporal change of the local phase. The temporal derivative of the local phase is shown to exhibit the structure of a second order Volterra kernel with two normalized inputs. We provide an efficient, FFT-based algorithm for implementing the change of the local phase. The second processing building block implements the detector; it compares the maximum of the Radon transform of the local phase derivative with a chosen threshold. We demonstrate examples of applying the local phase-based motion detection algorithm on several video sequences. We also show how the locally detected motion can be used for segmenting moving objects in video scenes and compare our local phase-based algorithm to segmentation achieved with a widely used optic flow algorithm. PMID:26880882

  8. Line matching for automatic change detection algorithm

    NASA Astrophysics Data System (ADS)

    Dhollande, Jérôme; Monnin, David; Gond, Laetitia; Cudel, Christophe; Kohler, Sophie; Dieterlen, Alain

    2012-06-01

    During foreign operations, Improvised Explosive Devices (IEDs) are one of the major threats that soldiers may unfortunately encounter along itineraries. Based on a vehicle-mounted camera, we propose an original approach by image comparison to detect significant changes on these roads. Classic 2D-image registration techniques do not take parallax phenomena into account; as a consequence, misregistration errors can be detected as changes. Following stereovision principles, our automatic method compares intensity profiles along corresponding epipolar lines by extrema matching. An adaptive space warping compensates for scale difference in the 3D scene. When the signals are matched, the signal difference highlights changes, which are marked in the current video.

  9. Novel automatic eye detection and tracking algorithm

    NASA Astrophysics Data System (ADS)

    Ghazali, Kamarul Hawari; Jadin, Mohd Shawal; Jie, Ma; Xiao, Rui

    2015-04-01

    The eye is not only one of the most complex but also one of the most important sensory organs of the human body. Eye detection and eye tracking are fundamental, active issues in image processing. Non-invasive eye location and tracking are promising for hands-off gaze-based human-computer interfaces, fatigue detection, instrument control by paraplegic patients, and so on. For this purpose, an innovative framework is proposed in this paper to detect and track the eye in video sequences. The contributions of this work are twofold. The first is that eye filters were trained which can detect eye location efficiently and accurately without constraints on background and skin colour. The second is that a tracking framework based on sparse representation and the Lucas-Kanade (LK) optical tracker was built which can track the eye without constraints on eye status. The experimental results demonstrate the accuracy and real-time applicability of the proposed approach.

  10. A two-level detection algorithm for optical fiber vibration

    NASA Astrophysics Data System (ADS)

    Bi, Fukun; Ren, Xuecong; Qu, Hongquan; Jiang, Ruiqing

    2015-09-01

    Optical fiber vibration is detected by the coherent optical time-domain reflection technique. In addition to the vibration signals, the reflected signals include clutter and noise, which lead to a high false alarm rate. The cell-averaging (CA) constant false alarm rate algorithm has a high computing speed, but its detection performance declines in nonhomogeneous environments, such as with multiple targets. The order-statistics (OS) constant false alarm rate algorithm has a distinct advantage in multiple-target environments, but a lower computing speed. An intelligent two-level detection algorithm is presented that combines CA-CFAR and OS-CFAR in series, preserving the detection speed of the former and the detection performance of the latter. Through adaptive selection, cell averaging is applied in homogeneous environments, and the two-level detection algorithm is employed in nonhomogeneous environments. Our Monte Carlo simulation results demonstrate that, across different signal-to-noise ratios, the proposed algorithm gives a better detection probability than order statistics alone.
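
    The first-level CA-CFAR stage can be sketched in a few lines: for each cell under test, average the surrounding training cells (excluding guard cells) to estimate the local noise floor, and declare a detection when the cell exceeds a scaled version of that estimate. Window sizes and the scale factor below are illustrative.

    ```python
    import numpy as np

    def ca_cfar(x, guard=2, train=8, scale=3.0):
        """Cell-averaging CFAR over a 1-D power signal."""
        n = len(x)
        hits = np.zeros(n, dtype=bool)
        for i in range(n):
            lo = max(0, i - guard - train)
            hi = min(n, i + guard + train + 1)
            idx = [k for k in range(lo, hi) if abs(k - i) > guard]
            noise = np.mean(x[idx])           # local noise-floor estimate
            hits[i] = x[i] > scale * noise
        return hits

    rng = np.random.default_rng(4)
    x = rng.exponential(1.0, 300)     # square-law detected noise samples
    x[150] += 30.0                    # strong reflected vibration event
    hits = ca_cfar(x)
    ```

    Because the threshold scales with the local average, the false alarm rate stays roughly constant as the clutter level varies, which is the CFAR property the two-level scheme builds on.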

  11. Detecting Community Structure by Using a Constrained Label Propagation Algorithm

    PubMed Central

    Chin, Jia Hou; Ratnavelu, Kuru

    2016-01-01

    Community structure is considered one of the most interesting features of complex networks. Many real-world complex systems exhibit community structure, where individuals with similar properties form a community. The identification of communities in a network is important for understanding its structure. Thus, community detection in complex networks has gained immense interest over the last decade. Many community detection methods have been proposed, one of which is the label propagation algorithm (LPA). The simplicity and time efficiency of the LPA make it a popular community detection method. However, the LPA suffers from unstable detection due to the randomness induced in the algorithm. The focus of this paper is to improve the stability and accuracy of the LPA while retaining its simplicity. Our proposed algorithm first detects the main communities in a network by using the number of mutual neighbouring nodes. Subsequently, nodes are added into communities by using a constrained LPA. Those constraints are then gradually relaxed until all nodes are assigned to groups. To refine the quality of the detected communities, nodes can be switched to another community or removed from their current community at various stages of the algorithm. We evaluated our algorithm on three types of benchmark networks, namely the Lancichinetti-Fortunato-Radicchi (LFR), Relaxed Caveman (RC) and Girvan-Newman (GN) benchmarks, and we also apply it to some real-world networks of various sizes. The results show promising potential for detecting communities accurately. Furthermore, our constrained LPA has robustness and stability significantly better than the simple LPA, as it is able to yield deterministic results. PMID:27176470
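
    For reference, the plain (unconstrained) LPA that this work stabilizes can be sketched in a few lines; the random update order is exactly the source of the instability the paper addresses. The toy graph (two triangles joined by a bridge) and the tie-breaking rule are illustrative.

    ```python
    import random

    def label_propagation(adj, seed=0, max_iter=100):
        """Plain LPA: each node repeatedly adopts the most frequent label
        among its neighbours, visiting nodes in random order, until no
        label changes."""
        rng = random.Random(seed)
        labels = {v: v for v in adj}          # unique initial labels
        nodes = list(adj)
        for _ in range(max_iter):
            rng.shuffle(nodes)                # random order -> instability
            changed = False
            for v in nodes:
                counts = {}
                for u in adj[v]:
                    counts[labels[u]] = counts.get(labels[u], 0) + 1
                if not counts:
                    continue
                best = max(counts, key=lambda l: (counts[l], -l))  # ties: low label
                if labels[v] != best:
                    labels[v], changed = best, True
            if not changed:
                break
        return labels

    # Toy graph: two triangles {0,1,2} and {3,4,5} joined by the bridge 2-3.
    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
           3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
    labels = label_propagation(adj)
    ```

    At convergence each triangle is internally uniform, but whether the two triangles keep distinct labels or collapse into one community depends on the visiting order, illustrating why a constrained, deterministic variant is attractive.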

  13. QuateXelero: An Accelerated Exact Network Motif Detection Algorithm

    PubMed Central

    Khakabimamaghani, Sahand; Sharafuddin, Iman; Dichter, Norbert; Koch, Ina; Masoudi-Nejad, Ali

    2013-01-01

    Finding motifs in biological, social, technological, and other types of networks has become a widespread method of gaining more knowledge about these networks' structure and function. However, this task is very computationally demanding, because it is closely tied to graph isomorphism, an NP problem not yet known to belong to either P or the NP-complete subset. Accordingly, this research endeavors to decrease the need to call the NAUTY isomorphism detection method, which is the most time-consuming step in many existing algorithms. The work provides an extremely fast motif detection algorithm called QuateXelero, which has a quaternary tree data structure at its heart. The proposed algorithm is based on the well-known ESU (FANMOD) motif detection algorithm. The results of experiments on some standard model networks confirm the overall superiority of QuateXelero over two of the fastest existing algorithms, G-Tries and Kavosh. QuateXelero is especially fast in constructing the central data structure of the algorithm from scratch based on the input network. PMID:23874498

  17. An ellipse detection algorithm based on edge classification

    NASA Astrophysics Data System (ADS)

    Yu, Liu; Chen, Feng; Huang, Jianming; Wei, Xiangquan

    2015-12-01

    In order to enhance the speed and accuracy of ellipse detection, an ellipse detection algorithm based on edge classification is proposed. Redundant edge points are removed by serializing edges into point sequences and applying a distance constraint between edge points. Effective classification is achieved using an angle criterion between edge points, which greatly increases the probability that randomly selected edge points fall on the same ellipse. Ellipse fitting accuracy is significantly improved by optimizing the RED algorithm, using Euclidean distance to measure the distance from an edge point to the elliptical boundary. Experimental results show that the method detects ellipses well when edges suffer interference or occlude each other, and that it has higher detection precision and lower time consumption than the RED algorithm.

  18. Texture orientation-based algorithm for detecting infrared maritime targets.

    PubMed

    Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Xu, Wenhai

    2015-05-20

    Infrared maritime target detection is a key technology for maritime target searching systems. However, in infrared maritime images (IMIs) taken under complicated sea conditions, background clutters, such as ocean waves, clouds or sea fog, usually have high intensity that can easily overwhelm the brightness of real targets, which is difficult for traditional target detection algorithms to deal with. To mitigate this problem, this paper proposes a novel target detection algorithm based on texture orientation. This algorithm first extracts suspected targets by analyzing the intersubband correlation between horizontal and vertical wavelet subbands of the original IMI on the first scale. Then self-adaptive wavelet threshold denoising and local singularity analysis of the original IMI are combined to further remove false alarms. Experiments show that compared with traditional algorithms, this algorithm can suppress background clutter much better and realize better single-frame detection for infrared maritime targets. Besides, in order to further guarantee accurate target extraction, the pipeline-filtering algorithm is adopted to eliminate residual false alarms. The high practical value and applicability of this proposed strategy are strongly supported by experimental data acquired under different environmental conditions. PMID:26192503

  19. Lidar detection algorithm for time and range anomalies

    NASA Astrophysics Data System (ADS)

    Ben-David, Avishai; Davidson, Charles E.; Vanderbeek, Richard G.

    2007-10-01

    A new detection algorithm for lidar applications has been developed. The detection is based on hyperspectral anomaly detection that is implemented for time anomaly where the question "is a target (aerosol cloud) present at range R within time t1 to t2" is addressed, and for range anomaly where the question "is a target present at time t within ranges R1 and R2" is addressed. A detection score significantly different in magnitude from the detection scores for background measurements suggests that an anomaly (interpreted as the presence of a target signal in space/time) exists. The algorithm employs an option for a preprocessing stage where undesired oscillations and artifacts are filtered out with a low-rank orthogonal projection technique. The filtering technique adaptively removes the one over range-squared dependence of the background contribution of the lidar signal and also aids visualization of features in the data when the signal-to-noise ratio is low. A Gaussian-mixture probability model for two hypotheses (anomaly present or absent) is computed with an expectation-maximization algorithm to produce a detection threshold and probabilities of detection and false alarm. Results of the algorithm for CO2 lidar measurements of bioaerosol clouds Bacillus atrophaeus (formerly known as Bacillus subtilis niger, BG) and Pantoea agglomerans, Pa (formerly known as Erwinia herbicola, Eh) are shown and discussed.

  1. Respiratory rate detection algorithms by photoplethysmography signal processing.

    PubMed

    Lee, E M; Kim, N H; Trang, N T; Hong, J H; Cha, E J; Lee, T S

    2008-01-01

    Photoplethysmography (PPG) provides clinically meaningful parameters such as heart rate and respiratory rate. In this study, we present three respiratory signal detection algorithms using raw photoplethysmography data generated from a commercial PPG sensor: (1) Min-Max, (2) Peak-to-Peak, and (3) Pulse Shape. As a reference signal, a nasal sensor signal was acquired simultaneously for comparison and analysis. We used two types of moving-average filtering to process the three PPG parameters. In a laboratory experiment, the PPG signals of six subjects were measured while they respired ten, fifteen, and an arbitrary number of times per minute. From the results, the following conclusions were drawn. The Min-Max and Peak-to-Peak algorithms perform better than the Pulse Shape algorithm and can be used to detect respiratory rate; the Pulse Shape algorithm was accurate for subject 4 only. More experimental data are necessary to improve accuracy and reliability. PMID:19162865
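
    A minimal sketch of the Min-Max idea: respiration amplitude-modulates the PPG pulse, so the beat-by-beat max-minus-min amplitude series carries the respiratory rhythm. The sketch below assumes a known, fixed heart rate for beat segmentation and uses a 3-beat moving average plus an FFT peak to read off the rate; the paper's exact segmentation and filtering details are not specified in the abstract.

```python
import numpy as np

def respiratory_rate_min_max(ppg, fs, hr_hz):
    """Estimate respiratory rate (breaths/min) from the beat-by-beat
    min-max amplitude of a PPG signal (illustrative 'Min-Max' variant)."""
    beat = int(round(fs / hr_hz))            # samples per cardiac cycle (assumed fixed)
    n_beats = len(ppg) // beat
    amp = np.array([ppg[i * beat:(i + 1) * beat].max() - ppg[i * beat:(i + 1) * beat].min()
                    for i in range(n_beats)])
    amp = amp - amp.mean()
    # moving-average smoothing of the amplitude series (3-beat window)
    amp = np.convolve(amp, np.ones(3) / 3.0, mode='same')
    # dominant frequency of the series, which is sampled once per beat
    spec = np.abs(np.fft.rfft(amp))
    freqs = np.fft.rfftfreq(len(amp), d=1.0 / hr_hz)
    f_resp = freqs[1:][np.argmax(spec[1:])]  # skip the DC bin
    return f_resp * 60.0
```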

  2. Algorithms for airborne Doppler radar wind shear detection

    NASA Technical Reports Server (NTRS)

    Gillberg, Jeff; Pockrandt, Mitch; Symosek, Peter; Benser, Earl T.

    1992-01-01

    Honeywell has developed algorithms for the detection of wind shear/microburst using airborne Doppler radar. The Honeywell algorithms use three dimensional pattern recognition techniques and the selection of an associated scanning pattern forward of the aircraft. This 'volumetric scan' approach acquires reflectivity, velocity, and spectral width from a three dimensional volume as opposed to the conventional use of a two dimensional azimuthal slice of data at a fixed elevation. The algorithm approach is based on detection and classification of velocity patterns which are indicative of microburst phenomena while minimizing the false alarms due to ground clutter return. Simulation studies of the microburst phenomenon and X-band radar interaction with the microburst have been performed and results of that study are presented. Algorithm performance in detection of both 'wet' and 'dry' microbursts is presented.

  3. Self-contained algorithms to detect communities in networks

    NASA Astrophysics Data System (ADS)

    Castellano, C.; Cecconi, F.; Loreto, V.; Parisi, D.; Radicchi, F.

    2004-03-01

    The investigation of community structures in networks is an important issue in many domains and disciplines. In this paper we present a new class of local and fast algorithms which incorporate a quantitative definition of community. In this way the algorithms for the identification of the community structure become fully self-contained and one does not need additional non-topological information in order to evaluate the accuracy of the results. The new algorithms are tested on artificial and real-world graphs. In particular we show how the new algorithms apply to a network of scientific collaborations both in the unweighted and in the weighted version. Moreover we discuss the applicability of these algorithms to other non-social networks and we present preliminary results about the detection of community structures in networks of interacting proteins.
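
    The "quantitative definition of community" mentioned above can be illustrated with the strong-community criterion from the published version of this work (Radicchi et al.): a subgraph is a community in the strong sense if every member has more links inside the subgraph than outside it. A minimal sketch:

```python
def is_strong_community(adj, group):
    """Strong-community test: every member node must have strictly more
    neighbors inside `group` than outside it. `adj` maps node -> neighbor set."""
    group = set(group)
    for v in group:
        inside = sum(1 for u in adj[v] if u in group)
        outside = len(adj[v]) - inside
        if inside <= outside:
            return False
    return True
```

Because the criterion is purely local, a detection algorithm can evaluate it without any non-topological information, which is what makes the approach self-contained.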

  4. Adaptive clustering algorithm for community detection in complex networks.

    PubMed

    Ye, Zhenqing; Hu, Songnian; Yu, Jun

    2008-10-01

    Community structure is common in various real-world networks; methods or algorithms for detecting such communities in complex networks have attracted great attention in recent years. We introduce an adaptive clustering algorithm capable of extracting modules from complex networks with considerable accuracy and robustness. In this approach, each node in a network acts as an autonomous agent demonstrating flocking behavior, where vertices always travel toward their preferred neighboring groups. An optimal modular structure can emerge from a collection of these active nodes during a self-organization process in which vertices constantly regroup. In addition, we show through intensive evaluation that our algorithm outperforms other competing methods (e.g., the Newman fast algorithm). Applications to three real-world networks demonstrate the ability of our algorithm to find communities that are consistent with the actual organization of these networks. PMID:18999501

  5. Automatic fringe detection algorithm used for moire deflectometry.

    PubMed

    Servin, M; Rodriguez-Vera, R; Carpio, M; Morales, A

    1990-08-01

    An automatic fringe detection algorithm applied to moire deflectometry is presented. This algorithm is based on a set of points linked together, with a behavior similar to that of a rubber band, in which the points are attracted to fit the moire fringes. The collective behavior of these points gives rise to a final state in which they are regularly spaced and aligned along the fringe pattern. The algorithm is dynamic in the sense that it tracks the fringe even when it suffers continuous deformations. Once the rubber band has adapted, the coordinates of its points are obtained and their distances to the starting straight line are found, as required by moire deflectometry. PMID:20567408

  6. SIDRA: a blind algorithm for signal detection in photometric surveys

    NASA Astrophysics Data System (ADS)

    Mislis, D.; Bachelet, E.; Alsubai, K. A.; Bramich, D. M.; Parley, N.

    2016-01-01

    We present the Signal Detection using Random-Forest Algorithm (SIDRA). SIDRA is a detection and classification algorithm based on the Machine Learning technique (Random Forest). The goal of this paper is to show the power of SIDRA for quick and accurate signal detection and classification. We first diagnose the power of the method with simulated light curves and try it on a subset of the Kepler space mission catalogue. We use five classes of simulated light curves (CONSTANT, TRANSIT, VARIABLE, MLENS and EB for constant light curves, transiting exoplanets, variables, microlensing events and eclipsing binaries, respectively) to analyse the power of the method. The algorithm uses four features in order to classify the light curves. The training sample contains 5000 light curves (1000 from each class) and 50 000 random light curves for testing. The total SIDRA success ratio is ≥90 per cent. Furthermore, the success ratio reaches 95-100 per cent for the CONSTANT, VARIABLE, EB and MLENS classes and 92 per cent for the TRANSIT class with a decision probability of 60 per cent. Because the TRANSIT class is the one which fails the most, we run a simultaneous fit using SIDRA and a Box Least Square (BLS)-based algorithm for searching for transiting exoplanets. As a result, our algorithm detects 7.5 per cent more planets than a classic BLS algorithm, with better results for lower signal-to-noise light curves. SIDRA succeeds in catching 98 per cent of the planet candidates in the Kepler sample and fails for 7 per cent of the false-alarm subset. SIDRA promises to be useful for developing a detection algorithm and/or classifier for large photometric surveys such as the future TESS and PLATO exoplanet space missions.
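
    The train-on-simulated-classes workflow can be sketched with scikit-learn's random forest. Everything here is illustrative: the three toy classes, the curve generator, and the four summary features are hypothetical stand-ins (the abstract does not specify SIDRA's four features).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def make_curve(kind, n=200):
    """Toy light curves: noise-only, sinusoidal variable, or box-shaped transit."""
    t = np.linspace(0, 1, n)
    y = rng.normal(0, 0.01, n)
    if kind == 'VARIABLE':
        y += 0.1 * np.sin(2 * np.pi * 4 * t)
    elif kind == 'TRANSIT':
        y[80:100] -= 0.05          # one box-shaped dip
    return y

def features(y):
    """Four summary features per curve (hypothetical stand-ins for SIDRA's four)."""
    return [y.std(), y.min(), np.median(np.abs(y - np.median(y))), y.max() - y.min()]

kinds = ['CONSTANT', 'VARIABLE', 'TRANSIT']
X, labels = [], []
for k in kinds:
    for _ in range(100):           # 100 simulated curves per class for training
        X.append(features(make_curve(k)))
        labels.append(k)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
```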

  7. A Decision Theoretic Approach to Evaluate Radiation Detection Algorithms

    SciTech Connect

    Nobles, Mallory A.; Sego, Landon H.; Cooley, Scott K.; Gosink, Luke J.; Anderson, Richard M.; Hays, Spencer E.; Tardiff, Mark F.

    2013-07-01

    There are a variety of sensor systems deployed at U.S. border crossings and ports of entry that scan for illicit nuclear material. In this work, we develop a framework for comparing the performance of detection algorithms that interpret the output of these scans and determine when secondary screening is needed. We optimize each algorithm to minimize its risk, or expected loss. We measure an algorithm’s risk by considering its performance over a sample, the probability distribution of threat sources, and the consequence of detection errors. While it is common to optimize algorithms by fixing one error rate and minimizing another, our framework allows one to simultaneously consider multiple types of detection errors. Our framework is flexible and easily adapted to many different assumptions regarding the probability of a vehicle containing illicit material and the relative consequences of false positive and false negative errors. Our methods can therefore inform decision makers of the algorithm family and parameter values which best reduce the threat from illicit nuclear material, given their understanding of the environment at any point in time. To illustrate the applicability of our methods, in this paper, we compare the risk from two families of detection algorithms and discuss the policy implications of our results.
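
    The risk criterion can be made concrete with a small sketch (illustrative, not the authors' formulation): expected loss weighs the miss rate by the threat prevalence and miss cost, and the false-alarm rate by the benign prevalence and screening cost, then the operating point minimizing that loss is chosen.

```python
def expected_loss(p_threat, p_fn, p_fp, cost_fn, cost_fp):
    """Bayes risk of a detector:
    P(threat)*P(miss)*C_miss + P(benign)*P(false alarm)*C_false_alarm."""
    return p_threat * p_fn * cost_fn + (1 - p_threat) * p_fp * cost_fp

def best_threshold(roc, p_threat, cost_fn, cost_fp):
    """Pick the operating point minimizing risk from a list of
    (threshold, p_fp, p_fn) tuples describing the detector's ROC."""
    return min(roc, key=lambda t: expected_loss(p_threat, t[2], t[1], cost_fn, cost_fp))
```

Note how the optimum shifts with the assumed consequences: a huge miss cost pushes the threshold down (screen more vehicles), while cheap misses push it up.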

  8. An improved algorithm for pedestrian detection

    NASA Astrophysics Data System (ADS)

    Yousef, Amr; Duraisamy, Prakash; Karim, Mohammad

    2015-03-01

    In this paper we present a technique to detect pedestrians. Histograms of gradients (HOG) and Haar wavelets, with the aid of support vector machine (SVM) and AdaBoost classifiers, show good identification performance on different object classes, including pedestrians. We propose a new shape descriptor derived from the intra-relationship between gradient orientations in a way similar to the HOG. The proposed descriptor consists of two 2-D grids of orientation similarities measured at different offsets. The gradient magnitudes and phases derived from a sliding window with different scales and sizes are used to construct the two 2-D symmetric grids. The first grid measures the co-occurrence of the phases while the other measures the corresponding percentage of gradient magnitudes for the measured orientation similarity. Since the resultant matrices are symmetric, the feature vector is formed by concatenating the upper-diagonal grid coefficients collected in a raster way. Classification is done using an SVM classifier with a radial basis kernel. Experimental results show improved performance compared to current state-of-the-art techniques.
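
    A simplified single-offset version of the two-grid idea can be sketched as follows (the paper's exact binning, offsets, and normalisation are not given in the abstract, so those choices here are assumptions): quantized gradient orientations at a pixel and at a fixed offset index a symmetric co-occurrence grid, a second grid accumulates the matching gradient magnitudes, and the upper triangles are flattened into the feature vector.

```python
import numpy as np

def orientation_cooccurrence(img, bins=8, offset=(0, 1)):
    """Co-occurrence grid of quantized gradient orientations at a fixed offset,
    plus a matching grid of summed gradient magnitudes (simplified descriptor)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)              # orientations in [0, pi)
    q = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    dy, dx = offset
    h, w = q.shape
    cooc = np.zeros((bins, bins))
    magg = np.zeros((bins, bins))
    for y in range(h - dy):
        for x in range(w - dx):
            i, j = sorted((q[y, x], q[y + dy, x + dx]))  # symmetric pairing
            cooc[i, j] += 1
            magg[i, j] += mag[y, x] + mag[y + dy, x + dx]
    iu = np.triu_indices(bins)
    return np.concatenate([cooc[iu], magg[iu]])          # upper-triangle features
```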

  9. The use of microangiography in detecting aberrant vasculature in zebrafish embryos exposed to cadmium.

    PubMed

    Cheng, S H; Chan, P K; Wu, R S

    2001-03-01

    Embryonic vascular patterns in zebrafish (Danio rerio) could be visualised by confocal microscopy coupled with microinjected fluorescent microbeads. This microangiographic technique was adopted here, for the first time, to study the effects of cadmium on cardiovascular development in zebrafish embryos. Zebrafish embryos were incubated in culture medium containing 100 microM cadmium from 5 h post fertilisation (hpf) to 48 hpf. At 48 hpf, embryos were examined for viability and occurrence of malformations. The 100 microM cadmium caused 32.21 +/- 3.65% mortality and 20.33 +/- 4.04% visible malformations in surviving embryos. In the remaining embryos with no visible signs of malformations, further assessments for less obvious abnormalities were performed. Assessments on craniofacial development were made by digital measurements on areas of brains and eyes. Cardiac development was assessed by immunostaining the heart with the antibody MF20 specific for myosin heavy chain. Body lengths of the embryos were also measured. Embryonic development of brains, eyes, hearts and body lengths of visibly healthy embryos in the cadmium treatment group showed no significant difference from the controls. Embryonic vasculature of these visibly healthy embryos was then studied by microinjecting fluorescent microbeads of diameter 0.02 microm into the circulation. All the cadmium treated embryos showed localised vascular defects in the dorsal aortae, segmental and cranial vessels while none of the control embryos showed any aberrant patterns in the networking of the vasculature. Improved image analyses on the anterior regions revealed that cadmium treated embryos had markedly less complex networks of cranial vessels with fewer vessels perfusing the craniofacial regions. The number of branch points in the vascular network was counted. In untreated embryos, there were 135.6 +/- 51 branches in the vasculature of the entire body. In the cadmium treated embryos, there were 64.5 +/- 31 branches.

  10. Automated choroidal neovascularization detection algorithm for optical coherence tomography angiography

    PubMed Central

    Liu, Li; Gao, Simon S.; Bailey, Steven T.; Huang, David; Li, Dengwang; Jia, Yali

    2015-01-01

    Optical coherence tomography angiography has recently been used to visualize choroidal neovascularization (CNV) in participants with age-related macular degeneration. Identification and quantification of CNV area is important clinically for disease assessment. An automated algorithm for CNV area detection is presented in this article. It relies on denoising and a saliency detection model to overcome issues such as projection artifacts and the heterogeneity of CNV. Qualitative and quantitative evaluations were performed on scans of 7 participants. Results from the algorithm agreed well with manual delineation of CNV area. PMID:26417524

  11. Vision-based vehicle detection and tracking algorithm design

    NASA Astrophysics Data System (ADS)

    Hwang, Junyeon; Huh, Kunsoo; Lee, Donghwi

    2009-12-01

    The vision-based vehicle detection in front of an ego-vehicle is regarded as promising for driver assistance as well as for autonomous vehicle guidance. The feasibility of vehicle detection in a passenger car requires accurate and robust sensing performance. A multivehicle detection system based on stereo vision has been developed for better accuracy and robustness. This system utilizes morphological filter, feature detector, template matching, and epipolar constraint techniques in order to detect the corresponding pairs of vehicles. After the initial detection, the system executes the tracking algorithm for the vehicles. The proposed system can detect front vehicles such as the leading vehicle and side-lane vehicles. The position parameters of the vehicles located in front are obtained based on the detection information. The proposed vehicle detection system is implemented on a passenger car, and its performance is verified experimentally.

  12. Algorithms for rapid outbreak detection: a research synthesis.

    PubMed

    Buckeridge, David L; Burkom, Howard; Campbell, Murray; Hogan, William R; Moore, Andrew W

    2005-04-01

    The threat of bioterrorism has stimulated interest in enhancing public health surveillance to detect disease outbreaks more rapidly than is currently possible. To advance research on improving the timeliness of outbreak detection, the Defense Advanced Research Project Agency sponsored the Bio-event Advanced Leading Indicator Recognition Technology (BioALIRT) project beginning in 2001. The purpose of this paper is to provide a synthesis of research on outbreak detection algorithms conducted by academic and industrial partners in the BioALIRT project. We first suggest a practical classification for outbreak detection algorithms that considers the types of information encountered in surveillance analysis. We then present a synthesis of our research according to this classification. The research conducted for this project has examined how to use spatial and other covariate information from disparate sources to improve the timeliness of outbreak detection. Our results suggest that use of spatial and other covariate information can improve outbreak detection performance. We also identified, however, methodological challenges that limited our ability to determine the benefit of using outbreak detection algorithms that operate on large volumes of data. Future research must address challenges such as forecasting expected values in high-dimensional data and generating spatial and multivariate test data sets. PMID:15797000

  13. Interphase Molecular Cytogenetic Detection Rates of Chronic Lymphocytic Leukemia-Specific Aberrations Are Higher in Cultivated Cells Than in Blood or Bone Marrow Smears.

    PubMed

    Alhourani, Eyad; Aroutiounian, Rouben; Harutyunyan, Tigran; Glaser, Anita; Schlie, Cordula; Pohle, Beate; Liehr, Thomas

    2016-08-01

    Banding cytogenetics is still the gold standard in many fields of leukemia diagnostics. However, in chronic lymphocytic leukemia (CLL), GTG-banding results are hampered by a low mitotic rate of the corresponding malignant lymphatic cells. Thus, interphase fluorescence in situ hybridization (iFISH) for the detection of specific cytogenetic aberrations is done nowadays as a supplement to or even instead of banding cytogenetics in many diagnostic laboratories. These iFISH studies can be performed on native blood or bone marrow smears or in nuclei after cultivation and stimulation by a suitable mitogen. As there are only few comparative studies with partially conflicting results for the detection rates of aberrations in cultivated and native cells, this question was studied in 38 CLL cases with known aberrations in 11q22.2, 11q22.3, 12, 13q14.3, 14q32.33, 17p13.1, or 18q21.32. The obtained results implicate that iFISH directly applied on smears is in general less efficient for the detection of CLL-specific genetic abnormalities than for cultivated cells. This also shows that applied cell culture conditions are well suited for malignant CLL cells. Thus, to detect malignant aberrant cells in CLL, cell cultivation and cytogenetic workup should be performed and the obtained material should be subjected to banding cytogenetics and iFISH. PMID:27315825

  14. A TCAS-II Resolution Advisory Detection Algorithm

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar; Narkawicz, Anthony; Chamberlain, James

    2013-01-01

    The Traffic Alert and Collision Avoidance System (TCAS) is a family of airborne systems designed to reduce the risk of mid-air collisions between aircraft. TCAS II, the current generation of TCAS devices, provides resolution advisories that direct pilots to maintain or increase vertical separation when aircraft distance and time parameters are beyond designed system thresholds. This paper presents a mathematical model of the TCAS II Resolution Advisory (RA) logic that assumes accurate aircraft state information. Based on this model, an algorithm for RA detection is also presented. This algorithm is analogous to a conflict detection algorithm, but instead of predicting loss of separation, it predicts resolution advisories. It has been formally verified that for a kinematic model of aircraft trajectories, this algorithm completely and correctly characterizes all encounter geometries between two aircraft that lead to a resolution advisory within a given lookahead time interval. The RA detection algorithm proposed in this paper is a fundamental component of a NASA sense and avoid concept for the integration of Unmanned Aircraft Systems in civil airspace.
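
    The conflict-detection analogy can be sketched in miniature. This is emphatically not the verified TCAS II RA logic: it is a generic conflict-style predictor on straight-line trajectories with hypothetical range/altitude thresholds, shown only to illustrate "predict over a lookahead interval whether separation parameters cross a threshold".

```python
def ra_detection(s0, v0, s1, v1, lookahead, horiz_thr, vert_thr, dt=1.0):
    """Step two straight-line aircraft trajectories forward and return the first
    time (within `lookahead`) at which horizontal range and vertical separation
    are both below threshold, or None if no such time exists."""
    t = 0.0
    while t <= lookahead:
        dx = (s0[0] + v0[0] * t) - (s1[0] + v1[0] * t)
        dy = (s0[1] + v0[1] * t) - (s1[1] + v1[1] * t)
        dz = (s0[2] + v0[2] * t) - (s1[2] + v1[2] * t)
        if (dx * dx + dy * dy) ** 0.5 < horiz_thr and abs(dz) < vert_thr:
            return t
        t += dt
    return None
```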

  15. A Generative Statistical Algorithm for Automatic Detection of Complex Postures

    PubMed Central

    Amit, Yali; Biron, David

    2015-01-01

    This paper presents a method for automated detection of complex (non-self-avoiding) postures of the nematode Caenorhabditis elegans and its application to analyses of locomotion defects. Our approach is based on progressively detailed statistical models that enable detection of the head and the body even in cases of severe coilers, where data from traditional trackers is limited. We restrict the input available to the algorithm to a single digitized frame, such that manual initialization is not required and the detection problem becomes embarrassingly parallel. Consequently, the proposed algorithm does not propagate detection errors and naturally integrates in a “big data” workflow used for large-scale analyses. Using this framework, we analyzed the dynamics of postures and locomotion of wild-type animals and mutants that exhibit severe coiling phenotypes. Our approach can readily be extended to additional automated tracking tasks such as tracking pairs of animals (e.g., for mating assays) or different species. PMID:26439258

  16. Improved algorithm for quantum separability and entanglement detection

    SciTech Connect

    Ioannou, L.M.; Ekert, A.K.; Travaglione, B.C.; Cheung, D.

    2004-12-01

    Determining whether a quantum state is separable or entangled is a problem of fundamental importance in quantum information science. It has recently been shown that this problem is NP-hard, suggesting that an efficient, general solution does not exist. There is a highly inefficient 'basic algorithm' for solving the quantum separability problem which follows from the definition of a separable state. By exploiting specific properties of the set of separable states, we introduce a classical algorithm that solves the problem significantly faster than the 'basic algorithm', allowing a feasible separability test where none previously existed, e.g., in 3x3-dimensional systems. Our algorithm also provides a unique tool in the experimental detection of entanglement.

  17. An Adaptive Immune Genetic Algorithm for Edge Detection

    NASA Astrophysics Data System (ADS)

    Li, Ying; Bai, Bendu; Zhang, Yanning

    An adaptive immune genetic algorithm (AIGA) based on a cost minimization technique for edge detection is proposed. The proposed AIGA recommends the use of adaptive probabilities of crossover, mutation and immune operation, and a geometric annealing schedule in the immune operator, to realize the twin goals of maintaining diversity in the population and sustaining a fast convergence rate when solving complex problems such as edge detection. Furthermore, AIGA can effectively exploit prior knowledge and information about the local edge structure in the edge image to make vaccines, which gives AIGA much better local search ability than the canonical genetic algorithm. Experimental results on gray-scale images show that the proposed algorithm performs well in terms of the quality of the final edge image, the rate of convergence and robustness to noise.
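
    The abstract does not give the adaptive-probability formulas, so as an assumed illustration here is one common scheme for fitness-dependent rates (Srinivas-Patnaik style): individuals above average fitness get rates scaled down toward zero at the population maximum, preserving good solutions while still perturbing poor ones.

```python
def adaptive_rates(f, f_avg, f_max, k1=1.0, k2=0.5):
    """Fitness-dependent crossover/mutation probabilities (illustrative scheme,
    not necessarily the paper's): high-fitness individuals get lower rates."""
    if f >= f_avg and f_max > f_avg:
        scale = (f_max - f) / (f_max - f_avg)
        return k1 * scale, k2 * scale
    return k1, k2          # below-average individuals keep the full rates
```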

  18. The Effect of Algorithms on Copy Number Variant Detection

    PubMed Central

    Ely, Benjamin; Chi, Peter; Wang, Kenneth; Raskind, Wendy H.; Kim, Sulgi; Brkanac, Zoran; Yu, Chang-En

    2010-01-01

    Background The detection of copy number variants (CNVs) and the results of CNV-disease association studies rely on how CNVs are defined, and because array-based technologies can only infer CNVs, CNV-calling algorithms can produce vastly different findings. Several authors have noted the large-scale variability between CNV-detection methods, as well as the substantial false positive and false negative rates associated with those methods. In this study, we use variations of four common algorithms for CNV detection (PennCNV, QuantiSNP, HMMSeg, and cnvPartition) and two definitions of overlap (any overlap and an overlap of at least 40% of the smaller CNV) to illustrate the effects of varying algorithms and definitions of overlap on CNV discovery. Methodology and Principal Findings We used a 56 K Illumina genotyping array enriched for CNV regions to generate hybridization intensities and allele frequencies for 48 Caucasian schizophrenia cases and 48 age-, ethnicity-, and gender-matched control subjects. No algorithm found a difference in CNV burden between the two groups. However, the total number of CNVs called ranged from 102 to 3,765 across algorithms. The mean CNV size ranged from 46 kb to 787 kb, and the average number of CNVs per subject ranged from 1 to 39. The number of novel CNVs not previously reported in normal subjects ranged from 0 to 212. Conclusions and Significance Motivated by the availability of multiple publicly available genome-wide SNP arrays, investigators are conducting numerous analyses to identify putative additional CNVs in complex genetic disorders. However, the number of CNVs identified in array-based studies, and whether these CNVs are novel or valid, will depend on the algorithm(s) used. Thus, given the variety of methods used, there will be many false positives and false negatives. Both guidelines for the identification of CNVs inferred from high-density arrays and the establishment of a gold standard for validation of CNVs are needed.
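
    The two overlap definitions used in the study are simple interval tests; a minimal sketch (intervals treated as half-open (start, end) pairs, an assumed convention):

```python
def overlaps(a, b, min_frac=0.0):
    """CNV overlap under the study's two definitions: any overlap (min_frac=0)
    or an overlap of at least `min_frac` of the smaller call (0.4 for the 40% rule)."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    ov = max(0, hi - lo)
    if min_frac == 0.0:
        return ov > 0
    smaller = min(a[1] - a[0], b[1] - b[0])
    return ov >= min_frac * smaller
```

Tightening `min_frac` from 0 to 0.4 is one reason two calling pipelines can report very different concordance on the same samples.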

  19. Staff line detection and revision algorithm based on subsection projection and correlation algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Yin-xian; Yang, Ding-li

    2013-03-01

    Staff line detection plays a key role in OMR technology and is the precondition of subsequent segmentation and recognition of music sheets. For the phenomena of horizontal inclination and curvature of staff lines and vertical inclination of the image, which often occur in music scores, an improved approach based on subsection projection is put forward to realize the detection of original staff lines and their revision, in an effort to implement staff line detection more successfully. Experimental results show the presented algorithm can detect and revise staff lines quickly and effectively.

  20. Information dynamics algorithm for detecting communities in networks

    NASA Astrophysics Data System (ADS)

    Massaro, Emanuele; Bagnoli, Franco; Guazzini, Andrea; Lió, Pietro

    2012-11-01

    The problem of community detection is relevant in many scientific disciplines, from social science to statistical physics. Given the impact of community detection in many areas, such as psychology and the social sciences, we have addressed the issue of modifying existing well-performing algorithms by incorporating elements of the domain application fields, i.e. domain-inspired algorithms. We have focused on a psychology- and social-network-inspired approach which may be useful for further strengthening the link between social network studies and the mathematics of community detection. Here we introduce a community-detection algorithm derived from van Dongen's Markov Cluster algorithm (MCL) [4] by considering network nodes as agents capable of taking decisions. In this framework we have introduced a memory factor to mimic a typical human behavior, the oblivion effect. The method is based on information diffusion and includes a non-linear processing phase. We test our method on two classical community benchmarks and on computer-generated networks with known community structure. Our approach has three important features: the capacity to detect overlapping communities, the capability of identifying communities from an individual point of view, and the fine tuning of community detectability with respect to prior knowledge of the data. Finally we discuss how to use a Shannon entropy measure for parameter estimation in complex networks.
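
    For readers unfamiliar with the base method, the core MCL iteration from which this algorithm is derived alternates expansion (matrix squaring, simulating flow) with inflation (entrywise powering plus column renormalisation, strengthening strong flows). The sketch below is plain MCL without the paper's memory factor or non-linear processing phase:

```python
import numpy as np

def mcl(adj, inflation=2.0, iters=50):
    """Markov Cluster iteration on an adjacency matrix; returns clusters as the
    nonzero index sets of the attractor rows of the converged matrix."""
    m = np.asarray(adj, float) + np.eye(len(adj))   # self-loops stabilise MCL
    m /= m.sum(axis=0)                              # make columns stochastic
    for _ in range(iters):
        m = m @ m                                   # expansion
        m = m ** inflation                          # inflation
        m /= m.sum(axis=0)
    clusters = {}
    for row in m:
        members = frozenset(np.nonzero(row > 1e-6)[0])
        if members:
            clusters[members] = True                # dict keys dedupe clusters
    return list(clusters)
```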

  1. Plagiarism Detection Algorithm for Source Code in Computer Science Education

    ERIC Educational Resources Information Center

    Liu, Xin; Xu, Chan; Ouyang, Boyu

    2015-01-01

    Nowadays, computer programming is getting more necessary in the course of program design in college education. However, the trick of plagiarizing plus a little modification exists in some students' homework. It is not easy for teachers to judge whether source code has been plagiarized or not. Traditional detection algorithms cannot fit this…

  2. DDoS Attack Detection Algorithms Based on Entropy Computing

    NASA Astrophysics Data System (ADS)

    Li, Liying; Zhou, Jianying; Xiao, Ning

    Distributed Denial of Service (DDoS) attacks pose a severe threat to the Internet. It is difficult to find the exact signature of an attack. Moreover, it is hard to distinguish whether an unusually high volume of traffic is caused by an attack or by a huge number of users occasionally accessing the target machine at the same time. The entropy detection method is an effective way to detect DDoS attacks. It is mainly used to calculate the distribution randomness of some attributes in the network packets' headers. In this paper, we focus on detection technology for DDoS attacks. We improve the previous entropy detection algorithm and propose two enhanced detection methods based on cumulative entropy and time, respectively. Experiment results show that these methods lead to more accurate and effective DDoS detection.
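
    The basic entropy check can be sketched as follows (illustrative, not the paper's enhanced cumulative-entropy variant): compute the Shannon entropy of a header attribute, such as the source IP, over a traffic window and flag windows whose entropy deviates from a learned baseline; the baseline and margin values below are hypothetical.

```python
import math
from collections import Counter

def entropy(items):
    """Shannon entropy (bits) of an attribute's empirical distribution in a window."""
    counts = Counter(items)
    n = len(items)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def ddos_suspect(window_src_ips, baseline=3.0, margin=1.0):
    """Flag a window whose source-IP entropy deviates from the baseline by more
    than `margin` bits: spoofed floods raise it, single-source floods lower it."""
    return abs(entropy(window_src_ips) - baseline) > margin
```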

  3. An Efficient Conflict Detection Algorithm for Packet Filters

    NASA Astrophysics Data System (ADS)

    Lee, Chun-Liang; Lin, Guan-Yu; Chen, Yaw-Chung

    Packet classification is essential for supporting advanced network services such as firewalls, quality-of-service (QoS), virtual private networks (VPN), and policy-based routing. The rules that routers use to classify packets are called packet filters. If two or more filters overlap, a conflict occurs and leads to ambiguity in packet classification. This study proposes an algorithm that can efficiently detect and resolve filter conflicts using tuple based search. The time complexity of the proposed algorithm is O(nW+s), and the space complexity is O(nW), where n is the number of filters, W is the number of bits in a header field, and s is the number of conflicts. This study uses the synthetic filter databases generated by ClassBench to evaluate the proposed algorithm. Simulation results show that the proposed algorithm can achieve better performance than existing conflict detection algorithms both in time and space, particularly for databases with large numbers of conflicts.
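
    The notion of a filter conflict can be illustrated with a brute-force pairwise check (not the paper's tuple-based O(nW+s) algorithm): two filters, each a list of per-field (low, high) ranges, conflict when they overlap in every field yet neither fully contains the other, so the matching order becomes ambiguous.

```python
def ranges_overlap(a, b):
    """True if inclusive ranges a=(lo,hi) and b=(lo,hi) intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

def is_conflict(f, g):
    """Filters conflict when they overlap in every field but neither filter
    is a subset of the other (nested filters are resolved by priority)."""
    overlap = all(ranges_overlap(a, b) for a, b in zip(f, g))
    f_in_g = all(b[0] <= a[0] and a[1] <= b[1] for a, b in zip(f, g))
    g_in_f = all(a[0] <= b[0] and b[1] <= a[1] for a, b in zip(f, g))
    return overlap and not (f_in_g or g_in_f)
```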

  4. A bioinspired collision detection algorithm for VLSI implementation

    NASA Astrophysics Data System (ADS)

    Cuadri, J.; Linan, G.; Stafford, R.; Keil, M. S.; Roca, E.

    2005-06-01

    In this paper a bioinspired algorithm for collision detection is proposed, based on previous models of the locust (Locusta migratoria) visual system reported by F.C. Rind and her group, in the University of Newcastle-upon-Tyne. The algorithm is suitable for VLSI implementation in standard CMOS technologies as a system-on-chip for automotive applications. The working principle of the algorithm is to process a video stream that represents the current scenario, and to fire an alarm whenever an object approaches on a collision course. Moreover, it establishes a scale of warning states, from no danger to collision alarm, depending on the activity detected in the current scenario. In the worst case, the minimum time before collision at which the model fires the collision alarm is 40 msec (1 frame before, at 25 frames per second). Since the average time to successfully fire an airbag system is 2 msec, even in the worst case, this algorithm would be very helpful to more efficiently arm the airbag system, or even take some kind of collision avoidance countermeasures. Furthermore, two additional modules have been included: a "Topological Feature Estimator" and an "Attention Focusing Algorithm". The former takes into account the shape of the approaching object to decide whether it is a person, a road line or a car. This helps to take more adequate countermeasures and to filter false alarms. The latter centres the processing power into the most active zones of the input frame, thus saving memory and processing time resources.

  5. Toward an Objective Enhanced-V Detection Algorithm

    NASA Technical Reports Server (NTRS)

    Brunner, Jason; Feltz, Wayne; Moses, John; Rabin, Robert; Ackerman, Steven

    2007-01-01

    The area of coldest cloud tops above thunderstorms sometimes has a distinct V or U shape. This pattern, often referred to as an "enhanced-V" signature, has been observed to occur during and preceding severe weather in previous studies. This study describes an algorithmic approach to objectively detect enhanced-V features in observations from the Geostationary Operational Environmental Satellite and Low Earth Orbit instruments. The methodology consists of cross-correlation statistics of pixels and thresholds on quantitative enhanced-V parameters. The effectiveness of the enhanced-V detection method will be examined using Geostationary Operational Environmental Satellite, MODerate-resolution Imaging Spectroradiometer, and Advanced Very High Resolution Radiometer image data from case studies in the 2003-2006 seasons. The main goal of this study is to develop an objective enhanced-V detection algorithm for future operational implementation with future sensors, such as GOES-R.

  6. Detection algorithms for ultrawideband foliage-penetration radar

    NASA Astrophysics Data System (ADS)

    Nguyen, Lam H.; Kapoor, Ravinder; Sichina, Jeffrey

    1997-06-01

    The Army Research Laboratory (ARL), as part of its mission-funded exploratory development program, has been evaluating the use of a low-frequency, ultra-wideband imaging radar to detect tactical vehicles concealed by foliage. An instrumentation-grade measurement system has been designed and implemented by ARL. Extensive testing of this radar over the preceding 18 months has led to the establishment of a significant and unique database of radar imagery. We are currently using these data to develop target detection algorithms that can aid an operator in separating vehicles of interest from the background. This paper provides early findings from the algorithm development effort. To date, our efforts have concentrated on identifying computationally simple strategies for canvassing large areas for likely target occurrences, i.e., prescreening of the imagery. Phenomenologically sound features are being evaluated for discrimination capability. Performance assessments, in terms of receiver operating characteristics, detail detection capabilities at various false alarm rates.
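
    A computationally simple prescreening stage of this kind is often built on a cell-averaging CFAR statistic; the detector below is a generic illustration with invented parameters, not the ARL algorithm itself.

```python
# A minimal 1-D cell-averaging CFAR prescreener (illustrative assumption; the
# ARL paper does not specify its prescreening statistic). A cell is flagged
# when it exceeds the local background mean, estimated from a window that
# skips guard cells around the cell under test, by a fixed factor.

def cfar_prescreen(signal, guard=2, window=8, factor=3.0):
    detections = []
    n = len(signal)
    for i in range(n):
        background = []
        for j in range(max(0, i - guard - window), min(n, i + guard + window + 1)):
            if abs(j - i) > guard:
                background.append(signal[j])
        if background and signal[i] > factor * (sum(background) / len(background)):
            detections.append(i)
    return detections

# A flat clutter background of 1.0 with a strong return at index 20.
scene = [1.0] * 40
scene[20] = 10.0
print(cfar_prescreen(scene))  # -> [20]
```

    Because the threshold adapts to the local clutter estimate, the false alarm rate stays roughly constant across areas of differing background level, which is what makes the scheme cheap enough for canvassing large images.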

  7. Advanced defect detection algorithm using clustering in ultrasonic NDE

    NASA Astrophysics Data System (ADS)

    Gongzhang, Rui; Gachagan, Anthony

    2016-02-01

    A range of materials used in industry exhibit scattering properties that limit ultrasonic NDE. Many algorithms have been proposed to enhance defect detection, such as the well-known Split Spectrum Processing (SSP) technique. Scattering noise usually cannot be fully removed, and the remaining noise can easily be confused with real feature signals, becoming artefacts during the image interpretation stage. This paper presents an advanced algorithm to further reduce the influence of artefacts remaining in A-scan data after processing with a conventional defect detection algorithm. The raw A-scan data can be acquired from either traditional single-transducer or phased array configurations. The proposed algorithm uses unsupervised machine learning to cluster segmented defect signals from pre-processed A-scans into different classes. The distinction and similarity between each class and an ensemble of randomly selected noise segments can be observed by applying a classification algorithm. Each class is then labelled as "legitimate reflector" or "artefact" based on this observation, and the expected probability of detection (PoD) and probability of false alarm (PFA) are determined. To facilitate data collection and validate the proposed algorithm, a 5 MHz linear array transducer is used to collect A-scans from both austenitic steel and Inconel samples. Each pulse-echo A-scan is pre-processed using SSP, and the subsequent application of the proposed clustering algorithm provides an additional reduction in PFA while maintaining PoD for both samples, compared with SSP alone.
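
    The cluster-then-label step can be sketched in miniature. This toy uses a one-dimensional k-means on segment energies and a hypothetical noise-ensemble mean to label each class; the paper's actual features and classifier are not specified here.

```python
import random

# Toy version of the clustering idea: segment-level features are clustered
# with 1-D k-means, and a cluster is labelled an artefact when its centroid
# lies close to the mean of an ensemble of known noise segments.

def kmeans_1d(values, k=2, iters=50, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda c: abs(v - centroids[c]))
            clusters[idx].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Segment "energies": most are noise-like (~1), a few defect-like (~5).
segments = [1.1, 0.9, 1.0, 1.2, 5.2, 4.8, 0.8, 5.0]
noise_mean = 1.0  # mean energy of randomly selected noise segments

centroids, clusters = kmeans_1d(segments, k=2)
for c, members in zip(centroids, clusters):
    label = "artefact" if abs(c - noise_mean) < 1.0 else "legitimate reflector"
    print(f"centroid {c:.2f}: {label}, {len(members)} segments")
```

    Segments whose cluster is indistinguishable from the noise ensemble are discarded, which is the mechanism by which PFA drops while PoD is preserved.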

  8. Common Pharmacophore Identification Using Frequent Clique Detection Algorithm

    PubMed Central

    Podolyan, Yevgeniy; Karypis, George

    2008-01-01

    The knowledge of a pharmacophore, the 3D arrangement of features in a biologically active molecule that is responsible for its pharmacological activity, can help in the search for and design of a new or better drug acting upon the same or a related target. In this paper we describe two new algorithms based on frequent clique detection in molecular graphs. The first algorithm mines all frequent cliques that are present in at least one of the conformers of each molecule (or of a portion of all molecules). The second algorithm exploits the similarities among the different conformers of the same molecule and achieves an order-of-magnitude performance speedup over the first. Both algorithms are guaranteed to find all common pharmacophores in the dataset, which is confirmed by validation on a set of molecules for which pharmacophores have been determined experimentally. In addition, these algorithms are able to scale to datasets with an arbitrarily large number of conformers per molecule and to identify multiple ligand binding modes or multiple binding sites of the target.
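
    The idea of a frequent clique can be sketched under a simplifying assumption: each conformer is a graph whose nodes are pharmacophore feature labels and whose edges mark feature pairs at a compatible distance (the paper's actual geometric encoding is richer). A clique is "frequent" when at least one conformer of every molecule contains it.

```python
from itertools import combinations

# Enumerate all cliques up to a size bound in a small graph, then intersect
# the per-molecule clique sets (union over each molecule's conformers).

def cliques_up_to(edges, max_size=3):
    nodes = sorted({n for e in edges for n in e})
    found = set()
    for size in range(1, max_size + 1):
        for combo in combinations(nodes, size):
            if all(frozenset(p) in edges for p in combinations(combo, 2)):
                found.add(frozenset(combo))
    return found

def frequent_cliques(molecules, max_size=3):
    per_molecule = []
    for conformers in molecules:
        cliques = set()
        for edges in conformers:
            cliques |= cliques_up_to(edges, max_size)
        per_molecule.append(cliques)
    return set.intersection(*per_molecule)

# Feature labels: D=donor, A=acceptor, R=aromatic ring.
mol1 = [{frozenset("DA"), frozenset("AR"), frozenset("DR")}]   # one conformer
mol2 = [{frozenset("DA")},                                     # conformer 1
        {frozenset("DA"), frozenset("AR"), frozenset("DR")}]   # conformer 2
common = frequent_cliques([mol1, mol2])
print(sorted(max(common, key=len)))  # -> ['A', 'D', 'R']
```

    The exhaustive enumeration here is exponential; the paper's contribution is mining such cliques efficiently and exploiting conformer similarity to avoid redundant work.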

  9. SETI Pulse Detection Algorithm: Analysis of False-alarm Rates

    NASA Technical Reports Server (NTRS)

    Levitt, B. K.

    1983-01-01

    Some earlier work by the Search for Extraterrestrial Intelligence (SETI) Science Working Group (SWG) on the derivation of spectrum analyzer thresholds for a pulse detection algorithm, based on an analysis of false alarm rates, is extended. The algorithm previously analyzed was intended to detect a finite sequence of i periodically spaced pulses that did not necessarily occupy the entire observation interval. This algorithm would recognize the presence of such a signal only if all i received pulse powers exceeded a threshold T(i); these thresholds were selected to achieve a desired false alarm rate, independent of i. To simplify the analysis, it was assumed that the pulses were synchronous with the spectrum sample times. This analysis extends the earlier effort to include infinite and/or asynchronous pulse trains. Furthermore, to decrease the possibility of missing an extraterrestrial intelligence signal, the algorithm was modified to detect a pulse train even if some of the received pulse powers fall below the threshold. The analysis employs geometrical arguments that make it conceptually easy to incorporate boundary conditions imposed on the derivation of the false alarm rates. While the exact results can be somewhat complex, simple closed-form approximations are derived that produce a negligible loss of accuracy.
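
    The threshold selection can be made concrete under a standard assumption that is not necessarily the SWG's exact noise model: if the noise power in each spectrum bin is unit-mean exponentially distributed, a single bin exceeds T by chance with probability exp(-T), so i independent pulses all exceed T(i) with probability exp(-i*T(i)). Holding that false alarm probability fixed, independent of i, gives T(i) = -ln(p_fa)/i.

```python
import math

# Thresholds T(i) chosen so the joint false-alarm probability of an
# i-pulse train is the same for every i (illustrative noise model).

def pulse_threshold(i, p_fa):
    return -math.log(p_fa) / i

p_fa = 1e-12
for i in (1, 2, 4, 8):
    t = pulse_threshold(i, p_fa)
    # Verify: the chance that all i pulse powers exceed t is exactly p_fa.
    joint = math.exp(-i * t)
    print(f"i={i}: T(i)={t:.2f}, joint false-alarm prob={joint:.1e}")
```

    Note that T(i) decreases with i: a longer pulse train can tolerate a lower per-pulse threshold at the same false alarm rate, which is the quantity the paper's geometrical analysis refines for asynchronous and infinite trains.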

  10. Density shrinking algorithm for community detection with path based similarity

    NASA Astrophysics Data System (ADS)

    Wu, Jianshe; Hou, Yunting; Jiao, Yang; Li, Yong; Li, Xiaoxiao; Jiao, Licheng

    2015-09-01

    Community structure is ubiquitous in real-world complex networks, and finding the communities is key to understanding the functions of those networks. Much work has been done on designing algorithms for community detection, but it remains a challenge in the field. Traditional modularity optimization suffers from the resolution limit problem. Recent research shows that combining a density-based technique with modularity optimization can overcome the resolution limit, and an efficient algorithm named DenShrink was provided. The main procedure of DenShrink is to repeatedly find and merge micro-communities (in the broad sense) into super nodes until they can no longer be merged. The analysis in this paper shows that if this procedure is replaced by finding and merging only dense pairs, both detection accuracy and runtime improve noticeably; an improved density-based algorithm, ImDS, is therefore provided. Because of their time complexity, path-based similarity indexes are difficult to apply in high-performance community detection. In this paper, the path-based Katz index is simplified and used in the ImDS algorithm.
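
    The Katz index underlying the similarity measure can be sketched in its standard truncated form (the paper's specific simplification is not reproduced here): S[i][j] is the sum over path lengths l = 1..L of beta^l * (A^l)[i][j], so node pairs joined by many short paths score highest.

```python
# Truncated Katz similarity via repeated matrix multiplication (pure Python,
# dense adjacency lists of lists; fine for small illustrative graphs).

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def katz(adj, beta=0.1, max_len=3):
    n = len(adj)
    s = [[0.0] * n for _ in range(n)]
    power = [row[:] for row in adj]   # A^1
    weight = beta
    for _ in range(max_len):
        for i in range(n):
            for j in range(n):
                s[i][j] += weight * power[i][j]
        power = matmul(power, adj)    # next power of A
        weight *= beta
    return s

# A 4-node path graph 0-1-2-3: nodes 0 and 2 share no edge but are linked
# by a length-2 path, so their Katz similarity is positive.
adj = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 0]]
print(round(katz(adj)[0][2], 4))  # -> 0.01
```

    The geometric damping beta < 1 is what makes longer paths contribute less, and truncating at a small L is the kind of cost-cutting simplification the ImDS setting requires.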

  11. Clever eye algorithm for target detection of remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Geng, Xiurui; Ji, Luyan; Sun, Kang

    2016-04-01

    Target detection algorithms for hyperspectral remote sensing imagery, such as the two most commonly used detectors, constrained energy minimization (CEM) and the matched filter (MF), can usually be expressed as the inner product between a weight filter (or detector) and a pixel vector. CEM and MF have the same expression except that MF first requires data centralization. However, this difference leads to different target detection results; that is, the selection of the data origin directly affects the performance of the detector. Does there, then, exist a data origin other than the zero point and the mean vector that yields better target detection performance? This is a very meaningful question in the field of target detection, but it has not yet received enough attention. In this study, we propose a novel objective function that introduces the data origin as an additional variable; its solution corresponds to the data origin with the minimal output energy. The process of finding the optimal solution can be vividly regarded as a clever eye automatically searching for the best observing position and direction in the feature space, corresponding to the largest separation between target and background. This new algorithm is therefore referred to as the clever eye algorithm (CE). Based on the Sherman-Morrison formula and the gradient ascent method, CE derives the optimal target detection result in terms of energy. Experiments with both synthetic and real hyperspectral data have verified the effectiveness of our method.
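
    The CEM detector that CE generalizes can be written out directly. This two-band sketch shows the standard filter w = R^-1 d / (d' R^-1 d); the correlation matrix R is where the choice of data origin enters, since shifting every pixel (as MF does with the data mean) changes R and hence the filter.

```python
# Minimal CEM on 2-band data (pure Python, 2x2 inverse written out by hand).
# The filter is constrained so that the target vector scores exactly 1 while
# the average output energy over the scene is minimized.

def correlation_matrix(pixels):
    n = len(pixels)
    return [[sum(p[i] * p[j] for p in pixels) / n for j in range(2)]
            for i in range(2)]

def cem_filter(pixels, target):
    r = correlation_matrix(pixels)
    det = r[0][0] * r[1][1] - r[0][1] * r[1][0]
    r_inv = [[r[1][1] / det, -r[0][1] / det],
             [-r[1][0] / det, r[0][0] / det]]
    rd = [r_inv[0][0] * target[0] + r_inv[0][1] * target[1],
          r_inv[1][0] * target[0] + r_inv[1][1] * target[1]]
    scale = target[0] * rd[0] + target[1] * rd[1]
    return [rd[0] / scale, rd[1] / scale]

pixels = [(1.0, 2.0), (1.2, 1.8), (0.9, 2.1), (3.0, 0.5)]  # last pixel: target-like
target = (3.0, 0.5)
w = cem_filter(pixels, target)
scores = [w[0] * p[0] + w[1] * p[1] for p in pixels]
print(scores)  # the target-like pixel scores ~1, background pixels far less
```

    CE's contribution is to treat the origin of this whole computation as a free variable and optimize it, rather than fixing it at zero (CEM) or at the data mean (MF).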

  12. Polarization Aberrations

    NASA Technical Reports Server (NTRS)

    Mcguire, James P., Jr.; Chipman, Russell A.

    1990-01-01

    The analysis of the polarization characteristics displayed by optical systems can be divided into two categories: geometrical and physical. Geometrical analysis calculates the change in polarization of a wavefront between pupils in an optical instrument. Physical analysis propagates the polarized fields wherever the geometrical analysis is not valid, i.e., near the edges of stops, near images, in anisotropic media, etc. Polarization aberration theory provides a starting point for geometrical design and facilitates subsequent optimization. The polarization aberrations described arise from differences in the transmitted (or reflected) amplitudes and phases at interfaces. The polarization aberration matrix (PAM) is calculated for isotropic rotationally symmetric systems through fourth order and includes the interface phase, amplitude, linear diattenuation, and linear retardance aberrations. The exponential form of the Jones matrices used is discussed, and the PAM in Jones matrix form is introduced. The exact calculation of polarization aberrations through polarization ray tracing is described. The report is divided into three sections: I. Rotationally Symmetric Optical Systems; II. Tilted and Decentered Optical Systems; and III. Polarization Analysis of LIDARs.

  13. Detection of parametric curves based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Li, Haimin; Wu, Chengke

    1998-09-01

    The detection of curves with special shapes has attracted great interest in the fields of image processing and recognition. Commonly used algorithms such as the Hough Transform and the Generalized Radon Transform are global search methods: as the number of parameters increases, their efficiency decreases rapidly because of the expansion of the parameter space. To address this problem, a new method based on a Genetic Algorithm is presented, combined with a local search procedure to improve its performance. Experimental results show that the proposed method greatly improves search efficiency.
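
    The population-search idea can be sketched for circle detection. This is a plain GA with elitist selection over (cx, cy, r) chromosomes; the encoding, operators, parameters, and the omitted local search step are illustrative assumptions, not the paper's.

```python
import math
import random

# Toy GA for circle detection: fitness counts edge points lying near the
# candidate circle; elitism guarantees the best fitness never decreases.

random.seed(1)
points = [(5 + 3 * math.cos(t / 10.0), 4 + 3 * math.sin(t / 10.0))
          for t in range(63)]  # points on a circle: center (5, 4), radius 3

def fitness(c):
    cx, cy, r = c
    return sum(1 for x, y in points
               if abs(math.hypot(x - cx, y - cy) - r) < 0.2)

def mutate(c):
    return tuple(v + random.uniform(-0.5, 0.5) for v in c)

pop = [(random.uniform(0, 10), random.uniform(0, 10), random.uniform(1, 5))
       for _ in range(30)]
history = []
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    history.append(fitness(pop[0]))       # best-so-far this generation
    survivors = pop[:10]                  # elitism: keep the top third
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(pop, key=fitness)
print("best circle:", tuple(round(v, 1) for v in best), "fitness:", fitness(best))
```

    Unlike a Hough accumulator, the cost here does not grow with the resolution of a 3-D parameter grid, which is the efficiency argument the paper makes for higher-dimensional curve families.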

  14. Multi-Objective Community Detection Based on Memetic Algorithm

    PubMed Central

    2015-01-01

    Community detection has drawn a lot of attention, as it can provide invaluable help in understanding the function and visualizing the structure of networks. Since single-objective optimization methods have intrinsic drawbacks in identifying multiple significant community structures, some methods formulate community detection as a multi-objective problem and adopt population-based evolutionary algorithms to obtain multiple community structures. Evolutionary algorithms have strong global search ability but have difficulty locating local optima efficiently. In this study, in order to identify multiple significant community structures more effectively, a multi-objective memetic algorithm for community detection is proposed by combining a multi-objective evolutionary algorithm with a local search procedure. The local search procedure is designed by addressing three issues. First, nondominated solutions generated by evolutionary operations and solutions in the dominant population are set as initial individuals for the local search procedure. Then, a new direction vector, named the pseudonormal vector, is proposed to integrate the two objective functions into a single fitness function. Finally, a network-specific local search strategy based on the label propagation rule is extended to search for local optimal solutions efficiently. Extensive experiments on both artificial and real-world networks evaluate the proposed method from three aspects. First, experiments on the influence of the local search procedure demonstrate that it speeds up convergence to better partitions and makes the algorithm more stable. Second, comparisons with a set of classic community detection methods illustrate that the proposed method finds single partitions effectively. Finally, the method is applied to identify hierarchical structures of networks, which is beneficial for analyzing networks at multiple resolution levels. PMID:25932646

  15. Detection Algorithms of the Seismic Alert System of Mexico (SASMEX)

    NASA Astrophysics Data System (ADS)

    Cuellar Martinez, A.; Espinosa Aranda, J.; Ramos Perez, S.; Ibarrola Alvarez, G.; Zavala Guerrero, M.; Sasmex

    2013-05-01

    Rapid, reliable earthquake detection maximizes the warning time that can be offered to the population. Detection algorithms at the sensing field stations (FS) of an earthquake early warning system must therefore have a high rate of correct detection; this condition allows the numerical processes that derive the parameters for alert activation to run reliably. Over more than 23 years of continuous service, the Mexican Seismic Alert System (SASMEX) has used various detection methodologies to obtain the largest possible warning time when an earthquake occurs and an alert is issued. Beyond the characteristics of the acceleration signal observed at the field stations, site conditions that reduce urban noise are also necessary. Such conditions may hold during the first years of operation, but urban growth near an FS later introduces noise that must be tolerated while the station is relocated; the algorithm design should therefore incorporate robustness against possible errors and false detections. This work presents results on detection algorithms used in Mexico for earthquake early warning, considering recent events and the different warning times obtained depending on whether the P or S phase is detected at the station. Several methodologies are reviewed and described in detail, along with the main features implemented in the Seismic Alert System of Mexico City (SAS), in continuous operation since 1991, and the Seismic Alert System of Oaxaca City (SASO); together these today comprise SASMEX.
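
    As a generic illustration of onset detection at a field station (SASMEX's actual algorithms are not published in this abstract), a classic short-term/long-term average (STA/LTA) trigger compares a short window of recent signal energy against a longer background window.

```python
# STA/LTA phase trigger: fires when the short-term average of the absolute
# amplitude exceeds the long-term (background) average by a threshold ratio.
# Window lengths and threshold are illustrative, not SASMEX parameters.

def sta_lta_trigger(samples, sta_len=5, lta_len=20, threshold=4.0):
    for i in range(lta_len + sta_len, len(samples) + 1):
        sta = sum(abs(s) for s in samples[i - sta_len:i]) / sta_len
        lta = sum(abs(s) for s in samples[i - sta_len - lta_len:i - sta_len]) / lta_len
        if lta > 0 and sta / lta > threshold:
            return i - 1  # index of the newest sample when the trigger fired
    return None

# Quiet background noise followed by a strong arrival at sample 40.
trace = [0.1] * 40 + [2.0] * 20
print(sta_lta_trigger(trace))  # -> 40
```

    Because the trigger is a ratio, it adapts to slowly varying background noise, which is one way a detector can tolerate growing urban noise near a station.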

  16. Multiplex Ligation-Dependent Probe Amplification Versus Multiprobe Fluorescence in Situ Hybridization To Detect Genomic Aberrations in Chronic Lymphocytic Leukemia

    PubMed Central

    Al Zaabi, Eiman A.; Fernandez, Louis A.; Sadek, Irene A.; Riddell, D. Christie; Greer, Wenda L.

    2010-01-01

    Cytogenetic abnormalities play a major role in the prognosis of patients with chronic lymphocytic leukemia (CLL). Several methods have emerged to try to best identify these abnormalities. We used fluorescence in situ hybridization (FISH) to determine the frequency of cytogenetic changes in our CLL patient population. We also evaluated the effectiveness of multiplex ligation-dependent probe amplification (MLPA) in detecting these abnormalities. Sixty-two B-CLL patients and 20 healthy controls were enrolled, and FISH and MLPA analyses were performed on peripheral blood samples. Using FISH, genomic aberrations were found in 73% of patients and presented as follows: single 13q14.3 deletion (60%), trisomy 12 (7%), ATM deletion (6%), 17p13.1 deletion (2%). MLPA analyses done on 61/62 patients showed sensitivity and specificity values of 90% and 100% respectively. MLPA revealed several additional copy number changes, the most common being 19p13 (LDLR and CDKN2D). Moreover, the cost for MLPA analysis, including technical time and reagents, is 86% less than FISH. In conclusion, cytogenetic abnormalities are a common finding in CLL patients, and MLPA is a reliable approach that is more cost effective and faster than FISH. Despite MLPA limitations of sensitivity, it can be used as a first-line screen and complementary test to FISH analysis. PMID:20093390

  17. Shared clonal cytogenetic abnormalities in aberrant mast cells and leukemic myeloid blasts detected by single nucleotide polymorphism microarray-based whole-genome scanning.

    PubMed

    Frederiksen, John K; Shao, Lina; Bixby, Dale L; Ross, Charles W

    2016-04-01

    Systemic mastocytosis (SM) is characterized by a clonal proliferation of aberrant mast cells within extracutaneous sites. In a subset of SM cases, a second associated hematologic non-mast cell disease (AHNMD) is also present, usually of myeloid origin. Polymerase chain reaction and targeted fluorescence in situ hybridization studies have provided evidence that, in at least some cases, the aberrant mast cells are related clonally to the neoplastic cells of the AHNMD. In this work, a single nucleotide polymorphism microarray (SNP-A) was used to characterize the cytogenetics of the aberrant mast cells from a patient with acute myeloid leukemia and concomitant mast cell leukemia associated with a KIT D816A mutation. The results demonstrate the presence of shared cytogenetic abnormalities between the mast cells and myeloid blasts, as well as additional abnormalities within mast cells (copy-neutral loss of heterozygosity) not detectable by routine karyotypic analysis. To our knowledge, this work represents the first application of SNP-A whole-genome scanning to the detection of shared cytogenetic abnormalities between the two components of a case of SM-AHNMD. The findings provide additional evidence of a frequent clonal link between aberrant mast cells and cells of myeloid AHNMDs, and also highlight the importance of direct sequencing for identifying uncommon activating KIT mutations. PMID:26865278

  18. Detection algorithm of big bandwidth chirp signals based on STFT

    NASA Astrophysics Data System (ADS)

    Wang, Jinzhen; Wu, Juhong; Su, Shaoying; Chen, Zengping

    2014-10-01

    Aiming at the problem of detecting wideband chirp signals under low signal-to-noise ratio (SNR) conditions, an effective signal detection algorithm based on the Short-Time Fourier Transform (STFT) is proposed. Exploiting the fact that noise energy is dispersed across the spectrum while chirp energy is concentrated, the STFT is performed on the signal with a Gauss window advanced by a fixed step, and the peak-spectrum frequency obtained from each STFT corresponds to the time of its stepped window. These frequencies are then binarized, and an approach similar to the mnk method in the time domain is used to detect the chirp pulse and determine coarse starting and ending times. Finally, the data segments containing the coarse starting and ending times are subdivided evenly into many smaller segments, on each of which the STFT is applied again to attain precise starting and ending times. Simulations show that when the SNR is above -28 dB, the detection probability is at least 99% with zero false alarm probability, and good estimation accuracy of the starting and ending times is achieved. The algorithm is easy to realize and requires less computation than a full FFT of the record when the STFT window width and step length are chosen properly, so it has good engineering value.
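
    The core mechanism, tracking the peak spectral bin across stepped windows, can be sketched as follows; the rectangular window, window length, step, and chirp parameters are illustrative simplifications (the paper uses a Gauss window).

```python
import cmath
import math

# Slide a short window along the signal, take a DFT of each window, and
# record the peak-magnitude frequency bin. For a chirp the peak bin advances
# steadily from window to window; for noise it jumps erratically.

def window_peak_bins(signal, win=32, step=32):
    peaks = []
    for start in range(0, len(signal) - win + 1, step):
        seg = signal[start:start + win]
        mags = [abs(sum(seg[n] * cmath.exp(-2j * math.pi * k * n / win)
                        for n in range(win)))
                for k in range(win // 2)]
        peaks.append(max(range(win // 2), key=lambda k: mags[k]))
    return peaks

# A linear chirp sweeping upward in normalized frequency.
N = 256
chirp = [math.cos(2 * math.pi * (0.02 + 0.10 * n / N) * n) for n in range(N)]
print(window_peak_bins(chirp))  # peak bin rises across the 8 windows
```

    Binarizing this peak-bin track and requiring a consistent progression over several consecutive windows is what suppresses false alarms from isolated noise peaks.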

  19. Evaluation of hybrid algorithms for mass detection in digitalized mammograms

    NASA Astrophysics Data System (ADS)

    Cordero, José; Garzón Reyes, Johnson

    2011-01-01

    Breast cancer remains a significant public health problem; early detection of lesions increases the chances of successful medical treatment. Mammography is an effective imaging modality for early diagnosis of abnormalities: the image of the mammary gland is obtained with low-dose X-rays, which allows a tumor or circumscribed mass to be detected two to three years before it becomes clinically palpable, and it is the only method that has so far reduced mortality from breast cancer. In this paper three hybrid algorithms for circumscribed mass detection in digitized mammograms are evaluated. The first stage corresponds to a review of the enhancement and segmentation techniques used in processing the mammographic images. A shape filter was then applied to the resulting regions. The surviving regions were processed by means of a Bayesian filter, where the feature vector for the classifier was built from a few measurements. The implemented algorithms were then evaluated with ROC curves, using 40 test images: 20 normal and 20 with circumscribed lesions. Finally, the advantages and disadvantages of each algorithm in correctly detecting a lesion are discussed.

  20. Performance of a community detection algorithm based on semidefinite programming

    NASA Astrophysics Data System (ADS)

    Ricci-Tersenghi, Federico; Javanmard, Adel; Montanari, Andrea

    2016-03-01

    The problem of detecting communities in a graph is perhaps one of the most studied inference problems, given its simplicity and its widespread diffusion across several disciplines. A very common benchmark for this problem is the stochastic block model, or planted partition problem, in which a phase transition in the detectability of the planted partition takes place as the signal-to-noise ratio changes. Optimal detection algorithms based on spectral methods exist, but we show that these are extremely sensitive to slight modifications of the generative model. Recently, Javanmard, Montanari and Ricci-Tersenghi [1] used statistical physics arguments and numerical simulations to show that finding communities in the stochastic block model via semidefinite programming is quasi-optimal. Further, the resulting semidefinite relaxation can be solved efficiently and is very robust with respect to changes in the generative model. In this paper we study in detail several practical aspects of this new algorithm based on semidefinite programming for the detection of the planted partition. The algorithm turns out to be very fast, allowing the solution of problems with O(10^5) variables in a few seconds on a laptop computer.

  1. Localization of tumors in various organs, using edge detection algorithms

    NASA Astrophysics Data System (ADS)

    López Vélez, Felipe

    2015-09-01

    The edge of an image is a set of points organized in a curved line, at each of which the brightness of the image changes abruptly or has discontinuities. To find these edges, five different mathematical methods are used and then compared with one another, with the aim of determining which method best finds the edges of a given image. In this paper the five methods are applied for medical purposes, to find which one is capable of finding the edges of a scanned image most accurately. The problem consists of analyzing two biomedical images, one representing a brain tumor and the other a liver tumor. These images are analyzed with each of the five methods, and the results are compared in order to determine the best method. The edge detection algorithms chosen were the Bessel, Morse, Hermite, Weibull, and Sobel algorithms. After applying each method to both images, it was impossible to single out the most accurate method for tumor detection, because the best method changed from case to case: for the brain tumor image the Morse method was best at finding the edges, but for the liver tumor image it was the Hermite method. Further observation shows that Hermite and Morse have, for these two cases, the lowest standard deviations, leading to the conclusion that these two are the most accurate methods for finding edges in the analysis of biomedical images.
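
    Of the five methods compared, the Sobel operator is the standard one; a compact version on a tiny synthetic image shows the gradient-magnitude computation at each interior pixel.

```python
# Sobel edge detection: convolve 3x3 horizontal and vertical gradient
# kernels over the image and take the gradient magnitude.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel(image):
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * image[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * image[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: dark left half, bright right half.
image = [[0, 0, 0, 9, 9, 9] for _ in range(5)]
edges = sobel(image)
print(edges[2])  # gradient magnitude peaks at the step between columns 2 and 3
```

    The other four methods named in the paper would slot into the same convolve-and-threshold pipeline with different kernels.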

  2. Algorithm for detecting energy diversion. [Appendix contains an annotated bibliography]

    SciTech Connect

    Altschul, R.E.; Janky, D.G.; Scholz, F.W.; Tjoelker, R.A.; Tosch, T.J.

    1991-08-01

    The objective of this project was to investigate the factors influencing energy consumption and to develop advanced statistical algorithms, and a corresponding computer program, to aid utilities in identifying energy diversion by analyzing patterns of energy consumption and other factors readily available to the utility. This final report documents the development of the algorithms, the methodologies used in analyzing their validity, and the advantages and disadvantages of these methods as revealed by those analyses. In the internal study, the algorithms appeared to discriminate diverters from the rest of the population. Problems that decreased the efficiency of the algorithms were found during the field investigation, including the quality of the data used by the algorithms and the incomplete description of diverters. For the external study, 300 potential diverters were submitted to the participating utility for field investigation. The utility found many cases of vacancies and seasonal use, two cases of tampering, several cases of a suspect nature, irregularities in billing, and a number of meter problems. Code development was not undertaken because of the inconclusive results obtained in the external validation of the algorithms. Two final recommendations are presented. The first is that, to create better profiles of diverters and nondiverters, a large-sample investigation of electric utility customers should be conducted; this would eliminate the selection bias problems perceived to be present in the current data. The second provides a list of action items the utilities can take to improve both present detection methods and any algorithms that may be developed in the future. 64 refs., 39 figs., 32 figs.

  3. SEU-tolerant IQ detection algorithm for LLRF accelerator system

    NASA Astrophysics Data System (ADS)

    Grecki, M.

    2007-08-01

    High-energy accelerators use an RF field to accelerate charged particles, so measuring the effective field parameters (amplitude and phase) is a task of great importance in these facilities. The RF signal is downconverted in frequency, keeping the amplitude and phase information, and then sampled in an ADC. One of the several tasks of the LLRF control system is to estimate the amplitude and phase (or I and Q components) of the RF signal; these parameters are then used in the control algorithm. The XFEL accelerator will be built using a single-tunnel concept, so electronic devices (including the LLRF control system) will be exposed to ionizing radiation, particularly to a neutron flux generating SEUs in digital circuits. The algorithms implemented in FPGAs/DSPs should therefore be SEU-tolerant. This paper presents the application of the WCC method to make the IQ detection algorithm immune to SEUs. The VHDL implementation of this algorithm in a Xilinx Virtex II Pro FPGA is presented, together with simulation results proving the algorithm's suitability for systems operating in the presence of SEUs.
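
    A textbook digital IQ detector (the generic form, not the paper's WCC-hardened FPGA implementation) samples the downconverted signal at four times the intermediate frequency, so consecutive samples land on the 0/90/180/270 degree phases and I and Q fall out of simple alternating sums.

```python
import math

# 4x-IF IQ detection: s[n] = A*cos(2*pi*n/4 + phi) gives the repeating
# sample pattern (A*cos phi, -A*sin phi, -A*cos phi, A*sin phi), so
# alternating sums recover I = A*cos(phi) and Q = A*sin(phi).

def iq_detect(samples):
    i_sum = q_sum = 0.0
    groups = len(samples) // 4
    for n in range(groups * 4):
        phase = n % 4
        if phase == 0:
            i_sum += samples[n]
        elif phase == 1:
            q_sum -= samples[n]
        elif phase == 2:
            i_sum -= samples[n]
        else:
            q_sum += samples[n]
    i, q = i_sum / (2 * groups), q_sum / (2 * groups)
    return math.hypot(i, q), math.degrees(math.atan2(q, i))

amp, phi = 2.0, 30.0
samples = [amp * math.cos(2 * math.pi * n / 4 + math.radians(phi))
           for n in range(64)]
a, p = iq_detect(samples)
print(round(a, 3), round(p, 1))  # -> 2.0 30.0
```

    The appeal for LLRF hardware is that this reduces to adds and subtracts with no multipliers, leaving a small logic footprint to harden against SEUs.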

  4. Evaluation of Stereo Algorithms for Obstacle Detection with Fisheye Lenses

    NASA Astrophysics Data System (ADS)

    Krombach, N.; Droeschel, D.; Behnke, S.

    2015-08-01

    For autonomous navigation of micro aerial vehicles (MAVs), a robust detection of obstacles with onboard sensors is necessary in order to avoid collisions. Cameras have the potential to perceive the surroundings of MAVs for the reconstruction of their 3D structure. We equipped our MAV with two fisheye stereo camera pairs to achieve an omnidirectional field-of-view. Most stereo algorithms are designed for the standard pinhole camera model, though. Hence, the distortion effects of the fisheye lenses must be properly modeled and model parameters must be identified by suitable calibration procedures. In this work, we evaluate the use of real-time stereo algorithms for depth reconstruction from fisheye cameras together with different methods for calibration. In our experiments, we focus on obstacles occurring in urban environments that are hard to detect due to their low diameter or homogeneous texture.

  5. MUSIC Algorithms for Rebar Detection

    NASA Astrophysics Data System (ADS)

    Leone, G.; Solimene, R.

    2012-04-01

    In this contribution we consider the problem of detecting and localizing scatterers of small cross section, with respect to the wavelength, from their scattered field, once a known incident field has interrogated the scene in which they reside. A pertinent applicative context is rebar detection within concrete pillars. In this case, the scatterers to be detected are the rebars themselves or voids left by their absence. In both cases, as the scatterers have point-like support, a subspace projection method can be conveniently exploited [1]. However, as the field scattered by rebars is stronger than that due to voids, the latter can be expected to be difficult to detect. To circumvent this problem, we adopt a two-step MUltiple SIgnal Classification (MUSIC) detection algorithm. The first stage aims at detecting the rebars. Once the rebars are detected, their positions are exploited to update the Green's function, and a further detection scheme is then run to locate the voids; in this second step, the background medium also encompasses the rebars. The analysis is conducted numerically for a simplified two-dimensional scalar scattering geometry. In more detail, as is usual with the MUSIC algorithm, a multi-view/multi-static single-frequency configuration is considered [2]. Baratonia, G. Leone, R. Pierri, R. Solimene, "Fault Detection in Grid Scattering by a Time-Reversal MUSIC Approach," Proc. of ICEAA 2011, Turin, 2011. E. A. Marengo, F. K. Gruber, "Subspace-Based Localization and Inverse Scattering of Multiply Scattering Point Targets," EURASIP Journal on Advances in Signal Processing, 2007, Article ID 17342, 16 pages (2007).

  6. Algorithm for Automated Detection of Edges of Clouds

    NASA Technical Reports Server (NTRS)

    Ward, Jennifer G.; Merceret, Francis J.

    2006-01-01

    An algorithm processes cloud-physics data gathered in situ by an aircraft, along with reflectivity data gathered by ground-based radar, to determine whether the aircraft is inside or outside a cloud at a given time. A cloud edge is deemed to be detected when the in/out state changes, subject to a hysteresis constraint. Such determinations are important in continuing research on relationships among lightning, electric charges in clouds, and decay of electric fields with distance from cloud edges.
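    A minimal sketch of the in/out decision with hysteresis is shown below; the threshold names and values are hypothetical, not those of the flight algorithm, which fuses in-situ cloud-physics data with ground-radar reflectivity.

```python
def cloud_state_sequence(lwc, radar_dbz, enter_thresh=0.01, exit_thresh=0.005,
                         dbz_thresh=10.0):
    """Track in-cloud/out-of-cloud state with hysteresis (illustrative).

    A sample switches the state to 'in cloud' when liquid water content
    exceeds enter_thresh or radar reflectivity exceeds dbz_thresh; the
    state only flips back to 'out' when both fall below the lower exit
    thresholds, so brief dips do not register as spurious cloud edges.
    """
    in_cloud = False
    states, edges = [], []
    for i, (w, z) in enumerate(zip(lwc, radar_dbz)):
        if not in_cloud and (w > enter_thresh or z > dbz_thresh):
            in_cloud = True
            edges.append(i)          # cloud-entry edge detected
        elif in_cloud and w < exit_thresh and z < dbz_thresh:
            in_cloud = False
            edges.append(i)          # cloud-exit edge detected
        states.append(in_cloud)
    return states, edges
```

    Because the exit threshold sits below the entry threshold, a brief dip in liquid water content inside a cloud does not register as an edge.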

  7. Geostationary Fire Detection with the Wildfire Automated Biomass Burning Algorithm

    NASA Astrophysics Data System (ADS)

    Hoffman, J.; Schmidt, C. C.; Brunner, J. C.; Prins, E. M.

    2010-12-01

    The Wildfire Automated Biomass Burning Algorithm (WF_ABBA), developed at the Cooperative Institute for Meteorological Satellite Studies (CIMSS), has a long legacy of operational wildfire detection and characterization. In recent years, applications of geostationary fire detection and characterization data have been expanding. Fires are detected with a contextual algorithm, and when the fires meet certain conditions the instantaneous fire size, temperature, and radiative power are calculated and provided in user products. The WF_ABBA has been applied to data from Geostationary Operational Environmental Satellite (GOES)-8 through -15, Meteosat-8/-9, and Multifunction Transport Satellite (MTSAT)-1R/-2. The WF_ABBA is also being developed for upcoming platforms such as the GOES-R Advanced Baseline Imager (ABI) and other geostationary satellites. Development of the WF_ABBA for GOES-R ABI has focused on adapting the legacy algorithm to the new satellite system, enhancing its capabilities to take advantage of the improvements available from ABI, and addressing user needs. By its nature as a subpixel feature, the observation of fire is extraordinarily sensitive to the characteristics of the sensor, and this has been a fundamental part of the GOES-R WF_ABBA development work.

  8. Lunar Crescent Detection Based on Image Processing Algorithms

    NASA Astrophysics Data System (ADS)

    Fakhar, Mostafa; Moalem, Peyman; Badri, Mohamad Ali

    2014-11-01

    For many years lunar crescent visibility has been studied by astronomers. Different criteria have been used to predict and evaluate the visibility status of new Moon crescents. Powerful equipment such as telescopes and binoculars has changed the capability of observations. Most conventional statistical criteria made wrong predictions when new observations (based on modern equipment) were reported. In order to verify such reports and modify the criteria, not only the previous statistical parameters but also some new and effective parameters, such as high magnification, the contour effect, low signal-to-noise ratio, eyestrain and weather conditions, should be considered. In this paper a new method is presented for lunar crescent detection based on the processing of lunar crescent images. The method includes two main steps: first, an image processing algorithm improves the signal-to-noise ratio and detects lunar crescents based on the circular Hough transform (CHT); second, an algorithm based on image histogram processing detects the crescent visually. The final decision is made by comparing the results of the visual and CHT algorithms. In order to evaluate the proposed method, a database including 31 images was tested. The illustrated method can distinguish and extract crescents that even the eye cannot recognize. The proposed method significantly reduces artifacts, increases the SNR, and can be used easily both by astronomers and by those who want to develop a new criterion, as a reliable method to verify empirical observations.
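    The circular Hough transform used in the first step can be sketched as plain accumulator voting at a single known radius; the paper's pipeline additionally denoises the image and would scan a range of radii. Function and parameter names here are illustrative.

```python
import numpy as np

def hough_circle_center(edge_points, radius, shape):
    """Vote for circle centres at a known radius (simplified CHT).

    Each edge pixel votes for every centre lying `radius` away from it;
    the accumulator peak is the best-supported centre.
    """
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    for (y, x) in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)     # accumulate votes
    return np.unravel_index(np.argmax(acc), shape)
```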

  9. Edge detection in medical images using a genetic algorithm.

    PubMed

    Gudmundsson, M; El-Kwae, E A; Kabuka, M R

    1998-06-01

    An algorithm is developed that detects well-localized, unfragmented, thin edges in medical images based on optimization of edge configurations using a genetic algorithm (GA). Several enhancements were added to improve the performance of the algorithm over a traditional GA. The edge map is split into connected subregions to reduce the solution space and simplify the problem. The edge map is then optimized in parallel using incorporated genetic operators that perform transforms on edge structures. Adaptation is used to control operator probabilities based on their participation. The GA was compared to the simulated annealing (SA) approach using ideal and actual medical images from different modalities, including magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound. Quantitative comparisons were provided based on the Pratt figure of merit and on the cost-function minimization. The detected edges were thin, continuous, and well localized. Most of the basic edge features were detected. Results for different medical image modalities are promising and encourage further investigation to improve the accuracy and to experiment with different cost functions and genetic operators. PMID:9735910

  10. A Study of Lane Detection Algorithm for Personal Vehicle

    NASA Astrophysics Data System (ADS)

    Kobayashi, Kazuyuki; Watanabe, Kajiro; Ohkubo, Tomoyuki; Kurihara, Yosuke

    By the term “personal vehicle”, we mean a simple and lightweight vehicle expected to emerge as a personal ground transportation device. The motorcycle, electric wheelchair and motor-powered bicycle are examples of personal vehicles and have been developed as useful means of personal transportation. Recently, a new type of intelligent personal vehicle called the Segway has been developed, which is controlled and stabilized using on-board intelligent multiple sensors. The demand for such personal vehicles is increasing, in order 1) to enhance human mobility, 2) to support mobility for elderly persons, and 3) to reduce environmental burdens. As the personal vehicle market grows rapidly, the number of accidents caused by human error is also increasing, and these accidents are tied to the vehicle's drivability. To enhance or support drivability, as well as to prevent accidents, intelligent assistance is necessary. One of the most important elemental functions for a personal vehicle is robust lane detection. In this paper, we develop a robust lane detection method for personal vehicles in outdoor environments. The proposed method employs a 360-degree omnidirectional camera and a robust image processing algorithm. In order to detect lanes, a combination of template matching and the Hough transform is employed. The validity of the proposed lane detection algorithm is confirmed with an actual developed vehicle under various outdoor sunlight conditions.
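    Of the two techniques combined, template matching can be sketched as brute-force normalized cross-correlation; the sizes and search region here are illustrative, and a real-time detector would restrict the search window and fuse the result with a Hough-transform line fit.

```python
import numpy as np

def match_template_ncc(image, template):
    """Normalized cross-correlation template matching (brute force).

    Returns the top-left corner of the best match and its NCC score
    (1.0 for a perfect match). Both inputs are 2D grayscale arrays.
    """
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * tnorm
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best
```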

  11. Sparsity-based algorithm for detecting faults in rotating machines

    NASA Astrophysics Data System (ADS)

    He, Wangpeng; Ding, Yin; Zi, Yanyang; Selesnick, Ivan W.

    2016-05-01

    This paper addresses the detection of periodic transients in vibration signals so as to detect faults in rotating machines. For this purpose, we present a method to estimate periodic-group-sparse signals in noise. The method is based on the formulation of a convex optimization problem. A fast iterative algorithm is given for its solution. A simulated signal is formulated to verify the performance of the proposed approach for periodic feature extraction. The detection performance of comparative methods is compared with that of the proposed approach via RMSE values and receiver operating characteristic (ROC) curves. Finally, the proposed approach is applied to single fault diagnosis of a locomotive bearing and compound fault diagnosis of motor bearings. The results show that the proposed approach can effectively detect and extract the useful features of bearing outer-race and inner-race defects.
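    The paper's estimator solves a convex problem with a periodic-group-sparse penalty; as a greatly simplified stand-in, elementwise soft-thresholding solves the plain sparse denoising problem min_x 0.5*(x - y)^2 + lam*|x| in closed form.

```python
import numpy as np

def soft_threshold(y, lam):
    """Closed-form minimizer of 0.5*(x - y)**2 + lam*|x|, elementwise:
    shrink each sample toward zero by lam, zeroing anything smaller."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
```

    The group and periodicity structure in the paper couples samples one period apart instead of shrinking them independently, but iterative algorithms for such penalties typically build on shrinkage steps of this kind.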

  12. Algorithm for Detecting a Bright Spot in an Image

    NASA Technical Reports Server (NTRS)

    2009-01-01

    An algorithm processes the pixel intensities of a digitized image to detect and locate a circular bright spot, the approximate size of which is known in advance. The algorithm is used to find images of the Sun in cameras aboard the Mars Exploration Rovers. (The images are used in estimating orientations of the Rovers relative to the direction to the Sun.) The algorithm can also be adapted to tracking of circular bright targets in other diverse applications. The first step in the algorithm is to calculate a dark-current ramp, a correction necessitated by the scheme that governs the readout of pixel charges in the charge-coupled-device camera in the original Mars Exploration Rover application. In this scheme, the fraction of each frame period during which dark current is accumulated in a given pixel (and, hence, the dark-current contribution to the pixel image-intensity reading) is proportional to the pixel row number. For the purpose of the algorithm, the dark-current contribution to the intensity reading from each pixel is assumed to equal the average of intensity readings from all pixels in the same row, and the factor of proportionality is estimated on the basis of this assumption. Then the product of the row number and the factor of proportionality is subtracted from the reading from each pixel to obtain a dark-current-corrected intensity reading. The next step in the algorithm is to determine the best location, within the overall image, for a window of N × N pixels (where N is an odd number) large enough to contain the bright spot of interest plus a small margin. (In the original application, the overall image contains 1,024 by 1,024 pixels, the image of the Sun is about 22 pixels in diameter, and N is chosen to be 29.)

  13. Fast automatic algorithm for bifurcation detection in vascular CTA scans

    NASA Astrophysics Data System (ADS)

    Brozio, Matthias; Gorbunova, Vladlena; Godenschwager, Christian; Beck, Thomas; Bernhardt, Dominik

    2012-02-01

    Endovascular imaging aims at identifying vessels and their branches. Automatic vessel segmentation and bifurcation detection ease both clinical research and routine work. In this article a state-of-the-art bifurcation detection algorithm is developed and applied to vascular computed tomography angiography (CTA) scans to mark the common iliac artery and its branches, the internal and external iliacs. In contrast to other methods, our algorithm does not rely on a complete segmentation of a vessel in the 3D volume, but evaluates the cross-sections of the vessel slice by slice. Candidates for vessels are obtained by thresholding, followed by 2D connected component labeling and prefiltering by size and position. The remaining candidates are connected in a graph weighted by squared distances. The graph is traversed with Dijkstra's algorithm to obtain candidates for the arteries. We use another set of features considering the length and shape of the paths to determine the best candidate and detect the bifurcation. The method was tested on 119 datasets acquired with different CT scanners and varying protocols. Both easy-to-evaluate datasets with high resolution and no apparent clinical diseases and difficult ones with low resolution, major calcifications, stents or poor contrast between the vessel and surrounding tissue were included. The presented results are promising: in 75.7% of the cases the bifurcation was labeled correctly, and in 82.7% the common artery and one of its branches were assigned correctly. The computation time was on average 0.49 s +/- 0.28 s, close to human interaction time, which makes the algorithm applicable for time-critical applications.
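    The graph-traversal step is standard Dijkstra; a minimal sketch with a binary heap is shown below, where the edge weights would be the squared distances between vessel candidates on adjacent slices, as in the described candidate graph.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source.

    `graph` maps node -> list of (neighbour, weight) pairs; in the vessel
    application, nodes would be 2D cross-section candidates and weights
    the squared distances between candidates on neighbouring slices.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```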

  14. Aberrant Mucin5B expression in lung adenocarcinomas detected by iTRAQ labeling quantitative proteomics and immunohistochemistry

    PubMed Central

    2013-01-01

    Background Lung cancer is the number one cause of cancer-related deaths in the United States and worldwide. The complex protein changes and/or signatures of protein expression in lung cancer, particularly in non-small cell lung cancer (NSCLC), have not been well defined. Although several studies have investigated the protein profile in lung cancers, the knowledge is far from complete. Among early studies, mucin5B (MUC5B) has been suggested to play an important role in tumor progression. MUC5B is the major gel-forming mucin in the airway. In this study, we investigated the overall protein profile and MUC5B expression in lung adenocarcinomas, the most common type of NSCLC. Methods Lung adenocarcinoma tissue in formalin-fixed paraffin-embedded (FFPE) blocks was collected and microdissected. Peptides from 8 tumors and 8 tumor-matched normal lung tissues were extracted and labeled with 8-channel iTRAQ reagents. The labeled peptides were identified and quantified by LC-MS/MS using an LTQ Orbitrap Velos mass spectrometer. The MUC5B expression identified by iTRAQ labeling was further validated using immunohistochemistry (IHC) on a tumor tissue microarray (TMA). Results A total of 1288 peptides from 210 proteins were identified and quantified in tumor tissues. Twenty-two proteins showed greater than 1.5-fold differences between tumor and tumor-matched normal lung tissues. Fifteen proteins, including MUC5B, showed significant changes in tumor tissues. The aberrant expression of MUC5B was further identified in 71.1% of lung adenocarcinomas in the TMA. Conclusions A subset of tumor-associated proteins was differentially expressed in lung adenocarcinomas. The differential expression of MUC5B in lung adenocarcinomas suggests its role as a potential biomarker in the detection of adenocarcinomas. PMID:24176033

  15. Detection of deception in structured interviews using sensors and algorithms

    NASA Astrophysics Data System (ADS)

    Cunha, Meredith G.; Clarke, Alissa C.; Martin, Jennifer Z.; Beauregard, Jason R.; Webb, Andrea K.; Hensley, Asher A.; Keshava, Nirmal Q.; Martin, Daniel J.

    2010-04-01

    Draper Laboratory and MRAC have recently completed a comprehensive study to quantitatively evaluate deception detection performance under different interviewing styles. The interviews were performed while multiple physiological waveforms were collected from participants to determine how well automated algorithms can detect deception based upon changes in physiology. We report the results of a multi-factorial experiment with 77 human participants who were deceptive on specific topics during interviews conducted with one of two styles: a forcing style which relies on more coercive or confrontational techniques, or a fostering approach, which relies on open-ended interviewing and elements of a cognitive interview. The interviews were performed in a state-of-the-art facility where multiple sensors simultaneously collect synchronized physiological measurements, including electrodermal response, relative blood pressure, respiration, pupil diameter, and ECG. Features extracted from these waveforms during honest and deceptive intervals were then submitted to a hypothesis test to evaluate their statistical significance. A univariate statistical detection algorithm then assessed the ability to detect deception for different interview configurations. Our paper will explain the protocol and experimental design for this study. Our results will be in terms of statistical significances, effect sizes, and ROC curves and will identify how promising features performed in different interview scenarios.

  16. Oscillation Detection Algorithm Development Summary Report and Test Plan

    SciTech Connect

    Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.; Jin, Shuangshuang

    2009-10-03

    -based modal analysis algorithms have been developed. They include Prony analysis, the Regularized Robust Recursive Least Square (R3LS) algorithm, the Yule-Walker algorithm, the Yule-Walker Spectrum algorithm, and the N4SID algorithm. Each has been shown to be effective for certain situations, but not as effective for others. For example, traditional Prony analysis works well for disturbance data but not for ambient data, while Yule-Walker is designed for ambient data only. Even in an algorithm that works for both disturbance data and ambient data, such as R3LS, the latency resulting from the time window used in the algorithm is an issue in timely estimation of oscillation modes. For ambient data, the time window needs to be longer to accumulate information for a reasonably accurate estimation, while for disturbance data the time window can be significantly shorter, so the latency in estimation can be much less. In addition, adding a known input signal, such as a noise probing signal, can increase the knowledge of system oscillatory properties and thus improve the quality of mode estimation. System situations change over time. Disturbances can occur at any time, and probing signals can be added for a certain time period and then removed. All these observations point to the need to add intelligence to ModeMeter applications. That is, a ModeMeter needs to adaptively select different algorithms and adjust parameters for various situations. This project aims to develop systematic approaches for algorithm selection and parameter adjustment. The very first step is to detect the occurrence of oscillations so that the algorithm and parameters can be changed accordingly. The proposed oscillation detection approach is based on the signal-to-noise ratio of measurements.

  17. Incremental refinement of a multi-user-detection algorithm (II)

    NASA Astrophysics Data System (ADS)

    Vollmer, M.; Götze, J.

    2003-05-01

    Multi-user detection is a technique proposed for mobile radio systems based on the CDMA principle, such as the upcoming UMTS. While offering an elegant solution to problems such as intra-cell interference, it demands very significant computational resources. In this paper, we present a high-level approach for reducing the required resources for performing multi-user detection in a 3GPP TDD multi-user system. This approach is based on a displacement representation of the parameters that describe the transmission system, and a generalized Schur algorithm that works on this representation. The Schur algorithm naturally leads to a highly parallel hardware implementation using CORDIC cells. It is shown that this hardware architecture can also be used to compute the initial displacement representation. It is very beneficial to introduce incremental refinement structures into the solution process, both at the algorithmic level and in the individual cells of the hardware architecture. We detail these approximations and present simulation results that confirm their effectiveness.

  18. Runway Safety Monitor Algorithm for Runway Incursion Detection and Alerting

    NASA Technical Reports Server (NTRS)

    Green, David F., Jr.; Jones, Denise R. (Technical Monitor)

    2002-01-01

    The Runway Safety Monitor (RSM) is an algorithm for runway incursion detection and alerting that was developed in support of NASA's Runway Incursion Prevention System (RIPS) research conducted under the NASA Aviation Safety Program's Synthetic Vision System element. The RSM algorithm provides pilots with enhanced situational awareness and warnings of runway incursions in sufficient time to take evasive action and avoid accidents during landings, takeoffs, or taxiing on the runway. The RSM currently runs as a component of the NASA Integrated Display System, an experimental avionics software system for terminal area and surface operations. However, the RSM algorithm can be implemented as a separate program to run on any aircraft with traffic data link capability. The report documents the RSM software and describes in detail how RSM performs runway incursion detection and alerting functions for NASA RIPS. The report also describes the RIPS flight tests conducted at the Dallas-Ft Worth International Airport (DFW) during September and October of 2000, and the RSM performance results and lessons learned from those flight tests.

  19. Detection of cracks in shafts with the Approximated Entropy algorithm

    NASA Astrophysics Data System (ADS)

    Sampaio, Diego Luchesi; Nicoletti, Rodrigo

    2016-05-01

    Approximate Entropy is a statistical measure used primarily in the fields of medicine, biology, and telecommunications for classifying and identifying complex signal data. In this work, an Approximate Entropy algorithm is used to detect cracks in a rotating shaft. The signals of the cracked shaft are obtained from numerical simulations of a de Laval rotor with breathing cracks modelled by Fracture Mechanics. In this case, the vertical displacements of the rotor during run-up transients were analysed. The results show the feasibility of detecting cracks from 5% depth onward, irrespective of the unbalance of the rotating system and of the crack orientation in the shaft. The results also show that the algorithm can differentiate between the occurrence of a crack only, misalignment only, and crack + misalignment in the system. However, the algorithm is sensitive to the intrinsic parameters p (number of data points in a sample vector) and f (fraction of the standard deviation that defines the minimum distance between two sample vectors), and good results are only obtained by appropriately choosing their values according to the sampling rate of the signal.
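    For reference, Pincus' Approximate Entropy can be sketched as below; the embedding length m and tolerance r play the roles of the paper's parameters p and f (r defaulting to a fraction of the standard deviation).

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate Entropy ApEn(m, r) of a 1-D signal.

    m is the embedding (template) length and r the similarity tolerance,
    by default 0.2 times the standard deviation. Self-matches are counted,
    as in the original formulation, so no logarithm of zero occurs.
    """
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    N = len(x)

    def phi(m):
        # All overlapping length-m template vectors
        emb = np.array([x[i:i + m] for i in range(N - m + 1)])
        # Chebyshev distance between every pair of templates
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        C = (d <= r).mean(axis=1)     # fraction of templates within r
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)
```

    Regular signals yield values near zero while irregular ones yield larger values, which is what makes the measure usable as a crack indicator.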

  20. Correction of Distributed Optical Aberrations

    SciTech Connect

    Baker, K; Olivier, S; Carrano, C; Phillion, D

    2006-02-12

    The objective of this project was to demonstrate the use of multiple distributed deformable mirrors (DMs) to improve the performance of optical systems with distributed aberrations. This concept is expected to provide dramatic improvement in the optical performance of systems in applications where the aberrations are distributed along the optical path or within the instrument itself. Our approach used multiple actuated DMs distributed to match the aberration distribution. The project developed the algorithms necessary to determine the required corrections and simulate the performance of these multiple DM systems.

  1. Firefly Algorithm in detection of TEC seismo-ionospheric anomalies

    NASA Astrophysics Data System (ADS)

    Akhoondzadeh, Mehdi

    2015-07-01

    Anomaly detection in the time series of different earthquake precursors is an essential step toward creating an early warning system with an allowable uncertainty. Since these time series are often nonlinear, complex and massive, the applied predictor method should be able to detect discord patterns in a large amount of data in a short time. This study applies the Firefly Algorithm (FA) as a simple and robust predictor to detect TEC (Total Electron Content) seismo-ionospheric anomalies around the times of some powerful earthquakes, including Chile (27 February 2010), Varzeghan (11 August 2012) and Saravan (16 April 2013). Outstanding anomalies were observed 7 and 5 days before the Chile and Varzeghan earthquakes, respectively, and 3 and 8 days prior to the Saravan earthquake.
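    The generic Firefly Algorithm (Yang's scheme) that such a detector builds on can be sketched as a minimiser; all parameter values below are illustrative, and this is the generic optimiser, not the paper's anomaly detector itself.

```python
import numpy as np

def firefly_minimize(f, bounds, n_fireflies=20, n_iter=100,
                     beta0=1.0, gamma=0.1, alpha=0.3, seed=0):
    """Generic Firefly Algorithm for minimisation (illustrative parameters).

    Each firefly moves toward every brighter (lower-cost) firefly with
    attractiveness beta0 * exp(-gamma * r^2), plus a decaying random walk;
    the current best firefly never moves, so the best cost never worsens.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(n_fireflies, dim))
    cost = np.array([f(x) for x in X])
    for t in range(n_iter):
        a = alpha * (0.97 ** t)                   # shrinking random step
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if cost[j] < cost[i]:             # j is brighter: attract i
                    r2 = float(np.sum((X[i] - X[j]) ** 2))
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] = np.clip(X[i] + beta * (X[j] - X[i])
                                   + a * (rng.random(dim) - 0.5), lo, hi)
                    cost[i] = f(X[i])
    best = int(np.argmin(cost))
    return X[best], cost[best]
```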

  2. A morphological algorithm for improving radio-frequency interference detection

    NASA Astrophysics Data System (ADS)

    Offringa, A. R.; van de Gronde, J. J.; Roerdink, J. B. T. M.

    2012-03-01

    A technique is described that is used to improve the detection of radio-frequency interference (RFI) in astronomical radio observatories. It is applied to a two-dimensional interference mask after regular detection in the time-frequency domain with existing techniques. The scale-invariant rank (SIR) operator is defined, a one-dimensional mathematical morphology technique that can be used to find adjacent intervals in the time or frequency domain that are likely to be affected by RFI. The technique might also be applicable in other areas in which morphological scale-invariant behaviour is desired, such as source detection. A new algorithm is described that performs well, has linear time complexity, and is fast enough to be applied in modern high-resolution observatories. It is used in the default pipeline of the LOFAR observatory.
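    The SIR operator admits a linear-time sketch via prefix sums: writing weight eta for a flagged sample and eta - 1 for an unflagged one, a sample is output-flagged exactly when some interval containing it has a non-negative weight sum, i.e. a flagged fraction of at least 1 - eta.

```python
import numpy as np

def sir_operator(flags, eta):
    """Scale-invariant rank (SIR) operator in one dimension.

    A sample is flagged in the output if it lies in any interval whose
    flagged fraction is at least (1 - eta). With w = eta for flagged and
    eta - 1 for unflagged samples, that is "some interval containing the
    sample has non-negative weight sum", evaluated here in O(n) with
    running minima/maxima of the prefix sums.
    """
    flags = np.asarray(flags, dtype=bool)
    w = np.where(flags, eta, eta - 1.0)
    M = np.concatenate(([0.0], np.cumsum(w)))          # prefix sums, length n+1
    min_left = np.minimum.accumulate(M)[:-1]           # min over boundaries a <= i
    max_right = np.maximum.accumulate(M[::-1])[::-1][1:]  # max over boundaries b > i
    return max_right - min_left >= 0
```

    Because a flagged sample is its own valid interval, the output always contains the input flags; larger eta fills in gaps between flagged regions and extends their edges.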

  3. CS based confocal microwave imaging algorithm for breast cancer detection.

    PubMed

    Sun, Y P; Zhang, S; Cui, Z; Qu, L L

    2016-04-29

    Based on compressive sensing (CS) technology, a high resolution confocal microwave imaging algorithm is proposed for breast cancer detection. By exploiting the spatial sparsity of the target space, the image reconstruction problem is cast within the framework of CS and solved by sparse constraint optimization. The effectiveness and validity of the proposed CS imaging method are verified with full-wave synthetic data from a numerical breast phantom generated using the finite-difference time-domain (FDTD) method. The imaging results show that the proposed imaging scheme can improve the imaging quality while significantly reducing the amount of data measurements and collection time when compared to the traditional delay-and-sum imaging algorithm. PMID:27177106

  4. Fast Particle Pair Detection Algorithms for Particle Simulations

    NASA Astrophysics Data System (ADS)

    Iwai, T.; Hong, C.-W.; Greil, P.

    New algorithms with O(N) complexity have been developed for fast particle-pair detection in particle simulations such as the discrete element method (DEM) and molecular dynamics (MD). They exhibit robustness against broad particle size distributions when compared with conventional boxing methods. Nearly constant calculation speeds are achieved for particle size distributions ranging from mono-size to 1:10, whereas the linked-cell method results in calculation times more than 20 times longer. The basic algorithm, level-boxing, uses a variable search range for each particle. The advanced method, multi-level boxing, employs multiple cell layers to reduce the particle size discrepancy. Another method, indexed-level boxing, reduces the size of the cell arrays by introducing a hash procedure to access the cell array, and is effective for sparse particle systems with a large number of particles.
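    The conventional boxing (cell-list) scheme that the level-boxing variants improve on can be sketched as follows; with cells of side equal to the cutoff, candidate pairs only need to be checked in the same or adjacent cells.

```python
from collections import defaultdict
from itertools import product

def find_pairs(points, cutoff):
    """Neighbour search with a cell list: O(N) for bounded density.

    Points are binned into cells of side `cutoff`, so any pair within the
    cutoff distance must lie in the same or an adjacent cell; only those
    candidates are distance-checked.
    """
    cells = defaultdict(list)
    for idx, p in enumerate(points):
        key = tuple(int(c // cutoff) for c in p)
        cells[key].append(idx)
    pairs = set()
    c2 = cutoff * cutoff
    for key, members in cells.items():
        for off in product((-1, 0, 1), repeat=len(key)):
            nkey = tuple(k + o for k, o in zip(key, off))
            for i in members:
                for j in cells.get(nkey, ()):
                    if i < j and sum((a - b) ** 2
                                     for a, b in zip(points[i], points[j])) <= c2:
                        pairs.add((i, j))
    return pairs
```

    Its weakness, which the multi-level variants address, is that the cell size must follow the largest particle, so broad size distributions inflate the number of candidates per cell.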

  5. A novel dynamical community detection algorithm based on weighting scheme

    NASA Astrophysics Data System (ADS)

    Li, Ju; Yu, Kai; Hu, Ke

    2015-12-01

    Network dynamics plays an important role in analyzing the correlation between function properties and topological structure. In this paper, we propose a novel dynamical iteration (DI) algorithm, which combines the iterative updating of the membership vector with a weighting scheme, i.e. weighting W and tightness T. These new elements can be used to adjust the link strength and the node compactness, improving the speed and accuracy of community structure detection. To estimate the optimal stop time of the iteration, we utilize a new stability measure defined as the Markov random walk auto-covariance. The number of communities does not need to be specified in advance. The method naturally supports overlapping communities by associating each node with a membership vector describing the node's involvement in each community. Theoretical analysis and experiments show that the algorithm can uncover communities effectively and efficiently.

  6. Comparing Several Algorithms for Change Detection of Wetland

    NASA Astrophysics Data System (ADS)

    Yan, F.; Zhang, S.; Chang, L.

    2015-12-01

    As "the kidneys of the landscape" and "ecological supermarkets", wetlands play an important role in ecological equilibrium and environmental protection. Therefore, it is of great significance to understand the dynamic changes of wetlands. Nowadays, many indices and methods have been used in the dynamic monitoring of wetlands. However, no single method or index is suited to detecting wetland change everywhere in the world. In this paper, three digital change detection algorithms are applied to 2005 and 2010 Landsat Thematic Mapper (TM) images of a portion of Northeast China to detect wetland dynamics between the two dates. The change vector analysis (CVA) method uses 6 bands of the TM images to detect wetland dynamics. The tasseled cap transformation is used to create three change images (change in brightness, greenness, and wetness). A new method, the Comprehensive Change Detection Method (CCDM), is introduced to detect forest dynamic change. The CCDM integrates spectral-based change detection algorithms, including a Multi-Index Integrated Change Analysis (MIICA) model and a novel change model called Zone, which extracts change information from two Landsat image pairs. The MIICA model is the core module of the change detection strategy and uses four spectral indices (differenced Normalized Burn Ratio (dNBR), differenced Normalized Difference Vegetation Index (dNDVI), the Change Vector (CV) and a new index called the Relative Change Vector Maximum (RCVMAX)) to obtain the changes that occurred between the two image dates. The CCDM also includes a knowledge-based system, which uses critical information on historical and current land cover conditions and trends and the likelihood of land cover change, to combine the changes from MIICA and Zone. Related tests proved that the CCDM method is simple, easy to operate, widely applicable, and capable of capturing a variety of natural and anthropogenic disturbances potentially associated with land cover changes on
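    The CVA step reduces to a per-pixel spectral difference magnitude followed by a threshold; a minimal sketch (with the threshold chosen empirically or, e.g., by Otsu's method) is:

```python
import numpy as np

def change_vector_analysis(img_t1, img_t2, threshold):
    """Change vector analysis over multiband images shaped (bands, rows, cols).

    The change vector is the per-pixel spectral difference between dates;
    its magnitude measures change intensity and is thresholded into a
    binary change mask.
    """
    diff = img_t2.astype(float) - img_t1.astype(float)
    magnitude = np.sqrt((diff ** 2).sum(axis=0))
    return magnitude, magnitude > threshold
```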

  7. EEG seizure detection and prediction algorithms: a survey

    NASA Astrophysics Data System (ADS)

    Alotaiby, Turkey N.; Alshebeili, Saleh A.; Alshawi, Tariq; Ahmad, Ishtiaq; Abd El-Samie, Fathi E.

    2014-12-01

    Epilepsy patients experience challenges in daily life due to precautions they have to take in order to cope with this condition. When a seizure occurs, it might cause injuries or endanger the life of the patients or others, especially when they are using heavy machinery, e.g., driving cars. Studies of epilepsy often rely on electroencephalogram (EEG) signals in order to analyze the behavior of the brain during seizures. Locating the seizure period in EEG recordings manually is difficult and time consuming; one often needs to skim through tens or even hundreds of hours of EEG recordings. Therefore, automatic detection of such activity is of great importance. Another potential usage of EEG signal analysis is in the prediction of epileptic activities before they occur, as this will enable the patients (and caregivers) to take appropriate precautions. In this paper, we first present an overview of the seizure detection and prediction problem and provide insights on the challenges in this area. Second, we cover some of the state-of-the-art seizure detection and prediction algorithms and provide a comparison between these algorithms. Finally, we conclude with future research directions and open problems in this topic.

  8. Improved Bat algorithm for the detection of myocardial infarction.

    PubMed

    Kora, Padmavathi; Kalva, Sri Ramakrishna

    2015-01-01

    Medical practitioners study the electrical activity of the human heart in order to detect heart diseases from the electrocardiogram (ECG) of heart patients. A myocardial infarction (MI), or heart attack, is a heart disease that occurs when there is a block (blood clot) in the pathway of one or more coronary blood vessels (arteries) that supply blood to the heart muscle. The abnormalities in the heart can be identified by changes in the ECG signal. The first step in the detection of MI is the preprocessing of ECGs, which removes noise using filters. Feature extraction is the next key process in detecting the changes in the ECG signals. This paper presents a method for extracting key features from each cardiac beat using the Improved Bat algorithm. Using this algorithm, the best features are extracted, and these best (reduced) features are then applied to the input of a neural network classifier. It has been observed that the performance of the classifier improves with the help of the optimized features. PMID:26558169

  9. Road Detection by Neural and Genetic Algorithm in Urban Environment

    NASA Astrophysics Data System (ADS)

    Barsi, A.

    2012-07-01

    In the urban object detection challenge organized by ISPRS WG III/4, high geometric and radiometric resolution aerial images of Vaihingen/Stuttgart, Germany, were distributed. The acquired data set contains optical false color and near infrared images and airborne laser scanning data. The presented research focused exclusively on the optical imagery, so the elevation information was ignored. The road detection procedure is built up of two main phases: a segmentation done by neural networks and a compilation made by genetic algorithms. The applied neural networks were support vector machines with a radial basis kernel function and self-organizing maps with hexagonal network topology and a Euclidean distance function for neighborhood management. The neural techniques were compared with a hyperbox classifier, known from statistical image classification practice. The compilation of the segmentation is realized by a novel application of the common genetic algorithm and by the differential evolution technique. The genes were designed to detect road elements by evaluating a special binary fitness function. The results have proven that the evolutionary techniques can automatically find major road segments.

  10. Efficient implementations of hyperspectral chemical-detection algorithms

    NASA Astrophysics Data System (ADS)

    Brett, Cory J. C.; DiPietro, Robert S.; Manolakis, Dimitris G.; Ingle, Vinay K.

    2013-10-01

    Many military and civilian applications depend on the ability to remotely sense chemical clouds using hyperspectral imagers, from detecting small but lethal concentrations of chemical warfare agents to mapping plumes in the aftermath of natural disasters. Real-time operation is critical in these applications but becomes difficult to achieve as the number of chemicals we search for increases. In this paper, we present efficient CPU and GPU implementations of matched-filter based algorithms so that real-time operation can be maintained with higher chemical-signature counts. The optimized C++ implementations show between 3x and 9x speedup over vectorized MATLAB implementations.
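    A minimal sketch of the matched-filter statistic such implementations optimise, assuming (for simplicity) a diagonal background covariance; the function and argument names are illustrative, and real implementations operate on full covariance matrices and large data cubes.

```python
import math

def matched_filter_scores(pixels, signature, bg_var):
    """Score each pixel spectrum against a known chemical signature.
    A diagonal background covariance keeps the sketch simple: whiten the
    signature by the per-band variance, then take normalised dot products."""
    w = [s / v for s, v in zip(signature, bg_var)]      # whitened signature
    norm = math.sqrt(sum(wi * si for wi, si in zip(w, signature)))
    return [sum(wi * xi for wi, xi in zip(w, x)) / norm for x in pixels]
```

    Per-pixel scores can then be thresholded to declare chemical presence; with many signatures, this inner product is the hot loop that the CPU/GPU optimisations target.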

  11. NASA airborne radar wind shear detection algorithm and the detection of wet microbursts in the vicinity of Orlando, Florida

    NASA Technical Reports Server (NTRS)

    Britt, Charles L.; Bracalente, Emedio M.

    1992-01-01

    The algorithms used in the NASA experimental wind shear radar system for detection, characterization, and determination of wind shear hazard are discussed. The performance of the algorithms in the detection of wet microbursts near Orlando is presented. Various suggested algorithms that are currently being evaluated using the flight test results from Denver and Orlando are reviewed.

  12. Geolocation Assessment Algorithm for CALIPSO Using Coastline Detection

    NASA Technical Reports Server (NTRS)

    Currey, J. Chris

    2002-01-01

    Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) is a joint satellite mission between NASA and the French space agency CNES. The investigation will gather long-term, global cloud and aerosol optical and physical properties to improve climate models. The CALIPSO spacecraft is scheduled to launch in 2004 into a 98.2° inclination, 705 km circular orbit approximately 3 minutes behind the Aqua spacecraft. The payload consists of a two-wavelength polarization-sensitive lidar and two passive imagers operating in the visible (0.645 μm) and infrared (8.7 - 12.0 μm) spectral regions. The imagers are nadir viewing and co-aligned with the lidar. Earth viewing measurements are geolocated to the Earth-fixed coordinate system using satellite ephemeris, Earth rotation and geoid, and instrument pointing data. The coastline detection algorithm will assess the accuracy of the CALIPSO geolocation process by analyzing Wide Field Camera (WFC) visible ocean-land boundaries. Processing space-time coincident MODIS and WFC scenes with the coastline algorithm will help verify the co-registration requirement with Moderate Resolution Imaging Spectroradiometer (MODIS) data. This paper quantifies the accuracy of the coastline geolocation assessment algorithm.

  13. Detecting disease-causing genes by LASSO-Patternsearch algorithm

    PubMed Central

    Shi, Weiliang; Lee, Kristine E; Wahba, Grace

    2007-01-01

    The Genetic Analysis Workshop 15 Problem 3 simulated rheumatoid arthritis data set provided 100 replicates of simulated single-nucleotide polymorphism (SNP) and covariate data sets for 1500 families with an affected sib pair and 2000 controls, modeled after real rheumatoid arthritis data. The data generation model included nine unobserved trait loci, most of which have one or more of the generated SNPs associated with them. These data sets provide an ideal experimental test bed for evaluating new and old algorithms for selecting SNPs and covariates that can separate cases from controls, because the cases and controls are known as well as the identities of the trait loci. LASSO-Patternsearch is a new multi-step algorithm with a LASSO-type penalized likelihood method at its core specifically designed to detect and model interactions between important predictor variables. In this article the original LASSO-Patternsearch algorithm is modified to handle the large number of SNPs plus covariates. We start with a screen step within the framework of parametric logistic regression. The patterns that survived the screen step were further selected by penalized logistic regression with the LASSO penalty. Finally, a parametric logistic regression model was built on the patterns that survived the LASSO step. In our analysis of Genetic Analysis Workshop 15 Problem 3 data we identified most of the associated SNPs and relevant covariates. Upon using the model as a classifier, very competitive error rates were obtained. PMID:18466561

  14. Application of multistatic inversion algorithms to landmine detection

    NASA Astrophysics Data System (ADS)

    Gürbüz, Ali Cafer; Counts, Tegan; Kim, Kangwook; McClellan, James H.; Scott, Waymond R., Jr.

    2006-05-01

    Multi-static ground-penetrating radar (GPR) uses an array of antennas to conduct a number of bistatic operations simultaneously. The multi-static GPR is used to obtain more information on the target of interest using angular diversity. An entirely computer controlled, multi-static GPR consisting of a linear array of six resistively-loaded vee dipoles (RVDs), a network analyzer, and a microwave switch matrix was developed to investigate the potential of multi-static inversion algorithms. The performance of a multi-static inversion algorithm is evaluated for targets buried in clean sand, targets buried under the ground covered by rocks, and targets held above the ground (in the air) using styrofoam supports. A synthetic-aperture, multi-static, time-domain GPR imaging algorithm is extended from conventional mono-static back-projection techniques and used to process the data. Good results are obtained for the clean surface and air targets; however, for targets buried under rocks, only the deeply buried targets could be accurately detected and located.
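    The back-projection extension described above can be sketched as delay-and-sum imaging: every image pixel accumulates each transmit/receive trace at the sample matching the two-way travel time. This is a hedged sketch, not the authors' code; the geometry, sampling and names are assumptions.

```python
import math

def backproject(traces, tx_pos, rx_pos, pixels, c, dt):
    """Multi-static delay-and-sum imaging: each pixel accumulates every
    TX/RX trace sampled at the TX -> pixel -> RX travel-time delay.
    c is the propagation speed, dt the trace sample interval."""
    image = []
    for p in pixels:
        acc = 0.0
        for trace, tx, rx in zip(traces, tx_pos, rx_pos):
            delay = (math.dist(tx, p) + math.dist(p, rx)) / c
            k = int(round(delay / dt))
            if 0 <= k < len(trace):
                acc += trace[k]
        image.append(acc)
    return image
```

    With a single coincident TX/RX pair this reduces to conventional mono-static back-projection; the multi-static case simply sums over all antenna pairings, which is the angular-diversity gain the abstract describes.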

  15. Algorithm of semicircular laser spot detection based on circle fitting

    NASA Astrophysics Data System (ADS)

    Wang, Zhengzhou; Xu, Ruihua; Hu, Bingliang

    2013-07-01

    In order to obtain the exact center of an asymmetrical, semicircular-aperture laser spot, a laser spot detection method based on circle fitting is proposed in this paper. The laser spot image is thresholded using a gray-morphology algorithm; rough edges of the laser spot are detected in both the vertical and horizontal directions; short arcs and isolated edge points are removed by contour growing; and the best circle contour is obtained by iterative fitting, with the final standard circle fitted at the end. The experimental results show that the precision of the method is clearly better than that of the gravity (centroid) method used in traditional large-laser automatic alignment systems. The accuracy with which the method locates the center of an asymmetrical, semicircular laser spot meets the requirements of the system.
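    The circle-fitting core of such a method can be sketched with the classical Kasa algebraic fit, which turns the circle equation (x-a)² + (y-b)² = r² into a linear least-squares problem. This pure-Python sketch stands in for the paper's iterative fitting and is not the authors' implementation.

```python
def fit_circle(points):
    """Kasa algebraic circle fit: solve the linear least-squares system
    2*a*x + 2*b*y + c = x^2 + y^2 for centre (a, b), with c = r^2 - a^2 - b^2."""
    # Build normal equations M z = v for z = (a, b, c).
    M = [[0.0] * 3 for _ in range(3)]
    v = [0.0] * 3
    for x, y in points:
        row = (2 * x, 2 * y, 1.0)
        d = x * x + y * y
        for i in range(3):
            v[i] += row[i] * d
            for j in range(3):
                M[i][j] += row[i] * row[j]
    # Gaussian elimination with partial pivoting, then back-substitution.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        v[i], v[p] = v[p], v[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for j in range(i, 3):
                M[r][j] -= f * M[i][j]
            v[r] -= f * v[i]
    z = [0.0] * 3
    for i in (2, 1, 0):
        z[i] = (v[i] - sum(M[i][j] * z[j] for j in range(i + 1, 3))) / M[i][i]
    a, b, c = z
    return (a, b), (c + a * a + b * b) ** 0.5
```

    Even when the edge points cover only a semicircular arc, the same linear system still recovers the full circle's centre, which is why circle fitting outperforms a centroid (gravity) estimate on truncated spots.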

  16. Vibration-based damage detection algorithm for WTT structures

    NASA Astrophysics Data System (ADS)

    Nguyen, Tuan-Cuong; Kim, Tae-Hwan; Choi, Sang-Hoon; Ryu, Joo-Young; Kim, Jeong-Tae

    2016-04-01

    In this paper, the integrity of a wind turbine tower (WTT) structure is nondestructively estimated using its vibration responses. Firstly, a damage detection algorithm using changes in modal characteristics to predict damage locations and severities in structures is outlined. Secondly, a finite element (FE) model based on a real WTT structure is established using the commercial software Midas FEA. Thirdly, forced vibration tests are performed on the FE model of the WTT structure under various damage scenarios. The changes in modal parameters such as natural frequencies and mode shapes are examined for damage monitoring in the structure. Finally, the feasibility of the vibration-based damage detection method is numerically verified by predicting the locations and severities of damage in the FE model of the WTT structure.

  17. A new detection algorithm for microcalcification clusters in mammographic screening

    NASA Astrophysics Data System (ADS)

    Xie, Weiying; Ma, Yide; Li, Yunsong

    2015-05-01

    A novel approach for microcalcification cluster detection is proposed. We first briefly analyze mammographic images with microcalcification lesions to confirm that these lesions have much greater gray values than normal regions. After summarizing the specific features of microcalcification clusters in mammographic screening, we focus on the preprocessing step, which includes eliminating the background, enhancing the image, and eliminating the pectoral muscle. In detail, the Chan-Vese model is used to eliminate the background. We then apply a combination of morphological and edge detection methods; after an AND operation and Sobel filtering, a Hough transform is used, and the result performs well at eliminating the pectoral muscle, whose gray level is close to that of microcalcifications. Additionally, the enhancement step is achieved by morphology. We put effort into mammographic image preprocessing to achieve lower computational complexity. As is well known, robust mammogram analysis is difficult due to the low contrast between normal and lesion tissues and the considerable noise in such images. After this preprocessing algorithm, a method based on blob detection is applied to detect microcalcification clusters according to their specific features. The proposed algorithm employs the Laplace operator to improve the Difference of Gaussians (DoG) function for low-contrast images. A preliminary evaluation of the proposed method is performed on a well-known public database, namely MIAS, rather than on synthetic images. The comparison experiments and Cohen's kappa coefficients all demonstrate that the proposed approach can potentially obtain better microcalcification cluster detection results in terms of accuracy, sensitivity and specificity.
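    The blob-detection idea can be illustrated in one dimension: a Difference-of-Gaussians (DoG) response peaks at bright compact spots such as microcalcifications. The 1-D setting, kernel sizes and threshold are simplifying assumptions; the paper works on 2-D mammograms with a Laplacian refinement.

```python
import math

def smooth(sig, sigma):
    """Gaussian smoothing of a 1-D signal with replicated (clamped) edges."""
    rad = int(3 * sigma)
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-rad, rad + 1)]
    total = sum(k)
    k = [v / total for v in k]
    out = []
    for n in range(len(sig)):
        acc = 0.0
        for j, kv in enumerate(k):
            m = min(max(n + j - rad, 0), len(sig) - 1)   # edge clamp
            acc += kv * sig[m]
        out.append(acc)
    return out

def dog_blobs(sig, s1=1.0, s2=2.0, thresh=0.1):
    """Difference-of-Gaussians blob detector on a 1-D intensity profile:
    bright compact spots give strong positive DoG local maxima."""
    a, b = smooth(sig, s1), smooth(sig, s2)
    d = [x - y for x, y in zip(a, b)]
    return [i for i in range(1, len(d) - 1)
            if d[i] > thresh and d[i] >= d[i - 1] and d[i] >= d[i + 1]]
```

    In 2-D the same band-pass principle applies per pixel, and the abstract's Laplacian-based refinement sharpens the response for low-contrast lesions.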

  18. Data detection algorithms for multiplexed quantum dot encoding.

    PubMed

    Goss, Kelly C; Messier, Geoff G; Potter, Mike E

    2012-02-27

    A group of quantum dots can be designed to have a unique spectral emission by varying the size of the quantum dots (wavelength) and the number of quantum dots (intensity). This technique has been previously proposed for biological tags and object identification. The potential of this system lies in the ability to have a large number of distinguishable wavelengths and intensity levels. This paper presents a communications system model for multiplexed quantum dots (MxQDs) including the interference between neighbouring QD colours and detector noise. An analytical model of the signal-to-noise ratio of a Charge-Coupled Device (CCD) spectrometer is presented and confirmed with experimental results. We then apply a communications system perspective and propose data detection algorithms that increase the readability of quantum dot tags. It is demonstrated that multiplexed quantum dot barcodes can be read with 99.7% accuracy using the proposed data detection algorithms in a system with 6 colours and 6 intensity values, resulting in 46,655 unique spectral codes. PMID:22418382

  19. Nonlinear Algorithms for Channel Equalization and Map Symbol Detection.

    NASA Astrophysics Data System (ADS)

    Giridhar, K.

    The transfer of information through a communication medium invariably results in various kinds of distortion to the transmitted signal. In this dissertation, a feed -forward neural network-based equalizer, and a family of maximum a posteriori (MAP) symbol detectors are proposed for signal recovery in the presence of intersymbol interference (ISI) and additive white Gaussian noise. The proposed neural network-based equalizer employs a novel bit-mapping strategy to handle multilevel data signals in an equivalent bipolar representation. It uses a training procedure to learn the channel characteristics, and at the end of training, the multilevel symbols are recovered from the corresponding inverse bit-mapping. When the channel characteristics are unknown and no training sequences are available, blind estimation of the channel (or its inverse) and simultaneous data recovery is required. Convergence properties of several existing Bussgang-type blind equalization algorithms are studied through computer simulations, and a unique gain independent approach is used to obtain a fair comparison of their rates of convergence. Although simple to implement, the slow convergence of these Bussgang-type blind equalizers make them unsuitable for many high data-rate applications. Rapidly converging blind algorithms based on the principle of MAP symbol-by -symbol detection are proposed, which adaptively estimate the channel impulse response (CIR) and simultaneously decode the received data sequence. Assuming a linear and Gaussian measurement model, the near-optimal blind MAP symbol detector (MAPSD) consists of a parallel bank of conditional Kalman channel estimators, where the conditioning is done on each possible data subsequence that can convolve with the CIR. This algorithm is also extended to the recovery of convolutionally encoded waveforms in the presence of ISI. Since the complexity of the MAPSD algorithm increases exponentially with the length of the assumed CIR, a suboptimal

  20. Identification of Genomic Aberrations in Cancer Subclones from Heterogeneous Tumor Samples.

    PubMed

    Xia, Hong; Liu, Yuanning; Wang, Minghui; Li, Ao

    2015-01-01

    Tumor samples are usually heterogeneous, containing an admixture of more than one kind of tumor subclone. Studies of genomic aberrations from heterogeneous tumor data are hindered by the mixed signal of tumor subclone cells. Most existing algorithms cannot distinguish the contributions of different subclones from the measured single nucleotide polymorphism (SNP) array signals, which may cause erroneous estimation of genomic aberrations. Here, we introduce a computational method, Cancer Heterogeneity Analysis from SNP-array Experiments (CHASE), to automatically detect subclone proportions and genomic aberrations from heterogeneous tumor samples. Our method is based on a hidden Markov model (HMM) and incorporates an expectation-maximization (EM) algorithm to build a statistical model of the mixed signal of multiple tumor subclones. We tested the proposed approach on simulated datasets and two real datasets, and the results show that the proposed method can efficiently estimate tumor subclone proportions and recover the genomic aberrations. PMID:26357278

  1. Bio Inspired Swarm Algorithm for Tumor Detection in Digital Mammogram

    NASA Astrophysics Data System (ADS)

    Dheeba, J.; Selvi, Tamil

    Microcalcification clusters in mammograms are a significant early sign of breast cancer. Individual clusters are difficult to detect, and hence an automatic computer-aided mechanism will help the radiologist detect microcalcification clusters in an easy and efficient way. This paper presents a new classification approach for the detection of microcalcifications in digital mammograms using a particle swarm optimization (PSO) based clustering technique. The fuzzy C-means clustering technique, well established for clustering data sets, is used in combination with PSO. We adopt particle swarm optimization to search for the cluster centers in an arbitrary data set automatically. PSO searches for the best solution using the probabilistic choices of the social-only and cognition-only models. This method is quite simple and effective, and it can avoid local minima. The proposed classification approach is applied to a database of 322 dense mammographic images originating from the MIAS database. Results show that the proposed PSO-FCM approach gives better detection performance compared to conventional approaches.
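    A minimal sketch of the PSO idea behind the PSO-FCM approach: particles search for a cluster centre minimising within-cluster squared distance, mixing a cognition term (per-particle best) and a social term (swarm best). This is a 1-D, single-centre toy with illustrative constants, not the paper's algorithm.

```python
import random

def pso_cluster_center(data, n_particles=20, iters=60, seed=0):
    """Global-best PSO searching one cluster centre that minimises the
    summed squared distance to the data (1-D sketch of the PSO-FCM idea)."""
    rng = random.Random(seed)
    cost = lambda c: sum((x - c) ** 2 for x in data)
    lo, hi = min(data), max(data)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    best = pos[:]                       # per-particle best (cognition term)
    gbest = min(pos, key=cost)          # swarm best (social term)
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.7 * vel[i] + 1.5 * r1 * (best[i] - pos[i])
                                   + 1.5 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            if cost(pos[i]) < cost(best[i]):
                best[i] = pos[i]
        gbest = min(best, key=cost)
    return gbest
```

    In the full method the particles encode all fuzzy C-means centres at once, and the FCM objective replaces the single-centre cost used here.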

  2. Particle filter-based track before detect algorithms

    NASA Astrophysics Data System (ADS)

    Boers, Yvo; Driessen, Hans

    2003-12-01

    In this paper we give a general system setup that allows the formulation of a wide range of Track Before Detect (TBD) problems. A general basic particle filter algorithm for this system is also provided. TBD is a technique in which tracks are produced directly on the basis of raw (radar) measurements, e.g. power or IQ data, without intermediate processing and decision making. The advantage over classical tracking is that the full information is integrated over time; this leads to better detection and tracking performance, especially for weak targets. In this paper we look at the filtering and detection aspects of TBD. We formulate a detection result that allows the user to implement any optimal detector in terms of the weights of a running particle filter. We give a theoretical as well as a numerical (experimental) justification for this. Furthermore, we show that the TBD setup chosen in this paper allows a straightforward extension to the multi-target case. This easy extension is also due to the fact that the solution is implemented by means of a particle filter.
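    A basic particle filter for TBD can be sketched in one dimension: particles are candidate target positions weighted directly by the raw power in the cell they fall in, so evidence for a weak target accumulates over frames instead of being thresholded frame by frame. The likelihood (exponential of raw power) and all constants are illustrative assumptions, not the paper's model.

```python
import math
import random

def tbd_estimate(frames, n=500, q=0.1, seed=0):
    """Minimal 1-D bootstrap track-before-detect filter: predict particle
    positions with a random walk, weight by raw per-cell power, resample."""
    rng = random.Random(seed)
    cells = len(frames[0])
    parts = [rng.uniform(0, cells) for _ in range(n)]
    for frame in frames:
        parts = [p + rng.gauss(0, q) for p in parts]            # predict
        w = [math.exp(frame[max(0, min(cells - 1, int(p)))]) for p in parts]
        total = sum(w)
        parts = rng.choices(parts, weights=[x / total for x in w], k=n)
    return sum(parts) / n                                       # posterior mean
```

    The running weights themselves carry the detection information: as the paper notes, an optimal detector can be expressed in terms of these particle weights rather than a per-frame threshold.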

  3. Particle filter-based track before detect algorithms

    NASA Astrophysics Data System (ADS)

    Boers, Yvo; Driessen, Hans

    2004-01-01

    In this paper we give a general system setup that allows the formulation of a wide range of Track Before Detect (TBD) problems. A general basic particle filter algorithm for this system is also provided. TBD is a technique in which tracks are produced directly on the basis of raw (radar) measurements, e.g. power or IQ data, without intermediate processing and decision making. The advantage over classical tracking is that the full information is integrated over time; this leads to better detection and tracking performance, especially for weak targets. In this paper we look at the filtering and detection aspects of TBD. We formulate a detection result that allows the user to implement any optimal detector in terms of the weights of a running particle filter. We give a theoretical as well as a numerical (experimental) justification for this. Furthermore, we show that the TBD setup chosen in this paper allows a straightforward extension to the multi-target case. This easy extension is also due to the fact that the solution is implemented by means of a particle filter.

  4. Dynamic multiple thresholding breast boundary detection algorithm for mammograms

    SciTech Connect

    Wu, Yi-Ta; Zhou Chuan; Chan, Heang-Ping; Paramagul, Chintana; Hadjiiski, Lubomir M.; Daly, Caroline Plowden; Douglas, Julie A.; Zhang Yiheng; Sahiner, Berkman; Shi Jiazheng; Wei Jun

    2010-01-15

    Purpose: Automated detection of breast boundary is one of the fundamental steps for computer-aided analysis of mammograms. In this study, the authors developed a new dynamic multiple thresholding based breast boundary (MTBB) detection method for digitized mammograms. Methods: A large data set of 716 screen-film mammograms (442 CC view and 274 MLO view) obtained from consecutive cases of an Institutional Review Board approved project were used. An experienced breast radiologist manually traced the breast boundary on each digitized image using a graphical interface to provide a reference standard. The initial breast boundary (MTBB-Initial) was obtained by dynamically adapting the threshold to the gray level range in local regions of the breast periphery. The initial breast boundary was then refined by using gradient information from horizontal and vertical Sobel filtering to obtain the final breast boundary (MTBB-Final). The accuracy of the breast boundary detection algorithm was evaluated by comparison with the reference standard using three performance metrics: The Hausdorff distance (HDist), the average minimum Euclidean distance (AMinDist), and the area overlap measure (AOM). Results: In comparison with the authors' previously developed gradient-based breast boundary (GBB) algorithm, it was found that 68%, 85%, and 94% of images had HDist errors less than 6 pixels (4.8 mm) for GBB, MTBB-Initial, and MTBB-Final, respectively. 89%, 90%, and 96% of images had AMinDist errors less than 1.5 pixels (1.2 mm) for GBB, MTBB-Initial, and MTBB-Final, respectively. 96%, 98%, and 99% of images had AOM values larger than 0.9 for GBB, MTBB-Initial, and MTBB-Final, respectively. The improvement by the MTBB-Final method was statistically significant for all the evaluation measures by the Wilcoxon signed rank test (p<0.0001). Conclusions: The MTBB approach that combined dynamic multiple thresholding and gradient information provided better performance than the breast boundary
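    The dynamic-thresholding idea can be caricatured per image row: choose a threshold from the local gray-level range and take the outermost above-threshold column as the boundary. This is only a loose sketch in the spirit of the MTBB-Initial step; the fraction, the row-wise scan and the names are assumptions, and the paper refines its boundary with Sobel gradient information.

```python
def boundary_by_local_threshold(img, frac=0.2):
    """Per-row dynamic threshold: mark, for each image row, the last column
    whose gray level exceeds a fraction of that row's local gray range.
    Returns -1 for rows with no above-threshold pixel."""
    edge = []
    for row in img:
        lo, hi = min(row), max(row)
        t = lo + frac * (hi - lo)
        cols = [j for j, v in enumerate(row) if v > t]
        edge.append(cols[-1] if cols else -1)
    return edge
```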

  5. Algorithms for lineaments detection in processing of multispectral images

    NASA Astrophysics Data System (ADS)

    Borisova, D.; Jelev, G.; Atanassov, V.; Koprinkova-Hristova, Petia; Alexiev, K.

    2014-10-01

    Satellite remote sensing is a universal tool for investigating different areas of Earth and environmental sciences. Advances in the implementation capabilities of optoelectronic devices, long-term-tested in the laboratory and in the field and mounted on board remote sensing platforms, further improve the capability of instruments to acquire information about the Earth and its resources at global, regional and local scales. With the arrival of new high spatial and spectral resolution satellite and aircraft imagery, new applications for large-scale mapping and monitoring become possible. Integration with Geographic Information Systems (GIS) allows synergistic processing of multi-source spatial and spectral data. Here we present the results of joint project DFNI I01/8, funded by the Bulgarian Science Fund, focused on algorithms for preprocessing and processing spectral data using correction methods together with visual and automatic interpretation. The objects of this study are lineaments: line features on the Earth's surface that are a sign of geological structures. Geological lineaments usually appear in multispectral images as lines, edges or linear shapes resulting from color variations of surface structures. The basic geometry of a line is its orientation, length and curvature. The detection of geological lineaments is an important operation in exploration for mineral deposits, investigation of active fault patterns, prospecting for water resources, protecting people, etc. In this study an integrated approach to detecting lineaments is applied. It combines the visual interpretation of various geological and geographical indications in the multispectral satellite images, the application of spatial analysis in GIS and the automatic processing of the multispectral images by Canny

  6. Cable Damage Detection System and Algorithms Using Time Domain Reflectometry

    SciTech Connect

    Clark, G A; Robbins, C L; Wade, K A; Souza, P R

    2009-03-24

    This report describes the hardware system and the set of algorithms we have developed for detecting damage in cables for the Advanced Development and Process Technologies (ADAPT) Program. This program is part of the W80 Life Extension Program (LEP). The system could be generalized for application to other systems in the future. Critical cables can undergo various types of damage (e.g. short circuits, open circuits, punctures, compression) that manifest as changes in the dielectric/impedance properties of the cables. For our specific problem, only one end of the cable is accessible, and no exemplars of actual damage are available. This work addresses the detection of dielectric/impedance anomalies in transient time domain reflectometry (TDR) measurements on the cables. The approach is to interrogate the cable using time domain reflectometry (TDR) techniques, in which a known pulse is inserted into the cable, and reflections from the cable are measured. The key operating principle is that any important cable damage will manifest itself as an electrical impedance discontinuity that can be measured in the TDR response signal. Machine learning classification algorithms are effectively eliminated from consideration, because only a small number of cables is available for testing, so a sufficient sample size is not attainable. Nonetheless, a key requirement is to achieve a very high probability of detection and a very low probability of false alarm. The approach is to compare TDR signals from possibly damaged cables to signals or an empirical model derived from reference cables that are known to be undamaged. This requires that the TDR signals be reasonably repeatable from test to test on the same cable, and from cable to cable. Empirical studies show that the repeatability issue is the 'long pole in the tent' for damage detection, because it has been difficult to achieve reasonable repeatability. This one factor dominated the project.
The two-step model-based approach is
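    The comparison-to-reference principle reduces to a simple residual test once a repeatability spread has been measured; the sketch below flags samples deviating from an undamaged-cable reference by more than k standard deviations. The names and the fixed per-sample sigma are assumptions; the report's two-step model-based approach is more elaborate.

```python
def tdr_anomaly(trace, reference, sigma, k=4.0):
    """Compare a TDR response to an undamaged-cable reference and flag
    sample indices whose residual exceeds k times the test-to-test
    repeatability spread sigma (an impedance-discontinuity signature)."""
    return [i for i, (t, r) in enumerate(zip(trace, reference))
            if abs(t - r) > k * sigma]
```

    The flagged sample index maps back to a location along the cable via the pulse round-trip time, which is what makes TDR attractive when only one cable end is accessible.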

  7. Close correlation of copy number aberrations detected by next-generation sequencing with results from routine cytogenetics in acute myeloid leukemia.

    PubMed

    Vosberg, Sebastian; Herold, Tobias; Hartmann, Luise; Neumann, Martin; Opatz, Sabrina; Metzeler, Klaus H; Schneider, Stephanie; Graf, Alexander; Krebs, Stefan; Blum, Helmut; Baldus, Claudia D; Hiddemann, Wolfgang; Spiekermann, Karsten; Bohlander, Stefan K; Mansmann, Ulrich; Greif, Philipp A

    2016-07-01

    High throughput sequencing approaches, including the analysis of exomes or gene panels, are widely used and established to detect tumor-specific sequence variants such as point mutations or small insertions/deletions. Beyond single nucleotide resolution, sequencing data also contain information on changes in sequence coverage between samples and thus allow the detection of somatic copy number alterations (CNAs) representing gain or loss of genomic material in tumor cells arising from aneuploidy, amplifications, or deletions. To test the feasibility of CNA detection in sequencing data we analyzed the exomes of 25 paired leukemia/remission samples from acute myeloid leukemia (AML) patients with well-defined chromosomal aberrations, detected by conventional chromosomal analysis and/or molecular cytogenetics assays. Thereby, we were able to confirm chromosomal aberrations including trisomies, monosomies, and partial chromosomal deletions in 20 out of 25 samples. Comparison of CNA detection using exome, custom gene panel, and SNP array analysis showed equivalent results in five patients with variable clone size. Gene panel analysis of AML samples without matched germline control samples resulted in confirmation of cytogenetic findings in 18 out of 22 cases. In all cases with discordant findings, small clone size (<33%) was limiting for CNA detection. We detected CNAs consistent with cytogenetics in 83% of AML samples including highly correlated clone size estimation (R = 0.85), while six out of 65 cytogenetically normal AML samples exhibited CNAs apparently missed by routine cytogenetics. Overall, our results show that high throughput targeted sequencing data can be reliably used to detect copy number changes in the dominant AML clone. © 2016 Wiley Periodicals, Inc. PMID:27015608
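    The coverage-based CNA principle can be sketched as a per-region log2 ratio between tumor and matched remission coverage, thresholded into gain/neutral/loss calls. The thresholds and names are illustrative assumptions; real pipelines add normalisation, segmentation and clone-size modelling.

```python
import math

def call_cnas(tumor_cov, normal_cov, gain=0.3, loss=-0.3):
    """Call a copy-number state per region from matched tumor/remission
    read coverage via the log2 ratio (thresholds are illustrative)."""
    calls = []
    for t, n in zip(tumor_cov, normal_cov):
        r = math.log2(t / n)
        calls.append("gain" if r >= gain else "loss" if r <= loss else "neutral")
    return calls
```

    The magnitude of the log2 ratio also shrinks with the fraction of tumor cells carrying the event, which is why small clone size (<33% in the abstract) limits detection.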

  8. A Novel Algorithm for Cycle Slip Detection and Repair

    NASA Astrophysics Data System (ADS)

    Sezen, U.; Arikan, F.

    2012-04-01

    Accurate and reliable estimation of ionospheric parameters is very important for the correct functioning of communication, navigation and positioning satellite systems. In recent years, dual-frequency GPS receivers have been widely used for estimation of Total Electron Content (TEC), which is defined as the line integral of the electron density along a ray path. Since both electron density and TEC are functions of solar, geomagnetic, gravitational and seismic activity, any disturbance along the ray path can be detected using GPS receiver observables. It is observed that, with the development of recent sophisticated receivers, disruptions due to the receiver antenna, hardware or outside obstructions are minimized. Most of the observed sudden disturbances are signal phase lock losses due to the ionosphere. These sudden phase shifts are named cycle slips and, if not corrected, they may lead to positioning errors or incorrect TEC estimates. There are many methods in the literature that deal with cycle slips and their repair, yet these methods are not mature enough to detect all kinds of cycle slips. Most algorithms require double differencing, and/or complicated Kalman filters, wavelet transforms, neural network models, and integration of external INS systems. In this study, we propose a fast and efficient algorithm for identifying cycle slips on individual observables, classifying them for future investigation and finally repairing them for more accurate and reliable TEC estimates. The algorithm traces the pseudorange and phase observables and computes the geometry-free combinations L4 and P4. Sudden disturbances on L1, L2, P1, C1 and P2 are classified and noted for further use. In most cases, the disruptions are on phase observables, yet on a few occasions a sudden disturbance is also observed on pseudorange observables. The algorithm then checks the epoch sections where P4 exists continuously. When a disruption on L1 or L2 occurs, it becomes evident on L4.
When P4
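    The detection step on the geometry-free phase combination can be sketched directly: form L4 = L1 - L2, in which the geometric range cancels, and flag epochs where its epoch-to-epoch change jumps, since the remaining ionospheric term varies slowly. The threshold and the simple first-difference test are illustrative assumptions, not the authors' full classification scheme.

```python
def detect_cycle_slips(l1, l2, thresh=0.5):
    """Flag epoch indices where the geometry-free combination L4 = L1 - L2
    jumps between consecutive epochs (slowly varying ionosphere assumed)."""
    l4 = [a - b for a, b in zip(l1, l2)]
    return [i for i in range(1, len(l4)) if abs(l4[i] - l4[i - 1]) > thresh]
```

    Repair then amounts to estimating the integer jump at each flagged epoch and subtracting it from all subsequent L4 values.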

  9. Design of infrasound-detection system via adaptive LMSTDE algorithm

    NASA Technical Reports Server (NTRS)

    Khalaf, C. S.; Stoughton, J. W.

    1984-01-01

    A proposed solution to an aviation safety problem is based on passive detection of turbulent weather phenomena through their infrasonic emission. This thesis describes a system design that is adequate for detection and bearing evaluation of infrasounds. An array of four sensors, with the appropriate hardware, is used for the detection part. Bearing evaluation is based on estimates of time delays between sensor outputs. The generalized cross correlation (GCC), as the conventional time-delay estimation (TDE) method, is first reviewed. An adaptive TDE approach, using the least mean square (LMS) algorithm, is then discussed. A comparison between the two techniques is made and the advantages of the adaptive approach are listed. The behavior of the GCC, as a Roth processor, is examined for the anticipated signals. It is shown that the Roth processor has the desired effect of sharpening the peak of the correlation function. It is also shown that the LMSTDE technique is an equivalent implementation of the Roth processor in the time domain. A LMSTDE lead-lag model, with a variable stability coefficient and a convergence criterion, is designed.
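    The adaptive LMSTDE idea can be sketched as follows: adapt an FIR filter that maps one sensor's signal onto another's; after convergence, the dominant tap index estimates the delay in samples. The step size, tap count and synthetic test signal below are illustrative assumptions, not the thesis design.

```python
import random   # used only for the synthetic example signal in testing

def lms_delay_estimate(x, y, taps=8, mu=0.01, epochs=50):
    """Adaptive LMS time-delay estimation: adapt an FIR filter so that
    w * x approximates y; the index of the dominant converged weight
    estimates the inter-sensor delay in samples."""
    w = [0.0] * taps
    for _ in range(epochs):
        for n in range(taps - 1, len(x)):
            xn = [x[n - k] for k in range(taps)]     # x[n], x[n-1], ...
            err = y[n] - sum(wk * xk for wk, xk in zip(w, xn))
            w = [wk + mu * err * xk for wk, xk in zip(w, xn)]
    return max(range(taps), key=lambda k: abs(w[k]))
```

    With bearings computed from pairwise delays across the four-sensor array, this per-pair estimate is the building block of the system described above.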

  10. New morphology independent detection and segmentation algorithm for galaxies

    NASA Astrophysics Data System (ADS)

    Akhlaghi, Mohammad; Ichikawa, Takashi

    2015-08-01

    Due to their dynamic history, galaxies can display a very rich and diverse distribution of shapes, with a large number of galaxies being classified as irregular in the local universe. As we look to higher redshifts, the fraction of such galaxies and their prominence in terms of mass apparently increase, with more massive galaxies showing irregular profiles that fade very slowly into the image noise. The accurate study of such objects therefore needs detection and photometry techniques that impose negligible constraints on the shapes and profiles of their targets. We introduce a noise-based, non-parametric technique to detect normal, irregular or clumpy galaxies and their structure in noise. "Noise-based" and "non-parametric" imply that it imposes negligible constraints on the properties of the targets and that it employs no regression analysis or fitting. The technique is based on the fact that an object's signal contiguously augments the noise inundating it. Detection is performed independently of the sky value. The detections are classified as true or false using the ambient noise as a reference, allowing a purity level of 0.86, compared to 0.27 for SExtractor, when a completeness of 1 is desired for a sample of extremely faint mock galaxy profiles. Defining the accuracy of detection as the difference between the measured sky and the known background of mock images, an order of magnitude less biased sky (and thus galaxy photometry) measurement is achieved. A non-parametric approach to defining substructure over a detected region is also introduced. NoiseChisel is our software implementation of this new technique. 
Contrary to the existing signal-based approach to detection, in its various implementations, signal related parameters such as the image point spread function or known object shapes and models are irrelevant here, which makes this algorithm very useful in astrophysical applications such as detection, photometry or morphological analysis of nebulous
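
    The core idea of thresholding relative to the ambient noise, rather than fitting a model of the object, can be illustrated with a simple sketch. This is not the NoiseChisel implementation; the sigma-clipping parameters and detection threshold are arbitrary illustrative choices:

```python
import numpy as np

def noise_based_detect(image, k=2.0, n_clip=5):
    """Flag pixels that exceed the ambient noise level. The sky level
    and noise are estimated by iterative sigma-clipping, so no functional
    form is assumed for the objects being detected."""
    data = image.ravel()
    for _ in range(n_clip):                   # iterative 3-sigma clipping
        mu, sigma = data.mean(), data.std()
        data = data[np.abs(data - mu) < 3 * sigma]
    sky, noise = data.mean(), data.std()
    return image > sky + k * noise            # boolean detection mask

rng = np.random.default_rng(1)
img = rng.normal(10.0, 1.0, (64, 64))         # sky + noise only
img[30:34, 30:34] += 8.0                      # faint embedded source
mask = noise_based_detect(img)
```

    A real implementation would additionally require detections to be spatially contiguous and would grow them into the noise, which is where the method's sensitivity to diffuse flux comes from.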

  11. Jitter Estimation Algorithms for Detection of Pathological Voices

    NASA Astrophysics Data System (ADS)

    Silva, Dárcio G.; Oliveira, Luís C.; Andrea, Mário

    2009-12-01

    This work is focused on the evaluation of different methods to estimate the amount of jitter present in speech signals. The jitter value is a measure of the irregularity of a quasiperiodic signal and is a good indicator of the presence of pathologies in the larynx, such as vocal fold nodules or a vocal fold polyp. Given the irregular nature of the speech signal, each jitter estimation algorithm relies on its own model, making a direct comparison of the results very difficult. For this reason, the evaluation of the different jitter estimation methods was targeted at their ability to detect pathological voices. Two databases were used for this evaluation: a subset of the MEEI database and a smaller database acquired in the scope of this work. The results showed significant differences in the performance of the algorithms being evaluated. Surprisingly, on the larger database the best results were achieved not with the commonly used relative jitter, measured as a percentage of the glottal cycle, but with absolute jitter values measured in microseconds. Also, the newly proposed measure for jitter, LocJitt, performed in general as well as or better than the commonly used tools MDVP and Praat.
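
    The two measures compared above, absolute jitter in microseconds and relative jitter as a percentage of the glottal cycle, have standard textbook definitions that can be computed from a sequence of pitch periods as in this sketch (these are the generic definitions, not the authors' exact estimators):

```python
import numpy as np

def absolute_jitter_us(periods_s):
    """Mean absolute difference between consecutive pitch periods, in microseconds."""
    T = np.asarray(periods_s, float)
    return float(np.mean(np.abs(np.diff(T))) * 1e6)

def relative_jitter_pct(periods_s):
    """Absolute jitter normalized by the mean period, as a percentage."""
    T = np.asarray(periods_s, float)
    return float(np.mean(np.abs(np.diff(T))) / np.mean(T) * 100.0)

# A mildly irregular ~100 Hz voice: 10 ms periods with ~0.1 ms deviations.
periods = [0.0100, 0.0101, 0.0099, 0.0100, 0.0101]
abs_jitter = absolute_jitter_us(periods)      # in microseconds
rel_jitter = relative_jitter_pct(periods)     # percent of glottal cycle
```

    The distinction matters because relative jitter divides out the speaker's fundamental frequency, while absolute jitter does not, which is exactly the difference the evaluation above probes.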

  12. On the Formal Verification of Conflict Detection Algorithms

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar; Butler, Ricky W.; Carreno, Victor A.; Dowek, Gilles

    2001-01-01

    Safety assessment of new air traffic management systems is a major issue for civil aviation authorities. Standard techniques such as testing and simulation have serious limitations in new systems that are significantly more autonomous than the older ones. In this paper, we present an innovative approach, based on formal verification, for establishing the correctness of conflict detection systems. Fundamental to our approach is the concept of trajectory, which is a continuous path in the x-y plane constrained by physical laws and operational requirements. From the model of trajectories, we extract, and formally prove, high-level properties that can serve as a framework to analyze conflict scenarios. We use the Airborne Information for Lateral Spacing (AILS) alerting algorithm as a case study of our approach.
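
    The kind of predicate such verification reasons about can be illustrated with a straight-line conflict probe in the x-y plane. This is a toy sketch, not the AILS algorithm; the look-ahead horizon and separation threshold are arbitrary:

```python
import numpy as np

def conflict(p1, v1, p2, v2, horizon, min_sep):
    """Predict whether two aircraft on straight-line trajectories come
    within min_sep of each other during the look-ahead horizon."""
    p = np.asarray(p1, float) - np.asarray(p2, float)  # relative position
    v = np.asarray(v1, float) - np.asarray(v2, float)  # relative velocity
    if v @ v == 0.0:
        t_star = 0.0                    # parallel tracks: distance is constant
    else:
        t_star = -(p @ v) / (v @ v)     # time of closest approach
    t_star = min(max(t_star, 0.0), horizon)            # clamp to horizon
    d_min = np.linalg.norm(p + t_star * v)
    return bool(d_min < min_sep)

# Head-on aircraft 10 nm apart, closing: loss of separation expected.
head_on = conflict((0, 0), (1, 0), (10, 0), (-1, 0), horizon=10, min_sep=5)
# Parallel tracks 6 nm apart: no conflict.
parallel = conflict((0, 0), (1, 0), (0, 6), (1, 0), horizon=10, min_sep=5)
```

    A formal-verification effort would prove properties of such a predicate (e.g. that an alert is always raised before separation is lost) for all trajectories satisfying the physical constraints, rather than testing individual cases as done here.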

  13. Physiology-based diagnosis algorithm for arteriovenous fistula stenosis detection.

    PubMed

    Yeih, Dong-Feng; Wang, Yuh-Shyang; Huang, Yi-Chun; Chen, Ming-Fong; Lu, Shey-Shi

    2014-01-01

    In this paper, a diagnosis algorithm for arteriovenous fistula (AVF) stenosis is developed based on auscultatory features, signal processing, and machine learning. The AVF sound signals are recorded by electronic stethoscopes at pre-defined positions before and after percutaneous transluminal angioplasty (PTA) treatment. Several new signal features of stenosis are identified and quantified, and the physiological explanations for these features are provided. Utilizing support vector machine method, an average of 90% two-fold cross-validation hit-rate can be obtained, with angiography as the gold standard. This offers a non-invasive easy-to-use diagnostic method for medical staff or even patients themselves for early detection of AVF stenosis. PMID:25571021

  14. Automatic Detection and Quantification of WBCs and RBCs Using Iterative Structured Circle Detection Algorithm

    PubMed Central

    Alomari, Yazan M.; Zaharatul Azma, Raja

    2014-01-01

    Segmentation and counting of blood cells are considered an important step that helps to extract features to diagnose specific diseases like malaria or leukemia. The manual counting of white blood cells (WBCs) and red blood cells (RBCs) in microscopic images is an extremely tedious, time-consuming, and inaccurate process. Automatic analysis will allow hematology experts to perform faster and more accurately. The proposed method uses an iterative structured circle detection algorithm for the segmentation and counting of WBCs and RBCs. The separation of WBCs from RBCs was achieved by thresholding, and specific preprocessing steps were developed for each cell type. Counting was performed for each image using the proposed method based on modified circle detection, which automatically counted the cells. Several modifications were made to the basic randomized circle detection (RCD) algorithm to solve the initialization problem, detect irregular circles (cells), select the optimal circle from the candidate circles, and determine the number of iterations in a fully dynamic way to enhance the algorithm's detection accuracy and running time. The validation method used to determine segmentation accuracy was a quantitative analysis that included Precision, Recall, and F-measure tests. The average accuracy of the proposed method was 95.3% for RBCs and 98.4% for WBCs. PMID:24803955
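
    The validation metrics named above have standard definitions in terms of detection counts; a minimal sketch:

```python
def precision_recall_f(tp, fp, fn):
    """Precision, recall and F-measure from detection counts:
    tp = correctly detected cells, fp = spurious detections,
    fn = missed cells."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# e.g. 95 cells correctly found, 5 spurious circles, 5 cells missed:
p, r, f = precision_recall_f(tp=95, fp=5, fn=5)
```

    Precision penalizes spurious circles, recall penalizes missed cells, and the F-measure is their harmonic mean, so a counting algorithm must balance both to score well.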

  15. The Shortwave (SW) Clear-Sky Detection and Fitting Algorithm: Algorithm Operational Details and Explanations

    SciTech Connect

    Long, CN; Gaustad, KL

    2004-01-31

    This document describes some specifics of the algorithm for detecting clear skies and fitting clear-sky shortwave (SW) functions described in Long and Ackerman (2000). This algorithm forms the basis of the ARM SW FLUX ANAL 1Long VAP. In the Atmospheric Radiation Measurement (ARM) case, the value added procedures (VAP) can be described as having three parts: a “front end,” a “black box,” and a “back end.” The “front end” handles the file management of the processing, what range of data files to process in the run, which configuration file to use for each site, extracting the data from the ARM NetCDF files into an ASCII format for the code to process, etc. The “back end” produces ARM-format NetCDF files of the output and other file management. The “black box” is the processing code(s), and is what is discussed in this document. Details on the “front” and “back” ends of the ARM VAP are presented elsewhere.

  16. Diffraction tomographic signal processing algorithms for tunnel detection

    SciTech Connect

    Witten, A.J.

    1993-08-01

    Signal processing algorithms have been developed for wave-based imaging using diffraction tomography. The basis for this image reconstruction procedure is the generalized projection slice theorem (GPST) which, for homogeneous waves, is an analytic relationship between the spatial Fourier transform of the acquired data and the spatial Fourier transform of the spatial profile (object function) of the object being imaged. Imaging within geophysical diffraction tomography when only homogeneous waves are considered can then be accomplished by inversion of the GPST using standard numerical techniques. In an attenuating background medium, or when eddy currents or static fields are considered, a generalized GPST can be derived that involves both real and complex spatial frequencies. In this case, direct Fourier inversion is not possible because of the presence of the complex frequencies. Although direct inversion and, hence, complete imaging is not possible for such cases, the generalized GPSTs can be used to analytically shift the location of data templates matched to specified targets, and these templates can, in turn, be correlated with acquired data to detect and estimate the location of the specified targets. Since GPSTs are used directly in the detection problem, there is no need to numerically invert the integral transform of the object function. For this reason, target detection can be accomplished in a computationally efficient manner independent of the type of measurement or background geologic conditions. A number of GPSTs are derived, and the use of GPSTs for both imaging and detection of subsurface voids is demonstrated in several recent applications.

  17. SURF IA Conflict Detection and Resolution Algorithm Evaluation

    NASA Technical Reports Server (NTRS)

    Jones, Denise R.; Chartrand, Ryan C.; Wilson, Sara R.; Commo, Sean A.; Barker, Glover D.

    2012-01-01

    The Enhanced Traffic Situational Awareness on the Airport Surface with Indications and Alerts (SURF IA) algorithm was evaluated in a fast-time batch simulation study at the National Aeronautics and Space Administration (NASA) Langley Research Center. SURF IA is designed to increase flight crew situation awareness of the runway environment and facilitate an appropriate and timely response to potential conflict situations. The purpose of the study was to evaluate the performance of the SURF IA algorithm under various runway scenarios, multiple levels of conflict detection and resolution (CD&R) system equipage, and various levels of horizontal position accuracy. This paper gives an overview of the SURF IA concept, simulation study, and results. Runway incursions are a serious aviation safety hazard. As such, the FAA is committed to reducing the severity, number, and rate of runway incursions by implementing a combination of guidance, education, outreach, training, technology, infrastructure, and risk identification and mitigation initiatives [1]. Progress has been made in reducing the number of serious incursions - from a high of 67 in Fiscal Year (FY) 2000 to 6 in FY2010. However, the rate of all incursions has risen steadily over recent years - from a rate of 12.3 incursions per million operations in FY2005 to a rate of 18.9 incursions per million operations in FY2010 [1, 2]. The National Transportation Safety Board (NTSB) also considers runway incursions to be a serious aviation safety hazard, listing runway incursion prevention as one of their most wanted transportation safety improvements [3]. The NTSB recommends that immediate warning of probable collisions/incursions be given directly to flight crews in the cockpit [4].

  18. Detection of structural damage using novelty detection algorithm under variational environmental and operational conditions

    NASA Astrophysics Data System (ADS)

    El Mountassir, M.; Yaacoubi, S.; Dahmene, F.

    2015-07-01

    Novelty detection is an algorithm widely used in different fields of study for its ability to recognize abnormalities in a specific process and thereby ensure proper operation under normal conditions. In the context of Structural Health Monitoring (SHM), this method is utilized as a damage detection technique because the presence of defects can be considered abnormal for the structure. Nevertheless, the performance of such a method can be jeopardized if the structure operates under harsh environmental and operational conditions (EOCs). In this paper, the novelty detection statistical technique is used to investigate the detection of damage under various EOCs. Experiments were conducted with different scenarios: damage sizes and shapes. EOC effects were simulated by adding stochastic noise to the collected experimental data. Different levels of noise were studied to determine the accuracy and the performance of the proposed method.
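
    A common way to implement such a novelty detector is to learn the statistics of baseline (undamaged) features and flag measurements whose Mahalanobis distance exceeds a threshold. The sketch below is illustrative only; the percentile threshold and two-dimensional features are arbitrary choices, not the paper's setup:

```python
import numpy as np

class NoveltyDetector:
    """Flag observations that are statistical outliers with respect to
    a baseline (healthy-state) training set, via Mahalanobis distance."""

    def fit(self, X, quantile=99.0):
        self.mu = X.mean(axis=0)
        self.cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
        # Threshold at a high percentile of the training distances.
        self.threshold = np.percentile(self._dist(X), quantile)
        return self

    def _dist(self, X):
        diff = np.atleast_2d(X) - self.mu
        # Row-wise quadratic form diff^T * cov_inv * diff.
        return np.sqrt(np.einsum("ij,jk,ik->i", diff, self.cov_inv, diff))

    def is_novel(self, x):
        return bool(self._dist(x)[0] > self.threshold)

rng = np.random.default_rng(2)
baseline = rng.normal(0.0, 1.0, (500, 2))     # healthy-state features
det = NoveltyDetector().fit(baseline)
```

    The EOC-robustness problem discussed above arises precisely because benign environmental variation also moves features away from the baseline mean, inflating distances without any damage being present.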

  19. Innovation sequence application to aircraft sensor fault detection: comparison of checking covariance matrix algorithms

    PubMed

    Caliskan; Hajiyev

    2000-01-01

    In this paper, algorithms verifying the covariance matrix of the Kalman filter innovation sequence are compared with respect to the minimum detectable fault rate and detection time. Four algorithms are dealt with: one verifying the trace of the covariance matrix of the innovation sequence; one verifying the sum of all elements of the inverse covariance matrix of the innovation sequence; an optimal algorithm verifying the ratio of two quadratic forms whose matrices are the theoretical and selected covariance matrices of the Kalman filter innovation sequence; and one verifying the generalized variance of the covariance matrix of the innovation sequence. The algorithms are implemented for the longitudinal dynamics of an aircraft to detect sensor faults, and some suggestions are given on the use of the algorithms in flight control systems. PMID:10826285
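
    The general idea of checking the innovation sequence against its theoretical covariance can be sketched as a windowed statistical test. This is an illustrative implementation of the generic chi-square-style check, not any of the four specific algorithms; the window length and confidence bound are arbitrary:

```python
import numpy as np

def innovation_fault_test(innovations, S_theory, n_sigma=4.0):
    """Sum the normalized quadratic forms nu_k^T S^-1 nu_k over a window.
    Under no-fault conditions the statistic is approximately chi-square
    with N*m degrees of freedom; exceeding an upper bound flags a fault."""
    S_inv = np.linalg.inv(S_theory)
    stat = sum(float(nu @ S_inv @ nu) for nu in innovations)
    dof = len(innovations) * S_theory.shape[0]
    upper = dof + n_sigma * np.sqrt(2.0 * dof)   # Gaussian approx. of chi-square tail
    return stat > upper

rng = np.random.default_rng(3)
S = np.diag([2.0, 0.5])                          # theoretical innovation covariance
healthy = rng.multivariate_normal([0, 0], S, size=100)
faulty = rng.multivariate_normal([0, 0], 9.0 * S, size=100)  # inflated by a sensor fault
no_alarm = innovation_fault_test(healthy, S)
alarm = innovation_fault_test(faulty, S)
```

    A fault that inflates the innovation covariance drives the statistic well above its no-fault expectation, which is what all four covariance-checking algorithms exploit in different ways.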

  20. The feasibility test of state-of-the-art face detection algorithms for vehicle occupant detection

    NASA Astrophysics Data System (ADS)

    Makrushin, Andrey; Dittmann, Jana; Vielhauer, Claus; Langnickel, Mirko; Kraetzer, Christian

    2010-01-01

    Vehicle seat occupancy detection systems are designed to prevent the deployment of airbags at unoccupied seats, thus avoiding the considerable cost imposed by the replacement of airbags. Occupancy detection can also improve passenger comfort, e.g. by activating air-conditioning systems. The most promising development perspectives are seen in optical sensing systems, which have become cheaper and smaller in recent years. The most plausible way to check seat occupancy is the detection of the presence and location of heads, or more precisely, faces. This paper compares the detection performance of the three most commonly used and widely available face detection algorithms: Viola-Jones, Kienzle et al. and Nilsson et al. The main objective of this work is to identify whether one of these systems is suitable for use in a vehicle environment with variable and mostly non-uniform illumination conditions, and whether any one face detection system can be sufficient for seat occupancy detection. The evaluation of detection performance is based on a large database comprising 53,928 video frames containing proprietary data collected from 39 persons of both sexes and different ages and body heights, as well as different objects such as bags and rearward/forward facing child restraint systems.

  1. Time series change detection: Algorithms for land cover change

    NASA Astrophysics Data System (ADS)

    Boriah, Shyam

    can be used for decision making and policy planning purposes. In particular, previous change detection studies have primarily relied on examining differences between two or more satellite images acquired on different dates. Thus, a technological solution that detects global land cover change using high temporal resolution time series data will represent a paradigm-shift in the field of land cover change studies. To realize these ambitious goals, a number of computational challenges in spatio-temporal data mining need to be addressed. Specifically, analysis and discovery approaches need to be cognizant of climate and ecosystem data characteristics such as seasonality, non-stationarity/inter-region variability, multi-scale nature, spatio-temporal autocorrelation, high-dimensionality and massive data size. This dissertation, a step in that direction, translates earth science challenges to computer science problems, and provides computational solutions to address these problems. In particular, three key technical capabilities are developed: (1) Algorithms for time series change detection that are effective and can scale up to handle the large size of earth science data; (2) Change detection algorithms that can handle large numbers of missing and noisy values present in satellite data sets; and (3) Spatio-temporal analysis techniques to identify the scale and scope of disturbance events.

  2. Spectral analysis algorithm for material detection from multispectral imagery

    NASA Astrophysics Data System (ADS)

    Racine, Joseph K.

    2011-06-01

    Material detection from multi-spectral imagery is critical to numerous geospatial applications. However, given the limited number of channels from various air and space-borne imaging sensors, coupled with varying illumination conditions, material-specific detection rules tend to generate large numbers of false positives. This paper describes a novel approach that uses various band ratios (for example, [Blue + Green]/Red) to identify targets-of-interest, regardless of the illumination conditions and the position of the sensor relative to the target. The approach uses a physics-based spectral model to estimate the observed channel-weighted radiance based on solar irradiance, atmospheric transmission, reflectivity of the target-of-interest and the spectral weighting functions of the sensor's channels. The observed channel-weighted radiance is then converted to the expected channel pixel value by the channel-specific conversion factor. With each channel's pixel values estimated, the algorithm goes through a process to find which band ratio values show the least amount of variance, despite varying irradiance spectra and atmospheric absorption. The band ratios with the least amount of variance are then used to identify the target-of-interest in an image file. To determine the expected false alarm rate, the same band ratios are evaluated against a library of background materials using the same calculation method for determining the target-of-interest's channel pixel values. Testing of this approach against ground-truth imagery, with as few as four channels, has shown a high rate of success in identifying targets-of-interest, while maintaining low false alarm rates.
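
    The ratio-selection step can be sketched as follows: compute candidate band ratios for the target under several simulated illumination conditions and keep the one whose value varies least. The candidate ratios, band ordering and synthetic radiances below are hypothetical illustrations, not the paper's model:

```python
import numpy as np

def select_stable_ratio(radiances, ratios):
    """radiances: list of per-condition channel-radiance vectors for the
    target under varying illumination. ratios: dict mapping a ratio name
    to a function of a radiance vector. Returns the name of the ratio
    whose value varies least across conditions."""
    variances = {name: np.var([fn(r) for r in radiances])
                 for name, fn in ratios.items()}
    return min(variances, key=variances.get)

# Bands ordered (Blue, Green, Red, NIR). Illumination scales all bands
# multiplicatively; a haze term perturbs only the blue channel.
base = np.array([30.0, 45.0, 60.0, 80.0])
conditions = [s * base + np.array([h, 0.0, 0.0, 0.0])
              for s in (0.6, 1.0, 1.4) for h in (0.0, 5.0)]
ratios = {
    "(B+G)/R": lambda r: (r[0] + r[1]) / r[2],
    "B/NIR":   lambda r: r[0] / r[3],
    "G/R":     lambda r: r[1] / r[2],
}
best = select_stable_ratio(conditions, ratios)
```

    In this synthetic setup only G/R is invariant to both the global illumination scale and the blue-channel perturbation, so it is the ratio the selection step keeps.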

  3. Airport Traffic Conflict Detection and Resolution Algorithm Evaluation

    NASA Technical Reports Server (NTRS)

    Jones, Denise R.; Chartrand, Ryan C.; Wilson, Sara R.; Commo, Sean A.; Otero, Sharon D.; Barker, Glover D.

    2012-01-01

    A conflict detection and resolution (CD&R) concept for the terminal maneuvering area (TMA) was evaluated in a fast-time batch simulation study at the National Aeronautics and Space Administration (NASA) Langley Research Center. The CD&R concept is being designed to enhance surface situation awareness and provide cockpit alerts of potential conflicts during runway, taxi, and low altitude air-to-air operations. The purpose of the study was to evaluate the performance of aircraft-based CD&R algorithms in the TMA, as a function of surveillance accuracy. This paper gives an overview of the CD&R concept, simulation study, and results. The Next Generation Air Transportation System (NextGen) concept for the year 2025 and beyond envisions the movement of large numbers of people and goods in a safe, efficient, and reliable manner [1]. NextGen will remove many of the constraints in the current air transportation system, support a wider range of operations, and provide an overall system capacity up to three times that of current operating levels. Emerging NextGen operational concepts [2], such as four-dimensional trajectory based airborne and surface operations, equivalent visual operations, and super density arrival and departure operations, require a different approach to air traffic management and as a result, a dramatic shift in the tasks, roles, and responsibilities for the flight deck and air traffic control (ATC) to ensure a safe, sustainable air transportation system.

  4. The infrared moving object detection and security detection related algorithms based on W4 and frame difference

    NASA Astrophysics Data System (ADS)

    Yin, Jiale; Liu, Lei; Li, He; Liu, Qiankun

    2016-07-01

    This paper presents infrared moving object detection and related security detection algorithms for video surveillance, based on the classical W4 and frame difference algorithms. The classical W4 algorithm is a powerful background subtraction algorithm for infrared images that can detect moving objects accurately, completely and quickly. However, it can only cope with slight movement of the background: since the background model is unchanged once established, the error grows over time in a long-term surveillance system. In this paper, we present a detection algorithm based on the classical W4 algorithm and frame differencing. It not only overcomes the false detections caused by sudden state changes in the background, but also eliminates the holes caused by frame differencing. Based on this, we further design various security detection algorithms such as illegal intrusion alarm, illegal persistence alarm and illegal displacement alarm. We compare our method with the classical W4, frame difference, and other state-of-the-art methods. Experiments detailed in this paper show that the proposed method outperforms the classical W4 and frame difference algorithms and serves the security detection algorithms well.
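
    The combination can be sketched as follows: a background-subtraction mask localizes the object fully, while a dilated frame-difference mask confirms recent motion and suppresses ghost detections. The static background model and thresholds below are a simplified stand-in for the W4 model, for illustration only:

```python
import numpy as np

def dilate(mask, r=1):
    """Binary dilation implemented by OR-ing shifted copies of the mask."""
    out = mask.copy()
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def detect_moving(frame, prev_frame, background, thresh=50.0):
    """Moving object = differs from the background model AND lies near
    a region of recent inter-frame change."""
    mask_bg = np.abs(frame - background) > thresh   # background subtraction
    mask_fd = np.abs(frame - prev_frame) > thresh   # frame difference
    return mask_bg & dilate(mask_fd)

# A bright target moves one column per frame over a dark background.
bg = np.zeros((8, 8))
f1, f2 = bg.copy(), bg.copy()
f1[4, 2] = 255.0
f2[4, 3] = 255.0
mask = detect_moving(f2, f1, bg)
```

    The frame-difference mask alone fires at both the old and new positions of the target; intersecting it with the background mask keeps only the current position, which is the ghost-suppression effect described above.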

  5. Evaluation of chronic lymphocytic leukemia by oligonucleotide-based microarray analysis uncovers novel aberrations not detected by FISH or cytogenetic analysis

    PubMed Central

    2011-01-01

    Background Cytogenetic evaluation is a key component of the diagnosis and prognosis of chronic lymphocytic leukemia (CLL). We performed oligonucleotide-based comparative genomic hybridization microarray analysis on 34 samples with CLL and known abnormal karyotypes previously determined by cytogenetics and/or fluorescence in situ hybridization (FISH). Results Using a custom designed microarray that targets >1800 genes involved in hematologic disease and other malignancies, we identified additional cryptic aberrations and novel findings in 59% of cases. These included gains and losses of genes associated with cell cycle regulation, apoptosis and susceptibility loci on 3p21.31, 5q35.2q35.3, 10q23.31q23.33, 11q22.3, and 22q11.23. Conclusions Our results show that microarray analysis will detect known aberrations, including microscopic and cryptic alterations. In addition, novel genomic changes will be uncovered that may become important prognostic predictors or treatment targets for CLL in the future. PMID:22087757

  6. Algorithm of weak edge detection based on the Nilpotent minimum fusion

    NASA Astrophysics Data System (ADS)

    Sun, Genyun; Zhang, Aizhu; Han, Xujun

    2011-11-01

    To overcome shortcomings of traditional edge detection, such as the loss of weak edges and overly rough detected edges, a new edge detection method is proposed in this paper. The new algorithm is based on Nilpotent minimum fusion. First, based on the spatial fuzzy relation of weak edges, the algorithm performs decision fusion using the Nilpotent minimum operator to improve the structure of weak edges. Second, edges are detected based on the fusion results; as a result, the weak edges are detected. Experiments on a variety of weak-edge images show that the new algorithm overcomes the shortcomings of traditional edge detection, with results much better than those of traditional methods. On one hand, some weak edges of complex images, such as medical images, are detected; on the other hand, the edges detected by the new algorithm are thinner.
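
    The Nilpotent minimum is a standard fuzzy-logic t-norm: T(a, b) = min(a, b) when a + b > 1, and 0 otherwise. A sketch of fusing two edge-evidence maps with it (how the evidence maps are produced is the substance of the paper; the values here are illustrative):

```python
import numpy as np

def nilpotent_minimum(a, b):
    """Nilpotent minimum t-norm: min(a, b) where a + b > 1, else 0."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.where(a + b > 1.0, np.minimum(a, b), 0.0)

# Fuse two fuzzy edge-membership maps: weak but mutually supporting
# evidence (0.6, 0.7) survives, isolated weak evidence (0.6, 0.3) is cut.
e1 = np.array([0.6, 0.6, 0.9])
e2 = np.array([0.7, 0.3, 0.9])
fused = nilpotent_minimum(e1, e2)
```

    Unlike the ordinary minimum, the nilpotent variant zeroes out pixels whose combined support is weak, which is how the fusion step can retain genuine weak edges while discarding unsupported responses.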

  7. Evaluation of stereo vision obstacle detection algorithms for off-road autonomous navigation

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry

    2005-01-01

    Reliable detection of non-traversable hazards is a key requirement for off-road autonomous navigation. A detailed description of each obstacle detection algorithm and their performance on the surveyed obstacle course is presented in this paper.

  8. A joint swarm intelligence algorithm for multi-user detection in MIMO-OFDM system

    NASA Astrophysics Data System (ADS)

    Hu, Fengye; Du, Dakun; Zhang, Peng; Wang, Zhijun

    2014-11-01

    In the multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) system, traditional multi-user detection (MUD) algorithms that usually used to suppress multiple access interference are difficult to balance system detection performance and the complexity of the algorithm. To solve this problem, this paper proposes a joint swarm intelligence algorithm called Ant Colony and Particle Swarm Optimisation (AC-PSO) by integrating particle swarm optimisation (PSO) and ant colony optimisation (ACO) algorithms. According to simulation results, it has been shown that, with low computational complexity, the MUD for the MIMO-OFDM system based on AC-PSO algorithm gains comparable MUD performance with maximum likelihood algorithm. Thus, the proposed AC-PSO algorithm provides a satisfactory trade-off between computational complexity and detection performance.

  9. Performance of Dispersed Fringe Sensor in the Presence of Segmented Mirror Aberrations: Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Shi, Fang; Basinger, Scott A.; Redding, David C.

    2006-01-01

    Dispersed Fringe Sensing (DFS) is an efficient and robust method for coarse phasing of a segmented primary mirror such as that of the James Webb Space Telescope (JWST). In this paper, modeling and simulations are used to study the effect of segmented mirror aberrations on the fringe image, the DFS signals, and DFS detection accuracy. The study has shown that, owing to the pixelation spatial-filtering effect of DFS signal extraction, the impact of wavefront error is reduced, and the DFS algorithm becomes more robust against wavefront aberration when a multi-trace DFS approach is used. We also studied the performance of the JWST Dispersed Hartmann Sensor (DHS) in the presence of wavefront aberrations caused by gravity sag, and we used scaled gravity sag to explore the relationship between JWST DHS performance and the level of wavefront aberration. This study also includes the effect of line-of-sight jitter.

  10. An Effective Intrusion Detection Algorithm Based on Improved Semi-supervised Fuzzy Clustering

    NASA Astrophysics Data System (ADS)

    Li, Xueyong; Zhang, Baojian; Sun, Jiaxia; Yan, Shitao

    An algorithm for intrusion detection based on improved evolutionary semi-supervised fuzzy clustering is proposed, suited to situations in which labeled data are more difficult to obtain than unlabeled data in intrusion detection systems. The algorithm requires only a small amount of labeled data and a large amount of unlabeled data; the class-label information provided by the labeled data is used to guide the evolution of each fuzzy partition of the unlabeled data, which plays the role of a chromosome. The algorithm can deal with fuzzy labels, does not easily fall into local optima, and is suited to implementation on parallel architectures. Experiments show that the algorithm improves classification accuracy and has high detection efficiency.

  11. Comparative study of adaptive-noise-cancellation algorithms for intrusion detection systems

    SciTech Connect

    Claassen, J.P.; Patterson, M.M.

    1981-01-01

    Some intrusion detection systems are susceptible to nonstationary noise, resulting in frequent nuisance alarms and poor detection when the noise is present. Adaptive inverse filtering for single-channel systems and adaptive noise cancellation for two-channel systems have both demonstrated good potential for removing correlated noise components prior to detection. For such noise-susceptible systems, the suitability of a noise reduction algorithm must be established in a trade-off study weighing algorithm complexity against performance. The performance characteristics of several distinct classes of algorithms are established through comparative computer studies using real signals. The relative merits of the different algorithms are discussed in light of the nature of intruder and noise signals.

  12. Generalized Viterbi algorithms for error detection with convolutional codes

    NASA Astrophysics Data System (ADS)

    Seshadri, N.; Sundberg, C.-E. W.

    Presented are two generalized Viterbi algorithms (GVAs) for the decoding of convolutional codes: a parallel algorithm that simultaneously identifies the L best estimates of the transmitted sequence, and a serial algorithm that identifies the l-th best estimate using knowledge of the previously found l-1 estimates. These algorithms are applied to combined speech and channel coding systems, concatenated codes, trellis-coded modulation, partial response (continuous-phase modulation), and hybrid ARQ (automatic repeat request) schemes. As an example, for a concatenated code, more than 2 dB is gained by the use of the GVA with L = 3 over the Viterbi algorithm at block error rates below 10^-2. The channel is a Rayleigh fading channel.

  13. Research of adaptive threshold edge detection algorithm based on statistics canny operator

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Wang, Huaisuo; Huang, Hua

    2015-12-01

    The traditional Canny operator cannot obtain an optimal threshold across different scenes; on this basis, an improved Canny edge detection algorithm with adaptive thresholds is proposed. Experimental results on test images indicate that the improved algorithm obtains reasonable thresholds and achieves better accuracy and precision in edge detection.
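
    One common way to make Canny's two hysteresis thresholds adaptive is to derive them from statistics of the gradient magnitude itself, e.g. around its median. The sketch below shows only that threshold-selection step; the 0.66/1.33 fractions are a conventional heuristic, not the thresholds proposed in the paper:

```python
import numpy as np

def adaptive_canny_thresholds(image):
    """Derive low/high hysteresis thresholds from gradient-magnitude
    statistics, so no per-scene manual tuning is needed."""
    gy, gx = np.gradient(image.astype(float))   # central differences
    mag = np.hypot(gx, gy)                      # gradient magnitude
    med = np.median(mag)
    low, high = 0.66 * med, 1.33 * med
    return low, high, mag

# A noisy vertical step edge: the strong-edge mask should fire at the step.
rng = np.random.default_rng(4)
img = np.hstack([np.zeros((32, 16)), np.full((32, 16), 100.0)])
img += rng.normal(0.0, 1.0, img.shape)
low, high, mag = adaptive_canny_thresholds(img)
strong = mag > high
```

    Because the thresholds track the scene's own gradient statistics, the same code adapts between low-contrast and high-contrast images, which is the shortcoming of the fixed-threshold Canny operator identified above.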

  14. The Transit Detection Algorithm DST and its application to CoRoT and Kepler data

    NASA Astrophysics Data System (ADS)

    Cabrera, J.; Rauer, H.; Erikson, A.; Csizmadia, S.

    2011-10-01

    Transit detection algorithms are mathematical tools used to detect the presence of planets in the photometric data of transit surveys. Space missions are exploring the parameter space of transit surveys towards small planets where classical algorithms do not perform optimally, either due to the low signal to noise ratio of the signal or to its non-periodic characteristics. We present an algorithm addressing these challenges and its performance in an application to CoRoT and Kepler data.
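
    The essence of a box-shaped transit search, in the spirit of such algorithms but not the actual DST implementation, is to slide a box of trial duration across the light curve and maximize the in-transit flux deficit:

```python
import numpy as np

def box_search(flux, duration):
    """Return the start index maximizing the mean flux drop inside a
    sliding box of the given duration, plus the depth found there."""
    flux = np.asarray(flux, float)
    base = flux.mean()
    depths = np.array([base - flux[i:i + duration].mean()
                       for i in range(len(flux) - duration)])
    best = int(np.argmax(depths))
    return best, float(depths[best])

# A flat light curve with a 1% box-shaped dip injected at index 500.
rng = np.random.default_rng(5)
flux = 1.0 + rng.normal(0.0, 0.001, 1000)
flux[500:520] -= 0.01
start, depth = box_search(flux, duration=20)
```

    The challenges mentioned above are visible even in this sketch: a shallow (low signal-to-noise) or non-box-shaped dip degrades the statistic, which is what motivates detection methods tailored to small, non-periodic signals.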

  15. Fast algorithm for probabilistic bone edge detection (FAPBED)

    NASA Astrophysics Data System (ADS)

    Scepanovic, Danilo; Kirshtein, Joshua; Jain, Ameet K.; Taylor, Russell H.

    2005-04-01

    The registration of preoperative CT to intra-operative reality systems is a crucial step in Computer Assisted Orthopedic Surgery (CAOS). The intra-operative sensors include 3D digitizers, fiducials, X-rays and Ultrasound (US). FAPBED is designed to process CT volumes for registration to tracked US data. Tracked US is advantageous because it is real time, noninvasive, and non-ionizing, but it is also known to have inherent inaccuracies which create the need to develop a framework that is robust to various uncertainties, and can be useful in US-CT registration. Furthermore, conventional registration methods depend on accurate and absolute segmentation. Our proposed probabilistic framework addresses the segmentation-registration duality, wherein exact segmentation is not a prerequisite to achieve accurate registration. In this paper, we develop a method for fast and automatic probabilistic bone surface (edge) detection in CT images. Various features that influence the likelihood of the surface at each spatial coordinate are combined using a simple probabilistic framework, which strikes a fair balance between a high-level understanding of features in an image and the low-level number crunching of standard image processing techniques. The algorithm evaluates different features for detecting the probability of a bone surface at each voxel, and compounds the results of these methods to yield a final, low-noise, probability map of bone surfaces in the volume. Such a probability map can then be used in conjunction with a similar map from tracked intra-operative US to achieve accurate registration. Eight sample pelvic CT scans were used to extract feature parameters and validate the final probability maps. An un-optimized fully automatic Matlab code runs in five minutes per CT volume on average, and was validated by comparison against hand-segmented gold standards. The mean probability assigned to nonzero surface points was 0.8, while nonzero non-surface points had a mean

  16. Face detection in complex background based on Adaboost algorithm and YCbCr skin color model

    NASA Astrophysics Data System (ADS)

    Ge, Wei; Han, Chunling; Quan, Wei

    2015-12-01

    Face detection is a fundamental and important research theme in pattern recognition and computer vision, and remarkable results have been achieved. Among existing methods, statistics-based approaches hold a dominant position. In this paper, the Adaboost algorithm based on Haar-like features is used to detect faces in complex backgrounds. A method combining YCbCr skin-model detection with Adaboost is investigated: the skin detection step validates the detection results obtained by the Adaboost algorithm, overcoming Adaboost's false detection problem. Experimental results show that nearly all non-face areas are removed and the detection rate is improved.
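A minimal sketch of the skin-color validation step described above: candidate boxes (e.g. from a Haar cascade) are kept only if enough of their pixels fall inside a YCbCr skin range. The Cb/Cr ranges and the 40% cutoff are commonly cited illustrative values, not thresholds taken from the paper.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an RGB uint8 image (H, W, 3) to YCbCr (ITU-R BT.601, full range)."""
    img = img.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_ratio(img, box, cb_range=(77, 127), cr_range=(133, 173)):
    """Fraction of pixels inside a candidate box whose Cb/Cr fall in the skin range."""
    x, y, w, h = box
    ycbcr = rgb_to_ycbcr(img[y:y + h, x:x + w])
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    mask = ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
    return float(mask.mean())

def validate_detections(img, boxes, min_skin=0.4):
    """Keep only candidate boxes with a sufficient proportion of skin-colored pixels."""
    return [b for b in boxes if skin_ratio(img, b) >= min_skin]
```

A skin-toned candidate box passes while a dark background box is rejected, which is exactly how the skin model prunes Adaboost false positives.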

  17. Distributed learning automata-based algorithm for community detection in complex networks

    NASA Astrophysics Data System (ADS)

    Khomami, Mohammad Mehdi Daliri; Rezvanian, Alireza; Meybodi, Mohammad Reza

    2016-03-01

    Community structure is an important and universal topological property of many complex networks, such as social and information networks. Detecting the communities of a network is a significant technique for understanding its structure and function. In this paper, we propose an algorithm based on distributed learning automata for community detection (DLACD) in complex networks. In the proposed algorithm, each vertex of the network is equipped with a learning automaton. Through cooperation among the network of learning automata and updating of each automaton's action probabilities, the algorithm iteratively tries to identify high-density local communities. The performance of the proposed algorithm is investigated through a number of simulations on popular synthetic and real networks. Experimental results, in comparison with popular community detection algorithms such as Walktrap, Danon greedy optimization, fuzzy community detection, multi-resolution community detection and label propagation, demonstrate the superiority of DLACD in terms of modularity, NMI, performance, min-max cut and coverage.
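The per-automaton probability update underlying schemes of this kind can be sketched with the classic linear reward-inaction rule; the learning rate and the binary reward signal here are generic textbook choices, not the paper's exact reinforcement scheme.

```python
import numpy as np

def lri_update(p, chosen, reward, a=0.1):
    """Linear reward-inaction (L_R-I) update for one learning automaton.
    p: action-probability vector; chosen: index of the selected action;
    reward=True moves probability mass toward the chosen action, while
    reward=False leaves p unchanged (the 'inaction' part of the scheme)."""
    p = p.copy()
    if reward:
        p[chosen] += a * (1.0 - p[chosen])           # reinforce chosen action
        others = [i for i in range(len(p)) if i != chosen]
        p[others] *= (1.0 - a)                       # shrink the others; sum stays 1
    return p
```

Repeated rewarded choices drive the chosen action's probability toward 1, which is how a network of such automata converges on a community assignment.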

  18. Low-Complexity Saliency Detection Algorithm for Fast Perceptual Video Coding

    PubMed Central

    Liu, Pengyu; Jia, Kebin

    2013-01-01

    A low-complexity saliency detection algorithm for perceptual video coding is proposed in which low-level encoding information is adopted as the basis of visual perception analysis. First, the algorithm employs motion vectors (MVs) to extract the temporal saliency region through fast MV noise filtering and a translational MV checking procedure. Second, the spatial saliency region is detected based on optimal prediction mode distributions in I-frames and P-frames. The spatiotemporal saliency detection results are then combined to define the video region of interest (VROI). The simulation results validate that the proposed algorithm avoids a large amount of computation in the visual perception analysis compared with other existing algorithms, has better saliency detection performance for videos, and realizes fast saliency detection. It can be used as part of a standard video codec at medium-to-low bit rates or combined with other algorithms for fast video coding. PMID:24489495

  19. Morphology and Immunohistochemistry for 2SC and FH Aid in Detection of Fumarate Hydratase Gene Aberrations in Uterine Leiomyomas From Young Patients.

    PubMed

    Joseph, Nancy M; Solomon, David A; Frizzell, Norma; Rabban, Joseph T; Zaloudek, Charles; Garg, Karuna

    2015-11-01

    Hereditary leiomyomatosis and renal cell carcinoma (HLRCC) syndrome is an autosomal dominant syndrome that results from mutations in the fumarate hydratase (FH) gene. Patients with HLRCC are at risk for smooth muscle tumors of the uterus and skin as well as renal tumors. The renal cell carcinomas associated with HLRCC are usually high stage at presentation, aggressive, and have poor clinical outcomes. Therefore these patients and family members would benefit from early identification and appropriate surveillance. In small studies, HLRCC-associated uterine leiomyomas have been noted to display characteristic morphologic features including eosinophilic cytoplasmic inclusions, prominent eosinophilic nucleoli, and perinucleolar halos. Limited data suggest that positive staining for 2-succinocysteine (2SC) and loss of staining for FH by immunohistochemistry (IHC) can help with identification of HLRCC. The aim of this study was to evaluate the ability of morphology and IHC for FH and 2SC to help identify HLRCC in young patients with uterine smooth muscle tumors. We identified 194 evaluable uterine leiomyomas from women less than 40 years of age. We found FH gene aberrations by mutation analysis in 5 cases, a 2.6% incidence. Of these 5 cases, 4 displayed the characteristic morphologic features outlined above, whereas 1 did not. All 5 tumors with FH gene abnormalities showed positive staining for 2SC, whereas no FH gene aberrations were found in the 2SC-negative cases. Loss of FH staining was seen in 2 of the 5 cases, 1 with frameshift mutation and the other with homozygous deletion, whereas the remaining 3 cases with missense FH gene mutations were FH positive. Our study shows that morphologic features can be helpful for detection of HLRCC in uterine leiomyomas, although they may not be present in every case. IHC for 2SC and FH can be helpful: presence of positive staining for 2SC is sensitive and specific for detection of FH gene aberrations, whereas loss of staining for

  20. Detection of Local/Regional Events in Kuwait Using Next-Generation Detection Algorithms

    SciTech Connect

    Gok, M. Rengin; Al-Jerri, Farra; Dodge, Douglas; Al-Enezi, Abdullah; Hauk, Terri; Mellors, R.

    2014-12-10

    Seismic networks around the world use conventional triggering algorithms to detect seismic signals in order to locate local/regional seismic events. The Kuwait National Seismological Network (KNSN) of the Kuwait Institute for Scientific Research (KISR) operates seven broad-band and short-period three-component stations in Kuwait. The network is equipped with Nanometrics digitizers and uses Antelope and Guralp acquisition software for processing and archiving the data. In this study, we selected 10 days of archived, hourly-segmented continuous data from five stations (Figure 1) and 250 days of continuous recording at MIB. For the temporary deployment, our selection criteria were based on KNSN catalog intensity over the period in which we test the method. An autonomous event detection and clustering framework is employed to build a more complete catalog for this short time period. The goal is to illustrate the effectiveness of the technique and to pursue the framework over longer periods of time.

  1. Parallelization of exoplanets detection algorithms based on field rotation; example of the MOODS algorithm for SPHERE

    NASA Astrophysics Data System (ADS)

    Mattei, D.; Smith, I.; Ferrari, A.; Carbillet, M.

    2010-10-01

    Post-processing for exoplanet detection using direct imaging requires large data cubes and/or sophisticated signal processing techniques. For alt-azimuthal mounts, a projection effect called field rotation makes a potential planet rotate in a known manner across the set of images. For ground-based telescopes that use extreme adaptive optics and advanced coronagraphy, techniques based on field rotation are already broadly used and still under development. In most such techniques, for a given initial position of the planet, the planet intensity estimate is a linear function of the set of images. However, due to field rotation, the modified instrumental response applied is not shift-invariant like usual linear filters. Testing all possible initial positions is therefore very time-consuming. To reduce the processing time, we propose to handle each subset of initial positions on a different machine using parallel programming. In particular, the MOODS algorithm dedicated to the VLT-SPHERE instrument, which jointly estimates the light contributions of the star and the potential exoplanet, is parallelized on the Observatoire de la Cote d'Azur cluster. Different parallelization methods (OpenMP, MPI, job arrays) were developed for the initial MOODS code and compared to each other. The one finally chosen splits the initial positions across the available processors while best accounting for the different constraints of the cluster structure: memory, job submission queues, number of available CPUs, and cluster average load. In the end, a standard set of images is satisfactorily processed in a few hours instead of a few days.
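The splitting of initial positions across workers can be sketched as below. `planet_intensity` is a hypothetical placeholder for the per-position MOODS estimate, and a thread pool stands in for the cluster's MPI/job-array machinery; the point is only that each worker handles its own subset of positions independently.

```python
from concurrent.futures import ThreadPoolExecutor

def planet_intensity(pos):
    """Placeholder for the per-initial-position estimator (not the MOODS code)."""
    x, y = pos
    return (x * x + y * y) ** 0.5   # stand-in computation

def detect_parallel(positions, workers=4):
    """Evaluate every candidate initial position, split across `workers`.
    Results come back in input order, so the map over positions is embarrassingly
    parallel: no worker depends on another's subset."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(planet_intensity, positions))
```

On a real cluster one would replace the thread pool with MPI ranks or a job array, as the abstract describes, but the decomposition over initial positions is the same.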

  2. A biomimetic algorithm for the improved detection of microarray features

    NASA Astrophysics Data System (ADS)

    Nicolau, Dan V., Jr.; Nicolau, Dan V.; Maini, Philip K.

    2007-02-01

    One of the major difficulties of microarray technology relates to the processing of large and, importantly, error-loaded images of the dots on the chip surface. Whatever the source of these errors, those introduced in the first stage of data acquisition, segmentation, are passed down to subsequent processes, with deleterious results. As it has recently been demonstrated that biological systems have evolved algorithms that are mathematically efficient, this contribution tests an algorithm that mimics the bacterial "patented" strategy for searching available space and nutrients in order to find, zero in on, and eventually delimit the features present on the microarray surface.

  3. Comparative study of texture detection and classification algorithms

    NASA Astrophysics Data System (ADS)

    Koltsov, P. P.

    2011-08-01

    A description and results of application of the computer system PETRA (performance evaluation of texture recognition algorithms) are given. This system is designed for the comparative study of texture analysis algorithms; it includes a database of textured images and a collection of software implementations of texture analysis algorithms. The functional capabilities of the system are illustrated using texture classification examples. Test examples are taken from the Brodatz album, the MeasTech database, and a set of aerospace images. Results of a comparative evaluation of five well-known texture analysis methods are described: Gabor filters, Laws masks, ring/wedge filters, gray-level co-occurrence matrices (GLCMs), and the autoregression image model.
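GLCM features, one of the five compared methods, can be computed directly from their definition; the sketch below is a minimal illustration (not the PETRA system's code), with contrast and energy as two representative Haralick-style features.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for pixel offset (dx, dy), normalized
    so the entries sum to 1 (a joint histogram of gray-level pairs)."""
    img = np.asarray(img)
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_features(m):
    """Contrast (local intensity variation) and energy (texture uniformity)."""
    i, j = np.indices(m.shape)
    return {"contrast": float(((i - j) ** 2 * m).sum()),
            "energy": float((m ** 2).sum())}
```

A vertical-stripe image whose neighbors always differ by one gray level yields a single co-occurrence pair and unit contrast.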

  4. Rocketdyne Safety Algorithm: Space Shuttle Main Engine Fault Detection

    NASA Technical Reports Server (NTRS)

    Norman, Arnold M., Jr.

    1994-01-01

    The Rocketdyne Safety Algorithm (RSA) has been developed to the point of use on the TTBE at MSFC on Task 4 of LeRC contract NAS3-25884. This document contains a description of the work performed, the results of the nominal test of the major anomaly test cases and a table of the resulting cutoff times, a plot of the RSA value vs. time for each anomaly case, a logic flow description of the algorithm, the algorithm code, and a development plan for future efforts.

  5. Characterizing interplanetary shocks for development and optimization of an automated solar wind shock detection algorithm

    NASA Astrophysics Data System (ADS)

    Cash, M. D.; Wrobel, J. S.; Cosentino, K. C.; Reinard, A. A.

    2014-06-01

    Human evaluation of solar wind data for interplanetary (IP) shock identification relies on both heuristics and pattern recognition, with the former lending itself to algorithmic representation and automation. Such detection algorithms can potentially alert forecasters of approaching shocks, providing increased warning of subsequent geomagnetic storms. However, capturing shocks with an algorithmic treatment alone is challenging, as past and present work demonstrates. We present a statistical analysis of 209 IP shocks observed at L1, and we use this information to optimize a set of shock identification criteria for use with an automated solar wind shock detection algorithm. In order to specify ranges for the threshold values used in our algorithm, we quantify discontinuities in the solar wind density, velocity, temperature, and magnetic field magnitude by analyzing 8 years of IP shocks detected by the SWEPAM and MAG instruments aboard the ACE spacecraft. Although automatic shock detection algorithms have previously been developed, in this paper we conduct a methodical optimization to refine shock identification criteria and present the optimal performance of this and similar approaches. We compute forecast skill scores for over 10,000 permutations of our shock detection criteria in order to identify the set of threshold values that yield optimal forecast skill scores. We then compare our results to previous automatic shock detection algorithms using a standard data set, and our optimized algorithm shows improvements in the reliability of automated shock detection.
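A toy version of such a threshold-based shock detector might look like the following: a candidate time is flagged when density, speed, temperature, and field magnitude all jump by at least a given ratio between the windows before and after it. The jump ratios and window length here are illustrative placeholders, not the optimized values from the study.

```python
import numpy as np

def detect_shocks(density, speed, temperature, bmag,
                  jumps=(1.2, 1.03, 1.3, 1.2), window=5):
    """Flag sample indices where all four solar wind quantities increase by at
    least the corresponding ratio between the means of `window` points before
    and after the candidate (a fast-forward-shock-like signature)."""
    series = [np.asarray(s, float) for s in (density, speed, temperature, bmag)]
    n = len(series[0])
    hits = []
    for k in range(window, n - window):
        ratios = [s[k:k + window].mean() / s[k - window:k].mean() for s in series]
        if all(r >= j for r, j in zip(ratios, jumps)):
            hits.append(k)
    return hits
```

Optimizing the `jumps` thresholds against a labeled shock list, scored by forecast skill, is the essence of the permutation search the abstract describes.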

  6. Evolutionary Algorithms Approach to the Solution of Damage Detection Problems

    NASA Astrophysics Data System (ADS)

    Salazar Pinto, Pedro Yoajim; Begambre, Oscar

    2010-09-01

    In this work, a new self-configured hybrid algorithm is proposed, combining Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA). The aim of the proposed strategy is to increase the stability and accuracy of the search. The central idea is the concept of the guide particle: this particle (the global best of the PSO in each generation) transmits its information to a particle of the following PSO generation, which is controlled by the GA. Thus, the proposed hybrid has an elitism feature that improves its performance and guarantees the convergence of the procedure. In different tests carried out on benchmark functions reported in the international literature, better performance in stability and accuracy was observed; the new algorithm was therefore used to identify damage in a simply supported beam using modal data. Finally, it is worth noting that the algorithm is independent of the initial definition of the heuristic parameters.

  7. Stride search: A general algorithm for storm detection in high resolution climate data

    SciTech Connect

    Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.; Mundt, Miranda

    2015-09-08

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.
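The contrast between the two spatial searches can be sketched as follows. The criterion (a simple threshold on a scalar field) and the stride arithmetic are simplified stand-ins for the paper's storm identification criteria; the key idea retained is that the longitudinal stride widens toward the poles so the physical spacing between search points stays roughly uniform.

```python
import numpy as np

def grid_point_search(field, lats, lons, threshold):
    """Baseline: evaluate the criterion at every grid point."""
    return [(i, j) for i in range(len(lats)) for j in range(len(lons))
            if field[i, j] > threshold]

def stride_search(field, lats, lons, threshold, stride_km=500.0):
    """Evaluate the criterion on points spaced roughly `stride_km` apart.
    The latitudinal stride is fixed; the longitudinal stride grows as
    1/cos(lat), so polar rows are searched with far fewer points."""
    earth_r = 6371.0
    dlat = np.degrees(stride_km / earth_r)
    lat_step = max(1, int(round(dlat / (lats[1] - lats[0]))))
    hits = []
    for i in range(0, len(lats), lat_step):
        coslat = max(np.cos(np.radians(lats[i])), 1e-6)
        dlon = np.degrees(stride_km / (earth_r * coslat))
        lon_step = max(1, int(round(dlon / (lons[1] - lons[0]))))
        for j in range(0, len(lons), lon_step):
            if field[i, j] > threshold:
                hits.append((i, j))
    return hits
```

On a uniform lat-lon grid the per-row widening of `lon_step` is what keeps Stride Search well behaved near the poles, where a naive grid point search oversamples.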

  8. Stride search: A general algorithm for storm detection in high resolution climate data

    DOE PAGESBeta

    Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.; Mundt, Miranda

    2015-09-08

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.

  9. A general-purpose contact detection algorithm for nonlinear structural analysis codes

    SciTech Connect

    Heinstein, M.W.; Attaway, S.W.; Swegle, J.W.; Mello, F.J.

    1993-05-01

    A new contact detection algorithm has been developed to address difficulties associated with the numerical simulation of contact in nonlinear finite element structural analysis codes. Problems including accurate and efficient detection of contact for self-contacting surfaces, tearing and eroding surfaces, and multi-body impact are addressed. The proposed algorithm is portable between dynamic and quasi-static codes and can efficiently model contact between a variety of finite element types including shells, bricks, beams and particles. The algorithm is composed of (1) a location strategy that uses a global search to decide which slave nodes are in proximity to a master surface and (2) an accurate detailed contact check that uses the projected motions of both master surface and slave node. In this report, currently used contact detection algorithms and their associated difficulties are discussed. Then the proposed algorithm and how it addresses these problems is described. Finally, the capability of the new algorithm is illustrated with several example problems.
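The global search phase, deciding which slave nodes are near a master surface, is commonly implemented with a uniform-grid bucket sort; the sketch below illustrates that idea under the simplifying assumption that master surfaces are represented by their nodes (the report's actual algorithm then follows with the detailed contact check against the projected surface motion).

```python
import numpy as np

def bucket_search(slave_nodes, master_nodes, cell_size):
    """Global search: hash master nodes into uniform grid cells, then for each
    slave node examine only the 27 neighboring cells. Candidate generation is
    O(n + m) instead of the O(n * m) all-pairs check."""
    buckets = {}
    for idx, p in enumerate(master_nodes):
        key = tuple((np.asarray(p) // cell_size).astype(int))
        buckets.setdefault(key, []).append(idx)
    candidates = {}
    for sidx, p in enumerate(slave_nodes):
        base = (np.asarray(p) // cell_size).astype(int)
        near = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    near += buckets.get((base[0] + dx, base[1] + dy, base[2] + dz), [])
        candidates[sidx] = near
    return candidates
```

The cell size is typically tied to the largest element dimension so that any surface within contact range falls in a neighboring bucket.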

  10. A novel algorithm for real-time adaptive signal detection and identification

    SciTech Connect

    Sleefe, G.E.; Ladd, M.D.; Gallegos, D.E.; Sicking, C.W.; Erteza, I.A.

    1998-04-01

    This paper describes a novel digital signal processing algorithm for adaptively detecting and identifying signals buried in noise. The algorithm continually computes and updates the long-term statistics and spectral characteristics of the background noise. Using this noise model, a set of adaptive thresholds and matched digital filters are implemented to enhance and detect signals that are buried in the noise. The algorithm furthermore automatically suppresses coherent noise sources and adapts to time-varying signal conditions. Signal detection is performed in both the time domain and the frequency domain, thereby permitting the detection of both broad-band transients and narrow-band signals. The detection algorithm also provides for the computation of important signal features such as amplitude, timing, and phase information. Signal identification is achieved through a combination of frequency-domain template matching and spectral peak picking. The algorithm described herein is well suited for real-time implementation on digital signal processing hardware. This paper presents the theory of the adaptive algorithm, provides an algorithmic block diagram, and demonstrates its implementation and performance with real-world data. The computational efficiency of the algorithm is demonstrated through benchmarks on specific DSP hardware. The applications for this algorithm, which range from vibration analysis to real-time image processing, are also discussed.
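The adaptive-threshold idea, a background noise model driving the detection threshold, can be sketched in a few lines. This is a static, time-domain-only caricature: a real implementation, as the paper describes, would update the statistics continuously and add matched filtering and frequency-domain processing.

```python
import numpy as np

def adaptive_detect(x, train=100, k=5.0):
    """Estimate the noise mean/std from a training prefix, then flag samples
    exceeding mean + k*std. The threshold adapts to the noise floor rather
    than being fixed in absolute units."""
    noise = np.asarray(x[:train], float)
    mu, sigma = noise.mean(), noise.std()
    thresh = mu + k * sigma
    hits = [i for i in range(train, len(x)) if x[i] > thresh]
    return hits, thresh
```

Because the threshold scales with the measured noise, the same detector works unchanged on quiet and noisy channels, which is the practical payoff of the adaptive formulation.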

  11. A novel evaluation metric based on visual perception for moving target detection algorithm

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Liu, Lei; Cui, Minjie; Li, He

    2016-05-01

    Traditional performance evaluation indices for moving target detection algorithms each emphasize a different aspect, which makes it inconvenient to evaluate an algorithm's performance comprehensively and objectively. In particular, when the detection results of different algorithms contain the same numbers of foreground and background points, every traditional index is identical across the algorithms, so the traditional indices cannot be used to compare the performance of the moving target detection algorithms; this is the disadvantage of traditional evaluation indices, which take the pixel as the unit of calculation. To solve this problem, and drawing on features of the human visual perception system, this paper presents a new evaluation index, Visual Fluctuation (VF), based on the principle of image blocks, to evaluate the performance of moving target detection algorithms. Experiments showed that the new visual-perception-based evaluation index makes up for the deficiency of the traditional ones; the calculated results not only accord with human visual perception but also evaluate the performance of moving target detection algorithms more objectively.

  12. Sensor failure detection and isolation in flexible structures using the eigensystem realization algorithm

    NASA Astrophysics Data System (ADS)

    Zimmerman, David C.; Lyde, Terri L.

    Sensor failure detection and isolation (FDI) for flexible structures is approached from a system realization perspective. Instead of using hardware or analytical model redundancy, system realization is utilized to provide an experimental model based redundancy. The FDI algorithm utilizes the eigensystem realization algorithm to determine a minimum-order state space realization of the structure in the presence of noisy measurements. The FDI algorithm utilizes statistical comparisons of successive realizations to detect and isolate the failed sensor component. Due to the nature in which the FDI algorithm is formulated, it is also possible to classify the failure mode of the sensor. Results are presented using both numerically simulated and actual experimental data.

  13. Detection of aberrant hippocampal mossy fiber connections: Ex vivo mesoscale diffusion MRI and microtractography with histological validation in a patient with uncontrolled temporal lobe epilepsy

    PubMed Central

    Hitchens, T. Kevin; Liu, Jessie R.; Richardson, R. Mark

    2015-01-01

    Understanding the neurobiology and functional connectivity of hippocampal structures is essential for improving the treatment of mesial temporal lobe epilepsy. At the macroscale, in vivo MRI often reveals hippocampal atrophy and decreased fractional anisotropy, whereas at the microscopic scale, there frequently is evidence of neuronal loss and gliosis. Mossy fiber sprouting in the dentate gyrus (DG), with evidence of glutamatergic synapses in the stratum moleculare (SM) putatively originating from granule cell neurons, may also be observed. This aberrant connection between the DG and SM could produce a reverberant excitatory circuit. However, this hypothesis cannot easily be evaluated using macroscopic or microscopic techniques. We here demonstrate that ex vivo mesoscopic MRI of surgically excised hippocampi can bridge the explanatory and analytical gap between the macro- and microscopic scales. Specifically, diffusion- and T2-weighted MRI can be integrated to visualize a cytoarchitecture that is akin to immunohistochemistry. An appropriate spatial resolution to discern individual cell layers can then be established. Processing of diffusion tensor images using tractography detects extra- and intrahippocampal connections, hence providing a unique systems view of the hippocampus and its connected regions. Here, this approach suggests that there is indeed an aberrant connection between the DG and SM, supporting the sprouting hypothesis of a reverberant excitatory network. Mesoscopic ex vivo MR imaging hence provides an exciting new avenue to study hippocampi from treatment-resistant patients and allows exploration of existing hypotheses, as well as the development of new treatment strategies based on these novel insights. Hum Brain Mapp 37:780-795, 2016. © 2015 Wiley Periodicals, Inc. PMID:26611565

  14. An infrared small target detection algorithm based on high-speed local contrast method

    NASA Astrophysics Data System (ADS)

    Cui, Zheng; Yang, Jingli; Jiang, Shouda; Li, Junbao

    2016-05-01

    Small-target detection in infrared imagery with complex backgrounds is an important task in remote sensing. It is important to improve detection capabilities such as detection rate, false alarm rate, and speed; however, current algorithms usually improve one or two of these capabilities while sacrificing the others. In this letter, a two-layer infrared (IR) small target detection algorithm inspired by the Human Visual System (HVS) is proposed to balance these capabilities. The first layer uses a high-speed, simplified local contrast method to select significant information, and the second layer uses a machine learning classifier to separate targets from background clutter. Experimental results show that the proposed algorithm achieves good performance in detection rate, false alarm rate, and speed simultaneously.
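The first-layer idea can be sketched with one simplified variant of the local contrast measure: each position is scored by the squared peak of its center cell against the brightest surrounding cell mean, so a small bright target scores high while flat clutter does not. The cell size, scoring formula, and the statistical threshold standing in for the second-layer classifier are all illustrative choices, not the paper's.

```python
import numpy as np

def local_contrast_map(img, cell=3):
    """Simplified local-contrast map: score = (max of center cell)^2 divided by
    the largest mean of the 8 neighboring cells. Written for clarity, not speed."""
    img = np.asarray(img, float)
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(cell, h - 2 * cell + 1):
        for x in range(cell, w - 2 * cell + 1):
            center = img[y:y + cell, x:x + cell]
            means = [img[y + dy * cell:y + (dy + 1) * cell,
                         x + dx * cell:x + (dx + 1) * cell].mean()
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if not (dx == 0 and dy == 0)]
            out[y + cell // 2, x + cell // 2] = center.max() ** 2 / (max(means) + 1e-9)
    return out

def detect_targets(img, cell=3, k=4.0):
    """Threshold the contrast map (a crude stand-in for the learned classifier)."""
    c = local_contrast_map(img, cell)
    return np.argwhere(c > c.mean() + k * c.std())
```

A single bright pixel on a flat background produces a sharp contrast peak at its location, while uniform regions score near their own level.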

  15. Crater detection, classification and contextual information extraction in lunar images using a novel algorithm

    NASA Astrophysics Data System (ADS)

    Vijayan, S.; Vani, K.; Sanjeevi, S.

    2013-09-01

    This study presents the development and implementation of an algorithm for the automatic detection and classification of lunar craters, along with extraction of contextual information such as ejecta and state of degradation, using SELENE panchromatic images. The algorithm works in three steps. First, it detects simple lunar craters and classifies them as round- or flat-floored using the structural profile pattern. Second, it extracts contextual information (ejecta), notes their presence if any, and associates them with the corresponding crater using an adjacency rule and Markov random field theory. Finally, the algorithm examines each detected crater and assesses its state of degradation using the intensity variation over the crater edge. We applied the algorithm to 16 technically demanding test sites, chosen to represent all possible lunar surface conditions. The crater detection algorithm was evaluated by manual analysis of its accuracy in detection, classification, and ejecta and degraded-state identification, along with a detailed qualitative assessment. The manual analysis shows agreement with the detections, while the overall statistics indicate a detection performance of Q ∼ 75% and a precision of ∼0.83. The detection and classification results reveal that simple lunar craters are dominated by the round-floor type rather than the flat-floor type. The results also indicate that the lunar surface is dominated by sub-kilometer craters of lesser depth.

  16. Scale-space point spread function based framework to boost infrared target detection algorithms

    NASA Astrophysics Data System (ADS)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2016-07-01

    Small target detection is one of the major concerns in the development of infrared surveillance systems. Detection algorithms based on Gaussian target modeling have attracted the most attention from researchers in this field; however, the lack of accurate target modeling limits the performance of this type of infrared small target detection algorithm. In this paper, the signal-to-clutter ratio (SCR) improvement mechanism based on the matched filter is described in detail, and the effect of the point spread function (PSF) on the intensity and spatial distribution of the target pixels is clarified comprehensively. A new parametric model for small infrared targets is then developed based on the PSF of the imaging system, which can be considered a matched filter. Based on this model, a new framework to boost model-based infrared target detection algorithms is presented. To demonstrate the performance of this new framework, the proposed model is adopted in the Laplacian scale-space algorithm, a well-known algorithm in the small infrared target detection field. Simulation results show that the proposed framework has better detection performance than the Gaussian one and improves the overall performance of an IRST system. Analyzed quantitatively, the proposed algorithm shows at least a 20% improvement in output SCR values in comparison with the Laplacian of Gaussian (LoG) algorithm.
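The core SCR argument, that correlating the image with the target's own shape acts as a matched filter, can be illustrated as follows. The Gaussian PSF and kernel size are assumptions for the sketch; the paper's point is precisely that the true PSF, rather than a generic Gaussian, should play this role.

```python
import numpy as np

def gaussian_psf(size=5, sigma=1.0):
    """Sample a normalized Gaussian kernel as a stand-in for the imaging PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def matched_filter(img, kernel):
    """Correlate the image with the (symmetric) kernel. Under white noise this
    is the SCR-optimal linear filter for a target with the kernel's shape."""
    img = np.asarray(img, float)
    kh, kw = kernel.shape
    pad = np.pad(img, ((kh // 2,), (kw // 2,)), mode="edge")
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = (pad[y:y + kh, x:x + kw] * kernel).sum()
    return out
```

By the Cauchy-Schwarz inequality the correlation peaks exactly where the embedded target aligns with the kernel, which is why PSF-matched kernels outperform mismatched Gaussian ones.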

  17. Test Generation Algorithm for Fault Detection of Analog Circuits Based on Extreme Learning Machine

    PubMed Central

    Zhou, Jingyu; Tian, Shulin; Yang, Chenglin; Ren, Xuelong

    2014-01-01

    This paper proposes a novel test generation algorithm based on the extreme learning machine (ELM); the algorithm is cost-effective and low-risk for an analog device under test (DUT). The method uses test patterns derived from the test generation algorithm to stimulate the DUT and then samples the DUT's output responses for fault classification and detection. The ELM-based test generation algorithm proposed in this paper contains three main innovations. First, the algorithm saves time by classifying the response space with the ELM. Second, it avoids reduced test precision when the number of impulse-response samples is reduced. Third, a new test signal generation process and test structure are presented, both of which are very simple. Finally, these improvements are confirmed in experiments. PMID:25610458
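The closed-form training that makes ELMs fast can be sketched in a few lines: the hidden layer is random and fixed, and only the output weights are solved, by least squares. This is a generic minimal ELM, not the paper's test generation pipeline.

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine classifier: random fixed hidden layer,
    output weights obtained in closed form with a pseudo-inverse."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        X = np.asarray(X, float)
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # random nonlinear features
        Y = np.eye(int(y.max()) + 1)[y]       # one-hot class targets
        self.beta = np.linalg.pinv(H) @ Y     # least-squares output weights
        return self

    def predict(self, X):
        H = np.tanh(np.asarray(X, float) @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)
```

Because training is a single pseudo-inverse rather than iterative backpropagation, classifying a response space with an ELM is cheap, which is the source of the time savings the abstract claims.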

  18. Test generation algorithm for fault detection of analog circuits based on extreme learning machine.

    PubMed

    Zhou, Jingyu; Tian, Shulin; Yang, Chenglin; Ren, Xuelong

    2014-01-01

    This paper proposes a novel test generation algorithm based on the extreme learning machine (ELM); the algorithm is cost-effective and low-risk for an analog device under test (DUT). The method uses test patterns derived from the test generation algorithm to stimulate the DUT and then samples the DUT's output responses for fault classification and detection. The ELM-based test generation algorithm proposed in this paper contains three main innovations. First, the algorithm saves time by classifying the response space with the ELM. Second, it avoids reduced test precision when the number of impulse-response samples is reduced. Third, a new test signal generation process and test structure are presented, both of which are very simple. Finally, these improvements are confirmed in experiments. PMID:25610458

  19. A new algorithm CNM-Centrality of detecting communities based on node centrality

    NASA Astrophysics Data System (ADS)

    Hu, Fang; Liu, Yuhua

    2016-03-01

    The discovery and analysis of community structure in complex networks has been a hot issue in recent years. In this paper, building on the fast greedy clustering algorithm CNM with the idea of local search, introducing node centrality, and optimally assigning the central nodes and their neighbor nodes to the correct communities, a new algorithm, CNM-Centrality, for detecting communities in complex networks is proposed. To verify the accuracy and efficiency of this algorithm, its performance is tested on several representative real-world networks and a set of computer-generated LFR-benchmark networks. The experimental results indicate that the algorithm identifies communities accurately and efficiently, and that it also achieves higher values of modularity and NMI than the CNM, Infomap, and Walktrap algorithms.
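The modularity Q used to compare these algorithms can be computed directly from Newman's definition, Q = (1/2m) * sum_ij [A_ij - k_i k_j / 2m] * delta(c_i, c_j); a minimal (unoptimized) sketch:

```python
import numpy as np

def modularity(adj, communities):
    """Newman modularity of a partition: observed within-community edge weight
    minus the weight expected under a random degree-preserving null model."""
    adj = np.asarray(adj, float)
    m2 = adj.sum()                  # 2m for an undirected adjacency matrix
    k = adj.sum(axis=1)             # node degrees
    q = 0.0
    for i in range(len(adj)):
        for j in range(len(adj)):
            if communities[i] == communities[j]:
                q += adj[i, j] - k[i] * k[j] / m2
    return q / m2
```

Two disconnected triangles split into their natural communities give the textbook value Q = 0.5, a useful sanity check for any implementation.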

  20. [A Hyperspectral Imagery Anomaly Detection Algorithm Based on Gauss-Markov Model].

    PubMed

    Gao, Kun; Liu, Ying; Wang, Li-jing; Zhu, Zhen-yu; Cheng, Hao-bo

    2015-10-01

    With the development of spectral imaging technology, hyperspectral anomaly detection is increasingly widely used in remote sensing image processing. The traditional RX anomaly detection algorithm neglects the spatial correlation of images and does not effectively reduce the data dimension, which costs too much processing time and shows low validity on hyperspectral data. Hyperspectral images follow a Gauss-Markov Random Field (GMRF) in the spatial and spectral dimensions. The inverse of the covariance matrix can be calculated directly from the Gauss-Markov parameters, which avoids the huge computation over the hyperspectral data. This paper proposes an improved RX anomaly detection algorithm based on a three-dimensional GMRF. The hyperspectral imagery is modeled with the GMRF, and the GMRF parameters are estimated with an approximate maximum likelihood method. The detection operator is constructed from the estimated GMRF parameters. The pixel under test is taken as the centre of a local optimization window, called the GMRF detection window. The anomaly score is calculated from the mean vector and inverse covariance matrix, both computed within the window, and the image is processed pixel by pixel as the GMRF window moves. The traditional RX detection algorithm, a regional hypothesis-test detection algorithm based on the GMRF, and the proposed algorithm are evaluated on AVIRIS hyperspectral data. Simulation results show that the proposed anomaly detection method improves detection efficiency and reduces the false alarm rate. Operation-time statistics for the three algorithms in the same computing environment show that the proposed algorithm reduces the operation time by 45.2%, demonstrating good computational efficiency. PMID:26904830
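
    For reference, the classic global RX detector that this work improves on scores each pixel by its Mahalanobis distance from the background statistics. The sketch below uses a tiny synthetic cube with one injected anomaly; the cube size and anomaly strength are illustrative.

```python
# Global RX anomaly detector on a synthetic hyperspectral cube.
import numpy as np

rng = np.random.default_rng(1)
H, W, B = 8, 8, 4                       # height, width, spectral bands
cube = rng.normal(0.0, 1.0, size=(H, W, B))
cube[4, 4] += 8.0                       # inject one anomalous pixel

X = cube.reshape(-1, B)
mu = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))

# Mahalanobis distance of every pixel from the global background.
d = X - mu
scores = np.einsum('ij,jk,ik->i', d, cov_inv, d).reshape(H, W)
anomaly = np.unravel_index(np.argmax(scores), scores.shape)
```

    The GMRF approach replaces the explicit covariance inversion above with an inverse built directly from the estimated Markov parameters, which is where the computational savings come from.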

  1. Multi-pattern string matching algorithms comparison for intrusion detection system

    NASA Astrophysics Data System (ADS)

    Hasan, Awsan A.; Rashid, Nur'Aini Abdul; Abdulrazzaq, Atheer A.

    2014-12-01

    Computer networks are growing exponentially and running at high speeds. With the increasing number of Internet users, computers have become the preferred target for complex attacks that require complex analyses to detect. The intrusion detection system (IDS) has become an important part of any modern network, protecting it from attacks. An IDS relies on string matching algorithms to identify network attacks, but these algorithms consume a considerable amount of IDS processing time, thereby slowing down IDS performance. Improving the multi-pattern matching algorithms can overcome this weakness and ensure that an IDS works properly. In this paper, we compare our three multi-pattern matching algorithms, MP-KR, MPH-QS, and MPH-BMH, with their corresponding original algorithms, KR, QS, and BMH, respectively. The experiments show that MPH-QS performs best among the proposed algorithms, followed by MPH-BMH, with MP-KR the slowest. MPH-QS detects a large number of signature patterns in a short time compared with the other two algorithms. This finding shows that the multi-pattern matching algorithms are more efficient in high-speed networks.
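
    For context, a sketch of the Boyer-Moore-Horspool (BMH) algorithm that MPH-BMH builds on; running one BMH scan per signature, as below, is the naive baseline that multi-pattern variants improve. The signature strings and packet text are invented for illustration.

```python
# Boyer-Moore-Horspool single-pattern search with a bad-character table.
def bmh_search(text, pattern):
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return -1
    # Shift table: distance from a character's last occurrence to the end.
    shift = {pattern[i]: m - 1 - i for i in range(m - 1)}
    i = m - 1
    while i < n:
        j, k = m - 1, i
        while j >= 0 and text[k] == pattern[j]:
            j -= 1
            k -= 1
        if j < 0:
            return k + 1                 # match starts here
        i += shift.get(text[i], m)       # skip by the bad-character rule
    return -1

# Naive multi-pattern use: one scan per IDS signature.
signatures = ["GET /etc/passwd", "cmd.exe", "DROP TABLE"]
packet = "GET /index.html HTTP/1.1 ... cmd.exe ..."
hits = [s for s in signatures if bmh_search(packet, s) != -1]
```

    A true multi-pattern matcher examines the text once for all signatures instead of once per signature, which is what makes it viable at line rate.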

  2. Comprehensive evaluation of fusion transcript detection algorithms and a meta-caller to combine top performing methods in paired-end RNA-seq data

    PubMed Central

    Liu, Silvia; Tsai, Wei-Hsiang; Ding, Ying; Chen, Rui; Fang, Zhou; Huo, Zhiguang; Kim, SungHwan; Ma, Tianzhou; Chang, Ting-Yu; Priedigkeit, Nolan Michael; Lee, Adrian V.; Luo, Jianhua; Wang, Hsei-Wei; Chung, I-Fang; Tseng, George C.

    2016-01-01

    Background: Fusion transcripts are formed by either fusion genes (DNA level) or trans-splicing events (RNA level). They have been recognized as a promising tool for diagnosing, subtyping and treating cancers. RNA-seq has become a precise and efficient standard for genome-wide screening of such aberration events. Many fusion transcript detection algorithms have been developed for paired-end RNA-seq data but their performance has not been comprehensively evaluated to guide practitioners. In this paper, we evaluated 15 popular algorithms by their precision and recall trade-off, accuracy of supporting reads and computational cost. We further combine top-performing methods for improved ensemble detection. Results: Fifteen fusion transcript detection tools were compared using three synthetic data sets under different coverage, read length, insert size and background noise, and three real data sets with selected experimental validations. No single method dominantly performed the best but SOAPfuse generally performed well, followed by FusionCatcher and JAFFA. We further demonstrated the potential of a meta-caller algorithm by combining top performing methods to re-prioritize candidate fusion transcripts with high confidence that can be followed by experimental validation. Conclusion: Our result provides insightful recommendations when applying individual tool or combining top performers to identify fusion transcript candidates. PMID:26582927
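
    A toy meta-caller in the spirit of the ensemble described above: re-prioritize fusion candidates by how many tools report them. The tool names come from the abstract, but the per-tool calls and the two-caller confidence cutoff are invented for illustration.

```python
# Vote-based meta-caller over per-tool fusion candidate sets.
from collections import Counter

calls = {
    "SOAPfuse":      {"BCR-ABL1", "TMPRSS2-ERG", "FP-1"},
    "FusionCatcher": {"BCR-ABL1", "TMPRSS2-ERG"},
    "JAFFA":         {"BCR-ABL1", "FP-2"},
}

# Count how many tools support each candidate.
votes = Counter(f for fusions in calls.values() for f in fusions)

# Rank by support; require at least two callers for high confidence.
ranked = sorted(votes, key=votes.get, reverse=True)
high_confidence = [f for f in ranked if votes[f] >= 2]
```

    Candidates reported by a single tool (here the stand-ins FP-1 and FP-2) are deprioritized, mimicking how the meta-caller filters likely false positives before experimental validation.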

  3. A novel adaptive, real-time algorithm to detect gait events from wearable sensors.

    PubMed

    Chia Bejarano, Noelia; Ambrosini, Emilia; Pedrocchi, Alessandra; Ferrigno, Giancarlo; Monticone, Marco; Ferrante, Simona

    2015-05-01

    A real-time, adaptive algorithm based on two inertial and magnetic sensors placed on the shanks was developed for gait-event detection. For each leg, the algorithm detected the Initial Contact (IC) as the minimum of the flexion/extension angle, and the End Contact (EC) and the Mid-Swing (MS) as the minimum and maximum of the angular velocity, respectively. The algorithm consisted of calibration, real-time detection, and step-by-step update. Data collected from 22 healthy subjects (aged 21 to 85 years) walking at three self-selected speeds were used to validate the algorithm against the GaitRite system. Comparable levels of accuracy and significantly lower detection delays were achieved compared with other published methods. The algorithm's robustness was tested on ten healthy subjects performing sudden speed changes and on ten stroke subjects (aged 43 to 89 years). For healthy subjects, F1-scores of 1 and mean detection delays lower than 14 ms were obtained. For stroke subjects, F1-scores of 0.998 and 0.944 were obtained for IC and EC, respectively, with mean detection delays always below 31 ms. The algorithm accurately detected gait events in real time from a heterogeneous dataset of gait patterns and paves the way for the design of closed-loop controllers for customized gait training and/or assistive devices. PMID:25069118
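
    The event rules above can be sketched with simple local-extremum checks; the synthetic signals and the bare min/max rules below are illustrative only, since the published algorithm adds calibration and step-by-step adaptive thresholds.

```python
# IC at local minima of the flexion/extension angle; EC and MS at the
# minima and maxima of the angular velocity, per the rules in the abstract.
def local_minima(x):
    return [i for i in range(1, len(x) - 1) if x[i - 1] > x[i] < x[i + 1]]

def local_maxima(x):
    return [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1]]

angle    = [10, 6, 2, 0, 3, 8, 12, 9, 4, 1, 5, 9]      # flexion/extension
velocity = [0, -3, -6, -2, 2, 6, 9, 5, -1, -5, -2, 1]  # angular velocity

ic_events = local_minima(angle)       # initial contacts
ec_events = local_minima(velocity)    # end contacts
ms_events = local_maxima(velocity)    # mid-swings
```

    In a real-time setting these checks run on a short sliding buffer, so an extremum is confirmed one or two samples after it occurs, which is consistent with the millisecond-scale delays reported.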

  4. Enhanced Detection of Multivariate Outliers Using Algorithm-Based Visual Display Techniques.

    ERIC Educational Resources Information Center

    Dickinson, Wendy B.

    This study uses an algorithm-based visual display technique (FACES) to provide enhanced detection of multivariate outliers within large-scale data sets. The FACES computer graphing algorithm (H. Chernoff, 1973) constructs a cartoon-like face, using up to 18 variables for each case. A major advantage of FACES is the ability to store and show the…

  5. A real-time implementation of an advanced sensor failure detection, isolation, and accommodation algorithm

    NASA Technical Reports Server (NTRS)

    Delaat, J. C.; Merrill, W. C.

    1983-01-01

    A sensor failure detection, isolation, and accommodation algorithm was developed which incorporates analytic sensor redundancy through software. This algorithm was implemented in a high level language on a microprocessor based controls computer. Parallel processing and state-of-the-art 16-bit microprocessors are used along with efficient programming practices to achieve real-time operation.

  6. Parallel contact detection algorithm for transient solid dynamics simulations using PRONTO3D

    SciTech Connect

    Attaway, S.W.; Hendrickson, B.A.; Plimpton, S.J.

    1996-09-01

    An efficient, scalable, parallel algorithm for treating material surface contacts in solid mechanics finite element programs has been implemented in a modular way for MIMD parallel computers. The serial contact detection algorithm that was developed previously for the transient dynamics finite element code PRONTO3D has been extended for use in parallel computation by devising a dynamic (adaptive) processor load balancing scheme.

  7. An Automatic Algorithm for Detection of Inclusions in X-ray Images of Agricultural Products

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An automatic recognition algorithm was developed and tested for detection of certain defects or contaminants in x-ray images of agricultural commodities. Testing of the algorithm on x-ray images of wheat kernels infested with larvae of the granary weevil yielded comparable results to those obtained ...

  8. Algorithms for ice halo detection in all-sky images

    NASA Astrophysics Data System (ADS)

    King, Michelle; Greenslit, Morton; Boyd, Sylke

    The effect of cirrus clouds on the radiation budget of the atmosphere depends not only on optical depth and frequency of occurrence, but also on the composition of the clouds. Ice halo phenomena signal the presence of hexagonal crystal habits. Long-term observations on frequency, duration, and type of halo appearances can give ground-based insight into the behavior of cirrus composition. We are capturing images of the entire sky at 30 second intervals using an all-sky camera. We have created a program that analyzes these images for the presence of halos. The algorithm removes the lens distortion, excludes low-level clouds from further analysis, measures the radial RGB color channel intensity, and uses this radial intensity to assess for ice halo presence. We will present our algorithms for image analysis, including removing the lens distortion and low-level clouds, as well as the algorithm to assign a halo probability. We will also present our observation results for the year 2015. Supported by HHMI and UROP.
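
    The radial-intensity step can be illustrated on a synthetic all-sky frame: average the intensity in annuli around the sun position and look for a bump above the smooth falloff. Everything here (the image model, sun position, and halo radius) is invented for illustration; the actual program also removes lens distortion and masks low-level clouds first.

```python
# Radial intensity profile of a synthetic sky image with a halo ring.
import math
from collections import defaultdict

size, cx, cy = 41, 20, 20          # image size and assumed sun position
halo_r = 12                        # radius (pixels) of a synthetic halo ring

def intensity(x, y):
    r = math.hypot(x - cx, y - cy)
    base = max(0.0, 1.0 - r / 30.0)          # bright near the sun
    ring = 0.5 if abs(r - halo_r) < 1.0 else 0.0
    return base + ring

# Bin pixels into integer-radius annuli and average their intensity.
sums, counts = defaultdict(float), defaultdict(int)
for y in range(size):
    for x in range(size):
        r = int(math.hypot(x - cx, y - cy))
        sums[r] += intensity(x, y)
        counts[r] += 1

profile = {r: sums[r] / counts[r] for r in sorted(sums)}
# A bump above the smooth falloff near r = halo_r suggests a halo.
bump = profile[halo_r] - profile[halo_r + 3]
```

    A halo-probability score can then be derived from the size of such bumps at the radii where halos are expected (e.g. the 22-degree halo).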

  9. Influence of aberrations in microholographic recording

    NASA Astrophysics Data System (ADS)

    Katayama, Ryuichi

    2015-11-01

    The influence of various types of aberrations (spherical, coma, and astigmatic) of the recording and readout beams on the readout signal in microholographic recording was investigated through a numerical simulation. The simulation conditions were a laser wavelength of 405 nm and an objective lens numerical aperture of 0.85. The tolerance of the root-mean-square (RMS) wavefront aberrations was defined as the aberration at which the normalized signal level decreased to 0.8. Among the three types of aberrations, the influence of the spherical aberration was the most significant. When both the recording and readout beams were aberrated and the signs of the aberrations were in the worst case, the tolerance of the RMS wavefront aberrations was less than half of the Maréchal criterion. Moreover, when the RMS wavefront aberrations of the recording and readout beams were within the above tolerance, bit intervals of 0.13 and 0.65 μm in the in-plane and vertical directions, respectively, which correspond to a recording density of 91 bit/μm3 (a recording capacity of 16 TB for a 120-mm-diameter optical disk having a 300-μm-thick recording layer), were shown to be feasible for confocal detection with an allowable signal-to-noise ratio.

  10. Aberrant reduction of telomere repetitive sequences in plasma cell-free DNA for early breast cancer detection

    PubMed Central

    Wu, Xi; Tanaka, Hiromi

    2015-01-01

    Excessive telomere shortening is observed in breast cancer lesions when compared to adjacent non-cancerous tissues, suggesting that telomere length may represent a key biomarker for early cancer detection. Because tumor-derived, cell-free DNA (cfDNA) is often released from cancer cells and circulates in the bloodstream, we hypothesized that breast cancer development is associated with changes in the amount of telomeric cfDNA that can be detected in the plasma. To test this hypothesis, we devised a novel, highly sensitive and specific quantitative PCR (qPCR) assay, termed telomeric cfDNA qPCR, to quantify plasma telomeric cfDNA levels. Indeed, the internal reference primers of our design correctly reflected input cfDNA amount (R2 = 0.910, P = 7.82 × 10−52), implying accuracy of this assay. We found that plasma telomeric cfDNA levels decreased with age in healthy individuals (n = 42, R2 = 0.094, P = 0.048), suggesting that cfDNA is likely derived from somatic cells in which telomere length shortens with increasing age. Our results also showed a significant decrease in telomeric cfDNA level from breast cancer patients with no prior treatment (n = 47), compared to control individuals (n = 42) (P = 4.06 × 10−8). The sensitivity and specificity for the telomeric cfDNA qPCR assay was 91.49% and 76.19%, respectively. Furthermore, the telomeric cfDNA level distinguished even the Ductal Carcinoma In Situ (DCIS) group (n = 7) from the healthy group (n = 42) (P = 1.51 × 10−3). Taken together, decreasing plasma telomeric cfDNA levels could be an informative genetic biomarker for early breast cancer detection. PMID:26356673

  11. [Tachycardia detection in implantable cardioverter-defibrillators by Sorin/LivaNova : Algorithms, pearls and pitfalls].

    PubMed

    Kolb, Christof; Ocklenburg, Rolf

    2016-09-01

    For physicians involved in the treatment of patients with implantable cardioverter-defibrillators (ICDs) the knowledge of tachycardia detection algorithms is of paramount importance. This knowledge is essential for adequate device selection during de-novo implantation, ICD replacement, and for troubleshooting during follow-up. This review describes tachycardia detection algorithms incorporated in ICDs by Sorin/LivaNova and analyses their strengths and weaknesses. PMID:27605232

  12. Combining genetic algorithm and Levenberg-Marquardt algorithm in training neural network for hypoglycemia detection using EEG signals.

    PubMed

    Nguyen, Lien B; Nguyen, Anh V; Ling, Sai Ho; Nguyen, Hung T

    2013-01-01

    Hypoglycemia is the most common but highly feared complication induced by intensive insulin therapy in patients with type 1 diabetes mellitus (T1DM). Nocturnal hypoglycemia is dangerous because sleep obscures early symptoms and potentially leads to severe episodes, which can cause seizure, coma, or even death. Hypoglycemia onset has been shown to induce early changes in electroencephalography (EEG) signals which can be detected non-invasively. In our research, EEG signals from five T1DM patients during an overnight clamp study were measured and analyzed. By applying feature extraction using the Fast Fourier Transform (FFT) and classification using neural networks, we establish that hypoglycemia can be detected efficiently using EEG signals from only two channels. This paper demonstrates that a training process combining a genetic algorithm with the Levenberg-Marquardt algorithm improves the classification results markedly, up to 75% sensitivity and 60% specificity on a separate testing set. PMID:24110953

  13. AsteroidZoo: A New Zooniverse project to detect asteroids and improve asteroid detection algorithms

    NASA Astrophysics Data System (ADS)

    Beasley, M.; Lewicki, C. A.; Smith, A.; Lintott, C.; Christensen, E.

    2013-12-01

    We present a new citizen science project: AsteroidZoo. A collaboration between Planetary Resources, Inc., the Zooniverse team, and the Catalina Sky Survey, it will bring the science of asteroid identification to the citizen scientist. Volunteer astronomers have proved to be a critical asset in the identification and characterization of asteroids, especially potentially hazardous objects. To date, these contributions have required that the volunteer possess a moderate telescope and the ability and willingness to be responsive to observing requests. Our new project will use data collected by the Catalina Sky Survey (CSS), currently the most productive asteroid survey, and make it available to anyone with sufficient interest and an internet connection. As previous work by the Zooniverse has demonstrated, citizen scientists are superb at classifying objects. Even the best automated searches require human intervention to identify new objects. These searches are optimized to reduce false positive rates and to prevent a single operator from being overloaded with requests. With access to the large number of people in the Zooniverse, we will be able to avoid that problem and instead work to produce a complete detection list. Each frame from CSS will be searched in detail, generating a large number of new detections. We will be able to evaluate the completeness of the CSS data set and potentially provide improvements to the automated pipeline. The data corpus produced by AsteroidZoo will be used as a training environment for machine learning challenges in the future. Our goals include a more complete asteroid detection algorithm and a minimum-computation program that skims the cream of the data, suitable for implementation on small spacecraft. We aim for the site to go live in fall 2013.

  14. Comparison of human observer and algorithmic target detection in nonurban forward-looking infrared imagery

    NASA Astrophysics Data System (ADS)

    Weber, Bruce A.

    2005-07-01

    We have performed an experiment that compares the performance of human observers with that of a robust algorithm for the detection of targets in difficult, nonurban forward-looking infrared imagery. Our purpose was to benchmark the comparison and document performance differences for future algorithm improvement. The scale-insensitive detection algorithm, used as a benchmark by the Night Vision Electronic Sensors Directorate for algorithm evaluation, employed a combination of contrast-like features to locate targets. Detection receiver operating characteristic curves and observer-confidence analyses were used to compare human and algorithmic responses and to gain insight into differences. The test database contained ground targets, in natural clutter, whose detectability, as judged by human observers, ranged from easy to very difficult. In general, compared with human observers, the algorithm detected most of the same targets but correlated confidence with correct detections poorly and produced many more false alarms at any useful level of performance. Though characterizing human performance was not the intent of this study, the results suggest that previous observational experience was not a strong predictor of human performance, and that combining individual human observations by majority vote significantly reduced false-alarm rates.

  15. Flight test results of failure detection and isolation algorithms for a redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Morrell, F. R.; Motyka, P. R.; Bailey, M. L.

    1990-01-01

    Flight test results for two sensor fault-tolerant algorithms developed for a redundant strapdown inertial measurement unit are presented. The inertial measurement unit (IMU) consists of four two-degree-of-freedom gyros and accelerometers mounted on the faces of a semi-octahedron. Fault tolerance is provided by edge vector test and generalized likelihood test algorithms, each of which can provide dual fail-operational capability for the IMU. To detect the wide range of failure magnitudes in inertial sensors, which provide flight-crucial information for flight control and navigation, failure detection and isolation are developed in terms of a multilevel structure. Threshold compensation techniques, developed to enhance the sensitivity of the failure detection process to navigation-level failures, are presented. Four flight tests were conducted in a commercial transport-type environment to compare and determine the performance of the failure detection and isolation methods. Dual flight processors enabled concurrent tests of the algorithms. Failure signals such as hard-over, null, or bias shift were added to the sensor outputs as single or multiple failures during the flights. Both algorithms provided timely detection and isolation of flight control level failures. The generalized likelihood test algorithm provided more timely detection of low-level sensor failures, but it produced one false isolation. Both algorithms demonstrated the capability to provide dual fail-operational performance for the skewed array of inertial sensors.

  16. An Automated Cloud-edge Detection Algorithm Using Cloud Physics and Radar Data

    NASA Technical Reports Server (NTRS)

    Ward, Jennifer G.; Merceret, Francis J.; Grainger, Cedric A.

    2003-01-01

    An automated cloud edge detection algorithm was developed and extensively tested. The algorithm uses in-situ cloud physics data measured by a research aircraft coupled with ground-based weather radar measurements to determine whether the aircraft is in or out of cloud. Cloud edges are determined when the in/out state changes, subject to a hysteresis constraint. The hysteresis constraint prevents isolated transient cloud puffs or data dropouts from being identified as cloud boundaries. The algorithm was verified by detailed manual examination of the data set in comparison to the results from application of the automated algorithm.
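
    The hysteresis constraint described above can be sketched as a small state machine: the in-cloud state only flips after the indicator stays on the other side for several consecutive samples, so transient puffs and data dropouts are not reported as edges. The window lengths and the boolean in/out indicator below are illustrative.

```python
# Cloud-edge detection with a hysteresis constraint on the in/out state.
def cloud_edges(samples, enter=3, leave=3):
    """Return indices where a sustained in/out transition begins."""
    in_cloud, run, edges = False, 0, []
    for i, in_cloud_now in enumerate(samples):
        if in_cloud_now != in_cloud:
            run += 1
            # Flip state only after `enter`/`leave` consecutive samples.
            if run >= (enter if not in_cloud else leave):
                in_cloud = in_cloud_now
                edges.append(i - run + 1)   # edge at the start of the run
                run = 0
        else:
            run = 0
    return edges

# One transient puff (length 1) is ignored; the sustained cloud is kept.
samples = [False]*5 + [True] + [False]*4 + [True]*6 + [False]*5
edges = cloud_edges(samples)
```

    The single-sample puff at index 5 never satisfies the hysteresis condition, so only the sustained entry and exit are reported as cloud boundaries.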

  17. A real-time FORTRAN implementation of a sensor failure detection, isolation and accommodation algorithm

    NASA Technical Reports Server (NTRS)

    Delaat, J. C.

    1984-01-01

    An advanced sensor failure detection, isolation, and accommodation algorithm has been developed by NASA for the F100 turbofan engine. The algorithm takes advantage of the analytical redundancy of the sensors to improve the reliability of the sensor set. The method enables the controls computer to determine when a sensor failure has occurred without the help of redundant hardware sensors in the control system, and the controls computer provides an estimate of the correct value of the output of the failed sensor. The algorithm has been programmed in FORTRAN on a real-time microprocessor-based controls computer. A detailed description of the algorithm and its implementation on a microprocessor is given.

  18. Competitive evaluation of failure detection algorithms for strapdown redundant inertial instruments

    NASA Technical Reports Server (NTRS)

    Wilcox, J. C.

    1973-01-01

    Algorithms for failure detection, isolation, and correction of redundant inertial instruments in the strapdown dodecahedron configuration are competitively evaluated in a digital computer simulation that subjects them to identical environments. Their performance is compared in terms of orientation and inertial velocity errors and in terms of missed and false alarms. The algorithms appear in the simulation program in modular form, so that they may be readily extracted for use elsewhere. The simulation program and its inputs and outputs are described. The algorithms, along with an eighth algorithm that was not simulated, are also compared analytically to show the relationships among them.

  19. A low-power fall detection algorithm based on triaxial acceleration and barometric pressure.

    PubMed

    Wang, Changhong; Narayanan, Michael R; Lord, Stephen R; Redmond, Stephen J; Lovell, Nigel H

    2014-01-01

    This paper proposes a low-power fall detection algorithm based on triaxial accelerometry and barometric pressure signals. The algorithm dynamically adjusts the sampling rate of an accelerometer and manages data transmission between sensors and a controller to reduce power consumption. The results of simulation show that the sensitivity and specificity of the proposed fall detection algorithm are both above 96% when applied to a previously collected dataset comprising 20 young actors performing a combination of simulated falls and activities of daily living. This level of performance can be achieved despite a 10.9% reduction in power consumption. PMID:25570023
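
    An illustrative two-signal check in the spirit of the algorithm above: a fall is flagged only when an acceleration impact spike coincides with a drop in altitude inferred from barometric pressure. The thresholds and signal values are invented; the published algorithm additionally adapts the accelerometer sampling rate to save power.

```python
# Combined accelerometry + barometric pressure fall check.
def detect_fall(accel_g, pressure_hpa, impact_g=2.5, rise_hpa=0.05):
    """Return the sample index of a detected fall, or -1."""
    for i in range(1, len(accel_g)):
        impact = accel_g[i] > impact_g
        # Falling lowers altitude, so barometric pressure rises slightly.
        pressure_rise = pressure_hpa[i] - pressure_hpa[i - 1] > rise_hpa
        if impact and pressure_rise:
            return i
    return -1

accel    = [1.0, 1.1, 0.3, 3.2, 1.0]         # free-fall dip then impact
pressure = [1013.00, 1013.00, 1013.02, 1013.10, 1013.10]
fall_at = detect_fall(accel, pressure)
```

    Requiring both signals to agree is what keeps the specificity high: an impact without an altitude change (e.g. sitting down hard) is not flagged.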

  20. RS slope detection algorithm for extraction of heart rate from noisy, multimodal recordings.

    PubMed

    Gierałtowski, Jan; Ciuchciński, Kamil; Grzegorczyk, Iga; Kośna, Katarzyna; Soliński, Mateusz; Podziemski, Piotr

    2015-08-01

    Current gold-standard algorithms for heart beat detection do not work properly in the case of high noise levels and do not make use of multichannel data collected by modern patient monitors. The main idea behind the method presented in this paper is to detect the most prominent part of the QRS complex, i.e. the RS slope. We localize the RS slope based on the consistency of its characteristics, i.e. adequate, automatically determined amplitude and duration. It is a very simple and non-standard, yet very effective, solution. Minor data pre-processing and parameter adaptations make our algorithm fast and noise-resistant. As one of a few algorithms in the PhysioNet/Computing in Cardiology Challenge 2014, our algorithm uses more than two channels (i.e. ECG, BP, EEG, EOG and EMG). Simple fundamental working rules make the algorithm universal: it is able to work on all of these channels with no or only little changes. The final result of our algorithm in phase III of the Challenge was 86.38 (88.07 for a 200 record test set), which gave us fourth place. Our algorithm shows that current standards for heart beat detection could be improved significantly by taking a multichannel approach. This is an open-source algorithm available through the PhysioNet library. PMID:26218763
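
    The RS-slope idea can be sketched very simply: the R-to-S downstroke is the steepest sustained negative slope in the beat, so scanning for large one-step drops locates QRS complexes. The trace and the drop threshold below are synthetic stand-ins; the published algorithm determines the amplitude and duration criteria automatically and adapts them per channel.

```python
# Locate beats by the steep R-to-S downstroke of the QRS complex.
def rs_slope_candidates(ecg, min_drop=0.6):
    """Indices where the signal falls by more than min_drop in one step."""
    return [i for i in range(1, len(ecg))
            if ecg[i - 1] - ecg[i] > min_drop]

# Flat baseline, a QRS-like spike (R peak then deep S wave), then baseline.
ecg = [0.0, 0.05, 0.1, 1.2, -0.8, -0.1, 0.0, 0.05]
beats = rs_slope_candidates(ecg)
```

    Because the detector keys on a slope rather than an absolute amplitude, it tolerates baseline wander and can be reused on non-ECG channels (BP, EEG, EOG, EMG) with little change, as the abstract notes.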

  1. A group filter algorithm for sea mine detection

    NASA Astrophysics Data System (ADS)

    Cobb, J. Tory; An, Myoung; Tolimieri, Richard

    2005-06-01

    Automatic detection of sea mines in coastal regions is a difficult task due to the highly variable sea bottom conditions present in the underwater environment. Detection systems must be able to discriminate objects which vary in size, shape, and orientation from naturally occurring and man-made clutter. Additionally, these automated systems must be computationally efficient to be incorporated into unmanned underwater vehicle (UUV) sensor systems characterized by high sensor data rates and limited processing abilities. Using noncommutative group harmonic analysis, a fast, robust sea mine detection system is created. A family of unitary image transforms associated to noncommutative groups is generated and applied to side scan sonar image files supplied by Naval Surface Warfare Center Panama City (NSWC PC). These transforms project key image features, geometrically defined structures with orientations, and localized spectral information into distinct orthogonal components or feature subspaces of the image. The performance of the detection system is compared against the performance of an independent detection system in terms of probability of detection (Pd) and probability of false alarm (Pfa).

  2. Planet Detection Algorithms for the Terrestrial Planet Finder-C

    NASA Astrophysics Data System (ADS)

    Kasdin, N. J.; Braems, I.

    2005-12-01

    Critical to mission planning for the terrestrial planet finder coronagraph (TPF-C) is the ability to estimate integration times for planet detection. This detection is complicated by the presence of background noise due to local and exo-zodiacal dust, by residual speckle due to optical errors, and by the dependence of the PSF shape on the specific coronagraph. In this paper we examine in detail the use of PSF fitting (matched filtering) for planet detection, derive probabilistic bounds for the signal-to-noise ratio by balancing missed detection and false alarm rates, and demonstrate that this is close to the optimal linear detection technique. We then compare to a Bayesian detection approach and show that for very low background the Bayesian method offers integration time improvements, but rapidly approaches the PSF fitting result for reasonable levels of background noise. We confirm these results via Monte Carlo simulations. This work was supported under a grant from the Jet Propulsion Laboratory and by a fellowship from the Institut National de Recherche en Informatique et Automatique (INRIA).
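
    A minimal 1-D sketch of the PSF-fitting (matched-filter) detection discussed above: correlate the data with the known PSF template and take the peak response. The PSF width, planet amplitude, and noise level are illustrative, not TPF-C values.

```python
# Matched-filter detection of a faint source in noisy 1-D data.
import numpy as np

rng = np.random.default_rng(2)
n = 64
psf = np.exp(-0.5 * (np.arange(-3, 4) / 1.0) ** 2)
psf /= np.linalg.norm(psf)                 # unit-norm template

signal = np.zeros(n)
signal[30:37] += 8.0 * psf                 # faint planet centred near index 33
image = signal + rng.normal(0.0, 1.0, n)   # add background noise

# Matched filter: correlate the data with the PSF template.
response = np.correlate(image, psf, mode='same')
detection = int(np.argmax(response))
```

    With a unit-norm template and white noise, the peak response divided by the noise standard deviation is the detection SNR, which is the quantity the integration-time bounds in the paper are built on.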

  3. Evaluation of detection algorithms for perpendicular recording channels with intertrack interference

    NASA Astrophysics Data System (ADS)

    Tan, Weijun; Cruz, J. R.

    2005-02-01

    Channel detection algorithms for handling intertrack interference (ITI) in perpendicular magnetic recording channels are studied in this paper. The goal is to optimize channel detection to attain the best possible performance for a practical system. Two channel detection models, namely, the single-track model and the joint-track model are evaluated using information rate analysis as well as simulations. Numerical results show that joint-track detection may be needed when ITI is severe.

  4. Combined Dust Detection Algorithm by Using MODIS Infrared Channels over East Asia

    NASA Technical Reports Server (NTRS)

    Park, Sang Seo; Kim, Jhoon; Lee, Jaehwa; Lee, Sukjo; Kim, Jeong Soo; Chang, Lim Seok; Ou, Steve

    2014-01-01

    A new dust detection algorithm is developed by combining the results of multiple dust detection methods using IR channels onboard the MODerate resolution Imaging Spectroradiometer (MODIS). Brightness Temperature Difference (BTD) between two wavelength channels has been used widely in previous dust detection methods. However, BTD methods have limitations in identifying the offset values of the BTD to discriminate clear-sky areas. The current algorithm overcomes the disadvantages of previous dust detection methods by considering the Brightness Temperature Ratio (BTR) values of the dual wavelength channels with a 30-day composite, the optical properties of the dust particles, the variability of surface properties, and the cloud contamination. Therefore, the current algorithm shows improvements in detecting the dust-loaded region over land during daytime. Finally, the confidence index of the current dust algorithm is shown in 10 × 10 pixels of the MODIS observations. From January to June 2006, the results of the current algorithm are within 64 to 81% of those found using the fine mode fraction (FMF) and aerosol index (AI) from the MODIS and Ozone Monitoring Instrument (OMI). The agreement between the results of the current algorithm and the OMI AI over non-polluted land also ranges from 60 to 67%, to avoid errors due to anthropogenic aerosol. In addition, the developed algorithm shows statistically significant results at four AErosol RObotic NETwork (AERONET) sites in East Asia.
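
    A toy version of the BTD building block the combined algorithm starts from, assuming the common split-window pairing in which airborne dust depresses the ~11 μm brightness temperature relative to ~12 μm, so BT(11) - BT(12) goes negative over dust. The pixel values and the offset threshold are invented; choosing that offset robustly is exactly the limitation the abstract says BTD methods have.

```python
# Split-window brightness temperature difference (BTD) dust test.
def is_dust(bt11, bt12, threshold=-0.5):
    """Flag a pixel as dusty when BT(11um) - BT(12um) is strongly negative."""
    return (bt11 - bt12) < threshold

pixels = [
    {"name": "clear", "bt11": 290.0, "bt12": 288.5},   # positive BTD
    {"name": "dust",  "bt11": 285.0, "bt12": 286.2},   # negative BTD
]
flags = {p["name"]: is_dust(p["bt11"], p["bt12"]) for p in pixels}
```

    The combined algorithm supplements this single test with BTR values, a 30-day composite, surface variability, and cloud screening before assigning a confidence index.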

  5. Identifying time measurement tampering in the traversal time and hop count analysis (TTHCA) wormhole detection algorithm.

    PubMed

    Karlsson, Jonny; Dooley, Laurence S; Pulkkis, Göran

    2013-01-01

    Traversal time and hop count analysis (TTHCA) is a recent wormhole detection algorithm for mobile ad hoc networks (MANET) which provides enhanced detection performance against all wormhole attack variants and network types. TTHCA involves each node measuring the processing time of routing packets during the route discovery process and then delivering the measurements to the source node. In a participation mode (PM) wormhole where malicious nodes appear in the routing tables as legitimate nodes, the time measurements can potentially be altered so preventing TTHCA from successfully detecting the wormhole. This paper analyses the prevailing conditions for time tampering attacks to succeed for PM wormholes, before introducing an extension to the TTHCA detection algorithm called ∆T Vector which is designed to identify time tampering, while preserving low false positive rates. Simulation results confirm that the ∆T Vector extension is able to effectively detect time tampering attacks, thereby providing an important security enhancement to the TTHCA algorithm. PMID:23686143

  6. Identifying Time Measurement Tampering in the Traversal Time and Hop Count Analysis (TTHCA) Wormhole Detection Algorithm

    PubMed Central

    Karlsson, Jonny; Dooley, Laurence S.; Pulkkis, Göran

    2013-01-01

    Traversal time and hop count analysis (TTHCA) is a recent wormhole detection algorithm for mobile ad hoc networks (MANET) which provides enhanced detection performance against all wormhole attack variants and network types. TTHCA involves each node measuring the processing time of routing packets during the route discovery process and then delivering the measurements to the source node. In a participation mode (PM) wormhole where malicious nodes appear in the routing tables as legitimate nodes, the time measurements can potentially be altered, thus preventing TTHCA from successfully detecting the wormhole. This paper analyses the prevailing conditions for time tampering attacks to succeed for PM wormholes, before introducing an extension to the TTHCA detection algorithm called ΔT Vector which is designed to identify time tampering, while preserving low false positive rates. Simulation results confirm that the ΔT Vector extension is able to effectively detect time tampering attacks, thereby providing an important security enhancement to the TTHCA algorithm. PMID:23686143

  7. Feature optimization in chemometric algorithms for explosives detection

    NASA Astrophysics Data System (ADS)

    Pinkham, Daniel W.; Bonick, James R.; Woodka, Marc D.

    2012-06-01

    This paper details the use of a genetic algorithm (GA) as a method to preselect spectral feature variables for chemometric algorithms, using spectroscopic data gathered on explosive threat targets. The GA was applied to laser-induced breakdown spectroscopy (LIBS) and ultraviolet Raman spectroscopy (UVRS) data, in which the spectra consisted of approximately 10000 and 1000 distinct spectral values, respectively. The GA-selected variables were examined using two chemometric techniques: multi-class linear discriminant analysis (LDA) and support vector machines (SVM), and the performance from LDA and SVM was fed back to the GA through a fitness function evaluation. In each case, an optimal selection of features was achieved within 20 generations of the GA, with few improvements thereafter. The GA selected chemically significant signatures, such as oxygen and hydrogen peaks from LIBS spectra and characteristic Raman shifts for AN, TNT, and PETN. The successes documented herein suggest that this GA approach could be useful in analyzing spectroscopic data in complex environments, where the discriminating features of desired targets are not yet fully understood.
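
    The GA-over-feature-masks loop described above can be sketched generically. This is a toy illustration under assumed operators (tournament-free elitist selection, single-point crossover, one-bit mutation); the paper's actual GA configuration and its LDA/SVM fitness functions are not specified in the abstract, so `fitness` here is any user-supplied scorer of a binary mask.

```python
import random

def ga_select(n_features, fitness, pop=20, gens=20, seed=0):
    """Toy genetic algorithm over binary feature-selection masks.

    Individuals are tuples of 0/1 flags (1 = keep that spectral variable).
    Each generation keeps the top half (elitism), breeds the rest by
    single-point crossover, and flips one random bit per child.
    Requires n_features >= 2.
    """
    rng = random.Random(seed)
    people = [tuple(rng.randint(0, 1) for _ in range(n_features))
              for _ in range(pop)]
    for _ in range(gens):
        parents = sorted(people, key=fitness, reverse=True)[:pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)
            child = list(a[:cut] + b[cut:])
            child[rng.randrange(n_features)] ^= 1  # mutate one bit
            children.append(tuple(child))
        people = parents + children
    return max(people, key=fitness)
```

    Because the top half survives each generation, the best mask found is never lost, which mirrors the abstract's observation that fitness plateaus after enough generations.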

  8. A small dim infrared maritime target detection algorithm based on local peak detection and pipeline-filtering

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Dong, Lili; Zhao, Ming; Xu, Wenhai

    2015-12-01

    In order to realize accurate detection of small dim infrared maritime targets, this paper proposes a target detection algorithm based on local peak detection and pipeline-filtering. The method first extracts suspected targets through local peak detection and removes most non-target peaks with a self-adaptive threshold process. Pipeline-filtering is then used to eliminate residual interference so that only real targets are retained. The experimental results prove that this method has high target detection performance, and its missed detection rate and false alarm rate can basically meet practical requirements.
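
    The local-peak-plus-adaptive-threshold stage can be sketched as below. The particular threshold (mean plus a multiple of the mean absolute deviation) is an assumption standing in for the paper's unspecified self-adaptive rule, and the cross-frame pipeline-filtering step is omitted.

```python
def local_peaks(img, win=1, k=2.0):
    """Return (row, col) of pixels that are strict local maxima in a
    (2*win+1)^2 neighborhood and exceed an adaptive brightness threshold.

    Threshold = frame mean + k * mean absolute deviation, a simple
    stand-in for a self-adaptive threshold; survivors would then be
    confirmed across frames by pipeline-filtering.
    """
    flat = [v for row in img for v in row]
    mean = sum(flat) / len(flat)
    spread = sum(abs(v - mean) for v in flat) / len(flat)
    thr = mean + k * spread
    h, w = len(img), len(img[0])
    peaks = []
    for i in range(h):
        for j in range(w):
            v = img[i][j]
            if v <= thr:
                continue
            neigh = [img[a][b]
                     for a in range(max(0, i - win), min(h, i + win + 1))
                     for b in range(max(0, j - win), min(w, j + win + 1))
                     if (a, b) != (i, j)]
            if all(v > n for n in neigh):
                peaks.append((i, j))
    return peaks
```

    On a mostly dark frame with one bright pixel, only that pixel survives both the peak test and the adaptive threshold.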

  9. In-depth performance analysis of an EEG based neonatal seizure detection algorithm

    PubMed Central

    Mathieson, S.; Rennie, J.; Livingstone, V.; Temko, A.; Low, E.; Pressler, R.M.; Boylan, G.B.

    2016-01-01

    Objective To describe a novel neurophysiology based performance analysis of automated seizure detection algorithms for neonatal EEG to characterize features of detected and non-detected seizures and causes of false detections to identify areas for algorithmic improvement. Methods EEGs of 20 term neonates were recorded (10 seizure, 10 non-seizure). Seizures were annotated by an expert and characterized using a novel set of 10 criteria. ANSeR seizure detection algorithm (SDA) seizure annotations were compared to the expert to derive detected and non-detected seizures at three SDA sensitivity thresholds. Differences in seizure characteristics between groups were compared using univariate and multivariate analysis. False detections were characterized. Results The expert detected 421 seizures. The SDA at thresholds 0.4, 0.5, 0.6 detected 60%, 54% and 45% of seizures. At all thresholds, multivariate analyses demonstrated that the odds of detecting seizure increased with 4 criteria: seizure amplitude, duration, rhythmicity and number of EEG channels involved at seizure peak. Major causes of false detections included respiration and sweat artefacts or a highly rhythmic background, often during intermediate sleep. Conclusion This rigorous analysis allows estimation of how key seizure features are exploited by SDAs. Significance This study resulted in a beta version of ANSeR with significantly improved performance. PMID:27072097

  10. Detection of aberrant methylation of a six-gene panel in serum DNA for diagnosis of breast cancer

    PubMed Central

    Li, Junnan; Li, Xiaobo; Wang, Dong; Su, Yonghui; Niu, Ming; Zhong, Zhenbin; Wang, Ji; Zhang, Xianyu; Kang, Wenli; Pang, Da

    2016-01-01

    Detection of breast cancer at an early stage is the key to successful treatment and improvement of outcome. However, the limitations of mammography are well recognized, especially for women with premenopausal breast cancer. Novel approaches to breast cancer screening are necessary, especially in the developing world where mammography is not feasible. In this study, we examined the promoter methylation of six genes (SFN, P16, hMLH1, HOXD13, PCDHGB7 and RASSF1a) in circulating free DNA (cfDNA) extracted from serum. We used a high-throughput DNA methylation assay (MethyLight) to examine serum from 749 cases including breast cancer patients, patients with benign breast diseases and healthy women. The six-gene methylation panel test achieved 79.6% and 82.4% sensitivity with a specificity of 72.4% and 78.1% in diagnosis of breast cancer when compared with healthy and benign disease controls, respectively. Moreover, the methylation panel positive group showed significant differences in the following independent variables: (a) involvement of family history of tumors; (b) a low proliferative index, Ki-67; (c) high ratios in luminal subtypes. Additionally, the panel also detected some breast cancer cases that were missed by mammography or ultrasound. These data suggest that epigenetic markers in serum have potential for diagnosis of breast cancer. PMID:26918343
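
    The sensitivity and specificity figures quoted above are simple ratios over confusion-matrix counts; a minimal helper makes the arithmetic explicit (the counts in the usage example are hypothetical, not the study's data).

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP).

    These are the two quantities reported for the six-gene panel
    (e.g. 79.6% sensitivity at 72.4% specificity vs healthy controls).
    """
    return tp / (tp + fn), tn / (tn + fp)
```

    For instance, 80 true positives with 20 false negatives, and 70 true negatives with 30 false positives, give 80% sensitivity at 70% specificity.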

  11. Detection of aberrant methylation of a six-gene panel in serum DNA for diagnosis of breast cancer.

    PubMed

    Shan, Ming; Yin, Huizi; Li, Junnan; Li, Xiaobo; Wang, Dong; Su, Yonghui; Niu, Ming; Zhong, Zhenbin; Wang, Ji; Zhang, Xianyu; Kang, Wenli; Pang, Da

    2016-04-01

    Detection of breast cancer at an early stage is the key to successful treatment and improvement of outcome. However, the limitations of mammography are well recognized, especially for women with premenopausal breast cancer. Novel approaches to breast cancer screening are necessary, especially in the developing world where mammography is not feasible. In this study, we examined the promoter methylation of six genes (SFN, P16, hMLH1, HOXD13, PCDHGB7 and RASSF1a) in circulating free DNA (cfDNA) extracted from serum. We used a high-throughput DNA methylation assay (MethyLight) to examine serum from 749 cases including breast cancer patients, patients with benign breast diseases and healthy women. The six-gene methylation panel test achieved 79.6% and 82.4% sensitivity with a specificity of 72.4% and 78.1% in diagnosis of breast cancer when compared with healthy and benign disease controls, respectively. Moreover, the methylation panel positive group showed significant differences in the following independent variables: (a) involvement of family history of tumors; (b) a low proliferative index, Ki-67; (c) high ratios in luminal subtypes. Additionally, the panel also detected some breast cancer cases that were missed by mammography or ultrasound. These data suggest that epigenetic markers in serum have potential for diagnosis of breast cancer. PMID:26918343

  12. An algorithm for image clusters detection and identification based on color for an autonomous mobile robot

    SciTech Connect

    Uy, D.L.

    1996-02-01

    An algorithm for detection and identification of image clusters or "blobs" based on color information for an autonomous mobile robot is developed. The input image data are first processed using a crisp color fuzzifier, a binary smoothing filter, and a median filter. The processed image data are then input to the image cluster detection and identification program. The program employs the concept of an "elastic rectangle" that stretches in such a way that the whole blob is finally enclosed in a rectangle. A C program was developed to test the algorithm. The algorithm was tested only on image data of 8x8 size with different numbers of blobs in them. The algorithm works very well in detecting and identifying image clusters.
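
    The "elastic rectangle" idea, a bounding box that stretches until it encloses a whole blob, can be sketched with a flood fill. This is an illustration of the concept in Python rather than the report's C program; 4-connectivity is an assumption.

```python
def elastic_rectangles(img):
    """Grow a bounding rectangle around each connected blob of 1-pixels.

    Flood fill collects a blob's pixels while the rectangle stretches to
    enclose every visited pixel.  Returns one (rmin, cmin, rmax, cmax)
    tuple per blob, in scan order; 4-connectivity assumed.
    """
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for i in range(h):
        for j in range(w):
            if img[i][j] != 1 or seen[i][j]:
                continue
            stack, box = [(i, j)], [i, j, i, j]
            seen[i][j] = True
            while stack:
                r, c = stack.pop()
                box = [min(box[0], r), min(box[1], c),
                       max(box[2], r), max(box[3], c)]  # stretch rectangle
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    a, b = r + dr, c + dc
                    if 0 <= a < h and 0 <= b < w and img[a][b] == 1 \
                            and not seen[a][b]:
                        seen[a][b] = True
                        stack.append((a, b))
            boxes.append(tuple(box))
    return boxes
```

    On a small binary image with two blobs, the function returns one stretched rectangle per blob.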

  13. Multiobjective biogeography based optimization algorithm with decomposition for community detection in dynamic networks

    NASA Astrophysics Data System (ADS)

    Zhou, Xu; Liu, Yanheng; Li, Bin; Sun, Geng

    2015-10-01

    Identifying community structures in a static network misses the opportunity to capture evolutionary patterns, so community detection in dynamic networks has attracted many researchers. In this paper, a multiobjective biogeography based optimization algorithm with decomposition (MBBOD) is proposed to solve the community detection problem in dynamic networks. In the proposed algorithm, the decomposition mechanism is adopted to simultaneously optimize two evaluation objectives, modularity and normalized mutual information, which measure the quality of the community partitions and the temporal cost respectively. A novel sorting strategy for multiobjective biogeography based optimization is presented for comparing the quality of habitats to obtain species counts. In addition, problem-specific migration and mutation models are introduced to improve the effectiveness of the new algorithm. Experimental results on both synthetic and real networks demonstrate that our algorithm is effective and promising, and it can detect communities more accurately in dynamic networks than DYNMOGA and FaceNet.

  14. A Region Tracking-Based Vehicle Detection Algorithm in Nighttime Traffic Scenes

    PubMed Central

    Wang, Jianqiang; Sun, Xiaoyan; Guo, Junbin

    2013-01-01

    Detection of preceding vehicles in nighttime traffic scenes is an important part of the advanced driver assistance system (ADAS). This paper proposes a region tracking-based vehicle detection algorithm via image processing techniques. First, the brightness of the taillights during nighttime is used as the typical feature, and we use an existing global detection algorithm to detect and pair the taillights. When the vehicle is detected, a time series analysis model is introduced to predict vehicle positions and the possible region (PR) of the vehicle in the next frame. Then, the vehicle is only detected in the PR. This reduces the detection time and avoids false pairing between bright spots in the PR and bright spots outside the PR. Additionally, we present a threshold updating method to make the thresholds adaptive. Finally, experimental studies are provided to demonstrate the application and substantiate the superiority of the proposed algorithm. The results show that the proposed algorithm can simultaneously reduce both the false negative detection rate and the false positive detection rate.

  15. A novel algorithm for detection of precipitation in tropical regions using PMW radiometers

    NASA Astrophysics Data System (ADS)

    Casella, D.; Panegrossi, G.; Sanò, P.; Milani, L.; Petracca, M.; Dietrich, S.

    2015-03-01

    A novel algorithm for the detection of precipitation is described and tested. The algorithm is applicable to any modern passive microwave radiometer on board polar orbiting satellites independent of the observation geometry and channel frequency assortment. The algorithm is based on the application of canonical correlation analysis and on the definition of a threshold to be applied to the resulting linear combination of the brightness temperatures in all available channels. The algorithm has been developed using a 2-year data set of co-located Special Sensor Microwave Imager/Sounder (SSMIS) and Tropical Rainfall Measuring Mission precipitation radar (TRMM-PR) measurements and Advanced Microwave Sounding Unit (AMSU) Microwave Humidity Sounder and TRMM-PR measurements. This data set was partitioned into four classes depending on the background surface emissivity (vegetated land, arid land, ocean, and coast), with the same procedure applied for each surface class. In this paper we describe the procedure and evaluate the results in comparison with many well-known algorithms for the detection of precipitation. The algorithm shows a small rate of false alarms and superior detection capability; it can efficiently detect (probability of detection between 0.55 and 0.71) minimum rain rates varying from 0.14 mm h-1 (AMSU over ocean) to 0.41 mm h-1 (SSMIS over coast), with the remarkable result of 0.25 mm h-1 over arid land surfaces.
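
    At screening time the method reduces to a linear combination of channel brightness temperatures compared against a per-surface-class threshold. The sketch below shows that final step only; the weights and threshold (derived offline via canonical correlation analysis in the paper) are placeholders here.

```python
def rain_flag(tb, weights, threshold):
    """Flag precipitation when the trained linear combination of
    brightness temperatures exceeds a per-surface-class threshold.

    tb:       brightness temperatures, one per channel (K).
    weights:  CCA-derived coefficients for this surface class (assumed).
    """
    score = sum(w * t for w, t in zip(weights, tb))
    return score > threshold
```

    With hypothetical two-channel weights `[0.5, -0.5]` and threshold 5.0, a large channel difference trips the flag while a small one does not.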

  16. Robust pupil center detection using a curvature algorithm

    NASA Technical Reports Server (NTRS)

    Zhu, D.; Moore, S. T.; Raphan, T.; Wall, C. C. (Principal Investigator)

    1999-01-01

    Determining the pupil center is fundamental for calculating eye orientation in video-based systems. Existing techniques are error prone and not robust because eyelids, eyelashes, corneal reflections or shadows in many instances occlude the pupil. We have developed a new algorithm which utilizes curvature characteristics of the pupil boundary to eliminate these artifacts. The pupil center is computed based solely on points related to the pupil boundary. For each boundary point, a curvature value is computed. Occlusion of the boundary induces characteristic peaks in the curvature function. Curvature values for normal pupil sizes were determined, and a threshold was found which, together with heuristics, discriminated normal from abnormal curvature. The remaining boundary points were fit with an ellipse using a least squares error criterion. The center of the ellipse is an estimate of the pupil center. This technique is robust and accurately estimates the pupil center even when less than 40% of the pupil boundary points are visible.
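
    The curvature-thresholding step can be sketched with a discrete turning-angle curvature. This is an illustrative assumption (the paper does not specify its curvature formula), and the subsequent least-squares ellipse fit is omitted.

```python
import math

def filter_boundary(points, max_curv=0.6):
    """Drop closed-boundary points whose discrete turning angle spikes.

    Occlusions (eyelids, reflections) bend the sampled pupil boundary
    sharply, producing curvature peaks; points whose turning angle
    exceeds `max_curv` radians are discarded before ellipse fitting.
    """
    kept = []
    n = len(points)
    for i in range(n):
        x0, y0 = points[i - 1]
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        # wrap the heading change into (-pi, pi] and take its magnitude
        turn = abs((a2 - a1 + math.pi) % (2 * math.pi) - math.pi)
        if turn <= max_curv:
            kept.append(points[i])
    return kept
```

    A smoothly sampled circular boundary (e.g. a regular octagon, turning ~0.785 rad per vertex) is fully retained under a loose threshold and fully rejected under a tight one, showing how the threshold separates normal from abnormal curvature.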

  17. Polarization Lidar Liquid Cloud Detection Algorithm for Winter Mountain Storms

    NASA Technical Reports Server (NTRS)

    Sassen, Kenneth; Zhao, Hongjie

    1992-01-01

    We have collected an extensive polarization lidar dataset from elevated sites in the Tushar Mountains of Utah in support of winter storm cloud seeding research and experiments. Our truck-mounted ruby lidar collected zenith, dual-polarization lidar data through a roof window equipped with a wiper system to prevent snowfall accumulation. Lidar returns were collected at a rate of one shot every 1 to 5 min during declared storm periods over the 1985 and 1987 mid-Jan. to mid-Mar. field seasons. The mid-barrier remote sensor field site was located at 2.57 km MSL. Of chief interest to weather modification efforts are the heights of supercooled liquid water (SLW) clouds, which must be known to assess their 'seedability' (i.e., temperature and height suitability for artificially increasing snowfall). We are currently re-examining our entire dataset to determine the climatological properties of SLW clouds in winter storms using an autonomous computer algorithm.

  18. An unsupervised learning algorithm for fatigue crack detection in waveguides

    NASA Astrophysics Data System (ADS)

    Rizzo, Piervincenzo; Cammarata, Marcello; Dutta, Debaditya; Sohn, Hoon; Harries, Kent

    2009-02-01

    Ultrasonic guided waves (UGWs) are a useful tool in structural health monitoring (SHM) applications that can benefit from built-in transduction, moderately large inspection ranges, and high sensitivity to small flaws. This paper describes an SHM method based on UGWs and outlier analysis devoted to the detection and quantification of fatigue cracks in structural waveguides. The method combines the advantages of UGWs with the outcomes of the discrete wavelet transform (DWT) to extract defect-sensitive features aimed at performing a multivariate diagnosis of damage. In particular, the DWT is exploited to generate a set of relevant wavelet coefficients to construct a uni-dimensional or multi-dimensional damage index vector. The vector is fed to an outlier analysis to detect anomalous structural states. The general framework presented in this paper is applied to the detection of fatigue cracks in a steel beam. The probing hardware consists of a National Instruments PXI platform that controls the generation and detection of the ultrasonic signals by means of piezoelectric transducers made of lead zirconate titanate. The effectiveness of the proposed approach to diagnose the presence of defects as small as a few per cent of the waveguide cross-sectional area is demonstrated.
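
    The feature-extraction stage, DWT coefficients condensed into a damage index, can be sketched with a one-level Haar transform. The Haar wavelet and the energy-based index are illustrative choices; the paper does not state which wavelet or index definition it uses, and the outlier-analysis stage is omitted.

```python
def haar_dwt(signal):
    """One-level Haar DWT: (approximation, detail) coefficient lists.

    Detail coefficients capture abrupt, defect-sensitive signal content;
    in the paper's framework such coefficients feed the damage index.
    Requires an even-length signal.
    """
    root2 = 2 ** 0.5
    a = [(signal[i] + signal[i + 1]) / root2
         for i in range(0, len(signal) - 1, 2)]
    d = [(signal[i] - signal[i + 1]) / root2
         for i in range(0, len(signal) - 1, 2)]
    return a, d

def damage_index(detail):
    """Scalar damage index: energy of the detail coefficients."""
    return sum(x * x for x in detail)
```

    A smooth signal yields zero detail energy, while an alternating signal concentrates its energy in the detail band; an outlier test on such indices would then flag anomalous structural states.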

  19. Improved egg crack detection algorithm for modified pressure imaging system

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Shell eggs with microcracks are often undetected during egg grading processes. In the past, a modified pressure imaging system was developed to detect eggs with microcracks without adversely affecting the quality of normal intact eggs. The basic idea of the modified pressure imaging system was to ap...

  20. A novel algorithm for automatic arrays detection in a layout

    NASA Astrophysics Data System (ADS)

    Shafee, Marwah; Park, Jea-Woo; Aslyan, Ara; Torres, Andres; Madkour, Kareem; ElManhawy, Wael

    2013-03-01

    Integrated circuits suffer from serious layout printability issues associated with the lithography manufacturing process. Regular layout designs are emerging as alternative solutions to help reduce these systematic sub-wavelength lithography variations. From a CAD point of view, regular layouts can be treated as repeated patterns that are arranged in arrays. In most modern mask synthesis and verification tools, cell based hierarchical processing has been able to identify repeating cells by analyzing the design's cell placement. However, some routing levels are not inside the cell and yet create an array-like structure because of the underlying topologies. This can be exploited by detecting repeated patterns in the layout, thus reducing simulation run-time by simulating only the representative cells and then restoring all the simulation results in their corresponding arrays. The challenge is to make the array detection and restoration of the results a very lightweight operation to fully realize the benefits of the approach. A novel methodology for detecting repeated patterns in a layout is proposed. The main idea is based on translating the layout patterns into a string of symbols to construct a "symbolic layout". By finding repetitions in the symbolic layout, repeated patterns in the drawn layout are detected. A flow for layout reduction based on array detection followed by pattern matching is discussed. Run-time saving comes from doing all litho simulations on the base patterns only. Pattern matching is then used to restore all the simulation results over the arrays. The proposed flow shows 1.4x to 2x run-time enhancement over the regular litho simulation flow. An evaluation of the proposed flow in terms of coverage and run-time is drafted.
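
    Once the layout is encoded as a string of symbols, array detection becomes substring-repetition search. A brute-force sketch of that search (the paper's actual matcher is surely more efficient; symbol encoding itself is assumed done):

```python
def find_repeats(symbols, min_len=2, min_count=2):
    """Return the longest substring occurring at least `min_count` times.

    Each symbol encodes one layout pattern, so a repeated substring marks
    a candidate array of base patterns.  str.count counts non-overlapping
    occurrences, which suits abutted array cells.  Brute force, O(n^3).
    """
    best = ""
    n = len(symbols)
    for length in range(min_len, n // min_count + 1):
        for i in range(n - length + 1):
            sub = symbols[i:i + length]
            if symbols.count(sub) >= min_count and length > len(best):
                best = sub
    return best
```

    On `"XYZXYZXYZ"` the longest repeat is `"XYZ"`, i.e. a 3x array of the base pattern; litho simulation would then run on `"XYZ"` only and the results would be restored across the array.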

  1. A Contextual Fire Detection Algorithm for Simulated HJ-1B Imagery

    PubMed Central

    Qian, Yonggang; Yan, Guangjian; Duan, Sibo; Kong, Xiangsheng

    2009-01-01

    The HJ-1B satellite, which was launched on September 6, 2008, is one of the small ones placed in the constellation for disaster prediction and monitoring. HJ-1B imagery was simulated in this paper, which contains fires of various sizes and temperatures in a wide range of terrestrial biomes and climates, including RED, NIR, MIR and TIR channels. Based on the MODIS version 4 contextual algorithm and the characteristics of HJ-1B sensor, a contextual fire detection algorithm was proposed and tested using simulated HJ-1B data. It was evaluated by the probability of fire detection and false alarm as functions of fire temperature and fire area. Results indicate that when the simulated fire area is larger than 45 m2 and the simulated fire temperature is larger than 800 K, the algorithm has a higher probability of detection. But if the simulated fire area is smaller than 10 m2, only when the simulated fire temperature is larger than 900 K, may the fire be detected. For fire areas about 100 m2, the proposed algorithm has a higher detection probability than that of the MODIS product. Finally, the omission and commission error were evaluated which are important factors to affect the performance of this algorithm. It has been demonstrated that HJ-1B satellite data are much sensitive to smaller and cooler fires than MODIS or AVHRR data and the improved capabilities of HJ-1B data will offer a fine opportunity for the fire detection. PMID:22399950

  2. Development of an unbiased cloud detection algorithm for a spaceborne multispectral imager

    NASA Astrophysics Data System (ADS)

    Ishida, Haruma; Nakajima, Takashi Y.

    2009-04-01

    A new concept for cloud detection from observations by multispectral spaceborne imagers is proposed, and an algorithm comprising many pixel-by-pixel threshold tests is developed. Since in nature the thickness of clouds tends to vary continuously and the border between cloud and clear sky is thus vague, it is unrealistic to label pixels as either cloudy or clear sky. Instead, the extraction of ambiguous areas is considered to be useful and informative. We refer to the multiple threshold method employed in the MOD35 algorithm that is used for Moderate Resolution Imaging Spectroradiometer (MODIS) standard data analysis, but drastically reconstruct the structure of the algorithm to meet our aim of sustaining the neutral position. The concept of a clear confidence level, which represents certainty of the clear or cloud condition, is applied to design a neutral cloud detection algorithm that is not biased to either clear or cloudy. The use of the clear confidence level with neutral position also makes our algorithm structure very simple. Several examples of cloud detection from satellite data are tested using our algorithm and are validated by visual inspection and comparison to previous cloud mask data. The results indicate that our algorithm is capable of reasonable discrimination between cloudy and clear-sky areas over ocean with and without Sun glint, forest, and desert, and is able to extract areas with ambiguous cloudiness condition.
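
    The clear confidence level combines many per-test confidences into one number that stays neutral between clear and cloudy. One common way to do this, used here as an illustrative assumption in the spirit of the MOD35-style confidence flag rather than this paper's exact rule, is a geometric mean:

```python
def clear_confidence(tests):
    """Combine per-test clear-sky confidences (each in [0, 1]) into one
    level via a geometric mean.

    Values near 1 mean confidently clear, near 0 confidently cloudy, and
    intermediate values flag the ambiguous areas the abstract argues are
    worth extracting rather than forcing into a binary label.
    """
    product = 1.0
    for f in tests:
        product *= f
    return product ** (1.0 / len(tests))
```

    Note the asymmetry this buys: a single certainly-cloudy test (confidence 0) drives the combined level to 0, while mildly ambiguous tests only soften it.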

  3. Stride Search: a general algorithm for storm detection in high-resolution climate data

    NASA Astrophysics Data System (ADS)

    Bosler, Peter A.; Roesler, Erika L.; Taylor, Mark A.; Mundt, Miranda R.

    2016-04-01

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared: the commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. The Stride Search algorithm is defined independently of the spatial discretization associated with a particular data set. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. Differences between the two algorithms arise for some storms due to their different definition of search regions in physical space. The physical space associated with each Stride Search region is constant, regardless of data resolution or latitude, and Stride Search is therefore capable of searching all regions of the globe in the same manner. Stride Search's ability to search high latitudes is demonstrated for the case of polar low detection. Wall clock time required for Stride Search is shown to be smaller than a grid point search of the same data, and the relative speed up associated with Stride Search increases as resolution increases.

  4. Intrusion-aware alert validation algorithm for cooperative distributed intrusion detection schemes of wireless sensor networks.

    PubMed

    Shaikh, Riaz Ahmed; Jameel, Hassan; d'Auriol, Brian J; Lee, Heejo; Lee, Sungyoung; Song, Young-Jae

    2009-01-01

    Existing anomaly and intrusion detection schemes of wireless sensor networks have mainly focused on the detection of intrusions. Once an intrusion is detected, an alert or claim will be generated. However, any unidentified malicious nodes in the network could send faulty anomaly and intrusion claims about the legitimate nodes to the other nodes. Verifying the validity of such claims is a critical and challenging issue that is not considered in the existing cooperative-based distributed anomaly and intrusion detection schemes of wireless sensor networks. In this paper, we propose a validation algorithm that addresses this problem. This algorithm utilizes the concept of intrusion-aware reliability that helps to provide adequate reliability at a modest communication cost. In this paper, we also provide a security resiliency analysis of the proposed intrusion-aware alert validation algorithm. PMID:22454568

  5. Edge detection based on genetic algorithm and sobel operator in image

    NASA Astrophysics Data System (ADS)

    Tong, Xin; Ren, Aifeng; Zhang, Haifeng; Ruan, Hang; Luo, Ming

    2011-10-01

    Genetic algorithms (GA) are widely used for optimization problems, using techniques inspired by natural evolution. In this paper we present a new edge detection technique based on a GA and the Sobel operator. The Sobel edge detection built in DSP Builder is first used to determine the boundaries of objects within an image. Then the genetic algorithm, using SOPC Builder, proposes a new threshold algorithm for the image processing. Finally, the performance of the new edge detection technique based on the best-threshold approach in DSP Builder and Quartus II software is compared both qualitatively and quantitatively with the single Sobel operator. The new edge detection technique is shown to perform very well in terms of robustness to noise, edge search capability and quality of the final edge image.
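
    The Sobel stage that feeds the GA's threshold search can be sketched in software (the paper implements it in DSP Builder hardware; this is just the standard 3x3 operator):

```python
def sobel(img):
    """Gradient magnitude via the 3x3 Sobel kernels.

    Border pixels are left at zero.  A GA-chosen threshold on this
    magnitude map would then decide which pixels count as edges.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = sum(kx[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            gy = sum(ky[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            out[i][j] = (gx * gx + gy * gy) ** 0.5
    return out
```

    On a vertical step edge the interior response along the step is the full kernel weight (magnitude 4 for a unit step), which is the kind of value the evolved threshold must separate from noise responses.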

  6. Performance Assessment Method for a Forged Fingerprint Detection Algorithm

    NASA Astrophysics Data System (ADS)

    Shin, Yong Nyuo; Jun, In-Kyung; Kim, Hyun; Shin, Woochang

    The threat of invasion of privacy and of the illegal appropriation of information both increase with the expansion of the biometrics service environment to open systems. However, while certificates or smart cards can easily be cancelled and reissued if found to be missing, there is no way to recover the unique biometric information of an individual following a security breach. With the recognition that this threat factor may disrupt the large-scale civil service operations approaching implementation, such as electronic ID cards and e-Government systems, many agencies and vendors around the world continue to develop forged fingerprint detection technology, but no objective performance assessment method has, to date, been reported. Therefore, in this paper, we propose a methodology designed to evaluate the objective performance of the forged fingerprint detection technology that is currently attracting a great deal of attention.

  7. Fall detection algorithm in energy efficient multistate sensor system.

    PubMed

    Korats, Gundars; Hofmanis, Janis; Skorodumovs, Aleksejs; Avots, Egils

    2015-01-01

    Health issues in elderly people may lead to injuries obtained during simple activities of daily living (ADL). Potentially the most dangerous are unintentional falls, which may be critical or even lethal to some patients due to the heavy injury risk. Many fall detection systems have been proposed, but such health care systems have only recently become available. Nevertheless, sensor design, accuracy and energy consumption efficiency can be improved. In this paper we present a single 3-axial accelerometer energy-efficient sensor system. Power saving is achieved by selective event processing triggered by a fall detection procedure. The results of our simulations show 100% accuracy when the threshold parameters are chosen correctly. The estimated energy consumption seems to extend battery life significantly. PMID:26737408
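
    A common shape for such a threshold-based fall test on a single 3-axial accelerometer is a near-free-fall dip followed shortly by an impact spike. The sketch below is a generic illustration with hypothetical thresholds, not the paper's tuned parameters or its multistate logic.

```python
def detect_fall(accel, impact_g=2.5, free_fall_g=0.5, window=10):
    """Flag a fall: acceleration magnitude dips toward free fall and an
    impact spike follows within `window` samples.

    accel: sequence of (x, y, z) readings in g.  At rest the magnitude
    sits near 1 g, which is why both thresholds bracket 1.0.
    """
    mags = [(x * x + y * y + z * z) ** 0.5 for x, y, z in accel]
    for i, m in enumerate(mags):
        if m < free_fall_g and any(v > impact_g for v in mags[i:i + window]):
            return True
    return False
```

    Steady ~1 g readings never trip the detector, while a dip-then-spike sequence does; in an energy-efficient design this cheap test is what gates the more expensive event processing.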

  8. A wavelet transform algorithm for peak detection and application to powder x-ray diffraction data

    NASA Astrophysics Data System (ADS)

    Gregoire, John M.; Dale, Darren; van Dover, R. Bruce

    2011-01-01

    Peak detection is ubiquitous in the analysis of spectral data. While many noise-filtering algorithms and peak identification algorithms have been developed, recent work [P. Du, W. Kibbe, and S. Lin, Bioinformatics 22, 2059 (2006); A. Wee, D. Grayden, Y. Zhu, K. Petkovic-Duran, and D. Smith, Electrophoresis 29, 4215 (2008)] has demonstrated that both of these tasks are efficiently performed through analysis of the wavelet transform of the data. In this paper, we present a wavelet-based peak detection algorithm with user-defined parameters that can be readily applied to the application of any spectral data. Particular attention is given to the algorithm's resolution of overlapping peaks. The algorithm is implemented for the analysis of powder diffraction data, and successful detection of Bragg peaks is demonstrated for both low signal-to-noise data from theta-theta diffraction of nanoparticles and combinatorial x-ray diffraction data from a composition spread thin film. These datasets have different types of background signals which are effectively removed in the wavelet-based method, and the results demonstrate that the algorithm provides a robust method for automated peak detection.
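
    A single-scale sketch of the wavelet approach: correlating the spectrum with a Ricker (Mexican hat) wavelet suppresses both noise and slowly varying background, which is why the method needs no separate baseline removal. The paper's algorithm works across many scales with user-defined parameters; this minimal one-scale version only illustrates the principle.

```python
import math

def ricker(points, a):
    """Ricker (Mexican hat) wavelet sampled at `points` points, width a."""
    return [(1 - (t / a) ** 2) * math.exp(-0.5 * (t / a) ** 2)
            for t in (i - points // 2 for i in range(points))]

def wavelet_peaks(y, a=3, thr=1.0):
    """Detect peaks as thresholded local maxima of the wavelet response
    at a single scale `a` (truncated at the signal edges)."""
    w = ricker(6 * a + 1, a)
    half = len(w) // 2
    resp = [sum(w[k] * y[i + k - half]
                for k in range(len(w)) if 0 <= i + k - half < len(y))
            for i in range(len(y))]
    return [i for i in range(1, len(y) - 1)
            if resp[i] > thr and resp[i] >= resp[i - 1]
            and resp[i] >= resp[i + 1]]
```

    A Gaussian peak of width comparable to the wavelet scale produces a strong, sharply localized response, while a flat or slowly varying baseline correlates to nearly zero because the wavelet's positive and negative lobes cancel.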

  9. Development of a fire detection algorithm for the COMS (Communication Ocean and Meteorological Satellite)

    NASA Astrophysics Data System (ADS)

    Kim, Goo; Kim, Dae Sun; Lee, Yang-Won

    2013-10-01

    Forest fires do much damage to our lives in ecological and economic terms. South Korea is particularly liable to suffer from forest fires because mountainous terrain occupies more than half of its land. South Korea has recently launched the COMS (Communication Ocean and Meteorological Satellite), which is a geostationary satellite. In this paper, we developed a forest fire detection algorithm using COMS data. Generally, forest fire detection algorithms use characteristics of the 4 and 11 micrometer brightness temperatures. Our algorithm additionally uses LST (Land Surface Temperature). We confirmed the result of our fire detection algorithm using statistical data of the Korea Forest Service and ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) images. We used data from South Korea on April 1 and 2, 2011, because there were both small and big forest fires at that time. The detection rate was 80% in terms of the frequency of the forest fires and 99% in terms of the damaged area. Considering the number of COMS's channels and its low resolution, this is a remarkable outcome. To provide users with the result of our algorithm, we developed a smartphone application using JSP (Java Server Pages). This application can work regardless of the smartphone's operating system. These results may not hold for other areas and dates because we used just two days of data. To improve the accuracy of our algorithm, we need analysis using long-term data as future work.
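
    The classic dual-band test such algorithms start from can be sketched as below; the threshold values are illustrative placeholders, and the paper's additional LST test is omitted.

```python
def fire_pixel(bt4, bt11, bt4_thr=320.0, dt_thr=15.0):
    """Dual-band fire test: flag a pixel when the 4 um brightness
    temperature is hot AND well above the 11 um channel.

    Fires radiate far more strongly near 4 um than near 11 um, so the
    band difference separates fires from merely warm surfaces.
    Thresholds are illustrative, not COMS-tuned values.
    """
    return bt4 > bt4_thr and (bt4 - bt11) > dt_thr
```

    A pixel at 340 K in the 4 um band over a 300 K 11 um background is flagged; a uniformly warm pixel is not, because the band difference stays small.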

  10. An efficient contextual algorithm to detect subsurface fires with NOAA/AVHRR data

    SciTech Connect

    Gautam, R.S.; Singh, D.; Mittal, A.

    2008-07-15

    This paper deals with the potential application of National Oceanic and Atmospheric Administration (NOAA)/Advanced Very High Resolution Radiometer (AVHRR) data to detect subsurface fires (subsurface hotspots) by proposing an efficient contextual algorithm. Although a few algorithms based on fixed thresholding have been proposed for subsurface hotspot detection, their thresholds must be tuned specifically for each application to cope with its unique environmental conditions. The main objective of this paper is to develop an instrument-independent adaptive method that avoids direct thresholds or multithresholds. The proposed contextual algorithm can monitor subsurface hotspots with operational satellite data, for example over the Jharia region of India, without any region-specific guess at thresholds. The novelty of the work lies in the fact that once the algorithmic model has been developed for a particular region of interest and its parameters optimized, those parameters need not be optimized again for further satellite images; the model can then be used for optimized, automated detection and monitoring of subsurface hotspots in future images of that region. The algorithm is adaptive in nature and uses a vegetation index and statistics from different NOAA/AVHRR channels to detect hotspots in the region of interest. The performance of the algorithm is assessed in terms of sensitivity and specificity and compared with well-known thresholding techniques, such as Otsu's thresholding and entropy-based thresholding, and with the existing contextual algorithm of Flasse and Ceccato. The proposed algorithm is found to give better hotspot detection accuracy with a lower false alarm rate.
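    A minimal sketch of the contextual idea, assuming a square context window and a mean-plus-k-sigma rule; the window half-width and `k` below are illustrative, not the optimized model parameters of the paper:

```python
import numpy as np

def contextual_hotspots(img, win=3, k=3.0):
    # Flag a pixel when it exceeds the mean of its surrounding window by
    # k standard deviations, so the threshold adapts to the local background
    # instead of being fixed for the whole scene.
    img = np.asarray(img, dtype=float)
    rows, cols = img.shape
    flags = np.zeros(img.shape, dtype=bool)
    for i in range(rows):
        for j in range(cols):
            r0, r1 = max(0, i - win), min(rows, i + win + 1)
            c0, c1 = max(0, j - win), min(cols, j + win + 1)
            window = img[r0:r1, c0:c1].copy()
            window[i - r0, j - c0] = np.nan      # exclude the centre pixel
            ctx = window[~np.isnan(window)]      # background context
            if img[i, j] > ctx.mean() + k * ctx.std():
                flags[i, j] = True
    return flags
```

    Because the statistics are recomputed per pixel, the same rule works over differently lit or vegetated regions, which is the point of a contextual rather than fixed-threshold test.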

  11. Clinical implementation of a neonatal seizure detection algorithm

    PubMed Central

    Temko, Andriy; Marnane, William; Boylan, Geraldine; Lightbody, Gordon

    2015-01-01

    Technologies for automated detection of neonatal seizures are gradually moving towards cot-side implementation. The aim of this paper is to present different ways to visualize the output of a neonatal seizure detection system and analyse their influence on performance in a clinical environment. Three different ways to visualize the detector output are considered: a binary output, a probabilistic trace, and a spatio-temporal colormap of seizure observability. As an alternative to visual aids, audified neonatal EEG is also considered. Additionally, a survey on the usefulness and accuracy of the presented methods has been performed among clinical personnel. The main advantages and disadvantages of the presented methods are discussed. The connection between information visualization and different methods to compute conventional metrics is established. The results of the visualization methods along with the system validation results indicate that the developed neonatal seizure detector with its current level of performance would unambiguously be of benefit to clinicians as a decision support system. The results of the survey suggest that a suitable way to visualize the output of neonatal seizure detection systems in a clinical environment is a combination of a binary output and a probabilistic trace. The main healthcare benefits of the tool are outlined. The decision support system with the chosen visualization interface is currently undergoing pre-market European multi-centre clinical investigation to support its regulatory approval and clinical adoption. PMID:25892834

  12. Algorithms for computer detection of symmetry elements in molecular systems.

    PubMed

    Beruski, Otávio; Vidal, Luciano N

    2014-02-01

    Simple procedures for the location of proper and improper rotations and reflection planes are presented. The search is performed with the molecule divided into subsets of symmetrically equivalent atoms (SEA), which are analyzed separately as if each were a single molecule. This approach is advantageous in many aspects. For instance, in molecules that are symmetric rotors, the number of atoms and the inertia tensor of a SEA provide a straightforward way to find proper rotations of any order. The algorithms are invariant to molecular orientation, and their computational cost is low because the main information required to find symmetry elements consists of interatomic distances and the principal moments of the SEA. For example, our Fortran implementation, running on a single processor, took only a few seconds to locate all 120 symmetry operations of the large and highly symmetrical fullerene C720, belonging to the Ih point group. Finally, we show how the interatomic distance matrix of a slightly unsymmetrical molecule is used to symmetrize its geometry. PMID:24403016
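    As a sketch of the SEA preprocessing step, atoms can be grouped by element and by their sorted multiset of distances to all other atoms, an orientation-invariant fingerprint that symmetry-equivalent atoms must share. The rounding tolerance is an assumption; the paper's own partition criteria may differ:

```python
import numpy as np

def equivalent_atom_sets(coords, symbols, decimals=6):
    # Group atoms that share an element symbol and the same sorted list of
    # distances to every other atom. Rounding guards against float noise.
    coords = np.asarray(coords, dtype=float)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    groups = {}
    for i, sym in enumerate(symbols):
        key = (sym, tuple(np.round(np.sort(dist[i]), decimals)))
        groups.setdefault(key, []).append(i)
    return list(groups.values())
```

    For a water-like geometry this yields one singleton set (the oxygen) and one pair (the two hydrogens), and each set can then be analyzed separately as the abstract describes.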

  13. An Optional Threshold with Svm Cloud Detection Algorithm and Dsp Implementation

    NASA Astrophysics Data System (ADS)

    Zhou, Guoqing; Zhou, Xiang; Yue, Tao; Liu, Yilong

    2016-06-01

    This paper presents a method that combines the traditional threshold method and the SVM method to detect clouds in Landsat-8 images. The proposed method is implemented on a DSP for real-time cloud detection, with the DSP platform connected to an emulator and a personal computer. The threshold method is first used to obtain a coarse cloud detection result, and the SVM classifier is then applied to obtain high-accuracy cloud detection. More than 200 cloudy Landsat-8 images were used to test the proposed method. Comparing the proposed method with the SVM method alone shows that the cloud detection accuracy of each image using the proposed algorithm is higher than that of the SVM algorithm. The experimental results demonstrate that the implementation of the proposed method on a DSP can effectively realize accurate real-time cloud detection.
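    The two-stage structure can be sketched as follows; the reflectance thresholds and the stand-in linear decision function (used here in place of a trained SVM) are purely illustrative:

```python
import numpy as np

def two_stage_cloud_mask(reflectance, bt, lo=0.2, hi=0.6, w=(8.0, -0.05), b=10.0):
    # Stage 1: cheap thresholds settle the obvious pixels (1 = cloud, 0 = clear).
    # Stage 2: only the ambiguous band is passed to a classifier; here a
    # stand-in linear decision w.x + b > 0 on (reflectance, brightness temp),
    # where a real system would use an SVM fitted to labelled pixels.
    reflectance = np.asarray(reflectance, dtype=float)
    bt = np.asarray(bt, dtype=float)
    mask = np.where(reflectance >= hi, 1, np.where(reflectance <= lo, 0, -1))
    ambiguous = mask == -1
    score = w[0] * reflectance[ambiguous] + w[1] * bt[ambiguous] + b
    mask[ambiguous] = (score > 0).astype(int)
    return mask
```

    The design point is that the expensive classifier only runs on the minority of ambiguous pixels, which is what makes the scheme viable on a DSP in real time.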

  14. Evaluation of two-color missile detection algorithms against real backgrounds

    NASA Astrophysics Data System (ADS)

    Baxley, Frank O.; Sanderson, Richard B.; Montgomery, Joel B.; McCalmont, John F.

    2000-07-01

    Missile warning is one of the most significant problems facing aircraft flying into regions of unrest around the world. Recent advances in technology provide new avenues for detecting these threats and have permitted the use of imaging detectors and multi-color systems. Detecting threats while maintaining a low false alarm rate is the most demanding challenge facing these systems. Using data from AFRL's Spectral Infrared Detection System (SIRDS) test bed, the efficacy of alternative spectral threat detection algorithms developed around these technologies is evaluated and compared. The data used to evaluate the algorithms cover a range of clutter conditions including urban, industrial, maritime, and rural. Background image data were corrected for non-uniformity and filtered to enhance the threat-to-clutter response. The corrected data were further processed and analyzed statistically to determine probability-of-detection thresholds and the corresponding probability of false alarm. The results are summarized for three algorithms: simple threshold detection, background-normalized analysis, and an inter-band correlation detection algorithm.

  15. An algorithm for power line detection and warning based on a millimeter-wave radar video.

    PubMed

    Ma, Qirong; Goshi, Darren S; Shih, Yi-Chi; Sun, Ming-Ting

    2011-12-01

    Power-line-strike accidents are a major safety threat for low-flying aircraft such as helicopters, so an automatic power-line warning system is highly desirable. In this paper we propose an algorithm for detecting power lines in radar videos from an active millimeter-wave sensor. The Hough Transform is employed to detect candidate lines. The major challenge is that the radar videos are very noisy due to ground return: noise points can fall on a common line, producing signal peaks after the Hough Transform similar to those of actual cable lines. To differentiate the cable lines from the noise lines, we train a Support Vector Machine to perform the classification. We exploit the Bragg pattern, which arises from the diffraction of the electromagnetic wave on the periodic surface of power lines, and propose a set of features to represent it for the classifier. We also propose a slice-processing algorithm that supports parallel processing and improves the detection of cables against a cluttered background. Lastly, an adaptive algorithm is proposed to integrate the detection results from individual frames into a reliable video-level decision, in which the temporal correlation of the cable pattern across frames makes the detection more robust. Extensive experiments with real-world data validated the effectiveness of our cable detection algorithm. PMID:21652287
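    The candidate-line stage can be sketched with a standard Hough transform over foreground points; the SVM stage and the Bragg-pattern features of the paper are not reproduced here, and the bin counts and angular resolution below are illustrative:

```python
import numpy as np

def hough_lines(points, shape, n_theta=180, top=1):
    # Vote each foreground point (row, col) into (rho, theta) bins for the
    # line model rho = x*cos(theta) + y*sin(theta); return the `top` bins.
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(*shape)))          # offset so rho indices >= 0
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for y, x in points:
        rho = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rho, np.arange(n_theta)] += 1
    best = np.argsort(acc.ravel())[::-1][:top]
    rho_i, th_i = np.unravel_index(best, acc.shape)
    return [(int(r) - diag, float(thetas[t])) for r, t in zip(rho_i, th_i)]
```

    In the paper's setting, both true cables and collinear noise produce strong accumulator peaks, which is exactly why a second, feature-based classification stage is needed.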

  16. Deep learning algorithms for detecting explosive hazards in ground penetrating radar data

    NASA Astrophysics Data System (ADS)

    Besaw, Lance E.; Stimac, Philip J.

    2014-05-01

    Buried explosive hazards (BEHs) have been, and continue to be, one of the most deadly threats in modern conflicts. Current handheld sensors rely on a highly trained operator for them to be effective in detecting BEHs. New algorithms are needed to reduce the burden on the operator and improve the performance of handheld BEH detectors. Traditional anomaly detection and discrimination algorithms use "hand-engineered" feature extraction techniques to characterize and classify threats. In this work we use a Deep Belief Network (DBN) to transcend the traditional approaches of BEH detection (e.g., principal component analysis and real-time novelty detection techniques). DBNs are pretrained using an unsupervised learning algorithm to generate compressed representations of unlabeled input data and form feature detectors. They are then fine-tuned using a supervised learning algorithm to form a predictive model. Using ground penetrating radar (GPR) data collected by a robotic cart swinging a handheld detector, our research demonstrates that relatively small DBNs can learn to model GPR background signals and detect BEHs with an acceptable false alarm rate (FAR). In this work, our DBNs achieved 91% probability of detection (Pd) with 1.4 false alarms per square meter when evaluated on anti-tank and anti-personnel targets at temperate and arid test sites. This research demonstrates that DBNs are a viable approach to detect and classify BEHs.

  17. Spermatozoa motion detection and trajectory tracking algorithm based on orthogonal search

    NASA Astrophysics Data System (ADS)

    Chacon Murguia, Mario I.; Valdez Martinez, Antonio

    1999-10-01

    This paper presents a new algorithm for object motion detection and trajectory tracking, developed as part of a machine vision system for human fertility analysis. Fertility analysis is based on the number of spermatozoa in semen samples and their type of movement. Two approaches were tested to detect the movement of the spermatozoa: image subtraction and optical flow. Image subtraction is simple and fast, but it has difficulty detecting individual motion when large numbers of objects are present. The optical flow method is able to detect motion but turns out to be computationally expensive; it does not generate a specific trajectory for each spermatozoon, and it does not detect static spermatozoa. The algorithm developed here detects object motion through an orthogonal search of blocks in consecutive frames, with the match between two blocks defined by squared differences. A dynamic control array is used to store the trajectory of each spermatozoon and to handle the different situations that arise in the trajectories, such as new spermatozoa entering a frame, spermatozoa leaving the frame, and spermatozoa collisions. The algorithm turns out to be faster than the optical flow algorithm and solves the problems of the image subtraction method. It also detects static spermatozoa and generates a motion vector for each spermatozoon that describes its trajectory.
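    Block matching with an orthogonal search can be sketched as below: the best horizontal displacement is refined first, then the vertical one, with the step halved each round. The block size and starting step are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def orthogonal_search(prev, curr, top_left, bsize=8, step=4):
    # Track the block at `top_left` in `prev` into `curr` using an orthogonal
    # search with a sum-of-squared-differences (SSD) cost. Returns (dy, dx).
    y, x = top_left

    def ssd(dy, dx):
        yy, xx = y + dy, x + dx
        if yy < 0 or xx < 0 or yy + bsize > curr.shape[0] or xx + bsize > curr.shape[1]:
            return np.inf                      # candidate falls outside the frame
        block = prev[y:y + bsize, x:x + bsize].astype(float)
        cand = curr[yy:yy + bsize, xx:xx + bsize].astype(float)
        return float(((block - cand) ** 2).sum())

    dy = dx = 0
    while step >= 1:
        # Horizontal sweep first, then vertical, as in an orthogonal search.
        dx = min((dx + d for d in (-step, 0, step)), key=lambda v: ssd(dy, v))
        dy = min((dy + d for d in (-step, 0, step)), key=lambda v: ssd(v, dx))
        step //= 2
    return dy, dx
```

    Checking only horizontal and vertical candidates at each step is what makes this cheaper than an exhaustive window search, at the cost of possibly missing the optimum on pathological cost surfaces.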

  18. Damage diagnosis algorithm using a sequential change point detection method with an unknown distribution for damage

    NASA Astrophysics Data System (ADS)

    Noh, Hae Young; Rajagopal, Ram; Kiremidjian, Anne S.

    2012-04-01

    This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method for the cases where the post-damage feature distribution is unknown a priori. This algorithm extracts features from structural vibration data using time-series analysis and then declares damage using the change point detection method. The change point detection method asymptotically minimizes detection delay for a given false alarm rate. The conventional method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori. Therefore, our algorithm estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using multiple sets of simulated data and a set of experimental data collected from a four-story steel special moment-resisting frame. Our algorithm was able to estimate the post-damage distribution consistently and resulted in detection delays only a few seconds longer than the delays from the conventional method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.
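    In the same spirit, a sketch of sequential detection with an unknown post-change mean: for each candidate change time the unknown mean is replaced by its maximum-likelihood estimate and the resulting generalized log-likelihood ratio is thresholded. The Gaussian feature model, the threshold, and the window limit are assumptions for illustration; the paper's Bayesian and approximate variants are not shown:

```python
import numpy as np

def glr_change_detector(x, mu0=0.0, sigma=1.0, threshold=10.0, window=50):
    # Pre-change model N(mu0, sigma^2) is known; the post-change mean is not.
    # For each candidate change time k, substitute the MLE of the post-change
    # mean (the sample mean since k) into the log-likelihood ratio, and alarm
    # when the best candidate exceeds `threshold`. Returns the alarm index.
    x = np.asarray(x, dtype=float)
    for n in range(1, len(x) + 1):
        start = max(0, n - window)    # bound the number of candidate times
        best = 0.0
        for k in range(start, n):
            seg = x[k:n]
            mu1 = seg.mean()          # MLE of the unknown post-change mean
            llr = ((mu1 - mu0) * (seg - mu0).sum() / sigma ** 2
                   - len(seg) * (mu1 - mu0) ** 2 / (2 * sigma ** 2))
            best = max(best, llr)
        if best > threshold:
            return n - 1              # first alarm index
    return None
```

    With the MLE substituted, the statistic reduces to m*(mu1-mu0)^2/(2*sigma^2) for a segment of length m, so larger shifts trip the alarm after fewer samples, which mirrors the delay-versus-false-alarm trade-off discussed in the abstract.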

  19. Application of edge detection algorithm for vision guided robotics assembly system

    NASA Astrophysics Data System (ADS)

    Balabantaray, Bunil Kumar; Jha, Panchanand; Biswal, Bibhuti Bhusan

    2013-12-01

    A machine vision system has a major role in making a robotic assembly system autonomous. Part detection and identification of the correct part are important tasks that a vision system must perform carefully to initiate the process. This process consists of many sub-processes wherein image capturing, digitizing, enhancing, etc. reconstruct the part for subsequent operations. Edge detection of the grabbed image therefore plays an important role in the entire image processing activity, and one needs to choose the correct tool for the process with respect to the given environment. This paper presents a comparative study of edge detection algorithms for object grasping in a robotic assembly system. The work is performed in Matlab R2010a Simulink and compares four algorithms: the Canny, Roberts, Prewitt, and Sobel edge detectors. An attempt has been made to find the best algorithm for the problem. It is found that the Canny edge detection algorithm gives the best results and minimum error for the intended task.
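    Three of the four compared operators reduce to a pair of small convolution kernels whose responses are combined into a gradient magnitude; a sketch follows (Canny's extra smoothing, non-maximum suppression, and hysteresis are omitted for brevity):

```python
import numpy as np

def gradient_magnitude(img, operator="sobel"):
    # Apply the operator's horizontal/vertical kernel pair and return the
    # gradient magnitude (valid region only, no padding).
    kernels = {
        "sobel":   (np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]),
                    np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])),
        "prewitt": (np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]),
                    np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]])),
        "roberts": (np.array([[1, 0], [0, -1]]),
                    np.array([[0, 1], [-1, 0]])),
    }
    kx, ky = kernels[operator]
    img = np.asarray(img, dtype=float)
    kh, kw = kx.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + kh, j:j + kw]
            out[i, j] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return out
```

    The Sobel kernels weight the centre row/column more heavily than Prewitt's, giving mild smoothing, while the 2x2 Roberts kernels respond to diagonal edges; these differences are the substance of comparisons like the one in this paper.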

  20. Target detection algorithm for airborne thermal hyperspectral data

    NASA Astrophysics Data System (ADS)

    Marwaha, R.; Kumar, A.; Raju, P. L. N.; Krishna Murthy, Y. V. N.

    2014-11-01

    Airborne hyperspectral imaging is constantly being used for classification purposes, but airborne thermal hyperspectral imagery is usually a challenge for conventional classification approaches. The Telops Hyper-Cam sensor is an interferometer-based imaging system that supports the spatial and spectral analysis of targets with a single sensor. It is based on Fourier-transform spectroscopy, which yields high spectral resolution and enables high-accuracy radiometric calibration. The Hyper-Cam instrument has 84 spectral bands in the 868 cm-1 to 1280 cm-1 region (7.8 μm to 11.5 μm), at a spectral resolution of 6 cm-1 (full-width at half-maximum), in the LWIR (long-wave infrared) range. Due to the Hughes effect, only a few classifiers can handle such high-dimensional classification tasks. MNF (Minimum Noise Fraction) rotation is a dimensionality-reduction approach that segregates noise in the data. Here, component selection for the MNF rotation transformation was analyzed in terms of classification accuracy, using the constrained energy minimization (CEM) algorithm as the classifier, for the airborne thermal hyperspectral image alone and for the combination of the airborne LWIR hyperspectral image with a color digital photograph. On comparing the accuracy of all the classified images, it was found that accuracy was highest with twenty MNF components, and that accuracy increased when the airborne LWIR hyperspectral image was combined with the color digital photograph rather than using the LWIR data alone.
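    The CEM detector used as the classifier has a closed form; a sketch under the usual formulation, where R is the sample correlation matrix of the pixels and d the target spectrum (the synthetic cube in the usage below is illustrative):

```python
import numpy as np

def cem_detector(cube, target):
    # CEM filter w = R^-1 d / (d' R^-1 d): minimizes average output energy
    # subject to unit response on the target spectrum d. Output is an image
    # whose value approaches 1 at target-like pixels.
    cube = np.asarray(cube, dtype=float)
    X = cube.reshape(-1, cube.shape[-1])          # pixels x bands
    d = np.asarray(target, dtype=float)
    R = X.T @ X / X.shape[0]                      # sample correlation matrix
    Rinv_d = np.linalg.solve(R, d)
    w = Rinv_d / (d @ Rinv_d)                     # unit-response constraint
    return (X @ w).reshape(cube.shape[:-1])
```

    The unit-response constraint guarantees a score of exactly 1 on a pure target pixel, while the energy-minimization term suppresses the background, which is why CEM copes with the high band count that defeats many conventional classifiers.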

  1. Algorithms for Detecting Significantly Mutated Pathways in Cancer

    NASA Astrophysics Data System (ADS)

    Vandin, Fabio; Upfal, Eli; Raphael, Benjamin J.

    Recent genome sequencing studies have shown that the somatic mutations that drive cancer development are distributed across a large number of genes. This mutational heterogeneity complicates efforts to distinguish functional mutations from sporadic, passenger mutations. Since cancer mutations are hypothesized to target a relatively small number of cellular signaling and regulatory pathways, a common approach is to assess whether known pathways are enriched for mutated genes. However, restricting attention to known pathways will not reveal novel cancer genes or pathways. An alternative strategy is to examine mutated genes in the context of genome-scale interaction networks that include both well-characterized pathways and additional gene interactions measured through various approaches. We introduce a computational framework for de novo identification of subnetworks in a large gene interaction network that are mutated in a significant number of patients. This framework includes two major features. First, we introduce a diffusion process on the interaction network to define a local neighborhood of "influence" for each mutated gene in the network. Second, we derive a two-stage multiple hypothesis test to bound the false discovery rate (FDR) associated with the identified subnetworks. We test these algorithms on a large human protein-protein interaction network using mutation data from two recent studies: glioblastoma samples from The Cancer Genome Atlas and lung adenocarcinoma samples from the Tumor Sequencing Project. We successfully recover pathways that are known to be important in these cancers, such as the p53 pathway. We also identify additional pathways, such as the Notch signaling pathway, that have been implicated in other cancers but not previously reported as mutated in these samples. 
Our approach is the first, to our knowledge, to demonstrate a computationally efficient strategy for de novo identification of statistically significant mutated subnetworks. We

  2. Fault detection of aircraft system with random forest algorithm and similarity measure.

    PubMed

    Lee, Sanghyuk; Park, Wookje; Jung, Sikhang

    2014-01-01

    A fault detection algorithm was developed using a similarity measure and the random forest algorithm, and applied to an unmanned aerial vehicle (UAV) that we prepared. The similarity measure was designed with the help of distance information, and its usefulness was verified by proof. Fault decisions were made by calculating a weighted similarity measure, using twelve available coefficients from the healthy- and faulty-status data groups. The similarity-measure weights were obtained through the random forest algorithm (RFA), which provides data priorities. To obtain a fast decision response, a limited number of coefficients was also considered. The relation between detection rate and the amount of feature data was analyzed and illustrated, and the useful amount of data was determined through repeated trials of the similarity calculation. PMID:25057508

  3. Wavelet based edge detection algorithm for web surface inspection of coated board web

    NASA Astrophysics Data System (ADS)

    Barjaktarovic, M.; Petricevic, S.

    2010-07-01

    This paper presents a significant improvement of an already installed vision system designed for real-time coated board inspection. The improvement is achieved through the development of a new algorithm for edge detection, based on the redundant (undecimated) wavelet transform. Compared to the existing algorithm, better delineation of edges is achieved. This yields a better defect detection probability and more accurate geometrical classification, which will provide an additional reduction in waste. The algorithm will also provide more detailed classification and more reliable tracking of defects. The improvement requires minimal changes to the processing hardware: only a replacement of the graphics card would be needed, adding only negligibly to the system cost. All other changes are accomplished entirely in the image processing software.

  4. [Two Data Inversion Algorithms for the Aerosol Horizontal Distribution Detected by MPL and Error Analysis].

    PubMed

    Lü, Li-hui; Liu, Wen-qing; Zhang, Tian-shu; Lu, Yi-huai; Dong, Yun-sheng; Chen, Zhen-yi; Fan, Guang-qiang; Qi, Shao-shuai

    2015-07-01

    Atmospheric aerosols have important impacts on human health, the environment, and the climate system. Micro Pulse Lidar (MPL) is an effective new tool for detecting the horizontal distribution of atmospheric aerosol, and extinction coefficient inversion and error analysis are important aspects of its data processing. In order to detect the horizontal distribution of atmospheric aerosol near the ground, the slope and Fernald algorithms were both used to invert horizontal MPL data, and the results were compared. The error analysis showed that the errors of the slope and Fernald algorithms stem mainly from the theoretical model and from its assumptions, respectively. Although some problems remain in both horizontal extinction coefficient inversions, each can represent the spatial and temporal distribution of aerosol particles accurately, and both correlate highly (95%) with a forward-scattering visibility sensor. Relatively speaking, the Fernald algorithm is more suitable for the inversion of the horizontal extinction coefficient. PMID:26717723
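    The slope method mentioned above has a particularly compact form; a sketch assuming a horizontally homogeneous path, on which the range-corrected log signal S(r) = ln(r^2 P(r)) is linear in range with slope -2*alpha:

```python
import numpy as np

def slope_method_extinction(r, p):
    # Fit a line to the range-corrected log signal; the extinction
    # coefficient alpha is minus half the fitted slope. Fernald's method,
    # by contrast, needs an assumed lidar ratio and a boundary value but
    # tolerates an inhomogeneous path.
    r = np.asarray(r, dtype=float)
    s = np.log(r ** 2 * np.asarray(p, dtype=float))  # range-corrected log signal
    slope = np.polyfit(r, s, 1)[0]
    return -slope / 2.0
```

    The homogeneity assumption is exactly the "theoretical model" error source the abstract attributes to the slope method: wherever the aerosol load varies along the path, the fitted line no longer represents a single extinction value.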

  5. Combining algorithms in automatic detection of QRS complexes in ECG signals.

    PubMed

    Meyer, Carsten; Fernández Gavela, José; Harris, Matthew

    2006-07-01

    QRS complex and specifically R-peak detection is the crucial first step in every automatic electrocardiogram analysis. Much work has been carried out in this field, using methods ranging from filtering and threshold methods, through wavelet methods, to neural networks and others. Performance is generally good, but each method has situations where it fails. In this paper, we suggest an approach to automatically combine different QRS complex detection algorithms, here the Pan-Tompkins and wavelet algorithms, to benefit from the strengths of both methods. In particular, we introduce parameters that balance the contribution of the individual algorithms; these parameters are estimated in a data-driven way. Experimental results and analysis are provided on the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) Arrhythmia Database. We show that our combination approach outperforms both individual algorithms. PMID:16871713

  6. Detection of Human Impacts by an Adaptive Energy-Based Anisotropic Algorithm

    PubMed Central

    Prado-Velasco, Manuel; Ortiz Marín, Rafael; del Rio Cidoncha, Gloria

    2013-01-01

    Boosted by the health consequences and cost of falls in the elderly, this work develops and tests a novel algorithm and methodology to detect human impacts that will act as triggers of a two-layer fall monitor. The two main requirements demanded by socio-healthcare providers, unobtrusiveness and reliability, defined the objectives of the research. We have demonstrated that a very agile, adaptive, and energy-based anisotropic algorithm can provide 100% sensitivity and 78% specificity in the task of detecting impacts under demanding laboratory conditions. The algorithm works together with an unsupervised real-time learning technique that provides the adaptive capability, and this technique is also presented. The results underline the robustness and reliability of the new algorithm, which will be the basis of a smart fall monitor. PMID:24157505

  7. An Individual Tree Detection Algorithm for Dense Deciduous Forests with Spreading Branches

    NASA Astrophysics Data System (ADS)

    Shao, G.

    2015-12-01

    Individual tree information derived from LiDAR may have the potential to assist forest inventory and improve the assessment of forest structure and composition for sustainable forest management. Algorithms developed for individual tree detection commonly focus on finding tree tops to locate tree positions. However, the spreading branches (cylindrical crowns) of deciduous trees make such algorithms less effective on dense canopy. This research applies a machine learning algorithm, mean shift, to position individual trees based on the density of the LiDAR point cloud instead of detecting tree tops. The study site is a dense oak forest in Indiana, US. The selection of mean shift kernels is discussed, and constant and dynamic bandwidths for the mean shift algorithm are applied and compared.
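    A sketch of mean shift on the horizontal point coordinates, assuming a flat (top-hat) kernel and a constant bandwidth; the Gaussian kernel or the dynamic bandwidth compared in the abstract would change only the weighting step:

```python
import numpy as np

def mean_shift_modes(points, bandwidth, iters=50, merge_tol=0.5):
    # Each point iteratively moves to the mean of the original points within
    # `bandwidth`, climbing to a local density mode; distinct modes (merged
    # within `merge_tol`) then stand in for individual stem positions.
    pts = np.asarray(points, dtype=float)
    shifted = pts.copy()
    for _ in range(iters):
        for i, p in enumerate(shifted):
            near = pts[np.linalg.norm(pts - p, axis=1) <= bandwidth]
            shifted[i] = near.mean(axis=0)
    modes = []
    for p in shifted:
        if not any(np.linalg.norm(p - m) < merge_tol for m in modes):
            modes.append(p)
    return np.array(modes)
```

    Because the procedure seeks density modes rather than height maxima, it does not depend on each tree presenting a single distinct top, which is the failure mode for spreading deciduous crowns.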

  8. A Speech Endpoint Detection Algorithm Based on BP Neural Network and Multiple Features

    NASA Astrophysics Data System (ADS)

    Shi, Yong-Qiang; Li, Ru-Wei; Zhang, Shuang; Wang, Shuai; Yi, Xiao-Qun

    Addressing the sharp decline in the performance of endpoint detection algorithms in complicated noise environments, a new speech endpoint detection method based on a BPNN (back-propagation neural network) and multiple features is presented. First, the maximum of the short-time autocorrelation function and the spectrum variance of the speech signal are extracted. Second, these feature vectors are used as input to train a BP neural network model, which is then optimized with a Genetic Algorithm. Finally, the signal's type is determined according to the network's output. Experiments show that the correct rate of the proposed algorithm is improved, because it has better robustness and adaptability than algorithms based on the maximum of the short-time autocorrelation function or the spectrum variance alone.

  9. A new method to measure low-order aberrations based on wavefront slope

    NASA Astrophysics Data System (ADS)

    Zhou, Qiong; Liu, Wenguang; Jiang, Zongfu

    2015-05-01

    In this paper we discuss a new method to detect low-order aberrations with large peak-to-valley (PV) values. The method also depends on wavefront slope measurements, but it needs measurements of only 6 spots, meaning that only 6 lenses are used in the detection process, and the mathematical algorithm involved is different from the zonal or modal estimation used in a Shack-Hartmann wavefront sensor. To evaluate the accuracy of this method we simulated the optical measurement process using the Zemax simulation software and Matlab. Simulation results show that the reconstruction errors of the Zernike aberration coefficients grow with the PV value of the wavefront distortion, but the maximal coefficient errors can be kept below 1% for aberrations with different combinations of defocus, astigmatism at 0°, astigmatism at 45°, and some high-order terms. The new measurement method can directly measure low-order aberrations of a laser beam with a large transverse area and does not need a beam-contracting system.
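    For comparison, a generic modal least-squares fit of low-order terms from slope samples can be sketched as below; the three-term wavefront model and the sampling geometry are assumptions for illustration, not the paper's six-spot reconstruction:

```python
import numpy as np

def fit_low_order(xy, sx, sy):
    # Model the wavefront as defocus plus the two astigmatism terms,
    #   W = cd*(x^2 + y^2) + c0*(x^2 - y^2) + c45*(2*x*y),
    # differentiate analytically, and solve the overdetermined slope
    # equations by least squares. Returns (cd, c0, c45).
    x, y = np.asarray(xy, dtype=float).T
    ax = np.column_stack([2 * x, 2 * x, 2 * y])    # dW/dx sensitivities
    ay = np.column_stack([2 * y, -2 * y, 2 * x])   # dW/dy sensitivities
    A = np.vstack([ax, ay])
    b = np.concatenate([np.asarray(sx, dtype=float), np.asarray(sy, dtype=float)])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```

    With 6 sampling points there are 12 slope equations for 3 unknowns, so the system stays well overdetermined; this is the standard modal alternative against which a direct slope-based scheme like the paper's would be evaluated.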

  10. Cluster-based spike detection algorithm adapts to interpatient and intrapatient variation in spike morphology.

    PubMed

    Nonclercq, Antoine; Foulon, Martine; Verheulpen, Denis; De Cock, Cathy; Buzatu, Marga; Mathys, Pierre; Van Bogaert, Patrick

    2012-09-30

    Visual quantification of interictal epileptiform activity is time consuming and requires a high level of expert vigilance. This is especially true for overnight recordings of patients suffering from epileptic encephalopathy with continuous spikes and waves during slow-wave sleep (CSWS), as they can show tens of thousands of spikes. Automatic spike detection would be attractive for this condition, but available algorithms have methodological limitations related to variation in spike morphology both between patients and within a single recording. We propose a fully automated method of interictal spike detection that adapts to interpatient and intrapatient variation in spike morphology. The algorithm works in five steps. (1) Spikes are detected using parameters suitable for highly sensitive detection. (2) Detected spikes are separated into clusters. (3) The number of clusters is automatically adjusted. (4) Centroids are used as templates for more specific spike detection, thereby adapting to the spike morphologies present. (5) Detected spikes are summed. The algorithm was evaluated on EEG samples from 20 children suffering from epilepsy with CSWS. When compared to the manual scoring of 3 EEG experts (3 records), the algorithm demonstrated similar performance: sensitivity and selectivity were 0.3% higher and 0.4% lower, respectively. The algorithm showed little difference compared to the manual scoring of another expert for the spike-and-wave index evaluation in 17 additional records (the mean absolute difference was 3.8%). This algorithm is therefore efficient for counting interictal spikes and determining a spike-and-wave index. PMID:22850558

  11. Utilization of advanced clutter suppression algorithms for improved standoff detection and identification of radionuclide threats

    NASA Astrophysics Data System (ADS)

    Cosofret, Bogdan R.; Shokhirev, Kirill; Mulhall, Phil; Payne, David; Harris, Bernard

    2014-05-01

    Technology development efforts seek to increase the capability of detection systems in low Signal-to-Noise regimes encountered in both portal and urban detection applications. We have recently demonstrated significant performance enhancement in existing Advanced Spectroscopic Portals (ASP), Standoff Radiation Detection Systems (SORDS) and handheld isotope identifiers through the use of new advanced detection and identification algorithms. The Poisson Clutter Split (PCS) algorithm is a novel approach for radiological background estimation that improves the detection and discrimination capability of medium resolution detectors. The algorithm processes energy spectra and performs clutter suppression, yielding de-noised gamma-ray spectra that enable significant enhancements in detection and identification of low activity threats with spectral target recognition algorithms. The performance is achievable at the short integration times (0.5 - 1 second) necessary for operation in a high throughput and dynamic environment. PCS has been integrated with ASP, SORDS and RIID units and evaluated in field trials. We present a quantitative analysis of algorithm performance against data collected by a range of systems in several cluttered environments (urban and containerized) with embedded check sources. We show that the algorithm achieves a high probability of detection/identification with low false alarm rates under low SNR regimes. For example, utilizing only 4 out of 12 NaI detectors currently available within an ASP unit, PCS processing demonstrated Pd,ID > 90% at a CFAR (Constant False Alarm Rate) of 1 in 1000 occupancies against weak activity (7 - 8μCi) and shielded sources traveling through the portal at 30 mph. This vehicle speed is a factor of 6 higher than was previously possible and results in significant increase in system throughput and overall performance.

  12. A Comparative Evaluation of Unsupervised Anomaly Detection Algorithms for Multivariate Data.

    PubMed

    Goldstein, Markus; Uchida, Seiichi

    2016-01-01

Anomaly detection is the process of identifying unexpected items or events in datasets that differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied to unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and arises in many practical applications, for example in network intrusion detection, fraud detection, and the life science and medical domains. Dozens of algorithms have been proposed in this area, but the research community still lacks a universal comparative evaluation as well as common publicly available datasets. This study addresses these shortcomings by evaluating 19 different unsupervised anomaly detection algorithms on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to provide a well-founded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides anomaly detection performance, computational effort, the impact of parameter settings, and global versus local anomaly detection behavior are outlined. In conclusion, we give advice on algorithm selection for typical real-world tasks. PMID:27093601

  13. A Comparative Evaluation of Unsupervised Anomaly Detection Algorithms for Multivariate Data

    PubMed Central

    Goldstein, Markus; Uchida, Seiichi

    2016-01-01

Anomaly detection is the process of identifying unexpected items or events in datasets that differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied to unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and arises in many practical applications, for example in network intrusion detection, fraud detection, and the life science and medical domains. Dozens of algorithms have been proposed in this area, but the research community still lacks a universal comparative evaluation as well as common publicly available datasets. This study addresses these shortcomings by evaluating 19 different unsupervised anomaly detection algorithms on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to provide a well-founded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides anomaly detection performance, computational effort, the impact of parameter settings, and global versus local anomaly detection behavior are outlined. In conclusion, we give advice on algorithm selection for typical real-world tasks. PMID:27093601
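One family of methods the survey above evaluates is global, distance-based detectors such as the k-nearest-neighbor (k-NN) anomaly score. A minimal pure-Python sketch of that idea (the function name and the toy data are ours, not from the paper's published code):

```python
import math

def knn_anomaly_scores(points, k=2):
    """Score each point by the mean distance to its k nearest neighbors.

    Larger scores indicate more anomalous points (a global,
    distance-based criterion).
    """
    scores = []
    for i, p in enumerate(points):
        # distances from p to every other point, smallest first
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append(sum(dists[:k]) / k)
    return scores

# A tight cluster plus one far-away point: the outlier should score highest.
data = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10)]
scores = knn_anomaly_scores(data, k=2)
```

In the survey's terminology this is an unsupervised, global detector; local methods such as LOF instead compare each point's neighborhood density to that of its neighbors.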

  14. A new approach to optic disc detection in human retinal images using the firefly algorithm.

    PubMed

    Rahebi, Javad; Hardalaç, Fırat

    2016-03-01

There are various methods and algorithms to detect the optic disc in retinal images. In recent years, much attention has been given to intelligent algorithms. In this paper, we present a new automated method of optic disc detection in human retinal images using the firefly algorithm, an emerging intelligent algorithm inspired by the social behavior of fireflies. The population in this algorithm consists of fireflies, each of which has a specific rate of lighting, or fitness. The insects are compared two by two, and the less attractive insects move toward the more attractive ones. Finally, one insect is selected as the most attractive, and this insect represents the optimum response to the problem in question. Here, we used the light intensity of the retinal image pixels instead of firefly lightings. The movement of the insects due to local fluctuations samples different light intensity values in the image. Because the optic disc is the brightest area in a retinal image, all of the insects move toward the brightest area and thus specify the location of the optic disc in the image. The results of implementation show that the proposed algorithm achieved an accuracy rate of 100% on the DRIVE dataset, 95% on the STARE dataset, and 94.38% on the DiaRetDB1 dataset, revealing the high capability and accuracy of the proposed algorithm in detecting the optic disc in retinal images. The average time required for optic disc detection was 2.13 s for the DRIVE dataset, 2.81 s for the STARE dataset, and 3.52 s for the DiaRetDB1 dataset. PMID:26093773
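The swarm behavior described above can be sketched as follows, with pixel intensity playing the role of firefly brightness. This is a minimal illustration, not the paper's implementation; the parameter values, helper names, and toy image are ours:

```python
import random

def firefly_brightest_region(image, n_fireflies=40, n_iter=100,
                             beta=0.8, alpha=1.0, seed=0):
    """Move fireflies toward brighter ones; return the brightest pixel found.

    `image` is a 2-D list of intensities; brighter pixels attract
    fireflies, so the swarm drifts toward the brightest area (the optic
    disc analogue in the paper).
    """
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    flies = [[rng.uniform(0, h - 1), rng.uniform(0, w - 1)]
             for _ in range(n_fireflies)]

    def intensity(pos):
        # clamp the continuous position to the nearest valid pixel
        r = min(h - 1, max(0, int(round(pos[0]))))
        c = min(w - 1, max(0, int(round(pos[1]))))
        return image[r][c]

    best_pos, best_val = None, float("-inf")
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if intensity(flies[j]) > intensity(flies[i]):
                    # move firefly i toward the brighter firefly j,
                    # plus a small random walk
                    for d in range(2):
                        step = beta * (flies[j][d] - flies[i][d])
                        flies[i][d] += step + alpha * (rng.random() - 0.5)
        for f in flies:
            v = intensity(f)
            if v > best_val:
                best_val = v
                best_pos = (int(round(f[0])), int(round(f[1])))
    return best_pos, best_val

# Toy "retina": a smooth bright blob centered at (10, 10), max intensity 400.
img = [[400 - (r - 10) ** 2 - (c - 10) ** 2 for c in range(21)]
       for r in range(21)]
pos, val = firefly_brightest_region(img)
```

Real retinal images would first need the preprocessing the paper applies; the point here is only the attraction-driven search.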

  15. A real-time implementation of an advanced sensor failure detection, isolation, and accommodation algorithm

    NASA Technical Reports Server (NTRS)

    Delaat, J. C.; Merrill, W. C.

    1984-01-01

    A sensor failure detection, isolation, and accommodation algorithm was developed which incorporates analytic sensor redundancy through software. This algorithm was implemented in a high level language on a microprocessor based controls computer. Parallel processing and state-of-the-art 16-bit microprocessors are used along with efficient programming practices to achieve real-time operation. Previously announced in STAR as N84-13140

  16. A high-order statistical tensor based algorithm for anomaly detection in hyperspectral imagery.

    PubMed

    Geng, Xiurui; Sun, Kang; Ji, Luyan; Zhao, Yongchao

    2014-01-01

Recently, high-order statistics have received increasing interest in the field of hyperspectral anomaly detection. However, most existing high-order statistics based anomaly detection methods require stepwise iterations, since they are direct applications of blind source separation. Moreover, these methods usually produce multiple detection maps rather than a single anomaly distribution image. In this study, we exploit the concept of the coskewness tensor and propose a new anomaly detection method, called COSD (coskewness detector). COSD needs no iteration and produces a single detection map. Experiments on both simulated and real hyperspectral data sets verify the effectiveness of our algorithm. PMID:25366706
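The coskewness tensor at the heart of such detectors is the third-order analogue of a covariance matrix. The sketch below is a simplified illustration (centering only; the paper's COSD additionally whitens the data and derives its detector from the tensor, so this is not the published algorithm):

```python
def coskewness_scores(samples):
    """Score samples by the coskewness tensor applied to them.

    samples: list of d-dimensional points. The third-order statistic
    S[i][j][k] = E[x_i * x_j * x_k] of the centered data is contracted
    with each sample; symmetric (Gaussian-like) points yield small
    scores, while skew-inducing anomalies yield large ones.
    """
    n, d = len(samples), len(samples[0])
    mean = [sum(p[i] for p in samples) / n for i in range(d)]
    centered = [[p[i] - mean[i] for i in range(d)] for p in samples]
    # third-order (coskewness) tensor of the centered data
    S = [[[sum(x[i] * x[j] * x[k] for x in centered) / n
           for k in range(d)] for j in range(d)] for i in range(d)]
    scores = []
    for x in centered:
        s = sum(S[i][j][k] * x[i] * x[j] * x[k]
                for i in range(d) for j in range(d) for k in range(d))
        scores.append(abs(s))
    return scores

# Symmetric background points plus one anomalous sample in dimension 0.
data = [(1, 0), (-1, 0), (1, 0), (-1, 0),
        (0, 1), (0, -1), (0, 1), (0, -1), (6, 0)]
scores = coskewness_scores(data)
```

A single score per sample is what gives a single detection map, in contrast to the multiple maps produced by blind-source-separation approaches.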

  17. Weld quality assessment using an edge detection algorithm

    NASA Astrophysics Data System (ADS)

    Kumar, Rajesh

    2010-03-01

Heat input during the welding process and subsequent re-cooling change the microstructure, hardness, toughness, and cracking susceptibility in the heat affected zone (HAZ). The weld quality of a weldment largely depends on the area of the HAZ. Determining the exact area of the HAZ by manual stereological methods and conventional visual inspection is difficult: these techniques approximate the complex shape of the HAZ as a combination of simplified shapes such as rectangles and triangles. In this paper, a filtering scheme based on morphology, thresholding, and edge detection is applied to images of weldments to assess weld quality. The HAZs of mild steel specimens welded with different welding parameters by the Metal Active Gas/Gas Metal Arc Welding process are compared.

  18. Weld quality assessment using an edge detection algorithm

    NASA Astrophysics Data System (ADS)

    Kumar, Rajesh

    2009-12-01

Heat input during the welding process and subsequent re-cooling change the microstructure, hardness, toughness, and cracking susceptibility in the heat affected zone (HAZ). The weld quality of a weldment largely depends on the area of the HAZ. Determining the exact area of the HAZ by manual stereological methods and conventional visual inspection is difficult: these techniques approximate the complex shape of the HAZ as a combination of simplified shapes such as rectangles and triangles. In this paper, a filtering scheme based on morphology, thresholding, and edge detection is applied to images of weldments to assess weld quality. The HAZs of mild steel specimens welded with different welding parameters by the Metal Active Gas/Gas Metal Arc Welding process are compared.
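The threshold-then-trace-edges idea behind such pipelines can be sketched on a toy grayscale image. This is a simplified stand-in (the function name, the intensity band, and the toy image are ours; a real pipeline would add the morphological filtering the abstract mentions):

```python
def haz_area_and_edges(image, lo, hi):
    """Segment a grayscale image by intensity band [lo, hi] and trace edges.

    Returns the segmented-region area (pixel count) and the set of edge
    pixels (region pixels with at least one non-region 4-neighbor): a
    simple way to measure an irregular zone directly instead of
    approximating it with rectangles and triangles.
    """
    h, w = len(image), len(image[0])
    region = {(r, c) for r in range(h) for c in range(w)
              if lo <= image[r][c] <= hi}
    edges = set()
    for (r, c) in region:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if (r + dr, c + dc) not in region:
                edges.add((r, c))
                break
    return len(region), edges

# Toy weld image: a 3x3 "HAZ" of intensity 5 on a background of 0.
img = [[0] * 5 for _ in range(5)]
for r in range(1, 4):
    for c in range(1, 4):
        img[r][c] = 5
area, edges = haz_area_and_edges(img, 4, 6)
```

Counting region pixels gives the HAZ area directly, however irregular its shape.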

  19. Hereditary leiomyomatosis and renal cell carcinoma syndrome-associated renal cancer: recognition of the syndrome by pathologic features and the utility of detecting aberrant succination by immunohistochemistry.

    PubMed

    Chen, Ying-Bei; Brannon, A Rose; Toubaji, Antoun; Dudas, Maria E; Won, Helen H; Al-Ahmadie, Hikmat A; Fine, Samson W; Gopalan, Anuradha; Frizzell, Norma; Voss, Martin H; Russo, Paul; Berger, Michael F; Tickoo, Satish K; Reuter, Victor E

    2014-05-01

Hereditary leiomyomatosis and renal cell carcinoma (HLRCC) syndrome is an autosomal dominant disorder in which germline mutations of the fumarate hydratase (FH) gene confer an increased risk of cutaneous and uterine leiomyomas and renal cancer. HLRCC-associated renal cancer is highly aggressive and frequently presents as a solitary mass. We reviewed the clinicopathologic features of 9 patients with renal tumors presenting as sporadic cases but who were later proven to have FH germline mutations. Histologically, all tumors showed mixed architectural patterns, with papillary as the dominant pattern in only 3 cases. Besides papillary, tubular, tubulopapillary, solid, and cystic elements, 6 of 9 tumors contained collecting duct carcinoma-like areas with infiltrating tubules, nests, or individual cells surrounded by desmoplastic stroma. Prominent tubulocystic carcinoma-like components and sarcomatoid differentiation were identified. Although all tumors exhibited the proposed hallmark of HLRCC (a large eosinophilic nucleolus surrounded by a clear halo), this feature was often not uniformly present throughout the tumor. Prior studies have shown that a high level of fumarate accumulated in HLRCC tumor cells causes aberrant succination of cellular proteins by forming a stable chemical modification, S-(2-succino)-cysteine (2SC), which can be detected by immunohistochemistry. We thus explored the utility of detecting 2SC by immunohistochemistry in the differential diagnosis of HLRCC tumors and other high-grade renal tumors and investigated the correlation between 2SC staining and FH molecular alterations. All confirmed HLRCC tumors demonstrated diffuse and strong nuclear and cytoplasmic 2SC staining, whereas all clear cell (184/184, 100%), most high-grade unclassified (93/97, 96%), and the large majority of "type 2" papillary (35/45, 78%) renal cell carcinoma cases showed no 2SC immunoreactivity. A subset of papillary (22%) and rare unclassified (4%) tumors showed patchy or diffuse

  20. Hereditary Leiomyomatosis and Renal Cell Carcinoma Syndrome-associated Renal Cancer: Recognition of the Syndrome by Pathologic Features and the Utility of Detecting Aberrant Succination by Immunohistochemistry

    PubMed Central

    Chen, Ying-Bei; Brannon, A. Rose; Toubaji, Antoun; Dudas, Maria E.; Won, Helen H.; Al-Ahmadie, Hikmat A.; Fine, Samson W.; Gopalan, Anuradha; Frizzell, Norma; Voss, Martin H.; Russo, Paul; Berger, Michael F.; Tickoo, Satish K.; Reuter, Victor E.

    2014-01-01

Hereditary leiomyomatosis and renal cell carcinoma (HLRCC) syndrome is an autosomal dominant disorder in which germline mutations of the fumarate hydratase (FH) gene confer an increased risk of cutaneous and uterine leiomyomas as well as renal cancer. HLRCC-associated renal cancer is highly aggressive, and frequently presents as a solitary mass. We reviewed the clinicopathologic features of 9 patients with renal tumors presenting as sporadic cases, but who were later proven to have FH germline mutations. Histologically, all tumors showed mixed architectural patterns, with papillary as the dominant pattern in only 3 cases. Besides papillary, tubular, tubulopapillary, solid and cystic elements, 6 of 9 tumors contained collecting duct carcinoma-like areas with infiltrating tubules, nests or individual cells surrounded by desmoplastic stroma. Prominent tubulocystic carcinoma-like components and sarcomatoid differentiation were identified. While all tumors exhibited the proposed hallmark of HLRCC (a large eosinophilic nucleolus surrounded by a clear halo), this feature was often not uniformly present throughout the tumor. Prior studies have shown that a high level of fumarate accumulated in HLRCC tumor cells causes aberrant succination of cellular proteins by forming a stable chemical modification, S-(2-succino)-cysteine (2SC), which can be detected by immunohistochemistry. We thus explored the utility of detecting 2SC by immunohistochemistry in the differential diagnosis of HLRCC tumors and other high-grade renal tumors, and investigated the correlation between 2SC staining and FH molecular alterations. All confirmed HLRCC tumors demonstrated diffuse and strong nuclear and cytoplasmic 2SC staining, while all clear cell (184/184, 100%), most high-grade unclassified RCC (93/97, 96%) and the large majority of type 2 papillary (35/45, 78%) cases showed no 2SC immunoreactivity. A subset of papillary (22%) and rare unclassified (4%) tumors showed patchy or diffuse cytoplasmic

  1. Infrared small target detection based on bilateral filtering algorithm with similarity judgments

    NASA Astrophysics Data System (ADS)

    Li, Yanbei; Li, Yan

    2014-11-01

Infrared small target detection is one of the key technologies in infrared precision guidance and search-and-track systems. Because the distance between the infrared imaging system and the target is large, the target appears small, faint, and obscure; furthermore, the interference from background clutter and system noise is intense. To solve the problem of infrared small target detection against a complex background, this paper proposes a bilateral filtering algorithm based on similarity judgments for infrared image background prediction. The algorithm introduces a gradient factor and a similarity judgment factor into traditional bilateral filtering; the two factors enhance the accuracy of the algorithm in smooth regions. At the same time, the spatial proximity and gray similarity coefficients in the bilateral filter are expressed by the first two terms of their Maclaurin expansions, aiming to reduce the time overhead. Simulation results show that, compared with the improved bilateral filtering algorithm, the proposed algorithm effectively suppresses complex background clutter in infrared images, enhances the target signal, improves the signal-to-noise ratio (SNR) and contrast, and reduces computation time. In short, the algorithm has good background rejection performance.
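A plain bilateral filter (without the paper's gradient and similarity-judgment factors, which are its contribution, and without the Maclaurin-expansion speedup) can be sketched as:

```python
import math

def bilateral_filter(image, radius=1, sigma_s=1.0, sigma_r=10.0):
    """Edge-preserving smoothing of a 2-D grayscale image.

    Each output pixel is a weighted mean of its neighbors, weighted by
    both spatial proximity and gray-level similarity, so sharp features
    (e.g. a small target) survive while smooth background is averaged;
    subtracting the result from the input then highlights the target.
    """
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            num = den = 0.0
            for dr in range(-radius, radius + 1):
                for dc in range(-radius, radius + 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w:
                        # spatial proximity weight
                        ws = math.exp(-(dr * dr + dc * dc)
                                      / (2 * sigma_s ** 2))
                        # gray-level similarity weight
                        diff = image[rr][cc] - image[r][c]
                        wr = math.exp(-(diff * diff) / (2 * sigma_r ** 2))
                        num += ws * wr * image[rr][cc]
                        den += ws * wr
            out[r][c] = num / den
    return out

# A flat image is unchanged; a sharp step edge is preserved.
flat = [[7.0] * 4 for _ in range(4)]
step = [[0.0, 0.0, 100.0, 100.0]]
out_flat = bilateral_filter(flat)
out_step = bilateral_filter(step)
```

Background prediction then amounts to treating the filtered image as the background estimate and thresholding the residual.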

  2. A Gaussian Process Based Online Change Detection Algorithm for Monitoring Periodic Time Series

    SciTech Connect

    Chandola, Varun; Vatsavai, Raju

    2011-01-01

Online time series change detection is a critical component of many monitoring systems, such as space and airborne remote sensing instruments, cardiac monitors, and network traffic profilers, which continuously analyze observations recorded by sensors. Data collected by such sensors typically have a periodic (seasonal) component. Most existing time series change detection methods are not directly applicable to such data, either because they are not designed to handle periodic time series or because they cannot operate in an online mode. We propose an online change detection algorithm which can handle periodic time series. The algorithm uses a Gaussian process based non-parametric time series prediction model and monitors the difference between the predictions and actual observations within a statistically principled control chart framework to identify changes. A key challenge in using a Gaussian process in an online mode is the need to solve a large system of equations involving the associated covariance matrix, which grows with every time step. The proposed algorithm exploits the special structure of the covariance matrix and can analyze a time series of length T in O(T^2) time while maintaining an O(T) memory footprint, compared to the O(T^4) time and O(T^2) memory requirements of standard matrix manipulation methods. We experimentally demonstrate the superiority of the proposed algorithm over several existing time series change detection algorithms on a set of synthetic and real time series. Finally, we illustrate the effectiveness of the proposed algorithm for identifying land use land cover changes using Normalized Difference Vegetation Index (NDVI) data collected for an agricultural region in Iowa, USA. Our algorithm is able to detect different types of changes in an NDVI validation data set (with ~80% accuracy) which occur due to crop type changes as well as disruptive changes (e.g., natural disasters).
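The predict-then-monitor structure can be sketched with a much simpler predictor than the paper's Gaussian process: a seasonal-mean model inside a Shewhart-style control chart. This is only an illustration of the framework, not the published method; the function name, window, and threshold are ours:

```python
import statistics

def detect_changes(series, period, window=3, threshold=2.0):
    """Flag time steps that break the series' periodic pattern.

    For each time t, the "prediction" is the mean of the observations at
    the same seasonal phase in the previous `window` periods; an alarm is
    raised when the residual exceeds `threshold` times that phase's
    observed standard deviation (a basic control-chart rule).
    """
    alarms = []
    for t in range(window * period, len(series)):
        history = [series[t - k * period] for k in range(1, window + 1)]
        mu = statistics.mean(history)
        sd = statistics.pstdev(history) or 1e-9  # guard a zero spread
        if abs(series[t] - mu) > threshold * sd:
            alarms.append(t)
    return alarms

# Five clean periods of a period-4 pattern, then one disrupted value.
series = [0, 1, 2, 3] * 5 + [10, 1, 2, 3]
alarms = detect_changes(series, period=4)
```

The Gaussian-process model in the paper replaces the seasonal mean with a full predictive distribution, which is what makes the control chart statistically principled.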

  3. A PD control-based QRS detection algorithm for wearable ECG applications.

    PubMed

    Choi, Changmok; Kim, Younho; Shin, Kunsoo

    2012-01-01

We present a QRS detection algorithm for wearable ECG applications using proportional-derivative (PD) control. Arrhythmia ECG data have irregular intervals and magnitudes of QRS waves that impede correct QRS detection. To resolve this problem, PD control is applied to avoid missing a small QRS wave that follows a large QRS wave and to avoid falsely detecting noise as QRS waves when the interval between two adjacent QRS waves is large (e.g. bradycardia, pause, and atrioventricular block). ECG data were obtained from 78 patients with various cardiovascular diseases and used to evaluate the performance of the proposed algorithm. The overall sensitivity and positive predictive value were 99.28% and 99.26%, respectively. The proposed algorithm has low computational complexity, making it suitable for real-time mobile ECG monitoring. PMID:23367208
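The role of the PD control is to adapt the detection threshold using both the current error (proportional term) and its trend (derivative term). A toy version on a 1-D signal follows; the gains, the half-amplitude target, and the signal are illustrative assumptions, not the paper's design:

```python
def detect_peaks_pd(signal, kp=0.4, kd=0.2, init_thresh=None):
    """Detect local maxima exceeding an adaptive threshold.

    After each detected peak, the threshold is driven toward half that
    peak's amplitude with a proportional-derivative update, so it drops
    after a large beat (catching a following small one) and rises again
    when beats are large.
    """
    thresh = init_thresh if init_thresh is not None else max(signal[:3])
    prev_err = 0.0
    peaks = []
    for i in range(1, len(signal) - 1):
        is_local_max = signal[i - 1] < signal[i] >= signal[i + 1]
        if is_local_max and signal[i] > thresh:
            peaks.append(i)
            err = 0.5 * signal[i] - thresh  # drive threshold to half amplitude
            thresh += kp * err + kd * (err - prev_err)
            prev_err = err
    return peaks

# A large beat (10) followed by a small beat (4): a fixed threshold of 5
# would miss the small one; the PD-adapted threshold catches both.
ecg = [0, 1, 0, 0, 10, 0, 0, 4, 0]
peaks = detect_peaks_pd(ecg)
```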

  4. Rain detection and removal algorithm using motion-compensated non-local mean filter

    NASA Astrophysics Data System (ADS)

    Song, B. C.; Seo, S. J.

    2015-03-01

This paper proposes a novel rain detection and removal algorithm robust against camera motion. Detecting and removing rain in video with camera motion is very difficult, so most previous works assume a fixed camera; such methods, however, are of limited practical use. The proposed algorithm initially detects possible rain streaks using spatial properties such as the luminance and structure of rain streaks. The rain streak candidates are then selected based on a Gaussian distribution model. Next, a non-rain block matching algorithm is performed between adjacent frames to find blocks similar to each block containing rain pixels. If similar blocks are found, the rain region of the block is reconstructed by non-local mean (NLM) filtering using the similar neighbors. Experimental results show that the proposed method outperforms previous works in terms of objective and subjective visual quality.

  5. Integration of a Self-Coherence Algorithm into DISAT for Forced Oscillation Detection

    SciTech Connect

    Follum, James D.; Tuffner, Francis K.; Amidan, Brett G.

    2015-03-03

With the increasing number of phasor measurement units (PMUs) on the power system, behaviors typically not observable on the power system are becoming more apparent. Oscillatory behavior, notably forced oscillations, is one such behavior. However, the large amount of data coming from the PMUs makes manually detecting and locating these oscillations difficult. To automate portions of the process, an oscillation detection routine was coded into the Data Integrity and Situational Awareness Tool (DISAT) framework. Integration into the DISAT framework allows forced oscillations to be detected and information about the event provided to operational engineers. The oscillation detection algorithm integrates with the data handling and atypical data detection capabilities of DISAT, building on a standard library of functions. This report details that integration, with information on the algorithm, some implementation issues, and some sample results from the western United States’ power grid.

  6. Algorithms for real-time fault detection of the Space Shuttle Main Engine

    NASA Astrophysics Data System (ADS)

    Ruiz, C. A.; Hawman, M. W.; Galinaitis, W. S.

    1992-07-01

This paper reports on the results of a program to develop and demonstrate concepts related to a real-time health management system (HMS) for the Space Shuttle Main Engine (SSME). An HMS framework was developed on the basis of a top-down analysis of current rocket engine failure modes and the engine monitoring requirements. One result of Phase I of this program was the identification of algorithmic approaches for detecting failures of the SSME. Three different analytical techniques were developed which demonstrated the capability to detect failures significantly earlier than the existing redlines. Based on promising initial results, Phase II of the program was initiated to further validate and refine the fault detection strategy on a large database of 140 SSME test firings, and to implement the resultant algorithms in real time. The paper begins with an overview of the refined algorithms used to detect failures during SSME start-up and main-stage operation. Results of testing these algorithms on a database of nominal and off-nominal SSME test firings are discussed. The paper concludes with a discussion of the performance of the algorithms operating on a real-time computer system.

  7. An ant colony based algorithm for overlapping community detection in complex networks

    NASA Astrophysics Data System (ADS)

    Zhou, Xu; Liu, Yanheng; Zhang, Jindong; Liu, Tuming; Zhang, Di

    2015-06-01

Community detection is of great importance for understanding the structures and functions of networks. Overlap is a significant feature of many networks, and overlapping community detection has attracted increasing attention; many algorithms have been presented to detect overlapping communities. In this paper, we present an ant colony based overlapping community detection algorithm that mainly includes ants' location initialization, ants' movement, and post-processing phases. An initialization strategy identifies the initial location of the ants and initializes the label list stored in each node. During the movement phase, all ants move according to the transition probability matrix, and a new heuristic information computation approach is defined to measure the similarity between two nodes. Every node maintains a label list through the cooperation of the ants until a termination criterion is reached. A post-processing phase is executed on the label lists to obtain the final overlapping community structure. We illustrate the capability of our algorithm with experiments on both synthetic and real-world networks. The results demonstrate that our algorithm outperforms state-of-the-art algorithms in finding overlapping communities and overlapping nodes on synthetic and real-world datasets.

  8. An online spike detection and spike classification algorithm capable of instantaneous resolution of overlapping spikes

    PubMed Central

    Natora, Michal; Boucsein, Clemens; Munk, Matthias H. J.; Obermayer, Klaus

    2009-01-01

    For the analysis of neuronal cooperativity, simultaneously recorded extracellular signals from neighboring neurons need to be sorted reliably by a spike sorting method. Many algorithms have been developed to this end, however, to date, none of them manages to fulfill a set of demanding requirements. In particular, it is desirable to have an algorithm that operates online, detects and classifies overlapping spikes in real time, and that adapts to non-stationary data. Here, we present a combined spike detection and classification algorithm, which explicitly addresses these issues. Our approach makes use of linear filters to find a new representation of the data and to optimally enhance the signal-to-noise ratio. We introduce a method called “Deconfusion” which de-correlates the filter outputs and provides source separation. Finally, a set of well-defined thresholds is applied and leads to simultaneous spike detection and spike classification. By incorporating a direct feedback, the algorithm adapts to non-stationary data and is, therefore, well suited for acute recordings. We evaluate our method on simulated and experimental data, including simultaneous intra/extra-cellular recordings made in slices of a rat cortex and recordings from the prefrontal cortex of awake behaving macaques. We compare the results to existing spike detection as well as spike sorting methods. We conclude that our algorithm meets all of the mentioned requirements and outperforms other methods under realistic signal-to-noise ratios and in the presence of overlapping spikes. PMID:19499318

  9. A New Pivoting and Iterative Text Detection Algorithm for Biomedical Images

    SciTech Connect

    Xu, Songhua; Krauthammer, Prof. Michael

    2010-01-01

There is interest in expanding the reach of literature mining to include the analysis of biomedical images, which often contain a paper's key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating its performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. We demonstrate that our projection histogram-based text detection approach is well suited for text detection in biomedical images, and that iterative application of the algorithm boosts performance to an F score of 0.60. We provide a C++ implementation of our algorithm, freely available for academic use.
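A projection histogram reduces a binary image to per-row (or per-column) ink counts; text lines show up as runs of high counts. A minimal single-pass sketch of that idea (the paper applies it iteratively, pivoting between rows and columns within each detected band; this sketch does one horizontal pass only):

```python
def text_row_bands(binary, min_ink=1):
    """Return (start, end) row intervals whose horizontal projection
    (number of ink pixels per row) is at least `min_ink`.

    `binary` is a 2-D list of 0/1 pixels; each returned band is a
    candidate text line.
    """
    proj = [sum(row) for row in binary]  # horizontal projection histogram
    bands, start = [], None
    for r, ink in enumerate(proj):
        if ink >= min_ink and start is None:
            start = r                      # band opens
        elif ink < min_ink and start is not None:
            bands.append((start, r - 1))   # band closes
            start = None
    if start is not None:
        bands.append((start, len(proj) - 1))
    return bands

# Two "text lines" separated by a blank row.
page = [[0, 0, 0],
        [1, 1, 0],
        [1, 0, 0],
        [0, 0, 0],
        [0, 1, 1]]
bands = text_row_bands(page)
```

Recursing with a vertical projection inside each band would then split lines into word or character boxes.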

  10. Peak detection in fiber Bragg grating using a fast phase correlation algorithm

    NASA Astrophysics Data System (ADS)

    Lamberti, A.; Vanlanduit, S.; De Pauw, B.; Berghmans, F.

    2014-05-01

The fiber Bragg grating sensing principle is based on exact tracking of the peak wavelength location. Several peak detection techniques have been proposed in the literature. Among these, conventional peak detection (CPD) methods, such as the maximum detection algorithm (MDA), do not achieve very high precision and accuracy, especially when the signal-to-noise ratio (SNR) and the wavelength resolution are poor. On the other hand, recently proposed algorithms, like the cross-correlation demodulation algorithm (CCA), are more precise and accurate but require higher computational effort. To overcome these limitations, we developed a novel fast phase correlation (FPC) algorithm that performs as well as the CCA while being considerably faster. This paper presents the FPC technique and analyzes its performance for different SNRs and wavelength resolutions. Using simulations and experiments, we compared the FPC with the MDA and CCA algorithms. The FPC detection capabilities were as precise and accurate as those of the CCA and considerably better than those of the CPD methods. The FPC computational time was up to 50 times lower than that of the CCA, making the FPC a valid candidate for future implementation in real-time systems.
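The correlation-based family of methods (CCA, and the FPC built on it) estimates the Bragg peak as the shift that best aligns a measured spectrum with a reference. A time-domain cross-correlation sketch of that idea follows; it is not the paper's FFT-based phase correlation, and it recovers only the integer shift (sub-sample peak estimation would add interpolation around the maximum):

```python
def best_shift(reference, measured):
    """Estimate the integer shift of `measured` relative to `reference`
    by maximizing the sliding inner product (cross-correlation).

    A positive result means `measured` is `reference` displaced toward
    higher indices (longer wavelengths, in the FBG analogy).
    """
    n = len(reference)
    best, best_score = 0, float("-inf")
    for shift in range(-n + 1, n):
        score = sum(reference[i] * measured[i + shift]
                    for i in range(n)
                    if 0 <= i + shift < n)
        if score > best_score:
            best, best_score = shift, score
    return best

# A spectral peak and the same peak displaced by two samples.
ref = [0, 0, 1, 4, 6, 4, 1, 0, 0, 0]
meas = [0, 0, 0, 0, 1, 4, 6, 4, 1, 0]
shift = best_shift(ref, meas)
```

Correlating against a template in this way is what makes such methods robust to noise, at the cost of the extra computation the FPC is designed to reduce.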

  11. Algorithms for real-time fault detection of the Space Shuttle Main Engine

    NASA Technical Reports Server (NTRS)

    Ruiz, C. A.; Hawman, M. W.; Galinaitis, W. S.

    1992-01-01

This paper reports on the results of a program to develop and demonstrate concepts related to a real-time health management system (HMS) for the Space Shuttle Main Engine (SSME). An HMS framework was developed on the basis of a top-down analysis of current rocket engine failure modes and the engine monitoring requirements. One result of Phase I of this program was the identification of algorithmic approaches for detecting failures of the SSME. Three different analytical techniques were developed which demonstrated the capability to detect failures significantly earlier than the existing redlines. Based on promising initial results, Phase II of the program was initiated to further validate and refine the fault detection strategy on a large database of 140 SSME test firings, and to implement the resultant algorithms in real time. The paper begins with an overview of the refined algorithms used to detect failures during SSME start-up and main-stage operation. Results of testing these algorithms on a database of nominal and off-nominal SSME test firings are discussed. The paper concludes with a discussion of the performance of the algorithms operating on a real-time computer system.

  12. Ear feature region detection based on a combined image segmentation algorithm-KRM

    NASA Astrophysics Data System (ADS)

    Jiang, Jingying; Zhang, Hao; Zhang, Qi; Lu, Junsheng; Ma, Zhenhe; Xu, Kexin

    2014-02-01

The Scale Invariant Feature Transform (SIFT) algorithm is widely used for ear feature matching and recognition. However, its application is usually hampered by non-target areas within the image, and this interference affects the matching and recognition of ear features. To solve this problem, this paper introduces a combined image segmentation algorithm, KRM, as a pretreatment method for human ear recognition. First, the target ear areas are extracted by the KRM algorithm; the SIFT algorithm can then be applied to feature detection and matching. The KRM algorithm follows three steps: (1) the image is preliminarily segmented into foreground target and background areas using the K-means clustering algorithm; (2) a region growing method merges the over-segmented areas; (3) morphological erosion filtering yields the final segmented regions. Experimental results showed that the KRM method effectively improves the accuracy and robustness of SIFT-based ear feature matching and recognition.

  13. A generalized power-law detection algorithm for humpback whale vocalizations.

    PubMed

    Helble, Tyler A; Ierley, Glenn R; D'Spain, Gerald L; Roch, Marie A; Hildebrand, John A

    2012-04-01

Conventional detection of humpback vocalizations is often based on frequency summation of band-limited spectrograms under the assumption that energy (the square of the Fourier amplitude) is the appropriate metric. Power-law detectors allow for a higher power of the Fourier amplitude, appropriate when the signal occupies a limited but unknown subset of these frequencies. Shipping noise is non-stationary and colored, and it is problematic for many marine mammal detection algorithms. Modifications to the standard power-law form are introduced to minimize the effects of this noise. These same modifications also allow for a fixed detection threshold, applicable to broadly varying ocean acoustic environments. The detection algorithm is general enough to detect all types of humpback vocalizations. Tests presented in this paper show that this algorithm matches human detection performance with an acceptably small probability of false alarms (P(FA) < 6%) for even the noisiest environments. The detector outperforms energy detection techniques, providing a probability of detection P(D) = 95% for P(FA) < 5% for three acoustic deployments, compared to P(FA) > 40% for two energy-based techniques. The generalized power-law detector can also be used for basic parameter estimation and can be adapted to other types of transient sounds. PMID:22501048
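The core power-law statistic raises normalized spectral magnitudes to an exponent ν > 1 before summing, which rewards energy concentrated in a few frequency bins. A toy comparison follows; the exponent and the simple normalization are generic illustrations, and the paper's modifications for colored, non-stationary noise are not included:

```python
def power_law_stat(spectrum, nu=2.5):
    """Sum of normalized spectral powers raised to the `nu` power.

    Normalizing by the total makes the statistic scale-free: a tonal
    signal that concentrates the same total energy in few bins scores
    higher than broadband noise spreading it over many bins. With
    nu = 1 this reduces to a plain (concentration-blind) energy sum.
    """
    total = sum(spectrum)
    return sum((s / total) ** nu for s in spectrum)

# Same total energy: concentrated (call-like) vs. spread (noise-like).
tonal = [0, 0, 8, 0, 0, 0, 0, 0]
noise = [1, 1, 1, 1, 1, 1, 1, 1]
tonal_score = power_law_stat(tonal)
noise_score = power_law_stat(noise)
```

An energy detector (ν = 1) would score both spectra identically, which is why it needs much higher false-alarm rates to match the power-law detector on narrowband calls.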

  14. An Algorithm for 353 Odor Detection Thresholds in Humans

    PubMed Central

    Sánchez-Moreno, Ricardo; Cometto-Muñiz, J. Enrique; Cain, William S.

    2012-01-01

    One hundred and ninety-three odor detection thresholds, ODTs, obtained by Nagata using the Japanese triangular bag method can be correlated as log (1/ODT) by a linear equation with R2 = 0.748 and a standard deviation, SD, of 0.830 log units; the latter may be compared with our estimate of 0.66 log units for the self-consistency of Nagata's data. Aldehydes, acids, unsaturated esters, and mercaptans were included in the equation through indicator variables that took into account the higher potency of these compounds. The ODTs obtained by Cometto-Muñiz and Cain, by Cometto-Muñiz and Abraham, and by Hellman and Small could be put on the same scale as those of Nagata to yield a linear equation for 353 ODTs with R2 = 0.759 and SD = 0.819 log units. The compound descriptors are available for several thousand compounds, and can be calculated from structure, so that further ODT values on the Nagata scale can be predicted for a host of volatile or semivolatile compounds. PMID:21976369
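    The statistical device used here, a linear equation with indicator (dummy) variables for chemical classes, can be sketched with ordinary least squares. The data, descriptor, and coefficients below are synthetic stand-ins, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
descriptor = rng.uniform(1.0, 6.0, n)               # hypothetical solute descriptor
is_aldehyde = (rng.random(n) < 0.3).astype(float)   # indicator for a potent class

# Synthetic "true" model: the indicator adds a fixed potency offset.
log_inv_odt = 0.9 * descriptor + 1.5 * is_aldehyde + rng.normal(0, 0.3, n)

# Design matrix: intercept + continuous descriptor + indicator variable.
X = np.column_stack([np.ones(n), descriptor, is_aldehyde])
beta, *_ = np.linalg.lstsq(X, log_inv_odt, rcond=None)

resid = log_inv_odt - X @ beta
sd = resid.std(ddof=3)                    # residual SD in log units
r2 = 1 - resid.var() / log_inv_odt.var()
```

    The indicator coefficient (here beta[2]) plays the same role as the class terms in the abstract: a constant shift in log (1/ODT) for members of that class.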

  15. Detection of Carious Lesions and Restorations Using Particle Swarm Optimization Algorithm

    PubMed Central

    Naebi, Mohammad; Saberi, Eshaghali; Risbaf Fakour, Sirous; Naebi, Ahmad; Hosseini Tabatabaei, Somayeh; Ansari Moghadam, Somayeh; Bozorgmehr, Elham; Davtalab Behnam, Nasim; Azimi, Hamidreza

    2016-01-01

    Background/Purpose. To date, the detection of carious lesions and restorations has not been automated; dentists inspect images and locate lesions based on their experience. Using new technologies, detection and repair can be implemented intelligently. In this paper, we introduce an intelligent detection method using particle swarm optimization (PSO) and our mathematical formulation. The method was applied to 2D images; with further development, it can be extended to both 2D and 3D images. Materials and Methods. In recent years, it has become possible to process images intelligently using high-efficiency optimization algorithms in many applications, especially for the detection of dental caries and restorations without human intervention. In the present work, we describe the PSO algorithm together with our detection formula for dental caries and restorations, supported by image processing of pictures taken with digital dental radiography systems. Results and Conclusion. We implemented a mathematical formula as the fitness function of the PSO. Our results show that this method can detect dental caries and restorations in digital radiography pictures with good convergence. The error rate of the method was 8%, so it can be applied to the detection of dental caries and restorations, and with appropriate parameter choices the error rate may be reduced below 0.5%. PMID:27212947
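    A generic PSO loop of the kind referenced above can be sketched in a few lines. The fitness below is a toy quadratic (distance to a known "lesion centre"), not the authors' detection formula, and the swarm parameters are conventional textbook values.

```python
import numpy as np

def pso(fitness, dim=2, n_particles=30, iters=100, seed=0):
    """Minimal particle swarm optimizer (global-best topology)."""
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration constants
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Toy fitness: squared distance to a hypothetical "lesion centre" at (2, -1).
best, best_f = pso(lambda p: np.sum((p - np.array([2.0, -1.0])) ** 2))
```

    In the paper's setting, the fitness would instead score how well a candidate position matches a caries or restoration signature in the radiograph.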

  17. A Genetic Algorithm and Fuzzy Logic Approach for Video Shot Boundary Detection

    PubMed Central

    Thounaojam, Dalton Meitei; Khelchandra, Thongam; Singh, Kh. Manglem; Roy, Sudipta

    2016-01-01

    This paper proposes a shot boundary detection approach using a Genetic Algorithm and Fuzzy Logic. The membership functions of the fuzzy system are calculated by the Genetic Algorithm using preobserved actual values for shot boundaries, and the fuzzy system then classifies the types of shot transitions. Experimental results show that the accuracy of shot boundary detection increases with the number of iterations (generations) of the GA optimization process. The proposed system is compared with recent techniques and yields better results in terms of the F1-score. PMID:27127500
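    The fuzzy side of such a system can be sketched with triangular membership functions over a frame-difference measure: a value is classified by the transition type whose membership is highest. The breakpoints below are hypothetical stand-ins for values a GA would tune; this is not the authors' tuned system.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify_transition(diff):
    """Fuzzy classification of a normalized frame-difference value.
    The breakpoints stand in for parameters a GA would optimize."""
    memberships = {
        "none":    tri(diff, -0.2, 0.0, 0.35),
        "gradual": tri(diff, 0.25, 0.5, 0.75),
        "abrupt":  tri(diff, 0.65, 1.0, 1.2),
    }
    return max(memberships, key=memberships.get)
```

    A GA would search over the (a, b, c) breakpoints of each membership function, scoring a candidate by how well its classifications match preobserved ground-truth boundaries.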

  19. Stable algorithm for event detection in event-driven particle dynamics: logical states

    NASA Astrophysics Data System (ADS)

    Strobl, Severin; Bannerman, Marcus N.; Pöschel, Thorsten

    2016-07-01

    Following the recent development of a stable event-detection algorithm for hard-sphere systems, the implications of more complex interaction models are examined. The relative location of particles leads to ambiguity when it is used to determine the interaction state of a particle in stepped potentials, such as the square-well model. To correctly predict the next event in these systems, the concept of an additional state that is tracked separately from the particle position is introduced and integrated into the stable algorithm for event detection.

  20. Automated shock detection and analysis algorithm for space weather application

    NASA Astrophysics Data System (ADS)

    Vorotnikov, Vasiliy S.; Smith, Charles W.; Hu, Qiang; Szabo, Adam; Skoug, Ruth M.; Cohen, Christina M. S.

    2008-03-01

    Space weather applications have grown steadily as real-time data have become increasingly available. Numerous industrial applications have arisen, with safeguarding of the power distribution grids being of particular interest. NASA uses short-term and long-term space weather predictions in its launch facilities. Researchers studying ionospheric, auroral, and magnetospheric disturbances use real-time space weather services to determine launch times. Commercial airlines, communication companies, and the military use space weather measurements to manage their resources and activities. As the effects of solar transients upon the Earth's environment and society grow with the increasing complexity of technology, better tools are needed to monitor and evaluate the characteristics of the incoming disturbances. There is a need for automated shock detection and analysis methods that are applicable to in situ measurements upstream of the Earth. Such tools can provide advance warning of approaching disturbances that have significant space weather impacts. Knowledge of the shock strength and speed can also provide insight into the nature of the approaching solar transient prior to arrival at the magnetopause. We report on efforts to develop a tool that can find and analyze shocks in interplanetary plasma data without operator intervention. This method will run with sufficient speed to be a practical space weather tool, providing useful shock information within 1 min of receipt of the necessary data on the ground. The ability to run without human intervention frees space weather operators to perform other vital services. We describe ways of handling upstream data that minimize the frequency of false positive alerts while providing the most complete description of approaching disturbances that is reasonably possible.

  1. Detection of aberrant responding on a personality scale in a military sample: an application of evaluating person fit with two-level logistic regression.

    PubMed

    Woods, Carol M; Oltmanns, Thomas F; Turkheimer, Eric

    2008-06-01

    Person-fit assessment is used to identify persons who respond aberrantly to a test or questionnaire. In this study, S. P. Reise's (2000) method for evaluating person fit using 2-level logistic regression was applied to 13 personality scales of the Schedule for Nonadaptive and Adaptive Personality (SNAP; L. Clark, 1996) that had been administered to military recruits (N = 2,026). Results revealed significant person-fit heterogeneity and indicated that for 5 SNAP scales (Disinhibition, Entitlement, Exhibitionism, Negative Temperament, and Workaholism), the scale was more discriminating for some people than for others. Possible causes of aberrant responding were explored with several covariates. On all 5 scales, severe pathology emerged as a key influence on responses, and there was evidence of differential test functioning with respect to gender, ethnicity, or both. Other potential sources of aberrancy were carelessness, haphazard responding, or uncooperativeness. Social desirability was not as influential as expected. PMID:18557693

  2. Evaluation of the GPU architecture for the implementation of target detection algorithms for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Trigueros-Espinosa, Blas; Vélez-Reyes, Miguel; Santiago-Santiago, Nayda G.; Rosario-Torres, Samuel

    2011-06-01

    Hyperspectral sensors can collect hundreds of images taken at different narrow and contiguously spaced spectral bands. This high-resolution spectral information can be used to identify materials and objects within the field of view of the sensor by their spectral signatures, but this process may be computationally intensive due to the large data sizes generated by hyperspectral sensors, typically hundreds of megabytes. This can be an important limitation for applications where the detection process must be performed in real time (surveillance, explosive detection, etc.). In this work, we developed a parallel implementation of three state-of-the-art target detection algorithms (the RX algorithm, the matched filter, and the adaptive matched subspace detector) using a graphics processing unit (GPU) based on the NVIDIA® CUDA™ architecture. In addition, a multi-core CPU-based implementation of each algorithm was developed to be used as a baseline for speedup estimation. We evaluated the performance of the GPU-based implementations using an NVIDIA® Tesla® C1060 GPU card, and the detection accuracy of the implemented algorithms was evaluated using a set of phantom images simulating traces of different materials on clothing. We achieved a maximum speedup in the GPU implementations of around 20x over a multi-core CPU-based implementation, which suggests that applications for real-time detection of targets in HSI can greatly benefit from the performance of GPUs as processing hardware.
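    Of the three detectors named, the RX algorithm is the simplest to sketch: each pixel's spectrum is scored by its Mahalanobis distance from the scene mean under the scene covariance. The NumPy version below is a serial reference sketch on a synthetic cube (the GPU work in the paper parallelizes exactly this kind of per-pixel quadratic form); the cube size and implanted target are hypothetical.

```python
import numpy as np

def rx(cube):
    """Global RX detector on a (rows, cols, bands) cube: Mahalanobis
    distance of each pixel's spectrum from the scene mean."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(b))   # regularized inverse
    d = X - mu
    scores = np.einsum("ij,jk,ik->i", d, cov_inv, d)  # per-pixel quadratic form
    return scores.reshape(h, w)

rng = np.random.default_rng(0)
cube = rng.normal(0, 1, (16, 16, 8))
cube[5, 5] += np.full(8, 4.0)     # implanted anomalous "target" spectrum

scores = rx(cube)
```

    Because every pixel's score is independent given the mean and covariance, the scoring loop maps naturally onto one GPU thread per pixel.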

  3. Low power multi-camera system and algorithms for automated threat detection

    NASA Astrophysics Data System (ADS)

    Huber, David J.; Khosla, Deepak; Chen, Yang; Van Buer, Darrel J.; Martin, Kevin

    2013-05-01

    A key to any robust automated surveillance system is continuous, wide field-of-view sensor coverage and high accuracy target detection algorithms. Newer systems typically employ an array of multiple fixed cameras that provide individual data streams, each of which is managed by its own processor. This array can continuously capture the entire field of view, but collecting all the data and running the back-end detection algorithm consumes additional power and increases the size, weight, and power (SWaP) of the package. This is often unacceptable, as many potential surveillance applications have strict system SWaP requirements. This paper describes a wide field-of-view video system that employs multiple fixed cameras and exhibits low SWaP without compromising the target detection rate. We cycle through the sensors, fetch a fixed number of frames, and process them through a modified target detection algorithm. During this time, the other sensors remain powered down, which reduces the required hardware and power consumption of the system. We show that the resulting gaps in coverage and irregular frame rate do not affect the detection accuracy of the underlying algorithms. This reduces the power of an N-camera system by up to approximately N-fold compared to the baseline normal operation. This work was applied to Phase 2 of the DARPA Cognitive Technology Threat Warning System (CT2WS) program and used during field testing.

  4. Algorithms for the detection of chewing behavior in dietary monitoring applications

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Helal, Abdelsalam; Mendez-Vasquez, Andres

    2009-08-01

    The detection of food consumption is key to the implementation of successful behavior modification in support of dietary monitoring and therapy, for example, during the course of controlling obesity, diabetes, or cardiovascular disease. Since the vast majority of humans consume food via mastication (chewing), we have designed an algorithm that automatically detects chewing behaviors in surveillance video of a person eating. Our algorithm first detects the mouth region, then computes the spatiotemporal frequency spectrum of a small perioral region (including the mouth). Spectral data are analyzed to determine the presence of periodic motion that characterizes chewing. A classifier is then applied to discriminate different types of chewing behaviors. Our algorithm was tested on seven volunteers, whose behaviors included chewing with mouth open, chewing with mouth closed, talking, static face presentation (control case), and moving face presentation. Early test results show that the chewing behaviors induce a temporal frequency peak at 0.5 Hz to 2.5 Hz, which is readily detected using a distance-based classifier. Computational cost is analyzed for implementation on embedded processing nodes, for example, in a healthcare sensor network. Complexity analysis emphasizes the relationship between the work and space estimates of the algorithm, and its estimated error. It is shown that chewing detection is possible within a computationally efficient, accurate, and subject-independent framework.
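    The periodicity test at the heart of this approach can be sketched by locating the dominant spectral peak of a mouth-region motion signal and checking whether it falls in the 0.5-2.5 Hz chewing band. The frame rate, power-share threshold, and synthetic signals below are illustrative assumptions, not the paper's classifier.

```python
import numpy as np

def dominant_freq(signal, fs):
    """Return the dominant frequency (Hz) and its share of total power."""
    spec = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    peak = spec.argmax()
    return freqs[peak], spec[peak] / spec.sum()

def is_chewing(signal, fs=30.0, lo=0.5, hi=2.5, min_share=0.2):
    """Chewing if the dominant motion frequency lies in the 0.5-2.5 Hz band
    and carries a substantial share of the power (threshold hypothetical)."""
    f, share = dominant_freq(signal, fs)
    return lo <= f <= hi and share >= min_share

fs = 30.0                                   # assumed video frame rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
chew = np.sin(2 * np.pi * 1.5 * t) + 0.3 * rng.standard_normal(t.size)
still = 0.3 * rng.standard_normal(t.size)   # static face: no periodic motion
```

    The power-share check guards against a spurious noise peak happening to land inside the band on a non-chewing frame sequence.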

  5. Detection algorithm for glass bottle mouth defect by continuous wavelet transform based on machine vision

    NASA Astrophysics Data System (ADS)

    Qian, Jinfang; Zhang, Changjiang

    2014-11-01

    An efficient algorithm based on the continuous wavelet transform combined with prior knowledge, which can be used to detect defects of the glass bottle mouth, is proposed. Firstly, under the condition of a ball integral light source, a perfect glass bottle mouth image is obtained by a Japanese Computar camera through an IEEE-1394b interface. A single-threshold method based on the gray level histogram is used to obtain the binary image of the glass bottle mouth. In order to efficiently suppress noise, a moving average filter is employed to smooth the histogram of the original glass bottle mouth image, and a continuous wavelet transform is then applied to accurately determine the segmentation threshold. Mathematical morphology operations are used to obtain the normal binary bottle mouth mask. A glass bottle to be inspected is moved to the detection zone by a conveyor belt, and both its mouth image and binary image are obtained by the above method. The binary image is multiplied with the normal bottle mask to obtain a region of interest. Four parameters (number of connected regions, coordinate of the centroid position, diameter of the inner circle, and area of the annular region) are computed from the region of interest, and glass bottle mouth detection rules are designed from these four parameters so as to accurately detect and identify defect conditions of the glass bottle. Finally, glass bottles from the Coca-Cola Company are used to verify the proposed algorithm. The experimental results show that the proposed algorithm can accurately detect the defect conditions of the glass bottles with 98% detection accuracy.
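    The measurement stage can be sketched by thresholding a synthetic bottle-mouth image and computing two of the four rule parameters, the number of connected regions and the centroid, with a simple flood-fill labelling. The image, threshold, and "defect" below are toy values, not the paper's setup.

```python
import numpy as np

def label_regions(mask):
    """4-connected component labelling by iterative flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
                            and mask[y, x] and labels[y, x] == 0):
                        labels[y, x] = current
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, current

# Toy "bottle mouth": a bright annulus plus one spurious defect pixel.
img = np.zeros((20, 20))
yy, xx = np.mgrid[:20, :20]
r = np.hypot(yy - 10, xx - 10)
img[(r > 5) & (r < 8)] = 255           # annular mouth region
img[2, 2] = 255                        # defect outside the annulus

mask = img > 128                       # single-threshold segmentation
labels, n_regions = label_regions(mask)
# Centroid of the largest region approximates the mouth centre.
main = np.argmax([np.sum(labels == k) for k in range(1, n_regions + 1)]) + 1
cy, cx = (c.mean() for c in np.nonzero(labels == main))
```

    A rule of the kind described in the abstract would then flag the bottle because the region count exceeds one (the defect creates an extra connected component).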

  6. A study of algorithm to detect wildfire with edge of smoke plumes

    NASA Astrophysics Data System (ADS)

    Mototani, I.; Kimura, K.; Honma, T.

    2008-12-01

    In recent years, huge wildfires have occurred in many parts of the world, and research has proceeded on improving wildfire detection with satellite imagery. Dozier (1981) developed a method that detects a hotspot pixel by comparing it with adjacent pixels. Threshold methods based on Dozier's approach and contextual methods using relationships among neighboring pixels followed, but each of these algorithms needs further improvement in accuracy. In this study, we formulate a new algorithm using the edges of smoke plumes, based on the rule that fire pixels match the origins of smoke plumes, and validate it against truth data. In this algorithm, MODIS band 1 (visible red) is extracted and smoke plumes are accentuated by histogram stretching. The edges of the smoke plumes are then extracted, and the edge pixels forming each plume are approximated by the least squares method. Finally, the origins of the smoke plumes are determined and fire pixels are detected by the threshold approach. Our method, however, has the problem that the hotspot area often takes a rectangular shape when the threshold temperature is not very high. Applying the algorithm, we found that fires are easy to detect when clouds are not thick and the smoke shape is clearly visible. On the other hand, false alarms are detected along coastlines and in highly reflective areas such as glaciers and cirrocumulus clouds. In addition, excessive detections increase at low latitudes because brightness temperature is raised by sunlight reflection. The wildfires in Alaska were detected well with our method. To validate this result, it was compared with observational data and a common detection method. The Alaska Fire History Data (AFHD) is observed frequently by the Alaska Fire Service and is offered as GIS data, while MOD14, one of the most common wildfire detection methods, is calculated easily from MODIS data.
Its accuracy rate to detect fire
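    The edge-fitting step described in this abstract, approximating plume edge pixels with a least-squares line and taking the upwind end as the fire origin, can be sketched as follows. The edge pixels, drift direction, and origin are synthetic; this is an illustration of the geometry, not the authors' MODIS processing.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic edge pixels of a smoke plume drifting away from origin (5, 12).
origin = np.array([5.0, 12.0])
s = rng.uniform(0, 30, 80)                        # distance along the plume
pts = origin + np.outer(s, [1.0, 0.4]) + rng.normal(0, 0.3, (80, 2))

# Least-squares line y = a*x + b through the edge pixels.
x, y = pts[:, 0], pts[:, 1]
a, b = np.polyfit(x, y, 1)

# Estimate the plume origin as the fitted point at the upwind end (min x).
x0 = x.min()
est = np.array([x0, a * x0 + b])
```

    In the full algorithm, the pixel at this estimated origin would then be checked against the brightness-temperature threshold to confirm it as a fire pixel.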

  7. A new method for mesoscale eddy detection based on watershed segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Qin, Lijuan; Dong, Qing; Xue, Cunjin; Hou, Xueyan; Song, Wanjiao

    2014-11-01

    Mesoscale eddies are widely found in the ocean. They play important roles in heat transport, momentum transport, ocean circulation, and so on. The automatic detection of mesoscale eddies from satellite remote sensing images is an important research topic. Some image processing methods, such as the Canny operator and the Hough transform, have been applied to identify mesoscale eddies, but the detection accuracy has not been ideal. This paper describes a new algorithm based on the watershed segmentation algorithm for automatic detection of mesoscale eddies from sea level anomaly (SLA) images. The watershed segmentation algorithm has the disadvantage of over-segmentation, so it is important to select appropriate markers. In this study, markers were selected from the reconstructed SLA image and used to modify the gradient image. Two parameters, the radius and amplitude of an eddy, were then used to filter the segmentation results. The method was tested on the Northwest Pacific using TOPEX/Poseidon altimeter data. The results are encouraging, showing that this algorithm is applicable to mesoscale eddies and has good accuracy. The algorithm responds well to weak edges, and the extracted eddies have complete and continuous boundaries that generally coincide with closed contours of SSH.

  8. Track-Before-Detect Algorithm for Faint Moving Objects based on Random Sampling and Consensus

    NASA Astrophysics Data System (ADS)

    Dao, P.; Rast, R.; Schlaegel, W.; Schmidt, V.; Dentamaro, A.

    2014-09-01

    There are many algorithms developed for tracking and detecting faint moving objects in congested backgrounds. One obvious application is detection of targets in images where each pixel corresponds to the received power in a particular location. In our application, a visible imager operated in stare mode observes geostationary objects as fixed, stars as moving, and non-geostationary objects as drifting in the field of view. We would like to achieve high-sensitivity detection of the drifters. The ability to improve SNR with track-before-detect (TBD) processing, where target information is collected and collated before the detection decision is made, allows respectable performance against dim moving objects. Generally, a TBD algorithm consists of a pre-processing stage that highlights potential targets and a temporal filtering stage. However, the algorithms that have been successfully demonstrated, e.g. Viterbi-based and Bayesian-based, demand formidable processing power and memory. We propose an algorithm that exploits the quasi-constant velocity of objects, the predictability of the stellar clutter, and the intrinsically low false alarm rate of detecting signature candidates in 3-D, based on an iterative method called "RANdom SAmple Consensus" (RANSAC), and one that can run in real time on a typical PC. The technique is tailored for searching for objects with small telescopes in stare mode. Our RANSAC-MT (Moving Target) algorithm estimates the parameters of a mathematical model (e.g., linear motion) from a set of observed data which contains a significant number of outliers while identifying inliers. In the pre-processing phase, candidate blobs are selected based on morphology and an intensity threshold that would normally generate an unacceptable level of false alarms. The RANSAC sampling then rejects candidates that conform to the predictable motion of the stars. Data collected with a 17 inch telescope by AFRL/RH and a COTS lens/EM-CCD sensor by the AFRL/RD Satellite Assessment Center is
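    The RANSAC idea for a constant-velocity drifter can be sketched by fitting position versus frame time to candidate detections contaminated by clutter. The track, clutter, tolerance, and iteration count below are illustrative assumptions, not the RANSAC-MT implementation.

```python
import numpy as np

def ransac_line(t, x, iters=200, tol=0.5, seed=0):
    """RANSAC fit of x = v*t + x0; returns (v, x0) and the inlier mask."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(t), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(t), 2, replace=False)
        if t[i] == t[j]:
            continue
        v = (x[j] - x[i]) / (t[j] - t[i])
        x0 = x[i] - v * t[i]
        inliers = np.abs(x - (v * t + x0)) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the best consensus set.
    v, x0 = np.polyfit(t[best_inliers], x[best_inliers], 1)
    return v, x0, best_inliers

rng = np.random.default_rng(1)
t = np.arange(40, dtype=float)
track = 0.8 * t + 3.0 + rng.normal(0, 0.1, 40)    # true drifter, v = 0.8
clutter_t = rng.uniform(0, 40, 30)                # star/noise false candidates
clutter_x = rng.uniform(0, 40, 30)
T = np.concatenate([t, clutter_t])
X = np.concatenate([track, clutter_x])

v, x0, inliers = ransac_line(T, X)
```

    The consensus step is what tolerates the heavy clutter: only candidates consistent with a single constant-velocity line survive into the final fit.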

  9. Image-based EUVL aberration metrology

    NASA Astrophysics Data System (ADS)

    Fenger, Germain Louis

    A significant factor in the degradation of nanolithographic image fidelity is optical wavefront aberration. As resolution of nanolithography systems increases, effects of wavefront aberrations on aerial image become more influential. The tolerance of such aberrations is governed by the requirements of features that are being imaged, often requiring lenses that can be corrected with a high degree of accuracy and precision. Resolution of lithographic systems is driven by scaling wavelength down and numerical aperture (NA) up. However, aberrations are also affected from the changes in wavelength and NA. Reduction in wavelength or increase in NA result in greater impact of aberrations, where the latter shows a quadratic dependence. Current demands in semiconductor manufacturing are constantly pushing lithographic systems to operate at the diffraction limit; hence, prompting a need to reduce all degrading effects on image properties to achieve maximum performance. Therefore, the need for highly accurate in-situ aberration measurement and correction is paramount. In this work, an approach has been developed in which several targets including phase wheel, phase disk, phase edges, and binary structures are used to generate optical images to detect and monitor aberrations in extreme ultraviolet (EUV) lithographic systems. The benefit of using printed patterns as opposed to other techniques is that the lithography system is tested under standard operating conditions. Mathematical models in conjunction with iterative lithographic simulations are used to determine pupil phase wavefront errors and describe them as combinations of Zernike polynomials.

  10. Automatic face detection and tracking based on Adaboost with camshift algorithm

    NASA Astrophysics Data System (ADS)

    Lin, Hui; Long, JianFeng

    2011-10-01

    With the development of information technology, video surveillance is widely used in security monitoring and identity recognition. Because most pure face tracking algorithms have difficulty specifying the initial location and scale of the face automatically, this paper proposes a fast and robust method to detect and track faces by combining AdaBoost with the CamShift algorithm. First, the location and scale of the face are determined by the AdaBoost algorithm based on Haar-like features and conveyed to the initial search window automatically. Then, the CamShift algorithm is applied to track the face. Experimental results based on OpenCV software are good, even in some special circumstances such as lighting changes and rapid face movement. Besides, by drawing the tracking trajectory of the face movement, some abnormal behavior events can be analyzed.
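    The mean-shift step at the core of CamShift can be sketched in NumPy: a search window is iteratively recentred on the centroid of a "face probability" (back-projection) image. The toy image and the initial window below (standing in for what the AdaBoost stage would supply) are hypothetical, and the scale/orientation adaptation of full CamShift is omitted.

```python
import numpy as np

def mean_shift(prob, window, iters=20):
    """Shift a (y, x, h, w) window to the centroid of prob mass inside it."""
    y, x, h, w = window
    for _ in range(iters):
        patch = prob[y:y + h, x:x + w]
        total = patch.sum()
        if total == 0:
            break
        yy, xx = np.mgrid[:h, :w]
        cy = (yy * patch).sum() / total        # centroid within the window
        cx = (xx * patch).sum() / total
        ny = int(round(y + cy - h / 2))        # recentre the window on it
        nx = int(round(x + cx - w / 2))
        ny = min(max(ny, 0), prob.shape[0] - h)
        nx = min(max(nx, 0), prob.shape[1] - w)
        if (ny, nx) == (y, x):
            break
        y, x = ny, nx
    return y, x, h, w

# Toy "back-projection": a bright face blob at rows 20-27, cols 30-37.
prob = np.zeros((50, 50))
prob[20:28, 30:38] = 1.0
# Initial window offset from the blob, as a detector might supply it.
win = mean_shift(prob, (14, 24, 12, 12))
```

    Each iteration climbs the local density of face-colored pixels, which is why the tracker follows the face even as it moves between frames.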

  11. CenLP: A centrality-based label propagation algorithm for community detection in networks

    NASA Astrophysics Data System (ADS)

    Sun, Heli; Liu, Jiao; Huang, Jianbin; Wang, Guangtao; Yang, Zhou; Song, Qinbao; Jia, Xiaolin

    2015-10-01

    Community detection is an important task for discovering the structure and features of complex networks. Many existing methods are sensitive to critical user-dependent parameters or are time-consuming in practice. In this paper, we propose a novel label propagation algorithm, called CenLP (Centrality-based Label Propagation). The algorithm introduces a new function to measure the centrality of nodes quantitatively, without any user interaction, by calculating the local density and the similarity with higher-density neighbors for each node. Based on the centrality of nodes, we present a new label propagation algorithm with a specific update order and node preference to uncover communities in large-scale networks automatically, without imposing any prior restriction. Experiments on both real-world and synthetic networks show that our algorithm retains the simplicity, effectiveness, and scalability of the original label propagation algorithm while becoming more robust and accurate. Extensive experiments demonstrate the superior performance of our algorithm over the baseline methods. Moreover, our detailed experimental evaluation on real-world networks indicates that our algorithm can effectively measure the centrality of nodes in social networks.
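    The base algorithm that CenLP extends, plain label propagation, can be sketched in a few lines: each node repeatedly adopts the most frequent label among its neighbors. The fixed update order and tie-breaking rule below are simplifications chosen to keep the sketch deterministic (CenLP's centrality-based ordering is more involved), and the graph is a toy example.

```python
from collections import Counter

def label_propagation(adj, iters=20):
    """adj: {node: [neighbours]} -> {node: community label}."""
    labels = {v: v for v in adj}          # every node starts as its own community
    for _ in range(iters):
        changed = False
        for v in sorted(adj):             # fixed order keeps this sketch deterministic
            counts = Counter(labels[u] for u in adj[v])
            # Adopt the most frequent neighbour label (ties -> largest label).
            best = max(counts, key=lambda l: (counts[l], l))
            if best != labels[v]:
                labels[v] = best
                changed = True
        if not changed:
            break
    return labels

# Two 4-cliques joined by a single bridge edge (3-4).
adj = {
    0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
    4: [3, 5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6],
}
communities = label_propagation(adj)
```

    On this graph the two cliques settle into two distinct labels; CenLP's contribution is to replace the arbitrary update order and tie-breaks with a centrality-driven ordering that makes the outcome stable on harder graphs.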

  12. Wavelet neural networks initialization using hybridized clustering and harmony search algorithm: Application in epileptic seizure detection

    NASA Astrophysics Data System (ADS)

    Zainuddin, Zarita; Lai, Kee Huong; Ong, Pauline

    2013-04-01

    Artificial neural networks (ANNs) are powerful mathematical models that are used to solve complex real world problems. Wavelet neural networks (WNNs), which were developed based on the wavelet theory, are a variant of ANNs. During the training phase of WNNs, several parameters need to be initialized; including the type of wavelet activation functions, translation vectors, and dilation parameter. The conventional k-means and fuzzy c-means clustering algorithms have been used to select the translation vectors. However, the solution vectors might get trapped at local minima. In this regard, the evolutionary harmony search algorithm, which is capable of searching for near-optimum solution vectors, both locally and globally, is introduced to circumvent this problem. In this paper, the conventional k-means and fuzzy c-means clustering algorithms were hybridized with the metaheuristic harmony search algorithm. In addition to obtaining the estimation of the global minima accurately, these hybridized algorithms also offer more than one solution to a particular problem, since many possible solution vectors can be generated and stored in the harmony memory. To validate the robustness of the proposed WNNs, the real world problem of epileptic seizure detection was presented. The overall classification accuracy from the simulation showed that the hybridized metaheuristic algorithms outperformed the standard k-means and fuzzy c-means clustering algorithms.
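    The metaheuristic component can be sketched with a minimal harmony search minimizing a toy objective (here, the squared distance to a single target "translation vector"). The HMCR/PAR/bandwidth values are conventional illustrative choices, and this is not the authors' hybridized k-means/fuzzy c-means scheme.

```python
import numpy as np

def harmony_search(f, dim, bounds=(-5, 5), hms=20, iters=500, seed=0):
    """Minimal harmony search. HMCR = harmony memory considering rate,
    PAR = pitch adjusting rate, bw = pitch bandwidth."""
    rng = np.random.default_rng(seed)
    hmcr, par, bw = 0.9, 0.3, 0.1
    lo, hi = bounds
    memory = rng.uniform(lo, hi, (hms, dim))      # harmony memory
    costs = np.array([f(h) for h in memory])
    for _ in range(iters):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:               # draw from harmony memory
                new[d] = memory[rng.integers(hms), d]
                if rng.random() < par:            # pitch adjustment (local step)
                    new[d] += rng.uniform(-bw, bw)
            else:                                 # random improvisation
                new[d] = rng.uniform(lo, hi)
            new[d] = min(max(new[d], lo), hi)
        c = f(new)
        worst = costs.argmax()
        if c < costs[worst]:                      # replace the worst harmony
            memory[worst], costs[worst] = new, c
    best = memory[costs.argmin()]
    return best, costs.min()

# Toy objective: distance to a hypothetical "translation vector" at (1, -2).
best, cost = harmony_search(
    lambda v: np.sum((v - np.array([1.0, -2.0])) ** 2), dim=2)
```

    Because the whole memory of near-optimal harmonies is retained, the method naturally yields multiple candidate solution vectors, the property the abstract highlights for initializing WNN translation vectors.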

  13. The Aberration Corrected SEM

    SciTech Connect

    Joy, David C.

    2005-09-09

    The performance of the conventional low-energy CD-SEM is limited by the aberrations inherent in the probe forming lens. Multi-pole correctors are now available which can reduce or eliminate these aberrations. An SEM equipped with such a corrector offers higher spatial resolution and more probe current from a given electron source, and other aspects of the optical performance are also improved, but the much higher numerical aperture associated with an aberration corrected lens results in a reduction in imaging depth of field.

  14. A hybrid algorithm for multiple change-point detection in continuous measurements

    NASA Astrophysics Data System (ADS)

    Priyadarshana, W. J. R. M.; Polushina, T.; Sofronov, G.

    2013-10-01

    Array comparative genomic hybridization (aCGH) is one of the techniques that can be used to detect copy number variations in DNA sequences. It has been identified that abrupt changes in the human genome play a vital role in the progression and development of many diseases. We propose a hybrid algorithm that utilizes both the sequential techniques and the Cross-Entropy method to estimate the number of change points as well as their locations in aCGH data. We applied the proposed hybrid algorithm to both artificially generated data and real data to illustrate the usefulness of the methodology. Our results show that the proposed algorithm is an effective method to detect multiple change-points in continuous measurements.
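    A generic multiple change-point scan of the kind this abstract addresses can be sketched with binary segmentation: find the split that maximizes a mean-shift statistic, then recurse on the two halves. This is a standard baseline sketch, not the authors' Cross-Entropy hybrid; the threshold and the synthetic piecewise-constant signal are illustrative assumptions.

```python
import numpy as np

def best_split(x):
    """Split index maximizing the scaled two-sample mean-difference statistic."""
    n = len(x)
    best_k, best_s = None, 0.0
    for k in range(2, n - 2):
        a, b = x[:k], x[k:]
        s = abs(a.mean() - b.mean()) * np.sqrt(k * (n - k) / n)
        if s > best_s:
            best_k, best_s = k, s
    return best_k, best_s

def binary_segmentation(x, threshold=3.0, offset=0):
    """Recursively split wherever the statistic exceeds a (hypothetical) threshold."""
    if len(x) < 8:
        return []
    k, s = best_split(x)
    if k is None or s < threshold:
        return []
    return (binary_segmentation(x[:k], threshold, offset)
            + [offset + k]
            + binary_segmentation(x[k:], threshold, offset + k))

rng = np.random.default_rng(0)
# Piecewise-constant signal (e.g., copy-number levels) with two change points.
x = np.concatenate([rng.normal(0.0, 0.3, 40),
                    rng.normal(2.0, 0.3, 30),
                    rng.normal(0.5, 0.3, 30)])
cps = sorted(binary_segmentation(x))
```

    The sequential and Cross-Entropy machinery of the paper replaces this greedy recursion with a stochastic search over the number and locations of change points.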

  15. Two-stage neural algorithm for defect detection and characterization using active thermography

    NASA Astrophysics Data System (ADS)

    Dudzik, Sebastian

    2015-07-01

    In the paper, a two-stage neural algorithm for defect detection and characterization is presented. In order to estimate the defect depth, two neural networks trained on data obtained using active thermography were employed. In the first stage of the algorithm, the defect is detected by a classification neural network; the defect depth is then estimated using a regression neural network. In this work, the results of experimental investigations and simulations are shown. Further, a sensitivity analysis of the presented algorithm was conducted, and the impacts of emissivity error and ambient temperature error on the depth estimation error were studied. The results were obtained using a test sample made of a material with low thermal diffusivity.

  16. A New Algorithm for Detection of Cloudiness and Moon Affect Area

    NASA Astrophysics Data System (ADS)

    Dindar, Murat; Helhel, Selcuk; Ünal Akdemir, Kemal

    2016-07-01

    Cloud detection is a crucial issue both for observatories already in operation and during the site selection phase. Sky Quality Meter (SQM) devices are commonly used to determine sky quality parameters such as cloudiness and light flux, but those parameters do not give exact information about cloudiness and the effect of the moon. In this study we developed a new algorithm for detecting cloudiness and the area affected by the moon. The algorithm is based on image processing methods, with different approaches applied to day-time and night-time images to calculate the sky coverage. The algorithm was implemented in Matlab using images taken by the all-sky camera located at the TÜBİTAK National Observatory, and the results are presented.

  17. Effective Echo Detection and Accurate Orbit Estimation Algorithms for Space Debris Radar

    NASA Astrophysics Data System (ADS)

    Isoda, Kentaro; Sakamoto, Takuya; Sato, Toru

    Orbit estimation of space debris, objects of no inherent value orbiting the earth, is a task that is important for avoiding collisions with spacecraft. The Kamisaibara Spaceguard Center radar system was built in 2004 as the first radar facility in Japan devoted to the observation of space debris. In order to detect smaller debris, coherent integration is effective in improving the SNR (signal-to-noise ratio); however, it is difficult to apply coherent integration to real data because the motions of the targets are unknown. An effective algorithm is proposed for echo detection and orbit estimation of the faint echoes from space debris, which exploits the characteristics of the evaluation function. Experiments show that the proposed algorithm improves the SNR by 8.32 dB and estimates orbital parameters accurately enough to allow re-tracking with a single radar.

  18. A novel time-domain signal processing algorithm for real time ventricular fibrillation detection

    NASA Astrophysics Data System (ADS)

    Monte, G. E.; Scarone, N. C.; Liscovsky, P. O.; Rotter S/N, P.

    2011-12-01

    This paper presents an application of a novel algorithm for real-time detection of ECG pathologies, especially ventricular fibrillation. It is based on a segmentation and labeling process applied to an oversampled signal. By analyzing the resulting sequence of segments, global signal behaviours are obtained, much as a human observer would obtain them. The entire process can be seen as morphological filtering after a smart data sampling. The algorithm requires no digital pre-processing of the ECG signal, and its computational cost is low, so it can be embedded into sensors for wearable and permanent applications. The proposed algorithm could also provide the input signal description to expert systems or artificial intelligence software in order to detect other pathologies.

  19. A Node Influence Based Label Propagation Algorithm for Community Detection in Networks

    PubMed Central

    Meng, Fanrong; Zhou, Yong; Shi, Mengyu; Sun, Guibin

    2014-01-01

    Label propagation algorithm (LPA) is an extremely fast community detection method and is widely used in large-scale networks. In spite of the advantages of LPA, the issue of its poor stability has not yet been well addressed. We propose a novel node-influence-based label propagation algorithm for community detection (NIBLPA), which improves the performance of LPA by improving the node order of label updating and the mechanism of label choosing when more than one label is held by the maximum number of neighbouring nodes. NIBLPA obtains more stable results than LPA since it avoids the complete randomness of LPA. The experimental results on both synthetic and real networks demonstrate that NIBLPA maintains the efficiency of the traditional LPA algorithm and, at the same time, performs better than some representative methods. PMID:24999491
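Standard LPA, whose random tie-breaking NIBLPA is designed to tame, can be sketched in a few lines. This is the baseline algorithm, not NIBLPA; the keep-current-label-on-ties rule used here is a common convergence-friendly variant:

```python
import random
from collections import Counter

def label_propagation(adj, seed=0, max_iter=100):
    """Plain LPA sketch: each node repeatedly adopts the most frequent
    label among its neighbours; ties keep the current label if possible,
    otherwise break at random (the instability NIBLPA addresses).

    adj: dict mapping node -> list of neighbour nodes.
    Returns a dict mapping node -> community label.
    """
    rng = random.Random(seed)
    labels = {v: v for v in adj}          # every node starts in its own community
    nodes = list(adj)
    for _ in range(max_iter):
        rng.shuffle(nodes)                # random update order, as in standard LPA
        changed = False
        for v in nodes:
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            candidates = [l for l, c in counts.items() if c == best]
            if labels[v] in candidates:
                continue                  # current label already maximal: keep it
            labels[v] = rng.choice(candidates)
            changed = True
        if not changed:
            break
    return labels
```

On a star graph, for example, all nodes collapse into the hub's community within a couple of sweeps.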

  20. Detection of Aberrant Responding on a Personality Scale in a Military Sample: An Application of Evaluating Person Fit with Two-Level Logistic Regression

    ERIC Educational Resources Information Center

    Woods, Carol M.; Oltmanns, Thomas F.; Turkheimer, Eric

    2008-01-01

    Person-fit assessment is used to identify persons who respond aberrantly to a test or questionnaire. In this study, S. P. Reise's (2000) method for evaluating person fit using 2-level logistic regression was applied to 13 personality scales of the Schedule for Nonadaptive and Adaptive Personality (SNAP; L. Clark, 1996) that had been administered…

  1. 3D reconstruction for sinusoidal motion based on different feature detection algorithms

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Zhang, Jin; Deng, Huaxia; Yu, Liandong

    2015-02-01

    The dynamic testing of structures and components is an important area of research. Methods that use contact sensors to measure vibration parameters have been studied extensively for years. With the rapid development of industrial high-speed cameras and computer hardware, dynamic testing using stereo vision has become a focus of research, owing to its advantages of being non-contact, full-field, high-resolution and high-accuracy. However, little work has been published on the three-dimensional (3D) reconstruction of feature points under dynamic conditions, and the accuracy of the reconstructed motion of the target object is essential to any subsequent analysis. In this paper, an object undergoing sinusoidal motion is tracked by stereo vision and the accuracy achieved with different feature detection algorithms is investigated. Three different marks, a dot, a square and a circle, are attached to the object, which is driven sinusoidally by a vibration table. The speeded-up robust features (SURF) algorithm is used to detect the dot, Harris corner detection to locate the square's corners, and the Hough transform to locate the circle's centre. After the pixel coordinates of each feature point are obtained, the stereo calibration parameters are used to perform 3D reconstruction by the triangulation principle. The trajectories along the direction of vibration are then obtained according to the vibration frequency and the camera acquisition rate. Finally, the reconstruction accuracy of the different feature detection algorithms is compared.

  2. Development of Outlier detection Algorithm Applicable to a Korean Surge-Gauge

    NASA Astrophysics Data System (ADS)

    Lee, Jun-Whan; Park, Sun-Cheon; Lee, Won-Jin; Lee, Duk Kee

    2016-04-01

    The Korea Meteorological Administration (KMA) operates a surge gauge (aerial ultrasonic type) at Ulleung-do to monitor tsunamis, and the National Institute of Meteorological Sciences (NIMS), KMA, is developing a tsunami detection and observation system using this gauge. Outliers, caused by transmission problems or by extreme events that change the water level temporarily, are among the most common obstacles to tsunami detection. Unlike a single spike, multipoint outliers are difficult to detect reliably. Most previous studies used statistics or signal processing methods such as wavelet transforms and filters to detect multipoint outliers, and relied on a continuous dataset. However, as the focus moves to near-real-time operation on a dataset that contains gaps, these methods are no longer tenable. In this study, we developed an outlier detection algorithm applicable to the Ulleung-do surge gauge, where both multipoint outliers and missing data exist. Although only 9-point data and two arithmetic operations (addition and subtraction) are used, thanks to the newly developed keeping method the algorithm is not only simple and fast but also effective on a non-continuous dataset. We calibrated 17 thresholds and conducted performance tests using three months of data from the Ulleung-do surge gauge. The results show that the newly developed despiking algorithm performs reliably in alleviating the outlier detection problem.
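The abstract does not specify the 9-point rule itself, so the following is only a hypothetical sketch of the general idea: flag a sample that departs too far from its windowed neighbours while tolerating gaps (`None` values). The window size, threshold, and use of a neighbourhood mean are illustrative choices, not the KMA/NIMS design:

```python
def despike(series, half=4, threshold=2.0):
    """Flag multipoint outliers in a gappy series (None = missing value).

    Hypothetical sketch: each sample is compared with the mean of the
    valid neighbours in its 9-point window (half=4 on each side); samples
    departing by more than `threshold` are flagged. Gaps are skipped.
    """
    flags = [False] * len(series)
    for i, x in enumerate(series):
        if x is None:
            continue                      # missing sample: nothing to flag
        neigh = [series[j]
                 for j in range(max(0, i - half), min(len(series), i + half + 1))
                 if j != i and series[j] is not None]
        if not neigh:
            continue                      # isolated sample: no reference level
        if abs(x - sum(neigh) / len(neigh)) > threshold:
            flags[i] = True
    return flags
```

Because each decision uses only a short window, the scheme keeps working across data gaps, which is the property the operational algorithm needs for near-real-time use.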

  3. Detectability Thresholds and Optimal Algorithms for Community Structure in Dynamic Networks

    NASA Astrophysics Data System (ADS)

    Ghasemian, Amir; Zhang, Pan; Clauset, Aaron; Moore, Cristopher; Peel, Leto

    2016-07-01

    The detection of communities within a dynamic network is a common means for obtaining a coarse-grained view of a complex system and for investigating its underlying processes. While a number of methods have been proposed in the machine learning and physics literature, we lack a theoretical analysis of their strengths and weaknesses, or of the ultimate limits on when communities can be detected. Here, we study the fundamental limits of detecting community structure in dynamic networks. Specifically, we analyze the limits of detectability for a dynamic stochastic block model where nodes change their community memberships over time, but where edges are generated independently at each time step. Using the cavity method, we derive a precise detectability threshold as a function of the rate of change and the strength of the communities. Below this sharp threshold, we claim that no efficient algorithm can identify the communities better than chance. We then give two algorithms that are optimal in the sense that they succeed all the way down to this threshold. The first uses belief propagation, which gives asymptotically optimal accuracy, and the second is a fast spectral clustering algorithm, based on linearizing the belief propagation equations. These results extend our understanding of the limits of community detection in an important direction, and introduce new mathematical tools for similar extensions to networks with other types of auxiliary information.

  4. A simple multispectral imaging algorithm for detection of defects on red delicious apples

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Purpose: A multispectral algorithm for detection and differentiation of defect and normal Red Delicious apples was developed from analysis of a series of hyperspectral line-scan images. Methods: A fast line-scan hyperspectral imaging system mounted on a conventional apple sorting machine was used t...

  5. New adaptive branch and bound algorithm for hyperspectral waveband selection for chicken skin tumor detection

    NASA Astrophysics Data System (ADS)

    Nakariyakul, Songyot; Casasent, David

    2006-10-01

    Detection of skin tumors on chicken carcasses is considered. A chicken skin tumor consists of an ulcerous lesion region surrounded by a region of thickened skin. We use a new adaptive branch-and-bound (ABB) feature selection algorithm to choose only a few useful wavebands from hyperspectral data for use in a real-time multispectral camera. The ABB algorithm selects an optimal feature subset and is shown to be much faster than other versions of the branch-and-bound algorithm. We found that the spectral responses of the lesion and thickened-skin regions of tumors are considerably different; we therefore train our feature selection algorithm to detect the lesion regions and the thickened-skin regions separately. We then fuse the two hyperspectral detection results for the lesion and thickened-skin regions to reduce false alarms. Initial results on six hyperspectral cubes show that our method gives an excellent tumor detection rate and a low false alarm rate.

  6. Credit card fraud detection: An application of the gene expression messy genetic algorithm

    SciTech Connect

    Kargupta, H.; Gattiker, J.R.; Buescher, K.

    1996-05-01

    This paper describes an application of the recently introduced gene expression messy genetic algorithm (GEMGA) (Kargupta, 1996) for detecting fraudulent transactions of credit cards. It also explains the fundamental concepts underlying the GEMGA in the light of the SEARCH (Search Envisioned As Relation and Class Hierarchizing) (Kargupta, 1995) framework.

  7. Detection of the arcuate fasciculus in congenital amusia depends on the tractography algorithm

    PubMed Central

    Chen, Joyce L.; Kumar, Sukhbinder; Williamson, Victoria J.; Scholz, Jan; Griffiths, Timothy D.; Stewart, Lauren

    2015-01-01

    The advent of diffusion magnetic resonance imaging (MRI) allows researchers to virtually dissect white matter fiber pathways in the brain in vivo. This, for example, allows us to characterize and quantify how fiber tracts differ across populations in health and disease, and change as a function of training. Based on diffusion MRI, prior literature reports the absence of the arcuate fasciculus (AF) in some control individuals as well as in those with congenital amusia. The complete absence of such a major anatomical tract is surprising given the subtle impairments that characterize amusia. Thus, we hypothesize that failure to detect the AF in this population may relate to the tracking algorithm used, and is not necessarily reflective of their phenotype. Diffusion data in control and amusic individuals were analyzed using three different tracking algorithms: deterministic and probabilistic, the latter modeling either two fiber populations or one. Across the three algorithms, we replicate prior findings of a left-greater-than-right AF volume, but do not find group differences or an interaction. We detect the AF in all individuals using the probabilistic 2-fiber model; however, tracking failed in some control and amusic individuals when deterministic tractography was applied. These findings show that the ability to detect the AF in our sample is dependent on the type of tractography algorithm. This raises the question of whether failure to detect the AF in prior studies may be unrelated to the underlying anatomy or phenotype. PMID:25653637

  8. Sideband Algorithm for Automatic Wind Turbine Gearbox Fault Detection and Diagnosis: Preprint

    SciTech Connect

    Zappala, D.; Tavner, P.; Crabtree, C.; Sheng, S.

    2013-01-01

    Improving the availability of wind turbines (WT) is critical to minimizing the cost of wind energy, especially for offshore installations. As gearbox downtime has a significant impact on WT availability, the development of reliable and cost-effective gearbox condition monitoring systems (CMS) is of great concern to the wind industry. Timely detection and diagnosis of developing gear defects within a gearbox is an essential part of minimizing unplanned downtime of wind turbines. Monitoring signals from WT gearboxes are highly non-stationary, as turbine load and speed vary continuously with time. Time-consuming and costly manual handling of large amounts of monitoring data represents one of the main limitations of most current CMSs, so automated algorithms are required. This paper presents a fault detection algorithm for incorporation into a commercial CMS for automatic gear fault detection and diagnosis. The algorithm allowed the assessment of gear fault severity by tracking progressive gear tooth damage during variable-speed and variable-load operating conditions of the test rig. Results show that the proposed technique is efficient and reliable for detecting gear damage. Once implemented into WT CMSs, this algorithm can automate data interpretation, reducing the quantity of information that WT operators must handle.

  9. Robust Mokken Scale Analysis by Means of the Forward Search Algorithm for Outlier Detection

    ERIC Educational Resources Information Center

    Zijlstra, Wobbe P.; van der Ark, L. Andries; Sijtsma, Klaas

    2011-01-01

    Exploratory Mokken scale analysis (MSA) is a popular method for identifying scales from larger sets of items. As with any statistical method, in MSA the presence of outliers in the data may result in biased results and wrong conclusions. The forward search algorithm is a robust diagnostic method for outlier detection, which we adapt here to…

  10. Transverse chromatic aberration after corneal refractive surgery

    NASA Astrophysics Data System (ADS)

    Anera, R. G.; Jiménez, J. R.; Jiménez del Barco, L.; Hita, E.

    2005-05-01

    An expression for the transverse or lateral chromatic aberration (TCA) after refractive surgery has been deduced theoretically from a schematic-eye model. The aim was to investigate analytically how chromatic aberration varies after the emmetropization process. These changes in the TCA have been characterized through changes in corneal asphericity. The results indicate that TCA after refractive surgery diminishes as the degree of myopia increases, a trend contrary to that occurring with monochromatic aberrations such as spherical aberration or coma. These results can explain the fact that the real deterioration of visual function under photopic conditions detected in patients operated on for myopia is less than expected when only monochromatic aberrations are taken into account.

  11. Fast multi-scale edge detection algorithm based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Zang, Jie; Song, Yanjun; Li, Shaojuan; Luo, Guoyun

    2011-11-01

    Traditional edge detection algorithms amplify noise to some degree, introducing significant error, so their edge detection ability is limited. Wavelet analysis can reduce the time resolution when analysing the low-frequency content of an image and, conversely, operate at high time resolution with reduced frequency resolution for the high-frequency content, attending to the transient characteristics of the signal. Because the wavelet transform adapts to the signal in this way, it can extract useful information at image edges. The wavelet transform operates at various scales, and the transform at each scale provides some edge information, hence the term multi-scale edge detection. In multi-scale edge detection, the original signal is first smoothed at different scales, and the mutations of the original signal are then detected through the first or second derivative of the smoothed signal; the mutations are the edges. Edge detection is thus equivalent to signal detection in different frequency bands after wavelet decomposition. This article uses this algorithm, which takes into account both the details and the profile of an image, to detect signal mutations at different scales, providing the edge information needed for image analysis, target recognition and machine vision, and achieves good results.
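The smooth-then-differentiate scheme described above can be sketched without any wavelet library by using a box filter as the "polishing" kernel; a real implementation would use a proper wavelet (e.g. the derivative of a Gaussian), so this is illustrative only:

```python
import numpy as np

def smooth(x, scale):
    """'Polish' the signal at a given scale: box kernel of width 2*scale+1."""
    k = 2 * scale + 1
    return np.convolve(x, np.ones(k) / k, mode="same")

def multiscale_edges(x, scales=(1, 2, 4), frac=0.5):
    """Sketch of multi-scale edge detection on a 1-D signal.

    At each scale, smooth the signal and mark local maxima of the
    magnitude of the first derivative that exceed `frac` of its peak.
    Returns {scale: list of edge indices}. Note that the zero-padded
    'same' convolution can produce spurious maxima near the boundaries.
    """
    edges = {}
    for s in scales:
        d = np.abs(np.diff(smooth(x, s)))   # |first derivative| of smoothed signal
        thr = frac * d.max()
        edges[s] = [i for i in range(1, len(d) - 1)
                    if d[i] >= thr and d[i] >= d[i - 1] and d[i] >= d[i + 1]]
    return edges
```

A step edge shows up at every scale, while fine texture survives only at the small scales, which is the behaviour the multi-scale scheme exploits.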

  12. Novel algorithm for coexpression detection in time-varying microarray data sets.

    PubMed

    Yin, Zong-Xian; Chiang, Jung-Hsien

    2008-01-01

    When analyzing the results of microarray experiments, biologists generally use unsupervised categorization tools. However, such tools regard each time point as an independent dimension and utilize the Euclidean distance to compute the similarities between expressions. Furthermore, some of these methods require the number of clusters to be determined in advance, which is clearly impossible in the case of a new dataset. Therefore, this study proposes a novel scheme, designated as the Variation-based Coexpression Detection (VCD) algorithm, to analyze the trends of expressions based on their variation over time. The proposed algorithm has two advantages. First, it is unnecessary to determine the number of clusters in advance since the algorithm automatically detects those genes whose profiles are grouped together and creates patterns for these groups. Second, the algorithm features a new measurement criterion for calculating the degree of change of the expressions between adjacent time points and evaluating their trend similarities. Three real-world microarray datasets are employed to evaluate the performance of the proposed algorithm. PMID:18245881

  13. Detecting compact galactic binaries using a hybrid swarm-based algorithm

    NASA Astrophysics Data System (ADS)

    Bouffanais, Yann; Porter, Edward K.

    2016-03-01

    Compact binaries in our galaxy are expected to be one of the main sources of gravitational waves for the future eLISA mission. During the mission lifetime, many thousands of galactic binaries should be individually resolved. However, the identification of the sources and the extraction of the signal parameters in a noisy environment are real challenges for data analysis. So far, stochastic searches have proven the most successful for this problem. In this work, we present the first application of a swarm-based algorithm combining Particle Swarm Optimization and Differential Evolution. These algorithms have been shown to converge faster to global solutions on complicated likelihood surfaces than other stochastic methods. We first demonstrate the effectiveness of the algorithm for the case of a single binary in a 1-mHz search bandwidth. This problem gave the algorithm plenty of opportunity to fail, as it can be easier to find a strong noise peak than the signal itself. After a successful detection of a fictitious low-frequency source, as well as the verification binary RX J0806.3+1527, we applied the algorithm to the detection of multiple binaries, over different search bandwidths, in cases of low and mild source confusion. In all cases, we show that we can successfully identify the sources and recover the true parameters within a 99% credible interval.
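As background, the PSO half of such a hybrid can be sketched in a few lines; the Differential Evolution step and the gravitational-wave likelihood are omitted, and a toy quadratic surface stands in for the real search problem:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Plain Particle Swarm Optimization sketch (minimization).

    f: objective mapping an (n_dim,) array to a scalar.
    bounds: list of (lo, hi) per dimension.
    Returns (best position, best value).
    """
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest = x.copy()                                  # per-particle best positions
    pbest_val = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(pbest_val.min())
```

In the hybrid of the paper, a Differential Evolution-style crossover is interleaved with these velocity updates to help the swarm escape strong secondary maxima of the likelihood.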

  14. An improved algorithm for the automatic detection and characterization of slow eye movements.

    PubMed

    Cona, Filippo; Pizza, Fabio; Provini, Federica; Magosso, Elisa

    2014-07-01

    Slow eye movements (SEMs) are typical of drowsy wakefulness and light sleep, but they still lack a systematic physical characterization. We present a new algorithm, which substantially improves our previous one, for the automatic detection of SEMs from the electro-oculogram (EOG) and the extraction of SEM physical parameters. The algorithm uses a discrete wavelet decomposition of the EOG to implement a Bayes classifier that identifies intervals of slow ocular activity; each slow-activity interval is then segmented into single SEMs via a template-matching method. Amplitude, duration and velocity parameters are automatically extracted from each detected SEM. The algorithm was trained and validated on sleep onsets and offsets of 20 EOG recordings visually inspected by an expert. Performance was assessed in terms of correctly identified slow-activity epochs (sensitivity: 85.12%; specificity: 82.81%), correctly segmented single SEMs (89.08%), and time misalignment (0.49 s) between the automatically and visually identified SEMs. The algorithm proved reliable even in whole sleep (sensitivity: 83.40%; specificity: 72.08% in identifying slow-activity epochs; correctly segmented SEMs: 93.24%; time misalignment: 0.49 s). Being able to objectively characterize single SEMs, the algorithm may be a valuable tool for improving knowledge of normal and pathological sleep. PMID:24768562

  15. Unsupervised, low latency anomaly detection of algorithmically generated domain names by generative probabilistic modeling.

    PubMed

    Raghuram, Jayaram; Miller, David J; Kesidis, George

    2014-07-01

    We propose a method for detecting anomalous domain names, with focus on algorithmically generated domain names which are frequently associated with malicious activities such as fast flux service networks, particularly for bot networks (or botnets), malware, and phishing. Our method is based on learning a (null hypothesis) probability model based on a large set of domain names that have been white listed by some reliable authority. Since these names are mostly assigned by humans, they are pronounceable, and tend to have a distribution of characters, words, word lengths, and number of words that are typical of some language (mostly English), and often consist of words drawn from a known lexicon. On the other hand, in the present day scenario, algorithmically generated domain names typically have distributions that are quite different from that of human-created domain names. We propose a fully generative model for the probability distribution of benign (white listed) domain names which can be used in an anomaly detection setting for identifying putative algorithmically generated domain names. Unlike other methods, our approach can make detections without considering any additional (latency producing) information sources, often used to detect fast flux activity. Experiments on a publicly available, large data set of domain names associated with fast flux service networks show encouraging results, relative to several baseline methods, with higher detection rates and low false positive rates. PMID:25685511
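A minimal stand-in for such a null-hypothesis generative model is a character bigram model with add-one smoothing trained on whitelisted names; the paper's model is far richer (words, word lengths, number of words), so this is only an illustrative sketch:

```python
import math
from collections import defaultdict

class CharBigramModel:
    """Character bigram model of benign domain names (null hypothesis).

    Names whose average per-character log-likelihood falls far below that
    of typical whitelisted names can be flagged as putative algorithmically
    generated domains.
    """
    def __init__(self, names):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.alphabet = set("^$")         # ^ and $ mark start/end of a name
        for name in names:
            padded = "^" + name + "$"
            self.alphabet.update(padded)
            for a, b in zip(padded, padded[1:]):
                self.counts[a][b] += 1

    def log_likelihood(self, name):
        """Average log-probability per bigram, with add-one smoothing."""
        padded = "^" + name + "$"
        vocab = len(self.alphabet)
        total = 0.0
        for a, b in zip(padded, padded[1:]):
            row = self.counts[a]
            total += math.log((row[b] + 1) / (sum(row.values()) + vocab))
        return total / (len(padded) - 1)
```

Human-created names score well because their bigrams recur in the whitelist; random-looking generated names accumulate many unseen bigrams and score poorly.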

  16. Unsupervised, low latency anomaly detection of algorithmically generated domain names by generative probabilistic modeling

    PubMed Central

    Raghuram, Jayaram; Miller, David J.; Kesidis, George

    2014-01-01

    We propose a method for detecting anomalous domain names, with focus on algorithmically generated domain names which are frequently associated with malicious activities such as fast flux service networks, particularly for bot networks (or botnets), malware, and phishing. Our method is based on learning a (null hypothesis) probability model based on a large set of domain names that have been white listed by some reliable authority. Since these names are mostly assigned by humans, they are pronounceable, and tend to have a distribution of characters, words, word lengths, and number of words that are typical of some language (mostly English), and often consist of words drawn from a known lexicon. On the other hand, in the present day scenario, algorithmically generated domain names typically have distributions that are quite different from that of human-created domain names. We propose a fully generative model for the probability distribution of benign (white listed) domain names which can be used in an anomaly detection setting for identifying putative algorithmically generated domain names. Unlike other methods, our approach can make detections without considering any additional (latency producing) information sources, often used to detect fast flux activity. Experiments on a publicly available, large data set of domain names associated with fast flux service networks show encouraging results, relative to several baseline methods, with higher detection rates and low false positive rates. PMID:25685511

  17. A hyperspectral imagery anomaly detection algorithm based on local three-dimensional orthogonal subspace projection

    NASA Astrophysics Data System (ADS)

    Zhang, Xing; Wen, Gongjian

    2015-10-01

    Anomaly detection (AD) is increasingly important in hyperspectral imagery analysis, with many practical applications. The local orthogonal subspace projection (LOSP) detector is a popular anomaly detector that exploits local endmembers/eigenvectors around the pixel under test (PUT) to construct a background subspace. However, this subspace takes advantage only of the spectral information, neglecting the spatial correlation of the background clutter, which makes the anomaly detection result sensitive to the accuracy of the estimated subspace. In this paper, a local three-dimensional orthogonal subspace projection (3D-LOSP) algorithm is proposed. First, using spectral and spatial information jointly, three directional background subspaces are created, along the image height direction, the image width direction and the spectral direction, respectively. Then the three corresponding orthogonal subspaces are calculated. After that, each vector of the local cube along the three directions is projected onto the corresponding orthogonal subspace. Finally, a composite score is formed from the three directional operators. In 3D-LOSP, anomalies are redefined as targets that are not only spectrally different from the background but also spatially distinct. Thanks to the addition of spatial information, the robustness of the anomaly detection result is greatly improved by the proposed 3D-LOSP algorithm. Notably, the proposed algorithm is an extension of LOSP, and the same idea can inspire many other spectral-based anomaly detection methods. Experiments with real hyperspectral images have demonstrated the stability of the detection result.
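The core operator shared by LOSP and 3D-LOSP is orthogonal subspace projection: a background basis B defines the projector P = I - B(B^T B)^{-1} B^T, and the norm of the projected test vector serves as the anomaly score. A one-direction sketch follows (the full 3D-LOSP fuses three such directional scores, which is not reproduced here):

```python
import numpy as np

def osp_residual(B, x):
    """Project x onto the orthogonal complement of span(B) and return the
    residual norm: a large residual means x is poorly explained by the
    background subspace, i.e. anomalous.

    B: (d, k) matrix whose columns span the background subspace.
    x: (d,) test vector.
    """
    # I - B (B^T B)^{-1} B^T, written via the pseudo-inverse for stability
    P_perp = np.eye(B.shape[0]) - B @ np.linalg.pinv(B)
    return float(np.linalg.norm(P_perp @ x))
```

Vectors lying in the background subspace yield a residual of zero, while components orthogonal to it pass through unattenuated.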

  18. A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors.

    PubMed

    Acevedo-Avila, Ricardo; Gonzalez-Mendoza, Miguel; Garcia-Garcia, Andres

    2016-01-01

    Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general purpose computers; while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data structure tree used to label blobs depending on their shape and node information. An example application showing the results of a blob detection co-processor has been built on a low-powered field programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general purpose application. As such, several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements; and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms. PMID:27240382
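For contrast with the paper's linked-list structure, the conventional way to label blobs in a single raster scan is union-find label merging; this sketch counts 4-connected blobs in a binary image and does not reproduce the paper's memory-optimized design:

```python
def label_blobs(image):
    """Count 4-connected blobs in a binary image (list of rows of 0/1).

    One raster scan assigns provisional labels from the up/left neighbours;
    a union-find structure merges labels when a pixel joins two blobs.
    """
    parent = {}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra
    labels = {}
    next_label = 0
    for r, row in enumerate(image):
        for c, v in enumerate(row):
            if not v:
                continue
            up = labels.get((r - 1, c))
            left = labels.get((r, c - 1))
            if up is None and left is None:
                parent[next_label] = next_label   # start a new blob
                labels[(r, c)] = next_label
                next_label += 1
            else:
                labels[(r, c)] = up if up is not None else left
                if up is not None and left is not None:
                    union(up, left)               # pixel bridges two blobs
    return len({find(l) for l in labels.values()})
```

The memory cost here grows with the number of foreground pixels; the appeal of the paper's linked-list approach on embedded hardware is precisely that it avoids holding such per-pixel label maps.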

  19. A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors

    PubMed Central

    Acevedo-Avila, Ricardo; Gonzalez-Mendoza, Miguel; Garcia-Garcia, Andres

    2016-01-01

    Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general purpose computers; while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data structure tree used to label blobs depending on their shape and node information. An example application showing the results of a blob detection co-processor has been built on a low-powered field programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general purpose application. As such, several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements; and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms. PMID:27240382

  20. Currently Realizable Quantum Error Detection/Correction Algorithms for Superconducting Qubits

    NASA Astrophysics Data System (ADS)

    Keane, Kyle; Korotkov, Alexander N.

    2011-03-01

    We investigate the efficiency of simple quantum error correction/detection codes for zero-temperature energy relaxation. We show that standard repetitive codes are not effective for error correction of energy relaxation, but can be efficiently used for quantum error detection. Moreover, only two qubits are necessary for this purpose, in contrast to the minimum of three qubits needed for conventional error correction. We propose and analyze specific two-qubit algorithms for superconducting phase qubits, which are currently realizable and can demonstrate quantum error detection; each algorithm can also be used for quantum error correction of a specific known error. In particular, we analyze needed requirements on experimental parameters and calculate the expected fidelities for these experimental protocols. This work was supported by NSA and IARPA under ARO grant No. W911NF-10-1-0334.

  1. A blind detection scheme based on modified wavelet denoising algorithm for wireless optical communications

    NASA Astrophysics Data System (ADS)

    Li, Ruijie; Dang, Anhong

    2015-10-01

    This paper investigates a detection scheme without channel state information for wireless optical communication (WOC) systems in turbulence-induced fading channels. The proposed scheme can effectively diminish the additive noise caused by background radiation and the photodetector, as well as the intensity scintillation caused by turbulence. The additive noise can be mitigated significantly using the modified wavelet threshold denoising algorithm, and the intensity scintillation can then be attenuated by exploiting the temporal correlation of the WOC channel. Moreover, to improve the performance beyond that of the maximum likelihood decision, the maximum a posteriori probability (MAP) criterion is considered. Compared with the conventional blind detection algorithm, simulation results show that the proposed detection scheme can improve the signal-to-noise ratio (SNR) performance by about 4.38 dB when the bit error rate and scintillation index (SI) are 1×10-6 and 0.02, respectively.
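The paper's *modified* wavelet threshold rule is not reproduced in the abstract; as a hedged sketch of the standard building block it modifies, here is one-level Haar soft-threshold denoising with the classic universal threshold (even-length input assumed):

```python
import math

def haar_denoise(signal, sigma):
    """One-level Haar wavelet soft-threshold denoising sketch.

    Uses the universal threshold t = sigma * sqrt(2 ln N) and soft
    shrinkage of the detail coefficients; with sigma = 0 the transform
    reconstructs the input exactly.
    """
    n = len(signal) // 2 * 2  # drop a trailing odd sample if present
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2) for i in range(0, n, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2) for i in range(0, n, 2)]
    t = sigma * math.sqrt(2 * math.log(n))
    # soft shrinkage: pull each detail coefficient toward zero by t
    detail = [math.copysign(max(abs(d) - t, 0.0), d) for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / math.sqrt(2), (a - d) / math.sqrt(2)]
    return out
```

A practical scheme would iterate this over several decomposition levels; the single level shown keeps the shrinkage step visible.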

  2. CORDIC algorithm based digital detection technique applied in resonator fiber optic gyroscope

    NASA Astrophysics Data System (ADS)

    Yang, Zhihuai; Jin, Xiaojun; Ma, Huilian; Jin, Zhonghe

    2009-06-01

    A digital detection technique based on the coordinate rotation digital computer (CORDIC) algorithm is proposed for a resonator fiber optic gyroscope (R-FOG). It enables the generation of the modulation signal, synchronous demodulation and signal processing in the R-FOG to be realized in a single field programmable gate array (FPGA). The frequency synthesis and synchronous detection techniques based on the CORDIC algorithm are first analyzed and designed. The experimental results indicate that the precision of the detection circuit satisfies the requirements for closed-loop feedback in the R-FOG system. The frequency of the laser is stably locked to the resonance frequency of the fiber ring resonator, and the open-loop gyro output signal is observed successfully. The dynamic range and the bias drift of the R-FOG are ±1.91 rad/s and 0.005 rad/s over 10 s, respectively.
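CORDIC itself is a standard shift-and-add rotation recurrence; the floating-point sketch below computes sin/cos in rotation mode (an FPGA version would use fixed-point shifts, and this is not the paper's FPGA design):

```python
import math

def cordic_sin_cos(theta, n=32):
    """CORDIC in rotation mode: returns (cos(theta), sin(theta)).

    Valid for |theta| within the CORDIC convergence range (~1.74 rad).
    Each iteration rotates by +/- atan(2^-i), chosen to drive the
    residual angle z toward zero.
    """
    # precompute the aggregate scaling (CORDIC gain correction)
    k = 1.0
    for i in range(n):
        k /= math.sqrt(1 + 2.0 ** (-2 * i))
    x, y, z = k, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return x, y
```

On hardware, the multiplications by 2^-i become bit shifts and the atan table is precomputed, which is what makes a single FPGA able to host modulation, demodulation and processing.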

  3. Algorithm-Based Error Detection Of A Cholesky Factor Updating Systolic Array Using Cordic Processors

    NASA Astrophysics Data System (ADS)

    Chou, S. I.; Rader, Charles M.

    1989-12-01

    Lincoln Laboratory has developed an architecture for a folded linear systolic array using fixed-point CORDIC processors, applicable to adaptive nulling for a radar sidelobe canceler. The algorithm implemented uses triangularization by Givens rotations to solve a least-squares problem in the voltage domain. In this paper, the implementation of an inexpensive algorithm-based error-detection scheme is proposed for this systolic array. Column average checksum encoding is intended to detect most errors caused by the failure of any single arithmetic unit. It retains or almost retains the 100% processor utilization of Lincoln Laboratory's novel design. For the case of 64 degrees of freedom, the increase in time complexity is only 3%. The increase in hardware is mainly two adders and two comparators per CORDIC processor. We believe that the small increase in cost will be amply offset by the improvement in system performance brought about by this error detection.
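The checksum idea can be shown independently of the systolic array. This is a minimal, hedged illustration of algorithm-based error detection: a column-sum checksum row is appended before a matrix-vector product, and a mismatch afterwards flags a failed arithmetic unit (the fault model and names are illustrative, not the Givens-rotation array above):

```python
def matvec_with_check(A, x, fault=None):
    """Compute y = A x with a column-checksum row for error detection.

    The appended checksum element of y must equal the sum of the other
    elements; fault=(i, delta) optionally corrupts output i to model a
    failed processor. Returns (y, error_detected).
    """
    Ac = A + [[sum(col) for col in zip(*A)]]  # encode: append column sums
    y = [sum(a * b for a, b in zip(row, x)) for row in Ac]
    if fault is not None:
        i, delta = fault
        y[i] += delta  # simulated single-unit failure
    detected = abs(y[-1] - sum(y[:-1])) > 1e-9
    return y[:-1], detected
```

The check costs one extra row of computation plus a few adds and a compare, mirroring the paper's claim that the hardware overhead is mainly adders and comparators.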

  4. Application of Artificial Bee Colony algorithm in TEC seismo-ionospheric anomalies detection

    NASA Astrophysics Data System (ADS)

    Akhoondzadeh, M.

    2015-09-01

    In this study, the efficiency of the Artificial Bee Colony (ABC) algorithm is investigated for detecting TEC (Total Electron Content) seismo-ionospheric anomalies around the time of several strong earthquakes, including Chile (27 February 2010; 01 April 2014), Varzeghan (11 August 2012), Saravan (16 April 2013) and Papua New Guinea (29 March 2015). In comparison with other anomaly detection algorithms, ABC has a number of advantages: (1) detection of discordant patterns in large nonlinear data sets in a short time, (2) simplicity, (3) fewer control parameters and (4) efficiency in solving multimodal and multidimensional optimization problems. The results of this study also support the TEC time series as a robust earthquake precursor.

  5. Early Seizure Detection Algorithm Based on Intracranial EEG and Random Forest Classification.

    PubMed

    Donos, Cristian; Dümpelmann, Matthias; Schulze-Bonhage, Andreas

    2015-08-01

    The goal of this study is to provide a seizure detection algorithm that is relatively simple to implement on a microcontroller, so it can be used for an implantable closed-loop stimulation device. We propose a set of 11 simple time-domain and power-band features, computed from one intracranial EEG contact located in the seizure onset zone. The classification of the features is performed using a random forest classifier. Depending on the training datasets and the optimization preferences, the performance of the algorithm was: 93.84% mean sensitivity (100% median sensitivity), 3.03 s mean (1.75 s median) detection delay and 0.33/h mean (0.07/h median) false detection rate. PMID:26022388
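The paper's exact 11-feature set is not listed in the abstract; as a hedged sketch of the kind of microcontroller-friendly time-domain features such detectors use, here are four common ones computed over one EEG window:

```python
def window_features(x):
    """Simple time-domain features of an EEG window (illustrative,
    not the paper's exact feature set).

    Returns mean, mean power (energy), line length (sum of absolute
    successive differences, a classic seizure feature) and variance.
    """
    n = len(x)
    mean = sum(x) / n
    energy = sum(v * v for v in x) / n
    line_length = sum(abs(x[i] - x[i - 1]) for i in range(1, n))
    variance = sum((v - mean) ** 2 for v in x) / n
    return {"mean": mean, "energy": energy,
            "line_length": line_length, "variance": variance}
```

Each feature needs only additions, multiplications and one pass over the window, which is what makes them attractive for an implantable device; the random forest then operates on these scalars rather than the raw signal.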

  6. Context exploitation in intelligence, surveillance, and reconnaissance for detection and tracking algorithms

    NASA Astrophysics Data System (ADS)

    Tucker, Jonathan D.; Stanfill, S. Robert

    2015-05-01

    Intelligence, Surveillance, and Reconnaissance (ISR) missions involve complex analysis of sensor data that can benefit from the exploitation of geographically aligned context. In this paper we discuss our approach to utilizing geo-registered imagery and context for the purpose of aiding ISR detection and tracking applications. Specifically this includes rendering context masks on imagery, increasing the speed at which detection algorithms process data, providing a way to intelligently control detection density for given ground areas, identifying difficult traffic terrain, refining peak suppression for congested areas, reducing target center of mass location errors, and increasing track coverage and duration through track prediction error robustness.

  7. Automatic, Real-Time Algorithms for Anomaly Detection in High Resolution Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Srivastava, A. N.; Nemani, R. R.; Votava, P.

    2008-12-01

    Earth observing satellites are generating data at an unprecedented rate, surpassing almost all other data-intensive applications. However, most of the data that arrives from the satellites is not analyzed directly. Rather, multiple scientific teams analyze only a small fraction of the total data available in the data stream. Although there are many reasons for this situation, one paramount concern is developing algorithms and methods that can analyze the vast, high-dimensional, streaming satellite images. This paper describes a new set of methods that are among the fastest available algorithms for real-time anomaly detection. These algorithms were built to maximize accuracy and speed for a variety of applications in fields outside of the earth sciences. However, our studies indicate that with appropriate modifications, these algorithms can be extremely valuable for identifying anomalies rapidly using only modest computational power. We review two algorithms used as benchmarks in the field, Orca and One-Class Support Vector Machines, and discuss the anomalies that are discovered in MODIS data taken over the Central California region. We are especially interested in automatic identification of disturbances within ecosystems (e.g., wildfires, droughts, floods, insect/pest damage, wind damage, logging). We show the scalability of the algorithms and demonstrate that with appropriately adapted technology, the dream of real-time analysis can be made a reality.
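Orca's anomaly score is, at heart, the distance to a point's k-th nearest neighbour; the brute-force O(n²) sketch below shows that scoring rule only (Orca's pruning tricks, which give it near-linear running time in practice, are omitted):

```python
import math

def knn_anomaly_scores(points, k=2):
    """Distance-to-k-th-nearest-neighbour anomaly scores.

    points: list of coordinate tuples. Larger scores mark points far
    from all others, i.e. candidate anomalies.
    """
    scores = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append(dists[k - 1])  # k-th nearest neighbour distance
    return scores
```

For satellite imagery each "point" would be a pixel's feature vector (e.g. a vector of band reflectances); the point with the largest score is the strongest anomaly candidate.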

  8. A new algorithm for evaluating 3D curvature and curvature gradient for improved fracture detection

    NASA Astrophysics Data System (ADS)

    Di, Haibin; Gao, Dengliang

    2014-09-01

    In 3D seismic interpretation, both curvature and curvature gradient are useful seismic attributes for structure characterization and fault detection in the subsurface. However, the existing algorithms are computationally intensive and limited in lateral resolution for steeply-dipping formations. This study presents new and robust volume-based algorithms that evaluate both curvature and curvature gradient attributes more accurately and effectively. The algorithms first fit a local surface to the seismic data and then compute attributes from the spatial derivatives of the fitted surface. Specifically, the curvature algorithm constructs a quadratic surface from a rectangular 9-node grid cell, whereas the curvature gradient algorithm builds a cubic surface from a diamond 13-node grid cell. A dip-steering approach based on 3D complex seismic trace analysis is implemented to enhance the accuracy of surface construction and to reduce computational time. Applications to two 3D seismic surveys demonstrate the accuracy and efficiency of the new curvature and curvature gradient algorithms for characterizing faults and fractures in fractured reservoirs.
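The 9-node quadratic-surface step can be sketched directly. Assuming unit grid spacing and omitting the paper's dip-steering stage, the code fits z = ax² + by² + cxy + dx + ey + f to a 3x3 cell by least squares and evaluates the standard Gaussian and mean curvature formulas at the centre:

```python
import numpy as np

def grid_curvature(z):
    """Gaussian (K) and mean (H) curvature at the centre of a 3x3
    grid cell of surface values z (rows = y, cols = x, unit spacing).
    """
    xs, ys = np.meshgrid([-1, 0, 1], [-1, 0, 1])
    x, y = xs.ravel(), ys.ravel()
    # design matrix for z = a x^2 + b y^2 + c xy + d x + e y + f
    G = np.column_stack([x**2, y**2, x * y, x, y, np.ones(9)])
    a, b, c, d, e, f = np.linalg.lstsq(G, np.asarray(z, float).ravel(),
                                       rcond=None)[0]
    zxx, zyy, zxy, zx, zy = 2 * a, 2 * b, c, d, e  # derivatives at (0,0)
    w = 1 + zx**2 + zy**2
    K = (zxx * zyy - zxy**2) / w**2
    H = ((1 + zy**2) * zxx - 2 * zx * zy * zxy + (1 + zx**2) * zyy) / (2 * w**1.5)
    return K, H
```

For the paraboloid z = x² + y² the fit is exact and yields K = 4, H = 2, a useful sanity check before running the attribute over a volume.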

  9. A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features

    PubMed Central

    Amudha, P.; Karthik, S.; Sivakumari, S.

    2015-01-01

    Intrusion detection has become a central part of network security due to the huge number of attacks affecting computers, a consequence of the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, this paper proposes a hybrid algorithm that integrates a Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to address the intrusion detection problem. The algorithms are combined to obtain better optimization results, and classification accuracies are obtained by the 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and to test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the KDDCup'99 intrusion detection benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with other machine learning algorithms and found to be significantly different. PMID:26221625

  10. Sequential structural damage diagnosis algorithm using a change point detection method

    NASA Astrophysics Data System (ADS)

    Noh, H.; Rajagopal, R.; Kiremidjian, A. S.

    2013-11-01

    This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method. The general change point detection method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori, unless we are looking for a known specific type of damage. Therefore, we introduce an additional algorithm that estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using a set of experimental data collected from a four-story steel special moment-resisting frame and multiple sets of simulated data. Various features of different dimensions have been explored, and the algorithm was able to identify damage, particularly when it uses multidimensional damage sensitive features and lower false alarm rates, with a known post-damage feature distribution. For unknown feature distribution cases, the post-damage distribution was consistently estimated and the detection delays were only a few time steps longer than the delays from the general method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.
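The sequential test in this record can be illustrated with the classic CUSUM detector. This is a hedged sketch, not the paper's statistic: it assumes the post-change behaviour (an upward mean shift) is known, whereas the paper's contribution is precisely estimating that distribution online:

```python
def cusum_detect(x, mu0, drift, h):
    """One-sided CUSUM for an upward mean shift.

    mu0:   in-control mean of the damage-sensitive feature
    drift: allowance (half the smallest shift worth detecting)
    h:     decision threshold
    Returns the index of the first alarm, or None if none fires.
    """
    s = 0.0
    for i, v in enumerate(x):
        s = max(0.0, s + v - mu0 - drift)  # accumulate evidence of a shift
        if s > h:
            return i
    return None
```

The detection delay (alarm index minus the true change index) is the quantity the paper reports as "only a few time steps longer" when the post-damage distribution must be estimated.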

  11. Algorithms for detection of objects in image sequences captured from an airborne imaging system

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia; Tang, Yuan-Liang; Devadiga, Sadashiva; Gandhi, Tarak

    1995-01-01

    This research was initiated as a part of the effort at the NASA Ames Research Center to design a computer vision based system that can enhance the safety of navigation by aiding the pilots in detecting various obstacles on the runway during critical sections of the flight, such as a landing maneuver. The primary goal is the development of algorithms for detection of moving objects from a sequence of images obtained from an on-board video camera. Image regions corresponding to the independently moving objects are segmented from the background by applying constraint filtering on the optical flow computed from the initial few frames of the sequence. These detected regions are tracked over subsequent frames using a model based tracking algorithm. Position and velocity of the moving objects in world coordinates are estimated using an extended Kalman filter. The algorithms are tested using the NASA line image sequence with six static trucks and a simulated moving truck, and experimental results are described. Various limitations of the currently implemented version of the above algorithm are identified and possible solutions to build a practical working system are investigated.
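The paper's extended Kalman filter over world coordinates is not reproducible from the abstract; as a hedged, much-reduced sketch of the same estimation idea, here is a linear constant-velocity Kalman filter for one coordinate of a tracked object:

```python
import numpy as np

def kalman_cv(measurements, dt=1.0, q=1e-3, r=1.0):
    """Minimal 1-D constant-velocity Kalman filter.

    measurements: noisy position readings; q, r are illustrative
    process/measurement noise levels. Returns (position, velocity)
    estimates per step.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition
    H = np.array([[1.0, 0.0]])               # position is observed
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([[measurements[0]], [0.0]])  # initial position, velocity
    P = np.eye(2)
    estimates = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        estimates.append((x[0, 0], x[1, 0]))
    return estimates
```

The "extended" variant in the paper linearizes a nonlinear camera-to-world measurement model at each step; the predict/update structure is the same.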

  12. Automatic target detection algorithm for foliage-penetrating ultrawideband SAR data using split spectral analysis

    NASA Astrophysics Data System (ADS)

    Damarla, Thyagaraju; Kapoor, Ravinder; Ressler, Marc A.

    1999-07-01

    We present an automatic target detection (ATD) algorithm for foliage penetrating (FOPEN) ultra-wideband (UWB) synthetic aperture radar (SAR) data using split spectral analysis. Split spectral analysis is commonly used in the ultrasonic, non-destructive evaluation of materials using wide band pulses for flaw detection. In this paper, we show the application of split spectral analysis for detecting obscured targets in foliage using UWB pulse returns. To discriminate targets from foliage, the data spectrum is split into several bands, namely 20 to 75, 75 to 150, ..., 825 to 900 MHz. An ATD algorithm is developed based on the relative energy levels in the various bands, the number of bands containing significant energy (the spread of energy), and the chip size (number of cross-range and range bins). The algorithm is tested on FOPEN UWB SAR data of foliage and of vehicles obscured by foliage collected at Aberdeen Proving Ground, MD. The paper presents the various split spectral parameters used in the algorithm and discusses the rationale for their use.
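The band-splitting step is straightforward to sketch: FFT the return and sum the power inside each band. The band edges and names below are illustrative (the record's 20-900 MHz bands are one choice), and the discrimination logic built on these energies is not reproduced:

```python
import numpy as np

def band_energies(signal, fs, edges):
    """Split-spectrum energies of a real-valued pulse return.

    fs:    sampling rate in Hz
    edges: list of (lo, hi) band edges in Hz
    Returns the total spectral power inside each band.
    """
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return [spec[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in edges]
```

A target-like return concentrates energy in a few bands, while foliage clutter spreads it; the relative band energies and their spread are exactly the features the ATD algorithm thresholds.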

  13. Algorithms for detecting and predicting influenza outbreaks: metanarrative review of prospective evaluations

    PubMed Central

    Spreco, A; Timpka, T

    2016-01-01

    Objectives Reliable monitoring of influenza seasons and pandemic outbreaks is essential for response planning, but compilations of reports on detection and prediction algorithm performance in influenza control practice are largely missing. The aim of this study is to perform a metanarrative review of prospective evaluations of influenza outbreak detection and prediction algorithms, restricted to settings where authentic surveillance data have been used. Design The study was performed as a metanarrative review. An electronic literature search was performed, papers were selected, and qualitative and semiquantitative content analyses were conducted. For data extraction and interpretations, researcher triangulation was used for quality assurance. Results Eight prospective evaluations were found that used authentic surveillance data: three studies evaluating detection and five studies evaluating prediction. The methodological perspectives and experiences from the evaluations were found to have been reported in narrative formats representing biodefence informatics and health policy research, respectively. The biodefence informatics narrative, with an emphasis on verification of technically and mathematically sound algorithms, constituted a large part of the reporting. Four evaluations were reported as health policy research narratives, thus formulated in a manner that allows the results to qualify as policy evidence. Conclusions Awareness of the narrative format in which results are reported is essential when interpreting algorithm evaluations from an infectious disease control practice perspective. PMID:27154479

  15. The design and hardware implementation of a low-power real-time seizure detection algorithm

    NASA Astrophysics Data System (ADS)

    Raghunathan, Shriram; Gupta, Sumeet K.; Ward, Matthew P.; Worth, Robert M.; Roy, Kaushik; Irazoqui, Pedro P.

    2009-10-01

    Epilepsy affects more than 1% of the world's population. Responsive neurostimulation is emerging as an alternative therapy for the 30% of the epileptic patient population that does not benefit from pharmacological treatment. Efficient seizure detection algorithms will enable closed-loop epilepsy prostheses by stimulating the epileptogenic focus within an early onset window. Critically, this is expected to reduce neuronal desensitization over time and lead to longer-term device efficacy. This work presents a novel event-based seizure detection algorithm along with a low-power digital circuit implementation. Hippocampal depth-electrode recordings from six kainate-treated rats are used to validate the algorithm and hardware performance in this preliminary study. The design process illustrates crucial trade-offs in translating mathematical models into hardware implementations and validates statistical optimizations made with empirical data analyses on results obtained using a real-time functioning hardware prototype. Using quantitatively predicted thresholds from the depth-electrode recordings, the auto-updating algorithm performs with an average sensitivity and selectivity of 95.3 ± 0.02% and 88.9 ± 0.01% (mean ± SEα = 0.05), respectively, on untrained data with a detection delay of 8.5 s [5.97, 11.04] from electrographic onset. The hardware implementation is shown feasible using CMOS circuits consuming under 350 nW of power from a 250 mV supply voltage from simulations on the MIT 180 nm SOI process.

  16. Unsupervised algorithms for intrusion detection and identification in wireless ad hoc sensor networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2009-05-01

    In previous work by the author, parameters across network protocol layers were selected as features in supervised algorithms that detect and identify certain intrusion attacks on wireless ad hoc sensor networks (WSNs) carrying multisensor data. The algorithms improved the residual performance of the intrusion prevention measures provided by any dynamic key-management schemes and trust models implemented among network nodes. The approach of this paper does not train algorithms on the signature of known attack traffic, but, instead, is based on unsupervised anomaly detection techniques that learn the signature of normal network traffic. Unsupervised learning does not require the data to be labeled or to be purely of one type, i.e., normal or attack traffic. The approach can be augmented to add any security attributes and quantified trust levels, established during data exchanges among nodes, to the set of cross-layer features from the WSN protocols. A two-stage framework is introduced for the security algorithms to overcome the problems of input size and resource constraints. The first stage is an unsupervised clustering algorithm which reduces the payload of network data packets to a tractable size. The second stage is a traditional anomaly detection algorithm based on a variation of support vector machines (SVMs), whose efficiency is improved by the availability of data in the packet payload. In the first stage, selected algorithms are adapted to WSN platforms to meet system requirements for simple parallel distributed computation, distributed storage and data robustness. A set of mobile software agents, acting like an ant colony in securing the WSN, are distributed at the nodes to implement the algorithms. The agents move among the layers involved in the network response to the intrusions at each active node and trustworthy neighborhood, collecting parametric values and executing assigned decision tasks. This minimizes the need to move large amounts of data.

  17. Advanced Oil Spill Detection Algorithms For Satellite Based Maritime Environment Monitoring

    NASA Astrophysics Data System (ADS)

    Radius, Andrea; Azevedo, Rui; Sapage, Tania; Carmo, Paulo

    2013-12-01

    During recent years, the increasing occurrence of pollution and the alarming deterioration of the environmental health of the sea have led to the need for global monitoring capabilities, namely for marine environment management in terms of oil spill detection and indication of the suspected polluter. The sensitivity of Synthetic Aperture Radar (SAR) to different phenomena on the sea, especially oil spills and vessels, makes it a key instrument for global pollution monitoring. The SAR performance in maritime pollution monitoring is being operationally explored by a set of service providers on behalf of the European Maritime Safety Agency (EMSA), which launched the CleanSeaNet (CSN) project, a pan-European satellite-based oil monitoring service, in 2007. EDISOFT, which has been a service provider for CSN from the beginning, is continuously investing in R&D activities that will ultimately lead to better algorithms and better performance in oil spill detection from SAR imagery. This strategy is being pursued through EDISOFT's participation in the FP7 EC Sea-U project and in the Automatic Oil Spill Detection (AOSD) ESA project. The Sea-U project aims to improve the current state of oil spill detection algorithms through the maximization of informative content obtained with data fusion, the exploitation of different types of data/sensors and the development of advanced image processing, segmentation and classification techniques. The AOSD project is closely related to the operational segment, because it is focused on the automation of the oil spill detection processing chain, integrating auxiliary data, like wind information, together with image and geometry analysis techniques. The synergy between these different objectives (R&D versus operational) allowed EDISOFT to develop oil spill detection software that combines the operational automatic aspect, obtained through dedicated integration of the processing chain in the existing open source NEST toolbox.

  18. A discrete artificial bee colony algorithm for detecting transcription factor binding sites in DNA sequences.

    PubMed

    Karaboga, D; Aslan, S

    2016-01-01

    The great majority of biological sequences share significant similarity with other sequences as a result of evolutionary processes, and identifying these sequence similarities is one of the most challenging problems in bioinformatics. In this paper, we present a discrete artificial bee colony (ABC) algorithm, which is inspired by the intelligent foraging behavior of real honey bees, for the detection of highly conserved residue patterns or motifs within sequences. Experimental studies on three different data sets showed that the proposed discrete model, by adhering to the fundamental scheme of the ABC algorithm, produced competitive or better results than other metaheuristic motif discovery techniques. PMID:27173272

  19. Evolution of Testing Algorithms at a University Hospital for Detection of Clostridium difficile Infections

    PubMed Central

    Culbreath, Karissa; Ager, Edward; Nemeyer, Ronald J.; Kerr, Alan

    2012-01-01

    We present the evolution of testing algorithms at our institution, in which the C. Diff Quik Chek Complete immunochromatographic cartridge assay, which determines the presence of both glutamate dehydrogenase and Clostridium difficile toxins A and B, serves as a primary screen for C. difficile infection, and indeterminate results (glutamate dehydrogenase positive, toxin A and B negative) are confirmed by the GeneXpert C. difficile PCR assay. This two-step algorithm is a cost-effective method for highly sensitive detection of toxigenic C. difficile. PMID:22718938
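The two-step reporting logic described above is simple enough to state as code; this is a sketch of the decision flow only (function and result names are illustrative, not the laboratory's information system):

```python
def cdiff_result(gdh_positive, toxin_positive, pcr_positive=None):
    """Two-step C. difficile testing logic.

    Step 1: cartridge assay (GDH antigen + toxins A/B).
    Step 2: discordant results (GDH+, toxin-) reflex to PCR.
    Returns the reported result as a string.
    """
    if not gdh_positive:
        return "negative"                      # screen negative, done
    if toxin_positive:
        return "positive"                      # concordant positive, done
    if pcr_positive is None:
        return "indeterminate: reflex to PCR"  # await confirmatory test
    return "positive" if pcr_positive else "negative"
```

Only the discordant branch incurs the cost of PCR, which is what makes the algorithm cost-effective while keeping sensitivity high.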

  20. Competitive evaluation of failure detection algorithms for strapdown redundant inertial instruments.

    NASA Technical Reports Server (NTRS)

    Wilcox, J. C.

    1973-01-01

    Seven algorithms for failure detection, isolation, and correction of strapdown inertial instruments in the dodecahedron configuration are competitively evaluated by means of a digital computer simulation that provides them with identical inputs. Their performance is compared in terms of orientation errors and computer burden. The analytical foundations of the algorithms are presented. The features that are found to contribute to superior performance are use of a definite logical structure, elimination of interaction between failures, different thresholds for first and second failures, use of the 'parity' test signals, and avoidance of iteration loops.

  1. A new algorithm for detecting cloud height using OMPS/LP measurements

    NASA Astrophysics Data System (ADS)

    Chen, Zhong; DeLand, Matthew; Bhartia, Pawan K.

    2016-03-01

    The Ozone Mapping and Profiler Suite Limb Profiler (OMPS/LP) ozone product requires the determination of cloud height for each event to establish the lower boundary of the profile for the retrieval algorithm. We have created a revised cloud detection algorithm for LP measurements that uses the spectral dependence of the vertical gradient in radiance between two wavelengths in the visible and near-IR spectral regions. This approach provides better discrimination between clouds and aerosols than results obtained using a single wavelength. Observed LP cloud height values show good agreement with coincident Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) measurements.

  2. A new morphological anomaly detection algorithm for hyperspectral images and its GPU implementation

    NASA Astrophysics Data System (ADS)

    Paz, Abel; Plaza, Antonio

    2011-10-01

    Anomaly detection is considered a very important task for hyperspectral data exploitation. It is now routinely applied in many application domains, including defence and intelligence, public safety, precision agriculture, geology, or forestry. Many of these applications require timely responses for swift decisions which depend upon high computing performance of algorithm analysis. However, with the recent explosion in the amount and dimensionality of hyperspectral imagery, this problem calls for the incorporation of parallel computing techniques. In the past, clusters of computers have offered an attractive solution for fast anomaly detection in hyperspectral data sets already transmitted to Earth. However, these systems are expensive and difficult to adapt to on-board data processing scenarios, in which low-weight and low-power integrated components are essential to reduce mission payload and obtain analysis results in (near) real-time, i.e., at the same time as the data is collected by the sensor. An exciting new development in the field of commodity computing is the emergence of commodity graphics processing units (GPUs), which can now bridge the gap towards on-board processing of remotely sensed hyperspectral data. In this paper, we develop a new morphological algorithm for anomaly detection in hyperspectral images along with an efficient GPU implementation of the algorithm. The algorithm is implemented on latest-generation GPU architectures, and evaluated with regards to other anomaly detection algorithms using hyperspectral data collected by NASA's Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) over the World Trade Center (WTC) in New York, five days after the terrorist attacks that collapsed the two main towers in the WTC complex. The proposed GPU implementation achieves real-time performance in the considered case study.

  3. A multi-objective discrete cuckoo search algorithm with local search for community detection in complex networks

    NASA Astrophysics Data System (ADS)

    Zhou, Xu; Liu, Yanheng; Li, Bin

    2016-03-01

    Detecting communities is a challenging task in analyzing networks. Solving the community detection problem with evolutionary algorithms has been a heated topic in recent years. In this paper, a multi-objective discrete cuckoo search algorithm with local search (MDCL) for community detection is proposed. To the best of our knowledge, this is the first time the cuckoo search algorithm has been applied to community detection. Two objective functions, termed negative ratio association and ratio cut, are minimized. These two functions can break through the modularity limitation. In the proposed algorithm, the nest-location updating strategy and abandon operator of the cuckoo are redefined in discrete form. A local search strategy and a clone operator are proposed to obtain an optimal initial population. The experimental results on synthetic and real-world networks show that the proposed algorithm has better performance than other algorithms and can discover higher-quality community structure without prior information.

  4. Assessment of an Automated Touchdown Detection Algorithm for the Orion Crew Module

    NASA Technical Reports Server (NTRS)

    Gay, Robert S.

    2011-01-01

    Orion Crew Module (CM) touchdown detection is critical to activating the post-landing sequence that safes the Reaction Control System (RCS) jets, ensures that the vehicle remains upright, and establishes communication with recovery forces. In order to accommodate safe landing of an unmanned vehicle or incapacitated crew, an onboard automated detection system is required. An Orion-specific touchdown detection algorithm was developed and evaluated to differentiate landing events from in-flight events. The proposed method will be used to initiate post-landing cutting of the parachute riser lines, to prevent CM rollover, and to terminate RCS jet firing prior to submersion. The RCS jets continue to fire until touchdown to maintain proper CM orientation with respect to the flight path and to limit impact loads, but have potentially hazardous consequences if submerged while firing. The time available after impact to cut risers and initiate the CM Up-righting System (CMUS) is measured in minutes, whereas the time from touchdown to RCS jet submersion is a function of descent velocity and sea state conditions, and is often less than one second. Evaluation of the detection algorithms was performed for in-flight events (e.g. descent under chutes) using high-fidelity rigid body analyses in the Decelerator Systems Simulation (DSS), whereas water impacts were simulated using a rigid finite element model of the Orion CM in LS-DYNA. Two touchdown detection algorithms were evaluated with various thresholds: acceleration magnitude spike detection, and accumulated velocity change (over a given time window) spike detection. Data for both detection methods is acquired from an onboard Inertial Measurement Unit (IMU) sensor. The detection algorithms were tested with analytically generated in-flight and landing IMU data simulations. The acceleration spike detection proved to be faster while maintaining the desired safety margin. Time to RCS jet submersion was predicted analytically across a series of
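The two detection methods compared in this record reduce to short loops over IMU samples. The sketch below is illustrative only: thresholds, window lengths and sample values are made up, not Orion parameters:

```python
def accel_spike(accels, threshold):
    """Acceleration-magnitude spike detection: flag the first sample
    whose magnitude exceeds the threshold."""
    for i, a in enumerate(accels):
        if abs(a) > threshold:
            return i
    return None

def delta_v_spike(accels, dt, window, threshold):
    """Accumulated-velocity-change spike detection: flag the first
    sample where the integral of acceleration over a sliding window
    exceeds the threshold."""
    for i in range(len(accels)):
        lo = max(0, i - window + 1)
        dv = sum(accels[lo:i + 1]) * dt  # delta-v over the window
        if abs(dv) > threshold:
            return i
    return None
```

On an impact-like pulse the magnitude test fires on the first large sample while the windowed delta-v test needs a few samples to accumulate, which matches the record's finding that the acceleration spike method is faster.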

  5. One-shot and aberration-tolerable homodyne detection for holographic storage readout through double-frequency grating-based lateral shearing interferometry.

    PubMed

    Yu, Yeh-Wei; Xiao, Shuai; Cheng, Chih-Yuan; Sun, Ching-Cherng

    2016-05-16

    A simple method to decode the stored phase signal of volume holographic data storage with adequate wave aberration tolerance is in high demand. We proposed and demonstrated a one-shot scheme to decode a binary-phase encoded signal through double-frequency grating-based shearing interferometry (DFGSI). The lateral shearing amount depends on the focal length of the collimating lens and the frequency difference between the gratings. Diffracted waves with phase encoding were successfully decoded experimentally. An optical model of the DFGSI was built to analyze phase-error induction and phase-difference control by shifting the double-frequency grating longitudinally and laterally, respectively, and the model was validated experimentally. Finally, the high aberration tolerance of the DFGSI was demonstrated using the optical model. PMID:27409865

  6. High-resolution detection of recurrent aberrations in lung adenocarcinomas by array comparative genomic hybridization and expression analysis of selective genes by quantitative PCR.

    PubMed

    Zhu, Hong; Wong, Maria Pik; Tin, Vicky

    2014-06-01

    Genomic abnormalities are the hallmark of cancers and may harbor potential candidate genes important for cancer development and progression. We performed array comparative genomic hybridization (array CGH) on 36 cases of primary lung adenocarcinoma (AD) using an array containing 2621 BAC or PAC clones spanning the genome at an average interval of 1 Mb. Array CGH identified the commonest aberrations as DNA gains within 1p, 1q, 5p, 5q, 7p, 7q, 8q, 11q, 12p, 13q, 16p, 17q, and 20q, and losses within 6q, 9p, 10q, and 18q. High-level copy gains involved mainly 7p21-p15 and 20q13.3. Dual-color fluorescence in situ hybridization (FISH) was performed on a selected locus to validate the array CGH results. Genomic aberrations were compared across clinicopathological features, and a trend toward a higher number of aberrations in tumors with aggressive phenotypes and current tobacco exposure was identified. Based on the array CGH data, 23 candidate genes were selected for quantitative PCR (qPCR) analysis. The concordance observed between the genomic and expression changes in most of the genes suggested that they could be candidate cancer-related genes that contributed to the development of lung AD. PMID:24728343

  7. Comparison of the period detection algorithms based on Pi of the Sky data

    NASA Astrophysics Data System (ADS)

    Opiela, Rafał; Mankiewicz, Lech; Żarnecki, Aleksander Filip

    2015-09-01

    The Pi of the Sky is a system of five autonomous detectors designed for continuous observation of the night sky, mainly looking for optical flashes of astrophysical origin, in particular Gamma Ray Bursts (GRBs). In the Pi of the Sky project we also study many kinds of variable stars (with periods in the range 0.5 d - 1000.0 d) and take part in multiwavelength observing campaigns, such as the DG CVn outburst observations. Our wide-field-of-view robotic telescopes are located at the San Pedro de Atacama Observatory, Chile, and the INTA El Arenosillo Observatory, Spain, and were designed for monitoring a large fraction of the sky to a limiting magnitude of 12-13 with a time resolution of the order of 1-10 seconds. Accurate determination of variability parameters is very important in the analysis of variable star observations, and many algorithms can be used for this purpose. In this article, using Monte Carlo analysis, we compare the period detection algorithms we use for data of astronomical origin. Based on the tests performed, we show which algorithm gives the best period detection quality and derive an approximate formula describing the period detection error. We also give examples of this calculation based on variable stars observed by our detectors. Finally, we show how removing bad measurements from the analysed light curve affects the accuracy of the period detection.
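    As one concrete example of the kind of period-detection algorithm being compared, here is a minimal phase dispersion minimization (PDM) sketch: fold the light curve at each trial period and pick the period that minimizes the within-bin scatter. The trial-period grid and bin count are arbitrary illustrative choices, not the project's actual settings.

```python
import numpy as np

def pdm_best_period(t, x, trial_periods, nbins=10):
    """Return the trial period minimizing the PDM statistic theta, i.e. the
    mean within-phase-bin variance divided by the total variance."""
    total_var = np.var(x)
    best_period, best_theta = None, np.inf
    for period in trial_periods:
        phase = (t % period) / period
        bins = np.minimum((phase * nbins).astype(int), nbins - 1)
        num, cnt = 0.0, 0
        for b in range(nbins):
            sel = x[bins == b]
            if sel.size > 1:
                num += sel.size * np.var(sel)
                cnt += sel.size
        theta = (num / cnt) / total_var
        if theta < best_theta:
            best_period, best_theta = period, theta
    return best_period

# Synthetic light curve with a true period of 2.5 d
t = np.arange(0.0, 20.0, 0.1)
x = np.sin(2 * np.pi * t / 2.5)
print(pdm_best_period(t, x, np.arange(1.0, 4.01, 0.5)))  # 2.5
```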

  8. Improving lesion detectability in PET imaging with a penalized likelihood reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Wangerin, Kristen A.; Ahn, Sangtae; Ross, Steven G.; Kinahan, Paul E.; Manjeshwar, Ravindra M.

    2015-03-01

    Ordered Subset Expectation Maximization (OSEM) is currently the most widely used image reconstruction algorithm for clinical PET. However, OSEM does not necessarily provide optimal image quality, and a number of alternative algorithms have been explored. We have recently shown that a penalized likelihood image reconstruction algorithm using the relative difference penalty, block sequential regularized expectation maximization (BSREM), achieves more accurate lesion quantitation than OSEM, and importantly, maintains acceptable visual image quality in clinical whole-body PET. The goal of this work was to evaluate lesion detectability with BSREM versus OSEM. We performed a two-alternative forced choice study using 81 patient datasets with lesions of varying contrast inserted into the liver and lung. At matched imaging noise, BSREM and OSEM showed equivalent detectability in the lungs, and BSREM outperformed OSEM in the liver. These results suggest that BSREM provides not only improved quantitation and clinically acceptable visual image quality as previously shown but also improved lesion detectability compared to OSEM. We then modeled this detectability study, applying both non-prewhitening (NPW) and channelized Hotelling (CHO) model observers to the reconstructed images. The CHO model observer showed good agreement with the human observers, suggesting that we can apply this model to future studies with varying simulation and reconstruction parameters.

  9. A low computational cost algorithm for REM sleep detection using single channel EEG.

    PubMed

    Imtiaz, Syed Anas; Rodriguez-Villegas, Esther

    2014-11-01

    The push towards low-power and wearable sleep systems requires using a minimum number of recording channels to enhance battery life, keep the processing load small, and be more comfortable for the user. Since most sleep stages can be identified using EEG traces, enormous power savings could be achieved by using a single channel of EEG. However, detection of REM sleep from single-channel EEG is challenging due to its electroencephalographic similarities with the N1 and Wake stages. In this paper we investigate a novel feature in sleep EEG that demonstrates high discriminatory ability for detecting REM phases. We then use this feature, which is based on the spectral edge frequency (SEF) in the 8-16 Hz frequency band, together with the absolute power and the relative power of the signal, to develop a simple REM detection algorithm. We evaluate the performance of the proposed algorithm on overnight single-channel EEG recordings of 5 training and 15 independent test subjects. Our algorithm achieved a sensitivity of 83%, a specificity of 89%, and a selectivity of 61% on a test database consisting of 2221 REM epochs. It also achieved a sensitivity and selectivity of 81% and 75%, respectively, on the PhysioNet Sleep-EDF database consisting of 8 subjects. These results demonstrate that SEF can be a useful feature for automatic detection of the REM stages of sleep from a single channel of EEG. PMID:25113231
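    The spectral edge frequency idea can be illustrated in a few lines: the SEF is the frequency below which a given fraction of the band-limited power lies. The plain-FFT power estimate and the 95% edge fraction below are illustrative assumptions; the paper's exact feature definition may differ.

```python
import numpy as np

def spectral_edge_frequency(x, fs, fmin=8.0, fmax=16.0, edge=0.95):
    """Frequency below which `edge` of the signal power in [fmin, fmax] lies."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= fmin) & (freqs <= fmax)
    p, f = psd[band], freqs[band]
    cum = np.cumsum(p)
    return f[np.searchsorted(cum, edge * cum[-1])]

# A pure 10 Hz tone: nearly all band power sits at 10 Hz, so SEF95 ~ 10 Hz
fs, n = 128, 128
x = np.sin(2 * np.pi * 10.0 * np.arange(n) / fs)
print(spectral_edge_frequency(x, fs))  # 10.0
```

    In practice the feature would be computed per epoch (e.g., 30 s windows) and combined with the absolute and relative power features the abstract mentions.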

  10. Detecting a Singleton Attractor in a Boolean Network Utilizing SAT Algorithms

    NASA Astrophysics Data System (ADS)

    Tamura, Takeyuki; Akutsu, Tatsuya

    The Boolean network (BN) is a mathematical model of genetic networks. It is known that detecting a singleton attractor, which is also called a fixed point, is NP-hard even for AND/OR BNs (i.e., BNs consisting of AND/OR nodes), where singleton attractors correspond to steady states. Though a naive algorithm can detect a singleton attractor for an AND/OR BN in O(n·2^n) time, no O((2-ε)^n) (ε > 0) time algorithm was known even for an AND/OR BN with non-restricted indegree, where n is the number of nodes in a BN. In this paper, we present an O(1.787^n) time algorithm for detecting a singleton attractor of a given AND/OR BN, along with related results. We also show that detection of a singleton attractor in a BN with maximum indegree two is NP-hard and can be polynomially reduced to a satisfiability problem.
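    The naive O(n·2^n) baseline mentioned above is easy to state: enumerate all 2^n states and keep those that map to themselves. A small sketch for AND/OR BNs follows; the encoding of nodes and literals here is invented for illustration.

```python
from itertools import product

def singleton_attractors(nodes):
    """nodes[i] = (op, inputs): op is 'AND' or 'OR'; inputs is a list of
    (source_index, negated) literals. Returns all fixed points (steady states)."""
    n = len(nodes)
    fixed = []
    for state in product([0, 1], repeat=n):          # all 2^n states
        nxt = []
        for op, inputs in nodes:
            lits = [state[i] ^ neg for i, neg in inputs]
            nxt.append(int(all(lits) if op == 'AND' else any(lits)))
        if tuple(nxt) == state:                      # state maps to itself
            fixed.append(state)
    return fixed

# x0' = x1, x1' = x0: the only steady states are (0, 0) and (1, 1)
net = [('OR', [(1, 0)]), ('OR', [(0, 0)])]
print(singleton_attractors(net))  # [(0, 0), (1, 1)]
```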

  11. Identification and detection of gaseous effluents from hyperspectral imagery using invariant algorithms

    NASA Astrophysics Data System (ADS)

    O'Donnell, Erin M.; Messinger, David W.; Salvaggio, Carl; Schott, John R.

    2004-08-01

    The ability to detect and identify effluent gases is, and will continue to be, of great importance. This would aid not only in the regulation of pollutants but also in treaty enforcement and in monitoring the production of weapons. Given these applications, a way to remotely investigate a gaseous emission is highly desirable. This research utilizes hyperspectral imagery in the infrared region of the electromagnetic spectrum to evaluate an invariant method of detecting and identifying gases within a scene. The image is evaluated on a pixel-by-pixel basis and is studied at the subpixel level. A library of target gas spectra is generated using a simple slab radiance model. This results in a more robust description of gas spectra that is representative of real-world observations. This library is the subspace utilized by the detection and identification algorithms. The subspace will be evaluated for the set of basis vectors that best span it; the Lee algorithm, which implements the Maximum Distance Method (MaxD), will be used to determine this set of basis vectors. A Generalized Likelihood Ratio Test (GLRT) determines whether or not the pixel contains the target. The target can be either a single species or a combination of gases. Synthetically generated scenes will be used for this research. This work evaluates whether the Lee invariant algorithm will be effective in the gas detection and identification problem.

  12. Penalty Dynamic Programming Algorithm for Dim Targets Detection in Sensor Systems

    PubMed Central

    Huang, Dayu; Xue, Anke; Guo, Yunfei

    2012-01-01

    In order to detect and track multiple maneuvering dim targets in sensor systems, an improved dynamic programming track-before-detect algorithm (DP-TBD), called penalty DP-TBD (PDP-TBD), is proposed. The performance of the tracking techniques is used as feedback to the detection part. The feedback is constructed as a penalty term in the merit function, where the penalty term is a function of the possible target state estimate, which can be obtained by the tracking methods. With this feedback, the algorithm combines traditional tracking techniques with DP-TBD, and it can be applied to simultaneously detect and track maneuvering dim targets. Meanwhile, a reasonable constraint that a sensor measurement can originate from at most one target or from clutter is proposed to minimize track separation. Thus, the algorithm can be used in multi-target situations with unknown target numbers. The efficiency and advantages of PDP-TBD compared with two existing methods are demonstrated by several simulations. PMID:22666074

  13. Radiation anomaly detection algorithms for field-acquired gamma energy spectra

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Sanjoy; Maurer, Richard; Wolff, Ron; Guss, Paul; Mitchell, Stephen

    2015-08-01

    The Remote Sensing Laboratory (RSL) is developing a tactical, networked radiation detection system that will be agile, reconfigurable, and capable of rapid threat assessment with a high degree of fidelity and certainty. Our design is driven by the needs of users such as law enforcement personnel who must make decisions by evaluating threat signatures in urban settings. The most efficient tool available to identify the nature of a threat object is real-time gamma spectroscopic analysis, as it is fast and has a very low probability of producing false positive alarm conditions. Urban radiological searches are inherently challenged by the rapid and large spatial variation of background gamma radiation, the presence of benign naturally occurring radioactive materials (NORM), and shielded and/or masked threat sources. Multiple spectral anomaly detection algorithms have been developed by national laboratories and commercial vendors. For example, the Gamma Detector Response and Analysis Software (GADRAS), a one-dimensional deterministic radiation transport code capable of calculating gamma-ray spectra using physics-based detector response functions, was developed at Sandia National Laboratories. The nuisance-rejection spectral comparison ratio anomaly detection algorithm (NSCRAD), developed at Pacific Northwest National Laboratory, uses spectral comparison ratios to detect deviations from benign medical and NORM radiation sources and can work despite a strong presence of NORM and/or medical sources. RSL has developed its own wavelet-based gamma energy spectral anomaly detection algorithm called WAVRAD. Test results and the relative merits of these different algorithms will be discussed and demonstrated.
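    To make the idea of spectral-ratio anomaly detection concrete, here is a generic sketch in the spirit of window-ratio methods such as NSCRAD. This is not the NSCRAD algorithm itself: the rate-matched Poisson z-score test and the 3-sigma threshold are illustrative assumptions.

```python
import numpy as np

def window_anomaly(spectrum, background, k=3.0):
    """Flag a measured spectrum as anomalous when any energy-window count
    deviates from the (rate-matched) background expectation by more than
    k Poisson standard deviations. Returns (is_anomalous, z_scores)."""
    expected = background * (spectrum.sum() / background.sum())
    z = (spectrum - expected) / np.sqrt(np.maximum(expected, 1.0))
    return bool(np.any(np.abs(z) > k)), z

bg = np.full(8, 100.0)                       # benign background, 8 energy windows
benign = np.full(8, 100.0)
source = benign.copy()
source[3] += 300.0                           # injected photopeak in window 3
print(window_anomaly(benign, bg)[0])         # False
print(window_anomaly(source, bg)[0])         # True
```

    Because the comparison is rate-matched, a uniform change in gross count rate (e.g., from spatially varying background) does not trigger an alarm; only spectral shape changes do.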

  14. Scalable Algorithms for Unsupervised Classification and Anomaly Detection in Large Geospatiotemporal Data Sets

    NASA Astrophysics Data System (ADS)

    Mills, R. T.; Hoffman, F. M.; Kumar, J.

    2015-12-01

    The increasing availability of high-resolution geospatiotemporal datasets from sources such as observatory networks, remote sensing platforms, and computational Earth system models has opened new possibilities for knowledge discovery and mining of ecological data sets fused from disparate sources. Traditional algorithms and computing platforms are impractical for the analysis and synthesis of data sets of this size; however, new algorithmic approaches that can effectively utilize the complex memory hierarchies and the extremely high levels of available parallelism in state-of-the-art high-performance computing platforms can enable such analysis. We describe some unsupervised knowledge discovery and anomaly detection approaches based on highly scalable parallel algorithms for k-means clustering and singular value decomposition, consider a few practical applications thereof to the analysis of climatic and remotely-sensed vegetation phenology data sets, and speculate on some of the new applications that such scalable analysis methods may enable.

  15. Contour detection and completion for inpainting and segmentation based on topological gradient and fast marching algorithms.

    PubMed

    Auroux, Didier; Cohen, Laurent D; Masmoudi, Mohamed

    2011-01-01

    We combine in this paper the topological gradient, which is a powerful method for edge detection in image processing, and a variant of the minimal path method in order to find connected contours. The topological gradient provides a more global analysis of the image than the standard gradient and identifies the main edges of an image. Several image processing problems (e.g., inpainting and segmentation) require continuous contours. For this purpose, we consider the fast marching algorithm in order to find minimal paths in the topological gradient image. This coupled algorithm quickly provides accurate and connected contours. We then present two numerical applications of this hybrid algorithm: image inpainting and segmentation. PMID:22194734

  16. A consensus algorithm for approximate string matching and its application to QRS complex detection

    NASA Astrophysics Data System (ADS)

    Alba, Alfonso; Mendez, Martin O.; Rubio-Rincon, Miguel E.; Arce-Santana, Edgar R.

    2016-08-01

    In this paper, a novel algorithm for approximate string matching (ASM) is proposed. The novelty resides in the fact that, unlike most other methods, the proposed algorithm is not based on the Hamming or Levenshtein distances, but instead computes a score for each symbol in the search text based on a consensus measure. Symbols with sufficiently high scores will likely correspond to approximate instances of the pattern string. To demonstrate the usefulness of the proposed method, it was applied to the detection of QRS complexes in electrocardiographic signals, with competitive results when compared against the classic Pan-Tompkins (PT) algorithm. The proposed method outperformed PT in 72% of the test cases, with no extra computational cost.
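    The paper's consensus measure is not fully specified in this abstract, but the voting flavor of the approach can be sketched as follows: every position where a text symbol agrees with a pattern symbol casts a vote for the corresponding alignment, and alignments with enough votes are reported as approximate matches. The scoring rule below is a guess for illustration, not the authors' definition.

```python
from collections import defaultdict

def consensus_matches(text, pattern, min_votes):
    """Return start offsets whose alignment collects >= min_votes
    symbol agreements with the pattern (a crude consensus score)."""
    votes = defaultdict(int)
    for t, c in enumerate(text):
        for k, p in enumerate(pattern):
            if c == p and t >= k:
                votes[t - k] += 1   # text[t] supports the alignment starting at t-k
    return sorted(s for s, v in votes.items() if v >= min_votes)

# "abx" and the trailing "ab" are 1-error / truncated instances of "abc"
print(consensus_matches("abxabcab", "abc", min_votes=2))  # [0, 3, 6]
```

    Unlike edit-distance methods, no alignment matrix is built; each symbol contributes independently to the score of the alignments that cover it.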

  17. A Fast Overlapping Community Detection Algorithm with Self-Correcting Ability

    PubMed Central

    Lu, Nan

    2014-01-01

    Motivated by the defects of existing modularity measures, this paper defines a weighted modularity based on density and cohesion as a new evaluation measure. Since the proportion of overlapping nodes in a network is very low, the number of repeat visits to nodes can be reduced by marking vertices with overlapping attributes. We propose three test conditions for overlapping nodes and present a fast overlapping community detection algorithm with self-correcting ability, which is decomposed into two processes. Under the control of the overlapping properties, the complexity of the algorithm tends to be approximately linear. We also give a new interpretation of the membership vector. Moreover, we improve the bridgeness function, which evaluates the extent to which nodes overlap. Finally, we conduct experiments on three networks with well-known community structures, and the results verify the feasibility and effectiveness of our algorithm. PMID:24757434

  18. A combined algorithm for T-wave alternans qualitative detection and quantitative measurement

    PubMed Central

    2013-01-01

    Background T-wave alternans (TWA) provides a noninvasive and clinically useful marker for the risk of sudden cardiac death (SCD). The most widely used TWA detection algorithms work in two different domains: time and frequency. The disadvantage of the spectral analytical techniques is that they treat the alternans signal as a stationary wave with constant amplitude and phase, and therefore cannot detect non-stationary characteristics of the signal. The temporal domain methods are sensitive to the alignment of the T-waves. In this study, we sought to develop a robust combined algorithm (CA) to assess T-wave alternans, which can qualitatively detect and quantitatively measure TWA in the time domain. Methods The T-wave sequences were extracted and the total energy of each T wave within the specified time-frequency region was calculated. The rank-sum test was applied to the ranked energy sequences of the T waves to detect TWA qualitatively. The ECG containing TWA was then analyzed quantitatively with a correlation method. Results Simulation tests showed a mean sensitivity of 91.2% in detecting TWA, and for SNRs of at least 30 dB the detection accuracy reached 100%. In the clinical data experiment, the results from this method correlated with those of the spectral method with a coefficient of 0.96. Conclusions A novel TWA analysis algorithm utilizing the wavelet transform and correlation techniques is presented in this paper. TWA is not only correctly detected qualitatively from the energy values of the T waves, but the alternans frequency and amplitude are also measured quantitatively in the time domain. PMID:23311454

  19. The evaluation of failure detection and isolation algorithms for restructurable control

    NASA Technical Reports Server (NTRS)

    Motyka, P.; Bonnice, W.; Hall, S.; Wagner, E.

    1984-01-01

    Three failure detection and identification techniques were compared to determine their usefulness in detecting and isolating failures in an aircraft flight control system, excluding sensor and flight control computer failures. The algorithms considered were the detection filter, the Generalized Likelihood Ratio test, and the Orthogonal Series Generalized Likelihood Ratio test. A modification to the basic detection filter was also considered, which uses secondary filtering of the residuals to produce unidirectional failure signals. The algorithms were evaluated by testing their ability to detect and isolate control surface failures in a nonlinear simulation of a C-130 aircraft. It was found that failures of some aircraft controls are difficult to distinguish because they have similar effects on the dynamics of the vehicle. Quantitative measures for evaluating the distinguishability of failures are considered. A system monitoring strategy for implementing the failure detection and identification techniques was also considered. This strategy identified the mix of direct measurement of failures versus the computation of failure information necessary for implementing the technology in an aircraft system.

  20. Oil Spill Detection by SAR Images: Dark Formation Detection, Feature Extraction and Classification Algorithms

    PubMed Central

    Topouzelis, Konstantinos N.

    2008-01-01

    This paper provides a comprehensive review of the use of Synthetic Aperture Radar (SAR) images for the detection of illegal discharges from ships. It summarizes the current state of the art, covering operational and research aspects of the application. Oil spills seriously affect fragile marine and coastal ecosystems and are therefore a source of political and scientific concern. The amount of pollutant discharged and the associated effects on the marine environment are important parameters in evaluating sea water quality. Satellite images can improve the possibilities for the detection of oil spills, as they cover large areas and offer an economical and easier way of continuously patrolling coastal areas. SAR images have been widely used for oil spill detection. The present paper gives an overview of the methodologies used to detect oil spills in radar images. In particular, we concentrate on the use of manual and automatic approaches to distinguish oil spills from other natural phenomena. We discuss the most common techniques for detecting dark formations in SAR images, the features extracted from the detected dark formations, and the most widely used classifiers. Finally, we conclude with a discussion of suggestions for further research. The references throughout the review can serve as a starting point for more intensive studies on the subject.

  1. Khoros, coupled with SIMD processor, provides a standard environment for mine detection algorithm evaluation

    NASA Astrophysics Data System (ADS)

    Long, Daniel T.; Hinnerschitz, Scott E.; Sutha, Surachai; Duvoisin, Herbert A., III; Cloud, Eugene L.; Dubey, Abinash C.

    1995-06-01

    Alternative algorithms for detecting and classifying mines and minelike objects must be evaluated against common image sets to assess performance. The Khoros Cantata™ environment provides a standard interface that is both powerful and user friendly. It provides the image algorithmist with an object-oriented graphical programming interface (GPI). A Khoros user can import 'toolboxes' of specialized image processing primitives for the development of high-order algorithms. When Khoros is coupled with a high-speed single instruction multiple data (SIMD) processor that operates as a co-processor to a Unix workstation, multiple algorithms and images can be analyzed rapidly. The Khoros system and toolboxes with SIMD extensions permit rapid description of an algorithm and allow display and evaluation of intermediate results. Each SIMD toolbox extension mirrors the results of the original serial processor's code with a highly accelerated drop-in replacement routine. This allows an algorithmist to develop identical programs/workspaces that run on the host workstation without the SIMD coprocessor, albeit with a severe loss in speed. Since a majority of mine detection components are extremely CPU intensive, it becomes impractical to process a large number of video frames without SIMD assistance. Development of additional SIMD primitives for customized user toolboxes has been greatly simplified in recent years by the advancement of higher-order languages for SIMD processors (e.g., C++, Ada). The result is a tool that should greatly enhance the scientific productivity of the mine detection community.

  2. A Simple and Robust Event-Detection Algorithm for Single-Cell Impedance Cytometry.

    PubMed

    Caselli, Federica; Bisegna, Paolo

    2016-02-01

    Microfluidic impedance cytometry is emerging as a powerful label-free technique for the characterization of single biological cells. In order to increase the sensitivity and the specificity of the technique, suited digital signal processing methods are required to extract meaningful information from measured impedance data. In this study, a simple and robust event-detection algorithm for impedance cytometry is presented. Since a differential measuring scheme is generally adopted, the signal recorded when a cell passes through the sensing region of the device exhibits a typical odd-symmetric pattern. This feature is exploited twice by the proposed algorithm: first, a preliminary segmentation, based on the correlation of the data stream with the simplest odd-symmetric template, is performed; then, the quality of detected events is established by evaluating their E2O index, that is, a measure of the ratio between their even and odd parts. A thorough performance analysis is reported, showing the robustness of the algorithm with respect to parameter choice and noise level. In terms of sensitivity and positive predictive value, an overall performance of 94.9% and 98.5%, respectively, was achieved on two datasets relevant to microfluidic chips with very different characteristics, considering three noise levels. The present algorithm can foster the role of impedance cytometry in single-cell analysis, which is the new frontier in "Omics." PMID:26241968
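    The two ingredients described (correlation with the simplest odd-symmetric template, then an even-to-odd energy ratio for quality control) can be sketched as follows. The [1, -1] template and the exact E2O definition are plausible readings of the abstract, not the authors' published formulas.

```python
import numpy as np

def candidate_score(signal):
    """Correlate the stream with the simplest odd-symmetric template [1, -1];
    large |score| marks candidate differential events."""
    return np.correlate(signal, np.array([1.0, -1.0]), mode="same")

def e2o_index(segment):
    """Ratio of even-part to odd-part energy of a candidate event segment.
    A clean differential (odd-symmetric) event gives a value near zero."""
    rev = segment[::-1]
    even = 0.5 * (segment + rev)
    odd = 0.5 * (segment - rev)
    odd_energy = np.sum(odd ** 2)
    return np.inf if odd_energy == 0 else np.sum(even ** 2) / odd_energy

event = np.array([0.0, 1.0, 2.0, 0.0, -2.0, -1.0, 0.0])  # odd-symmetric pulse
print(e2o_index(event))  # 0.0
```

    In a full pipeline, segments whose correlation score crosses a threshold would be accepted only if their E2O index stays below a quality cutoff, rejecting noise bursts that lack the odd-symmetric signature.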

  3. Automatic ultrasonic breast lesions detection using support vector machine based algorithm

    NASA Astrophysics Data System (ADS)

    Yeh, Chih-Kuang; Miao, Shan-Jung; Fan, Wei-Che; Chen, Yung-Sheng

    2007-03-01

    It is difficult to automatically detect tumors and extract lesion boundaries in ultrasound images due to the variance in shape, the interference from speckle noise, and the low contrast between objects and background. Enhancement of the ultrasonic image therefore becomes a significant task before lesion classification, which was usually done with manual delineation of the tumor boundaries in previous works. In this study, a linear support vector machine (SVM) based algorithm is proposed for ultrasound breast image training and classification, and a disk expansion algorithm is then applied to automatically detect lesion boundaries. A set of sub-images, including smooth and irregular boundaries in tumor objects and in the speckle-noised background, is trained by the SVM algorithm to produce an optimal classification function. Based on this classification model, each pixel within an ultrasound image is classified as either an object or a background pixel. The enhanced binary image highlights the object and suppresses the speckle noise, and can be regarded as a degraded paint character (DPC) image containing closure noise, which is well known in the perceptual organization literature of psychology. An effective scheme for removing closure noise using an iterative disk expansion method was successfully demonstrated in our previous works. The boundary detection of ultrasonic breast lesions can thus be made equivalent to the removal of speckle noise. By applying the disk expansion method to the binary image, we obtain a radius-based image in which the radius at each pixel represents the corresponding disk covering the specific object information. Finally, a signal transmission process is used to search for the complete breast lesion region, so the desired lesion boundary can be determined effectively and automatically. Our algorithm can be performed iteratively until all desired objects are detected. Simulations and clinical images were introduced to

  4. Optimizing convergence rates of alternating minimization reconstruction algorithms for real-time explosive detection applications

    NASA Astrophysics Data System (ADS)

    Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.

    2016-05-01

    X-ray computed tomography reconstruction for medical, security, and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse-view scanners in nuclear medicine, low-data-rate scanners in Positron Emission Tomography (PET) [5, 7, 10], and, more recently, to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed, based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9], and spatially variant image updates. In this paper we describe the development of an alternating minimization (AM) reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15 when comparing images from accelerated and strictly convergent algorithms.

  5. Near linear time algorithm to detect community structures in large-scale networks

    NASA Astrophysics Data System (ADS)

    Raghavan, Usha Nandini; Albert, Réka; Kumara, Soundar

    2007-09-01

    Community detection and analysis is an important methodology for understanding the organization of various real-world networks and has applications in problems as diverse as consensus formation in social communities or the identification of functional modules in biochemical networks. Currently used algorithms that identify the community structures in large-scale real-world networks require a priori information such as the number and sizes of communities or are computationally expensive. In this paper we investigate a simple label propagation algorithm that uses the network structure alone as its guide and requires neither optimization of a predefined objective function nor prior information about the communities. In our algorithm every node is initialized with a unique label and at every step each node adopts the label that most of its neighbors currently have. In this iterative process densely connected groups of nodes form a consensus on a unique label to form communities. We validate the algorithm by applying it to networks whose community structures are known. We also demonstrate that the algorithm takes an almost linear time and hence it is computationally less expensive than what was possible so far.
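    The update rule described above fits in a few lines. A minimal sketch follows (asynchronous updates, random tie-breaking; the stopping rule simply halts a sweep in which no label changes):

```python
import random
from collections import Counter

def label_propagation(adj, seed=0, max_sweeps=100):
    """adj: dict mapping node -> list of neighbors.
    Each node starts with a unique label and repeatedly adopts the label
    held by the majority of its neighbors; ties are broken at random."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}
    order = list(adj)
    for _ in range(max_sweeps):
        rng.shuffle(order)
        changed = False
        for v in order:
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])
            top_count = max(counts.values())
            top = [lab for lab, c in counts.items() if c == top_count]
            if labels[v] not in top:          # move only when outvoted
                labels[v] = rng.choice(top)
                changed = True
        if not changed:                       # consensus reached
            break
    return labels

# Two disjoint triangles: each collapses onto a single community label
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
labels = label_propagation(adj)
print(len({labels[0], labels[1], labels[2]}))  # 1
```

    Each sweep costs O(m) for m edges, which is the source of the near-linear running time reported in the abstract.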

  6. Decomposition-Based Multiobjective Evolutionary Algorithm for Community Detection in Dynamic Social Networks

    PubMed Central

    Ma, Jingjing; Liu, Jie; Ma, Wenping; Gong, Maoguo; Jiao, Licheng

    2014-01-01

    Community structure is one of the most important properties of social networks. In dynamic networks, two conflicting criteria need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm that optimizes these two objectives simultaneously to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithms based on decomposition to simultaneously optimize modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and the temporal cost, respectively. A local search strategy exploiting problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm not only finds community structure and captures community evolution more accurately, but is also more stable than the two algorithms against which it was compared. PMID:24723806

  7. Algorithm development for automated outlier detection and background noise reduction during NIR spectroscopic data processing

    NASA Astrophysics Data System (ADS)

    Abookasis, David; Workman, Jerome J.

    2011-09-01

    This study describes a hybrid processing algorithm for use during calibration/validation of near-infrared spectroscopic signals, based on a spectral cross-correlation and filtering process combined with partial least squares (PLS) regression analysis. In the first step of the algorithm, exceptional signals (outliers) are detected and removed based on spectral correlation criteria we have developed. Then, signal filtering based on direct orthogonal signal correction (DOSC) is applied, before the data are used in the PLS model, to filter out background variance. After outlier screening and DOSC treatment, a PLS calibration model matrix is formed. Once this matrix has been built, it is used to predict the concentration of the unknown samples. Common statistics such as the standard error of cross-validation, mean relative error, and coefficient of determination were computed to assess the fitting ability of the algorithm. Algorithm performance was tested on several hundred blood samples prepared at different hematocrit and glucose levels using blood materials from thirteen healthy human volunteers. During measurements, these samples were subjected to variations in temperature, flow rate, and sample pathlength. Experimental results highlight the potential, applicability, and effectiveness of the proposed algorithm in terms of low error of prediction, high sensitivity and specificity, and low false negative (Type II error) samples.
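    The article's exact spectral correlation criteria are not given in the abstract, so the outlier screen below is only a hypothetical sketch of the idea: correlate each spectrum against a robust reference spectrum (here the per-wavelength median, chosen so the outlier does not skew its own reference) and drop those below a threshold. The threshold value is an assumption.

```python
from statistics import mean, median

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def screen_outliers(spectra, threshold=0.95):
    """Flag spectra whose correlation with the median spectrum falls
    below `threshold` (hypothetical criterion for illustration)."""
    ref = [median(vals) for vals in zip(*spectra)]   # robust reference
    keep, outliers = [], []
    for i, spec in enumerate(spectra):
        (keep if pearson(spec, ref) >= threshold else outliers).append(i)
    return keep, outliers

# Three consistent spectra plus one corrupted record.
spectra = [
    [1.0, 2.0, 3.0, 4.0],
    [1.1, 2.1, 3.0, 4.2],
    [0.9, 1.9, 3.1, 3.9],
    [4.0, 1.0, 4.0, 0.5],   # outlier
]
keep, outliers = screen_outliers(spectra)
```

    The surviving spectra would then feed the DOSC filter and the PLS calibration step.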

  8. Sensing Phase Aberrations behind Lyot Coronagraphs

    NASA Astrophysics Data System (ADS)

    Sivaramakrishnan, Anand; Soummer, Rémi; Pueyo, Laurent; Wallace, J. Kent; Shao, Michael

    2008-11-01

    Direct detection of young extrasolar planets orbiting nearby stars can be accomplished from the ground with extreme adaptive optics and coronagraphy in the near-infrared, as long as this combination can provide an image with a dynamic range of 10^7 after the data are processed. Slowly varying speckles due to residual phase aberrations that are not measured by the primary wave-front sensor are the primary obstacle to achieving such a dynamic range. In particular, non-common optical path aberrations occurring between the wave-front sensor and the coronagraphic occulting spot degrade performance the most. We analyze the passage of both low and high spatial frequency phase ripples, as well as low-order Zernike aberrations, through an apodized pupil Lyot coronagraph in order to demonstrate the way coronagraphic filtering affects various aberrations. We derive the coronagraphically induced cutoff frequency of the filtering and estimate coronagraphic contrast losses due to low-order Zernike aberrations: tilt, astigmatism, defocus, coma, and spherical aberration. Such slowly varying path errors can be measured behind a coronagraph and corrected by a slowly updated optical path delay precompensation or offset asserted on the wave front by the adaptive optics (AO) system. We suggest ways of measuring and correcting all but the lowest spatial frequency aberrations using Lyot plane wave-front data, in spite of the complex interaction between the coronagraph and those mid-spatial frequency aberrations that cause image plane speckles near the edge of the coronagraphic focal plane mask occulter. This investigation provides guidance for next-generation coronagraphic instruments currently under construction.

  9. Optimized Swinging Door Algorithm for Wind Power Ramp Event Detection: Preprint

    SciTech Connect

    Cui, Mingjian; Zhang, Jie; Florita, Anthony R.; Hodge, Bri-Mathias; Ke, Deping; Sun, Yuanzhang

    2015-08-06

    Significant wind power ramp events (WPREs) are those that influence the integration of wind power, and they are a concern to the continued reliable operation of the power grid. As wind power penetration has increased in recent years, so has the importance of wind power ramps. In this paper, an optimized swinging door algorithm (SDA) is developed to improve ramp detection performance. Wind power time series data are segmented by the original SDA, and then all significant ramps are detected and merged through a dynamic programming algorithm. An application of the optimized SDA is provided to ascertain the optimal parameter of the original SDA. Measured wind power data from the Electric Reliability Council of Texas (ERCOT) are used to evaluate the proposed optimized SDA.
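    The original swinging door algorithm that the optimized SDA builds on can be sketched as follows. This is a minimal illustration under assumed parameters (the tolerance `eps` and the ramp slope threshold are hypothetical), and the paper's dynamic-programming merging step is omitted.

```python
def swinging_door(ts, eps):
    """Segment a time series with the swinging door algorithm: extend the
    current segment while every point still fits within +/- eps of a single
    straight line from the anchor; when the upper and lower 'doors' close,
    start a new segment. Returns the indices of segment endpoints."""
    segs = [0]
    i0, hi, lo = 0, float("inf"), float("-inf")
    for i in range(1, len(ts)):
        dx = i - i0
        hi = min(hi, (ts[i] + eps - ts[i0]) / dx)   # upper door slope
        lo = max(lo, (ts[i] - eps - ts[i0]) / dx)   # lower door slope
        if lo > hi:                                  # doors have closed
            segs.append(i - 1)
            i0, hi, lo = i - 1, float("inf"), float("-inf")
            dx = i - i0                              # re-admit current point
            hi = min(hi, (ts[i] + eps - ts[i0]) / dx)
            lo = max(lo, (ts[i] - eps - ts[i0]) / dx)
    segs.append(len(ts) - 1)
    return segs

def ramps(ts, eps, min_slope):
    """Report (start, end) segments whose slope magnitude reaches min_slope."""
    segs = swinging_door(ts, eps)
    return [(a, b) for a, b in zip(segs, segs[1:])
            if abs((ts[b] - ts[a]) / (b - a)) >= min_slope]

power = [0, 1, 2, 3, 4, 4, 4, 4]        # a ramp up, then a flat period
segments = swinging_door(power, 0.1)
```

    On this toy series the algorithm finds one ramp segment (indices 0 to 4) followed by one flat segment; the optimized SDA then tunes the tolerance and merges adjacent segments into significant ramps.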

  10. Optimal algorithm for automatic detection of microaneurysms based on receiver operating characteristic curve

    NASA Astrophysics Data System (ADS)

    Xu, Lili; Luo, Shuqian

    2010-11-01

    Microaneurysms (MAs) are the first manifestations of diabetic retinopathy (DR) as well as an indicator of its progression. Their automatic detection plays a key role in both mass screening and monitoring and is therefore at the core of any system for computer-assisted diagnosis of DR. The algorithm comprises the following stages: candidate detection, aiming at extracting the patterns possibly corresponding to MAs based on a mathematical morphology black top-hat transform; feature extraction, to characterize these candidates; and classification, based on a support vector machine (SVM), to validate MAs. The selection of the feature vector and of the SVM kernel function is critical to the algorithm. We use the receiver operating characteristic (ROC) curve to evaluate the discriminating performance of different feature vectors and different SVM kernel functions. The ROC analysis indicates that the quadratic polynomial SVM with a combination of features as the input shows the best discriminating performance.
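    The ROC-based comparison of classifiers can be summarized by the area under the curve (AUC), computable via the standard rank-sum identity. The candidate scores below are invented for illustration; only the AUC computation itself is standard.

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the probability that a randomly chosen positive example outranks a
    randomly chosen negative one, counting ties as one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical MA-candidate scores from two classifiers on six candidates
# (1 = true microaneurysm, 0 = spurious candidate).
labels = [1, 1, 1, 0, 0, 0]
good   = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]   # separates the classes perfectly
weak   = [0.9, 0.4, 0.2, 0.8, 0.3, 0.1]   # partially overlapping scores
```

    A perfect separation yields AUC = 1.0, so comparing AUC values across feature vectors and kernels gives the ranking the abstract describes.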

  11. A new algorithm for epilepsy seizure onset detection and spread estimation from EEG signals

    NASA Astrophysics Data System (ADS)

    Quintero-Rincón, Antonio; Pereyra, Marcelo; D’Giano, Carlos; Batatia, Hadj; Risk, Marcelo

    2016-04-01

    Appropriate diagnosis and treatment of epilepsy is a major public health issue. Patients suffering from this disease often exhibit different physical manifestations, which result from the synchronous and excessive discharge of a group of neurons in the cerebral cortex. Extracting this information from EEG signals is an important problem in biomedical signal processing. In this work we propose a new algorithm for seizure onset detection and spread estimation in epilepsy patients. The algorithm is based on a multilevel 1-D wavelet decomposition that captures the physiological brain frequency bands, coupled with a generalized Gaussian model. Preliminary experiments with signals from 30 epileptic seizures in 11 subjects suggest that the proposed methodology is a powerful tool for detecting the onset of epileptic seizures and their spread across the brain.

  12. Runway Safety Monitor Algorithm for Single and Crossing Runway Incursion Detection and Alerting

    NASA Technical Reports Server (NTRS)

    Green, David F., Jr.

    2006-01-01

    The Runway Safety Monitor (RSM) is an aircraft based algorithm for runway incursion detection and alerting that was developed in support of NASA's Runway Incursion Prevention System (RIPS) research conducted under the NASA Aviation Safety and Security Program's Synthetic Vision System project. The RSM algorithm provides warnings of runway incursions in sufficient time for pilots to take evasive action and avoid accidents during landings, takeoffs or when taxiing on the runway. The report documents the RSM software and describes in detail how RSM performs runway incursion detection and alerting functions for NASA RIPS. The report also describes the RIPS flight tests conducted at the Reno/Tahoe International Airport (RNO) and the Wallops Flight Facility (WAL) during July and August of 2004, and the RSM performance results and lessons learned from those flight tests.

  13. An Efficient Moving Target Detection Algorithm Based on Sparsity-Aware Spectrum Estimation

    PubMed Central

    Shen, Mingwei; Wang, Jie; Wu, Di; Zhu, Daiyin

    2014-01-01

    In this paper, an efficient direct data domain space-time adaptive processing (STAP) algorithm for moving target detection is proposed, based on the distinct spectrum features of clutter and target signals in the angle-Doppler domain. To reduce the computational complexity, the high-resolution angle-Doppler spectrum is obtained by finding the sparsest coefficients in the angle domain using the reduced-dimension data within each Doppler bin. We then present a knowledge-aided block-size detection algorithm that discriminates between moving targets and clutter based on the extracted spectrum features. The feasibility and effectiveness of the proposed method are validated through both numerical simulations and raw data processing results. PMID:25222035

  15. An effective detection algorithm for region duplication forgery in digital images

    NASA Astrophysics Data System (ADS)

    Yavuz, Fatih; Bal, Abdullah; Cukur, Huseyin

    2016-04-01

    Powerful image editing tools are very common and easy to use these days, making it easy to forge digital images by adding or removing content. To detect forgeries such as region duplication, we present an effective algorithm based on fixed-size block computation and the discrete wavelet transform (DWT). In this approach, the original image is divided into fixed-size blocks, and the wavelet transform is applied for dimension reduction. Each block is then processed by the Fourier transform and represented by circle regions. Four features are extracted from each block. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are detected according to comparison metric results. The experimental results show that the proposed algorithm is computationally efficient due to its fixed-size circle block architecture.
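    The sort-and-compare core of block-based duplicate-region detection can be sketched as below. This is a deliberately simplified exact-match version: the article adds DWT dimension reduction and circle-region Fourier features on top of this basic scheme, and the toy image is invented for the example.

```python
def find_duplicate_blocks(img, bs=2):
    """Slide a bs-by-bs window over the image, lexicographically sort the
    flattened blocks (so identical blocks become adjacent), and report
    pairs of positions whose contents coincide."""
    h, w = len(img), len(img[0])
    blocks = []
    for r in range(h - bs + 1):
        for c in range(w - bs + 1):
            feat = tuple(img[r + i][c + j]
                         for i in range(bs) for j in range(bs))
            blocks.append((feat, (r, c)))
    blocks.sort()                      # duplicates end up adjacent
    dups = []
    for (f1, p1), (f2, p2) in zip(blocks, blocks[1:]):
        if f1 == f2:
            dups.append((p1, p2))
    return dups

# 4x4 grayscale image whose top-left 2x2 patch was copied to the
# bottom-right corner, mimicking a region-duplication forgery.
img = [
    [9, 7, 1, 2],
    [5, 3, 3, 4],
    [1, 6, 9, 7],
    [8, 2, 5, 3],
]
dups = find_duplicate_blocks(img)
```

    Sorting makes the pairwise comparison linear in the number of blocks rather than quadratic, which is why the lexicographic sort is central to methods of this family.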

  16. Quantitative detection of defects based on Markov-PCA-BP algorithm using pulsed infrared thermography technology

    NASA Astrophysics Data System (ADS)

    Tang, Qingju; Dai, Jingmin; Liu, Junyan; Liu, Chunsheng; Liu, Yuanlin; Ren, Chunping

    2016-07-01

    Quantitative detection of the diameter and depth of debonding defects in thermal barrier coatings (TBCs) has been carried out using pulsed infrared thermography. By combining principal component analysis with neural network theory, the Markov-PCA-BP algorithm is proposed, and its principle and realization are described. In the prediction model, the principal components that reflect most of the characteristics of the thermal wave signal serve as the input, and the defect depth and diameter serve as the output. Experimental data from pulsed infrared thermography tests of TBCs with flat-bottom hole defects were selected as the training and testing samples. The resulting Markov-PCA-BP predictive system identified both defect depth and diameter accurately, demonstrating the effectiveness of the proposed method for quantitative detection of debonding defects in TBCs.

  17. Design and evaluation of hyperspectral algorithms for chemical warfare agent detection

    NASA Astrophysics Data System (ADS)

    Manolakis, Dimitris; D'Amico, Francis M.

    2005-11-01

    Remote sensing of chemical warfare agents (CWA) with stand-off hyperspectral imaging sensors has a wide range of civilian and military applications. These sensors exploit the spectral changes in the ambient photon flux produced by either sunlight or the thermal emission of the earth after passage through a region containing the CWA cloud. The purpose of this paper is threefold. First, we discuss a simple phenomenological model for the radiance measured by the sensor in the case of optically thin clouds; this model provides the mathematical framework for the development of optimum algorithms and their analytical evaluation. Second, we identify the fundamental aspects of the data exploitation problem and develop detection algorithms that can be used by different sensors as long as they can provide the required measurements. Finally, we discuss performance metrics for detection, identification, and quantification, and we investigate their dependence on CWA spectral signatures, sensor noise, and background spectral variability.

  18. A lake detection algorithm (LDA) using Landsat 8 data: A comparative approach in glacial environment

    NASA Astrophysics Data System (ADS)

    Bhardwaj, Anshuman; Singh, Mritunjay Kumar; Joshi, P. K.; Snehmani; Singh, Shaktiman; Sam, Lydia; Gupta, R. D.; Kumar, Rajesh

    2015-06-01

    Glacial lakes show a wide range of turbidity. Owing to this, the normalized difference water indices (NDWIs) proposed by many researchers do not give appropriate results for glacial lakes. In addition, the sub-pixel proportion of water and the use of different optical band combinations are also reported to produce varying results. In the wake of the changing climate and increasing GLOFs (glacial lake outburst floods), there is a need to utilize the wide optical and thermal capabilities of Landsat 8 data for the automated detection of glacial lakes. In the present study, the optical and thermal bandwidths of Landsat 8 data were explored, along with the terrain slope parameter derived from the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model Version 2 (ASTER GDEM V2), for detecting and mapping glacial lakes. The algorithm was validated against manually digitized and subsequently field-corrected lake boundaries. The pre-existing NDWIs were also evaluated to determine the superiority and stability of the proposed algorithm for glacial lake detection. Two new parameters, the LDI (lake detection index) and LF (lake fraction), were proposed to assess the performances of the indices. The lake detection algorithm (LDA) performed best for both mixed and pure lake pixels, with no false detections (LDI = 0.98) and very little areal underestimation (LF = 0.73). The coefficient of determination (R2) between the areal extents of lake pixels extracted using the LDA and the actual lake area was very high (0.99). With an understanding of the terrain conditions and slight threshold adjustments, this work can be replicated for any mountainous region of the world.
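    The LDA's own thresholds are not given in the abstract, but the NDWI baseline it is evaluated against is the standard McFeeters index, sketched below. The reflectance values are hypothetical and chosen only to illustrate why water pixels come out positive.

```python
def ndwi(green, nir):
    """Normalized difference water index (McFeeters): water reflects
    strongly in the green band and weakly in the near-infrared, so
    NDWI > 0 is a common water threshold."""
    return (green - nir) / (green + nir)

# Hypothetical surface reflectances for a lake pixel and a rock pixel.
lake_pixel = ndwi(green=0.25, nir=0.05)
rock_pixel = ndwi(green=0.20, nir=0.30)
```

    Highly turbid glacial lakes push the green/NIR contrast toward zero, which is exactly the failure mode that motivates adding thermal and slope information in the LDA.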

  19. Detection processing of complex beam-former output data: a new dispersion-based reconditioning algorithm

    NASA Astrophysics Data System (ADS)

    McDonald, Robert J.; Wilbur, JoEllen

    1996-05-01

    Detection processing of the Toroidal Volume Search Sonar beamformer output prior to image formation is used to increase the signal-to-reverberation ratio. The energy detector and sliding matched filter perform adequately at close range but degrade considerably when reverberation begins to dominate. The skewness matched filter offers some improvement. A dispersion-based reconditioning algorithm, introduced in this paper, is shown to provide considerable improvement in the signal-to-reverberation ratio at far range.

  20. Thermographic techniques and adapted algorithms for automatic detection of foreign bodies in food

    NASA Astrophysics Data System (ADS)

    Meinlschmidt, Peter; Maergner, Volker

    2003-04-01

    At the moment, foreign bodies in food are detected mainly by mechanical and optical methods as well as ultrasonic techniques, and are then removed from further processing. These techniques detect a large portion of the foreign bodies owing to their different mass (mechanical sieving), colour (optical methods) or surface density (ultrasonic detection). Despite these numerous methods, a considerable portion of the foreign bodies remains undetected. To recognise the materials still undetected, a complementary detection method would be desirable that removes from the production process the foreign bodies not registered by the aforementioned methods. In a project with 13 partners from the food industry, the Fraunhofer-Institut für Holzforschung (WKI) and the Technische Universität are trying to adapt thermography to the detection of foreign bodies in food. After initial tests proved very promising for the differentiation of foodstuffs and foreign bodies, more detailed investigations were carried out to develop suitable algorithms for automatic detection. To achieve not only visual detection of foreign bodies but also automatic detection under production conditions, extensive experience in image processing and pattern recognition is exploited. Results for the detection of foreign bodies will be presented at the conference, showing the respective advantages and disadvantages of grey-level, statistical and morphological image processing techniques.

  1. Study of Host-Based Cyber Attack Precursor Symptom Detection Algorithm

    NASA Astrophysics Data System (ADS)

    Song, Jae-Gu; Kim, Jong Hyun; Seo, Dongil; Soh, Wooyoung; Kim, Seoksoo

    Botnet-based cyber attacks cause large-scale damage with increasingly intelligent tools, which has called for varied research on bot detection. In this study, we developed a method of monitoring the behaviors of host-based processes from the point at which a bot herder attempts to create zombie PCs, detecting cyber attack precursor symptoms. We designed an algorithm that identifies the characteristics of a botnet attempting to launch malicious behaviors by means of signature registration, covering process/reputation/network traffic/packet/source analysis and a white list, as a measure to respond to bots at the end point.

  2. Aberrant Gene Expression in Humans

    PubMed Central

    Yang, Ence; Ji, Guoli; Brinkmeyer-Langford, Candice L.; Cai, James J.

    2015-01-01

    Gene expression as an intermediate molecular phenotype has been a focus of research interest. In particular, studies of expression quantitative trait loci (eQTL) have offered promise for understanding gene regulation through the discovery of genetic variants that explain variation in gene expression levels. Existing eQTL methods are designed for assessing the effects of common variants, but not rare variants. Here, we address the problem by establishing a novel analytical framework for evaluating the effects of rare or private variants on gene expression. Our method starts from the identification of outlier individuals that show markedly different gene expression from the majority of a population, and then reveals the contributions of private SNPs to the aberrant gene expression in these outliers. Using population-scale mRNA sequencing data, we identify outlier individuals using a multivariate approach. We find that outlier individuals are more readily detected with respect to gene sets that include genes involved in cellular regulation and signal transduction, and less likely to be detected with respect to the gene sets with genes involved in metabolic pathways and other fundamental molecular functions. Analysis of polymorphic data suggests that private SNPs of outlier individuals are enriched in the enhancer and promoter regions of corresponding aberrantly-expressed genes, suggesting a specific regulatory role of private SNPs, while the commonly-occurring regulatory genetic variants (i.e., eQTL SNPs) show little evidence of involvement. Additional data suggest that non-genetic factors may also underlie aberrant gene expression. Taken together, our findings advance a novel viewpoint relevant to situations wherein common eQTLs fail to predict gene expression when heritable, rare inter-individual variation exists. The analytical framework we describe, taking into consideration the reality of differential phenotypic robustness, may be valuable for investigating

  4. Dramatyping: a generic algorithm for detecting reasonable temporal correlations between drug administration and lab value alterations.

    PubMed

    Newe, Axel

    2016-01-01

    According to the World Health Organization, one of the criteria for the standardized assessment of case causality in adverse drug reactions is the temporal relationship between the intake of a drug and the occurrence of a reaction or a laboratory test abnormality. This article presents and describes an algorithm for the detection of a reasonable temporal correlation between the administration of a drug and the alteration of a laboratory value course. The algorithm is designed to process normalized lab values and is therefore universally applicable. It has a sensitivity of 0.932 for the detection of lab value courses that show changes in temporal correlation with the administration of a drug and it has a specificity of 0.967 for the detection of lab value courses that show no changes. Therefore, the algorithm is appropriate to screen the data of electronic health records and to support human experts in revealing adverse drug reactions. A reference implementation in Python programming language is available. PMID:27042396
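    The published algorithm (with its Python reference implementation) analyses full normalized lab value courses; the sketch below is only a hypothetical illustration of the core idea, comparing the mean normalized lab value in windows just before and just after administration. The window size and shift threshold are invented parameters.

```python
from statistics import mean

def temporally_correlated(lab, admin_idx, window=3, min_shift=0.2):
    """Hypothetical sketch: flag a drug / lab-value pair when the mean
    normalized lab value shifts by at least `min_shift` between the
    windows immediately before and after drug administration."""
    before = lab[max(0, admin_idx - window):admin_idx]
    after = lab[admin_idx:admin_idx + window]
    if not before or not after:
        return False                    # not enough data to judge
    return abs(mean(after) - mean(before)) >= min_shift

# Normalized lab values per day; the drug is given on day 3 and the
# value course drops afterwards.
course = [1.0, 1.0, 1.1, 0.5, 0.4, 0.4]
```

    A screen like this over electronic health records yields candidate drug-lab correlations, which human experts then review for genuine adverse drug reactions.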

  5. Algorithm fusion in forward-looking long-wave infrared imagery for buried explosive hazard detection

    NASA Astrophysics Data System (ADS)

    Anderson, D. T.; Keller, James M.; Sjahputera, Ozy

    2011-06-01

    In this article, we propose a method to fuse multiple algorithms in a long-wave infrared (LWIR) system in the context of forward-looking buried explosive hazard detection. A pre-screener is applied first: an ensemble of local RX filters and mean shift clustering in UTM space. Hit correspondence is then performed with an algorithm based on corner detection, local binary patterns (LBP), multiple instance learning (MIL) and mean shift clustering in UTM space. Next, features are extracted from image chips taken from UTM confidence maps, based on maximally stable extremal regions (MSERs) and Gaussian mixture models (GMMs). These sources are then fused using an ordered weighted average (OWA). While this fusion approach has yet to improve the overall positive detection rate in LWIR, we do demonstrate false alarm reduction. Targets that are not detected by our system are also not detected by a human under visual inspection. Experimental results are shown based on field data measurements from a US Army test site.

  6. Target detection in diagnostic ultrasound: Evaluation of a method based on the CLEAN algorithm.

    PubMed

    Masoom, Hassan; Adve, Raviraj S; Cobbold, Richard S C

    2013-02-01

    A technique is proposed for the detection of abnormalities (targets) in ultrasound images using little or no a priori information and requiring little operator intervention. The scheme is a combination of the CLEAN algorithm, originally proposed for radio astronomy, and constant false alarm rate (CFAR) processing, as developed for use in radar systems. The CLEAN algorithm identifies areas in the ultrasound image that stand out above a threshold in relation to the background; CFAR techniques allow for an adaptive, semi-automated selection of the threshold. Neither appears to have been previously used for target detection in ultrasound images, and never together in any context. As a first step towards assessing the potential of this method, we used a widely used method of simulating B-mode images (Field II). We assumed the use of a 256 element linear array operating at 3.0 MHz into a water-like medium containing a density of point scatterers sufficient to simulate a background of fully developed speckle. Spherical targets with diameters ranging from 0.25 to 6.0 mm and contrasts ranging from 0 to 12 dB relative to the background were used as test objects. Using a contrast-detail analysis, the probability of detection curves indicate these targets can be consistently detected within a speckle background. Our results indicate that the method has considerable promise for the semi-automated detection of abnormalities with diameters greater than a few millimeters, depending on the contrast. PMID:22853949
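    The adaptive thresholding idea behind CFAR can be sketched with a one-dimensional cell-averaging variant. This is a generic textbook form, not the paper's exact processing chain; the guard/training sizes and scale factor are assumed parameters.

```python
def ca_cfar(signal, guard=1, train=3, scale=4.0):
    """One-dimensional cell-averaging CFAR: for each cell under test, the
    detection threshold is `scale` times the mean of the training cells
    on both sides, skipping the guard cells adjacent to the test cell.
    Returns the indices of detections."""
    hits = []
    n = len(signal)
    for i in range(n):
        cells = []
        for j in range(i - guard - train, i - guard):       # left training
            if 0 <= j < n:
                cells.append(signal[j])
        for j in range(i + guard + 1, i + guard + train + 1):  # right training
            if 0 <= j < n:
                cells.append(signal[j])
        if cells and signal[i] > scale * sum(cells) / len(cells):
            hits.append(i)
    return hits

# Uniform background (speckle/clutter level 1) with a strong echo at index 6.
echo = [1, 1, 1, 1, 1, 1, 20, 1, 1, 1, 1, 1]
hits = ca_cfar(echo)
```

    Because the threshold tracks the local background estimate, the false alarm rate stays roughly constant as the background level varies, which is what makes the threshold selection semi-automated.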

  7. A Fast Inspection of Tool Electrode and Drilling Depth in EDM Drilling by Detection Line Algorithm

    PubMed Central

    Huang, Kuo-Yi

    2008-01-01

    The purpose of this study was to develop a novel measurement method using a machine vision system. Besides using image processing techniques, the proposed system employs a detection line algorithm that detects the tool electrode length and drilling depth of a workpiece accurately and effectively. Different boundaries of areas on the tool electrode are defined: a baseline between base and normal areas, an ND-line between normal and drilling areas (accumulating carbon area), and a DD-line between the drilling area and the dielectric fluid droplet on the electrode tip. Accordingly, image processing techniques are employed to extract a tool electrode image, and the centroid, eigenvector, and principal axis of the tool electrode are determined. The developed detection line algorithm (DLA) is then used to detect the baseline, ND-line, and DD-line along the direction of the principal axis. Finally, the tool electrode length and drilling depth of the workpiece are estimated via the detected baseline, ND-line, and DD-line. Experimental results show good accuracy and efficiency in estimating the tool electrode length and drilling depth under different conditions. Hence, this research may provide a reference for industrial application in EDM drilling measurement.

  8. Aberration correction during real time in vivo imaging of bone marrow with sensorless adaptive optics confocal microscope

    NASA Astrophysics Data System (ADS)

    Wang, Zhibin; Wei, Dan; Wei, Ling; He, Yi; Shi, Guohua; Wei, Xunbin; Zhang, Yudong

    2014-08-01

    We have demonstrated adaptive correction of specimen-induced aberration during in vivo imaging of mouse bone marrow vasculature with confocal fluorescence microscopy. The adaptive optics system used a wavefront-sensorless correction scheme based on the stochastic parallel gradient descent (SPGD) algorithm. Using image sharpness as the optimization metric, aberration correction was performed over Zernike polynomial modes. The experimental results revealed improved signal and resolution, leading to substantially enhanced image contrast with aberration correction. The image quality of vessels at 38- and 75-μm depth increased threefold and twofold, respectively. The corrections allowed us to detect clearer bone marrow vasculature structures at greater contrast and to improve the signal-to-noise ratio.
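    The SPGD optimization loop can be sketched as follows. In the real instrument the metric is image sharpness measured on the camera after applying the Zernike coefficients to the deformable mirror; here a toy quadratic "sharpness" function with an invented optimum stands in for it, and the gain, perturbation amplitude, and iteration count are assumptions.

```python
import random

def spgd(metric, n_modes, iters=300, perturb=0.1, gain=1.0, seed=1):
    """Stochastic parallel gradient descent: apply random +/- perturbations
    to all mode coefficients simultaneously, measure the resulting change
    in the metric, and step every coefficient in the direction that
    increased the metric."""
    rng = random.Random(seed)
    u = [0.0] * n_modes
    for _ in range(iters):
        d = [perturb * rng.choice((-1.0, 1.0)) for _ in range(n_modes)]
        dj = metric([a + b for a, b in zip(u, d)]) - \
             metric([a - b for a, b in zip(u, d)])
        u = [a + gain * b * dj for a, b in zip(u, d)]   # gradient estimate step
    return u

# Toy sharpness metric peaking at a hypothetical aberration setting.
target = [0.5, -0.3, 0.2]

def sharpness(u):
    return -sum((a - b) ** 2 for a, b in zip(u, target))

u0 = [0.0, 0.0, 0.0]
u_final = spgd(sharpness, 3)
```

    The appeal of SPGD for sensorless adaptive optics is that it needs only two metric evaluations per iteration regardless of how many modes are corrected in parallel.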

  9. A Multi-Scale Algorithm for Graffito Advertisement Detection from Images of Real Estate

    NASA Astrophysics Data System (ADS)

    Yang, Jun; Zhu, Shi-Jiao

    There is a significant need to detect and extract graffito advertisements embedded in housing images automatically. However, separating the advertisement region well is difficult, since housing images generally have complex backgrounds. In this paper, a detection algorithm that uses multi-scale Gabor filters to identify graffito regions is proposed. Firstly, multi-scale Gabor filters with different orientations are applied to the housing images; the approach then uses these frequency data to find likely graffito regions via the relationship between the different channels, exploiting the filter bank to solve the detection problem with low computational effort. Lastly, the method is tested on several real-estate images embedded with graffito advertisements to verify its robustness and efficiency. The experiments demonstrate that graffito regions can be detected quite well.

  10. An Improved Topology-Potential-Based Community Detection Algorithm for Complex Network

    PubMed Central

    Wang, Zhixiao; Zhao, Ya; Chen, Zhaotong; Niu, Qiang

    2014-01-01

    Topology potential theory is a new community detection theory for complex networks, which divides a network into communities by spreading outward from each local maximum potential node. At present, almost all topology-potential-based community detection methods ignore differences between nodes and assume that all nodes have the same mass. This hypothesis leads to inaccuracy in the topology potential calculation and thus decreases the precision of community detection. Inspired by the PageRank algorithm, this paper puts forward a novel mass calculation method for complex network nodes. A node's mass obtained by our method effectively reflects its importance and influence in the network: the more important the node, the bigger its mass. Simulation results showed that, after taking node mass into consideration, the topology potential of a node is more accurate, the distribution of topology potential is more reasonable, and the results of community detection are more precise. PMID:24600319
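    The PageRank score that inspires the node-mass calculation can be computed by plain power iteration. This sketch shows standard PageRank only; how the paper maps the score onto the mass term of the topology potential is not specified in the abstract, and the example graph is invented.

```python
def pagerank(adj, damping=0.85, iters=100):
    """Standard PageRank by power iteration on a directed graph given as
    {node: [out-neighbors]}. Dangling nodes spread their rank evenly."""
    n = len(adj)
    rank = {v: 1.0 / n for v in adj}
    for _ in range(iters):
        nxt = {v: (1.0 - damping) / n for v in adj}   # teleportation term
        for v, out in adj.items():
            if out:
                share = damping * rank[v] / len(out)
                for w in out:
                    nxt[w] += share
            else:                                     # dangling node
                for w in adj:
                    nxt[w] += damping * rank[v] / n
        rank = nxt
    return rank

# Small directed graph: nodes 1-3 all point at the "hub" node 0,
# which points back only at node 1.
adj = {0: [1], 1: [0], 2: [0], 3: [0]}
mass = pagerank(adj)
```

    The hub accumulates the largest score, so using such a score as node mass makes influential nodes weigh more in the topology potential field.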

  11. Design of Cyber Attack Precursor Symptom Detection Algorithm through System Base Behavior Analysis and Memory Monitoring

    NASA Astrophysics Data System (ADS)

    Jung, Sungmo; Kim, Jong Hyun; Cagalaban, Giovanni; Lim, Ji-Hoon; Kim, Seoksoo

    Recently, botnet-based cyber attacks, including spam mail and DDoS attacks, have increased sharply, posing a fatal threat to Internet services. At present, antivirus businesses give top priority to detecting malicious code in the shortest time possible (Lv.2), based on the graph relating the spread of malicious code to time; this allows detection only after the malicious code occurs, and even early detection cannot prevent the malicious code from occurring in the first place. Thus, we have developed an algorithm that detects precursor symptoms at Lv.1, to prevent a cyber attack that uses the evasion method of an 'executing-environment-aware attack', by analyzing system base behaviors and monitoring memory.

  12. A New MANET wormhole detection algorithm based on traversal time and hop count analysis.

    PubMed

    Karlsson, Jonny; Dooley, Laurence S; Pulkkis, Göran

    2011-01-01

    As demand increases for ubiquitous network facilities, infrastructure-less and self-configuring systems like Mobile Ad hoc Networks (MANET) are gaining popularity. MANET routing security, however, is one of the most significant challenges to wide-scale adoption, with wormhole attacks being an especially severe MANET routing threat. This is because wormholes are able to disrupt a major component of network traffic, while concomitantly being extremely difficult to detect. This paper introduces a new wormhole detection paradigm based upon Traversal Time and Hop Count Analysis (TTHCA), which in comparison to existing algorithms consistently affords superior detection performance, allied with low false positive rates for all wormhole variants. Simulation results confirm that the TTHCA model exhibits robust wormhole route detection in various network scenarios, while incurring only a small network overhead. This feature makes TTHCA an attractive choice for MANET environments, which generally comprise devices, such as wireless sensors, that possess limited processing capability. PMID:22247657
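
    The core intuition behind combining traversal time with hop count can be sketched as follows (a toy illustration, not the TTHCA protocol itself; the 2 ms per-hop budget is an invented threshold): a wormhole tunnel adds latency that the advertised hop count cannot account for.

```python
def wormhole_suspect(rtt_s, hop_count, per_hop_budget_s=0.002):
    """Flag a route whose mean per-hop packet traversal time exceeds what a
    legitimate single wireless hop could plausibly take.  The 2 ms budget
    is illustrative only, not a value from the paper."""
    return rtt_s / hop_count > per_hop_budget_s

# A 3-hop route reporting a 30 ms round trip is suspicious: a hidden
# wormhole tunnel is adding latency the hop count does not explain.
suspect = wormhole_suspect(0.030, 3)
normal = wormhole_suspect(0.003, 3)
```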
  14. Enhancing nuclear quadrupole resonance (NQR) signature detection leveraging interference suppression algorithms

    NASA Astrophysics Data System (ADS)

    DeBardelaben, James A.; Miller, Jeremy K.; Myrick, Wilbur L.; Miller, Joel B.; Gilbreath, G. Charmaine; Bajramaj, Blerta

    2012-06-01

    Nuclear quadrupole resonance (NQR) is a radio frequency (RF) magnetic spectroscopic technique that has been shown to detect and identify a wide range of explosive materials containing quadrupolar nuclei. The NQR response signal provides a unique signature of the material of interest. The signal is, however, very weak and can be masked by non-stationary RF interference (RFI) and thermal noise, limiting detection distance. In this paper, we investigate the bounds on the NQR detection range for ammonium nitrate. We leverage a low-cost RFI data acquisition system composed of inexpensive B-field sensing and commercial-off-the-shelf (COTS) software-defined radios (SDR). Using collected data as RFI reference signals, we apply adaptive filtering algorithms to mitigate RFI and enable NQR detection techniques to approach theoretical range bounds in tactical environments.
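
    A minimal sketch of the adaptive-filtering step (illustrative; the paper's filters and parameters are not specified here): a least-mean-squares (LMS) canceller learns how the reference RFI channel leaks into the primary channel and subtracts its estimate, leaving the weak signal of interest.

```python
import math
import random

def lms_cancel(primary, reference, taps=4, mu=0.01):
    """Least-mean-squares adaptive filter: predict the RFI component of
    `primary` from the `reference` sensor and subtract it, leaving an
    estimate of the weak NQR response."""
    w = [0.0] * taps
    cleaned = []
    for n in range(len(primary)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        rfi_hat = sum(wi * xi for wi, xi in zip(w, x))
        e = primary[n] - rfi_hat              # error signal = cleaned sample
        cleaned.append(e)
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, x)]
    return cleaned

random.seed(0)
n = 2000
rfi = [math.sin(0.3 * i) + 0.2 * random.gauss(0, 1) for i in range(n)]   # reference RFI
nqr = [0.05 * math.sin(0.02 * i) for i in range(n)]                      # weak tone of interest
primary = [s + 0.8 * r for s, r in zip(nqr, rfi)]                        # masked measurement
cleaned = lms_cancel(primary, rfi)
```

    After convergence the residual interference power in `cleaned` is far below that of `primary`, which is what pushes detection range back toward its theoretical bound.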

  15. Burst detection in district metering areas using a data driven clustering algorithm.

    PubMed

    Wu, Yipeng; Liu, Shuming; Wu, Xue; Liu, Youfei; Guan, Yisheng

    2016-09-01

    This paper describes a novel methodology for burst detection in a water distribution system. The proposed method has two stages. In the first stage, a clustering algorithm is employed for outlier detection, while the second stage identifies the presence of bursts. An important feature of this method is that the data analysis draws on multiple flow meters whose measurements vary simultaneously within a district metering area (DMA). Moreover, the clustering-based method automatically copes with non-stationary conditions in the historical data; that is, the method requires no prior data selection process. An example application confirms that relatively large bursts (simulated by flushing) of short duration can be detected effectively. Notably, the method has a low false positive rate compared with previous studies, and the detected abnormal water usage coincides with weather changes, showing great promise for real application to multi-inlet and multi-outlet DMAs. PMID:27176651
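
    A minimal distance-based stand-in for the clustering stage (the paper's actual clustering algorithm is not reproduced here, and `k` and `factor` are illustrative choices): a daily flow vector, with one entry per DMA meter, is flagged when it sits far from all of its neighbours.

```python
def burst_outliers(flows, k=3, factor=3.0):
    """Flag a flow vector (one entry per DMA flow meter) whose mean
    distance to its k nearest neighbours exceeds `factor` times the
    median such distance across all days."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

    knn_dist = []
    for i, v in enumerate(flows):
        ds = sorted(dist(v, u) for j, u in enumerate(flows) if j != i)
        knn_dist.append(sum(ds[:k]) / k)
    typical = sorted(knn_dist)[len(knn_dist) // 2]       # median
    return [i for i, d in enumerate(knn_dist) if d > factor * typical]

# Five ordinary days and one burst day across three flow meters.
flows = [(10.1, 12.0, 8.2), (9.9, 11.8, 8.0), (10.0, 12.2, 7.9),
         (10.2, 11.9, 8.1), (9.8, 12.1, 8.0), (25.0, 30.0, 20.0)]
suspects = burst_outliers(flows)
```

    Because the "typical" distance is re-estimated from the data itself, the detector adapts to non-stationary demand without a manual data selection step, mirroring the property the abstract highlights.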

  16. Improvement for detection of microcalcifications through clustering algorithms and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Quintanilla-Domínguez, Joel; Ojeda-Magaña, Benjamín; Marcano-Cedeño, Alexis; Cortina-Januchs, María G.; Vega-Corona, Antonio; Andina, Diego

    2011-12-01

    A new method for detecting microcalcifications in regions of interest (ROIs) extracted from digitized mammograms is proposed. The top-hat transform is a technique based on mathematical morphology operations and, in this paper, is used to perform contrast enhancement of the microcalcifications. To improve microcalcification detection, a novel image sub-segmentation approach based on the possibilistic fuzzy c-means algorithm is used. From the original ROIs, window-based features, such as the mean and standard deviation, were extracted and used as the input vector to a classifier based on an artificial neural network, which identifies patterns belonging to microcalcifications and to healthy tissue. Our results show that the proposed method is a good alternative for automatically detecting microcalcifications, because this stage is an important part of early breast cancer detection.
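
    The white top-hat transform at the heart of the contrast-enhancement step can be sketched in one dimension (a pure-Python illustration with a flat structuring element; the paper works on 2-D ROIs): bright features narrower than the structuring window survive, while the smooth breast-tissue background is removed.

```python
def white_tophat(profile, width):
    """Grayscale white top-hat: the signal minus its morphological opening
    (erosion then dilation with a flat window of `width` samples).  Bright
    features narrower than the window survive; the smooth background and
    broad structures are suppressed."""
    half = width // 2
    def erode(s):
        return [min(s[max(0, i - half):i + half + 1]) for i in range(len(s))]
    def dilate(s):
        return [max(s[max(0, i - half):i + half + 1]) for i in range(len(s))]
    opened = dilate(erode(profile))
    return [v - o for v, o in zip(profile, opened)]

# A slowly rising background with one narrow bright spot at index 10,
# standing in for a microcalcification on varying tissue intensity.
profile = [i * 0.1 for i in range(20)]
profile[10] += 5.0
enhanced = white_tophat(profile, width=7)
```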

  17. How Small Can Impact Craters Be Detected at Large Scale by Automated Algorithms?

    NASA Astrophysics Data System (ADS)

    Bandeira, L.; Machado, M.; Pina, P.; Marques, J. S.

    2013-12-01

    The last decade has seen widespread publication of crater detection algorithms (CDA) with increasing detection performance. The adaptive nature of some of these algorithms [1] has permitted their use in the construction or update of global catalogues for Mars and the Moon. Nevertheless, the smallest craters detected in these situations by CDA are about 10 pixels in diameter (about 2 km in MOC-WA images) [2], or down to 16 pixels (200 m) in HRSC imagery [3]. The availability of Martian images with metric (HRSC and CTX) and centimetric (HiRISE) resolutions is unveiling craters not perceived before, so automated approaches seem a natural way of detecting the myriad of these structures. In this study we present our efforts, based on our previous algorithms [2-3] and new training strategies, to push the automated detection of craters to a dimensional threshold as close as possible to the detail that can be perceived in the images, something that has not yet been addressed in a systematic way. The approach is based on selecting candidate regions of the images (portions that contain crescent highlight and shadow shapes indicating the possible presence of a crater) using mathematical morphology operators (connected operators of different sizes), then extracting texture features (Haar-like) and classifying them with AdaBoost into crater and non-crater. This is a supervised approach, meaning that a training phase, in which manually labelled samples are provided, is necessary so the classifier can learn what crater and non-crater structures look like. The algorithm is intensively tested on Martian HiRISE images from different locations on the planet, in order to cover the largest range of surface types from the geological point of view (different ages and crater densities) and also from the imaging or textural perspective (different degrees of smoothness/roughness). The quality of the detections obtained is clearly dependent on the dimension of the craters
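
    The Haar-like features mentioned above are cheap to evaluate thanks to the integral-image trick; a minimal sketch (illustrative, not the paper's feature set) shows a two-rectangle feature picking up the bright/dark contrast that a sunlit crater rim and its shadow produce.

```python
def integral_image(img):
    """Summed-area table with a zero border: ii[y][x] = sum of img[:y][:x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        run = 0
        for x in range(w):
            run += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + run
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of the h-by-w rectangle whose top-left corner is (y, x),
    in four lookups regardless of rectangle size."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_left_right(ii, y, x, h, w):
    """Two-rectangle Haar-like feature: bright-left minus dark-right, the
    kind of highlight/shadow contrast a low sun angle gives a crater rim."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)

# Bright left half, dark right half: a strong positive feature response.
img = [[9, 9, 1, 1] for _ in range(4)]
ii = integral_image(img)
feature = haar_left_right(ii, 0, 0, 4, 4)
```

    In a full detector, thousands of such features over candidate windows feed the AdaBoost stage, which selects the most discriminative ones during training.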

  18. DynPeak: An Algorithm for Pulse Detection and Frequency Analysis in Hormonal Time Series

    PubMed Central

    Vidal, Alexandre; Zhang, Qinghua; Médigue, Claire; Fabre, Stéphane; Clément, Frédérique

    2012-01-01

    The endocrine control of the reproductive function is often studied through the analysis of luteinizing hormone (LH) pulsatile secretion by the pituitary gland. Whereas measurements in the cavernous sinus cumulate anatomical and technical difficulties, LH levels can easily be assessed from jugular blood. However, plasma levels result from a convolution process due to clearance effects when LH enters the general circulation. Simultaneous measurements comparing LH levels in the cavernous sinus and jugular blood have revealed clear differences in pulse shape, amplitude and baseline. Besides, experimental sampling occurs at a relatively low frequency (typically every 10 min) with respect to the highest LH release frequency (one pulse per hour), and the resulting LH measurements are corrupted by both experimental and assay errors. As a result, the pattern of plasma LH may not be clearly pulsatile. Yet, reliable information on the InterPulse Intervals (IPI) is a prerequisite for studying precisely the steroid feedback exerted at the pituitary level. Hence, there is a real need for robust IPI detection algorithms. In this article, we present an algorithm for the monitoring of LH pulse frequency, drawing both on the available endocrinological knowledge of the LH pulse (shape and duration with respect to the frequency regime) and on synthetic LH data generated by a simple model. We use the synthetic data to clarify some basic notions underlying our algorithmic choices. We focus on explaining how the sampling process drastically affects the original pattern of secretion, and especially the amplitude of the detectable pulses. We then describe the algorithm in detail and run it on different sets of both synthetic and experimental LH time series. We further comment on how to diagnose possible outliers in the series of IPIs, which is the main output of the algorithm. PMID:22802933
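
    The pulse-to-IPI pipeline can be sketched with a crude nadir-to-peak criterion (a toy illustration in the spirit of DynPeak, not the published algorithm): peaks must rise a minimum amount above the lowest value seen since the previous pulse, and peak times are then differenced into IPIs.

```python
def detect_pulses(series, min_rise):
    """Mark sample i as a pulse peak when it is a local maximum rising at
    least `min_rise` above the lowest value seen since the previous peak."""
    peaks, trough = [], series[0]
    for i in range(1, len(series) - 1):
        trough = min(trough, series[i])
        if series[i - 1] < series[i] >= series[i + 1] and series[i] - trough >= min_rise:
            peaks.append(i)
            trough = series[i]      # reset the running nadir after each pulse
    return peaks

def interpulse_intervals(peaks, dt_minutes):
    """Convert peak indices into InterPulse Intervals (IPIs) in minutes."""
    return [(b - a) * dt_minutes for a, b in zip(peaks, peaks[1:])]

# Synthetic LH-like series sampled every 10 min: pulses 120 min apart.
lh = [1.0] * 40
for i in (5, 17, 29):
    lh[i] = 6.0
peaks = detect_pulses(lh, min_rise=3.0)
ipis = interpulse_intervals(peaks, dt_minutes=10)
```

    The 10 min spacing of the samples immediately quantizes the IPIs, illustrating the abstract's point that the sampling process constrains what the detector can resolve.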

  19. Benchmark for Peak Detection Algorithms in Fiber Bragg Grating Interrogation and a New Neural Network for its Performance Improvement

    PubMed Central

    Negri, Lucas; Nied, Ademir; Kalinowski, Hypolito; Paterno, Aleksander

    2011-01-01

    This paper presents a benchmark for peak detection algorithms employed in fiber Bragg grating spectrometric interrogation systems. The accuracy, precision, and computational performance of currently used algorithms and those of a newly proposed artificial neural network algorithm are compared. Centroid and Gaussian fitting algorithms are shown to have the highest precision but produce systematic errors that depend on the FBG refractive index modulation profile. The proposed neural network displays relatively good precision with reduced systematic errors and improved computational performance when compared to other networks. Additionally, suitable algorithms may be chosen using the general guidelines presented. PMID:22163806
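
    The centroid algorithm referenced in the benchmark is simple enough to sketch directly (an illustrative version; thresholds and spectra here are invented): the Bragg wavelength is estimated as the intensity-weighted mean of the spectral samples above a threshold.

```python
import math

def centroid_peak(wavelengths, intensities, threshold):
    """Centroid peak detection: the Bragg wavelength estimate is the
    intensity-weighted mean of all spectral samples above `threshold`."""
    pts = [(w, s) for w, s in zip(wavelengths, intensities) if s >= threshold]
    total = sum(s for _, s in pts)
    return sum(w * s for w, s in pts) / total

# Noise-free Gaussian reflection peak centred at 1550.00 nm,
# sampled every 0.01 nm over a 1 nm window.
wl = [1549.5 + 0.01 * k for k in range(101)]
spectrum = [math.exp(-((w - 1550.0) / 0.1) ** 2) for w in wl]
bragg = centroid_peak(wl, spectrum, threshold=0.1)
```

    On a symmetric, noise-free peak the centroid is unbiased; the systematic errors the paper reports arise when the real FBG reflection profile is asymmetric.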

  20. Development of an algorithm for automatic detection and rating of squeak and rattle events

    NASA Astrophysics Data System (ADS)

    Chandrika, Unnikrishnan Kuttan; Kim, Jay H.

    2010-10-01

    A new algorithm for automatic detection and rating of squeak and rattle (S&R) events was developed. The algorithm utilizes the perceived transient loudness (PTL), which approximates the human perception of a transient noise. First, instantaneous specific loudness time histories are calculated over the 1-24 bark range by applying the analytic wavelet transform and Zwicker loudness transform to the recorded noise. Transient specific loudness time histories are then obtained by removing estimated contributions of the background noise from the instantaneous specific loudness time histories. These transient specific loudness time histories are summed to obtain the transient loudness time history. Finally, the PTL time history is obtained by applying Glasberg and Moore temporal integration to the transient loudness time history. Detection of S&R events utilizes the PTL time history obtained by summing only the 18-24 bark components, to take advantage of the high signal-to-noise ratio in the high frequency range. An S&R event is identified when the value of the PTL time history exceeds a detection threshold pre-determined by a jury test. The maximum value of the PTL time history is used for rating of S&R events. Another jury test showed that the method performs much better if the PTL time history obtained by summing all frequency components is used; therefore, rating of S&R events utilizes this modified PTL time history. Two additional jury tests were conducted to validate the developed detection and rating methods. The algorithm developed in this work will enable automatic detection and rating of S&R events with good accuracy and a minimal possibility of false alarms.
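
    The final detection-and-rating step reduces to threshold crossing on the PTL time history; a minimal sketch (with an invented threshold and toy data, not the jury-derived values) groups contiguous exceedances into events and rates each by its peak.

```python
def detect_events(ptl, threshold):
    """Group contiguous samples where the PTL time history exceeds the
    detection threshold into events; each event is rated by its peak PTL
    value.  Returns (start_index, end_index, rating) triples."""
    events, start = [], None
    for i, v in enumerate(ptl + [threshold]):     # sentinel flushes the last event
        if v > threshold and start is None:
            start = i
        elif v <= threshold and start is not None:
            events.append((start, i - 1, max(ptl[start:i])))
            start = None
    return events

# Two transient exceedances of an illustrative threshold of 3.
ptl = [0, 1, 5, 7, 4, 0, 0, 6, 2, 0]
events = detect_events(ptl, threshold=3)
```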

  1. A novel ROC approach for performance evaluation of target detection algorithms

    NASA Astrophysics Data System (ADS)

    Ganapathy, Priya; Skipper, Julie A.

    2007-04-01

    Receiver operator characteristic (ROC) analysis is an emerging automated target recognition system performance assessment tool. The ROC metric, area under the curve (AUC), is a universally accepted measure of classification accuracy. In the presented approach, the detection algorithm output, i.e., a response plane (RP), must consist of grayscale values wherein the maximum value (e.g., 255) corresponds to the highest probability of a target location. AUC computation involves comparing the RP with the ground truth to classify RP pixels as true positives (TP), true negatives (TN), false positives (FP), or false negatives (FN). Ideally, the background and all objects other than targets are TN. Historically, evaluation methods have excluded the background, and only a few spoof objects likely to be considered a hit by detection algorithms were demarcated a priori as TN. This can exaggerate the algorithm's apparent performance. Here, a new ROC approach has been developed that divides the entire image into mutually exclusive target (TP) and background (TN) grid squares of adjustable size. Based on the overlap of the thresholded RP with the TP and TN grids, the FN and FP fractions are computed. Variation of the grid square size can bias the ROC results by artificially altering specificity, so our approach assesses relative performance under a constant grid square size. A pilot study assessed the method's ability to capture RP changes under three different detection algorithm parameter settings on ten images with different backgrounds and target orientations. An ANOVA-based comparison of the AUCs for the three settings showed a significant difference (p<0.001) at the 95% confidence interval.
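
    A minimal sketch of the grid-based scoring idea (an illustrative reduction, not the paper's exact procedure): tile the response plane, label each tile from the ground truth, score it by its peak response, and compute AUC as the normalised Mann-Whitney statistic over target/background score pairs.

```python
def grid_auc(rp, truth, cell):
    """Tile the response plane (RP) into cell x cell squares, label each
    square target or background from the ground truth, score it by its
    peak response, and return the AUC (probability a random target square
    outscores a random background square, ties counted half)."""
    h, w = len(rp), len(rp[0])
    scores, labels = [], []
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            idx = [(i, j) for i in range(y, min(y + cell, h))
                          for j in range(x, min(x + cell, w))]
            scores.append(max(rp[i][j] for i, j in idx))
            labels.append(any(truth[i][j] for i, j in idx))
    pos = [s for s, t in zip(scores, labels) if t]
    neg = [s for s, t in zip(scores, labels) if not t]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# One bright target square (top-left) against three background squares.
rp = [[200, 210, 10, 20],
      [190, 220, 30, 15],
      [5, 10, 25, 8],
      [12, 7, 18, 22]]
truth = [[1, 1, 0, 0],
         [1, 1, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
auc = grid_auc(rp, truth, cell=2)
```

    Because every background square contributes a TN opportunity, spoof-like bright regions outside the target grid now penalise the score instead of being ignored.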

  2. Algorithm for detecting defects in wooden logs using ground penetrating radar

    NASA Astrophysics Data System (ADS)

    Devaru, Dayakar; Halabe, Udaya B.; Gopalakrishnan, B.; Agrawal, Sachin; Grushecky, Shawn

    2005-11-01

    Presently there are no suitable non-invasive methods for precisely detecting subsurface defects in logs in real time. Internal defects such as knots, decays, and embedded metals are of greatest concern for lumber production. While defects such as knots and decays (rots) chiefly affect the productivity and yield of high-value wood products, embedded metals can damage the saw blade and significantly increase the downtime and maintenance costs of saw mills. Currently, a large number of logs end up being discarded by saw mills, or result in low-value wood products, because they include defects. Nondestructive scanning of logs using techniques such as Ground Penetrating Radar (GPR) prior to sawing can greatly increase the productivity and yield of high-value lumber. In this research, the GPR scan data were analyzed to differentiate the defective part of a wooden log from the sound part. The location and size of each defect were found in the GPR scan data using a MATLAB algorithm, whose output can be used to generate operating instructions for a CNC sawing machine. This paper explains the advantages of the GPR technique, the experimental setup and parameters used, data processing with RADAN software for detecting subsurface defects in logs, GPR data processing and analysis with the MATLAB algorithm for automated defect detection, and a comparison of results between the two processing methods. The results show that GPR, in conjunction with the proposed algorithm, is a very promising technique for future on-line implementation in saw mills.

  3. Breadth-first search-based single-phase algorithms for bridge detection in wireless sensor networks.

    PubMed

    Akram, Vahid Khalilpour; Dagdeviren, Orhan

    2013-01-01

    Wireless sensor networks (WSNs) are promising technologies for exploring harsh environments, such as oceans, wild forests, volcanic regions and outer space. Since sensor nodes may have limited transmission range, application packets may be transmitted by multi-hop communication. Thus, connectivity is a very important issue. A bridge is a critical edge whose removal breaks the connectivity of the network. Hence, it is crucial to detect bridges and take preventive measures. Since sensor nodes are battery-powered, services running on nodes should consume little energy. In this paper, we propose energy-efficient, distributed bridge detection algorithms for WSNs. Our algorithms run in a single phase and are integrated with the Breadth-First Search (BFS) algorithm, a popular routing algorithm. Our first algorithm is an extended version of Milic's algorithm, modified to reduce the message length. Our second algorithm is novel and uses ancestral knowledge to detect bridges. We explain the operation of the algorithms, prove their correctness, and analyze their message, time, space and computational complexities. To evaluate their practical importance, we provide testbed experiments and extensive simulations. We show that our proposed algorithms consume fewer resources, with energy savings of up to 5.5-fold. PMID:23845930
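
    For comparison, the classical centralised solution to the same problem can be sketched in a few lines (this is Tarjan-style DFS low-link bridge finding, not the distributed BFS-based algorithms of the paper; it is useful as a correctness oracle on small topologies).

```python
def find_bridges(adj):
    """DFS low-link bridge finding: edge (u, v) is a bridge iff no
    back-edge connects v's subtree to u or to any of u's ancestors."""
    disc, low, bridges, timer = {}, {}, [], [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                      # back-edge to an ancestor
                low[u] = min(low[u], disc[v])
            else:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:           # v's subtree cannot reach above u
                    bridges.append((u, v))

    for node in adj:
        if node not in disc:
            dfs(node, None)
    return bridges

# A triangle (2-edge-connected) with a pendant node: only (2, 3) is a bridge.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
bridges = find_bridges(adj)
```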
  5. Radiation-induced chromosome aberrations in ataxia telangiectasia cells: high frequency of deletions and misrejoining detected by fluorescence in situ hybridization

    NASA Technical Reports Server (NTRS)

    Kawata, Tetsuya; Ito, Hisao; George, Kerry; Wu, Honglu; Uno, Takashi; Isobe, Kouichi; Cucinotta, Francis A.

    2003-01-01

    The mechanisms underlying the hyper-radiosensitivity of AT cells were investigated by analyzing chromosome aberrations in the G(2) and M phases of the cell cycle using a combination of chemically induced premature chromosome condensation (PCC) and fluorescence in situ hybridization (FISH) with chromosome painting probes. Confluent cultures of normal fibroblast cells (AG1522) and fibroblast cells derived from an individual with AT (GM02052) were exposed to gamma rays and allowed to repair at 37 degrees C for 24 h. At doses that resulted in 10% survival, GM02052 cells were approximately five times more sensitive to gamma rays than AG1522 cells. For a given dose, GM02052 cells contained a much higher frequency of deletions and misrejoining than AG1522 cells. For both cell types, a good correlation was found between the percentage of aberrant cells and cell survival. The average number of color junctions, which represent the frequency of chromosome misrejoining, was also found to correlate well with survival. However, in a similar surviving population of GM02052 and AG1522 cells, induced by 1 Gy and 6 Gy, respectively, AG1522 cells contained four times more color junctions and half as many deletions as GM02052 cells. These results indicate that both repair deficiency and misrepair may be involved in the hyper-radiosensitivity of AT cells.

  6. A novel seizure detection algorithm informed by hidden Markov model event states

    NASA Astrophysics Data System (ADS)

    Baldassano, Steven; Wulsin, Drausin; Ung, Hoameng; Blevins, Tyler; Brown, Mesha-Gay; Fox, Emily; Litt, Brian

    2016-06-01

    Objective. Recently the FDA approved the first responsive, closed-loop intracranial device to treat epilepsy. Because these devices must respond within seconds of seizure onset and not miss events, they are tuned to have high sensitivity, leading to frequent false positive stimulations and decreased battery life. In this work, we propose a more robust seizure detection model. Approach. We use a Bayesian nonparametric Markov switching process to parse intracranial EEG (iEEG) data into distinct dynamic event states. Each event state is then modeled as a multidimensional Gaussian distribution to allow for predictive state assignment. By detecting event states highly specific to seizure onset zones, the method can identify precise regions of iEEG data associated with the transition to seizure activity, reducing false positive detections associated with interictal bursts. The seizure detection algorithm was translated to a real-time application and validated in a small pilot study using 391 days of continuous iEEG data from two dogs with naturally occurring, multifocal epilepsy. A feature-based seizure detector modeled after the NeuroPace RNS System was developed as a control. Main results. Our novel seizure detection method demonstrated an improvement in false negative rate (0/55 seizures missed versus 2/55 seizures missed) as well as a significantly reduced false positive rate (0.0012 per hour versus 0.058 per hour). All seizures were detected an average of 12.1 ± 6.9 s before the onset of unequivocal epileptic activity (unequivocal epileptic onset, UEO). Significance. This algorithm represents a computationally inexpensive, individualized, real-time detection method suitable for implantable antiepileptic devices that may considerably reduce the false positive rate relative to current industry standards.
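
    The predictive state-assignment step can be sketched as maximum-likelihood classification under diagonal Gaussians (the state means and variances below are invented for illustration; in the paper the states themselves are learned by the Bayesian nonparametric Markov switching process, not fixed by hand).

```python
import math

def log_likelihood(x, mean, var):
    """Log-likelihood of feature vector x under a diagonal Gaussian."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def assign_state(x, states):
    """Predictive state assignment: pick the learned event state whose
    Gaussian gives the incoming iEEG feature vector the highest likelihood."""
    return max(states, key=lambda s: log_likelihood(x, *states[s]))

# Hypothetical two-state model: (mean, variance) per feature dimension.
states = {
    "interictal": ([1.0, 0.5], [0.2, 0.2]),
    "seizure_onset": ([4.0, 3.0], [0.5, 0.5]),
}
```

    Restricting stimulation to assignments of states highly specific to the seizure onset zone is what suppresses the false positives caused by interictal bursts.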

  7. Motion mode recognition and step detection algorithms for mobile phone users.

    PubMed

    Susi, Melania; Renaudin, Valérie; Lachapelle, Gérard

    2013-01-01

    Microelectromechanical Systems (MEMS) technology is playing a key role in the design of the new generation of smartphones. Thanks to their small size and low power consumption, MEMS sensors can be embedded in such mobile devices to extend their functionality. However, MEMS sensors cannot provide accurate autonomous localization without external updates, e.g., from GPS signals, since their measurements are degraded by various errors. When these sensors are fixed on the user's foot, the stance phases of the foot can easily be determined and periodic Zero velocity UPdaTes (ZUPTs) are performed to bound the position error. When the sensor is in the hand, the situation becomes much more complex. First, the hand motion can be decoupled from the general motion of the user. Second, the characteristics of the inertial signals can differ depending on the carrying mode. Therefore, algorithms for characterizing the gait cycle of a pedestrian using a handheld device have been developed. A classifier able to detect motion modes typical for mobile phone users has been designed and implemented. According to the detected motion mode, adaptive step detection algorithms are applied. The success rate of the step detection process is found to be higher than 97% in all motion modes. PMID:23348038
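
    A minimal step-detection sketch for one motion mode (illustrative only; the thresholds, the 50 Hz rate and the 2 Hz cadence are invented, and the paper adapts such parameters per detected mode): count local maxima of the accelerometer-magnitude signal that exceed a threshold and respect a minimum inter-step gap.

```python
import math

def count_steps(magnitude, threshold, min_gap):
    """Count steps as local maxima of the accelerometer-magnitude signal
    that exceed `threshold` and arrive at least `min_gap` samples after
    the previous detected step."""
    steps, last = [], -min_gap
    for i in range(1, len(magnitude) - 1):
        if (magnitude[i] > threshold
                and magnitude[i - 1] < magnitude[i] >= magnitude[i + 1]
                and i - last >= min_gap):
            steps.append(i)
            last = i
    return steps

# Synthetic 50 Hz magnitude trace with a 2 Hz step cadence (4 steps in 2 s).
mag = [1.0 + 0.8 * max(0.0, math.sin(2 * math.pi * 2 * t / 50.0))
       for t in range(100)]
steps = count_steps(mag, threshold=1.4, min_gap=15)
```

    Working on the magnitude rather than individual axes makes the detector insensitive to phone orientation, which matters precisely because hand motion is decoupled from body motion.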

  8. A new parallel algorithm for contact detection in finite element methods

    SciTech Connect

    Hendrickson, B.; Plimpton, S.; Attaway, S.; Vaughan, C.; Gardner, D.

    1996-03-01

    In finite-element, transient dynamics simulations, physical objects are typically modeled as Lagrangian meshes because the meshes can move and deform with the objects as they undergo stress. In many simulations, such as computations of impacts or explosions, portions of the deforming mesh come in contact with each other as the simulation progresses. These contacts must be detected and the forces they impart to the mesh must be computed at each timestep to accurately capture the physics of interest. While the finite-element portion of these computations is readily parallelized, the contact detection problem is difficult to implement efficiently on parallel computers and has been a bottleneck to achieving high performance on large parallel machines. In this paper we describe a new parallel algorithm for detecting contacts. Our approach differs from previous work in that we use two different parallel decompositions: a static one for the finite element analysis and a dynamic one for contact detection. We present results for this algorithm in a parallel version of the transient dynamics code PRONTO-3D running on a large Intel Paragon.

  9. Flight test results of a vector-based failure detection and isolation algorithm for a redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Morrell, F. R.; Bailey, M. L.; Motyka, P. R.

    1988-01-01

    Flight test results of a vector-based fault-tolerant algorithm for a redundant strapdown inertial measurement unit are presented. Because the inertial sensors provide flight-critical information for flight control and navigation, failure detection and isolation is developed in terms of a multi-level structure. Threshold compensation techniques for gyros and accelerometers, developed to enhance the sensitivity of the failure detection process to low-level failures, are presented. Four flight tests, conducted in a commercial-transport-type environment, were used to determine the ability of the failure detection and isolation algorithm to detect failure signals such as hard-over, null, or bias-shift failures. The algorithm provided timely detection and correct isolation of flight-control-level and low-level failures. The flight tests of the vector-based algorithm demonstrated its capability to provide false-alarm-free, dual fail-operational performance for the skewed array of inertial sensors.

  10. MeV Gamma Ray Detection Algorithms for Stacked Silicon Detectors

    NASA Technical Reports Server (NTRS)

    McMurray, Robert E. Jr.; Hubbard, G. Scott; Wercinski, Paul F.; Keller, Robert G.

    1993-01-01

    By making use of the signature of a gamma ray event as it appears in N = 5 to 20 lithium-drifted silicon detectors and applying smart selection algorithms, gamma rays in the energy range of 1 to 8 MeV can be detected with good efficiency and selectivity. Examples of the types of algorithms used for different energy regions include the simple sum mode, the sum-coincidence mode used in segmented detectors, unique variations on sum-coincidence for an N-dimensional vector event, and a new and extremely useful mode for double escape peak spectroscopy at pair-production energies. The latter algorithm yields a spectrum similar to that of the pair spectrometer, but without the need of the dual external segments for double escape coincidence, and without the large loss in efficiency of double escape events. Background events due to Compton scattering are largely suppressed. Monte Carlo calculations were used to model the gamma ray interactions in the silicon, in order to enable testing of a wide array of different algorithms on the event N-vectors for a large-N stack.

  11. Adaptive Fault Detection on Liquid Propulsion Systems with Virtual Sensors: Algorithms and Architectures

    NASA Technical Reports Server (NTRS)

    Matthews, Bryan L.; Srivastava, Ashok N.

    2010-01-01

    Prior to the launch of STS-119, NASA had completed a study of an issue in the flow control valve (FCV) in the Main Propulsion System of the Space Shuttle using an adaptive learning method known as Virtual Sensors. Virtual Sensors are a class of algorithms that estimate the value of a time series given other potentially nonlinearly correlated sensor readings. In the case presented here, the Virtual Sensors algorithm is based on an ensemble learning approach and takes sensor readings and control signals as input to estimate the pressure in a subsystem of the Main Propulsion System. Our results indicate that this method can detect faults in the FCV at the time they occur. We use the standard deviation of the predictions of the ensemble as a measure of uncertainty in the estimate. This uncertainty estimate was crucial to understanding the nature and magnitude of transient characteristics during startup of the engine. This paper overviews the Virtual Sensors algorithm, discusses results on a comprehensive set of Shuttle missions, and also discusses the architecture necessary for deploying such algorithms in a real-time, closed-loop system or a human-in-the-loop monitoring system. These results were presented at a Flight Readiness Review of the Space Shuttle in early 2009.
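
    The ensemble-with-uncertainty idea can be sketched abstractly (the linear stand-in models, inputs, and the 3-sigma rule below are all invented for illustration; the actual ensemble members are trained learners): the ensemble mean is the virtual-sensor estimate, the spread of the members is the uncertainty, and a real sensor is screened against both.

```python
import statistics

def virtual_sensor(models, inputs):
    """Ensemble virtual sensor: each model estimates the unmeasured
    quantity from correlated readings; the standard deviation of the
    estimates serves as the uncertainty of the prediction."""
    estimates = [m(inputs) for m in models]
    return statistics.mean(estimates), statistics.stdev(estimates)

def fault_flag(measured, predicted, sigma, k=3.0):
    """Flag the physical sensor when it deviates more than k-sigma from a
    confident ensemble prediction (k = 3 is an illustrative choice)."""
    return abs(measured - predicted) > k * max(sigma, 1e-9)

# Hypothetical linear models standing in for the trained ensemble members.
models = [
    lambda x: 2.0 * x[0] + 1.0 * x[1],
    lambda x: 2.1 * x[0] + 0.9 * x[1],
    lambda x: 1.9 * x[0] + 1.1 * x[1],
]
predicted, sigma = virtual_sensor(models, (10.0, 5.0))
```

    When the ensemble members disagree (large sigma, as during engine-startup transients), the deviation test automatically becomes more conservative, which is the role the uncertainty estimate played in the study.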

  12. An Angular Momentum Eddy Detection Algorithm (AMEDA) applied to coastal eddies

    NASA Astrophysics Data System (ADS)

    Le Vu, Briac; Stegner, Alexandre; Arsouze, Thomas

    2016-04-01

    We present a new automated eddy detection and tracking algorithm based on the computation of the LNAM (Local and Normalized Angular Momentum). This method improves on the previous method of Mkhinini et al. (2014), with the aim of being applicable to multiple datasets (satellite data, numerical models, laboratory experiments) using as few objective criteria as possible. First, we show the performance of the algorithm for three different sources of data: Mediterranean 1/8° AVISO geostrophic velocity fields based on the Absolute Dynamic Topography (ADT), an idealized ROMS simulation, and a high-resolution velocity field derived from PIV measurements in a rotating tank experiment. All the velocity fields describe the dynamical evolution of mesoscale eddies generated by the instability of coastal currents. Then, we compare the results of the AMEDA algorithm applied to the regional 1/8° AVISO Mediterranean data set with in situ measurements (drifter, ARGO, ADCP…). These quantitative comparisons with a few specific test cases enable us to estimate the accuracy of the method in quantifying eddy features: trajectory, size, and intensity. We also use the AMEDA algorithm to identify the main formation areas of long-lived eddies in the Mediterranean Sea during the last 15 years.
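    A minimal sketch of a local normalized angular momentum computation of the kind AMEDA builds on: the z component of r × v summed over a neighbourhood, normalized so that solid-body rotation gives exactly 1. The function name, neighbourhood radius, and exact normalization are assumptions, not the published formulation.

```python
import math

def lnam(u, v, i, j, radius=2):
    """Local Normalized Angular Momentum at grid point (i, j) of a
    2-D velocity field given as nested lists (u, v). Returns a value
    in [-1, 1]; |LNAM| = 1 for solid-body rotation about (i, j)."""
    num = den = 0.0
    for di in range(-radius, radius + 1):
        for dj in range(-radius, radius + 1):
            if di == 0 and dj == 0:
                continue
            uu, vv = u[i + di][j + dj], v[i + di][j + dj]
            num += di * vv - dj * uu              # r x v (z component)
            den += math.hypot(di, dj) * math.hypot(uu, vv)
    return num / den if den else 0.0
```

    Eddy centres would then be sought as local extrema of the LNAM field.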

  13. Full-scale engine demonstration of an advanced sensor failure detection, isolation and accommodation algorithm: Preliminary results

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.; Delaat, John C.; Kroszkewicz, Steven M.; Abdelwahab, Mahmood

    1987-01-01

    The objective of the advanced detection, isolation, and accommodation (ADIA) program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines. For this purpose, algorithms were developed which detect, isolate, and accommodate sensor failures using analytical redundancy. Preliminary results of a full scale engine demonstration of the ADIA algorithm are presented. Minimum detectable levels of sensor failures for an F100 turbofan engine control system are determined and compared to those obtained during a previous evaluation of this algorithm using a real-time hybrid computer simulation of the engine.

  14. Full-scale engine demonstration of an advanced sensor failure detection, isolation and accommodation algorithm: Preliminary results

    NASA Astrophysics Data System (ADS)

    Merrill, Walter C.; Delaat, John C.; Kroszkewicz, Steven M.; Abdelwahab, Mahmood

    The objective of the advanced detection, isolation, and accommodation (ADIA) program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines. For this purpose, algorithms were developed which detect, isolate, and accommodate sensor failures using analytical redundancy. Preliminary results of a full scale engine demonstration of the ADIA algorithm are presented. Minimum detectable levels of sensor failures for an F100 turbofan engine control system are determined and compared to those obtained during a previous evaluation of this algorithm using a real-time hybrid computer simulation of the engine.

  15. Full-scale engine demonstration of an advanced sensor failure detection, isolation, and accommodation algorithm - Preliminary results

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.; Delaat, John C.; Kroszkewicz, Steven M.; Abdelwahab, Mahmood

    1987-01-01

    The objective of the advanced detection, isolation, and accommodation (ADIA) program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines. For this purpose, algorithms were developed which detect, isolate, and accommodate sensor failures using analytical redundancy. Preliminary results of a full scale engine demonstration of the ADIA algorithm are presented. Minimum detectable levels of sensor failures for an F100 turbofan engine control system are determined and compared to those obtained during a previous evaluation of this algorithm using a real-time hybrid computer simulation of the engine.

  16. A Universal Dynamic Threshold Cloud Detection Algorithm (UDTCDA) supported by a prior surface reflectance database

    NASA Astrophysics Data System (ADS)

    Sun, Lin; Wei, Jing; Wang, Jian; Mi, Xueting; Guo, Yamin; Lv, Yang; Yang, Yikun; Gan, Ping; Zhou, Xueying; Jia, Chen; Tian, Xinpeng

    2016-06-01

    Conventional cloud detection methods are easily affected by mixed pixels, complex surface structures, and atmospheric factors, resulting in poor cloud detection results. To minimize these problems, a new Universal Dynamic Threshold Cloud Detection Algorithm (UDTCDA) supported by a priori surface reflectance database is proposed in this paper. A monthly surface reflectance database is constructed from a long time sequence of the MODerate resolution Imaging Spectroradiometer surface reflectance product (MOD09A1) to provide the surface reflectance of the underlying surfaces. The relationships between the apparent reflectance changes and the surface reflectance are simulated under different observation and atmospheric conditions with the 6S (Second Simulation of the Satellite Signal in the Solar Spectrum) model, and the dynamic threshold cloud detection models are developed. Two typical remote sensing datasets with important applications and different sensor parameters, MODIS and Landsat 8, are selected for cloud detection experiments. The results were validated against the visual interpretation of clouds and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation cloud measurements, and showed that the UDTCDA can achieve high precision in cloud detection, correctly identifying cloudy pixels and clear-sky pixels at rates greater than 80% with error and missing rates of less than 20%. The UDTCDA cloud product overall shows less estimation uncertainty than the current MODIS cloud mask products. Moreover, the UDTCDA can effectively reduce the effects of atmospheric factors and mixed pixels and can be applied to different satellite sensors to realize long-term, large-scale cloud detection operations.
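    The dynamic-threshold idea can be sketched as a threshold on apparent reflectance that grows with the prior surface reflectance taken from the database. The linear form and the coefficients below are placeholders, not the 6S-derived relationships of the UDTCDA.

```python
def dynamic_cloud_threshold(surface_refl, a=0.12, b=1.15):
    """Dynamic cloud-detection threshold: the apparent-reflectance
    threshold grows with the prior surface reflectance of the pixel's
    underlying surface. Coefficients a and b are illustrative only."""
    return a + b * surface_refl

def is_cloudy(apparent_refl, surface_refl):
    """Label a pixel cloudy when its apparent (top-of-atmosphere)
    reflectance exceeds the dynamic threshold for its surface."""
    return apparent_refl > dynamic_cloud_threshold(surface_refl)
```

    In the real algorithm the threshold relationship also depends on observation geometry and atmospheric conditions.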

  17. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots.

    PubMed

    Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il Dan

    2016-01-01

    This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%. PMID:26938540

  18. Evaluation of a Pair-Wise Conflict Detection and Resolution Algorithm in a Multiple Aircraft Scenario

    NASA Technical Reports Server (NTRS)

    Carreno, Victor A.

    2002-01-01

    The KB3D algorithm is a pairwise conflict detection and resolution (CD&R) algorithm. It detects and generates trajectory vectoring for an aircraft which has been predicted to be in an airspace minima violation within a given look-ahead time. It has been proven, using mechanized theorem proving techniques, that for a pair of aircraft, KB3D produces at least one vectoring solution and that all solutions produced are correct. Although solutions produced by the algorithm are mathematically correct, they might not be physically executable by an aircraft or might not solve multiple aircraft conflicts. This paper describes a simple solution selection method which assesses all solutions generated by KB3D and determines the solution to be executed. The solution selection method and KB3D are evaluated using a simulation in which N aircraft fly in a free-flight environment and each aircraft in the simulation uses KB3D to maintain separation. Specifically, the solution selection method filters KB3D solutions which are procedurally undesirable or physically not executable and uses predetermined criteria for selection.

  19. Adaptive switching detection algorithm for iterative-MIMO systems to enable power savings

    NASA Astrophysics Data System (ADS)

    Tadza, N.; Laurenson, D.; Thompson, J. S.

    2014-11-01

    This paper tackles one of the challenges faced in soft-input soft-output Multiple Input Multiple Output (MIMO) detection systems: achieving optimal error rate performance with minimal power consumption. This is realized by proposing a new algorithm design that comprises multiple thresholds within the detector that, in real time, specify the receiver behavior according to the current channel in both slow and fast fading conditions, giving it adaptivity. This adaptivity enables energy savings within the system, since the receiver chooses whether to accept or to reject the transmission according to the success rate of the detection thresholds. The thresholds are calculated using the mutual information of the instantaneous channel conditions between the transmitting and receiving antennas of iterative-MIMO systems. In addition, the power saving technique Dynamic Voltage and Frequency Scaling helps to reduce the circuit power demands of the adaptive algorithm. This adaptivity has the potential to save up to 30% of the total energy when implemented on Xilinx® Virtex-5 simulation hardware. Results indicate the benefits of having this "intelligence" in the adaptive algorithm due to the promising performance-complexity tradeoff parameters in both software and hardware codesign simulation.

  20. Algorithm for lung cancer detection based on PET/CT images

    NASA Astrophysics Data System (ADS)

    Saita, Shinsuke; Ishimatsu, Keita; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Ohtsuka, Hideki; Nishitani, Hiromu; Ohmatsu, Hironobu; Eguchi, Kenji; Kaneko, Masahiro; Moriyama, Noriyuki

    2009-02-01

    The five-year survival rate of lung cancer is low, at about twenty-five percent; three out of four patients die within five years. Early-stage detection and treatment of lung cancer are therefore important. Recently, the development of PET/CT devices has made it possible to obtain CT and PET images at the same time. PET/CT enables highly accurate cancer diagnosis because it combines quantitative shape information from the CT image with the FDG distribution from the PET image. However, neither benign-malignant classification nor staging of lung cancer using PET/CT images has yet been sufficiently established. In this study, we detect lung nodules based on internal organs extracted from the CT image, and we also develop an algorithm which classifies lung cancers as benign or malignant and as metastatic or non-metastatic using lung structure and the FDG distribution (one and two hours after administering FDG). We apply the algorithm to 59 PET/CT images (43 malignant cases [Ad:31, Sq:9, sm:3], 16 benign cases) and show the effectiveness of this algorithm.

  1. Why have microsaccades become larger? Investigating eye deformations and detection algorithms.

    PubMed

    Nyström, Marcus; Hansen, Dan Witzner; Andersson, Richard; Hooge, Ignace

    2016-01-01

    The reported size of microsaccades is considerably larger today than in the initial era of microsaccade studies during the 1950s and 1960s. We investigate whether this increase in size is related to the fact that the eye-trackers of today measure different ocular structures than the older techniques did, and that the movements of these structures may differ during a microsaccade. In addition, we explore the impact such differences have on subsequent analyses of the eye-tracker signals. In Experiment I, the movements of the pupil as well as the first and fourth Purkinje reflections were extracted from series of eye images recorded during a fixation task. Results show that the different ocular structures produce different microsaccade signatures. In Experiment II, we found that microsaccade amplitudes computed with a common detection algorithm were larger than those reported by two human experts. The main reason was that the overshoots were not systematically detected by the algorithm and therefore not accurately accounted for. We conclude that one reason why the reported size of microsaccades has increased is the larger overshoots produced by modern pupil-based eye-trackers compared to the systems used in the classical studies, in combination with the lack of a systematic algorithmic treatment of the overshoot. We hope that awareness of these discrepancies in microsaccade dynamics across eye structures will lead to more generally accepted definitions of microsaccades. PMID:25481631
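    For context, a widely used family of microsaccade detectors thresholds eye velocity against a median-based noise estimate, in the spirit of Engbert & Kliegl (2003). The sketch below is one such detector under assumed names and parameters, not the specific algorithm evaluated in the paper.

```python
import statistics

def detect_microsaccades(vx, vy, lam=6.0):
    """Velocity-threshold microsaccade detection: a sample is saccadic
    when its velocity lies outside an ellipse whose axes are lam times
    a median-based estimate of the noise level in each direction.
    Returns (start, end) sample-index pairs of detected events."""
    def noise(v):
        med = statistics.median(v)
        return (statistics.median([(x - med) ** 2 for x in v])) ** 0.5 or 1e-9
    sx, sy = noise(vx), noise(vy)
    events, start = [], None
    for i, (x, y) in enumerate(zip(vx, vy)):
        if (x / (lam * sx)) ** 2 + (y / (lam * sy)) ** 2 > 1:
            start = i if start is None else start
        elif start is not None:
            events.append((start, i - 1))
            start = None
    if start is not None:
        events.append((start, len(vx) - 1))
    return events
```

    Note that a detector of this kind reports the overshoot as part of the event only if the velocity stays above threshold through it, which relates to the treatment-of-overshoot issue the paper raises.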

  2. Loop closure detection by algorithmic information theory: implemented on range and camera image data.

    PubMed

    Ravari, Alireza Norouzzadeh; Taghirad, Hamid D

    2014-10-01

    In this paper the problem of loop closing from depth or camera image information in an unknown environment is investigated. A sparse model is constructed from a parametric dictionary for every range or camera image acquired as a mobile robot observation. In contrast to high-dimensional feature-based representations, this model reduces the dimension of the sensor measurements' representations. Although loop closure detection can be considered a clustering problem in high-dimensional space, little attention has been paid to the curse of dimensionality in the existing state-of-the-art algorithms. In this paper, a representation is developed from a sparse model of images, with a lower dimension than the original sensor observations. Exploiting algorithmic information theory, the representation is developed such that it is invariant to geometric transformations in the sense of Kolmogorov complexity. A universal normalized metric is used for comparison of complexity-based representations of image models. Finally, a distinctive property of the normalized compression distance is exploited for detecting similar places and rejecting incorrect loop closure candidates. Experimental results show the efficiency and accuracy of the proposed method in comparison to state-of-the-art algorithms and some recently proposed methods. PMID:24968363
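    The normalized compression distance mentioned above has a compact standard form: it approximates the (uncomputable) Kolmogorov-complexity-based distance with a real compressor. The sketch below uses zlib and is illustrative, not the paper's implementation.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two observations
    serialized as bytes. Values near 0 indicate similar inputs;
    loop-closure candidates with a large NCD can be rejected."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

    In practice the compressed representations of the sparse image models, rather than raw images, would be compared this way.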

  3. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots

    PubMed Central

    Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il “Dan”

    2016-01-01

    This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%. PMID:26938540

  4. Development and validation of a spike detection and classification algorithm aimed at implementation on hardware devices.

    PubMed

    Biffi, E; Ghezzi, D; Pedrocchi, A; Ferrigno, G

    2010-01-01

    Neurons cultured in vitro on MicroElectrode Array (MEA) devices connect to each other, forming a network. To study electrophysiological activity and long-term plasticity effects, long-period recording and spike sorting methods are needed. Therefore, on-line and real-time analysis, optimization of memory use, and improvement of the data transmission rate become necessary. We developed an algorithm for amplitude-threshold spike detection, whose performance was verified with (a) statistical analysis on both simulated and real signals and (b) Big O notation. Moreover, we developed a PCA-hierarchical classifier, evaluated on simulated and real signals. Finally, we proposed a spike detection hardware design on FPGA, whose feasibility was verified in terms of the number of CLBs, memory occupation, and timing requirements; once realized, it will be able to execute on-line detection and real-time waveform analysis, reducing data storage problems. PMID:20300592
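    An amplitude-threshold spike detector of the general kind described can be sketched in a few lines. The threshold factor, refractory window, and function name are assumptions for illustration, not the parameters of the paper's FPGA design.

```python
import statistics

def detect_spikes(signal, k=4.0, refractory=30):
    """Amplitude-threshold spike detection: a spike is declared when
    |sample| exceeds k times the signal's standard deviation, with a
    refractory window (in samples) to avoid counting one spike twice.
    Returns the sample indices of detected spikes."""
    thr = k * statistics.pstdev(signal)
    spikes, last = [], -refractory
    for i, s in enumerate(signal):
        if abs(s) > thr and i - last >= refractory:
            spikes.append(i)
            last = i
    return spikes
```

    A hardware version would replace the floating-point statistics with fixed-point arithmetic, which is what makes on-line FPGA detection feasible.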

  5. High effective algorithm of the detection and identification of substance using the noisy reflected THz pulse

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Varentsova, Svetlana A.; Trofimov, Vladislav V.; Tikhomirov, Vasily V.

    2015-08-01

    Principal limitations of the standard THz-TDS method for detection and identification are demonstrated under realistic conditions (at a long distance of about 3.5 m and at a high relative humidity of more than 50%) using neutral substances: a thick paper bag, paper napkins, and chocolate. We also show that the THz-TDS method detects spectral features of dangerous substances even if the THz signals were measured in laboratory conditions (at a distance of 30-40 cm from the receiver and at a low relative humidity of less than 2%); silicon-based semiconductors were used as the samples. However, the integral correlation criteria, based on the SDA method, allow us to detect the absence of dangerous substances in the neutral substances. The discussed algorithm shows a high probability of substance identification and can be reliably realized in practice, especially for security applications and non-destructive testing.

  6. A combined STAP/DPCA algorithm for enhanced endoclutter target detection

    NASA Astrophysics Data System (ADS)

    Medl, Thomas

    2014-05-01

    Displaced Phase Center Antenna (DPCA) and Space-Time Adaptive Processing (STAP) are two general methods to cancel clutter in order to detect small, slowly moving targets that may be obscured by clutter. To detect these targets, the radar detection threshold needs to be as low as possible to ensure some minimum probability of detection (Pd). Unfortunately, lowering the radar threshold naturally results in a higher false alarm rate. Although there are standard methods such as M of N to reduce the false alarms, new techniques can potentially drive the false alarm rate down even further. Many "theoretical" papers have shown that STAP can be designed to outperform DPCA because of its potential additional "degrees-of-freedom". However, in "practice," this isn't always the case. For example, it is well known that STAP can have training issues in heterogeneous clutter. Typically, a radar signal processor will implement one method or the other to detect these small endoclutter targets. The technique being explored here is a two-fold approach in which the existing STAP code first processes the data in order to find a list of candidate targets. Next, a DPCA technique is also used to find a separate list of candidate detections from the same data. Although the algorithms are working on the same data, the processing is "independent" between them so the target lists are different. After both techniques have finished processing, the modified radar signal processing code "intelligently" combines the two detection lists into a single detection list. It will be shown that the combined list of detections from the two methods results in better detection performance than either method used separately.

  7. A real-time simulation evaluation of an advanced detection, isolation and accommodation algorithm for sensor failures in turbine engines

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.; Delaat, J. C.

    1986-01-01

    An advanced sensor failure detection, isolation, and accommodation (ADIA) algorithm has been developed for use with an aircraft turbofan engine control system. In a previous paper the authors described the ADIA algorithm and its real-time implementation. Subsequent improvements made to the algorithm and implementation are discussed, and the results of an evaluation are presented. The evaluation used a real-time, hybrid computer simulation of an F100 turbofan engine.

  8. A real-time simulation evaluation of an advanced detection, isolation and accommodation algorithm for sensor failures in turbine engines

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.; Delaat, J. C.

    1986-01-01

    An advanced sensor failure detection, isolation, and accommodation (ADIA) algorithm has been developed for use with an aircraft turbofan engine control system. In a previous paper the authors described the ADIA algorithm and its real-time implementation. Subsequent improvements made to the algorithm and implementation are discussed, and the results of an evaluation are presented. The evaluation used a real-time, hybrid computer simulation of an F100 turbofan engine.

  9. An ensemble of k-nearest neighbours algorithm for detection of Parkinson's disease

    NASA Astrophysics Data System (ADS)

    Gök, Murat

    2015-04-01

    Parkinson's disease is a disease of the central nervous system that leads to severe difficulties in motor functions. Developing computational tools for recognition of Parkinson's disease at the early stages is very desirable for alleviating the symptoms. In this paper, we developed a discriminative model based on a selected feature subset and applied several classifier algorithms in the context of disease detection. All classifiers were evaluated, both stand-alone and within a rotation-forest ensemble, on a Parkinson's disease data set according to a blind testing protocol. The new method outperforms previously reported approaches, achieving a prediction accuracy of 98.46% and an area under the receiver operating characteristic curve of 0.99 with the rotation-forest ensemble k-nearest neighbour classifier algorithm.
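    The base classifier here is a plain k-nearest-neighbours vote, which can be sketched as follows; the rotation-forest ensemble wrapper and the feature selection used in the paper are omitted, and the names below are illustrative.

```python
import math
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Classify point x by majority vote among its k nearest training
    points (Euclidean distance). `train` is a list of feature tuples,
    `labels` the corresponding class labels."""
    nearest = sorted(range(len(train)),
                     key=lambda i: math.dist(train[i], x))[:k]
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

    A rotation-forest ensemble would train many such classifiers on rotated (PCA-transformed) feature subsets and combine their votes.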

  10. Real-time low-energy fall detection algorithm with a programmable truncated MAC.

    PubMed

    de la Guia Solaz, Manuel; Bourke, Alan; Conway, Richard; Nelson, John; Olaighin, Gearoid

    2010-01-01

    The ability to discriminate between falls and activities of daily living (ADL) has been investigated using tri-axial accelerometer sensors mounted on the trunk, with simulated falls performed by young healthy subjects under supervised conditions and ADL performed by elderly subjects. In this paper we propose a power-aware real-time fall detection integrated circuit (IC) that can distinguish falls from ADL by processing the accelerations measured during 240 falls and 240 ADL. In the proposed fixed-point custom DSP architecture, a threshold algorithm was implemented to analyze the effectiveness of programmable truncated multiplication for power reduction while maintaining high output accuracy. The presented system runs a real-time implementation of the algorithm on a low-power architecture that allows up to 23% power savings in its digital blocks when compared to a standard implementation, without any accuracy loss. PMID:21095956
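    Below is a minimal software model of threshold-based fall detection on tri-axial accelerometer magnitudes. The free-fall/impact thresholds and window are illustrative assumptions, not the tuned values behind the reported accuracy, and the IC's truncated-MAC fixed-point arithmetic is not modeled.

```python
import math

def detect_fall(ax, ay, az, lower=0.6, upper=2.8, window=50):
    """Threshold fall detection on tri-axial accelerations (in g):
    a fall candidate is a free-fall dip below `lower` followed within
    `window` samples by an impact peak above `upper`."""
    mags = [math.sqrt(x * x + y * y + z * z)
            for x, y, z in zip(ax, ay, az)]
    dip = None
    for i, m in enumerate(mags):
        if m < lower:
            dip = i                      # remember latest free-fall dip
        elif dip is not None and m > upper and i - dip < window:
            return True                  # impact shortly after the dip
    return False
```

    In the hardware version, the magnitude and threshold comparisons would run in fixed point with truncated multipliers to save power.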

  11. Using the sequential regression (SER) algorithm for long-term signal processing. [Intrusion detection

    SciTech Connect

    Soldan, D. L.; Ahmed, N.; Stearns, S. D.

    1980-01-01

    The use of the sequential regression (SER) algorithm (Electron. Lett., 14, 118(1978); 13, 446(1977)) for long-term processing applications is limited by two problems that can occur when an SER predictor has more weights than required to predict the input signal. First, computational difficulties related to updating the autocorrelation matrix inverse could arise, since no unique least-squares solution exists. Second, the predictor strives to remove very low-level components in the input, and hence could implement a gain function that is essentially zero over the entire passband. The predictor would then tend to become a no-pass filter which is undesirable in certain applications, e.g., intrusion detection (SAND--78-1032). Modifications to the SER algorithm that overcome the above problems are presented, which enable its use for long-term signal processing applications. 3 figures.

  12. A novel algorithm for detecting protein complexes with the breadth first search.

    PubMed

    Tang, Xiwei; Wang, Jianxin; Li, Min; He, Yiming; Pan, Yi

    2014-01-01

    Most biological processes are carried out by protein complexes. A substantial number of false positives in protein-protein interaction (PPI) data can compromise the utility of the datasets for complex reconstruction. In order to reduce the impact of such discrepancies, a number of data integration and affinity scoring schemes have been devised. These methods encode the reliabilities (confidence) of physical interactions between pairs of proteins. The challenge now is to identify novel and meaningful protein complexes from the weighted PPI network. To address this problem, a novel protein complex mining algorithm, ClusterBFS (Cluster with Breadth-First Search), is proposed. Based on weighted density, ClusterBFS detects protein complexes in the weighted network by breadth-first search, starting from a given seed protein. The experimental results show that ClusterBFS performs significantly better than the other computational approaches in terms of the identification of protein complexes. PMID:24818139
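    The breadth-first expansion from a seed protein can be sketched as follows. The confidence-threshold acceptance rule and the names are assumptions for illustration; the weighted-density criterion of the published ClusterBFS is omitted.

```python
from collections import deque

def cluster_bfs(graph, seed, min_weight=0.5):
    """Grow a protein-complex candidate from a seed protein by
    breadth-first search over a weighted PPI network, following only
    interactions whose confidence weight passes a threshold.
    `graph` maps each protein to a list of (neighbour, weight) pairs."""
    cluster, queue = {seed}, deque([seed])
    while queue:
        p = queue.popleft()
        for q, w in graph.get(p, []):
            if w >= min_weight and q not in cluster:
                cluster.add(q)
                queue.append(q)
    return cluster
```

    The published method would additionally stop expanding when adding a protein lowers the cluster's weighted density.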

  13. An algorithm for the detection of extreme mass ratio inspirals in LISA data

    NASA Astrophysics Data System (ADS)

    Babak, Stanislav; Gair, Jonathan R.; Porter, Edward K.

    2009-07-01

    The gravitational wave signal from a compact object inspiralling into a massive black hole (MBH) is considered to be one of the most difficult sources to detect in the LISA data stream. Due to the large parameter space of possible signals and many orbital cycles spent in the sensitivity band of LISA, it has been estimated that ~10^35 templates would be required to carry out a fully coherent search using a template grid, which is computationally impossible. Here we describe an algorithm based on a constrained Metropolis-Hastings stochastic search which allows us to find and accurately estimate parameters of isolated EMRI signals buried in Gaussian instrumental noise. We illustrate the effectiveness of the algorithm with results from searches of the Mock LISA Data Challenge round 1B data sets.
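    At its core, such a search relies on Metropolis-Hastings sampling. The sketch below is a generic one-dimensional random-walk version under assumed names; the actual pipeline adds constraints, annealing, and multiple chains over the full EMRI parameter space.

```python
import math
import random

def metropolis_hastings(log_post, x0, step, n_iter, seed=1):
    """Random-walk Metropolis-Hastings: propose Gaussian steps and
    accept each with probability min(1, posterior ratio). Returns the
    chain of visited points, which concentrates where log_post is high."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = [x]
    for _ in range(n_iter):
        xp = x + rng.gauss(0.0, step)              # propose nearby point
        lpp = log_post(xp)
        if lpp >= lp or rng.random() < math.exp(lpp - lp):
            x, lp = xp, lpp                        # accept the proposal
        chain.append(x)
    return chain
```

    Run on a toy Gaussian log-posterior, the chain's long-run average recovers the posterior peak, which is how the search both finds a signal and estimates its parameters.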

  14. EFFICIENT ALGORITHMS FOR THE OPTIMAL-RATIO REGION DETECTION PROBLEMS IN DISCRETE GEOMETRY WITH APPLICATIONS.

    PubMed

    Wu, Xiaodong

    2006-01-01

    In this paper, we study several interesting optimal-ratio region detection (ORD) problems in d-D (d ≥ 3) discrete geometric spaces, which arise in high-dimensional medical image segmentation. Given a d-D voxel grid of n cells, two classes of geometric regions that are enclosed by a single or two coupled smooth heightfield surfaces defined on the entire grid domain are considered. The objective functions are normalized by a function of the desired regions, which avoids a bias toward producing an overly large or small region as a result of data noise. The normalization functions that we employ are used in real medical image segmentation. To the best of our knowledge, no previous results on these problems in high dimensions are known. We develop a unified algorithmic framework based on a careful characterization of the intrinsic geometric structures and a nontrivial graph transformation scheme, yielding efficient polynomial-time algorithms for solving these ORD problems. Our main ideas include the following. We observe that the optimal solution to the ORD problems can be obtained via the construction of a convex hull for a set of O(n) unknown 2-D points using the hand-probing technique. The probing oracles are implemented by computing a minimum s-t cut in a weighted directed graph. The ORD problems are then solved by O(n) calls to the minimum s-t cut algorithm. For the class of regions bounded by a single heightfield surface, our further investigation shows that the O(n) calls to the minimum s-t cut algorithm are on a monotone parametric flow network, which enables detection of the optimal-ratio region within the complexity of computing a single maximum flow. PMID:25414538

  15. Analysis of BWR OPRM plant data and detection algorithms with DSSPP

    SciTech Connect

    Yang, J.; Vedovi, J.; Chung, A. K.; Zino, J. F.

    2012-07-01

    All U.S. BWRs are required to have licensed stability solutions that satisfy General Design Criteria (GDC) 10 and 12 of 10 CFR 50 Appendix A. Implemented solutions are either detect-and-suppress or preventive in nature. Detection and suppression of power oscillations is accomplished by specialized hardware and software such as the Oscillation Power Range Monitor (OPRM) utilized in the Option III and Detect and Suppress Solution - Confirmation Density (DSS-CD) stability Long-Term Solutions (LTSs). The detection algorithms are designed to recognize a Thermal-Hydraulic Instability (THI) event and initiate control rod insertion before the power oscillations grow far above the noise level and threaten the fuel integrity. Option III is the most widely used long-term stability solution in the US and has more than 200 reactor-years of operational history. DSS-CD represents an evolutionary step from the stability LTS Option III, and its licensed domain envelopes the Maximum Extended Load Line Limit Analysis Plus (MELLLA+) domain. In order to enhance the capability to investigate the sensitivity of key parameters of stability detection algorithms, GEH has developed a new engineering analysis code, namely DSSPP (Detect and Suppress Solution Post Processor), which is introduced in this paper. The DSSPP analysis tool represents a major advancement in the method for diagnosing the design of stability detection algorithms: it enables designers to perform parametric studies of the key parameters relevant to THI events and to fine-tune these system parameters such that a potential spurious scram might be avoided. Demonstrations of DSSPP's application are also presented in this paper utilizing actual plant THI data. A BWR/6 plant had a plant transient that included an unplanned recirculation pump transfer from fast to slow speed, resulting in a power decrease from about 100% to ~40% of rated and a core flow decrease from about 99% to ~30% of rated.
As the feedwater temperature
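
    The period-confirmation idea behind detect-and-suppress solutions of this kind can be illustrated with a minimal sketch: count successive oscillation periods of similar length in the mean-removed power signal and trip once enough consecutive periods are confirmed. The function names, tolerance, and trip count below are illustrative placeholders, not the licensed OPRM/DSS-CD algorithm.

    ```python
    import math

    def confirmed_period_count(signal, tol=0.2):
        """Count consecutive oscillation periods of similar length.

        A 'period' is measured between successive upward zero crossings
        of the mean-removed signal; periods agreeing within a fractional
        tolerance `tol` increment the confirmation count, otherwise the
        count resets.
        """
        mean = sum(signal) / len(signal)
        dev = [x - mean for x in signal]
        crossings = [i for i in range(1, len(dev))
                     if dev[i - 1] < 0 <= dev[i]]
        periods = [b - a for a, b in zip(crossings, crossings[1:])]
        count = best = 0
        for prev, cur in zip(periods, periods[1:]):
            if abs(cur - prev) <= tol * prev:
                count += 1
            else:
                count = 0
            best = max(best, count)
        return best

    def should_scram(signal, trip_count=5):
        """Trip when enough consecutive periods are confirmed."""
        return confirmed_period_count(signal) >= trip_count

    # A clean oscillation (10-sample period) is confirmed quickly,
    # while a slow monotonic drift produces no confirmed periods.
    osc = [math.sin(2 * math.pi * i / 10) for i in range(200)]
    ramp = [i / 100.0 for i in range(200)]
    ```

    A real detector operates on per-channel OPRM cell signals with licensed setpoints; this sketch only shows the confirmation-count mechanism.
    
    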

  16. Mapping of Planetary Surface Age Based on Crater Statistics Obtained by an Automatic Detection Algorithm

    NASA Astrophysics Data System (ADS)

    Salih, A. L.; Mühlbauer, M.; Grumpe, A.; Pasckert, J. H.; Wöhler, C.; Hiesinger, H.

    2016-06-01

    The analysis of the impact crater size-frequency distribution (CSFD) is a well-established approach to the determination of the age of planetary surfaces. Classically, estimation of the CSFD is achieved by manual crater counting and size determination in spacecraft images, which, however, becomes very time-consuming for large surface areas and/or high image resolution. With increasing availability of high-resolution (nearly) global image mosaics of planetary surfaces, a variety of automated methods for the detection of craters based on image data and/or topographic data have been developed. In this contribution a template-based crater detection algorithm is used which analyses image data acquired under known illumination conditions. Its results are used to establish the CSFD for the examined area, which is then used to estimate the absolute model age of the surface. The detection threshold of the automatic crater detection algorithm is calibrated based on a region with available manually determined CSFD such that the age inferred from the manual crater counts corresponds to the age inferred from the automatic crater detection results. With this detection threshold, the automatic crater detection algorithm can be applied to a much larger surface region around the calibration area. The proposed age estimation method is demonstrated for a Kaguya Terrain Camera image mosaic of 7.4 m per pixel resolution of the floor region of the lunar crater Tsiolkovsky, which consists of dark and flat mare basalt and has an area of nearly 10,000 km2. The region used for calibration, for which manual crater counts are available, has an area of 100 km2. In order to obtain a spatially resolved age map, CSFDs and surface ages are computed for overlapping quadratic regions of about 4.4 x 4.4 km2 size offset by a step width of 74 m. 
Our constructed surface age map of the floor of Tsiolkovsky shows age values of typically 3.2-3.3 Ga, while for small regions lower (down to 2.9 Ga) and higher
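
    The calibration step described above, tuning the automatic detector's threshold so that the crater density it reports on the calibration region matches the manually determined CSFD, can be sketched as follows. The candidate list, detection scores, and densities are invented placeholders, not the Tsiolkovsky data.

    ```python
    def cumulative_csfd(diams_km, area_km2, d_min_km):
        """Cumulative crater density N(>= d_min) per km^2."""
        return sum(1 for d in diams_km if d >= d_min_km) / area_km2

    def calibrate_threshold(candidates, manual_density, area_km2, d_min_km):
        """Pick the detection-score threshold whose surviving detections
        best match the manually determined crater density.

        `candidates` is a list of (diameter_km, detection_score) pairs
        produced by the automatic detector on the calibration region.
        """
        scores = sorted({s for _, s in candidates}, reverse=True)
        best_t, best_err = None, float("inf")
        for t in scores:
            kept = [d for d, s in candidates if s >= t]
            density = cumulative_csfd(kept, area_km2, d_min_km)
            err = abs(density - manual_density)
            if err < best_err:
                best_t, best_err = t, err
        return best_t

    # Hypothetical calibration region (100 km^2): manual counts give
    # 0.2 craters/km^2 above 0.5 km; low-score candidates are spurious.
    cands = ([(0.6, 0.9)] * 10 + [(0.8, 0.7)] * 10 + [(0.55, 0.4)] * 10)
    threshold = calibrate_threshold(cands, 0.2, 100.0, 0.5)
    ```

    Once calibrated on the small region, the same threshold would be applied to the full mosaic, as the abstract describes.
    
    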

  17. A stereo-vision hazard-detection algorithm to increase planetary lander autonomy

    NASA Astrophysics Data System (ADS)

    Woicke, Svenja; Mooij, Erwin

    2016-05-01

    For future landings on any celestial body, increasing the lander autonomy as well as decreasing risk are primary objectives. Both risk reduction and an increase in autonomy can be achieved by including hazard detection and avoidance in the guidance, navigation, and control loop. One of the main challenges in hazard detection and avoidance is the reconstruction of accurate elevation models, as well as slope and roughness maps. Multiple methods for acquiring the inputs for hazard maps are available. The main distinction can be made between active and passive methods. Passive methods (cameras) have budgetary advantages compared to active sensors (radar, light detection and ranging). However, it is necessary to prove that these methods deliver sufficiently good maps. Therefore, this paper discusses hazard detection using stereo vision. To facilitate a successful landing, no more than 1% missed detections (hazards that are not identified) are allowed. Based on a sensitivity analysis it was found that using a stereo set-up with a baseline of ≤ 2 m is feasible at altitudes of ≤ 200 m, with false positives of less than 1%. It was thus shown that stereo-based hazard detection is an effective means to decrease the landing risk and increase the lander autonomy. In conclusion, the proposed algorithm is a promising candidate for future landers.
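
    The slope and roughness maps at the heart of such a hazard-detection step can be illustrated on a toy digital elevation model (a 2-D list of heights in metres). The 3x3 window, central-difference slope, and the thresholds below are illustrative assumptions, not the paper's values.

    ```python
    def hazard_map(dem, cell_m=1.0, max_slope=0.3, max_rough=0.2):
        """Flag cells whose local slope or roughness exceeds tolerances.

        `dem` is a rectangular 2-D list of elevations (metres); border
        cells are left unflagged for simplicity.
        """
        rows, cols = len(dem), len(dem[0])
        hazards = [[False] * cols for _ in range(rows)]
        for r in range(1, rows - 1):
            for c in range(1, cols - 1):
                # Central-difference gradient (rise over run).
                gx = (dem[r][c + 1] - dem[r][c - 1]) / (2 * cell_m)
                gy = (dem[r + 1][c] - dem[r - 1][c]) / (2 * cell_m)
                slope = (gx * gx + gy * gy) ** 0.5
                # Roughness: max deviation from the 3x3 window mean.
                win = [dem[rr][cc] for rr in (r - 1, r, r + 1)
                                   for cc in (c - 1, c, c + 1)]
                mean = sum(win) / 9.0
                rough = max(abs(h - mean) for h in win)
                hazards[r][c] = slope > max_slope or rough > max_rough
        return hazards

    # A flat patch is safe; a 1 m boulder in the centre is hazardous.
    flat = [[0.0] * 5 for _ in range(5)]
    boulder = [[0.0] * 5 for _ in range(5)]
    boulder[2][2] = 1.0
    ```

    In the paper's setting the elevation model itself would come from stereo reconstruction; this sketch only covers the map-derivation step.
    
    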

  18. The development of line-scan image recognition algorithms for the detection of frass on mature tomatoes

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In this research, a multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet LED excitation was developed for the detection of frass contamination on mature tomatoes. The algorithm utilized the fluorescence intensities at two wavebands, 664 nm and 690 nm, for co...
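
    The abstract is truncated, but a two-waveband fluorescence test of the kind it describes can be sketched as a per-pixel band ratio: pixels are flagged where the 664 nm / 690 nm intensity ratio departs from the uncontaminated-surface baseline. The ratio threshold below is a made-up placeholder, not the paper's calibrated value.

    ```python
    def flag_frass(i664, i690, ratio_threshold=1.2, eps=1e-9):
        """Flag pixels whose 664 nm / 690 nm fluorescence ratio exceeds
        a threshold; `i664` and `i690` are per-pixel intensity lists."""
        return [(a / (b + eps)) > ratio_threshold
                for a, b in zip(i664, i690)]

    # Hypothetical pixels: the first has an elevated 664 nm response.
    flags = flag_frass([1.5, 0.9], [1.0, 1.0])
    ```
    
    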

  19. Orthogonal sensor suite and the signal-processing algorithm for human detection and discrimination

    NASA Astrophysics Data System (ADS)

    Ekimov, Alexander; Sabatier, James M.

    2009-05-01

    The focus of this paper is a review of methods and algorithms for human motion detection in the presence of nonstationary environmental background noise. Human footstep forces on the ground/floor generate periodic broadband seismic and sound signal envelopes with two characteristic times, T1 (the footstep repetition time, which is equal to the period of the whole-body vibrations) and T2 (the footstep duration time, which is equal to the time interval for a single footstep from "heel strike" to "toe slap and weight transfer"). Human body motions due to walking are periodic movements of a multiple-degrees-of-freedom mechanical system with a specific cadence frequency equal to 1/T1. For a walking human, the cadence frequencies for the appendages are the same and lie below 3 Hz. Simultaneously collecting footstep seismic, ultrasonic, and Doppler signals of human motion enhances the capability to detect humans in quiet and noisy environments. The common denominator in the use of these orthogonal sensors (seismic, ultrasonic, Doppler) is a signal-processing algorithm package that detects human-specific time-frequency signatures and discriminates them, by their distinct cadence frequency, from signals produced by other moving and stationary objects (e.g., vehicular and animal signatures). It has been experimentally shown that human cadence frequencies for seismic, passive ultrasonic, and Doppler motion signatures are equivalent and temporally stable.
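
    Extracting a cadence frequency of the kind the paper exploits can be sketched as finding the dominant spectral peak below 3 Hz in an envelope signal, here via a direct DFT over low-frequency bins. The sample rate and test signal are illustrative; a fielded system would use proper spectral estimation on real sensor envelopes.

    ```python
    import math

    def cadence_frequency(envelope, fs, f_max=3.0):
        """Frequency (Hz) of the strongest DFT bin in (0, f_max]."""
        n = len(envelope)
        best_f, best_p = 0.0, -1.0
        for k in range(1, n // 2):
            f = k * fs / n
            if f > f_max:
                break
            re = sum(envelope[i] * math.cos(2 * math.pi * k * i / n)
                     for i in range(n))
            im = sum(envelope[i] * math.sin(2 * math.pi * k * i / n)
                     for i in range(n))
            p = re * re + im * im          # bin power
            if p > best_p:
                best_f, best_p = f, p
        return best_f

    # A synthetic 2 Hz footstep envelope sampled at 50 Hz for 4 s:
    fs = 50.0
    env = [1.0 + math.cos(2 * math.pi * 2.0 * i / fs) for i in range(200)]
    ```

    The same estimator applied to each sensor channel would let the cadence frequencies be compared across seismic, ultrasonic, and Doppler signatures, as the abstract reports.
    
    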

  20. Detection and clustering of features in aerial images by neuron network-based algorithm

    NASA Astrophysics Data System (ADS)

    Vozenilek, Vit

    2015-12-01

    The paper presents an algorithm for the detection and clustering of features in aerial photographs based on artificial neural networks. The presented approach is not focused on the detection of specific topographic features, but on the combination of general feature analysis and its use for clustering and backward projection of the clusters onto the aerial image. The basis of the algorithm is the calculation of the total error of the network and the adjustment of the network weights to minimize that error. A classic bipolar sigmoid was used as the activation function of the neurons, and the basic method of backpropagation was used for learning. To verify that a set of features is able to represent the image content from the user's perspective, a web application was built (ASP.NET on the Microsoft .NET platform). The main achievements include the finding that man-made objects in aerial images can be successfully identified by the detection of shapes and anomalies. It was also found that an appropriate combination of comprehensive features describing the colors and selected shapes of individual areas can be useful for image analysis.
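
    The training loop the abstract describes (bipolar sigmoid activation, weight updates that minimize total network error by backpropagation) can be sketched for a single neuron; the learning rate, epoch count, and training data below are illustrative assumptions, not the paper's setup.

    ```python
    import math

    def bipolar_sigmoid(x):
        """Classic bipolar sigmoid, output in (-1, 1)."""
        return 2.0 / (1.0 + math.exp(-x)) - 1.0

    def train_neuron(samples, epochs=2000, lr=0.5):
        """Gradient-descent training of one neuron on squared error.

        `samples` is a list of ((x1, x2), target) with targets in (-1, 1).
        """
        w = [0.1, -0.1]
        b = 0.0
        for _ in range(epochs):
            for (x1, x2), t in samples:
                net = w[0] * x1 + w[1] * x2 + b
                y = bipolar_sigmoid(net)
                # Derivative of the bipolar sigmoid: 0.5 * (1 + y) * (1 - y)
                delta = (t - y) * 0.5 * (1.0 + y) * (1.0 - y)
                w[0] += lr * delta * x1
                w[1] += lr * delta * x2
                b += lr * delta
        return w, b

    # Learn a simple linearly separable labelling (bipolar AND):
    data = [((-1, -1), -1), ((-1, 1), -1), ((1, -1), -1), ((1, 1), 1)]
    w, b = train_neuron(data)
    ```

    A full network backpropagates the same per-sample delta through hidden layers; the single-neuron case shows the error-minimizing weight update in isolation.
    
    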