NASA Astrophysics Data System (ADS)
Zhu, Zhe
2017-08-01
The opening of the entire Landsat archive to free and open access in 2008 has completely changed the way Landsat data are used, and many novel change detection algorithms based on Landsat time series have since been developed. We present a comprehensive review of four important aspects of change detection studies based on Landsat time series: frequency, preprocessing, algorithms, and applications. We observed that the more recent the study, the higher the frequency of the Landsat time series used. We reviewed a series of image preprocessing steps, including atmospheric correction, cloud and cloud shadow detection, and composite/fusion/metrics techniques. We divided all change detection algorithms into six categories: thresholding, differencing, segmentation, trajectory classification, statistical boundary, and regression. Within each category, six major characteristics of the algorithms were analyzed: frequency, change index, univariate/multivariate, online/offline, abrupt/gradual change, and sub-pixel/pixel/spatial. Moreover, some of the widely used change detection algorithms are discussed. Finally, we reviewed change detection applications, dividing them into two categories: change target detection and change agent detection.
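As a minimal illustration of the simplest algorithm categories named above (differencing combined with thresholding), the sketch below flags change in a single pixel's spectral-index time series when an observation departs from a trailing robust baseline. The function name, window length, and threshold multiplier are illustrative assumptions, not taken from any of the reviewed algorithms.

```python
import numpy as np

def detect_change(index_series, window=8, k=3.0):
    """Flag abrupt change in a per-pixel index time series (e.g., NDVI).

    A running baseline is built from the trailing `window` observations;
    an observation is flagged when it departs from the baseline by more
    than `k` robust standard deviations. Illustrative sketch only.
    """
    series = np.asarray(index_series, dtype=float)
    flags = np.zeros(series.size, dtype=bool)
    for t in range(window, series.size):
        history = series[t - window:t]
        baseline = np.median(history)
        # Robust spread estimate: median absolute deviation scaled to sigma.
        sigma = 1.4826 * np.median(np.abs(history - baseline)) + 1e-9
        flags[t] = abs(series[t] - baseline) > k * sigma
    return flags

# Synthetic example: a stable index with an abrupt drop (e.g., a clearing).
ts = np.concatenate([0.7 + 0.02 * np.random.randn(30),
                     0.2 + 0.02 * np.random.randn(10)])
print(np.where(detect_change(ts))[0])
```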
Detection and Tracking of Moving Objects with Real-Time Onboard Vision System
NASA Astrophysics Data System (ADS)
Erokhin, D. Y.; Feldman, A. B.; Korepanov, S. E.
2017-05-01
Detection of moving objects in a video sequence received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is to develop a set of algorithms that can detect and track moving objects in a real-time computer vision system. This set includes three main parts: an algorithm for estimation and compensation of geometric transformations between images, an algorithm for detection of moving objects, and an algorithm for tracking the detected objects and predicting their positions. The results can be applied to onboard vision systems of aircraft, including small and unmanned aircraft.
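A hedged sketch of the first two stages described above (global motion compensation, then detection by differencing), assuming OpenCV; the ORB feature matching, RANSAC homography, and all thresholds are stand-in assumptions, not the authors' design.

```python
import cv2
import numpy as np

def detect_moving(prev_gray, curr_gray, diff_thresh=25, min_area=50):
    """Compensate sensor motion with a homography, then difference frames.

    Returns bounding boxes of candidate moving objects. Illustrative only;
    a tracker/predictor stage would consume these detections downstream.
    """
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    # Warp the previous frame into the current frame's geometry so that
    # static background cancels in the difference image.
    h, w = curr_gray.shape
    warped = cv2.warpPerspective(prev_gray, H, (w, h))
    diff = cv2.absdiff(curr_gray, warped)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```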
DALMATIAN: An Algorithm for Automatic Cell Detection and Counting in 3D.
Shuvaev, Sergey A; Lazutkin, Alexander A; Kedrov, Alexander V; Anokhin, Konstantin V; Enikolopov, Grigori N; Koulakov, Alexei A
2017-01-01
Current 3D imaging methods, including optical projection tomography, light-sheet microscopy, block-face imaging, and serial two-photon tomography, enable visualization of large samples of biological tissue. The large volumes of data obtained at high resolution require the development of automatic image processing techniques, such as algorithms for automatic cell detection or, more generally, point-like object detection. Current approaches to automated cell detection suffer from difficulties in detecting particular cell types, cell populations of different brightness, non-uniformly stained cells, and overlapping cells. In this study, we present a set of algorithms for robust automatic cell detection in 3D. Our algorithms are suitable for, but not limited to, whole brain regions and individual brain sections. We used a watershed procedure to split regional maxima representing overlapping cells. We developed a bootstrap Gaussian fit procedure to evaluate the statistical significance of detected cells. We compared the cell detection quality of our algorithm with that of other software using 42 samples representing 6 staining and imaging techniques. The results provided by our algorithm matched manual expert quantification with signal-to-noise-dependent confidence, including for samples with cells of different brightness, non-uniformly stained cells, and overlapping cells, in both whole brain regions and individual tissue sections. Our algorithm provided the best cell detection quality among the tested free and commercial software.
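A sketch of the watershed step mentioned above (splitting regional maxima that represent overlapping cells), assuming scipy and scikit-image; the bootstrap Gaussian-fit significance test is omitted, and the parameters are illustrative. Shown in 2D for brevity; the same calls accept 3D arrays.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_touching_cells(binary_mask, min_sep=5):
    """Split merged cell blobs with a distance-transform watershed.

    Each local maximum of the distance-to-background map seeds one cell;
    flooding the inverted distance map separates touching cells.
    """
    dist = ndimage.distance_transform_edt(binary_mask)
    blob_labels, _ = ndimage.label(binary_mask)
    coords = peak_local_max(dist, min_distance=min_sep, labels=blob_labels)
    markers = np.zeros(dist.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    # Flood from the markers over the inverted distance map.
    return watershed(-dist, markers, mask=binary_mask)
```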
A New Pivoting and Iterative Text Detection Algorithm for Biomedical Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Songhua; Krauthammer, Prof. Michael
2010-01-01
There is interest in expanding the reach of literature mining to include the analysis of biomedical images, which often contain a paper's key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating its performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. We demonstrate that our projection histogram-based text detection approach is well suited for text detection in biomedical images, and that the iterative application of the algorithm boosts performance to an F score of .60. We provide a C++ implementation of our algorithm freely available for academic use.
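The following numpy-only sketch illustrates the projection-histogram idea: count foreground pixels per row or column of a binarized image, split at empty gaps, and pivot between the two axes iteratively. The recursion depth and gap threshold are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def runs(profile, min_gap=3):
    """Split a 1-D ink profile into (start, end) runs separated by gaps."""
    idx = np.flatnonzero(profile > 0)
    if idx.size == 0:
        return []
    # A new run starts wherever the gap to the previous ink line >= min_gap.
    breaks = np.flatnonzero(np.diff(idx) >= min_gap)
    starts = np.r_[idx[0], idx[breaks + 1]]
    ends = np.r_[idx[breaks], idx[-1]] + 1
    return list(zip(starts, ends))

def detect_text_boxes(binary, min_gap=3, depth=4):
    """Alternately split candidate boxes on row and column projections."""
    boxes = [(0, binary.shape[0], 0, binary.shape[1])]
    for d in range(depth):
        axis = d % 2                     # 0: split rows, 1: split columns
        new_boxes = []
        for r0, r1, c0, c1 in boxes:
            sub = binary[r0:r1, c0:c1]
            profile = sub.sum(axis=1 - axis)
            for s, e in runs(profile, min_gap):
                if axis == 0:
                    new_boxes.append((r0 + s, r0 + e, c0, c1))
                else:
                    new_boxes.append((r0, r1, c0 + s, c0 + e))
        boxes = new_boxes
    return boxes
```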
Robust crop and weed segmentation under uncontrolled outdoor illumination.
Jeon, Hong Y; Tian, Lei F; Zhu, Heping
2011-01-01
An image processing algorithm for detecting individual weeds was developed and evaluated. The weed detection process included normalized excess green conversion, statistical threshold value estimation, adaptive image segmentation, median filtering, morphological feature calculation, and an Artificial Neural Network (ANN). The developed algorithm was validated for its ability to identify and detect weeds and crop plants under uncontrolled outdoor illumination. A field robot with a machine vision system captured field images under outdoor illumination, and the image processing algorithm processed them automatically without manual adjustment. The errors of the algorithm when processing 666 field images ranged from 2.1 to 2.9%. The ANN correctly detected 72.6% of crop plants among the identified plants and considered the rest to be weeds. However, the ANN identification rate for crop plants improved to 95.1% after the error sources in the algorithm were addressed. The developed weed detection and image processing algorithm provides a novel method to identify plants against a soil background under uncontrolled outdoor illumination, and to differentiate weeds from crop plants. Thus, the proposed machine vision and processing algorithm may be useful for outdoor applications including plant-specific direct applications (PSDA).
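A sketch of the first two stages listed above: the normalized excess green index followed by automatic thresholding. Otsu's method stands in for the paper's statistical threshold estimation; the subsequent ANN crop/weed discrimination is not shown.

```python
import numpy as np
from skimage.filters import threshold_otsu

def segment_plants(rgb):
    """Segment vegetation from soil with normalized excess green + Otsu.

    Chromaticity normalization makes the index fairly robust to
    illumination changes; the threshold adapts to each image.
    """
    rgb = rgb.astype(float)
    total = rgb.sum(axis=2) + 1e-9
    r, g, b = (rgb[..., i] / total for i in range(3))
    exg = 2.0 * g - r - b              # normalized excess green index
    t = threshold_otsu(exg)            # image-adaptive threshold
    return exg > t                     # True where vegetation
```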
Text Extraction from Scene Images by Character Appearance and Structure Modeling
Yi, Chucai; Tian, Yingli
2012-01-01
In this paper, we propose a novel algorithm to detect text information from natural scene images. Scene text classification and detection are still open research topics. Our proposed algorithm is able to model both character appearance and structure to generate representative and discriminative text descriptors. The contributions of this paper include three aspects: 1) a new character appearance model by a structure correlation algorithm which extracts discriminative appearance features from detected interest points of character samples; 2) a new text descriptor based on structons and correlatons, which model character structure by structure differences among character samples and structure component co-occurrence; and 3) a new text region localization method by combining color decomposition, character contour refinement, and string line alignment to localize character candidates and refine detected text regions. We perform three groups of experiments to evaluate the effectiveness of our proposed algorithm, including text classification, text detection, and character identification. The evaluation results on benchmark datasets demonstrate that our algorithm achieves the state-of-the-art performance on scene text classification and detection, and significantly outperforms the existing algorithms for character identification. PMID:23316111
A New Pivoting and Iterative Text Detection Algorithm for Biomedical Images
Xu, Songhua; Krauthammer, Michael
2010-01-01
There is interest in expanding the reach of literature mining to include the analysis of biomedical images, which often contain a paper’s key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating its performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. In this paper, we demonstrate that a projection histogram-based text detection approach is well suited for text detection in biomedical images, with an F score of .60, and performs better than comparable text detection approaches. Further, we show that iterative application of the algorithm boosts overall detection performance. A C++ implementation of our algorithm is freely available through email request for academic use. PMID:20887803
Nonstationary EO/IR Clutter Suppression and Dim Object Tracking
NASA Astrophysics Data System (ADS)
Tartakovsky, A.; Brown, A.; Brown, J.
2010-09-01
We develop and evaluate the performance of advanced algorithms which provide significantly improved capabilities for automated detection and tracking of ballistic and flying dim objects in the presence of highly structured, intense clutter. Applications include ballistic missile early warning, midcourse tracking, trajectory prediction, and resident space object detection and tracking. The set of algorithms includes, in particular, adaptive spatiotemporal clutter estimation-suppression and nonlinear-filtering-based multiple-object track-before-detect. These algorithms are suitable for integration into geostationary, highly elliptical, or low earth orbit scanning or staring sensor suites, and are based on data-driven processing that adapts to real-world clutter backgrounds, including celestial, earth limb, or terrestrial clutter. In many scenarios of interest, e.g., for highly elliptical and, especially, low earth orbits, the resulting clutter is highly nonstationary, posing a significant challenge for clutter suppression to or below sensor noise levels, which is essential for dim object detection and tracking. We demonstrate the success of the developed algorithms using semi-synthetic and real data. In particular, our algorithms are shown to be capable of detecting and tracking point objects at signal-to-clutter levels down to 1/1000 and signal-to-noise levels down to 1/4.
Automated detection of hospital outbreaks: A systematic review of methods.
Leclère, Brice; Buckeridge, David L; Boëlle, Pierre-Yves; Astagneau, Pascal; Lepelletier, Didier
2017-01-01
Several automated algorithms for epidemiological surveillance in hospitals have been proposed. However, the usefulness of these methods to detect nosocomial outbreaks remains unclear. The goal of this review was to describe outbreak detection algorithms that have been tested within hospitals, consider how they were evaluated, and synthesize their results. We developed a search query using keywords associated with hospital outbreak detection and searched the MEDLINE database. To ensure the highest sensitivity, no limitations were initially imposed on publication languages and dates, although we subsequently excluded studies published before 2000. Every study that described a method to detect outbreaks within hospitals was included, without any exclusion based on study design. Additional studies were identified through citations in retrieved studies. Twenty-nine studies were included. The detection algorithms were grouped into 5 categories: simple thresholds (n = 6), statistical process control (n = 12), scan statistics (n = 6), traditional statistical models (n = 6), and data mining methods (n = 4). The evaluation of the algorithms was often solely descriptive (n = 15), but more complex epidemiological criteria were also investigated (n = 10). The performance measures varied widely between studies: e.g., the sensitivity of an algorithm in a real-world setting could vary between 17% and 100%. Even if outbreak detection algorithms are useful complementary tools for traditional surveillance, the heterogeneity in results among published studies does not support quantitative synthesis of their performance. A standardized framework should be followed when evaluating outbreak detection methods to allow comparison of algorithms across studies and synthesis of results.
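To make the "statistical process control" category above concrete, here is a hedged sketch of a one-sided CUSUM detector over daily infection counts. The Poisson-like spread assumption and the k and h tuning values are illustrative, not drawn from any of the reviewed studies.

```python
import numpy as np

def cusum_alarm(counts, baseline_mean, k=0.5, h=5.0):
    """One-sided CUSUM over daily infection counts.

    Standardized excesses over the baseline accumulate; an alarm is
    raised when the running sum crosses h, then the sum resets.
    """
    sigma = np.sqrt(baseline_mean) + 1e-9   # Poisson-like spread assumption
    s, alarms = 0.0, []
    for t, c in enumerate(counts):
        z = (c - baseline_mean) / sigma
        s = max(0.0, s + z - k)              # accumulate only upward drift
        if s > h:
            alarms.append(t)
            s = 0.0                          # reset after signalling
    return alarms

# Example: a cluster of cases on days 4-6 trips the alarm.
print(cusum_alarm([2, 3, 1, 2, 9, 11, 12, 3], baseline_mean=2.0))
```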
Advanced detection, isolation and accommodation of sensor failures: Real-time evaluation
NASA Technical Reports Server (NTRS)
Merrill, Walter C.; Delaat, John C.; Bruton, William M.
1987-01-01
The objective of the Advanced Detection, Isolation, and Accommodation (ADIA) Program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines by using analytical redundancy to detect sensor failures. The results of a real-time hybrid computer evaluation of the ADIA algorithm are presented. Minimum detectable levels of sensor failures for an F100 engine control system are determined. Also included are details of the microprocessor implementation of the algorithm as well as a description of the algorithm itself.
Lesion Detection in CT Images Using Deep Learning Semantic Segmentation Technique
NASA Astrophysics Data System (ADS)
Kalinovsky, A.; Liauchuk, V.; Tarasau, A.
2017-05-01
In this paper, the problem of automatic detection of tuberculosis lesions in 3D lung CT images is considered as a benchmark for testing algorithms based on the modern concept of Deep Learning. For training and testing the algorithms, an in-house dataset of 338 3D CT scans of tuberculosis patients with manually labelled lesions was used. Algorithms based on Deep Convolutional Networks were implemented and applied in three different ways: slice-wise lesion detection in 2D images using semantic segmentation, slice-wise lesion detection in 2D images using a sliding-window technique, and straightforward detection of lesions via semantic segmentation in whole 3D CT scans. The algorithms demonstrate superior performance compared to algorithms based on conventional image analysis methods.
Radar Detection of Marine Mammals
2010-09-30
associative tracker using the Munkres algorithm was used. This was then expanded to include a track-before-detect algorithm, the Bayesian Field...small, slow-moving objects (i.e., whales). In order to address the third concern (M2 mode), we have tested using a track-before-detect tracker termed
Clustering analysis of moving target signatures
NASA Astrophysics Data System (ADS)
Martone, Anthony; Ranney, Kenneth; Innocenti, Roberto
2010-04-01
Previously, we developed a moving target indication (MTI) processing approach to detect and track slow-moving targets inside buildings, which successfully detected moving targets (MTs) in data collected by a low-frequency, ultra-wideband radar. Our MTI algorithms include change detection, automatic target detection (ATD), clustering, and tracking. The MTI algorithms can be implemented in a real-time or near-real-time system; however, a person-in-the-loop is needed to select input parameters for the clustering algorithm. Specifically, the number of clusters to input into the clustering algorithm is unknown and requires manual selection. A critical need exists to automate all aspects of the MTI processing formulation. In this paper, we investigate two techniques that automatically determine the number of clusters: the adaptive knee-point (KP) algorithm and the recursive pixel finding (RPF) algorithm. The KP algorithm is based on a well-known heuristic approach for determining the number of clusters. The RPF algorithm is analogous to the pixel-labeling procedure used in image processing. Both algorithms are used to analyze the false alarm and detection rates for three operational scenarios of personnel walking inside wood and cinderblock buildings.
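A common form of the knee-point heuristic referenced above is sketched below: sweep the candidate number of clusters, record the k-means inertia, and pick the k whose point on the normalized inertia curve lies farthest from the chord joining its endpoints. This is a generic heuristic in the spirit of the KP algorithm, assuming scikit-learn, not the paper's exact formulation.

```python
import numpy as np
from sklearn.cluster import KMeans

def knee_point_k(X, k_max=10):
    """Pick the number of clusters at the 'knee' of the inertia curve."""
    ks = np.arange(1, k_max + 1)
    inertia = np.array([KMeans(n_clusters=k, n_init=10).fit(X).inertia_
                        for k in ks])
    # Normalize both axes to [0, 1] so neither dominates the geometry.
    x = (ks - ks[0]) / (ks[-1] - ks[0])
    y = (inertia - inertia[-1]) / (inertia[0] - inertia[-1] + 1e-12)
    # After scaling, the chord between the endpoints is the line x + y = 1;
    # the knee maximizes the distance to that chord.
    d = np.abs(x + y - 1.0) / np.sqrt(2.0)
    return int(ks[np.argmax(d)])
```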
Rommens, Nicole; Geertsema, Evelien; Jansen Holleboom, Lisanne; Cox, Fieke; Visser, Gerhard
2018-05-11
User safety and the quality of diagnostics in the epilepsy monitoring unit (EMU) depend on the reaction to seizures. Online seizure detection might improve this. While good sensitivity and specificity are reported, the added value above staff response is unclear. We ascertained the added value of two electroencephalograph (EEG) seizure detection algorithms in terms of additional detected seizures or faster detection time. EEG-video seizure recordings of people admitted to an EMU over one year were included, with a maximum of two seizures per subject. All recordings were retrospectively analyzed using Encevis EpiScan and BESA Epilepsy. Detection sensitivity and latency of the algorithms were compared to staff responses. False positive rates were estimated on 30 uninterrupted recordings (roughly 24 h per subject) of consecutive subjects admitted to the EMU. The EEG-video recordings included 188 seizures. The response rate of staff was 67%, of Encevis 67%, and of BESA Epilepsy 65%. Of the 62 seizures missed by staff, 66% were recognized by Encevis and 39% by BESA Epilepsy. The median latency was 31 s (staff), 10 s (Encevis), and 14 s (BESA Epilepsy). After correcting for walking time from the observation room to the subject, both algorithms detected faster than staff for 65% of detected seizures. The full recordings included 617 h of EEG. Encevis had a median false positive rate of 4.9 per 24 h and BESA Epilepsy of 2.1 per 24 h. EEG-video seizure detection algorithms may improve the reaction to seizures by increasing the total number of seizures detected and the speed of detection. The false positive rate is feasible for use in a clinical situation. Implementation of these algorithms might result in faster diagnostic testing and better observation during seizures. Copyright © 2018. Published by Elsevier Inc.
Detection of person borne IEDs using multiple cooperative sensors
NASA Astrophysics Data System (ADS)
MacIntosh, Scott; Deming, Ross; Hansen, Thorkild; Kishan, Neel; Tang, Ling; Shea, Jing; Lang, Stephen
2011-06-01
The use of multiple cooperative sensors for the detection of person-borne IEDs is investigated. The purpose of the effort is to evaluate the performance benefits of adding multiple sensor data streams to an aided threat detection algorithm, and to provide a quantitative analysis of which sensor data combinations improve overall detection performance. Testing includes both mannequins and human subjects with simulated suicide bomb devices of various configurations, materials, sizes, and metal content. Aided threat recognition algorithms are being developed to compare the detection performance of individual sensors against combined, fused sensor inputs. Sensors investigated include active and passive millimeter wave imaging systems, passive infrared, 3-D profiling sensors, and acoustic imaging. The paper describes the experimental set-up and outlines the methodology behind a decision fusion algorithm based on the concept of a "body model".
Algorithmic detectability threshold of the stochastic block model
NASA Astrophysics Data System (ADS)
Kawamoto, Tatsuro
2018-03-01
The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.
Detection of Coronal Mass Ejections Using Multiple Features and Space-Time Continuity
NASA Astrophysics Data System (ADS)
Zhang, Ling; Yin, Jian-qin; Lin, Jia-ben; Feng, Zhi-quan; Zhou, Jin
2017-07-01
Coronal Mass Ejections (CMEs) release tremendous amounts of energy in the solar system, which has an impact on satellites, power facilities, and wireless transmission. To effectively detect a CME in Large Angle Spectrometric Coronagraph (LASCO) C2 images, we propose a novel algorithm to locate the suspected CME regions, using the Extreme Learning Machine (ELM) method and taking into account grayscale and texture features. Furthermore, space-time continuity is used in the detection algorithm to exclude false CME regions. The algorithm includes three steps: i) define the feature vector, which contains textural and grayscale features of a running difference image; ii) design the detection algorithm based on the ELM method according to the feature vector; iii) improve the detection accuracy rate by using a space-time continuity decision rule. Experimental results show the efficiency and superiority of the proposed algorithm in the detection of CMEs compared with other traditional methods. In addition, our algorithm is insensitive to most noise.
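The ELM classification step named above can be summarized in a few lines: random, fixed input weights and a closed-form least-squares solve for the output layer. The sketch below is a generic ELM, with feature extraction (the grayscale/texture vectors of step i) assumed done upstream; the hidden-layer size and label convention are assumptions.

```python
import numpy as np

def train_elm(X, y, hidden=200, seed=0):
    """Train a minimal Extreme Learning Machine classifier.

    X: (n_samples, n_features) feature vectors; y: +/-1 labels (or a
    one-hot matrix). Only the output weights are learned, which is what
    makes ELM training fast.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], hidden))
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)                        # random hidden layer
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    # A region would be flagged as a suspected CME when the score > 0.
    return np.tanh(X @ W + b) @ beta
```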
Automatic detection of ECG cable interchange by analyzing both morphology and interlead relations.
Han, Chengzong; Gregg, Richard E; Feild, Dirk Q; Babaeizadeh, Saeed
2014-01-01
ECG cable interchange can generate erroneous diagnoses. For algorithms detecting ECG cable interchange, high specificity is required to maintain a low total false positive rate because the prevalence of interchange is low. In this study, we propose and evaluate an improved algorithm for automatic detection and classification of ECG cable interchange. The algorithm was developed by using both ECG morphology information and redundancy information. ECG morphology features included QRS-T and P-wave amplitude, frontal axis and clockwise vector loop rotation. The redundancy features were derived based on the EASI™ lead system transformation. The classification was implemented using linear support vector machine. The development database came from multiple sources including both normal subjects and cardiac patients. An independent database was used to test the algorithm performance. Common cable interchanges were simulated by swapping either limb cables or precordial cables. For the whole validation database, the overall sensitivity and specificity for detecting precordial cable interchange were 56.5% and 99.9%, and the sensitivity and specificity for detecting limb cable interchange (excluding left arm-left leg interchange) were 93.8% and 99.9%. Defining precordial cable interchange or limb cable interchange as a single positive event, the total false positive rate was 0.7%. When the algorithm was designed for higher sensitivity, the sensitivity for detecting precordial cable interchange increased to 74.6% and the total false positive rate increased to 2.7%, while the sensitivity for detecting limb cable interchange was maintained at 93.8%. The low total false positive rate was maintained at 0.6% for the more abnormal subset of the validation database including only hypertrophy and infarction patients. The proposed algorithm can detect and classify ECG cable interchanges with high specificity and low total false positive rate, at the cost of decreased sensitivity for certain precordial cable interchanges. The algorithm could also be configured for higher sensitivity for different applications where a lower specificity can be tolerated. Copyright © 2014 Elsevier Inc. All rights reserved.
System and method for resolving gamma-ray spectra
Gentile, Charles A.; Perry, Jason; Langish, Stephen W.; Silber, Kenneth; Davis, William M.; Mastrovito, Dana
2010-05-04
A system for identifying radionuclide emissions is described. The system includes at least one processor for processing output signals from a radionuclide detecting device, at least one training algorithm run by the at least one processor for analyzing data derived from at least one set of known sample data from the output signals, at least one classification algorithm derived from the training algorithm for classifying unknown sample data, wherein the at least one training algorithm analyzes the at least one sample data set to derive at least one rule used by said classification algorithm for identifying at least one radionuclide emission detected by the detecting device.
Novel Hierarchical Fall Detection Algorithm Using a Multiphase Fall Model.
Hsieh, Chia-Yeh; Liu, Kai-Chun; Huang, Chih-Ning; Chu, Woei-Chyn; Chan, Chia-Tai
2017-02-08
Falls are the primary cause of accidents for the elderly in the living environment. Reducing hazards in the living environment and performing exercises for training balance and muscles are the common strategies for fall prevention. However, falls cannot be avoided completely; fall detection provides an alarm that can decrease injuries or death caused by the lack of rescue. The automatic fall detection system has opportunities to provide real-time emergency alarms for improving the safety and quality of home healthcare services. Two common technical challenges are also tackled in order to provide a reliable fall detection algorithm, including variability and ambiguity. We propose a novel hierarchical fall detection algorithm involving threshold-based and knowledge-based approaches to detect a fall event. The threshold-based approach efficiently supports the detection and identification of fall events from continuous sensor data. A multiphase fall model is utilized, including free fall, impact, and rest phases for the knowledge-based approach, which identifies fall events and has the potential to deal with the aforementioned technical challenges of a fall detection system. Seven kinds of falls and seven types of daily activities arranged in an experiment are used to explore the performance of the proposed fall detection algorithm. The overall performances of the sensitivity, specificity, precision, and accuracy using a knowledge-based algorithm are 99.79%, 98.74%, 99.05% and 99.33%, respectively. The results show that the proposed novel hierarchical fall detection algorithm can cope with the variability and ambiguity of the technical challenges and fulfill the reliability, adaptability, and flexibility requirements of an automatic fall detection system with respect to the individual differences.
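The threshold-based pass over the multiphase model described above can be sketched as a search for a free-fall dip, a following impact spike, and a quiet rest interval in the acceleration magnitude. The threshold values and window lengths below are common illustrative choices, not those tuned in the study.

```python
import numpy as np

def detect_fall(acc, fs=100, free_g=0.4, impact_g=2.5, rest_std=0.15,
                rest_window=1.0):
    """Threshold pass over a free-fall / impact / rest multiphase model.

    acc: (n_samples, 3) accelerometer data in units of g.
    Returns True when all three phases occur in order.
    """
    mag = np.linalg.norm(acc, axis=1)            # |a| from 3-axis samples
    n_rest = int(rest_window * fs)
    for t in np.flatnonzero(mag < free_g):       # candidate free-fall onsets
        window = mag[t:t + int(fs)]              # look ahead up to 1 s
        hits = np.flatnonzero(window > impact_g) # impact spike?
        if hits.size == 0:
            continue
        after = mag[t + hits[0]:t + hits[0] + n_rest]
        # Rest phase: magnitude settles with little variation.
        if after.size == n_rest and np.std(after) < rest_std:
            return True
    return False
```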
Turbofan engine demonstration of sensor failure detection
NASA Technical Reports Server (NTRS)
Merrill, Walter C.; Delaat, John C.; Abdelwahab, Mahmood
1991-01-01
In this paper, the results of a full-scale engine demonstration of a sensor failure detection algorithm are presented. The algorithm detects, isolates, and accommodates sensor failures using analytical redundancy. The experimental hardware, including the F100 engine, is described. Demonstration results were obtained over a large portion of a typical flight envelope for the F100 engine. They include both subsonic and supersonic conditions at both medium and full, non-afterburning, power. Estimated accuracy, minimum detectable levels of sensor failures, and failure accommodation performance for an F100 turbofan engine control system are discussed.
Wang, Wei; Song, Wei-Guo; Liu, Shi-Xing; Zhang, Yong-Ming; Zheng, Hong-Yang; Tian, Wei
2011-04-01
An improved cloud detection method combining K-means clustering and a multi-spectral threshold approach is described. On the basis of landmark spectrum analysis, MODIS data are initially categorized into two major classes by the K-means method. The first class includes clouds, smoke, and snow; the second class includes vegetation, water, and land. A multi-spectral threshold detection is then applied to eliminate interference such as smoke and snow from the first class. The method was tested with MODIS data acquired at different times under different underlying surface conditions. Visual inspection of the algorithm's performance showed that it can effectively detect small areas of cloud pixels and exclude the interference of the underlying surface, which provides a good foundation for a subsequent fire detection approach.
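A hedged sketch of the two-stage scheme above: an unsupervised two-class split followed by spectral pruning. The band roles and threshold values are placeholders, not the paper's calibrated settings; scikit-learn's K-means stands in for the clustering step.

```python
import numpy as np
from sklearn.cluster import KMeans

def detect_cloud(bands):
    """Two-stage cloud mask: K-means split, then multi-spectral thresholds.

    bands: (H, W, C) array of calibrated channels; band 0 is treated as a
    visible reflectance and band 1 as an 11-um brightness temperature (K).
    """
    h, w, c = bands.shape
    X = bands.reshape(-1, c)
    # Stage 1: split all pixels into two gross classes.
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
    # The class with higher mean visible reflectance holds cloud/smoke/snow.
    bright_id = int(np.argmax([X[labels == i, 0].mean() for i in (0, 1)]))
    bright = labels.reshape(h, w) == bright_id
    # Stage 2: spectral tests prune smoke/snow from the bright class
    # (placeholder thresholds: bright in the visible AND cold at 11 um).
    vis, bt11 = bands[..., 0], bands[..., 1]
    return bright & (vis > 0.3) & (bt11 < 285.0)
```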
Advances in algorithm fusion for automated sea mine detection and classification
NASA Astrophysics Data System (ADS)
Dobeck, Gerald J.; Cobb, J. Tory
2002-11-01
Along with other sensors, the Navy uses high-resolution sonar to detect and classify sea mines in mine-hunting operations. Scientists and engineers have devoted substantial effort to the development of automated detection and classification (D/C) algorithms for these high-resolution systems. Several factors spurred these efforts, including: (1) aids for operators to reduce work overload; (2) more optimal use of all available data; and (3) the introduction of unmanned minehunting systems. The environments where sea mines are typically laid (harbor areas, shipping lanes, and the littorals) give rise to many false alarms caused by natural, biologic, and manmade clutter. The objective of the automated D/C algorithms is to eliminate most of these false alarms while maintaining a very high probability of mine detection and classification (PdPc). In recent years, the benefits of fusing the outputs of multiple D/C algorithms (Algorithm Fusion) have been studied. To date, the results have been remarkable, including reliable robustness to new environments. In this paper a brief history of existing Algorithm Fusion technology and some techniques recently used to improve performance are presented. An exploration of new developments is presented in conclusion.
Robust automatic line scratch detection in films.
Newson, Alasdair; Almansa, Andrés; Gousseau, Yann; Pérez, Patrick
2014-03-01
Line scratch detection in old films is a particularly challenging problem due to the variable spatiotemporal characteristics of this defect. Some of the main problems include sensitivity to noise and texture, and false detections due to thin vertical structures belonging to the scene. We propose a robust and automatic algorithm for frame-by-frame line scratch detection in old films, as well as a temporal algorithm for the filtering of false detections. In the frame-by-frame algorithm, we relax some of the hypotheses used in previous algorithms in order to detect a wider variety of scratches. This step's robustness and lack of external parameters are ensured by the combined use of an a contrario methodology and local statistical estimation. In this manner, over-detection in textured or cluttered areas is greatly reduced. The temporal filtering algorithm eliminates false detections due to thin vertical structures by exploiting the coherence of their motion with that of the underlying scene. Experiments demonstrate the ability of the resulting detection procedure to deal with difficult situations, particularly in the presence of noise, texture, and slanted or partial scratches. Comparisons show significant advantages over previous work.
Biological network motif detection and evaluation
2011-01-01
Background: Molecular-level biological data can be assembled into system-level data as biological networks. Network motifs are defined as over-represented small connected subgraphs in networks, and they have been used for many biological applications. Since network motif discovery involves computationally challenging processes, previous algorithms have focused on computational efficiency. However, we believe that the biological quality of network motifs is also very important. Results: We define biological network motifs as biologically significant subgraphs; traditional network motifs are referred to as structural network motifs in this paper. We develop five algorithms, namely EDGEGO-BNM, EDGEBETWEENNESS-BNM, NMF-BNM, NMFGO-BNM, and VOLTAGE-BNM, for efficient detection of biological network motifs, and introduce several evaluation measures, including motifs included in complexes, motifs included in functional modules, and GO term clustering score. Experimental results show that EDGEGO-BNM and EDGEBETWEENNESS-BNM perform better than existing algorithms, and all of our algorithms are applicable to finding structural network motifs as well. Conclusion: We provide new approaches to finding network motifs in biological networks. Our algorithms efficiently detect biological network motifs and further improve existing algorithms to find high-quality structural network motifs, which would be impossible using existing algorithms. The performances of the algorithms are compared based on our new evaluation measures in biological contexts. We believe that our work provides some guidelines for network motif research on biological networks. PMID:22784624
Algorithms for the detection of chewing behavior in dietary monitoring applications
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Helal, Abdelsalam; Mendez-Vasquez, Andres
2009-08-01
The detection of food consumption is key to the implementation of successful behavior modification in support of dietary monitoring and therapy, for example, during the course of controlling obesity, diabetes, or cardiovascular disease. Since the vast majority of humans consume food via mastication (chewing), we have designed an algorithm that automatically detects chewing behaviors in surveillance video of a person eating. Our algorithm first detects the mouth region, then computes the spatiotemporal frequency spectrum of a small perioral region (including the mouth). Spectral data are analyzed to determine the presence of the periodic motion that characterizes chewing. A classifier is then applied to discriminate different types of chewing behaviors. Our algorithm was tested on seven volunteers, whose behaviors included chewing with mouth open, chewing with mouth closed, talking, static face presentation (control case), and moving face presentation. Early test results show that chewing behaviors induce a temporal frequency peak at 0.5 Hz to 2.5 Hz, which is readily detected using a distance-based classifier. Computational cost is analyzed for implementation on embedded processing nodes, for example, in a healthcare sensor network. Complexity analysis emphasizes the relationship between the work and space estimates of the algorithm and its estimated error. It is shown that chewing detection is possible within a computationally efficient, accurate, and subject-independent framework.
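The spectral test above reduces to measuring how much signal power falls in the reported 0.5-2.5 Hz chewing band. A minimal numpy sketch follows; the input signal definition, frame rate, and decision cutoff are assumptions for illustration.

```python
import numpy as np

def chewing_score(roi_motion, fs=30.0, band=(0.5, 2.5)):
    """Score periodic motion of the perioral region in the chewing band.

    roi_motion: 1-D time series summarizing the mouth region per frame
    (e.g., mean intensity or mean optical-flow magnitude). Returns the
    fraction of non-DC power concentrated in the 0.5-2.5 Hz band.
    """
    x = roi_motion - np.mean(roi_motion)        # remove the DC component
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum() / (spectrum[1:].sum() + 1e-12)

# e.g., classify a clip as chewing when chewing_score(sig) > 0.5
# (the 0.5 cutoff is an assumed, not reported, value).
```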
Optimization of Contrast Detection Power with Probabilistic Behavioral Information
Cordes, Dietmar; Herzmann, Grit; Nandy, Rajesh; Curran, Tim
2012-01-01
Recent progress in the experimental design for event-related fMRI experiments made it possible to find the optimal stimulus sequence for maximum contrast detection power using a genetic algorithm. In this study, a novel algorithm is proposed for optimization of contrast detection power by including probabilistic behavioral information, based on pilot data, in the genetic algorithm. As a particular application, a recognition memory task is studied and the design matrix optimized for contrasts involving the familiarity of individual items (pictures of objects) and the recollection of qualitative information associated with the items (left/right orientation). Optimization of contrast efficiency is a complicated issue whenever subjects’ responses are not deterministic but probabilistic. Contrast efficiencies are not predictable unless behavioral responses are included in the design optimization. However, available software for design optimization does not include options for probabilistic behavioral constraints. If the anticipated behavioral responses are included in the optimization algorithm, the design is optimal for the assumed behavioral responses, and the resulting contrast efficiency is greater than what either a block design or a random design can achieve. Furthermore, improvements of contrast detection power depend strongly on the behavioral probabilities, the perceived randomness, and the contrast of interest. The present genetic algorithm can be applied to any case in which fMRI contrasts are dependent on probabilistic responses that can be estimated from pilot data. PMID:22326984
Toward detecting deception in intelligent systems
NASA Astrophysics Data System (ADS)
Santos, Eugene, Jr.; Johnson, Gregory, Jr.
2004-08-01
Contemporary decision makers often must choose a course of action using knowledge from several sources. Knowledge may be provided from many diverse sources including electronic sources such as knowledge-based diagnostic or decision support systems or through data mining techniques. As the decision maker becomes more dependent on these electronic information sources, detecting deceptive information from these sources becomes vital to making a correct, or at least more informed, decision. This applies to unintentional disinformation as well as intentional misinformation. Our ongoing research focuses on employing models of deception and deception detection from the fields of psychology and cognitive science to these systems as well as implementing deception detection algorithms for probabilistic intelligent systems. The deception detection algorithms are used to detect, classify and correct attempts at deception. Algorithms for detecting unexpected information rely upon a prediction algorithm from the collaborative filtering domain to predict agent responses in a multi-agent system.
Detecting an atomic clock frequency anomaly using an adaptive Kalman filter algorithm
NASA Astrophysics Data System (ADS)
Song, Huijie; Dong, Shaowu; Wu, Wenjun; Jiang, Meng; Wang, Weixiong
2018-06-01
The abnormal frequencies of an atomic clock mainly include frequency jumps and frequency drift jumps. Atomic clock frequency anomaly detection is a key technique in time-keeping. The Kalman filter algorithm, as a linear optimal algorithm, has been widely used for real-time detection of abnormal frequency. In order to obtain an optimal state estimation, the observation model and dynamic model of the Kalman filter algorithm should satisfy Gaussian white noise conditions. The detection performance is degraded if anomalies affect the observation model or dynamic model. The idea of the adaptive Kalman filter algorithm, applied to clock frequency anomaly detection, is to use the residuals given by the prediction to build an adaptive factor; the predicted state covariance matrix is then corrected in real time by this adaptive factor. The results show that the model error is reduced and the detection performance is improved. The effectiveness of the algorithm is verified by frequency jump simulations, frequency drift jump simulations, and measured atomic clock data using the chi-square test.
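A scalar sketch of the residual-driven idea above: when the standardized innovation grows too large, the predicted state covariance is inflated by an adaptive factor before the update. The random-walk state model, noise constants, and inflation rule are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

def adaptive_kf(z, q=1e-4, r=1e-2, c0=2.5):
    """Scalar adaptive Kalman filter over clock frequency measurements.

    z: 1-D array of frequency readings. The prediction residual builds
    an adaptive factor that inflates the predicted covariance, so the
    filter re-weights toward the data when anomalies appear.
    """
    x, p = z[0], 1.0
    estimates = []
    for zk in z:
        p_pred = p + q                      # predict (random-walk model)
        resid = zk - x                      # innovation
        ratio = abs(resid) / np.sqrt(p_pred + r)
        if ratio > c0:                      # adaptive factor kicks in
            p_pred *= (ratio / c0) ** 2     # inflate predicted covariance
        k = p_pred / (p_pred + r)           # update
        x = x + k * resid
        p = (1 - k) * p_pred
        estimates.append(x)
    return np.array(estimates)
```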
Sensor Data Quality and Angular Rate Down-Selection Algorithms on SLS EM-1
NASA Technical Reports Server (NTRS)
Park, Thomas; Smith, Austin; Oliver, T. Emerson
2018-01-01
The NASA Space Launch System Block 1 launch vehicle is equipped with an Inertial Navigation System (INS) and multiple Rate Gyro Assemblies (RGA) that are used in the Guidance, Navigation, and Control (GN&C) algorithms. The INS provides the inertial position, velocity, and attitude of the vehicle along with both angular rate and specific force measurements. Additionally, multiple sets of co-located rate gyros supply angular rate data. The collection of angular rate data, taken along the launch vehicle, is used to separate vehicle motion from flexible body dynamics. Since the system architecture uses redundant sensors, the capability was developed to evaluate the health (or validity) of the independent measurements. A suite of Sensor Data Quality (SDQ) algorithms is responsible for assessing the angular rate data from the redundant sensors. When failures are detected, SDQ takes the appropriate action and disqualifies or removes faulted sensors from forward processing. Additionally, the SDQ algorithms contain logic for down-selecting the angular rate data used by the GN&C software from the set of healthy measurements. This paper explores the trades and analyses that were performed in selecting the set of robust fault-detection algorithms included in the GN&C flight software. These trades included both an assessment of hardware-provided health and status data and an evaluation of different algorithms based on time-to-detection, types of failures detected, and probability of false positives. We then provide an overview of the algorithms used for both fault detection and measurement down-selection. We next discuss the role of trajectory design, flexible-body models, and vehicle response to off-nominal conditions in setting the detection thresholds. Lastly, we present lessons learned from software integration and hardware-in-the-loop testing.
Negative Selection Algorithm for Aircraft Fault Detection
NASA Technical Reports Server (NTRS)
Dasgupta, D.; KrishnaKumar, K.; Wong, D.; Berry, M.
2004-01-01
We investigated a real-valued Negative Selection Algorithm (NSA) for fault detection in man-in-the-loop aircraft operation. The detection algorithm uses body-axes angular rate sensory data exhibiting the normal flight behavior patterns, to generate probabilistically a set of fault detectors that can detect any abnormalities (including faults and damages) in the behavior pattern of the aircraft flight. We performed experiments with datasets (collected under normal and various simulated failure conditions) using the NASA Ames man-in-the-loop high-fidelity C-17 flight simulator. The paper provides results of experiments with different datasets representing various failure conditions.
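The following is a minimal real-valued negative selection sketch in the spirit of the NSA described above: random candidate detectors are discarded when they fall too close to any normal ("self") sample, and the survivors flag anomalous behavior. The fixed detector radius and uniform sampling are simplifying assumptions, not the paper's probabilistic generation scheme.

```python
import numpy as np

def generate_detectors(self_data, n_detectors=100, radius=0.1, seed=0):
    """Keep random detectors that lie outside the 'self' (normal) region.

    self_data: (n_samples, n_dims) normal sensor vectors, normalized to
    [0, 1]. Note: generation may take long if self covers most of the
    space; this is an illustrative sketch without a retry bound.
    """
    rng = np.random.default_rng(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = rng.random(self_data.shape[1])
        # Reject candidates that match (cover) any normal sample.
        if np.min(np.linalg.norm(self_data - cand, axis=1)) > radius:
            detectors.append(cand)
    return np.array(detectors)

def is_anomalous(sample, detectors, radius=0.1):
    """A sample is flagged when any detector covers it."""
    return bool(np.min(np.linalg.norm(detectors - sample, axis=1)) <= radius)
```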
Community detection in complex networks by using membrane algorithm
NASA Astrophysics Data System (ADS)
Liu, Chuang; Fan, Linan; Liu, Zhou; Dai, Xiang; Xu, Jiamei; Chang, Baoren
Community detection in complex networks is a key problem of network analysis. In this paper, a new membrane algorithm is proposed to solve the community detection in complex networks. The proposed algorithm is based on membrane systems, which consists of objects, reaction rules, and a membrane structure. Each object represents a candidate partition of a complex network, and the quality of objects is evaluated according to network modularity. The reaction rules include evolutionary rules and communication rules. Evolutionary rules are responsible for improving the quality of objects, which employ the differential evolutionary algorithm to evolve objects. Communication rules implement the information exchanged among membranes. Finally, the proposed algorithm is evaluated on synthetic, real-world networks with real partitions known and the large-scaled networks with real partitions unknown. The experimental results indicate the superior performance of the proposed algorithm in comparison with other experimental algorithms.
Karnowski, T P; Aykac, D; Giancardo, L; Li, Y; Nichols, T; Tobin, K W; Chaum, E
2011-01-01
The automated detection of diabetic retinopathy and other eye diseases in images of the retina has great promise as a low-cost method for broad-based screening. Many systems in the literature which perform automated detection include a quality estimation step and physiological feature detection, including the vascular tree and the optic nerve / macula location. In this work, we study the robustness of an automated disease detection method with respect to the accuracy of the optic nerve location and the quality of the images obtained as judged by a quality estimation algorithm. The detection algorithm features microaneurysm and exudate detection followed by feature extraction on the detected population to describe the overall retina image. Labeled images of retinas ground-truthed to disease states are used to train a supervised learning algorithm to identify the disease state of the retina image and exam set. Under the restrictions of high confidence optic nerve detections and good quality imagery, the system achieves a sensitivity and specificity of 94.8% and 78.7% with area-under-curve of 95.3%. Analysis of the effect of constraining quality and the distinction between mild non-proliferative diabetic retinopathy, normal retina images, and more severe disease states is included.
A new method of real-time detection of changes in periodic data stream
NASA Astrophysics Data System (ADS)
Lyu, Chen; Lu, Guoliang; Cheng, Bin; Zheng, Xiangwei
2017-07-01
The detection of change points in periodic time series is much desired in many practical applications. We present a novel algorithm for this task, which includes two phases: 1) anomaly measurement: on the basis of a typical regression model, we propose a new computation method to measure anomalies in a time series that does not require any reference data from other measurements; 2) change detection: we introduce a new martingale test for detection that can be operated in an unsupervised and nonparametric way. We have conducted extensive experiments to systematically test our algorithm. The results make us believe that our algorithm is directly applicable in many real-world change-point detection applications.
NASA Astrophysics Data System (ADS)
Dobeck, Gerald J.; Cobb, J. Tory
2002-08-01
The high-resolution sonar is one of the principal sensors used by the Navy to detect and classify sea mines in minehunting operations. For such sonar systems, substantial effort has been devoted to the development of automated detection and classification (D/C) algorithms. These have been spurred by several factors including (1) aids for operators to reduce work overload, (2) more optimal use of all available data, and (3) the introduction of unmanned minehunting systems. The environments where sea mines are typically laid (harbor areas, shipping lanes, and the littorals) give rise to many false alarms caused by natural, biologic, and man-made clutter. The objective of the automated D/C algorithms is to eliminate most of these false alarms while still maintaining a very high probability of mine detection and classification (PdPc). In recent years, the benefits of fusing the outputs of multiple D/C algorithms have been studied. We refer to this as Algorithm Fusion. The results have been remarkable, including reliable robustness to new environments. The Quadratic Penalty Function Support Vector Machine (QPFSVM) algorithm to aid in the automated detection and classification of sea mines is introduced in this paper. The QPFSVM algorithm is easy to train, simple to implement, and robust to feature space dimension. Outputs of successive SVM algorithms are cascaded in stages (fused) to improve the Probability of Classification (Pc) and reduce the number of false alarms. Even though our experience has been gained in the area of sea mine detection and classification, the principles described herein are general and can be applied to fusion of any D/C problem (e.g., automated medical diagnosis or automatic target recognition for ballistic missile defense).
Karimi, Mohammad H; Asemani, Davud
2014-05-01
Ceramic and tile industries must include a grading stage to quantify the quality of their products. Human inspection is often used for grading, but an automatic grading system is essential to enhance the quality control and marketing of the products. Since there are six different types of defects, originating from various stages of tile manufacturing lines and having distinct textures and morphologies, many image processing techniques have been proposed for defect detection. In this paper, a survey is made of the pattern recognition and image processing algorithms that have been used to detect surface defects. Each method appears to be limited to detecting some subgroup of defects. The detection techniques may be divided into three main groups: statistical pattern recognition, feature vector extraction, and texture/image classification. Methods such as the wavelet transform, filtering, morphology, and the contourlet transform are more effective for pre-processing tasks. Others, including statistical methods, neural networks, and model-based algorithms, can be applied to extract the surface defects. Statistical methods are often appropriate for identifying large defects such as spots, while techniques such as wavelet processing provide an acceptable response for detecting small defects such as pinholes. A thorough survey is made in this paper of the existing algorithms in each subgroup. Also, the evaluation parameters are discussed, including supervised and unsupervised parameters. Using various performance parameters, different defect detection algorithms are compared and evaluated. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Evaluation of Moving Object Detection Based on Various Input Noise Using Fixed Camera
NASA Astrophysics Data System (ADS)
Kiaee, N.; Hashemizadeh, E.; Zarrinpanjeh, N.
2017-09-01
Detecting and tracking objects in video has been a research area of interest in the field of image processing and computer vision. This paper evaluates the performance of a novel method for object detection in video sequences, which helps establish the advantages of the method in use. The proposed framework compares the percentages of correct and incorrect detections of the algorithm. The method was evaluated with data collected in the field of urban transport, including cars and pedestrians, in a fixed-camera situation. The results show that the accuracy of the algorithm decreases as image resolution is reduced.
Special-effect edit detection using VideoTrails: a comparison with existing techniques
NASA Astrophysics Data System (ADS)
Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.
1998-12-01
Video segmentation plays an integral role in many multimedia applications, such as digital libraries, content management systems, and various other video browsing, indexing, and retrieval systems. Many algorithms for segmentation of video have appeared within the past few years. Most of these algorithms perform well on cuts but yield poor performance on gradual transitions or special-effect edits. A complete video segmentation system must also achieve good performance on special-effect edit detection. In this paper, we compare the performance of our VideoTrails-based algorithms with existing special-effect edit-detection algorithms from the literature. We present results from experiments testing the ability to detect edits in TV programs ranging from commercials to news magazine programs, including diverse special-effect edits that we introduced.
Electro-optic tracking R&D for defense surveillance
NASA Astrophysics Data System (ADS)
Sutherland, Stuart; Woodruff, Chris J.
1995-09-01
Two aspects of work on automatic target detection and tracking for electro-optic (EO) surveillance are described. Firstly, a detection and tracking algorithm test-bed, developed by DSTO and running on a PC under Windows NT, is being used to assess candidate algorithms for unresolved and minimally resolved target detection. The structure of this test-bed is described and examples are given of its user interfaces and outputs. Secondly, the development by Australian industry, under a Defence-funded contract, of a reconfigurable generic track processor (GTP) is outlined. The GTP will include reconfigurable image processing stages and target tracking algorithms. It will be used to demonstrate automatic detection and tracking capabilities to the Australian Defence Force, and to serve as a hardware base for real-time algorithm refinement.
An Automated Energy Detection Algorithm Based on Kurtosis-Histogram Excision
2018-01-01
Passman, Rod S; Rogers, John D; Sarkar, Shantanu; Reiland, Jerry; Reisfeld, Erin; Koehler, Jodi; Mittal, Suneet
2017-07-01
Undersensing of premature ventricular beats and low-amplitude R waves are primary causes of inappropriate bradycardia and pause detections in insertable cardiac monitors (ICMs). The purpose of this study was to develop and validate an enhanced algorithm to reduce inappropriately detected bradycardia and pause episodes. Independent data sets to develop and validate the enhanced algorithm were derived from a database of ICM-detected bradycardia and pause episodes in de-identified patients monitored for unexplained syncope. The original algorithm uses an auto-adjusting sensitivity threshold for R-wave sensing to detect tachycardia and avoid T-wave oversensing. In the enhanced algorithm, a second sensing threshold is used, with a long blanking period and a fixed lower sensitivity threshold, looking for evidence of undersensed signals. Reported data include the percent change in appropriate and inappropriate bradycardia and pause detections as well as changes in episode detection sensitivity and positive predictive value with the enhanced algorithm. The validation data set, from 663 consecutive patients, consisted of 4904 bradycardia episodes (161 patients) and 2582 pause episodes (133 patients), of which 2976 (61%) and 996 (39%), respectively, were appropriately detected. The enhanced algorithm reduced inappropriate bradycardia and pause episodes by 95% and 47%, respectively, with 1.7% and 0.6% reductions in appropriate episodes, respectively. The average episode positive predictive value improved by 62% (P < .001) for bradycardia detection and by 26% (P < .001) for pause detection, with an average relative sensitivity of 95% (P < .001) and 99% (P = .5), respectively. The enhanced dual-sense algorithm for bradycardia and pause detection in ICMs substantially reduced inappropriate episode detection with a minimal reduction in true episode detection. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
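The two-threshold sensing scheme can be illustrated with a toy sketch. Everything below (threshold values, decay rate, blanking length) is an assumption for demonstration and is not the device's implementation:

```python
# Conceptual sketch of dual-threshold R-wave sensing: an auto-adjusting
# primary threshold plus a fixed, lower secondary threshold that looks for
# evidence of undersensed beats. All parameter values are assumptions.
import numpy as np

def dual_sense_rwaves(ecg, fs, primary_frac=0.6, secondary_mv=0.2, blank_s=0.4):
    ecg = np.asarray(ecg, dtype=float)
    peaks, thr = [], 0.5                      # initial primary threshold (mV)
    blank = int(blank_s * fs)                 # blanking after each detection
    i = 0
    while i < len(ecg):
        if ecg[i] >= thr:
            peaks.append(("primary", i))
            thr = primary_frac * ecg[i]       # auto-adjust to recent amplitude
            i += blank
        elif ecg[i] >= secondary_mv:
            peaks.append(("secondary", i))    # possible undersensed beat
            i += blank
        else:
            i += 1
        thr = max(thr * 0.999, secondary_mv)  # slow decay of primary threshold
    return peaks
```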
Pürerfellner, Helmut; Sanders, Prashanthan; Sarkar, Shantanu; Reisfeld, Erin; Reiland, Jerry; Koehler, Jodi; Pokushalov, Evgeny; Urban, Luboš; Dekker, Lukas R C
2017-10-03
Intermittent change in P-wave discernibility during periods of ectopy and sinus arrhythmia is a cause of inappropriate atrial fibrillation (AF) detection in insertable cardiac monitors (ICMs). To address this, we developed and validated an enhanced AF detection algorithm. Atrial fibrillation detection in the Reveal LINQ ICM uses patterns of incoherence in RR intervals and the absence of P-wave evidence over a 2-min period. The enhanced algorithm includes P-wave evidence during RR irregularity, as evidence of sinus arrhythmia or ectopy, to adaptively optimize sensitivity for AF detection. The algorithm was developed and validated using Holter data from the XPECT and LINQ Usability studies, which collected surface electrocardiogram (ECG) and continuous ICM ECG over a 24-48 h period. The algorithm detections were compared with Holter annotations, performed by multiple reviewers, to compute episode and duration detection performance. The validation dataset comprised 3187 h of valid Holter and LINQ recordings from 138 patients, with true AF in 37 patients yielding 108 true AF episodes ≥2 min and 449 h of AF. The enhanced algorithm reduced inappropriately detected episodes by 49% and duration by 66%, with <1% loss in true episodes or duration. The algorithm correctly identified 98.9% of total AF duration and 99.8% of total sinus or non-AF rhythm duration. The algorithm detected 97.2% (99.7% per-patient average) of all AF episodes ≥2 min, and 84.9% (95.3% per-patient average) of detected episodes involved AF. An enhancement that adapts sensitivity for AF detection reduced inappropriately detected episodes and duration with minimal reduction in sensitivity. © The Author 2017. Published by Oxford University Press on behalf of the European Society of Cardiology.
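At its core, the decision combines an RR-irregularity measure over a 2-min window with P-wave evidence. The toy function below captures only that decision shape; the irregularity metric and thresholds are invented for illustration and bear no relation to the validated device algorithm:

```python
# Toy sketch of the decision concept: flag a 2-min window as AF when RR
# intervals are incoherent, unless P-wave evidence suggests sinus
# arrhythmia or ectopy. Metric and thresholds are assumptions.
import numpy as np

def af_in_window(rr_ms, p_wave_evidence, irregularity_thresh=0.1):
    rr = np.asarray(rr_ms, dtype=float)
    drr = np.abs(np.diff(rr))
    irregularity = np.median(drr) / np.median(rr)   # crude incoherence proxy
    if p_wave_evidence and irregularity < 2 * irregularity_thresh:
        return False        # P-waves during irregularity: likely not AF
    return irregularity > irregularity_thresh
```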
Detecting and visualizing weak signatures in hyperspectral data
NASA Astrophysics Data System (ADS)
MacPherson, Duncan James
This thesis evaluates existing techniques for detecting weak spectral signatures in remotely sensed hyperspectral data. Algorithms are presented that successfully detect hard-to-find 'mystery' signatures in unknown cluttered backgrounds. The term 'mystery' is used to describe a scenario where the spectral target and background endmembers are unknown. Sub-pixel analysis and background suppression are used to find deeply embedded signatures, which can be less than 10% of the total signal strength. Existing 'mystery target' detection algorithms are derived and compared. Several techniques are shown to be superior both visually and quantitatively. Detection performance is evaluated using confidence metrics that are developed. A multiple-algorithm approach is shown to improve detection confidence significantly. Although the research focuses on remote sensing applications, the algorithms presented can be applied to a wide variety of fields such as medicine, law enforcement, manufacturing, earth science, food production, and astrophysics. The algorithms are shown to be general and can be applied to both the reflective and emissive parts of the electromagnetic spectrum. The application scope is a broad one and the final results open new opportunities for many specific applications, including land mine detection, pollution and hazardous waste detection, crop abundance calculations, volcanic activity monitoring, detecting diseases in food, automobile or airplane target recognition, cancer detection, mining operations, and extracting galactic gas emissions.
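One standard member of the background-suppression family evaluated in work of this kind is the global RX anomaly detector, which requires no target signature at all. A minimal sketch, standing in for the class rather than reproducing the thesis's detectors:

```python
# Global RX anomaly detector: score each pixel by its squared Mahalanobis
# distance from the scene-wide background statistics.
import numpy as np

def rx_scores(cube):
    """cube: (rows, cols, bands) hyperspectral image."""
    r, c, b = cube.shape
    X = cube.reshape(-1, b)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    d = X - mu
    scores = np.einsum("ij,jk,ik->i", d, cov_inv, d)   # Mahalanobis^2
    return scores.reshape(r, c)
```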
A real time microcomputer implementation of sensor failure detection for turbofan engines
NASA Technical Reports Server (NTRS)
Delaat, John C.; Merrill, Walter C.
1989-01-01
An algorithm was developed which detects, isolates, and accommodates sensor failures using analytical redundancy. The performance of this algorithm was demonstrated on a full-scale F100 turbofan engine. The algorithm was implemented in real time on a microprocessor-based controls computer which includes parallel processing and high-order language programming. Parallel processing was used to achieve the required computational power for the real-time implementation. High-order language programming was used in order to reduce the programming and maintenance costs of the algorithm implementation software. The sensor failure algorithm was combined with an existing multivariable control algorithm to give a complete control implementation with sensor analytical redundancy. The real-time microprocessor implementation of the algorithm, which resulted in the successful completion of the engine demonstration, is described.
Li, Qi; Melton, Kristin; Lingren, Todd; Kirkendall, Eric S; Hall, Eric; Zhai, Haijun; Ni, Yizhao; Kaiser, Megan; Stoutenborough, Laura; Solti, Imre
2014-01-01
Although electronic health records (EHRs) have the potential to provide a foundation for quality and safety algorithms, few studies have measured their impact on automated adverse event (AE) and medical error (ME) detection within the neonatal intensive care unit (NICU) environment. This paper presents two phenotyping AE and ME detection algorithms (ie, IV infiltrations, narcotic medication oversedation and dosing errors) and describes manual annotation of airway management and medication/fluid AEs from NICU EHRs. From 753 NICU patient EHRs from 2011, we developed two automatic AE/ME detection algorithms, and manually annotated 11 classes of AEs in 3263 clinical notes. Performance of the automatic AE/ME detection algorithms was compared to trigger tool and voluntary incident reporting results. AEs in clinical notes were double annotated and consensus achieved under neonatologist supervision. Sensitivity, positive predictive value (PPV), and specificity are reported. Twelve severe IV infiltrates were detected. The algorithm identified one more infiltrate than the trigger tool and eight more than incident reporting. One narcotic oversedation was detected demonstrating 100% agreement with the trigger tool. Additionally, 17 narcotic medication MEs were detected, an increase of 16 cases over voluntary incident reporting. Automated AE/ME detection algorithms provide higher sensitivity and PPV than currently used trigger tools or voluntary incident-reporting systems, including identification of potential dosing and frequency errors that current methods are unequipped to detect. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Improved Snow Mapping Accuracy with Revised MODIS Snow Algorithm
NASA Technical Reports Server (NTRS)
Riggs, George; Hall, Dorothy K.
2012-01-01
The MODIS snow cover products have been used in over 225 published studies. From those reports, and our ongoing analysis, we have learned about the accuracy and errors in the snow products. Revisions have been made in the algorithms to improve the accuracy of snow cover detection in Collection 6 (C6), the next processing/reprocessing of the MODIS data archive planned to start in September 2012. Our objective in the C6 revision of the MODIS snow-cover algorithms and products is to maximize the capability to detect snow cover while minimizing snow detection errors of commission and omission. While the basic snow detection algorithm will not change, new screens will be applied to alleviate snow detection commission and omission errors, and only the fractional snow cover (FSC) will be output (the binary snow cover area (SCA) map will no longer be included).
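The specific C6 screens are not detailed in this abstract; as background, the basic MODIS snow detection rests on the normalized difference snow index (NDSI) computed from a green band and a shortwave-infrared band (MODIS bands 4 and 6). A minimal sketch, with the classic 0.4 cutoff shown purely for illustration:

```python
# NDSI-based snow screen of the kind the MODIS snow algorithm builds on.
# Band choice follows MODIS bands 4 (green) and 6 (SWIR); the 0.4 cutoff is
# the traditional binary-map threshold, used here only for illustration.
import numpy as np

def ndsi_snow(green_b4, swir_b6, thresh=0.4):
    green = np.asarray(green_b4, dtype=float)
    swir = np.asarray(swir_b6, dtype=float)
    ndsi = (green - swir) / (green + swir + 1e-12)   # avoid divide-by-zero
    return ndsi, ndsi > thresh                       # index map, snow mask
```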
Failure detection and isolation analysis of a redundant strapdown inertial measurement unit
NASA Technical Reports Server (NTRS)
Motyka, P.; Landey, M.; Mckern, R.
1981-01-01
In this study, techniques for failure detection and isolation (FDI) algorithms for a dual fail/operational redundant strapdown inertial measurement unit are defined and developed. The FDI techniques chosen include provisions for hard and soft failure detection in the context of flight control and navigation. Analyses were done to determine error detection and switching levels for the inertial navigation system, which is intended for a conventional takeoff or landing (CTOL) operating environment. In addition, investigations of false alarms and missed alarms were included for the FDI techniques developed, along with analyses of filters to be used in conjunction with FDI processing. Two specific FDI algorithms were compared: the generalized likelihood test and the edge vector test. A deterministic digital computer simulation was used to compare and evaluate the algorithms and FDI systems.
NASA Technical Reports Server (NTRS)
Delaat, John C.; Merrill, Walter C.
1990-01-01
The objective of the Advanced Detection, Isolation, and Accommodation Program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines. For this purpose, an algorithm was developed which detects, isolates, and accommodates sensor failures by using analytical redundancy. The performance of this algorithm was evaluated on a real time engine simulation and was demonstrated on a full scale F100 turbofan engine. The real time implementation of the algorithm is described. The implementation used state-of-the-art microprocessor hardware and software, including parallel processing and high order language programming.
Real-time failure control (SAFD)
NASA Technical Reports Server (NTRS)
Panossian, Hagop V.; Kemp, Victoria R.; Eckerling, Sherry J.
1990-01-01
The Real Time Failure Control program involves development of a failure detection algorithm, referred to as the System for Failure and Anomaly Detection (SAFD), for the Space Shuttle Main Engine (SSME). This failure detection approach is signal-based: it entails monitoring SSME measurement signals against predetermined and computed mean values and standard deviations. Twenty-four engine measurements are included in the algorithm, and provisions are made to add more parameters if needed. Six major sections of research are presented: (1) SAFD algorithm development; (2) SAFD simulations; (3) Digital Transient Model failure simulation; (4) closed-loop simulation; (5) current SAFD limitations; and (6) planned enhancements.
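The mean-and-standard-deviation monitoring described above reduces to a few lines. In this sketch the persistence requirement and the k-sigma band are assumptions; the report's actual redline logic is not reproduced:

```python
# Minimal sketch of signal-based monitoring: flag a failure when `persist`
# consecutive samples of a measurement fall outside mean +/- k*std.
import numpy as np

def safd_check(samples, mean, std, k=4.0, persist=3):
    out = np.abs(np.asarray(samples, dtype=float) - mean) > k * std
    run = 0
    for flag in out:
        run = run + 1 if flag else 0   # count consecutive excursions
        if run >= persist:
            return True                # persistent excursion: raise failure
    return False
```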
Bio-ALIRT biosurveillance detection algorithm evaluation.
Siegrist, David; Pavlin, J
2004-09-24
Early detection of disease outbreaks by a medical biosurveillance system relies on two major components: 1) the contribution of early and reliable data sources and 2) the sensitivity, specificity, and timeliness of biosurveillance detection algorithms. This paper describes an effort to assess leading detection algorithms by arranging a common challenge problem and providing a common data set. The objectives of this study were to determine whether automated detection algorithms can reliably and quickly identify the onset of natural disease outbreaks that are surrogates for possible terrorist pathogen releases, and do so at acceptable false-alert rates (e.g., once every 2-6 weeks). Historic de-identified data were obtained from five metropolitan areas over 23 months; these data included International Classification of Diseases, Ninth Revision (ICD-9) codes related to respiratory and gastrointestinal illness syndromes. An outbreak detection group identified and labeled two natural disease outbreaks in these data and provided them to analysts for training of detection algorithms. All outbreaks in the remaining test data were identified but not revealed to the detection groups until after their analyses. The algorithms established a probability of outbreak for each day's counts. The probability of outbreak was assessed as an "actual" alert for different false-alert rates. The best algorithms were able to detect all of the outbreaks at false-alert rates of one every 2-6 weeks, and they were often able to alert on the same day that human investigators had identified as the true start of the outbreak. Because minimal data exist for an actual biologic attack, determining how quickly an algorithm might detect such an attack is difficult. However, application of these algorithms, in combination with other data-analysis methods, to historic outbreak data indicates that biosurveillance techniques for analyzing syndrome counts can rapidly detect seasonal respiratory and gastrointestinal illness outbreaks. Further research is needed to assess the value of electronic data sources for predictive detection. In addition, simulations need to be developed and implemented to better characterize the size and type of biologic attack that can be detected by current methods, by challenging them under different projected operational conditions.
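The paper does not disclose the contestants' internals; as a representative of the class of algorithms that turn daily syndrome counts into alerts, here is a textbook CUSUM detector, with the baseline mean, slack, and decision limit chosen for illustration only:

```python
# Classic one-sided CUSUM on daily syndrome counts: accumulate excesses over
# (baseline_mean + slack) and alert when the sum crosses the limit h.
def cusum_alerts(counts, baseline_mean, slack=1.0, h=5.0):
    s, alerts = 0.0, []
    for day, c in enumerate(counts):
        s = max(0.0, s + (c - baseline_mean - slack))
        if s > h:
            alerts.append(day)
            s = 0.0               # reset after an alert
    return alerts

print(cusum_alerts([10, 11, 9, 10, 15, 18, 20], baseline_mean=10))
```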
A lightweight QRS detector for single lead ECG signals using a max-min difference algorithm.
Pandit, Diptangshu; Zhang, Li; Liu, Chengyu; Chattopadhyay, Samiran; Aslam, Nauman; Lim, Chee Peng
2017-06-01
Detection of the R-peak pertaining to the QRS complex of an ECG signal plays an important role in the diagnosis of a patient's heart condition. To accurately identify the QRS locations from the acquired raw ECG signals, we need to handle a number of challenges, which include noise, baseline wander, varying peak amplitudes, and signal abnormality. This research aims to address these challenges by developing an efficient lightweight algorithm for QRS (i.e., R-peak) detection from raw ECG signals. A lightweight real-time sliding window-based Max-Min Difference (MMD) algorithm for QRS detection from Lead II ECG signals is proposed. Targeting the best trade-off between computational efficiency and detection accuracy, the proposed algorithm consists of five key steps for QRS detection, namely, baseline correction, MMD curve generation, dynamic threshold computation, R-peak detection, and error correction. Five annotated databases from Physionet are used for evaluating the proposed algorithm in R-peak detection. Integrated with a feature extraction technique and a neural network classifier, the proposed QRS detection algorithm has also been extended to undertake normal and abnormal heartbeat detection from ECG signals. The proposed algorithm exhibits a high degree of robustness in QRS detection and achieves an average sensitivity of 99.62% and an average positive predictivity of 99.67%. Its performance compares favorably with those from the existing state-of-the-art models reported in the literature. With regard to normal and abnormal heartbeat detection, the proposed QRS detection algorithm in combination with the feature extraction technique and neural network classifier achieves an overall accuracy rate of 93.44% based on an empirical evaluation using the MIT-BIH Arrhythmia data set with 10-fold cross validation. In comparison with other related studies, the proposed algorithm offers a lightweight adaptive alternative for R-peak detection with good computational efficiency. The empirical results indicate that it not only yields a high accuracy rate in QRS detection, but also exhibits efficient computational complexity at the order of O(n), where n is the length of an ECG signal. Copyright © 2017 Elsevier B.V. All rights reserved.
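The max-min difference step itself is simple to sketch. The window length, static threshold, and refractory period below are placeholders; the published algorithm instead wraps this core with baseline correction, a dynamic threshold, and error correction:

```python
# Conceptual sketch of the sliding-window Max-Min Difference (MMD) idea.
# Parameter values are assumptions, not those of the published algorithm.
import numpy as np

def mmd_curve(ecg, fs, win_s=0.1):
    ecg = np.asarray(ecg, dtype=float)
    w = max(2, int(win_s * fs))
    return np.array([ecg[i:i + w].max() - ecg[i:i + w].min()
                     for i in range(len(ecg) - w)])

def detect_r_peaks(ecg, fs):
    ecg = np.asarray(ecg, dtype=float)
    mmd = mmd_curve(ecg, fs)
    thr = 0.5 * mmd.max()                 # crude static threshold
    refractory = int(0.25 * fs)           # ~250 ms minimum beat spacing
    peaks, last = [], -refractory
    for i in np.flatnonzero(mmd > thr):
        if i - last >= refractory:
            # refine to the local ECG maximum near the MMD crossing
            peaks.append(i + int(np.argmax(ecg[i:i + int(0.1 * fs) + 1])))
            last = i
    return peaks
```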
Analysis of Community Detection Algorithms for Large Scale Cyber Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mane, Prachita; Shanbhag, Sunanda; Kamath, Tanmayee
The aim of this project is to use existing community detection algorithms on an IP network dataset to create supernodes within the network. This study compares the performance of different algorithms on the network in terms of running time. The paper begins with an introduction to the concepts of clustering and community detection, followed by the research question that the team aimed to address. Further, the paper describes the graph metrics that were considered in order to shortlist algorithms, followed by a brief explanation of each algorithm with respect to the graph metric on which it is based. The next section of the paper describes the methodology used by the team in order to run the algorithms and determine which algorithm is most efficient with respect to running time. Finally, the last section of the paper includes the results obtained by the team and a conclusion based on those results, as well as future work.
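The supernode construction can be sketched with an off-the-shelf detector; label propagation via networkx is used here purely as a stand-in (the study's shortlisted algorithms are not named in this abstract), on a synthetic graph:

```python
# Sketch: detect communities, then collapse each community into a supernode;
# edges between communities become edges between supernodes.
import networkx as nx
from networkx.algorithms.community import label_propagation_communities

G = nx.erdos_renyi_graph(200, 0.05, seed=1)     # stand-in for an IP graph
communities = list(label_propagation_communities(G))

mapping = {node: cid for cid, comm in enumerate(communities) for node in comm}
S = nx.Graph()
S.add_nodes_from(range(len(communities)))
for u, v in G.edges():
    if mapping[u] != mapping[v]:
        S.add_edge(mapping[u], mapping[v])
print(f"{G.number_of_nodes()} nodes -> {S.number_of_nodes()} supernodes")
```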
Lykiardopoulos, Byron; Hagström, Hannes; Fredrikson, Mats; Ignatova, Simone; Stål, Per; Hultcrantz, Rolf; Ekstedt, Mattias; Kechagias, Stergios
2016-01-01
Detection of advanced fibrosis (F3-F4) in nonalcoholic fatty liver disease (NAFLD) is important for ascertaining prognosis. Serum markers have been proposed as alternatives to biopsy. We attempted to develop a novel algorithm for detection of advanced fibrosis based on a more efficient combination of serological markers and to compare this with established algorithms. We included 158 patients with biopsy-proven NAFLD. Of these, 38 had advanced fibrosis. The following fibrosis algorithms were calculated: NAFLD fibrosis score, BARD, NIKEI, NASH-CRN regression score, APRI, FIB-4, King's score, GUCI, Lok index, Forns score, and ELF. The study population was randomly divided into a training and a validation group. A multiple logistic regression analysis using bootstrapping methods was applied to the training group. Among the many variables analyzed, age, fasting glucose, hyaluronic acid, and AST were included, and a model (LINKI-1) for predicting advanced fibrosis was created. Moreover, these variables were combined with platelet count in a mathematical way that exaggerates the opposing effects, and alternative models (LINKI-2) were also created. Models were compared using the area under the receiver operating characteristic curve (AUROC). Of the established algorithms, FIB-4 and King's score had the best diagnostic accuracy, with AUROCs of 0.84 and 0.83, respectively. Higher accuracy was achieved with the novel LINKI algorithms: AUROCs in the total cohort were 0.91 for LINKI-1 and 0.89 for the LINKI-2 models. The LINKI algorithms for detection of advanced fibrosis in NAFLD showed better accuracy than established algorithms and should be validated in further studies including larger cohorts.
A Novel Segment-Based Approach for Improving Classification Performance of Transport Mode Detection.
Guvensan, M Amac; Dusun, Burak; Can, Baris; Turkmen, H Irem
2017-12-30
Transportation planning and solutions have an enormous impact on city life. To minimize transport duration, urban planners should understand and model the mobility of a city. Thus, researchers look toward monitoring people's daily activities, including transportation types and durations, by taking advantage of individuals' smartphones. This paper introduces a novel segment-based transport mode detection architecture in order to improve the results of traditional classification algorithms in the literature. The proposed post-processing algorithm, namely the Healing algorithm, aims to correct the misclassification results of machine learning-based solutions. Our real-life test results show that the Healing algorithm achieves up to a 40% improvement in the classification results. As a result, the implemented mobile application could predict eight classes (stationary, walking, car, bus, tram, train, metro, and ferry) with a success rate of 95%, thanks to the proposed multi-tier architecture and Healing algorithm.
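The published Healing rules are not reproduced in this abstract; the sketch below shows only the general flavor of such a post-processing pass, in which implausibly short runs of one predicted mode are absorbed by their surroundings (the run-length threshold is an invented parameter):

```python
# Flavor of a segment-based "healing" pass: short runs of one transport mode
# sandwiched inside a long run of another are re-labeled.
def heal(labels, min_run=5):
    healed = list(labels)
    i = 0
    while i < len(healed):
        j = i
        while j < len(healed) and healed[j] == healed[i]:
            j += 1                                    # find end of this run
        run = j - i
        if 0 < i and j < len(healed) and run < min_run and healed[i-1] == healed[j]:
            healed[i:j] = [healed[i - 1]] * run       # absorb the short run
        i = j
    return healed

print(heal(["bus"] * 10 + ["walk"] * 2 + ["bus"] * 10))  # 2 "walk" ticks heal
```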
On the robustness of EC-PC spike detection method for online neural recording.
Zhou, Yin; Wu, Tong; Rastegarnia, Amir; Guan, Cuntai; Keefer, Edward; Yang, Zhi
2014-09-30
Online spike detection is an important step to compress neural data and perform real-time neural information decoding. An unsupervised, automatic, yet robust signal processing method is strongly desired so that it can support a wide range of applications. We have developed a novel spike detection algorithm called "exponential component-polynomial component" (EC-PC) spike detection. We first evaluate the robustness of the EC-PC spike detector under different firing rates and SNRs. Second, we show that the detection precision can be quantitatively derived without requiring additional user input parameters. We have realized the algorithm (including training) in a 0.13 μm CMOS chip, where an unsupervised, nonparametric operation has been demonstrated. Both simulated data and real data are used to evaluate the method under different firing rates (FRs) and SNRs. The results show that the EC-PC spike detector is the most robust in comparison with some popular detectors. Moreover, the EC-PC detector can track changes in the background noise due to its ability to re-estimate the neural data distribution. Both real and synthesized data have been used to test the proposed algorithm against other methods, including the absolute thresholding detector (AT), median absolute deviation detector (MAD), nonlinear energy operator detector (NEO), and continuous wavelet detector (CWD). Comparative testing results reveal that the EC-PC detection algorithm performs better than the other algorithms regardless of recording conditions. The EC-PC spike detector can be considered an unsupervised and robust online spike detection method, and it is also suitable for hardware implementation. Copyright © 2014 Elsevier B.V. All rights reserved.
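Of the baselines named above, the nonlinear energy operator is the easiest to state: psi[n] = x[n]^2 - x[n-1]*x[n+1], thresholded at a multiple of its mean. A minimal sketch (the threshold multiplier is a common choice in the spike-detection literature, not a value from this paper):

```python
# NEO (nonlinear energy operator) spike detector baseline.
import numpy as np

def neo_detect(x, k=8.0):
    x = np.asarray(x, dtype=float)
    psi = x[1:-1] ** 2 - x[:-2] * x[2:]    # psi[n] = x[n]^2 - x[n-1]*x[n+1]
    thr = k * psi.mean()                   # threshold as a multiple of mean
    return np.flatnonzero(psi > thr) + 1   # indices of candidate spikes
```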
An ant colony based algorithm for overlapping community detection in complex networks
NASA Astrophysics Data System (ADS)
Zhou, Xu; Liu, Yanheng; Zhang, Jindong; Liu, Tuming; Zhang, Di
2015-06-01
Community detection is of great importance for understanding the structures and functions of networks. Overlap is a significant feature of networks, and overlapping community detection has attracted increasing attention. Many algorithms have been presented to detect overlapping communities. In this paper, we present an ant colony based overlapping community detection algorithm which mainly includes ants' location initialization, ants' movement, and post-processing phases. An ants' location initialization strategy is designed to identify the initial locations of ants and initialize the label list stored in each node. During the ants' movement phase, all ants move according to the transition probability matrix, and a new heuristic information computation approach is defined to measure the similarity between two nodes. Every node keeps a label list through the cooperation of the ants until a termination criterion is reached. A post-processing phase is executed on the label lists to obtain the final overlapping community structure. We illustrate the capability of our algorithm through experiments on both synthetic and real-world networks. The results demonstrate that our algorithm achieves better performance in finding overlapping communities and overlapping nodes than state-of-the-art algorithms on both synthetic and real-world datasets.
Improving Nocturnal Fire Detection with the VIIRS Day-Night Band
NASA Technical Reports Server (NTRS)
Polivka, Thomas N.; Wang, Jun; Ellison, Luke T.; Hyer, Edward J.; Ichoku, Charles M.
2016-01-01
Building on existing techniques for satellite remote sensing of fires, this paper takes advantage of the day-night band (DNB) aboard the Visible Infrared Imaging Radiometer Suite (VIIRS) to develop the Firelight Detection Algorithm (FILDA), which characterizes fire pixels based on both visible-light and infrared (IR) signatures at night. By adjusting fire pixel selection criteria to include visible-light signatures, FILDA allows for significantly improved detection of pixels with smaller and/or cooler subpixel hotspots than the operational Interface Data Processing System (IDPS) algorithm. VIIRS scenes with near-coincident Advanced Spaceborne Thermal Emission and Reflection (ASTER) overpasses are examined after applying the operational VIIRS fire product algorithm and including a modified "candidate fire pixel selection" approach from FILDA that lowers the 4-µm brightness temperature (BT) threshold but requires a minimum DNB radiance. FILDA is shown to be effective in detecting gas flares and characterizing fire lines during large forest fires (such as the Rim Fire in California and the High Park Fire in Colorado). Compared with the operational VIIRS fire algorithm for the study period, FILDA shows a large increase (up to 90%) in the number of detected fire pixels that can be verified with the finer-resolution ASTER data (90 m). Part (30%) of this increase is likely due to the combined use of the DNB and lower 4-µm BT thresholds for fire detection in FILDA. Although further studies are needed, quantitative use of the DNB to improve fire detection could lead to reduced response times to wildfires and better estimates of fire characteristics (smoldering and flaming) at night.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; Sen, Satyabrata; Berry, M. L..
The Domestic Nuclear Detection Office's (DNDO) Intelligence Radiation Sensors Systems (IRSS) program supported the development of networks of commercial-off-the-shelf (COTS) radiation counters for detecting, localizing, and identifying low-level radiation sources. Under this program, a series of indoor and outdoor tests were conducted with multiple source strengths and types, different background profiles, and various types of source and detector movements. Following the tests, network algorithms were replayed in various re-constructed scenarios using sub-networks. These measurements and algorithm traces together provide a rich collection of highly valuable datasets for testing current and next-generation radiation network algorithms, including the ones (to be) developed by broader R&D communities such as distributed detection, information fusion, and sensor networks. From this multi-terabyte IRSS database, we distilled out and packaged the first batch of canonical datasets for public release. They include measurements from ten indoor and two outdoor tests which represent increasingly challenging baseline scenarios for robustly testing radiation network algorithms.
Baldassano, Steven N; Brinkmann, Benjamin H; Ung, Hoameng; Blevins, Tyler; Conrad, Erin C; Leyde, Kent; Cook, Mark J; Khambhati, Ankit N; Wagenaar, Joost B; Worrell, Gregory A; Litt, Brian
2017-06-01
There exist significant clinical and basic research needs for accurate, automated seizure detection algorithms. These algorithms have translational potential in responsive neurostimulation devices and in automatic parsing of continuous intracranial electroencephalography data. An important barrier to developing accurate, validated algorithms for seizure detection is limited access to high-quality, expertly annotated seizure data from prolonged recordings. To overcome this, we hosted a kaggle.com competition to crowdsource the development of seizure detection algorithms using intracranial electroencephalography from canines and humans with epilepsy. The top three performing algorithms from the contest were then validated on out-of-sample patient data including standard clinical data and continuous ambulatory human data obtained over several years using the implantable NeuroVista seizure advisory system. Two hundred teams of data scientists from all over the world participated in the kaggle.com competition. The top performing teams submitted highly accurate algorithms with consistent performance in the out-of-sample validation study. The performance of these seizure detection algorithms, achieved using freely available code and data, sets a new reproducible benchmark for personalized seizure detection. We have also shared a 'plug and play' pipeline to allow other researchers to easily use these algorithms on their own datasets. The success of this competition demonstrates how sharing code and high quality data results in the creation of powerful translational tools with significant potential to impact patient care. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Knowledge-based tracking algorithm
NASA Astrophysics Data System (ADS)
Corbeil, Allan F.; Hawkins, Linda J.; Gilgallon, Paul F.
1990-10-01
This paper describes the Knowledge-Based Tracking (KBT) algorithm, for which a real-time flight test demonstration was recently conducted at Rome Air Development Center (RADC). In KBT processing, the radar signal in each resolution cell is thresholded at a lower than normal setting to detect low-RCS targets. This lower threshold produces a larger than normal false alarm rate. Therefore, additional signal processing, including spectral filtering, CFAR, and knowledge-based acceptance testing, is performed to eliminate some of the false alarms. TSC's knowledge-based Track-Before-Detect (TBD) algorithm is then applied to the data from each azimuth sector to detect target tracks. In this algorithm, tentative track templates are formed for each threshold crossing and knowledge-based association rules are applied to the range, Doppler, and azimuth measurements from successive scans. Lastly, an M-association out of N-scan rule is used to declare a detection. This scan-to-scan integration enhances the probability of target detection while maintaining an acceptably low output false alarm rate. For a real-time demonstration of the KBT algorithm, the L-band radar in the Surveillance Laboratory (SL) at RADC was used to illuminate a small Cessna 310 test aircraft. The received radar signal was digitized and processed by an ST-100 Array Processor and VAX computer network in the lab. The ST-100 performed all of the radar signal processing functions, including Moving Target Indicator (MTI) pulse cancelling, FFT Doppler filtering, and CFAR detection. The VAX computers performed the remaining range-Doppler clustering, beamsplitting, and TBD processing functions. The KBT algorithm provided a 9.5 dB improvement relative to single-scan performance, with a nominal real-time delay of less than one second between illumination and display.
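The M-association-out-of-N-scan rule reduces to a sliding window over per-scan association outcomes. A minimal sketch (M = 3 and N = 5 are placeholders; the paper's values are not stated in this abstract):

```python
# M-out-of-N track confirmation: declare a detection once a tentative track
# has associated threshold crossings on at least M of the last N scans.
from collections import deque

class MOfNConfirm:
    def __init__(self, m=3, n=5):
        self.m, self.history = m, deque(maxlen=n)

    def update(self, associated: bool) -> bool:
        self.history.append(associated)
        return sum(self.history) >= self.m

conf = MOfNConfirm(m=3, n=5)
for hit in [True, False, True, True, False, True]:
    print(conf.update(hit))   # confirms once 3 of the last 5 scans associate
```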
Gatos, Ilias; Tsantis, Stavros; Spiliopoulos, Stavros; Skouroliakou, Aikaterini; Theotokas, Ioannis; Zoumpoulis, Pavlos; Hazle, John D; Kagadis, George C
2015-07-01
The purpose of this work is to detect and classify focal liver lesions (FLLs) from contrast-enhanced ultrasound (CEUS) imaging by means of an automated quantification algorithm. The proposed algorithm employs a sophisticated segmentation method to detect and contour focal lesions in 52 CEUS video sequences (30 benign and 22 malignant). Lesion detection uses wavelet transform zero crossings as an initialization step for the Markov random field model toward the lesion contour extraction. After FLL detection across frames, the time-intensity curve (TIC) is computed, which characterizes the contrast agents' behavior at all vascular phases with respect to adjacent parenchyma for each patient. From each TIC, eight features were automatically calculated and employed in the support vector machines (SVM) classification algorithm in the design of the image analysis model. With regard to FLL detection accuracy, all detected lesions had an average overlap value of 0.89 ± 0.16 with manual segmentations for all CEUS frame subsets included in the study. The highest classification accuracy from the SVM model was 90.3%, misdiagnosing three benign and two malignant FLLs, with sensitivity and specificity values of 93.1% and 86.9%, respectively. The proposed quantification system, which employs FLL detection and classification algorithms, may be of value to physicians as a second-opinion tool for avoiding unnecessary invasive procedures.
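A time-intensity curve of this kind is straightforward to compute once lesion and parenchyma masks exist; the sketch below assumes the masks come from an upstream segmentation stage and reports enhancement relative to the surrounding tissue:

```python
# Simplified time-intensity curve (TIC): mean contrast intensity inside the
# lesion contour, frame by frame, relative to adjacent parenchyma.
import numpy as np

def tic(frames, lesion_mask, parenchyma_mask):
    les = np.array([f[lesion_mask].mean() for f in frames])
    par = np.array([f[parenchyma_mask].mean() for f in frames])
    return les - par        # enhancement relative to surrounding tissue
```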
The evaluation of the OSGLR algorithm for restructurable controls
NASA Technical Reports Server (NTRS)
Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.
1986-01-01
The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques, and the OSGLR algorithm in particular, is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined, with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions, including maneuvers, nonzero flap deflections, different turbulence levels, and steady winds, were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.
Statistical algorithms improve accuracy of gene fusion detection
Hsieh, Gillian; Bierman, Rob; Szabo, Linda; Lee, Alex Gia; Freeman, Donald E.; Watson, Nathaniel; Sweet-Cordero, E. Alejandro
2017-01-01
Gene fusions are known to play critical roles in tumor pathogenesis. Yet, sensitive and specific algorithms to detect gene fusions in cancer do not currently exist. In this paper, we present a new statistical algorithm, MACHETE (Mismatched Alignment CHimEra Tracking Engine), which achieves highly sensitive and specific detection of gene fusions from RNA-Seq data, including the highest Positive Predictive Value (PPV) compared to the current state-of-the-art, as assessed in simulated data. We show that the best performing published algorithms either find large numbers of fusions in negative control data or suffer from low sensitivity detecting known driving fusions in gold standard settings, such as EWSR1-FLI1. As proof of principle that MACHETE discovers novel gene fusions with high accuracy in vivo, we mined public data to discover and subsequently PCR validate novel gene fusions missed by other algorithms in the ovarian cancer cell line OVCAR3. These results highlight the gains in accuracy achieved by introducing statistical models into fusion detection, and pave the way for unbiased discovery of potentially driving and druggable gene fusions in primary tumors. PMID:28541529
On-Board Cryospheric Change Detection By The Autonomous Sciencecraft Experiment
NASA Astrophysics Data System (ADS)
Doggett, T.; Greeley, R.; Castano, R.; Cichy, B.; Chien, S.; Davies, A.; Baker, V.; Dohm, J.; Ip, F.
2004-12-01
The Autonomous Sciencecraft Experiment (ASE) is operating on board Earth Observing-1 (EO-1) with the Hyperion hyperspectral visible/near-IR spectrometer. ASE science activities include autonomous monitoring of cryospheric changes, triggering the collection of additional data when change is detected, and filtering of null data such as no change or cloud cover. This would have application to the study of cryospheres on Earth, Mars, and the icy moons of the outer solar system. A cryosphere classification algorithm, in combination with a previously developed cloud algorithm [1], has been tested on board ten times from March through August 2004. The cloud algorithm correctly screened out three scenes with total cloud cover, while the cryosphere algorithm detected alpine snow cover in the Rocky Mountains, lake thaw near Madison, Wisconsin, and the presence and subsequent break-up of sea ice in the Barrow Strait of the Canadian Arctic. Hyperion has 220 bands ranging from 400 to 2400 nm, with a spatial resolution of 30 m/pixel and a spectral resolution of 10 nm. Limited on-board memory and processing speed imposed the constraint that only partially processed Level 0.5 data could be used, with dark image subtraction and gain factors applied but not full radiometric calibration. In addition, a maximum of 12 bands could be used for any stacked sequence of algorithms run for a scene on board. The cryosphere algorithm was developed to classify snow, water, ice, and land, using six Hyperion bands at 427, 559, 661, 864, 1245, and 1649 nm. Of these, only the 427 nm band overlaps with those used by the cloud algorithm. The cloud algorithm was developed with Level 1 data, which introduces complications because of the incomplete calibration of the SWIR in Level 0.5 data, including a high level of noise in the 1377 nm band used by the cloud algorithm. Development of a more robust cryosphere classifier, including cloud classification specifically adapted to Level 0.5 data, is in progress for deployment on EO-1 as part of continued ASE operations. [1] Griffin, M.K. et al., Cloud Cover Detection Algorithm for EO-1 Hyperion Imagery, SPIE 17, 2003.
Han, Zhaoying; Thornton-Wells, Tricia A.; Dykens, Elisabeth M.; Gore, John C.; Dawant, Benoit M.
2014-01-01
Deformation Based Morphometry (DBM) is a widely used method for characterizing anatomical differences across groups. DBM is based on the analysis of the deformation fields generated by non-rigid registration algorithms, which warp the individual volumes to a DBM atlas. Although several studies have compared non-rigid registration algorithms for segmentation tasks, few studies have compared the effect of the registration algorithms on group differences that may be uncovered through DBM. In this study, we compared group atlas creation and DBM results obtained with five well-established non-rigid registration algorithms using thirteen subjects with Williams Syndrome (WS) and thirteen Normal Control (NC) subjects. The five non-rigid registration algorithms include: (1) The Adaptive Bases Algorithm (ABA); (2) The Image Registration Toolkit (IRTK); (3) The FSL Nonlinear Image Registration Tool (FSL); (4) The Automatic Registration Tool (ART); and (5) the normalization algorithm available in SPM8. Results indicate that the choice of algorithm has little effect on the creation of group atlases. However, regions of differences between groups detected with DBM vary from algorithm to algorithm both qualitatively and quantitatively. The unique nature of the data set used in this study also permits comparison of visible anatomical differences between the groups and regions of difference detected by each algorithm. Results show that the interpretation of DBM results is difficult. Four out of the five algorithms we have evaluated detect bilateral differences between the two groups in the insular cortex, the basal ganglia, orbitofrontal cortex, as well as in the cerebellum. These correspond to differences that have been reported in the literature and that are visible in our samples. But our results also show that some algorithms detect regions that are not detected by the others and that the extent of the detected regions varies from algorithm to algorithm. These results suggest that using more than one algorithm when performing DBM studies would increase confidence in the results. Properties of the algorithms such as the similarity measure they maximize and the regularity of the deformation fields, as well as the location of differences detected with DBM, also need to be taken into account in the interpretation process. PMID:22459439
Color object detection using spatial-color joint probability functions.
Luo, Jiebo; Crandall, David
2006-06-01
Object detection in unconstrained images is an important image understanding problem with many potential applications. There has been little success in creating a single algorithm that can detect arbitrary objects in unconstrained images; instead, algorithms typically must be customized for each specific object. Consequently, it typically requires a large number of exemplars (for rigid objects) or a large amount of human intuition (for nonrigid objects) to develop a robust algorithm. We present a robust algorithm designed to detect a class of compound color objects given a single model image. A compound color object is defined as having a set of multiple, particular colors arranged spatially in a particular way, including flags, logos, cartoon characters, people in uniforms, etc. Our approach is based on a particular type of spatial-color joint probability function called the color edge co-occurrence histogram. In addition, our algorithm employs perceptual color naming to handle color variation, and prescreening to limit the search scope (i.e., size and location) for the object. Experimental results demonstrated that the proposed algorithm is insensitive to object rotation, scaling, partial occlusion, and folding, outperforming a closely related algorithm based on color co-occurrence histograms by a decisive margin.
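The core data structure, a spatial-color joint probability function, can be sketched as a co-occurrence count over quantized colors at a fixed offset, restricted to edge pixels. The offset, quantization, and edge handling below are simplifications of the paper's color edge co-occurrence histogram, not its exact definition:

```python
# Rough sketch in the spirit of a color edge co-occurrence histogram:
# count pairs of quantized colors at a fixed spatial offset, restricted to
# edge pixels, then normalize to a joint probability.
import numpy as np

def color_cooccurrence(img_q, edges, offset=(0, 3), n_colors=8):
    """img_q: 2-D array of quantized color indices; edges: boolean edge map."""
    dy, dx = offset
    h = np.zeros((n_colors, n_colors))
    rows, cols = img_q.shape
    for y in range(rows - dy):
        for x in range(cols - dx):
            if edges[y, x] and edges[y + dy, x + dx]:
                h[img_q[y, x], img_q[y + dy, x + dx]] += 1
    return h / max(h.sum(), 1)
```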
Algorithms used in the Airborne Lidar Processing System (ALPS)
Nagle, David B.; Wright, C. Wayne
2016-05-23
The Airborne Lidar Processing System (ALPS) analyzes Experimental Advanced Airborne Research Lidar (EAARL) data—digitized laser-return waveforms, position, and attitude data—to derive point clouds of target surfaces. A full-waveform airborne lidar system, the EAARL seamlessly and simultaneously collects mixed environment data, including submerged, sub-aerial bare earth, and vegetation-covered topographies. ALPS uses three waveform target-detection algorithms to determine target positions within a given waveform: centroid analysis, leading edge detection, and bottom detection using water-column backscatter modeling. The centroid analysis algorithm detects opaque hard surfaces. The leading edge algorithm detects topography beneath vegetation and shallow, submerged topography. The bottom detection algorithm uses water-column backscatter modeling for deeper submerged topography in turbid water. The report describes slant range calculations and explains how ALPS uses laser range and orientation measurements to project measurement points into the Universal Transverse Mercator coordinate system. Parameters used for coordinate transformations in ALPS are described, as are Interactive Data Language-based methods for gridding EAARL point cloud data to derive digital elevation models. Noise reduction in point clouds through use of a random consensus filter is explained, and detailed pseudocode, mathematical equations, and Yorick source code accompany the report.
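Of the three target-detection algorithms, centroid analysis is the simplest to sketch: take the amplitude-weighted centroid of the return samples above a noise floor. The sample spacing and floor below are assumptions for illustration, not EAARL calibration values:

```python
# Centroid-style range pick on a digitized return waveform.
import numpy as np

def centroid_range(waveform, sample_m=0.15, noise_floor=None):
    w = np.asarray(waveform, dtype=float)
    if noise_floor is None:
        noise_floor = w.mean() + 2 * w.std()   # assumed noise-floor estimate
    a = np.clip(w - noise_floor, 0, None)
    if a.sum() == 0:
        return None                            # no return above the floor
    idx = np.arange(len(a))
    return (idx * a).sum() / a.sum() * sample_m   # range in meters (assumed)
```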
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia; Coraor, Lee
2000-01-01
The research reported here is a part of NASA's Synthetic Vision System (SVS) project for the development of a High Speed Civil Transport Aircraft (HSCT). One of the components of the SVS is a module for detection of potential obstacles in the aircraft's flight path by analyzing the images captured by an on-board camera in real-time. Design of such a module includes the selection and characterization of robust, reliable, and fast techniques and their implementation for execution in real-time. This report describes the results of our research in realizing such a design. It is organized into three parts. Part I. Data modeling and camera characterization; Part II. Algorithms for detecting airborne obstacles; and Part III. Real time implementation of obstacle detection algorithms on the Datacube MaxPCI architecture. A list of publications resulting from this grant as well as a list of relevant publications resulting from prior NASA grants on this topic are presented.
NASA Astrophysics Data System (ADS)
Wu, Yu-Jie; Lin, Guan-Wei
2017-04-01
Since 1999, Taiwan has experienced a rapid rise in the number of landslides, which reached a peak after Typhoon Morakot in 2009. Although it has been shown that ground-motion signals induced by slope processes can be recorded by seismographs, they are difficult to distinguish from continuous seismic records due to the lack of distinct P and S waves. In this study, we combine three common seismic detectors: the short-term average/long-term average (STA/LTA) approach and two diagnostic functions, the moving average and the scintillation index. Based on these detectors, we have established an auto-detection algorithm for landslide-quakes, with detection thresholds defined to distinguish landslide-quakes from earthquakes and background noise. To further evaluate the proposed detection algorithm, we apply it to seismic archives recorded by the Broadband Array in Taiwan for Seismology (BATS) during the 2009 Typhoon Morakot and locate the discrete landslide-quakes detected by the automatic algorithm. The detection results are consistent with those of visual inspection, and the algorithm can hence be used to automatically monitor landslide-quakes.
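The first of the three detectors, STA/LTA, is a classic trigger; a compact sketch follows (window lengths are illustrative, and the study's actual thresholds are not reproduced):

```python
# STA/LTA trigger on a seismic trace: ratio of a short-term average of
# signal energy to the long-term average just before it.
import numpy as np

def sta_lta(x, fs, sta_s=1.0, lta_s=30.0):
    x2 = np.asarray(x, dtype=float) ** 2
    sta_n, lta_n = int(sta_s * fs), int(lta_s * fs)
    csum = np.concatenate(([0.0], np.cumsum(x2)))   # prefix sums of energy
    ratio = np.zeros(len(x2))
    for i in range(lta_n, len(x2) - sta_n):
        sta = (csum[i + sta_n] - csum[i]) / sta_n
        lta = (csum[i] - csum[i - lta_n]) / lta_n
        ratio[i] = sta / (lta + 1e-12)
    return ratio            # trigger where the ratio exceeds a chosen level
```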
Using Machine Learning for Advanced Anomaly Detection and Classification
NASA Astrophysics Data System (ADS)
Lane, B.; Poole, M.; Camp, M.; Murray-Krezan, J.
2016-09-01
Machine Learning (ML) techniques have successfully been used in a wide variety of applications to automatically detect, and potentially classify, changes in activity or a series of activities by utilizing large amounts of data, sometimes even seemingly unrelated data. The amount of data being collected, processed, and stored in the Space Situational Awareness (SSA) domain has grown at an exponential rate and is now better suited for ML. This paper describes the development of advanced algorithms to deliver significant improvements in the characterization of deep space objects and indication and warning (I&W) using a global network of telescopes that are collecting photometric data on a multitude of space-based objects. The Phase II Air Force Research Laboratory (AFRL) Small Business Innovative Research (SBIR) project Autonomous Characterization Algorithms for Change Detection and Characterization (ACDC), contracted to ExoAnalytic Solutions Inc., is providing the ability to detect and identify photometric signature changes due to potential space object changes (e.g., stability, tumble rate, aspect ratio) and to correlate observed changes to potential behavioral changes using a variety of techniques, including supervised learning. Furthermore, these algorithms run in real time on data being collected and processed by the ExoAnalytic Space Operations Center (EspOC), providing timely alerts and warnings while dynamically creating collection requirements for the EspOC for the algorithms that generate higher fidelity I&W. This paper discusses the recently implemented ACDC algorithms, including the general design approach and results to date. The usage of supervised algorithms, such as Support Vector Machines, Neural Networks, and k-Nearest Neighbors, and unsupervised algorithms, for example k-means, Principal Component Analysis, and Hierarchical Clustering, and the implementations of these algorithms are explored. Results of applying these algorithms to EspOC data, both in an off-line "pattern of life" analysis and on-line in real time (that is, as data is collected), are presented. Finally, future work in applying ML for SSA is discussed.
Spectral Target Detection using Schroedinger Eigenmaps
NASA Astrophysics Data System (ADS)
Dorado-Munoz, Leidy P.
Applications of optical remote sensing processes include environmental monitoring, military monitoring, meteorology, mapping, surveillance, etc. Many of these tasks include the detection of specific objects or materials, usually few or small, which are surrounded by other materials that clutter the scene and hide the relevant information. This target detection process has lately been boosted by the use of hyperspectral imagery (HSI), since its high spectral dimension provides more detailed spectral information that is desirable in data exploitation. Typical spectral target detectors rely on statistical or geometric models to characterize the spectral variability of the data. However, in many cases these parametric models do not fit HSI data well, which impacts detection performance. On the other hand, non-linear transformation methods, mainly based on manifold learning algorithms, have shown potential in HSI transformation, dimensionality reduction, and classification. In target detection, non-linear transformation algorithms are used as preprocessing techniques that transform the data to a more suitable lower-dimensional space, where the statistical or geometric detectors are applied. One of these non-linear manifold methods is the Schroedinger Eigenmaps (SE) algorithm, which was introduced as a technique for semi-supervised classification. The core tool of the SE algorithm is the Schroedinger operator, which includes a potential term that encodes prior information about the materials present in a scene and enables the embedding to be steered in convenient directions in order to cluster similar pixels together. A novel target detection methodology based on the SE algorithm is proposed in this thesis. The proposed methodology includes not just the transformation of the data to a lower-dimensional space but also the definition of a detector that capitalizes on the theory behind SE. The fact that target pixels and similar pixels are clustered in a predictable region of the low-dimensional representation is used to define a decision rule that allows one to identify target pixels among the rest of the pixels in a given image. In addition, a knowledge propagation scheme is used to combine spectral and spatial information as a means to propagate the "potential constraints" to nearby points. The propagation scheme is introduced to reinforce weak connections and improve the separability between most of the target pixels and the background. Experiments using different HSI data sets are carried out in order to test the proposed methodology. The assessment is performed from quantitative and qualitative points of view, comparing the SE-based methodology against two other detection methodologies that use linear/non-linear algorithms as transformations and the well-known Adaptive Coherence/Cosine Estimator (ACE) detector. Overall results show that the SE-based detector outperforms the other two detection methodologies, which indicates the usefulness of the SE transformation in spectral target detection problems.
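For reference, the construction underlying SE, as introduced by Czaja and Ehler for semi-supervised learning, augments the graph Laplacian with a diagonal potential; the notation below follows that formulation and may differ from the thesis:

```latex
% Schroedinger Eigenmaps operator (following Czaja and Ehler's formulation;
% the notation is an assumption and may differ from the thesis).
\[
  E = L + \alpha V, \qquad L = D - W,
\]
% W: graph affinity (weight) matrix; D: diagonal degree matrix;
% V: nonnegative diagonal potential encoding prior knowledge (e.g., known
% target pixels); alpha balances the Laplacian against the potential.
% The embedding uses the lowest eigenvectors of the generalized problem
% E f = \lambda D f, steering labeled pixels toward a common cluster.
```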
Improved detection of soma location and morphology in fluorescence microscopy images of neurons.
Kayasandik, Cihan Bilge; Labate, Demetrio
2016-12-01
Automated detection and segmentation of somas in fluorescent images of neurons is a major goal in quantitative studies of neuronal networks, including applications in high-content screening, where it is required to quantify multiple morphological properties of neurons. Despite recent advances in image processing targeted to neurobiological applications, existing algorithms for soma detection are often unreliable, especially when processing fluorescence image stacks of neuronal cultures. In this paper, we introduce an innovative algorithm for the detection and extraction of somas in fluorescent images of networks of cultured neurons where somas and other structures exist in the same fluorescent channel. Our method relies on a new geometrical descriptor called the Directional Ratio and a collection of multiscale orientable filters to quantify the level of local isotropy in an image. To optimize the application of this approach, we introduce a new construction of multiscale anisotropic filters that is implemented by separable convolution. Extensive numerical experiments using 2D and 3D confocal images show that our automated algorithm reliably detects somas, accurately segments them, and separates contiguous ones. We include a detailed comparison with state-of-the-art existing methods to demonstrate that our algorithm is extremely competitive in terms of accuracy, reliability, and computational efficiency. Our algorithm will facilitate the development of automated platforms for high-content neuron image processing. A Matlab code is released open-source and freely available to the scientific community. Copyright © 2016 Elsevier B.V. All rights reserved.
van Mourik, Maaike S M; van Duijn, Pleun Joppe; Moons, Karel G M; Bonten, Marc J M; Lee, Grace M
2015-01-01
Objective: Measuring the incidence of healthcare-associated infections (HAI) is of increasing importance in current healthcare delivery systems. Administrative data algorithms, including (combinations of) diagnosis codes, are commonly used to determine the occurrence of HAI, either to support within-hospital surveillance programmes or as free-standing quality indicators. We conducted a systematic review evaluating the diagnostic accuracy of administrative data for the detection of HAI. Methods: Systematic search of Medline, Embase, CINAHL and Cochrane for relevant studies (1995–2013). Methodological quality assessment was performed using QUADAS-2 criteria; diagnostic accuracy estimates were stratified by HAI type and key study characteristics. Results: 57 studies were included, the majority aiming to detect surgical site or bloodstream infections. Study designs were very diverse regarding the specification of their administrative data algorithm (code selections, follow-up) and definitions of HAI presence. One-third of studies had important methodological limitations, including differential or incomplete HAI ascertainment or lack of blinding of assessors. Observed sensitivity and positive predictive values of administrative data algorithms for HAI detection were very heterogeneous and generally modest at best, both for within-hospital algorithms and for formal quality indicators; accuracy was particularly poor for the identification of device-associated HAI such as central line-associated bloodstream infections. The large heterogeneity in study designs across the included studies precluded formal calculation of summary diagnostic accuracy estimates in most instances. Conclusions: Administrative data had limited and highly variable accuracy for the detection of HAI, and their judicious use for internal surveillance efforts and external quality assessment is recommended. If hospitals and policymakers choose to rely on administrative data for HAI surveillance, continued improvements to existing algorithms and their robust validation are imperative. PMID:26316651
Ruusuvuori, Pekka; Aijö, Tarmo; Chowdhury, Sharif; Garmendia-Torres, Cecilia; Selinummi, Jyrki; Birbaumer, Mirko; Dudley, Aimée M; Pelkmans, Lucas; Yli-Harja, Olli
2010-05-13
Several algorithms have been proposed for detecting fluorescently labeled subcellular objects in microscope images. Many of these algorithms have been designed for specific tasks and validated with limited image data. Despite the potential of extensive comparisons between algorithms to provide useful information for guiding method selection, and thus more accurate results, relatively few such studies have been performed. To better understand algorithm performance under different conditions, we have carried out a comparative study including eleven spot detection or segmentation algorithms from various application fields. We used microscope images from well plate experiments with a human osteosarcoma cell line and frames from image stacks of yeast cells in different focal planes. These experimentally derived images permit a comparison of method performance in realistic situations where the number of objects varies within the image set. We also used simulated microscope images in order to compare the methods and validate them against a ground-truth reference result. Our study finds major differences in the performance of different algorithms, in terms of both object counts and segmentation accuracies. These results suggest that the selection of detection algorithms for image-based screens should be done carefully and take into account different conditions, such as the possibility of acquiring empty images or images with very few spots. Our inclusion of methods that have not been used before in this context broadens the set of available detection methods and compares them against the current state-of-the-art methods for subcellular particle detection.
Ventricular repolarization variability for hypoglycemia detection.
Ling, Steve; Nguyen, H T
2011-01-01
Hypoglycemia is the most acute and common complication of Type 1 diabetes and is a limiting factor in the glycemic management of diabetes. In this paper, two main contributions are presented: firstly, ventricular repolarization variabilities are introduced for hypoglycemia detection; secondly, a swarm-based support vector machine (SVM) algorithm with the repolarization variabilities as inputs is developed to detect hypoglycemia. Using the algorithm with several repolarization variabilities as inputs, the best hypoglycemia detection performance achieved sensitivity and specificity of 82.14% and 60.19%, respectively.
Performance characterization of a combined material identification and screening algorithm
NASA Astrophysics Data System (ADS)
Green, Robert L.; Hargreaves, Michael D.; Gardner, Craig M.
2013-05-01
Portable analytical devices based on a gamut of technologies (Infrared, Raman, X-Ray Fluorescence, Mass Spectrometry, etc.) are now widely available. These tools have seen increasing adoption for field-based assessment by diverse users including military, emergency response, and law enforcement. Frequently, end-users of portable devices are non-scientists who rely on embedded software and the associated algorithms to convert collected data into actionable information. Two classes of problems commonly encountered in field applications are identification and screening. Identification algorithms are designed to scour a library of known materials and determine whether the unknown measurement is consistent with a stored response (or combination of stored responses). Such algorithms can be used to identify a material from many thousands of possible candidates. Screening algorithms evaluate whether at least a subset of features in an unknown measurement corresponds to one or more specific substances of interest and are typically configured to detect a small list of potential target analytes. Thus, screening algorithms are much less broadly applicable than identification algorithms; however, they typically provide higher detection rates, which makes them attractive for specific applications such as chemical warfare agent or narcotics detection. This paper will present an overview and performance characterization of a combined identification/screening algorithm that has recently been developed. It will be shown that the combined algorithm provides enhanced detection capability more typical of screening algorithms while maintaining a broad identification capability. Additionally, we will highlight how this approach can enable users to incorporate situational awareness during a response.
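The two algorithm classes can be contrasted with a toy correlation matcher. This is a hedged sketch of the general logic only (the vendor's actual algorithms are not public), and the library structure, thresholds, and similarity measure are assumptions.

```python
# Toy identification vs. screening on 1-D spectra (illustrative thresholds).
import numpy as np

def similarity(a, b):
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(a @ b) / len(a)

def identify(unknown, library, threshold=0.95):
    # Identification: search the whole library for the single best match.
    scores = {name: similarity(unknown, ref) for name, ref in library.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

def screen(unknown, targets, threshold=0.80):
    # Screening: test a short list of analytes at a laxer threshold,
    # trading breadth for a higher detection rate.
    return [n for n, ref in targets.items()
            if similarity(unknown, ref) >= threshold]
```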
Lane marking detection based on waveform analysis and CNN
NASA Astrophysics Data System (ADS)
Ye, Yang Yang; Chen, Hou Jin; Hao, Xiao Li
2017-06-01
Lane marking detection is an important part of advanced driver assistance systems (ADAS), helping to avoid traffic accidents. In order to obtain accurate lane markings, this work proposes a novel and efficient algorithm that analyzes the waveform generated from the road image after inverse perspective mapping (IPM). The algorithm includes two main stages: the first stage uses image preprocessing, including a CNN, to suppress the background and enhance the lane markings. The second stage obtains the waveform of the road image and analyzes it to extract the lanes. The contribution of this work is the introduction of local and global features of the waveform for detecting the lane markings. The results indicate the proposed method is robust in detecting and fitting lane markings.
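The waveform stage can be illustrated in a few lines: after IPM and binarization, lane paint shows up as peaks in the column-sum profile of the bird's-eye image. The sketch below assumes a binary IPM image and illustrative peak parameters; the CNN preprocessing and the paper's local/global waveform features are not shown.

```python
# Waveform-based lane localization on a binarized IPM (bird's-eye) image.
import numpy as np
from scipy.signal import find_peaks

def lane_positions(ipm_binary, min_separation=40):
    waveform = ipm_binary.sum(axis=0).astype(float)   # column-wise paint mass
    waveform /= waveform.max() + 1e-12
    peaks, _ = find_peaks(waveform, height=0.3, distance=min_separation)
    return peaks   # x-coordinates of candidate lane markings
```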
Du, Pan; Kibbe, Warren A; Lin, Simon M
2006-09-01
A major problem for current peak detection algorithms is that noise in mass spectrometry (MS) spectra gives rise to a high rate of false positives. The false positive rate is especially problematic in detecting peaks with low amplitudes. Usually, various baseline correction algorithms and smoothing methods are applied before attempting peak detection. This approach is very sensitive to the amount of smoothing and the aggressiveness of the baseline correction, which contribute to making peak detection results inconsistent between runs, instrumentation, and analysis methods. Most peak detection algorithms simply identify peaks based on amplitude, ignoring the additional information present in the shape of the peaks in a spectrum. In our experience, 'true' peaks have characteristic shapes, and providing a shape-matching function that yields a 'goodness of fit' coefficient should provide a more robust peak identification method. Based on these observations, a continuous wavelet transform (CWT)-based peak detection algorithm has been devised that identifies peaks with different scales and amplitudes. By transforming the spectrum into wavelet space, the pattern-matching problem is simplified, and the transform additionally provides a powerful technique for identifying and separating the signal from spike noise and colored noise. This transformation, with the additional information provided by the 2D CWT coefficients, can greatly enhance the effective signal-to-noise ratio. Furthermore, with this technique no baseline removal or peak smoothing preprocessing steps are required before peak detection, and this improves the robustness of peak detection under a variety of conditions. The algorithm was evaluated with SELDI-TOF spectra with known polypeptide positions. Comparisons with two other popular algorithms were performed. The results show the CWT-based algorithm can identify both strong and weak peaks while keeping the false positive rate low. The algorithm is implemented in R and will be included as an open source module in the Bioconductor project.
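The published method is implemented in R, but SciPy offers a ridge-line CWT peak finder in the same spirit, which conveys the idea compactly; the file name and width range below are illustrative.

```python
# CWT-based peak picking: true peaks persist across wavelet scales while
# spike noise does not, so no baseline removal or smoothing is needed first.
import numpy as np
from scipy.signal import find_peaks_cwt

spectrum = np.loadtxt('spectrum.txt')   # hypothetical m/z intensity vector
peak_indices = find_peaks_cwt(spectrum, widths=np.arange(1, 30))
```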
NASA Astrophysics Data System (ADS)
Ekin, Ahmet; Jasinschi, Radu; van der Grond, Jeroen; van Buchem, Mark A.; van Muiswinkel, Arianne
2006-03-01
This paper introduces image processing methods to automatically detect the 3D volume-of-interest (VOI) and 2D region-of-interest (ROI) for deep gray matter organs (thalamus, globus pallidus, putamen, and caudate nucleus) of patients with suspected iron deposition from MR dual echo images. Prior to the VOI and ROI detection, the cerebrospinal fluid (CSF) region is segmented by a clustering algorithm. For the segmentation, we automatically determine the cluster centers with the mean shift algorithm, which can quickly identify the modes of a distribution. After the identification of the modes, we employ the K-Harmonic means clustering algorithm to segment the volumetric MR data into CSF and non-CSF. Having the CSF mask, and observing that the frontal lobe of the lateral ventricle has a more consistent shape across age and pathological abnormalities, we propose a shape-directed landmark detection algorithm to detect the VOI quickly. The proposed landmark detection algorithm utilizes a novel shape model of the frontal lobe of the lateral ventricle for the slices where the thalamus, globus pallidus, putamen, and caudate nucleus are expected to appear. After this step, for each slice in the VOI, we use horizontal and vertical projections of the CSF map to detect the approximate locations of the relevant organs and define the ROI. We demonstrate the robustness of the proposed VOI and ROI localization algorithms to pathologies, including severe iron accumulation as well as white matter lesions, and to anatomical variations. The proposed algorithms achieved very high detection accuracy, 100% in the VOI detection, over a large and challenging MR dataset.
NASA Technical Reports Server (NTRS)
Piepmeier, Jeffrey; Mohammed, Priscilla; De Amici, Giovanni; Kim, Edward; Peng, Jinzheng; Ruf, Christopher; Hanna, Maher; Yueh, Simon; Entekhabi, Dara
2016-01-01
The purpose of the Soil Moisture Active Passive (SMAP) radiometer calibration algorithm is to convert Level 0 (L0) radiometer digital counts data into calibrated estimates of brightness temperatures referenced to the Earth's surface within the main beam. The algorithm theory in most respects is similar to what has been developed and implemented for decades for other satellite radiometers; however, SMAP includes two key features heretofore absent from most satellite-borne radiometers: radio frequency interference (RFI) detection and mitigation, and measurement of the third and fourth Stokes parameters using digital correlation. The purpose of this document is to describe the SMAP radiometer and forward model; explain the SMAP calibration algorithm, including approximations, errors, and biases; provide all necessary equations for implementing the calibration algorithm; and detail the RFI detection and mitigation process. Section 2 provides a summary of algorithm objectives and driving requirements. Section 3 is a description of the instrument and Section 4 covers the forward models, upon which the algorithm is based. Section 5 gives the retrieval algorithm and theory. Section 6 describes the orbit simulator, which implements the forward model and is the key for deriving antenna pattern correction coefficients and testing the overall algorithm.
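The core counts-to-temperature step of any such radiometer is a linear transfer estimated from two reference looks; a generic two-point sketch is shown below. This is not the actual SMAP algorithm, which adds RFI detection, Stokes correlation, and antenna-pattern corrections; the reference temperatures here are illustrative.

```python
# Generic two-point radiometer calibration sketch (illustrative values).
import numpy as np

def counts_to_ta(counts, c_cold, c_hot, t_cold=2.7, t_hot=300.0):
    """Map digital counts to antenna temperature (K) using cold-sky and
    warm-load calibration counts c_cold and c_hot."""
    gain = (t_hot - t_cold) / (c_hot - c_cold)
    return t_cold + gain * (np.asarray(counts, dtype=float) - c_cold)
```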
A Contextual Fire Detection Algorithm for Simulated HJ-1B Imagery.
Qian, Yonggang; Yan, Guangjian; Duan, Sibo; Kong, Xiangsheng
2009-01-01
The HJ-1B satellite, which was launched on September 6, 2008, is one of the small satellites placed in the constellation for disaster prediction and monitoring. HJ-1B imagery, containing fires of various sizes and temperatures in a wide range of terrestrial biomes and climates and including RED, NIR, MIR and TIR channels, was simulated in this paper. Based on the MODIS version 4 contextual algorithm and the characteristics of the HJ-1B sensor, a contextual fire detection algorithm was proposed and tested using simulated HJ-1B data. It was evaluated by the probability of fire detection and false alarm as functions of fire temperature and fire area. Results indicate that when the simulated fire area is larger than 45 m² and the simulated fire temperature is larger than 800 K, the algorithm has a higher probability of detection. But if the simulated fire area is smaller than 10 m², only when the simulated fire temperature is larger than 900 K may the fire be detected. For fire areas of about 100 m², the proposed algorithm has a higher detection probability than that of the MODIS product. Finally, the omission and commission errors, which are important factors affecting the performance of this algorithm, were evaluated. It has been demonstrated that HJ-1B satellite data are much more sensitive to smaller and cooler fires than MODIS or AVHRR data, and the improved capabilities of HJ-1B data will offer a fine opportunity for fire detection.
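The contextual logic can be sketched as follows: a pixel is flagged if its mid-infrared brightness temperature exceeds either an absolute limit or the local background statistics. This is a simplified stand-in for the MODIS-style tests (the real algorithm also uses the MIR-TIR difference and cloud/water masks); the window size and thresholds are illustrative.

```python
# Simplified contextual fire test over a MIR brightness-temperature image.
import numpy as np
from scipy.ndimage import uniform_filter

def contextual_fire_mask(t_mir, window=21, k=3.5, t_abs=360.0):
    mean = uniform_filter(t_mir, window)
    sq_mean = uniform_filter(t_mir ** 2, window)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    return (t_mir > t_abs) | (t_mir > mean + k * std)  # absolute OR contextual
```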
Chung, King
2004-01-01
This review discusses the challenges in hearing aid design and fitting and the recent developments in advanced signal processing technologies to meet these challenges. The first part of the review discusses the basic concepts and the building blocks of digital signal processing algorithms, namely, the signal detection and analysis unit, the decision rules, and the time constants involved in the execution of the decision. In addition, mechanisms and the differences in the implementation of various strategies used to reduce the negative effects of noise are discussed. These technologies include the microphone technologies that take advantage of the spatial differences between speech and noise and the noise reduction algorithms that take advantage of the spectral difference and temporal separation between speech and noise. The specific technologies discussed in this paper include first-order directional microphones, adaptive directional microphones, second-order directional microphones, microphone matching algorithms, array microphones, multichannel adaptive noise reduction algorithms, and synchrony detection noise reduction algorithms. Verification data for these technologies, if available, are also summarized. PMID:15678225
Automatic detection of zebra crossings from mobile LiDAR data
NASA Astrophysics Data System (ADS)
Riveiro, B.; González-Jorge, H.; Martínez-Sánchez, J.; Díaz-Vilariño, L.; Arias, P.
2015-07-01
An algorithm for the automatic detection of zebra crossings from mobile LiDAR data is developed and tested for application to road management purposes. The algorithm consists of several subsequent processes, starting with road segmentation by performing a curvature analysis for each laser cycle. Then, intensity images are created from the point cloud using rasterization techniques, in order to detect zebra crossings using the Standard Hough Transform and logical constraints. To optimize the results, image processing algorithms are applied to the intensity images from the point cloud. These algorithms include binarization to separate the painted area from the rest of the pavement, median filtering to avoid noisy points, and mathematical morphology to fill the gaps between the pixels at the border of white marks. Once the road marking is detected, its position is calculated. This information is valuable for inventorying purposes of road managers that use Geographic Information Systems. The performance of the algorithm has been evaluated over several mobile LiDAR strips accounting for a total of 30 zebra crossings. That test showed a completeness of 83%. Non-detected marks mainly result from paint deterioration of the zebra crossing or from occlusions in the point cloud produced by other vehicles on the road.
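The image-processing chain described above maps naturally onto standard OpenCV calls. The sketch below is a hedged approximation: the paper uses the Standard Hough Transform plus logical constraints, whereas the probabilistic variant is used here for brevity, and the file name and parameter values are assumptions.

```python
# Binarize the LiDAR intensity raster, clean it, and find stripe lines.
import cv2
import numpy as np

raster = cv2.imread('intensity_raster.png', cv2.IMREAD_GRAYSCALE)  # hypothetical
_, binary = cv2.threshold(raster, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
binary = cv2.medianBlur(binary, 5)                           # drop noisy points
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)   # fill mark gaps
lines = cv2.HoughLinesP(binary, 1, np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=10)
```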
Tracks detection from high-orbit space objects
NASA Astrophysics Data System (ADS)
Shumilov, Yu. P.; Vygon, V. G.; Grishin, E. A.; Konoplev, A. O.; Semichev, O. P.; Shargorodskii, V. D.
2017-05-01
This paper presents the results of studies of a combined algorithm for the detection of high-orbit space objects. Before the algorithm is applied, a series of frames containing weak tracks of space objects, which can be discrete, is recorded. The algorithm includes pre-processing that is classical for astronomy, matched filtering of each frame and its threshold processing, a shear transformation, median filtering of the transformed series of frames, repeated threshold processing, and the detection decision. Weak tracks of space objects were modeled on real frames of the night starry sky obtained in the stationary-telescope regime. It is shown that the penetrating capability of the optoelectronic device increased by almost 2 magnitudes.
A cascade method for TFT-LCD defect detection
NASA Astrophysics Data System (ADS)
Yi, Songsong; Wu, Xiaojun; Yu, Zhiyang; Mo, Zhuoya
2017-07-01
In this paper, we propose a novel cascade detection algorithm that focuses on point and line defects on TFT-LCDs. In the first step of the algorithm, we use the gray-level difference of sub-images to segment the abnormal area. The second step is based on the phase-only transform (POT), which corresponds to the Discrete Fourier Transform (DFT) normalized by its magnitude. It can remove regularities such as texture and noise. After that, we improve the method of setting regions of interest (ROI) using edge segmentation and polar transformation. The algorithm has outstanding performance in both computation speed and accuracy. It can handle most point and line defect detection tasks, including dark points, light points, dark lines, etc.
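The POT step is compact enough to show directly: normalizing the DFT magnitude keeps only phase, which whitens the regular panel texture and leaves defects as residual energy. A minimal sketch:

```python
# Phase-only transform: texture/noise removal for TFT-LCD defect maps.
import numpy as np

def phase_only_transform(img):
    f = np.fft.fft2(img.astype(float))
    pot = f / (np.abs(f) + 1e-12)        # keep phase, discard magnitude
    return np.abs(np.fft.ifft2(pot))     # bright residuals indicate defects
```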
Using the time shift in single pushbroom datatakes to detect ships and their heading
NASA Astrophysics Data System (ADS)
Willburger, Katharina A. M.; Schwenk, Kurt
2017-10-01
The detection of ships from remote sensing data has become an essential task for maritime security. The variety of application scenarios includes piracy, illegal fishery, ocean dumping, and ships carrying refugees. While techniques using data from SAR sensors for ship detection are widely established, there is little literature discussing algorithms based on imagery from optical camera systems. A ship detection algorithm for optical pushbroom data has been developed. It takes advantage of the special detector assembly of most such scanners, which, apart from the detection of a ship, also allows the calculation of its heading from a single acquisition. The proposed algorithm for the detection of moving ships was developed with RapidEye imagery. The algorithm consists mainly of three steps: the creation of a land-water mask, object extraction, and the deeper examination of each single object. The latter step is built up from several spectral and geometric filters, making heavy use of the inter-channel displacement typical for pushbroom sensors with multiple CCD lines, finally yielding a set of ships and their directions of movement. The working principle of time-shifted pushbroom sensors and the developed algorithm are explained in detail. Furthermore, we present our first results and give an outlook on future improvements.
A cloud masking algorithm for EARLINET lidar systems
NASA Astrophysics Data System (ADS)
Binietoglou, Ioannis; Baars, Holger; D'Amico, Giuseppe; Nicolae, Doina
2015-04-01
Cloud masking is an important first step in any aerosol lidar processing chain, as most data processing algorithms can only be applied to cloud-free observations. Up to now, the selection of a cloud-free time interval for data processing has typically been performed manually, and this is one of the outstanding problems for automatic processing of lidar data in networks such as EARLINET. In this contribution we present initial developments of a cloud masking algorithm that permits the selection of the appropriate time intervals for lidar data processing based on uncalibrated lidar signals. The algorithm is based on a signal normalization procedure using the range of observed values of lidar returns, designed to work with different lidar systems with minimal user input. This normalization procedure can be applied to measurement periods of only a few hours, even if no suitable cloud-free interval exists, and thus can be used even when only a short period of lidar measurements is available. Clouds are detected based on a combination of criteria, including the magnitude of the normalized lidar signal and time-space edge detection performed using the Sobel operator. In this way the algorithm avoids misclassification of strong aerosol layers as clouds. Cloud detection is performed using the highest available time and vertical resolution of the lidar signals, allowing the effective detection of low-level clouds (e.g. cumulus humilis). Special attention is given to suppressing false cloud detection due to signal noise, which can affect the algorithm's performance, especially during daytime. In this contribution we present the details of the algorithm, the effect of lidar characteristics (space-time resolution, available wavelengths, signal-to-noise ratio) on detection performance, and highlight the current strengths and limitations of the algorithm using lidar scenes from different lidar systems in different locations across Europe.
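A minimal sketch of the two criteria, assuming a time-by-range matrix of uncalibrated returns; the percentile normalization and thresholds are illustrative, not the algorithm's actual settings.

```python
# Cloud masking from normalized signal magnitude plus Sobel edge strength.
import numpy as np
from scipy.ndimage import sobel

def cloud_mask(rcs, mag_thresh=0.8, edge_thresh=0.5):
    lo, hi = np.percentile(rcs, [5, 99])
    norm = np.clip((rcs - lo) / (hi - lo + 1e-12), 0.0, 1.0)
    edges = np.hypot(sobel(norm, axis=0), sobel(norm, axis=1))
    # Clouds are strong AND sharp-edged; aerosol layers are strong but
    # diffuse, so the combined test avoids misclassifying them.
    return (norm > mag_thresh) & (edges > edge_thresh)
```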
Chastek, Benjamin J; Oleen-Burkey, Merrikay; Lopez-Bresnahan, Maria V
2010-01-01
Relapse is a common measure of disease activity in relapsing-remitting multiple sclerosis (MS). The objective of this study was to test the content validity of an operational algorithm for detecting relapse in claims data. A claims-based relapse detection algorithm was tested by comparing its detection rate over a 1-year period with relapses identified based on medical chart review. According to the algorithm, MS patients in a US healthcare claims database who had either (1) a primary claim for MS during hospitalization or (2) a corticosteroid claim following a MS-related outpatient visit were designated as having a relapse. Patient charts were examined for explicit indication of relapse or care suggestive of relapse. Positive and negative predictive values were calculated. Medical charts were reviewed for 300 MS patients, half of whom had a relapse according to the algorithm. The claims-based criteria correctly classified 67.3% of patients with relapses (positive predictive value) and 70.0% of patients without relapses (negative predictive value; kappa 0.373: p < 0.001). Alternative algorithms did not improve on the predictive value of the operational algorithm. Limitations of the algorithm include lack of differentiation between relapsing-remitting MS and other types, and that it does not incorporate measures of function and disability. The claims-based algorithm appeared to successfully detect moderate-to-severe MS relapse. This validated definition can be applied to future claims-based MS studies.
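The two-branch rule translates directly into a join over a claims table. The sketch below assumes a hypothetical pre-coded table (columns patient_id, date, kind) and a 30-day visit-to-steroid window, which the abstract does not specify.

```python
# Claims-based relapse flagging: hospitalization branch OR steroid branch.
import pandas as pd

def detect_relapsers(claims: pd.DataFrame, window_days: int = 30) -> set:
    """claims: columns patient_id, date (datetime64), kind in
    {'hospital_ms', 'outpatient_ms_visit', 'corticosteroid'}."""
    relapsers = set(claims.loc[claims['kind'] == 'hospital_ms', 'patient_id'])
    visits = claims[claims['kind'] == 'outpatient_ms_visit']
    steroids = claims[claims['kind'] == 'corticosteroid']
    pairs = visits.merge(steroids, on='patient_id', suffixes=('_visit', '_rx'))
    gap = (pairs['date_rx'] - pairs['date_visit']).dt.days
    relapsers |= set(pairs.loc[(gap >= 0) & (gap <= window_days), 'patient_id'])
    return relapsers
```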
Dórea, Fernanda C.; McEwen, Beverly J.; McNab, W. Bruce; Sanchez, Javier; Revie, Crawford W.
2013-01-01
Background: Syndromic surveillance research has focused on two main themes: the search for data sources that can provide early disease detection; and the development of efficient algorithms that can detect potential outbreak signals. Methods: This work combines three algorithms that have demonstrated solid performance in detecting simulated outbreak signals of varying shapes in time series of laboratory submission counts. These are: the Shewhart control charts, designed to detect sudden spikes in counts; the EWMA control charts, developed to detect slowly increasing outbreaks; and the Holt-Winters exponential smoothing, which can explicitly account for temporal effects in the data stream monitored. A scoring system to detect and report alarms using these algorithms in a complementary way is proposed. Results: The use of multiple algorithms in parallel resulted in increased system sensitivity. Specificity was decreased in simulated data, but the number of false alarms per year when the approach was applied to real data was considered manageable (between 1 and 3 per year for each of ten syndromic groups monitored). The automated implementation of this approach, including a method for on-line filtering of potential outbreak signals, is described. Conclusion: The developed system provides high sensitivity for detection of potential outbreak signals while also providing robustness and flexibility in establishing what signals constitute an alarm. This flexibility allows an analyst to customize the system for different syndromes. PMID:24349216
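Two of the three detectors are simple enough to sketch in a few lines; Holt-Winters smoothing, the third, is available in statsmodels as ExponentialSmoothing. The control limits and the voting scheme below are illustrative stand-ins for the paper's scoring system.

```python
# Shewhart (sudden spikes) and EWMA (slow increases) charts, combined votes.
import numpy as np

def shewhart_alarms(counts, k=3.0):
    mu, sd = counts.mean(), counts.std(ddof=1)
    return counts > mu + k * sd

def ewma_alarms(counts, lam=0.2, k=3.0):
    mu, sd = counts.mean(), counts.std(ddof=1)
    limit = mu + k * sd * np.sqrt(lam / (2.0 - lam))   # asymptotic EWMA limit
    z, alarms = mu, np.zeros(len(counts), dtype=bool)
    for t, y in enumerate(counts):
        z = lam * y + (1.0 - lam) * z
        alarms[t] = z > limit
    return alarms

def alarm_score(counts):
    # Each detector votes; higher scores flag stronger outbreak evidence.
    return shewhart_alarms(counts).astype(int) + ewma_alarms(counts).astype(int)
```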
Pourghassem, Hossein
2012-01-01
Material detection is a vital need for dual energy X-ray luggage inspection systems used in security at airports and strategic places. In this paper, a novel material detection algorithm based on statistical trainable models using the 2-dimensional power density function (PDF) of three material categories in dual energy X-ray images is proposed. In this algorithm, the PDF of each material category, as a statistical model, is estimated from the transmission measurement values of low- and high-energy X-ray images by Gaussian Mixture Models (GMM). The material label of each object pixel is determined from the probability of its low- and high-energy transmission measurements under the PDF of each of the three material categories (metallic, organic and mixed materials). The performance of the material detection algorithm is improved by a maximum voting scheme in an image neighborhood as a post-processing stage. Using background-removal and denoising stages, the high- and low-energy X-ray images are enhanced in a pre-processing procedure. To improve the discrimination capability of the proposed material detection algorithm, the details of the low- and high-energy X-ray images are added to the constructed color image, which uses three colors (orange, blue and green) to represent the organic, metallic and mixed materials. The proposed algorithm is evaluated on real images captured from a commercial dual energy X-ray luggage inspection system. The obtained results show that the proposed algorithm is effective in detecting metallic, organic and mixed materials with acceptable accuracy.
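A hedged sketch of the statistical core: fit one GMM per material category on (low, high) transmission pairs, then label each pixel by maximum likelihood. The training data, component counts, and smoothing are assumptions standing in for the paper's exact models and voting stage.

```python
# Per-category GMMs over dual-energy transmission pairs; ML pixel labels.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_models(samples_by_class, n_components=3):
    # samples_by_class: {'organic': (n, 2) array, 'metallic': ..., 'mixed': ...}
    return {c: GaussianMixture(n_components).fit(s)
            for c, s in samples_by_class.items()}

def classify_pixels(low, high, models):
    X = np.column_stack([low.ravel(), high.ravel()])
    ll = np.column_stack([m.score_samples(X) for m in models.values()])
    # A neighborhood majority vote (the paper's post-processing stage)
    # would further smooth these raw per-pixel labels.
    return ll.argmax(axis=1).reshape(low.shape)
```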
Tachycardia detection in ICDs by Boston Scientific : Algorithms, pearls, and pitfalls.
Zanker, Norbert; Schuster, Diane; Gilkerson, James; Stein, Kenneth
2016-09-01
The aim of this study was to summarize how implantable cardioverter defibrillators (ICDs) by Boston Scientific sense, detect, discriminate rhythms, and classify episodes. Modern devices include multiple programming selections, diagnostic features, therapy options, memory functions, and device-related history features. Device operation includes logical steps from sensing, detection, discrimination, therapy delivery to history recording. The program is designed to facilitate the application of the device algorithms to the individual patient's clinical needs. Features and functions described in this article represent a selective excerpt by the authors from Boston Scientific publicly available product resources. Programming of ICDs may affect patient outcomes. Patient-adapted and optimized programming requires understanding of device operation and concepts.
NASA Astrophysics Data System (ADS)
Wantuch, Andrew C.; Vita, Joshua A.; Jimenez, Edward S.; Bray, Iliana E.
2016-10-01
Despite object detection, recognition, and identification being very active areas of computer vision research, many of the available tools to aid in these processes are designed with only photographs in mind. Although some algorithms used specifically for feature detection and identification may not take explicit advantage of the colors available in the image, they still under-perform on radiographs, which are grayscale images. We are especially interested in the robustness of these algorithms, specifically their performance on a preexisting database of X-ray radiographs in compressed JPEG form, with multiple ways of describing pixel information. We will review various aspects of the performance of available feature detection and identification systems, including MATLAB's Computer Vision Toolbox, VLFeat, and OpenCV, on our non-ideal database. In the process, we will explore possible reasons for the algorithms' lessened ability to detect and identify features in the X-ray radiographs.
Data Products From Particle Detectors On-Board NOAA's Newest Space Weather Monitor
NASA Astrophysics Data System (ADS)
Kress, B. T.; Rodriguez, J. V.; Onsager, T. G.
2017-12-01
NOAA's newest Geostationary Operational Environmental Satellite, GOES-16, was launched on 19 November 2016. Instrumentation on-board GOES-16 includes the new Space Environment In-Situ Suite (SEISS), which has been collecting data since 8 January 2017. SEISS is composed of five magnetospheric particle sensor units: an electrostatic analyzer for measuring 30 eV - 30 keV ions and electrons (MPS-LO), a high energy particle sensor (MPS-HI) that measures keV to MeV electrons and protons, east- and west-facing Solar and Galactic Proton Sensor (SGPS) units with 13 differential channels between 1-500 MeV, and an Energetic Heavy Ion Sensor (EHIS) that measures 30 species of heavy ions (He-Ni) in five energy bands in the 10-200 MeV/nuc range. Measurements of low-energy magnetospheric particles by MPS-LO and of heavy ions by EHIS are new capabilities not previously flown on the GOES system. Real-time data from GOES-16 will support space weather monitoring and first-principles space weather modeling by NOAA's Space Weather Prediction Center (SWPC). Space weather level 2+ data products under development at NOAA's National Centers for Environmental Information (NCEI) include the Solar Energetic Particle (SEP) Event Detection algorithm. Legacy components of the SEP event detection algorithm (currently produced by SWPC) include the Solar Radiation Storm Scales. New components will include, e.g., event fluences. New level 2+ data products also include the SEP event Linear Energy Transfer (LET) Algorithm, for transforming energy spectra from EHIS into LET spectra, and the Density and Temperature Moments and Spacecraft Charging algorithm. The moments and charging algorithm identifies electron and ion signatures of spacecraft surface (frame) charging in the MPS-LO fluxes. Densities and temperatures from MPS-LO will also be used to support a magnetopause crossing detection algorithm. The new data products will provide real-time indicators of potential radiation hazards for the satellite community and data for future studies of space weather effects. This presentation will include an overview of these algorithms and examples of their performance during recent corotating interaction region (CIR)-associated radiation belt enhancements and a solar particle event on 14-15 July 2017.
The Chandra Source Catalog: Algorithms
NASA Astrophysics Data System (ADS)
McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.
2009-09-01
Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyse the source properties, we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled-energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.
Development of anomaly detection models for deep subsurface monitoring
NASA Astrophysics Data System (ADS)
Sun, A. Y.
2017-12-01
Deep subsurface repositories are used for waste disposal and carbon sequestration. Monitoring deep subsurface repositories for potential anomalies is challenging, not only because the number of sensor networks and the quality of data are often limited, but also because of the lack of labeled data needed to train and validate machine learning (ML) algorithms. Although physical simulation models may be applied to predict anomalies (or, for that matter, the system's nominal state), the accuracy of such predictions may be limited by inherent conceptual and parameter uncertainties. The main objective of this study was to demonstrate the potential of data-driven models for leakage detection in carbon sequestration repositories. Monitoring data collected during an artificial CO2 release test at a carbon sequestration repository were used, including both scalar time series (pressure) and vector time series (distributed temperature sensing). For each type of data, separate online anomaly detection algorithms were developed using the baseline (no-leak) experiment data and then tested on the leak experiment data. The performance of a number of different online algorithms was compared. Results show the importance of including contextual information in the dataset to mitigate the impact of reservoir noise and reduce the false positive rate. The developed algorithms were integrated into a generic Web-based platform for real-time anomaly detection.
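A minimal flavor of such an online detector, assuming a scalar pressure stream: a rolling robust z-score flags departures from recent behavior. The window length and threshold are illustrative, and the paper's contextual covariates are not modeled here.

```python
# Online anomaly detection via rolling median/MAD z-score.
import numpy as np
from collections import deque

class OnlineZScoreDetector:
    def __init__(self, window=500, threshold=4.0):
        self.buf = deque(maxlen=window)
        self.threshold = threshold

    def update(self, value):
        flagged = False
        if len(self.buf) == self.buf.maxlen:
            arr = np.asarray(self.buf)
            med = np.median(arr)
            mad = np.median(np.abs(arr - med)) + 1e-12
            flagged = abs(value - med) / (1.4826 * mad) > self.threshold
        self.buf.append(value)   # the detector adapts as new samples arrive
        return flagged
```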
Processing LiDAR Data to Predict Natural Hazards
NASA Technical Reports Server (NTRS)
Fairweather, Ian; Crabtree, Robert; Hager, Stacey
2008-01-01
ELF-Base and ELF-Hazards (wherein 'ELF' signifies 'Extract LiDAR Features' and 'LiDAR' signifies 'light detection and ranging') are developmental software modules for processing remote-sensing LiDAR data to identify past natural hazards (principally, landslides) and predict future ones. ELF-Base processes raw LiDAR data, including LiDAR intensity data that are often ignored in other software, to create digital terrain models (DTMs) and digital feature models (DFMs) with sub-meter accuracy. ELF-Hazards fuses raw LiDAR data, data from multispectral and hyperspectral optical images, and DTMs and DFMs generated by ELF-Base to generate hazard risk maps. Advanced algorithms in these software modules include line-enhancement and edge-detection algorithms, surface-characterization algorithms, and algorithms that implement innovative data-fusion techniques. The line-extraction and edge-detection algorithms enable users to locate such features as faults and landslide headwall scarps. Also implemented in this software are improved methodologies for identification and mapping of past landslide events by use of (1) accurate, ELF-derived surface characterizations and (2) three LiDAR/optical-data-fusion techniques: post-classification data fusion, maximum-likelihood estimation modeling, and hierarchical within-class discrimination. This software is expected to enable faster, more accurate forecasting of natural hazards than has previously been possible.
Kotiadis, D; Hermens, H J; Veltink, P H
2010-05-01
An Inertial Gait Phase Detection system was developed to replace the heel switches and footswitches currently used for triggering drop-foot stimulators. A series of four algorithms utilising accelerometers and gyroscopes, individually and in combination, were tested and initial results are shown. Sensors were positioned on the outside of the upper shank. Tests were performed on data gathered from a stroke survivor implanted with a drop-foot stimulator triggered by the current trigger, the heel switch. The data tested include a variety of activities representing everyday life. Flat-surface walking, rough terrain and carpet walking show 100% detection and the ability of the algorithms to ignore non-gait events such as weight shifts. Timing analysis was performed against the current triggering method, the heel switch, after evaluating the heel-switch timing against a reference system, namely the Vicon 370 marker and force-plate system. Initial results show a close correlation between the current trigger detection and the inertial-sensor-based triggering algorithms. The algorithms were also tested for walking up and down stairs; the best results are observed for algorithms using gyroscope data. The algorithms were designed using threshold techniques for the lowest possible computational load and with the fewest possible sensor components, to minimize power requirements and to allow for potential future implantation of the sensor system.
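A threshold detector on shank gyroscope data can be sketched as follows; the axis convention, sampling rate, and thresholds are assumptions, and the study's actual four algorithms are not reproduced here.

```python
# Gait-event candidates from sagittal shank angular velocity.
import numpy as np
from scipy.signal import find_peaks

def gait_events(gyro_sagittal, fs=100.0, swing_thresh=1.0):
    # Mid-swing: prominent positive angular-velocity peaks.
    swings, _ = find_peaks(gyro_sagittal, height=swing_thresh,
                           distance=int(0.5 * fs))
    # Heel-strike candidate: first negative-going crossing after each swing.
    strikes = []
    for s in swings:
        below = np.nonzero(gyro_sagittal[s:] < 0.0)[0]
        if below.size:
            strikes.append(s + int(below[0]))
    return swings, strikes
```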
Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang
2014-06-01
We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distributions to deal with co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models, including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is identified using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect peaks with lower false discovery rates than the existing algorithms, and a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models.
Comparison of human and algorithmic target detection in passive infrared imagery
NASA Astrophysics Data System (ADS)
Weber, Bruce A.; Hutchinson, Meredith
2003-09-01
We have designed an experiment that compares the performance of human observers and a scale-insensitive target detection algorithm that uses pixel-level information for the detection of ground targets in passive infrared imagery. The test database contains targets near clutter whose detectability ranged from easy to very difficult. Results indicate that human observers detect more "easy-to-detect" targets, and with far fewer false alarms, than the algorithm. For "difficult-to-detect" targets, human and algorithm detection rates are considerably degraded, and algorithm false alarms are excessive. Analysis of detections as a function of observer confidence shows that algorithm confidence attribution does not correspond to human attribution and does not adequately correlate with correct detections. The best target detection score for any human observer was 84%, as compared to 55% for the algorithm at the same false alarm rate. At 81%, the maximum detection score for the algorithm, the same human observer had 6 false alarms per frame as compared to 29 for the algorithm. Detector ROC curves and observer-confidence analysis benchmark the algorithm and provide insights into algorithm deficiencies and possible paths to improvement.
Automated detection of diabetic retinopathy on digital fundus images.
Sinthanayothin, C; Boyce, J F; Williamson, T H; Cook, H L; Mensah, E; Lal, S; Usher, D
2002-02-01
The aim was to develop an automated screening system to analyse digital colour retinal images for important features of non-proliferative diabetic retinopathy (NPDR). High performance pre-processing of the colour images was performed. Previously described automated image analysis systems were used to detect major landmarks of the retinal image (optic disc, blood vessels and fovea). Recursive region growing segmentation algorithms combined with the use of a new technique, termed a 'Moat Operator', were used to automatically detect features of NPDR. These features included haemorrhages and microaneurysms (HMA), which were treated as one group, and hard exudates as another group. Sensitivity and specificity data were calculated by comparison with an experienced fundoscopist. The algorithm for exudate recognition was applied to 30 retinal images of which 21 contained exudates and nine were without pathology. The sensitivity and specificity for exudate detection were 88.5% and 99.7%, respectively, when compared with the ophthalmologist. HMA were present in 14 retinal images. The algorithm achieved a sensitivity of 77.5% and specificity of 88.7% for detection of HMA. Fully automated computer algorithms were able to detect hard exudates and HMA. This paper presents encouraging results in automatic identification of important features of NPDR.
Dessimoz, Christophe; Boeckmann, Brigitte; Roth, Alexander C J; Gonnet, Gaston H
2006-01-01
Correct orthology assignment is a critical prerequisite of numerous comparative genomics procedures, such as function prediction, construction of phylogenetic species trees and genome rearrangement analysis. We present an algorithm for the detection of non-orthologs that arise by mistake in current orthology classification methods based on genome-specific best hits, such as the COGs database. The algorithm works with pairwise distance estimates, rather than computationally expensive and error-prone tree-building methods. The accuracy of the algorithm is evaluated through verification of the distribution of predicted cases, case-by-case phylogenetic analysis and comparisons with predictions from other projects using independent methods. Our results show that a very significant fraction of the COG groups include non-orthologs: using conservative parameters, the algorithm detects non-orthology in a third of all COG groups. Consequently, sequence analysis sensitive to correct orthology assignments will greatly benefit from these findings.
Autoregressive statistical pattern recognition algorithms for damage detection in civil structures
NASA Astrophysics Data System (ADS)
Yao, Ruigen; Pakzad, Shamim N.
2012-08-01
Statistical pattern recognition has recently emerged as a promising set of complementary methods to system identification for automatic structural damage assessment. Its essence is to use well-known concepts in statistics for boundary definition of different pattern classes, such as those for damaged and undamaged structures. In this paper, several statistical pattern recognition algorithms using autoregressive models, including statistical control charts and hypothesis testing, are reviewed as potentially competitive damage detection techniques. To enhance the performance of statistical methods, new feature extraction techniques using model spectra and residual autocorrelation, together with resampling-based threshold construction methods, are proposed. Subsequently, simulated acceleration data from a multi-degree-of-freedom system are generated to test and compare the efficiency of the existing and proposed algorithms. Data from laboratory experiments conducted on a truss and a large-scale bridge slab model are then used to further validate the damage detection methods and demonstrate the superior performance of the proposed algorithms.
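A compact sketch of the autoregressive feature pipeline: fit AR(p) coefficients to baseline acceleration records by least squares, then flag a test record whose coefficients fall outside the baseline pattern class. The order p and the limit k are illustrative, and the paper's spectral and resampling refinements are not shown.

```python
# AR-coefficient features with a simple control-chart decision.
import numpy as np

def ar_coefficients(x, p=10):
    # Lag matrix: column i holds x delayed by (i + 1) samples.
    X = np.column_stack([x[p - i - 1: len(x) - i - 1] for i in range(p)])
    y = x[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def damage_indicator(baseline_records, test_record, p=10, k=3.0):
    feats = np.array([ar_coefficients(r, p) for r in baseline_records])
    mu, sd = feats.mean(axis=0), feats.std(axis=0) + 1e-12
    z = (ar_coefficients(test_record, p) - mu) / sd
    return bool(np.abs(z).max() > k)   # outside the undamaged pattern class
```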
The Use of Meteorological Data to Improve Contrail Detection in Thermal Imagery over Ireland.
NASA Technical Reports Server (NTRS)
Whelan, Gillian M.; Cawkwell, Fiona; Mannstein, Hermann; Minnis, Patrick
2009-01-01
Aircraft-induced contrails have been found to have a net warming influence on the climate system, with strong regional dependence. Persistent linear contrails are detectable in 1 km thermal imagery and, using an automated Contrail Detection Algorithm (CDA), can be identified on the basis of their different properties at the 11 and 12 μm wavelengths. The algorithm's ability to distinguish contrails from other linear features depends on the sensitivity of its tuning parameters. In order to keep the number of false identifications low, the algorithm imposes strict limits on contrail size, linearity and intensity. This paper investigates whether including additional information (i.e., meteorological data) within the CDA may allow these criteria to be less rigorous, thus increasing the contrail-detection rate without increasing the false alarm rate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enghauser, Michael
2016-02-01
The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
An Algorithm to Detect the Retinal Region of Interest
NASA Astrophysics Data System (ADS)
Şehirli, E.; Turan, M. K.; Demiral, E.
2017-11-01
The retina is one of the important layers of the eye; it includes nerve fibers and cells sensitive to colour and light. The retina can be imaged using medical devices such as the fundus camera and the ophthalmoscope. Hence, lesions such as microaneurysms, haemorrhages and exudates associated with many diseases of the eye can be detected by examining the images taken by these devices. In the computer vision and biomedical fields, studies on automatically detecting lesions of the eye have been conducted for a long time. In order to make automated detections, the concept of the ROI may be utilized. ROI, which stands for region of interest, generally serves the purpose of focusing on particular targets. The main focus of this paper is an algorithm to automatically detect the retinal region of interest in different retinal images within a software application. The algorithm consists of three stages: a pre-processing stage, detection of the ROI on the processed images, and overlaying of the obtained ROI on the input image.
Discovering the Unknown: Improving Detection of Novel Species and Genera from Short Reads
Rosen, Gail L.; Polikar, Robi; Caseiro, Diamantino A.; ...
2011-01-01
High-throughput sequencing technologies enable metagenome profiling, the simultaneous sequencing of multiple microbial species present within an environmental sample. Since metagenomic data include sequence fragments ("reads") from organisms that are absent from any database, new algorithms must be developed for the identification and annotation of novel sequence fragments. Homology-based techniques have been modified to detect novel species and genera, but composition-based methods have not been adapted. We develop a detection technique that can discriminate between "known" and "unknown" taxa, which can be used with composition-based methods, as well as a hybrid method. Unlike previous studies, we rigorously evaluate all algorithms for their ability to detect novel taxa. First, we show that the integration of a detector with a composition-based method performs significantly better than homology-based methods for the detection of novel species and genera, with best performance at finer taxonomic resolutions. Most importantly, we evaluate all the algorithms by introducing an "unknown" class and show that the modified version of PhymmBL has similar or better overall classification performance than the other modified algorithms, especially at the species level and for ultrashort reads. Finally, we evaluate the performance of several algorithms on a real acid mine drainage dataset.
Automatic detection of artifacts in converted S3D video
NASA Astrophysics Data System (ADS)
Bokov, Alexander; Vatolin, Dmitriy; Zachesov, Anton; Belous, Alexander; Erofeev, Mikhail
2014-03-01
In this paper we present algorithms for automatically detecting issues specific to converted S3D content. When a depth-image-based rendering approach produces a stereoscopic image, the quality of the result depends on both the depth maps and the warping algorithms. The most common problem with converted S3D video is edge-sharpness mismatch. This artifact may appear owing to depth-map blurriness at semitransparent edges: after warping, the object boundary becomes sharper in one view and blurrier in the other, yielding binocular rivalry. To detect this problem we estimate the disparity map, extract boundaries with noticeable differences, and analyze edge-sharpness correspondence between views. We pay additional attention to cases involving a complex background and large occlusions. Another problem is the detection of scenes that lack depth volume: we present algorithms for detecting flat scenes and scenes with flat foreground objects. To identify these problems we analyze the features of the RGB image as well as uniform areas in the depth map. Testing of our algorithms involved examining 10 Blu-ray 3D releases with converted S3D content, including Clash of the Titans, The Avengers, and The Chronicles of Narnia: The Voyage of the Dawn Treader. The algorithms we present enable improved automatic quality assessment during the production stage.
NASA Astrophysics Data System (ADS)
Tartakovsky, A.; Tong, M.; Brown, A. P.; Agh, C.
2013-09-01
We develop efficient spatiotemporal image processing algorithms for rejection of non-stationary clutter and tracking of multiple dim objects using non-linear track-before-detect methods. For clutter suppression, we include an innovative image alignment (registration) algorithm. The images are assumed to contain elements of the same scene, but taken at different angles, from different locations, and at different times, with substantial clutter non-stationarity. These challenges are typical for space-based and surface-based IR/EO moving sensors, e.g., highly elliptical orbit or low earth orbit scenarios. The algorithm assumes that the images are related via a planar homography, also known as the projective transformation. The parameters are estimated in an iterative manner, at each step adjusting the parameter vector so as to achieve improved alignment of the images. Operating in the parameter space rather than in the coordinate space is a new idea, which makes the algorithm more robust with respect to noise as well as to large inter-frame disturbances, while operating at real-time rates. For dim object tracking, we include new advancements to a particle non-linear filtering-based track-before-detect (TrbD) algorithm. The new TrbD algorithm includes both real-time full image search for resolved objects not yet in track and joint super-resolution and tracking of individual objects in closely spaced object (CSO) clusters. The real-time full image search provides near-optimal detection and tracking of multiple extremely dim, maneuvering objects/clusters. The super-resolution and tracking CSO TrbD algorithm provides efficient near-optimal estimation of the number of unresolved objects in a CSO cluster, as well as the locations, velocities, accelerations, and intensities of the individual objects. We demonstrate that the algorithm is able to accurately estimate the number of CSO objects and their locations when the initial uncertainty on the number of objects is large. We demonstrate performance of the TrbD algorithm both for satellite-based and surface-based EO/IR surveillance scenarios.
NASA Astrophysics Data System (ADS)
Skoumal, R.; Brudzinski, M.; Currie, B.
2015-12-01
Induced seismic sequences often occur as swarms that can include thousands of small (< M 2) earthquakes. While the identification of this microseismicity would invariably aid in the characterization and modeling of induced sequences, traditional earthquake detection techniques often provide incomplete catalogs, even when local networks are deployed. Because induced sequences often include scores of micro-earthquakes that precede larger magnitude events, the identification of these small magnitude events is crucial for the early identification of induced sequences. By taking advantage of the repeating, swarm-like nature of induced seismicity, a more robust catalog can be created using complementary correlation algorithms in near real-time without reliance on traditional earthquake detection and association routines. Since traditional earthquake catalog methodologies using regional networks have a relatively high detection threshold (M 2+), we have sought to develop correlation routines that can detect smaller magnitude sequences. While short-term/long-term amplitude average detection algorithms require a significant signal-to-noise ratio at multiple stations for confident identification, a correlation detector is capable of identifying earthquakes with high confidence using just a single station. The result is an embarrassingly parallel task that can be employed across a network as an early warning system for potentially induced seismicity while also better characterizing tectonic sequences beyond what traditional methods allow.
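A single-station correlation detector of the kind described reduces to sliding a normalized template over continuous data. The sketch below is a minimal illustration (template, data, and threshold are placeholders); the per-station and per-template independence is what makes the task embarrassingly parallel.

```python
import numpy as np

def correlation_detections(data, template, threshold=0.8):
    """Slide a waveform template over continuous data; return detection indices."""
    t = (template - template.mean()) / template.std()
    n = len(t)
    hits = []
    for i in range(len(data) - n + 1):
        w = data[i:i + n]
        s = w.std()
        if s == 0:
            continue  # dead trace segment, skip
        cc = np.dot((w - w.mean()) / s, t) / n  # normalized cross-correlation
        if cc >= threshold:
            hits.append((i, cc))
    return hits
```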
Tele-operated search robot for human detection using histogram of oriented objects
NASA Astrophysics Data System (ADS)
Cruz, Febus Reidj G.; Avendaño, Glenn O.; Manlises, Cyrel O.; Avellanosa, James Jason G.; Abina, Jyacinth Camille F.; Masaquel, Albert M.; Siapno, Michael Lance O.; Chung, Wen-Yaw
2017-02-01
Disasters such as typhoons, tornadoes, and earthquakes are inevitable, and their aftermaths include missing people. Using robots with human detection capabilities to locate missing people can dramatically reduce the harm and risk to those who work in such circumstances. This study aims to: design and build a tele-operated robot; implement in MATLAB an algorithm for the detection of humans; and create a database of human identification based on various positions, angles, light intensities, and distances from which humans can be identified. Different light intensities were created using Photoshop to simulate smoke, dust, and water-drop conditions. After processing the image, the system indicates whether or not a human is detected. Testing with covered bodies was also conducted to assess the algorithm's robustness. Based on the results, the algorithm can detect humans when the full body is shown. For upright and lying positions, detection occurs from 8 feet to 20 feet. For the sitting position, detection occurs from 2 feet to 20 feet, with slight variances in results because of different lighting conditions. At distances greater than 20 feet, no humans can be detected, or false negatives occur. For covered bodies, the algorithm can detect humans in the cases tested under the given circumstances. In all three positions, humans can be detected from 0 degrees to 180 degrees under normal, smoke, dust, and water-droplet conditions. This study designed and built a tele-operated robot with a MATLAB algorithm that detects humans with an overall precision of 88.30%, from which a database was created for human identification under various conditions.
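The paper's MATLAB detector is not available, but HOG-based human detection of this general kind can be illustrated with OpenCV's stock people detector; the input filename below is hypothetical.

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("camera_frame.jpg")  # hypothetical robot camera frame
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
for (x, y, w, h) in rects:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
print("human detected" if len(rects) else "not detected")
```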
Load power device and system for real-time execution of hierarchical load identification algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Yi; Madane, Mayura Arun; Zambare, Prachi Suresh
A load power device includes a power input; at least one power output for at least one load; and a plurality of sensors structured to sense voltage and current at the at least one power output. A processor is structured to provide real-time execution of: (a) a plurality of load identification algorithms, and (b) event detection and operating mode detection for the at least one load.
Optimization of a chemical identification algorithm
NASA Astrophysics Data System (ADS)
Chyba, Thomas H.; Fisk, Brian; Gunning, Christin; Farley, Kevin; Polizzi, Amber; Baughman, David; Simpson, Steven; Slamani, Mohamed-Adel; Almassy, Robert; Da Re, Ryan; Li, Eunice; MacDonald, Steve; Slamani, Ahmed; Mitchell, Scott A.; Pendell-Jones, Jay; Reed, Timothy L.; Emge, Darren
2010-04-01
A procedure to evaluate and optimize the performance of a chemical identification algorithm is presented. The Joint Contaminated Surface Detector (JCSD) employs Raman spectroscopy to detect and identify surface chemical contamination. JCSD measurements of chemical warfare agents, simulants, toxic industrial chemicals, interferents and bare surface backgrounds were made in the laboratory and under realistic field conditions. A test data suite, developed from these measurements, is used to benchmark algorithm performance throughout the improvement process. In any one measurement, one of many possible targets can be present along with interferents and surfaces. The detection results are expressed as a 2-category classification problem so that Receiver Operating Characteristic (ROC) techniques can be applied. The limitations of applying this framework to chemical detection problems are discussed along with means to mitigate them. Algorithmic performance is optimized globally using robust Design of Experiments and Taguchi techniques. These methods require figures of merit to trade off between false alarms and detection probability. Several figures of merit, including the Matthews Correlation Coefficient and the Taguchi Signal-to-Noise Ratio are compared. Following the optimization of global parameters which govern the algorithm behavior across all target chemicals, ROC techniques are employed to optimize chemical-specific parameters to further improve performance.
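The Matthews Correlation Coefficient mentioned above condenses a 2-category confusion matrix into a single figure of merit that trades off detections against false alarms; below is the standard formula, with illustrative counts rather than JCSD results.

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient from confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(mcc(tp=90, tn=950, fp=50, fn=10))  # illustrative counts only
```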
NASA Astrophysics Data System (ADS)
Duncan, D.; Kummerow, C. D.; Meier, W.
2016-12-01
Over the lifetime of AMSR-E, operational retrieval algorithms were developed and run for precipitation, ocean suite (SST, wind speed, cloud liquid water path, and column water vapor over ocean), sea ice, snow water equivalent, and soil moisture. With a separate algorithm for each group, the retrievals were never interactive or integrated in any way despite many co-sensitivities. AMSR2, the follow-on mission to AMSR-E, retrieves the same parameters at a slightly higher spatial resolution. We have combined the operational algorithms for AMSR2 in a way that facilitates sharing information between the retrievals. Difficulties that arose were mainly related to calibration, spatial resolution, coastlines, and order of processing. The integration of all algorithms for AMSR2 has numerous benefits, including better detection of light precipitation and sea ice, fewer screened out pixels, and better quality flags. Integrating the algorithms opens up avenues for investigating the limits of detectability for precipitation from a passive microwave radiometer and the impact of spatial resolution on sea ice edge detection; these are investigated using CloudSat and MODIS coincident observations from the A-Train constellation.
NASA Technical Reports Server (NTRS)
Bonnice, W. F.; Motyka, P.; Wagner, E.; Hall, S. R.
1986-01-01
The performance of the orthogonal series generalized likelihood ratio (OSGLR) test in detecting and isolating commercial aircraft control surface and actuator failures is evaluated. A modification to incorporate age-weighting which significantly reduces the sensitivity of the algorithm to modeling errors is presented. The steady-state implementation of the algorithm based on a single linear model valid for a cruise flight condition is tested using a nonlinear aircraft simulation. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection and isolation performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling on dynamic pressure and flap deflection is examined. Based on this testing, the OSGLR algorithm should be capable of detecting control surface failures that would affect the safe operation of a commercial aircraft. Isolation may be difficult if there are several surfaces which produce similar effects on the aircraft. Extending the algorithm over the entire operating envelope of a commercial aircraft appears feasible.
Daytime sea fog retrieval based on GOCI data: a case study over the Yellow Sea.
Yuan, Yibo; Qiu, Zhongfeng; Sun, Deyong; Wang, Shengqiang; Yue, Xiaoyuan
2016-01-25
In this paper, a new daytime sea fog detection algorithm has been developed using Geostationary Ocean Color Imager (GOCI) data. Based on spectral analysis, differences in spectral characteristics were found over different underlying surfaces, including land, sea, middle/high-level clouds, stratus clouds and sea fog. Statistical analysis showed that the Rayleigh-corrected reflectance Rrc (412 nm) of sea fog pixels is approximately 0.1-0.6. Similarly, various band combinations could be used to separate different surfaces. Therefore, three indices (SLDI, MCDI and BSI) were defined to discern land/sea, middle/high-level clouds and fog/stratus clouds, respectively, from which it was generally easy to extract fog pixels. The remote sensing algorithm was verified using coastal sounding data, which demonstrated that the algorithm is able to detect sea fog. The algorithm was then used to monitor an 8-hour sea fog event, and the results were consistent with observational data from buoys deployed near the Sheyang coast (121°E, 34°N). The goal of this study was to establish a daytime sea fog detection algorithm based on GOCI data, which shows promise for detecting fog separately from stratus.
Information dynamics algorithm for detecting communities in networks
NASA Astrophysics Data System (ADS)
Massaro, Emanuele; Bagnoli, Franco; Guazzini, Andrea; Lió, Pietro
2012-11-01
The problem of community detection is relevant in many scientific disciplines, from social science to statistical physics. Given the impact of community detection in many areas, such as psychology and the social sciences, we have addressed the issue of modifying existing well-performing algorithms by incorporating elements of the application domain, i.e. making them domain-inspired. We have focused on a psychology- and social-network-inspired approach which may be useful for further strengthening the link between social network studies and the mathematics of community detection. Here we introduce a community-detection algorithm derived from van Dongen's Markov Cluster (MCL) algorithm [4] by considering network nodes as agents capable of making decisions. In this framework we have introduced a memory factor to mimic a typical human behavior, the oblivion effect. The method is based on information diffusion and includes a non-linear processing phase. We test our method on two classical community benchmarks and on computer-generated networks with known community structure. Our approach has three important features: the capacity to detect overlapping communities, the capability to identify communities from an individual point of view, and fine tuning of community detectability with respect to prior knowledge of the data. Finally we discuss how to use a Shannon entropy measure for parameter estimation in complex networks.
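For orientation, the baseline the authors extend, van Dongen's MCL, alternates expansion and inflation on a column-stochastic matrix until it converges to cluster attractors. The sketch below shows that baseline only; the paper's memory factor and information-diffusion additions are not reproduced.

```python
import numpy as np

def mcl(adj, expansion=2, inflation=2.0, iters=100, tol=1e-6):
    """Baseline Markov Cluster algorithm on a symmetric adjacency matrix."""
    m = adj + np.eye(len(adj))                 # add self-loops
    m = m / m.sum(axis=0)                      # make column-stochastic
    for _ in range(iters):
        prev = m
        m = np.linalg.matrix_power(m, expansion)  # expansion: flow spreads
        m = m ** inflation                        # inflation: strong flows win
        m = m / m.sum(axis=0)                     # renormalize columns
        if np.abs(m - prev).max() < tol:
            break
    # nonzero rows of the converged matrix index cluster "attractors"
    return [np.flatnonzero(row > 1e-9) for row in m if row.sum() > 1e-9]
```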
An on-board pedestrian detection and warning system with features of side pedestrian
NASA Astrophysics Data System (ADS)
Cheng, Ruzhong; Zhao, Yong; Wong, ChupChung; Chan, KwokPo; Xu, Jiayao; Wang, Xin'an
2012-01-01
Automotive Active Safety (AAS) is a main branch of intelligent automobile research, and pedestrian detection is a key problem of AAS because it is related to the casualties of most vehicle accidents. For on-board pedestrian detection algorithms, the main problem is balancing efficiency and accuracy so the on-board system remains usable in real scenes, so we propose an on-board pedestrian detection and warning system whose algorithm considers the features of side pedestrians. The system includes two modules, pedestrian detection and warning. Haar features and a cascade of stage classifiers trained by AdaBoost are applied first, and then HOG features and an SVM classifier are used to refine false positives. To make these time-consuming algorithms available for real-time use, a divide-window method together with an operator context scanning (OCS) method is applied to increase efficiency. To merge the velocity information of the vehicle, the distance of the detected pedestrian is also obtained, so the system can judge whether there is a potential danger from the pedestrian in front. With a new dataset captured in an urban environment with side pedestrians on zebra crossings, the embedded system and its algorithm achieve results usable on-board for side pedestrian detection.
Top-attack modeling and automatic target detection using synthetic FLIR scenery
NASA Astrophysics Data System (ADS)
Weber, Bruce A.; Penn, Joseph A.
2004-09-01
A series of experiments has been performed to verify the utility of algorithmic tools for the modeling and analysis of cold-target signatures in synthetic, top-attack, FLIR video sequences. The tools include: MuSES/CREATION for the creation of synthetic imagery with targets, an ARL target detection algorithm to detect embedded synthetic targets in scenes, and an ARL scoring algorithm, using Receiver-Operating-Characteristic (ROC) curve analysis, to evaluate detector performance. Cold-target detection variability was examined as a function of target emissivity, surrounding clutter type, and target placement in non-obscuring clutter locations. Detector metrics were also individually scored so as to characterize the effect of signature/clutter variations. Results show that using these tools, a detailed, physically meaningful target detection analysis is possible and that scenario-specific target detectors may be developed by selective choice and/or weighting of detector metrics. However, developing these tools into a reliable predictive capability will require the extension of these results to the modeling and analysis of a large number of data sets configured for a wide range of target and clutter conditions. Finally, these tools should also be useful for the comparison of competing detection algorithms by providing well-defined and controllable target detection scenarios, as well as for the training and testing of expert human observers.
Automatic Solitary Lung Nodule Detection in Computed Tomography Image Slices
NASA Astrophysics Data System (ADS)
Sentana, I. W. B.; Jawas, N.; Asri, S. A.
2018-01-01
Lung nodules are an early indicator of some lung diseases, including lung cancer. In Computed Tomography (CT) images, a nodule appears as a shape that is brighter than the surrounding lung. This research aims to develop an application that automatically detects lung nodules in CT images. The algorithm consists of several steps: image acquisition and conversion, image binarization, lung segmentation, blob detection, and classification. Image acquisition takes the images slice by slice from the original *.dicom format, and each image slice is then converted into *.tif format. Binarization using Otsu's algorithm then separates the background and foreground of each image slice. After removing the background, the next step is to segment only the lung region, so that nodules can be localized more easily. Otsu's algorithm is used once again to detect nodule blobs in the localized lung area. The final step applies a Support Vector Machine (SVM) to classify the nodules. The application succeeds in detecting near-round nodules within a certain size threshold. The results show drawbacks in the size thresholding and nodule shape handling that need to be addressed in the next part of the research. The algorithm also cannot detect nodules attached to the lung wall or lung channels, since its search depends only on color differences.
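A rough sketch of the two Otsu passes described above, using scikit-image as an assumed stand-in for the original implementation; the size and roundness gates are illustrative, and `lung_mask` is a hypothetical precomputed lung-region mask.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def detect_nodule_blobs(slice_img, lung_mask):
    # First Otsu pass: separate foreground (body) from background.
    body = slice_img > threshold_otsu(slice_img)
    # Restrict to the segmented lung area so nodules are easier to localize.
    lung = slice_img * (body & lung_mask)
    # Second Otsu pass inside the lung: bright blobs are nodule candidates.
    cand = lung > threshold_otsu(lung[lung > 0])
    # Keep near-round, size-gated blobs as candidate nodules.
    return [r for r in regionprops(label(cand))
            if 10 < r.area < 500 and r.eccentricity < 0.8]
```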
Tactical Conflict Detection in Terminal Airspace
NASA Technical Reports Server (NTRS)
Tang, Huabin; Robinson, John E.; Denery, Dallas G.
2010-01-01
Air traffic systems have long relied on automated short-term conflict prediction algorithms to warn controllers of impending conflicts (losses of separation). The complexity of terminal airspace has proven difficult for such systems, as it often leads to excessive false alerts. Thus, the legacy system, called Conflict Alert, which currently provides short-term alerts in both en-route and terminal airspace, is often inhibited or degraded in areas where frequent false alerts occur, even though the alerts are provided only when an aircraft is in dangerous proximity to other aircraft. This research investigates how a minimal level of flight intent information may be used to improve short-term conflict detection in terminal airspace such that it can be used by the controller to maintain legal aircraft separation. The flight intent information includes a site-specific nominal arrival route and inferred altitude clearances in addition to the flight plan that includes the RNAV (Area Navigation) departure route. A new tactical conflict detection algorithm is proposed, which uses a single analytic trajectory, determined by the flight intent and the current state information of the aircraft, and includes a complex set of current, dynamic separation standards for terminal airspace to define losses of separation. The new algorithm is compared with an algorithm that imitates a known en-route algorithm and another that imitates Conflict Alert by analysis of false-alert rate and alert lead time with recent real-world data of arrival and departure operations and a large set of operational error cases from Dallas/Fort Worth TRACON (Terminal Radar Approach Control). The new algorithm yielded a false-alert rate of two per hour and an average alert lead time of 38 seconds.
Ten Years of Cloud Optical and Microphysical Retrievals from MODIS
NASA Technical Reports Server (NTRS)
Platnick, Steven; King, Michael D.; Wind, Galina; Hubanks, Paul; Arnold, G. Thomas; Amarasinghe, Nandana
2010-01-01
The MODIS cloud optical properties algorithm (MOD06/MYD06 for Terra and Aqua MODIS, respectively) has undergone extensive improvements and enhancements since the launch of Terra. These changes have included: improvements in the cloud thermodynamic phase algorithm; substantial changes in the ice cloud light scattering look up tables (LUTs); a clear-sky restoral algorithm for flagging heavy aerosol and sunglint; greatly improved spectral surface albedo maps, including the spectral albedo of snow by ecosystem; inclusion of pixel-level uncertainty estimates for cloud optical thickness, effective radius, and water path derived for three error sources that includes the sensitivity of the retrievals to solar and viewing geometries. To improve overall retrieval quality, we have also implemented cloud edge removal and partly cloudy detection (using MOD35 cloud mask 250m tests), added a supplementary cloud optical thickness and effective radius algorithm over snow and sea ice surfaces and over the ocean, which enables comparison with the "standard" 2.1 μm effective radius retrieval, and added a multi-layer cloud detection algorithm. We will discuss the status of the MOD06 algorithm and show examples of pixel-level (Level-2) cloud retrievals for selected data granules, as well as gridded (Level-3) statistics, notably monthly means and histograms (1D and 2D, with the latter giving correlations between cloud optical thickness and effective radius, and other cloud product pairs).
Overhead longwave infrared hyperspectral material identification using radiometric models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zelinski, M. E.
Material detection algorithms used in hyperspectral data processing are computationally efficient but can produce relatively high numbers of false positives. Material identification performed as a secondary processing step on detected pixels can help separate true and false positives. This paper presents a material identification processing chain for longwave infrared hyperspectral data of solid materials collected from airborne platforms. The algorithms utilize unwhitened radiance data and an iterative algorithm that determines the temperature, humidity, and ozone of the atmospheric profile. Pixel unmixing is done using constrained linear regression and Bayesian Information Criteria for model selection. The resulting product includes an optimal atmospheric profile and full radiance material model that includes material temperature, abundance values, and several fit statistics. A logistic regression method utilizing all model parameters to improve identification is also presented. This paper details the processing chain and provides justification for the algorithms used. Several examples are provided using modeled data at different noise levels.
Detections of Propellers in Saturn's Rings using Machine Learning: Preliminary Results
NASA Astrophysics Data System (ADS)
Gordon, Mitchell K.; Showalter, Mark R.; Odess, Jennifer; Del Villar, Ambi; LaMora, Andy; Paik, Jin; Lakhani, Karim; Sergeev, Rinat; Erickson, Kristen; Galica, Carol; Grayzeck, Edwin; Morgan, Thomas; Knopf, William
2015-11-01
We report on the initial analysis of the output of a tool designed to identify persistent, non-axisymmetric features in the rings of Saturn. This project introduces a new paradigm for scientific software development. The preliminary results include what appear to be new detections of propellers in the rings of Saturn. The Planetary Data System (PDS), working with the NASA Tournament Lab (NTL), Crowd Innovation Lab at Harvard University, and the Topcoder community at Appirio, Inc., under the umbrella “Cassini Rings Challenge”, sponsored a set of competitions employing crowd sourcing and machine learning to develop a tool which could be made available to the community at large. The Challenge was tackled by running a series of separate contests to solve individual tasks prior to the major machine learning challenge. Each contest comprised a set of requirements, a timeline, one or more prizes, and other incentives, and was posted by Appirio to the Topcoder Community. In the case of the machine learning challenge (a “Marathon Challenge” on the Topcoder platform), members competed against each other by submitting solutions that were scored in real time and posted to a public leader-board by a scoring algorithm developed by Appirio for this contest. The current version of the algorithm was run against ~30,000 of the highest resolution Cassini ISS images. That set included 668 images with a total of 786 features previously identified as propellers in the main rings. The tool identified 81% of those previously identified propellers. In a preliminary, close examination of 130 detections identified by the tool, we determined that 11 were previously identified propeller detections, 5 appear to be new detections of known propellers, and 4 appear to be detections of propellers which have not been seen previously. A total of 20 valid detections from 130 candidates implies a relatively high false positive rate, which we hope to reduce by further algorithm development. The machine learning aspect of the algorithm means that as our set of verified detections increases, so does the pool of “ground-truth” data used to train the algorithm for future use.
Hassett, Michael J; Uno, Hajime; Cronin, Angel M; Carroll, Nikki M; Hornbrook, Mark C; Ritzwoller, Debra
2017-12-01
Recurrent cancer is common, costly, and lethal, yet we know little about it in community-based populations. Electronic health records and tumor registries contain vast amounts of data regarding community-based patients, but usually lack recurrence status. Existing algorithms that use structured data to detect recurrence have limitations. We developed algorithms to detect the presence and timing of recurrence after definitive therapy for stages I-III lung and colorectal cancer using 2 data sources that contain a widely available type of structured data (claims or electronic health record encounters) linked to gold-standard recurrence status: Medicare claims linked to the Cancer Care Outcomes Research and Surveillance study, and the Cancer Research Network Virtual Data Warehouse linked to registry data. Twelve potential indicators of recurrence were used to develop separate models for each cancer in each data source. Detection models maximized area under the ROC curve (AUC); timing models minimized average absolute error. Algorithms were compared by cancer type/data source, and contrasted with an existing binary detection rule. Detection model AUCs (>0.92) exceeded existing prediction rules. Timing models yielded absolute prediction errors that were small relative to follow-up time (<15%). Similar covariates were included in all detection and timing algorithms, though differences by cancer type and dataset challenged efforts to create 1 common algorithm for all scenarios. Valid and reliable detection of recurrence using big data is feasible. These tools will enable extensive, novel research on quality, effectiveness, and outcomes for lung and colorectal cancer patients and those who develop recurrence.
NASA Astrophysics Data System (ADS)
Behzad, Mehdi; Ghadami, Amin; Maghsoodi, Ameneh; Michael Hale, Jack
2013-11-01
In this paper, a simple method for detection of multiple edge cracks in Euler-Bernoulli beams having two different types of cracks is presented based on energy equations. Each crack is modeled as a massless rotational spring using Linear Elastic Fracture Mechanics (LEFM) theory, and a relationship among natural frequencies, crack locations and stiffness of equivalent springs is demonstrated. In the procedure, for detection of m cracks in a beam, 3m equations and the natural frequencies of the healthy and cracked beam in two different directions are needed as input to the algorithm. The main accomplishment of the presented algorithm is the capability to detect the location, severity and type of each crack in a multi-cracked beam. Concise and simple calculations along with accuracy are other advantages of this method. A number of numerical examples for cantilever beams with one and two cracks are presented to validate the method.
Structural Damage Detection Using Changes in Natural Frequencies: Theory and Applications
NASA Astrophysics Data System (ADS)
He, K.; Zhu, W. D.
2011-07-01
A vibration-based method that uses changes in natural frequencies of a structure to detect damage has advantages over conventional nondestructive tests in detecting various types of damage, including loosening of bolted joints, using minimum measurement data. Two major challenges associated with applications of the vibration-based damage detection method to engineering structures are addressed: accurate modeling of structures and the development of a robust inverse algorithm to detect damage, which are defined as the forward and inverse problems, respectively. To resolve the forward problem, new physics-based finite element modeling techniques are developed for fillets in thin-walled beams and for bolted joints, so that complex structures can be accurately modeled with a reasonable model size. To resolve the inverse problem, a logistical function transformation is introduced to convert the constrained optimization problem to an unconstrained one, and a robust iterative algorithm using a trust-region method, called the Levenberg-Marquardt method, is developed to accurately detect the locations and extent of damage. The new methodology can ensure global convergence of the iterative algorithm in solving under-determined system equations and deal with damage detection problems with relatively large modeling error and measurement noise. The vibration-based damage detection method is applied to various structures including lightning masts, a space frame structure and one of its components, and a pipeline. The exact locations and extent of damage can be detected in the numerical simulation where there is no modeling error and measurement noise. The locations and extent of damage can be successfully detected in experimental damage detection.
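The inverse problem described above, fitting damage parameters so that model natural frequencies match measured ones under physical constraints, can be sketched with SciPy's trust-region least-squares solver in place of the authors' Levenberg-Marquardt implementation; the forward model, baseline frequencies, and sensitivity matrix below are placeholders, not the paper's finite element model.

```python
import numpy as np
from scipy.optimize import least_squares

baseline = np.array([12.1, 33.4, 65.2, 107.8])       # made-up healthy frequencies (Hz)
S = 0.1 * np.random.default_rng(0).random((4, 4))    # hypothetical modal sensitivities

def model_frequencies(damage):
    """Placeholder forward model: frequencies drop with element damage."""
    return baseline * (1.0 - S @ damage)

def detect_damage(measured_freqs):
    resid = lambda d: model_frequencies(d) - measured_freqs
    # Bounds keep stiffness-reduction ratios in [0, 1]: the role played in
    # the paper by the logistic transformation for its unconstrained solver.
    sol = least_squares(resid, x0=np.full(4, 0.01), bounds=(0.0, 1.0), method="trf")
    return sol.x  # estimated damage extent per element
```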
Wang, San-Yuan; Kuo, Ching-Hua; Tseng, Yufeng J
2015-03-03
Able to detect known and unknown metabolites, untargeted metabolomics has shown great potential in identifying novel biomarkers. However, elucidating all possible liquid chromatography/time-of-flight mass spectrometry (LC/TOF-MS) ion signals in a complex biological sample remains challenging, since many ions are not the products of metabolites. Methods of reducing ions not related to metabolites, or of directly detecting metabolite-related (pure) ions, are important. In this work, we describe PITracer, a novel algorithm that accurately detects the pure ions of an LC/TOF-MS profile to extract pure ion chromatograms and detect chromatographic peaks. PITracer estimates the relative mass difference tolerance of ions and calibrates the mass over charge (m/z) values for peak detection algorithms, with an additional option for further mass correction with respect to a user-specified metabolite. PITracer was evaluated using two data sets containing 373 human metabolite standards, including 5 saturated standards considered to be split peaks resulting from large m/z fluctuations, and 12 urine samples spiked with 50 forensic drugs of varying concentrations. Analysis of these data sets shows that PITracer outperformed an existing state-of-the-art algorithm, extracted the pure ion chromatograms of the 5 saturated standards without generating split peaks, and detected the forensic drugs with high recall, precision, and F-score and small mass error.
A two-stage algorithm for Clostridium difficile including PCR: can we replace the toxin EIA?
Orendi, J M; Monnery, D J; Manzoor, S; Hawkey, P M
2012-01-01
A two-step, three-test algorithm for Clostridium difficile infection (CDI) was reviewed. Stool samples were tested by enzyme immunoassays for C. difficile common antigen glutamate dehydrogenase (G) and toxin A/B (T). Samples with discordant results were tested by polymerase chain reaction detecting the toxin B gene (P). The algorithm quickly identified patients with detectable toxin A/B, whereas a large group of patients excreting toxigenic C. difficile but with toxin A/B production below the detection level (G(+)T(-)P(+)) was identified separately. The average white blood cell count in patients with a G(+)T(+) result was higher than in those with a G(+)T(-)P(+) result.
Machine Learning for Biological Trajectory Classification Applications
NASA Technical Reports Server (NTRS)
Sbalzarini, Ivo F.; Theriot, Julie; Koumoutsakos, Petros
2002-01-01
Machine-learning techniques, including clustering algorithms, support vector machines and hidden Markov models, are applied to the task of classifying trajectories of moving keratocyte cells. The different algorithms are compared to each other as well as to expert and non-expert test persons, using concepts from signal-detection theory. The algorithms performed very well as compared to humans, suggesting a robust tool for trajectory classification in biological applications.
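One of the compared approaches, a support vector machine on per-trajectory summary features, might look like the sketch below; the feature set is illustrative rather than the paper's, and `trajectories` and `labels` are assumed inputs.

```python
import numpy as np
from sklearn.svm import SVC

def features(traj):
    """traj: (n, 2) array of positions. Returns simple motion descriptors."""
    steps = np.diff(traj, axis=0)
    speed = np.linalg.norm(steps, axis=1)
    net = np.linalg.norm(traj[-1] - traj[0])
    return [speed.mean(), speed.std(), net / (speed.sum() + 1e-9)]  # straightness

# `trajectories` (list of (n, 2) arrays) and `labels` are assumed given.
X = np.array([features(t) for t in trajectories])
clf = SVC(kernel="rbf").fit(X, labels)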
Signal Partitioning Algorithm for Highly Efficient Gaussian Mixture Modeling in Mass Spectrometry
Polanski, Andrzej; Marczyk, Michal; Pietrowska, Monika; Widlak, Piotr; Polanska, Joanna
2015-01-01
Mixture modeling of mass spectra is an approach with many potential applications including peak detection and quantification, smoothing, de-noising, feature extraction and spectral signal compression. However, existing algorithms do not allow for automated analyses of whole spectra. Therefore, despite highlighting potential advantages of mixture modeling of mass spectra of peptide/protein mixtures and some preliminary results presented in several papers, the mixture modeling approach has so far not been developed to the stage enabling systematic comparisons with existing software packages for proteomic mass spectra analyses. In this paper we present an efficient algorithm for Gaussian mixture modeling of proteomic mass spectra of different types (e.g., MALDI-ToF profiling, MALDI-IMS). The main idea is automated partitioning of the protein mass spectral signal into fragments. The obtained fragments are separately decomposed into Gaussian mixture models. The parameters of the mixture models of fragments are then aggregated to form the mixture model of the whole spectrum. We compare the elaborated algorithm to existing algorithms for peak detection and we demonstrate improvements of peak detection efficiency obtained by using Gaussian mixture modeling. We also show applications of the elaborated algorithm to real proteomic datasets of low and high resolution. PMID:26230717
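One simple way to feed a spectral fragment into a point-based Gaussian-mixture fitter is to sample m/z values in proportion to intensity; the sketch below uses scikit-learn's EM implementation as a stand-in for the paper's algorithm, and the sampling trick is an assumption, not the authors' method.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_fragment(mz, intensity, n_peaks, n_draws=20000, seed=0):
    """Fit a Gaussian mixture to one spectral fragment (mz, intensity arrays)."""
    rng = np.random.default_rng(seed)
    p = intensity / intensity.sum()
    pts = rng.choice(mz, size=n_draws, p=p).reshape(-1, 1)  # intensity-weighted draws
    gmm = GaussianMixture(n_components=n_peaks, random_state=seed).fit(pts)
    # Component means/widths/weights approximate peak positions, widths, areas.
    return gmm.means_.ravel(), np.sqrt(gmm.covariances_).ravel(), gmm.weights_
```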
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enghauser, Michael
2015-02-01
The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
NASA Astrophysics Data System (ADS)
Levesque, M.
Artificial satellites, and particularly space junk, drift continuously from their known orbits. In the surveillance-of-space context, they must be observed frequently to ensure that the corresponding orbital parameter database entries are up-to-date. Autonomous ground-based optical systems are periodically tasked to observe these objects, calculate the difference between their predicted and real positions, and update object orbital parameters. The real satellite positions are provided by the detection of satellite streaks in astronomical images specifically acquired for this purpose. This paper presents the image processing techniques used to detect and extract the satellite positions. The methodology consists of several processing steps: image background estimation and removal, star detection and removal, an iterative matched filter for streak detection, and finally false-alarm rejection algorithms. This detection methodology is able to detect very faint objects. Simulated data were used to evaluate the methodology's performance and determine the sensitivity limits within which the algorithm can perform detection without false alarms, which is essential to avoid corruption of the orbital parameter database.
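The matched-filter step can be illustrated by correlating the background- and star-subtracted image with oriented line kernels and keeping the best response per pixel; the kernel construction and parameters below are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import correlate

def line_kernel(length, angle_deg):
    """Normalized straight-line kernel at the given orientation."""
    k = np.zeros((length, length))
    c = (length - 1) / 2
    t = np.radians(angle_deg)
    for r in np.linspace(-c, c, 4 * length):
        k[int(round(c + r * np.sin(t))), int(round(c + r * np.cos(t)))] = 1
    return k / k.sum()

def streak_response(residual_image, length=21, n_angles=18):
    """Best matched-filter response over a bank of streak orientations."""
    best = np.full(residual_image.shape, -np.inf)
    for a in np.linspace(0, 180, n_angles, endpoint=False):
        best = np.maximum(best, correlate(residual_image, line_kernel(length, a)))
    return best  # threshold this map to obtain candidate streak pixels
```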
Using SPOT–5 HRG Data in Panchromatic Mode for Operational Detection of Small Ships in Tropical Area
Corbane, Christina; Marre, Fabrice; Petit, Michel
2008-01-01
Nowadays, there is growing interest in applications of space remote sensing systems for maritime surveillance, which includes, among others, traffic surveillance, maritime security, illegal fisheries survey, oil discharge and sea pollution monitoring. Within the framework of several French and European projects, an algorithm for automatic ship detection from SPOT–5 HRG data was developed to complement existing fishery control measures, in particular the Vessel Monitoring System. The algorithm focused on feature–based analysis of satellite imagery. Genetic algorithms and Neural Networks were used to deal with the feature–borne information. Based on the described approach, a first prototype was designed to classify small targets such as shrimp boats and tested on a panchromatic SPOT–5, 5–m resolution product, taking into account the environmental and fishing context. The ability to detect shrimp boats with satisfactory detection rates is an indicator of the robustness of the algorithm. Still, the benchmark revealed problems related to increased false alarm rates on particular types of images with a high percentage of cloud cover and a sea-cluttered background. PMID:27879859
An Adaptive and Time-Efficient ECG R-Peak Detection Algorithm.
Qin, Qin; Li, Jianqing; Yue, Yinggao; Liu, Chengyu
2017-01-01
R-peak detection is crucial in electrocardiogram (ECG) signal analysis. This study proposed an adaptive and time-efficient R-peak detection algorithm for ECG processing. First, wavelet multiresolution analysis was applied to enhance the ECG signal representation. Then, ECG was mirrored to convert large negative R-peaks to positive ones. After that, local maximums were calculated by the first-order forward differential approach and were truncated by the amplitude and time interval thresholds to locate the R-peaks. The algorithm performances, including detection accuracy and time consumption, were tested on the MIT-BIH arrhythmia database and the QT database. Experimental results showed that the proposed algorithm achieved mean sensitivity of 99.39%, positive predictivity of 99.49%, and accuracy of 98.89% on the MIT-BIH arrhythmia database and 99.83%, 99.90%, and 99.73%, respectively, on the QT database. By processing one ECG record, the mean time consumptions were 0.872 s and 0.763 s for the MIT-BIH arrhythmia database and QT database, respectively, yielding 30.6% and 32.9% of time reduction compared to the traditional Pan-Tompkins method.
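The peak-localization stage described above (mirroring, first-order forward difference, amplitude and time-interval thresholds) reduces to a few lines; the wavelet-enhancement step is omitted and the threshold values below are illustrative, not the paper's tuned settings.

```python
import numpy as np

def r_peaks(ecg, fs, amp_frac=0.6, refractory_s=0.2):
    """Locate R-peaks in a preprocessed ECG trace sampled at fs Hz."""
    x = -ecg if abs(ecg.min()) > abs(ecg.max()) else ecg  # mirror negative R-peaks
    d = np.diff(x)
    # Local maxima: forward difference changes sign from + to -.
    cand = np.flatnonzero((d[:-1] > 0) & (d[1:] <= 0)) + 1
    cand = cand[x[cand] > amp_frac * x[cand].max()]       # amplitude threshold
    peaks, last = [], -np.inf
    for i in cand:                                        # time-interval threshold
        if (i - last) / fs >= refractory_s:
            peaks.append(i); last = i
        elif x[i] > x[peaks[-1]]:
            peaks[-1] = i; last = i                       # keep the taller peak
    return np.array(peaks)
```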
Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang
2014-01-01
We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distributions to deal with co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect peaks with lower false discovery rates than the existing algorithms, and that a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models. PMID:25264474
NASA Astrophysics Data System (ADS)
Reichman, Daniël.; Collins, Leslie M.; Malof, Jordan M.
2018-04-01
This work focuses on the development of automatic buried threat detection (BTD) algorithms using ground penetrating radar (GPR) data. Buried threats tend to exhibit unique characteristics in GPR imagery, such as high energy hyperbolic shapes, which can be leveraged for detection. Many recent BTD algorithms are supervised, and therefore they require training with exemplars of GPR data collected over non-threat locations and threat locations, respectively. Frequently, data from non-threat GPR examples will exhibit high energy hyperbolic patterns, similar to those observed from a buried threat. Is it still useful, therefore, to include such examples during algorithm training and to encourage an algorithm to label such data as a non-threat? Similarly, some true buried threat examples exhibit very little distinctive threat-like patterning. We investigate whether it is beneficial to treat such GPR data examples as mislabeled, and either (i) relabel them, or (ii) remove them from training. We study this problem using two algorithms to automatically identify mislabeled examples, if they are present, and examine the impact of removing or relabeling them for training. We conduct these experiments on a large collection of GPR data with several state-of-the-art GPR-based BTD algorithms.
Multilayer Cloud Detection with the MODIS Near-Infrared Water Vapor Absorption Band
NASA Technical Reports Server (NTRS)
Wind, Galina; Platnick, Steven; King, Michael D.; Hubanks, Paul A.; Pavolonis, Michael J.; Heidinger, Andrew K.; Yang, Ping; Baum, Bryan A.
2009-01-01
Data Collection 5 processing for the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the NASA Earth Observing System EOS Terra and Aqua spacecraft includes an algorithm for detecting multilayered clouds in daytime. The main objective of this algorithm is to detect multilayered cloud scenes, specifically optically thin ice cloud overlying a lower-level water cloud, that present difficulties for retrieving cloud effective radius using single-layer plane-parallel cloud models. The algorithm uses the MODIS 0.94 micron water vapor band along with CO2 bands to obtain two above-cloud precipitable water retrievals, the difference of which, in conjunction with additional tests, provides a map of where multilayered clouds might potentially exist. The presence of a multilayered cloud results in a large difference in retrievals of above-cloud properties between the CO2 and the 0.94 micron methods. In this paper the MODIS multilayered cloud algorithm is described, results of using the algorithm over example scenes are shown, and global statistics for multilayered clouds as observed by MODIS are discussed. A theoretical study of the algorithm behavior for simulated multilayered clouds is also given. Results are compared to two other comparable passive imager methods. A set of standard cloudy atmospheric profiles developed during the course of this investigation is also presented. The results lead to the conclusion that the MODIS multilayer cloud detection algorithm has some skill in identifying multilayered clouds with different thermodynamic phases.
Matched filter based detection of floating mines in IR spacetime
NASA Astrophysics Data System (ADS)
Borghgraef, Alexander; Lapierre, Fabian; Philips, Wilfried; Acheroy, Marc
2009-09-01
Ship-based automatic detection of small floating objects on an agitated sea surface remains a hard problem. Our main concern is the detection of floating mines, which proved a real threat to shipping in confined waterways during the first Gulf War, but applications include salvaging, search-and-rescue and perimeter or harbour defense. IR video was chosen for its day-and-night imaging capability and its availability on military vessels. Detection is difficult because a rough sea is seen as a dynamic background of moving objects similar in size, shape and temperature to a floating mine. We do, however, find a distinguishing characteristic in the target's periodic motion, which differs from that of the propagating surface waves composing the background. Classical detection and tracking approaches give poor results when applied to this problem. While background-detection algorithms assume a quasi-static background, the sea surface is actually very dynamic, causing this category of algorithms to fail. Kalman or particle filter algorithms, on the other hand, which stress temporal coherence, suffer from tracking loss due to occlusions and the high noise level of the image. We propose an innovative approach that uses the periodicity of the object's movement and thus its temporal coherence. The principle is to consider the video data as a spacetime volume, similar to a hyperspectral data cube, by replacing the spectral axis with a temporal axis. We can then apply algorithms developed for hyperspectral detection problems to the detection of small floating objects. We treat the detection problem using multilinear algebra, designing a number of finite impulse response (FIR) filters that maximize the target response. The algorithm was applied to test footage of practice mines in the infrared.
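The spacetime idea can be made concrete by scoring each pixel's dominant temporal oscillation over the frame stack; this band-energy measure is a crude stand-in for the paper's FIR filter bank, and the bobbing-frequency band below is an arbitrary illustration.

```python
import numpy as np

def periodicity_map(cube, fs, fmin=0.3, fmax=2.0):
    """cube: (t, h, w) IR frames; returns per-pixel band-energy ratio."""
    t = cube.shape[0]
    # Power spectrum along the temporal axis, after removing the mean frame.
    spec = np.abs(np.fft.rfft(cube - cube.mean(axis=0), axis=0)) ** 2
    freqs = np.fft.rfftfreq(t, d=1.0 / fs)
    band = (freqs >= fmin) & (freqs <= fmax)   # expected bobbing frequencies
    # Fraction of non-DC temporal energy inside the target band.
    return spec[band].sum(axis=0) / (spec[1:].sum(axis=0) + 1e-12)
```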
3D Buried Utility Location Using A Marching-Cross-Section Algorithm for Multi-Sensor Data Fusion
Dou, Qingxu; Wei, Lijun; Magee, Derek R.; Atkins, Phil R.; Chapman, David N.; Curioni, Giulio; Goddard, Kevin F.; Hayati, Farzad; Jenks, Hugo; Metje, Nicole; Muggleton, Jennifer; Pennock, Steve R.; Rustighi, Emiliano; Swingler, Steven G.; Rogers, Christopher D. F.; Cohn, Anthony G.
2016-01-01
We address the problem of accurately locating buried utility segments by fusing data from multiple sensors using a novel Marching-Cross-Section (MCS) algorithm. Five types of sensors are used in this work: Ground Penetrating Radar (GPR), Passive Magnetic Fields (PMF), Magnetic Gradiometer (MG), Low Frequency Electromagnetic Fields (LFEM) and Vibro-Acoustics (VA). As part of the MCS algorithm, a novel formulation of the extended Kalman Filter (EKF) is proposed for marching existing utility tracks from a scan cross-section (scs) to the next one; novel rules for initializing utilities based on hypothesized detections on the first scs and for associating predicted utility tracks with hypothesized detections in the following scss are introduced. Algorithms are proposed for generating virtual scan lines based on given hypothesized detections when different sensors do not share common scan lines, or when only the coordinates of the hypothesized detections are provided without any information of the actual survey scan lines. The performance of the proposed system is evaluated with both synthetic data and real data. The experimental results in this work demonstrate that the proposed MCS algorithm can locate multiple buried utility segments simultaneously, including both straight and curved utilities, and can separate intersecting segments. By using the probabilities of a hypothesized detection being a pipe or a cable together with its 3D coordinates, the MCS algorithm is able to discriminate a pipe and a cable close to each other. The MCS algorithm can be used for both post- and on-site processing. When it is used on site, the detected tracks on the current scs can help to determine the location and direction of the next scan line. The proposed “multi-utility multi-sensor” system has no limit to the number of buried utilities or the number of sensors, and the more sensor data used, the more buried utility segments can be detected with more accurate location and orientation. PMID:27827836
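The marching step can be sketched as a Kalman predict/update from one scan cross-section to the next; the state vector below (cross-track offset, depth, and their gradients along the marching axis) is a plausible illustration, not the paper's exact EKF formulation, and the model here is linear, so the "extended" part reduces to a standard Kalman step.

```python
import numpy as np

def ekf_march(x, P, z, ds, R, q=1e-3):
    """One marching step: predict track to the next scs, update with detection z."""
    # state x = [offset, d_offset/ds, depth, d_depth/ds]
    F = np.array([[1, ds, 0, 0],
                  [0, 1,  0, 0],
                  [0, 0,  1, ds],
                  [0, 0,  0, 1]], dtype=float)
    H = np.array([[1.0, 0, 0, 0],
                  [0, 0, 1.0, 0]])             # detection measures offset and depth
    x, P = F @ x, F @ P @ F.T + q * np.eye(4)  # predict to next cross-section
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (z - H @ x)                    # update with associated detection
    P = (np.eye(4) - K @ H) @ P
    return x, P
```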
Detection of insect damage in almonds
NASA Astrophysics Data System (ADS)
Kim, Soowon; Schatzki, Thomas F.
1999-01-01
Pinhole insect damage in natural almonds is very difficult to detect on-line. Further, evidence exists relating insect damage to aflatoxin contamination. Hence, for quality and health reasons, methods to detect and remove such damaged nuts are of great importance. In this study, we explored the possibility of using x-ray imaging to detect pinhole damage in almonds by insects. X-ray film images of about 2000 almonds and x-ray linescan images of 522 pinhole-damaged almonds were obtained. The pinhole-damaged region appears slightly darker than the non-damaged region in x-ray negative images. A machine recognition algorithm was developed to detect these darker regions. The algorithm used first-order and second-order information to identify the damaged region. To reduce the possibility of false positives due to the germ region in high resolution images, germ detection and removal routines were also included. With film images, the algorithm achieved approximately an 81 percent correct recognition rate with only 1 percent false positives, whereas with linescan images it correctly recognized 65 percent of pinholes with about 9 percent false positives. The algorithm was very fast and efficient, requiring only minimal computation time. If implemented on-line, the theoretical throughput of this recognition system would be 66 nuts/second.
Wong, Carlos K H; Siu, Shing-Chung; Wan, Eric Y F; Jiao, Fang-Fang; Yu, Esther Y T; Fung, Colman S C; Wong, Ka-Wai; Leung, Angela Y M; Lam, Cindy L K
2016-05-01
The aim of the present study was to develop a simple nomogram that can be used to predict the risk of diabetes mellitus (DM) in asymptomatic non-diabetic subjects, based on non-laboratory- and laboratory-based risk algorithms. Anthropometric data, plasma fasting glucose, full lipid profile, exercise habits, and family history of DM were collected from Chinese non-diabetic subjects aged 18-70 years. Logistic regression analysis was performed on a random sample of 2518 subjects to construct non-laboratory- and laboratory-based risk assessment algorithms for detection of undiagnosed DM; both algorithms were validated on data of the remaining sample (n = 839). The Hosmer-Lemeshow test and area under the receiver operating characteristic (ROC) curve (AUC) were used to assess the calibration and discrimination of the DM risk algorithms. Of 3357 subjects recruited, 271 (8.1%) had undiagnosed DM, defined by fasting glucose ≥7.0 mmol/L or 2-h post-load plasma glucose ≥11.1 mmol/L after an oral glucose tolerance test. The non-laboratory-based risk algorithm, with scores ranging from 0 to 33, included age, body mass index, family history of DM, regular exercise, and uncontrolled blood pressure; the laboratory-based risk algorithm, with scores ranging from 0 to 37, added triglyceride level to the risk factors. Both algorithms demonstrated acceptable calibration (Hosmer-Lemeshow test: P = 0.229 and P = 0.483) and discrimination (AUC 0.709 and 0.711) for detection of undiagnosed DM. A simple-to-use nomogram for detecting undiagnosed DM has been developed using validated non-laboratory-based and laboratory-based risk algorithms.
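To illustrate how a points-based score of this kind can be derived from a fitted logistic model, the sketch below scales each coefficient-times-range contribution to integer points; the variable names and coefficients are placeholders, not the study's fitted values.

```python
# Hypothetical logistic-regression coefficients and covariate ranges.
coef = {"age_decade": 0.30, "bmi_unit": 0.08, "family_history": 0.85,
        "no_regular_exercise": 0.40, "uncontrolled_bp": 0.55}
ranges = {"age_decade": 5, "bmi_unit": 25, "family_history": 1,
          "no_regular_exercise": 1, "uncontrolled_bp": 1}

# Points per factor: each maximum contribution scaled against the largest one,
# so the factor with the biggest effect anchors the top of the scale.
max_contrib = max(coef[k] * ranges[k] for k in coef)
points = {k: round(10 * coef[k] * ranges[k] / max_contrib) for k in coef}
print(points)  # per-risk-factor maximum points, summed per subject for the score
```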
Novel trace chemical detection algorithms: a comparative study
NASA Astrophysics Data System (ADS)
Raz, Gil; Murphy, Cara; Georgan, Chelsea; Greenwood, Ross; Prasanth, R. K.; Myers, Travis; Goyal, Anish; Kelley, David; Wood, Derek; Kotidis, Petros
2017-05-01
Algorithms for standoff detection and estimation of trace chemicals in hyperspectral images in the IR band are a key component for a variety of applications relevant to law-enforcement and the intelligence communities. Performance of these methods is impacted by the spectral signature variability due to presence of contaminants, surface roughness, nonlinear dependence on abundances as well as operational limitations on the compute platforms. In this work we provide a comparative performance and complexity analysis of several classes of algorithms as a function of noise levels, error distribution, scene complexity, and spatial degrees of freedom. The algorithm classes we analyze and test include adaptive cosine estimator (ACE and modifications to it), compressive/sparse methods, Bayesian estimation, and machine learning. We explicitly call out the conditions under which each algorithm class is optimal or near optimal as well as their built-in limitations and failure modes.
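The adaptive cosine estimator named above has a compact closed form; a NumPy sketch follows, where the background mean and covariance would normally be estimated from the scene and `s` is the target chemical's spectral signature.

```python
import numpy as np

def ace_scores(X, s, mu, cov):
    """X: (n_pixels, n_bands) spectra; returns the ACE statistic per pixel."""
    Ci = np.linalg.inv(cov)
    Xc, sc = X - mu, s - mu                   # background-centered data and target
    num = (Xc @ Ci @ sc) ** 2                 # squared whitened projection on target
    den = (sc @ Ci @ sc) * np.einsum("ij,jk,ik->i", Xc, Ci, Xc)
    return num / den                          # cos^2 angle in whitened space
```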
Explosive Detection in Aviation Applications Using CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martz, H E; Crawford, C R
2011-02-15
CT scanners are deployed world-wide to detect explosives in checked and carry-on baggage. Though very similar to single- and dual-energy multi-slice CT scanners used today in medical imaging, some recently developed explosives detection scanners employ multiple sources and detector arrays to eliminate mechanical rotation of a gantry, photon counting detectors for spectral imaging, and a limited number of views to reduce cost. For each bag scanned, the resulting reconstructed images are first processed by automated threat recognition algorithms to screen for explosives and other threats. Human operators review the images only when these automated algorithms report the presence of possible threats. The US Department of Homeland Security (DHS) has requirements for future scanners that include dealing with a larger number of threats, higher probability of detection, lower false alarm rates and lower operating costs. One tactic that DHS is pursuing to achieve these requirements is to augment the capabilities of the established security vendors with third-party algorithm developers. A third-party in this context refers to academics and companies other than the established vendors. DHS is particularly interested in exploring the model that has been used very successfully by the medical imaging industry, in which university researchers develop algorithms that are eventually deployed in commercial medical imaging equipment. The purpose of this paper is to discuss opportunities for third-parties to develop advanced reconstruction and threat detection algorithms.
Friedman, Lee; Rigas, Ioannis; Abdulin, Evgeny; Komogortsev, Oleg V
2018-05-15
Nyström and Holmqvist have published a method for the classification of eye movements during reading (ONH) (Nyström & Holmqvist, 2010). When we applied this algorithm to our data, the results were not satisfactory, so we modified the algorithm (now the MNH) to better classify our data. The changes included: (1) reducing the amount of signal filtering, (2) excluding a new type of noise, (3) removing several adaptive thresholds and replacing them with fixed thresholds, (4) changing the way that the start and end of each saccade was determined, (5) employing a new algorithm for detecting PSOs, and (6) allowing a fixation period to either begin or end with noise. A new method for the evaluation of classification algorithms is presented. It was designed to provide comprehensive feedback to an algorithm developer, in a time-efficient manner, about the types and numbers of classification errors that an algorithm produces. This evaluation was conducted by three expert raters independently, across 20 randomly chosen recordings, each classified by both algorithms. The MNH made many fewer errors in determining when saccades start and end, and it also detected some fixations and saccades that the ONH did not. The MNH fails to detect very small saccades. We also evaluated two additional algorithms: the EyeLink Parser and a more current, machine-learning-based algorithm. The EyeLink Parser tended to find more saccades that ended too early than did the other methods, and we found numerous problems with the output of the machine-learning-based algorithm.
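As a sketch of the fixed-threshold idea the MNH adopts in place of adaptive thresholds, here is a minimal velocity-threshold saccade marker; the 30 deg/s value is a common choice in the literature, not the paper's setting:

```python
import numpy as np

def saccade_samples(gaze_deg, fs, vel_thresh=30.0):
    """Mark samples whose angular velocity exceeds a fixed threshold (deg/s)."""
    vel = np.abs(np.gradient(gaze_deg)) * fs   # velocity from position samples
    return vel > vel_thresh

fs = 1000.0                                    # 1 kHz eye tracker
t = np.arange(0, 1, 1 / fs)
gaze = np.where(t < 0.5, 0.0, 5.0)             # idealized 5-degree gaze shift
gaze = np.convolve(gaze, np.ones(20) / 20, mode="same")  # smooth the step
print(saccade_samples(gaze, fs).sum(), "saccadic samples")
```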
Meyer, N; McMenamin, J; Robertson, C; Donaghy, M; Allardice, G; Cooper, D
2008-07-01
In 18 weeks, Health Protection Scotland (HPS) deployed a syndromic surveillance system for early detection of natural or intentional disease outbreaks during the G8 Summit 2005 at Gleneagles, Scotland. The system integrated clinical and non-clinical datasets. Clinical datasets included Accident & Emergency (A&E) syndromes and General Practice (GP) codes grouped into syndromes. Non-clinical data included telephone calls to a nurse helpline, laboratory test orders, and hotel staff absenteeism. A cumulative sum-based detection algorithm and a log-linear regression model identified signals in the data. The system had a fax-based track for real-time identification of unusual presentations. Ninety-five signals were triggered by the detection algorithms and four forms were faxed to HPS. Thirteen signals were investigated. The system successfully complemented a traditional surveillance system in identifying a small cluster of gastroenteritis among the police force and triggered interventions to prevent further cases.
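A minimal sketch of the cumulative-sum idea behind such a detection algorithm, applied to a toy daily-count series (the baseline, reference value k, and decision limit h are invented):

```python
def cusum_alarms(counts, baseline, k=0.5, h=5.0):
    """One-sided CUSUM: signal when the cumulative excess over
    (baseline + k) crosses the decision limit h; reset after each signal."""
    s, alarms = 0.0, []
    for day, c in enumerate(counts):
        s = max(0.0, s + (c - baseline - k))
        if s > h:
            alarms.append(day)
            s = 0.0
    return alarms

daily_calls = [9, 11, 10, 12, 10, 18, 21, 19, 11, 10]   # toy helpline counts
print(cusum_alarms(daily_calls, baseline=10))            # flags the cluster
```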
Ducie, Jennifer A; Eriksson, Ane Gerda Zahl; Ali, Narisha; McGree, Michaela E; Weaver, Amy L; Bogani, Giorgio; Cliby, William A; Dowdy, Sean C; Bakkum-Gamez, Jamie N; Soslow, Robert A; Keeney, Gary L; Abu-Rustum, Nadeem R; Mariani, Andrea; Leitao, Mario M
2017-12-01
To determine if a sentinel lymph node (SLN) mapping algorithm will detect metastatic nodal disease in patients with intermediate-/high-risk endometrial carcinoma. Patients were identified and surgically staged at two collaborating institutions. The historical cohort (2004-2008) at one institution included patients undergoing complete pelvic and paraaortic lymphadenectomy to the renal veins (LND cohort). At the second institution an SLN mapping algorithm, including pathologic ultra-staging, was performed (2006-2013) (SLN cohort). Intermediate-risk was defined as endometrioid histology (any grade) with ≥50% myometrial invasion; high-risk as serous or clear cell histology (any myometrial invasion). Patients with gross peritoneal disease were excluded. Isolated tumor cells, micro-metastases, and macro-metastases were considered node-positive. We identified 210 patients in the LND cohort and 202 in the SLN cohort. Nodal assessment was performed for most patients. In the intermediate-risk group, stage IIIC disease was diagnosed in 30/107 (28.0%) (LND) and 29/82 (35.4%) (SLN) (P=0.28). In the high-risk group, stage IIIC disease was diagnosed in 20/103 (19.4%) (LND) and 26 (21.7%) (SLN) (P=0.68). Paraaortic lymph node (LN) assessment was performed significantly more often in the intermediate-/high-risk groups in the LND cohort (P<0.001). In the intermediate-risk group, paraaortic LN metastases were detected in 20/96 (20.8%) (LND) vs. 3/28 (10.7%) (SLN) (P=0.23). In the high-risk group, paraaortic LN metastases were detected in 13/82 (15.9%) (LND) and 10/56 (17.9%) (SLN) (P=0.76). The SLN mapping algorithm provides similar detection rates of stage IIIC endometrial cancer and does not compromise overall detection compared to standard LND. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Jung, Sungmo; Kim, Jong Hyun; Cagalaban, Giovanni; Lim, Ji-Hoon; Kim, Seoksoo
Recently, botnet-based cyber attacks, including spam mail and DDoS attacks, have sharply increased, posing a fatal threat to Internet services. At present, antivirus businesses make it a top priority to detect malicious code in the shortest time possible (Lv.2), based on the graph showing the relation between the spread of malicious code and time; this allows detection only after malicious code occurs. Despite early detection, however, it is not possible to prevent malicious code from occurring. Thus, we have developed an algorithm that can detect precursor symptoms at Lv.1 to prevent a cyber attack that uses the evasion method of an 'executing-environment-aware attack', by analyzing system behaviors and monitoring memory.
A new approach to optic disc detection in human retinal images using the firefly algorithm.
Rahebi, Javad; Hardalaç, Fırat
2016-03-01
There are various methods and algorithms to detect the optic disc in retinal images. In recent years, much attention has been given to the use of intelligent algorithms. In this paper, we present a new automated method of optic disc detection in human retinal images using the firefly algorithm. The firefly algorithm is an emerging intelligent algorithm inspired by the social behavior of fireflies. The population in this algorithm consists of fireflies, each of which has a specific rate of lighting or fitness. In this method, the insects are compared two by two, and the less attractive insects move toward the more attractive insects. Finally, one of the insects is selected as the most attractive, and this insect presents the optimum response to the problem in question. Here, we used the light intensity of the retinal image pixels instead of firefly lightings. The movement of these insects due to local fluctuations produces different light intensity values in the images. Because the optic disc is the brightest area in the retinal images, all of the insects move toward the brightest area and thus specify the location of the optic disc in the image. The results of implementation show that the proposed algorithm achieves an accuracy rate of 100% on the DRIVE dataset, 95% on the STARE dataset, and 94.38% on the DiaRetDB1 dataset, revealing the high capability and accuracy of the proposed algorithm in detecting the optic disc from retinal images. The average time required to detect the optic disc is 2.13 s for the DRIVE dataset, 2.81 s for the STARE dataset, and 3.52 s for the DiaRetDB1 dataset.
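A minimal sketch of the firefly search for the brightest image region as described above, run on a synthetic image (all parameter values are illustrative, not the paper's):

```python
import numpy as np

def firefly_brightest(image, n=20, iters=50, beta=0.5, alpha=0.3, seed=0):
    """Fireflies move toward brighter fireflies; return the brightest
    final position as the estimated optic-disc center (row, col)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    pos = rng.uniform([0, 0], [h - 1, w - 1], size=(n, 2))
    for _ in range(iters):
        # brightness sampled at the start of each iteration
        light = image[pos[:, 0].astype(int), pos[:, 1].astype(int)]
        for i in range(n):
            for j in range(n):
                if light[j] > light[i]:    # dimmer moves toward brighter
                    pos[i] += beta * (pos[j] - pos[i]) \
                              + alpha * rng.normal(size=2)
        pos = np.clip(pos, [0, 0], [h - 1, w - 1])
    light = image[pos[:, 0].astype(int), pos[:, 1].astype(int)]
    return pos[light.argmax()].astype(int)

yy, xx = np.mgrid[0:100, 0:100]
img = np.exp(-((yy - 65) ** 2 + (xx - 45) ** 2) / 400.0)  # bright "disc"
print(firefly_brightest(img))     # close to the true peak at (65, 45)
```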
Cremers, Charlotte H P; Dankbaar, Jan Willem; Vergouwen, Mervyn D I; Vos, Pieter C; Bennink, Edwin; Rinkel, Gabriel J E; Velthuis, Birgitta K; van der Schaaf, Irene C
2015-05-01
Tracer delay-sensitive perfusion algorithms in CT perfusion (CTP) result in an overestimation of the extent of ischemia in thromboembolic stroke. In diagnosing delayed cerebral ischemia (DCI) after aneurysmal subarachnoid hemorrhage (aSAH), delayed arrival of contrast due to vasospasm may also overestimate the extent of ischemia. We investigated the diagnostic accuracy of tracer delay-sensitive and tracer delay-insensitive algorithms for detecting DCI. From a prospectively collected series of aSAH patients admitted between 2007 and 2011, we included patients with any clinical deterioration other than rebleeding within 21 days after SAH who underwent NCCT/CTP/CTA imaging. Causes of clinical deterioration were categorized into DCI and no DCI. CTP maps were calculated with tracer delay-sensitive and tracer delay-insensitive algorithms and were visually assessed for the presence of perfusion deficits by two independent observers with different levels of experience. The diagnostic value of both algorithms was calculated for both observers. Seventy-one patients were included. For the experienced observer, the positive predictive values (PPVs) were 0.67 for the delay-sensitive and 0.66 for the delay-insensitive algorithm, and the negative predictive values (NPVs) were 0.73 and 0.74. For the less experienced observer, PPVs were 0.60 for both algorithms, and NPVs were 0.66 for the delay-sensitive and 0.63 for the delay-insensitive algorithm. Test characteristics are comparable for tracer delay-sensitive and tracer delay-insensitive algorithms for the visual assessment of CTP in diagnosing DCI. This indicates that both algorithms can be used for this purpose.
Characterization of the complete fiber network topology of planar fibrous tissues and scaffolds
D'Amore, Antonio; Stella, John A.; Wagner, William R.; Sacks, Michael S.
2010-01-01
Understanding how engineered tissue scaffold architecture affects cell morphology, metabolism, and phenotypic expression, as well as predicting material mechanical behavior, has recently received increased attention. In the present study, an image-based analysis approach that provides an automated tool to characterize engineered tissue fiber network topology is presented. Micro-architectural features that fully define fiber network topology were detected and quantified, including fiber orientation, connectivity, intersection spatial density, and diameter. Algorithm performance was tested using scanning electron microscopy (SEM) images of electrospun poly(ester urethane)urea (ES-PEUU) scaffolds. SEM images of rabbit mesenchymal stem cell (MSC) seeded collagen gel scaffolds and decellularized rat carotid arteries were also analyzed to further evaluate the ability of the algorithm to capture fiber network morphology regardless of scaffold type and the evaluated size scale. The image analysis procedure was validated qualitatively and quantitatively, comparing fiber network topology manually detected by human operators (n=5) with that automatically detected by the algorithm. Correlation values between manually detected and algorithm-detected results for the fiber angle distribution and the fiber connectivity distribution were 0.86 and 0.93, respectively. Algorithm-detected fiber intersections and fiber diameter values were comparable (within the mean ± standard deviation) with those detected by human operators. This automated approach identifies and quantifies fiber network morphology as demonstrated for three relevant scaffold types and provides a means to: (1) guarantee objectivity, (2) significantly reduce analysis time, and (3) potentiate broader analysis of scaffold architecture effects on cell behavior and tissue development both in vitro and in vivo. PMID:20398930
Time Series Discord Detection in Medical Data using a Parallel Relational Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woodbridge, Diane; Rintoul, Mark Daniel; Wilson, Andrew T.
Recent advances in sensor technology have made continuous real-time health monitoring available in both hospital and non-hospital settings. Since high-frequency medical sensors produce huge amounts of data, storing and processing continuous medical data is an emerging big-data area. Detecting anomalies in real time is especially important for detecting and preventing patient emergencies. A time series discord is a subsequence that has the maximum difference to the rest of the time series subsequences, meaning that it has abnormal or unusual data trends. In this study, we implemented two versions of time series discord detection algorithms on a high-performance parallel database management system (DBMS) and applied them to 240 Hz waveform data collected from 9,723 patients. The initial brute-force version of the discord detection algorithm takes each possible subsequence and calculates a distance to the nearest non-self match to find the biggest discords in the time series. For the heuristic version of the algorithm, a combination of an array and a trie structure was applied to order time series data for better time efficiency. The study results showed efficient data loading, decoding, and discord searches in a large amount of data, benefiting from the time series discord detection algorithm and the architectural characteristics of the parallel DBMS, including data compression, data pipelining, and task scheduling.
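A minimal single-threaded sketch of the brute-force discord search described above (the parallel DBMS machinery is out of scope here; the data are synthetic):

```python
import numpy as np

def brute_force_discord(ts, m):
    """Start index and distance of the length-m subsequence farthest from
    its nearest non-self match (O(n^2) subsequence comparisons)."""
    n = len(ts) - m + 1
    subs = np.array([ts[i:i + m] for i in range(n)])
    best_dist, best_idx = -1.0, -1
    for i in range(n):
        d = np.sqrt(((subs - subs[i]) ** 2).sum(axis=1))
        d[max(0, i - m + 1):i + m] = np.inf  # exclude overlapping self-matches
        if d.min() > best_dist:
            best_dist, best_idx = d.min(), i
    return best_idx, best_dist

rng = np.random.default_rng(1)
beats = np.tile(np.sin(np.linspace(0, 2 * np.pi, 60)), 20)  # periodic signal
beats[700:760] += 1.5                                       # injected anomaly
print(brute_force_discord(beats + 0.05 * rng.normal(size=beats.size), 60))
```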
Automatic multimodal detection for long-term seizure documentation in epilepsy.
Fürbass, F; Kampusch, S; Kaniusas, E; Koren, J; Pirker, S; Hopfengärtner, R; Stefan, H; Kluge, T; Baumgartner, C
2017-08-01
This study investigated the sensitivity and false detection rate of a multimodal automatic seizure detection algorithm and its applicability to reduced electrode montages for long-term seizure documentation in epilepsy patients. An automatic seizure detection algorithm based on EEG, EMG, and ECG signals was developed. EEG/ECG recordings of 92 patients from two epilepsy monitoring units, including 494 seizures, were used to assess detection performance. EMG data were extracted by bandpass filtering of EEG signals. Sensitivity and false detection rate were evaluated for each signal modality and for reduced electrode montages. All focal seizures evolving to bilateral tonic-clonic (BTCS, n=50) and 89% of focal seizures (FS, n=139) were detected. Average sensitivity was 94% in temporal lobe epilepsy (TLE) patients and 74% in extratemporal lobe epilepsy (XTLE) patients. Overall detection sensitivity was 86%. Average false detection rate was 12.8 false detections in 24 h (FD/24 h) for TLE and 22 FD/24 h for XTLE patients. Utilization of 8 frontal and temporal electrodes reduced average sensitivity from 86% to 81%. Our automatic multimodal seizure detection algorithm shows high sensitivity with full and reduced electrode montages. Evaluation of different signal modalities and electrode montages paves the way for semi-automatic seizure documentation systems. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
Nika, Varvara; Babyn, Paul; Zhu, Hongmei
2014-07-01
Automatic change detection methods for identifying changes in serial MR images taken at different times are of great interest to radiologists. The majority of existing change detection methods in medical imaging, and those for brain images in particular, include many preprocessing steps and rely mostly on statistical analysis of magnetic resonance imaging (MRI) scans. Although most methods utilize registration software, tissue classification remains a difficult and overwhelming task. Recently, dictionary learning techniques have been used in many areas of image processing, such as image surveillance, face recognition, remote sensing, and medical imaging. We present an improved version of the EigenBlockCD algorithm, named EigenBlockCD-2. The EigenBlockCD-2 algorithm performs an initial global registration and identifies the changes between serial MR images of the brain. Blocks of pixels from a baseline scan are used to train local dictionaries to detect changes in the follow-up scan. We use PCA to reduce the dimensionality of the local dictionaries and the redundancy of the data. Choosing the appropriate distance measure significantly affects the performance of our algorithm. We examine the differences between the L1 and L2 norms as two possible similarity measures in the improved EigenBlockCD-2 algorithm, and we show the advantages of one norm over the other both theoretically and numerically. We also demonstrate the performance of the new EigenBlockCD-2 algorithm for detecting changes in MR images and compare our results with those provided in the recent literature. Experimental results with both simulated and real MRI scans show that our improved EigenBlockCD-2 algorithm outperforms the previous methods. It detects clinical changes while ignoring changes due to the patient's position and other acquisition artifacts.
GPU based cloud system for high-performance arrhythmia detection with parallel k-NN algorithm.
Tae Joon Jun; Hyun Ji Park; Hyuk Yoo; Young-Hak Kim; Daeyoung Kim
2016-08-01
In this paper, we propose a GPU-based cloud system for high-performance arrhythmia detection. The Pan-Tompkins algorithm is used for QRS detection, and we optimized the beat classification algorithm with k-nearest neighbors (k-NN). To support high-performance beat classification on the system, we parallelized the beat classification algorithm with CUDA to execute it on virtualized GPU devices in the cloud system. The MIT-BIH Arrhythmia database is used to validate the algorithm. The system achieved a detection rate of about 93.5%, which is comparable to previous research, while our algorithm executes 2.5 times faster than a CPU-only detection algorithm.
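A minimal serial sketch of the k-NN beat-classification step (the feature layout and data are invented; the CUDA parallelization is out of scope here):

```python
import numpy as np

def knn_classify(train_x, train_y, beat, k=3):
    """Label one beat by majority vote among its k nearest training beats."""
    d = np.linalg.norm(train_x - beat, axis=1)   # Euclidean distances
    votes = train_y[np.argsort(d)[:k]]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[counts.argmax()]

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.1, size=(50, 8))      # toy normal-beat features
ectopic = rng.normal(1.0, 0.1, size=(50, 8))     # toy ectopic-beat features
X = np.vstack([normal, ectopic])
y = np.array([0] * 50 + [1] * 50)
print(knn_classify(X, y, rng.normal(1.0, 0.1, size=8)))   # expected: 1
```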
Detection of core-periphery structure in networks based on 3-tuple motifs
NASA Astrophysics Data System (ADS)
Ma, Chuang; Xiang, Bing-Bing; Chen, Han-Shuang; Small, Michael; Zhang, Hai-Feng
2018-05-01
Detecting mesoscale structure, such as community structure, is of vital importance for analyzing complex networks. Recently, a new mesoscale structure, core-periphery (CP) structure, has been identified in many real-world systems. In this paper, we propose an effective algorithm for detecting CP structure based on a 3-tuple motif. In this algorithm, we first define a 3-tuple motif in terms of the patterns of edges as well as the properties of nodes, and then a motif adjacency matrix is constructed based on the 3-tuple motif. Finally, the problem is converted to finding a cluster that minimizes the motif conductance. Our algorithm works well for different CP structures, including single or multiple CP structures, and local or global CP structures. Results on synthetic and empirical networks validate the high performance of our method.
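A minimal sketch of building a motif adjacency matrix from 3-node tuples; a generic open path i-j-k stands in here for the paper's specific motif definition:

```python
import numpy as np

def motif_adjacency(A):
    """W[a, b] counts the 3-tuples (open paths a-j-b) containing a and b."""
    W = np.zeros_like(A, dtype=float)
    for j in range(A.shape[0]):
        nbrs = np.flatnonzero(A[j])
        for a in nbrs:
            for b in nbrs:
                if a != b:
                    W[a, b] += 1   # a and b co-occur in a motif through j
    return W

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
print(motif_adjacency(A))   # clustering on W then replaces clustering on A
```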
Masciotra, Silvina; Smith, Amanda J; Youngpairoj, Ae S; Sprinkle, Patrick; Miles, Isa; Sionean, Catlainn; Paz-Bailey, Gabriela; Johnson, Jeffrey A; Owen, S Michele
2013-12-01
Until recently most testing algorithms in the United States (US) utilized Western blot (WB) as the supplemental test. CDC has proposed an algorithm for HIV diagnosis which includes an initial screen with a Combo Antigen/Antibody 4th generation-immunoassay (IA), followed by an HIV-1/2 discriminatory IA of initially reactive-IA specimens. Discordant results in the proposed algorithm are resolved by nucleic acid-amplification testing (NAAT). Evaluate the results obtained with the CDC proposed laboratory-based algorithm using specimens from men who have sex with men (MSM) obtained in five metropolitan statistical areas (MSAs). Specimens from 992 MSM from five MSAs participating in the CDC's National HIV Behavioral Surveillance System in 2011 were tested at local facilities and CDC. The five MSAs utilized algorithms of various screening assays and specimen types, and WB as the supplemental test. At the CDC, serum/plasma specimens were screened with 4th generation-IA and the Multispot HIV-1/HIV-2 discriminatory assay was used as the supplemental test. NAAT was used to resolve discordant results and to further identify acute HIV infections from all screened-non-reactive missed by the proposed algorithm. Performance of the proposed algorithm was compared to site-specific WB-based algorithms. The proposed algorithm detected 254 infections. The WB-based algorithms detected 19 fewer infections; 4 by oral fluid (OF) rapid testing and 15 by WB supplemental testing (12 OF and 3 blood). One acute infection was identified by NAAT from all screened-non-reactive specimens. The proposed algorithm identified more infections than the WB-based algorithms in a high-risk MSM population. OF testing was associated with most of the discordant results between algorithms. HIV testing with the proposed algorithm can increase diagnosis of infected individuals, including early infections. Published by Elsevier B.V.
Grewal, Dilraj S; Tanna, Angelo P
2013-03-01
With the rapid adoption of spectral domain optical coherence tomography (SDOCT) in clinical practice and the recent advances in software technology, there is a need for a review of the literature on glaucoma detection and progression analysis algorithms designed for the commercially available instruments. Peripapillary retinal nerve fiber layer (RNFL) thickness and macular thickness, including segmental macular thickness calculation algorithms, have been demonstrated to be repeatable and reproducible, and have a high degree of diagnostic sensitivity and specificity in discriminating between healthy and glaucomatous eyes across the glaucoma continuum. Newer software capabilities such as glaucoma progression detection algorithms provide an objective analysis of longitudinally obtained structural data that enhances our ability to detect glaucomatous progression. RNFL measurements obtained with SDOCT appear more sensitive than time domain OCT (TDOCT) for glaucoma progression detection; however, agreement with the assessments of visual field progression is poor. Over the last few years, several studies have been performed to assess the diagnostic performance of SDOCT structural imaging and its validity in assessing glaucoma progression. Most evidence suggests that SDOCT performs similarly to TDOCT for glaucoma diagnosis; however, SDOCT may be superior for the detection of early stage disease. With respect to progression detection, SDOCT represents an important technological advance because of its improved resolution and repeatability. Advancements in RNFL thickness quantification, segmental macular thickness calculation and progression detection algorithms, when used correctly, may help to improve our ability to diagnose and manage glaucoma.
NASA Astrophysics Data System (ADS)
Obulesu, O.; Rama Mohan Reddy, A., Dr; Mahendra, M.
2017-08-01
Detecting regular and efficient cyclic models is a demanding activity for data analysts, owing to the unstructured, dynamic, and enormous raw information produced from the web. Many existing approaches generate large numbers of candidate patterns on huge and complex databases. In this work, two novel algorithms are proposed and a comparative examination is performed considering scalability and performance parameters. The first algorithm, EFPMA (Extended Regular Model Detection Algorithm), finds frequent sequential patterns from spatiotemporal datasets, and the second, ETMA (Enhanced Tree-based Mining Algorithm), detects effective cyclic models with a symbolic database representation. EFPMA grows models from both ends (prefixes and suffixes) of detected patterns, which results in faster pattern growth because fewer levels of database projection are needed compared to existing approaches such as PrefixSpan and SPADE. ETMA uses distinct notions to store and manage transaction data horizontally, such as segments, sequences, and individual symbols, and exploits a partition-and-conquer method to find maximal patterns using symbolic notations. Using this algorithm, we can mine cyclic models in full-series sequential patterns, including subsection series. ETMA reduces memory consumption and makes use of efficient symbolic operations. Furthermore, ETMA records time-series instances dynamically, in terms of character, series, and section approaches, respectively. Determining the extent of patterns and proving the efficiency of the reduction and retrieval techniques on synthetic and actual datasets remain open and challenging mining problems. These techniques are useful in data streams, traffic risk analysis, medical diagnosis, DNA sequence mining, and earthquake prediction applications. Extensive experimental outcomes illustrate that the algorithms outperform the ECLAT, STNR, and MAFIA approaches in efficiency and scalability.
Kasturi, Rangachar; Goldgof, Dmitry; Soundararajan, Padmanabhan; Manohar, Vasant; Garofolo, John; Bowers, Rachel; Boonstra, Matthew; Korzhova, Valentina; Zhang, Jing
2009-02-01
Common benchmark data sets, standardized performance metrics, and baseline algorithms have demonstrated considerable impact on research and development in a variety of application domains. These resources provide both consumers and developers of technology with a common framework to objectively compare the performance of different algorithms and algorithmic improvements. In this paper, we present such a framework for evaluating object detection and tracking in video: specifically for face, text, and vehicle objects. This framework includes the source video data, ground-truth annotations (along with guidelines for annotation), performance metrics, evaluation protocols, and tools including scoring software and baseline algorithms. For each detection and tracking task and supported domain, we developed a 50-clip training set and a 50-clip test set. Each data clip is approximately 2.5 minutes long and has been completely spatially/temporally annotated at the I-frame level. Each task/domain, therefore, has an associated annotated corpus of approximately 450,000 frames. The scope of such annotation is unprecedented and was designed to begin to support the necessary quantities of data for robust machine learning approaches, as well as a statistically significant comparison of the performance of algorithms. The goal of this work was to systematically address the challenges of object detection and tracking through a common evaluation framework that permits a meaningful objective comparison of techniques, provides the research community with sufficient data for the exploration of automatic modeling techniques, encourages the incorporation of objective evaluation into the development process, and contributes useful lasting resources of a scale and magnitude that will prove to be extremely useful to the computer vision research community for years to come.
Seismic data fusion anomaly detection
NASA Astrophysics Data System (ADS)
Harrity, Kyle; Blasch, Erik; Alford, Mark; Ezekiel, Soundararajan; Ferris, David
2014-06-01
Detecting anomalies in non-stationary signals has valuable applications in many fields, including medicine and meteorology, such as identifying possible heart conditions from electrocardiography (ECG) signals or predicting earthquakes via seismographic data. Given the many choices of anomaly detection algorithms, it is important to compare possible methods. In this paper, we examine and compare two approaches to anomaly detection and see how data fusion methods may improve performance. The first approach involves using an artificial neural network (ANN) to detect anomalies in a wavelet de-noised signal. The other method uses a perspective neural network (PNN) to analyze an arbitrary number of "perspectives" or transformations of the observed signal for anomalies. Possible perspectives may include wavelet de-noising, the Fourier transform, peak filtering, etc. In order to evaluate these techniques via signal fusion metrics, we must apply signal preprocessing techniques such as de-noising methods to the original signal and then use a neural network to find anomalies in the generated signal. From this secondary result it is possible to use data fusion techniques that can be evaluated via existing data fusion metrics for single and multiple perspectives. The result will show which anomaly detection method, according to the metrics, is better suited overall for anomaly detection applications. The method used in this study could be applied to compare other signal processing algorithms.
Linear feature detection algorithm for astronomical surveys - I. Algorithm description
NASA Astrophysics Data System (ADS)
Bektešević, Dino; Vinković, Dejan
2017-11-01
Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards the detection of stars and galaxies, completely ignoring the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability to astronomical surveys are also discussed.
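The final line-detection step can be sketched with a standard Hough transform (using scikit-image here; the star/galaxy-removal and line-enhancement stages that precede it in the paper are omitted):

```python
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

# Synthetic "survey image": dark background with one linear trail.
img = np.zeros((200, 200), dtype=bool)
rows = np.arange(200)
img[rows, np.clip(rows // 2 + 30, 0, 199)] = True

# Accumulate votes over (angle, distance) space; keep the strongest peak.
h, angles, dists = hough_line(img)
_, best_angles, best_dists = hough_line_peaks(h, angles, dists, num_peaks=1)
print(np.rad2deg(best_angles[0]), best_dists[0])
```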
An efficient parallel termination detection algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, A. H.; Crivelli, S.; Jessup, E. R.
2004-05-27
Information local to any one processor is insufficient to monitor the overall progress of most distributed computations. Typically, a second distributed computation for detecting termination of the main computation is necessary. In order to be a useful computational tool, the termination detection routine must operate concurrently with the main computation, adding minimal overhead, and it must promptly and correctly detect termination when it occurs. In this paper, we present a new algorithm for detecting the termination of a parallel computation on distributed-memory MIMD computers that satisfies all of those criteria. A variety of termination detection algorithms have been devised. Of these, the algorithm presented by Sinha, Kale, and Ramkumar (henceforth, the SKR algorithm) is unique in its ability to adapt to the load conditions of the system on which it runs, thereby minimizing the impact of termination detection on performance. Because their algorithm also detects termination quickly, we consider it to be the most efficient practical algorithm presently available. The termination detection algorithm presented here was developed for use in the PMESC programming library for distributed-memory MIMD computers. Like the SKR algorithm, our algorithm adapts to system loads and imposes little overhead. Also like the SKR algorithm, ours is tree-based, and it does not depend on any assumptions about the physical interconnection topology of the processors or the specifics of the distributed computation. In addition, our algorithm is easier to implement and requires only half as many tree traverses as does the SKR algorithm. This paper is organized as follows. In section 2, we define our computational model. In section 3, we review the SKR algorithm. We introduce our new algorithm in section 4, and prove its correctness in section 5. We discuss its efficiency and present experimental results in section 6.
Adaptive Gaussian mixture models for pre-screening in GPR data
NASA Astrophysics Data System (ADS)
Torrione, Peter; Morton, Kenneth, Jr.; Besaw, Lance E.
2011-06-01
Due to the large amount of data generated by vehicle-mounted ground penetrating radar (GPR) antennae arrays, advanced feature extraction and classification can only be performed on a small subset of data during real-time operation. As a result, most GPR-based landmine detection systems implement "pre-screening" algorithms to process all of the data generated by the antennae array and identify locations with anomalous signatures for more advanced processing. These pre-screening algorithms must be computationally efficient and obtain a high probability of detection, but can permit a false alarm rate which might be higher than the total system requirements. Many approaches to pre-screening have previously been proposed, including linear prediction coefficients, the LMS algorithm, and CFAR-based approaches. Similar pre-screening techniques have also been developed in the field of video processing to identify anomalous behavior or anomalous objects. One such algorithm, an online k-means approximation to an adaptive Gaussian mixture model (GMM), is particularly well-suited to pre-screening in GPR data due to its computational efficiency, non-linear nature, and the relevance of the logic underlying the algorithm to GPR processing. In this work we explore the application of an adaptive GMM-based approach for anomaly detection from the video processing literature to pre-screening in GPR data. Results with the ARA Nemesis landmine detection system demonstrate significant pre-screening performance improvements compared to alternative approaches, and indicate that the proposed algorithm is a complementary technique to existing methods.
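A one-dimensional sketch of an online k-means approximation to an adaptive GMM of the kind referenced above, flagging samples that match no existing component; all parameter values are illustrative:

```python
import numpy as np

class OnlineGMM:
    """Adaptive mixture kept up to date with an online k-means rule."""

    def __init__(self, k=3, lr=0.05, match_sigma=2.5):
        self.k, self.lr, self.match_sigma = k, lr, match_sigma
        self.mu, self.var = np.zeros(k), np.ones(k)
        self.w = np.full(k, 1.0 / k)

    def update(self, x):
        """Return True if x is anomalous (matches no component)."""
        z = np.abs(x - self.mu) / np.sqrt(self.var)
        m = z.argmin()
        if z[m] < self.match_sigma:               # matched: adapt component m
            self.w += self.lr * ((np.arange(self.k) == m) - self.w)
            self.mu[m] += self.lr * (x - self.mu[m])
            self.var[m] += self.lr * ((x - self.mu[m]) ** 2 - self.var[m])
            return False
        weakest = self.w.argmin()                 # no match: replace weakest
        self.mu[weakest], self.var[weakest] = x, 1.0
        return True

model = OnlineGMM()
stream = list(np.random.default_rng(0).normal(0, 1, 500)) + [8.0]
print("last sample anomalous:", [model.update(x) for x in stream][-1])
```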
Comparison of Traditional and Reverse Syphilis Screening Algorithms in Medical Health Checkups.
Nah, Eun Hee; Cho, Seon; Kim, Suyoung; Cho, Han Ik; Chai, Jong Yil
2017-11-01
The syphilis diagnostic algorithms applied in different countries vary significantly depending on the local syphilis epidemiology and other considerations, including the expected workload, the need for automation in the laboratory, and budget factors. This study was performed to investigate the efficacy of traditional and reverse syphilis diagnostic algorithms during general health checkups. In total, 1,000 blood specimens were obtained from 908 men and 92 women during their regular health checkups. Traditional screening and reverse screening were applied to the same specimens using automated rapid plasma reagin (RPR) and Treponema pallidum latex agglutination (TPLA) tests, respectively. Specimens that were reactive on the reverse algorithm (TPLA) were subjected to a second treponemal test performed using the chemiluminescent microparticle immunoassay (CMIA). Of the 1,000 specimens tested, 68 (6.8%) were reactive by reverse screening (TPLA) compared with 11 (1.1%) by traditional screening (RPR). The traditional algorithm failed to detect 48 specimens [TPLA(+)/RPR(-)/CMIA(+)]. The median TPLA cutoff index (COI) was higher in CMIA-reactive cases than in CMIA-nonreactive cases (90.5 vs 12.5 U). The reverse screening algorithm could detect subjects with possible latent syphilis who were not detected by the traditional algorithm. Those individuals could be provided with opportunities for evaluating syphilis during their health checkups. The COI values of the initial TPLA test may be helpful in excluding false-positive TPLA results in the reverse algorithm. © The Korean Society for Laboratory Medicine
Ferragut, Erik M.; Laska, Jason A.; Bridges, Robert A.
2016-06-07
A system is described for receiving a stream of events and scoring the events based on anomalousness and maliciousness (or other classification). The system can include a plurality of anomaly detectors that together implement an algorithm to identify low-probability events and detect atypical traffic patterns. The anomaly detector provides for comparability of disparate sources of data (e.g., network flow data and firewall logs). Additionally, the anomaly detector allows for regulatability, meaning that the algorithm can be user-configurable to adjust the number of false alerts. The anomaly detector can be used for a variety of probability density functions, including normal Gaussian distributions and irregular distributions, as well as functions associated with continuous or discrete variables.
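The scoring idea can be sketched as a negative log probability, which is what makes scores from disparate sources comparable and lets a single threshold regulate the alert rate (the toy event model below is invented):

```python
import math

def anomaly_score(p):
    """Negative log probability: rarer events score higher on a common scale."""
    return -math.log(max(p, 1e-300))

# Toy discrete model of bytes-per-flow buckets from network flow records.
flow_model = {"small": 0.70, "medium": 0.25, "large": 0.04, "huge": 0.01}
for bucket in ("small", "huge"):
    print(bucket, round(anomaly_score(flow_model[bucket]), 2))
# Raising the alert threshold on this score lowers the false-alert rate.
```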
White blood cell segmentation by circle detection using electromagnetism-like optimization.
Cuevas, Erik; Oliva, Diego; Díaz, Margarita; Zaldivar, Daniel; Pérez-Cisneros, Marco; Pajares, Gonzalo
2013-01-01
Medical imaging is a relevant field of application of image processing algorithms. In particular, the analysis of white blood cell (WBC) images has engaged researchers from fields of medicine and computer vision alike. Since WBCs can be approximated by a quasicircular form, a circular detector algorithm may be successfully applied. This paper presents an algorithm for the automatic detection of white blood cells embedded into complicated and cluttered smear images that considers the complete process as a circle detection problem. The approach is based on a nature-inspired technique called the electromagnetism-like optimization (EMO) algorithm which is a heuristic method that follows electromagnetism principles for solving complex optimization problems. The proposed approach uses an objective function which measures the resemblance of a candidate circle to an actual WBC. Guided by the values of such objective function, the set of encoded candidate circles are evolved by using EMO, so that they can fit into the actual blood cells contained in the edge map of the image. Experimental results from blood cell images with a varying range of complexity are included to validate the efficiency of the proposed technique regarding detection, robustness, and stability.
T-wave end detection using neural networks and Support Vector Machines.
Suárez-León, Alexander Alexeis; Varon, Carolina; Willems, Rik; Van Huffel, Sabine; Vázquez-Seisdedos, Carlos Román
2018-05-01
In this paper we propose a new approach for detecting the end of the T-wave in the electrocardiogram (ECG) using neural networks and support vector machines. Both multilayer perceptron (MLP) neural networks and fixed-size least-squares support vector machines (FS-LSSVM) were used as regression algorithms to determine the end of the T-wave. Different strategies for selecting the training set, such as random selection, k-means, robust clustering, and maximum quadratic (Rényi) entropy, were evaluated. Individual parameters were tuned for each method during training, and the results are given for the evaluation set. A comparison between the MLP and FS-LSSVM approaches was performed. Finally, a fair comparison of the FS-LSSVM method with other state-of-the-art algorithms for detecting the end of the T-wave was included. The experimental results show that FS-LSSVM approaches are more suitable as regression algorithms than MLP neural networks. Despite the small training sets used, the FS-LSSVM methods outperformed the state-of-the-art techniques. FS-LSSVM can be successfully used as a T-wave end detection algorithm in ECG even with small training set sizes. Copyright © 2018 Elsevier Ltd. All rights reserved.
Tang, Min; Curtis, Sean; Yoon, Sung-Eui; Manocha, Dinesh
2009-01-01
We present an interactive algorithm for continuous collision detection between deformable models. We introduce multiple techniques to improve the culling efficiency and the overall performance of continuous collision detection. First, we present a novel formulation for continuous normal cones and use these normal cones to efficiently cull large regions of the mesh as part of self-collision tests. Second, we introduce the concept of "procedural representative triangles" to remove all redundant elementary tests between nonadjacent triangles. Finally, we exploit the mesh connectivity and introduce the concept of "orphan sets" to eliminate redundant elementary tests between adjacent triangle primitives. In practice, we can reduce the number of elementary tests by two orders of magnitude. These culling techniques have been combined with bounding volume hierarchies and can result in one order of magnitude performance improvement as compared to prior collision detection algorithms for deformable models. We highlight the performance of our algorithm on several benchmarks, including cloth simulations, N-body simulations, and breaking objects.
Comparison of the MODIS Collection 5 Multilayer Cloud Detection Product with CALIPSO
NASA Technical Reports Server (NTRS)
Platnick, Steven; Wind, Gala; King, Michael D.; Holz, Robert E.; Ackerman, Steven A.; Nagle, Fred W.
2010-01-01
CALIPSO, launched in June 2006, provides global active remote sensing measurements of clouds and aerosols that can be used for validation of a variety of passive imager retrievals derived from instruments flying on the Aqua spacecraft and other A-Train platforms. The most recent processing effort for the MODIS Atmosphere Team, referred to as the Collection 5 stream, includes a research-level multilayer cloud detection algorithm that uses both thermodynamic phase information, derived from a combination of solar and thermal emission bands to discriminate layers of different phases, and true layer separation discrimination using a moderately absorbing water vapor band. The multilayer detection algorithm is designed to provide a means of assessing the applicability of 1D cloud models used in the MODIS cloud optical and microphysical product retrievals, which are generated at a 1 km resolution. Using pixel-level collocations of MODIS Aqua and CALIOP, we investigate the global performance of the multilayer cloud detection algorithm (and thermodynamic phase).
Automated peroperative assessment of stents apposition from OCT pullbacks.
Dubuisson, Florian; Péry, Emilie; Ouchchane, Lemlih; Combaret, Nicolas; Kauffmann, Claude; Souteyrand, Géraud; Motreff, Pascal; Sarry, Laurent
2015-04-01
This study's aim was to assess stent apposition by automatically analyzing endovascular optical coherence tomography (OCT) sequences. The lumen is detected using threshold, morphological, and gradient operators to run a Dijkstra algorithm. Wrong detections, tagged by the user and caused by bifurcations, struts' presence, thrombotic lesions, or dissections, can be corrected using a morphing algorithm. Struts are also segmented by computing symmetrical and morphological operators. The Euclidean distance between detected struts and the artery wall initializes a complete distance map of the stent, and missing data are interpolated with thin-plate spline functions. Rejection of detected outliers, regularization of parameters by generalized cross-validation, and use of the one-sided cyclic property of the map further improve accuracy. Several indices computed from the map provide quantitative values of malapposition. The algorithm was run on four in-vivo OCT sequences including different cases of incomplete stent apposition. Comparison with manual expert measurements validates the segmentation's accuracy and shows an almost perfect concordance of the automated results. Copyright © 2014 Elsevier Ltd. All rights reserved.
Dao, Duy; Salehizadeh, S M A; Noh, Yeonsik; Chong, Jo Woon; Cho, Chae Ho; McManus, Dave; Darling, Chad E; Mendelson, Yitzhak; Chon, Ki H
2017-09-01
Motion and noise artifacts (MNAs) impose limits on the usability of the photoplethysmogram (PPG), particularly in the context of ambulatory monitoring. MNAs can distort PPG, causing erroneous estimation of physiological parameters such as heart rate (HR) and arterial oxygen saturation (SpO2). In this study, we present a novel approach, "TifMA," based on using the time-frequency spectrum of PPG to first detect the MNA-corrupted data and next discard the nonusable part of the corrupted data. The term "nonusable" refers to segments of PPG data from which the HR signal cannot be recovered accurately. Two sequential classification procedures were included in the TifMA algorithm. The first classifier distinguishes between MNA-corrupted and MNA-free PPG data. Once a segment of data is deemed MNA-corrupted, the next classifier determines whether the HR can be recovered from the corrupted segment or not. A support vector machine (SVM) classifier was used to build a decision boundary for the first classification task using data segments from a training dataset. Features from the time-frequency spectra of PPG were extracted to build the detection model. Five datasets were considered for evaluating TifMA performance: (1) and (2) were laboratory-controlled PPG recordings from forehead and finger pulse oximeter sensors with subjects making random movements, (3) and (4) were actual patient PPG recordings from UMass Memorial Medical Center with random free movements, and (5) was a laboratory-controlled PPG recording dataset measured at the forehead while the subjects ran on a treadmill. The first dataset was used to analyze the noise sensitivity of the algorithm. Datasets 2-4 were used to evaluate the MNA detection phase of the algorithm. The results from the first phase of the algorithm (MNA detection) were compared to results from three existing MNA detection algorithms: the Hjorth, kurtosis-Shannon entropy, and time-domain variability-SVM approaches, the last an approach recently developed in our laboratory. The proposed TifMA algorithm consistently provided higher detection rates than the other three methods, with accuracies greater than 95% for all data. Moreover, our algorithm was able to pinpoint the start and end times of the MNA with an error of less than 1 s in duration, whereas the next-best algorithm had a detection error of more than 2.2 s. The final, most challenging, dataset was collected to verify the performance of the algorithm in discriminating between corrupted data that were usable for accurate HR estimation and data that were nonusable. On average, 48% of the data segments were found to have MNA, and of these, 38% could be used to provide reliable HR estimation.
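A minimal sketch of time-frequency features of the kind such classifiers could consume: per-window dominant frequency and power concentration from a spectrogram (this mimics the spirit, not the detail, of TifMA's features; the data are synthetic):

```python
import numpy as np
from scipy.signal import spectrogram

def tf_features(ppg, fs):
    """Per-window dominant frequency and spectral power concentration."""
    f, t, S = spectrogram(ppg, fs=fs, nperseg=int(4 * fs))
    return f[S.argmax(axis=0)], S.max(axis=0) / S.sum(axis=0)

fs = 100.0
t = np.arange(0, 30, 1 / fs)
clean = np.sin(2 * np.pi * 1.2 * t)            # ~72 bpm pulse waveform
noise = np.random.default_rng(0).normal(0, 1.5, t.size)
ppg = clean + (t > 15) * noise                 # MNA in the second half
freqs, conc = tf_features(ppg, fs)
print(conc.round(2))    # concentration drops in the corrupted windows
```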
Radar Detection of Marine Mammals
2011-09-30
BFT-BPT algorithm for use with our radar data. This track-before-detect algorithm had been effective in enhancing small but persistent signatures in... will be possible with the detect-before-track algorithm. We next evaluated the track-before-detect algorithm, the BFT-BPT, on the CEDAR data
Detection of Unknown LEO Satellite Using Radar Measurements
NASA Astrophysics Data System (ADS)
Kamensky, S.; Samotokhin, A.; Khutorovsky, Z.; Alfriend, T.
When processing radar information for satellite catalog maintenance, some measurements do not correlate with cataloged and tracked satellites. These non-correlated measurements participate in the detection (primary orbit determination) of new (not cataloged) satellites. A satellite is considered newly detected when it is missing from the catalog and the primary orbit determination on the basis of the non-correlated measurements provides accuracy sufficient for reliable correlation of future measurements. We will call this the detection condition. One non-correlated measurement in real conditions does not have enough accuracy and thus does not satisfy the detection condition. Two measurements separated by a revolution or more normally provide an orbit determination with accuracy sufficient for selection of other measurements. However, it is not always possible to say with high probability (close to 1) that two measurements belong to one satellite. Three measurements from different revolutions that fit one orbit have significantly higher chances of belonging to one satellite. Thus the suggested detection (primary orbit determination) algorithm looks for three uncorrelated measurements in different revolutions for which we can determine an orbit inscribing them. The detection procedure based on the search for such triplets is rather laborious, so only relatively high efficiency can justify its practical implementation. This work presents a detailed description of the suggested detection procedure based on the search for triplets of uncorrelated radar measurements. The break-ups of tracked satellites provide the most difficult conditions for the operation of the detection algorithm and reveal its characteristics explicitly. The characteristics of time efficiency and reliability of the detected orbits are of maximum interest. Within this work we suggest determining these characteristics using simulation of break-ups with further acquisition of measurements generated by the fragments. In particular, using simulation we can not only evaluate the characteristics of the algorithm but also adjust its parameters for certain conditions: the orbit of the fragmented satellite, the features of the break-up, the capabilities of detection radars, etc. We describe the algorithm performing the simulation of radar measurements produced by the fragments of the parent satellite. This algorithm accounts for the basic factors affecting the characteristics of time efficiency and reliability of the detection. The catalog maintenance algorithm includes two major components, detection and tracking. These are two processes permanently interacting with each other, as is actually the case in the processing of real radar data. The simulation must take this into account, since one cannot obtain reliable characteristics of the detection procedure by simulating only this process. Thus we simulated both processes in their interaction. The work presents the results of simulation for the simplest case of a break-up in a near-circular orbit with insignificant atmospheric drag. The simulations show rather high efficiency. We demonstrate as well that the characteristics of time efficiency and reliability of determined orbits essentially depend on the density of the observed break-up fragments.
Redundancy management of multiple KT-70 inertial measurement units applicable to the space shuttle
NASA Technical Reports Server (NTRS)
Cook, L. J.
1975-01-01
Results of an investigation of velocity failure detection and isolation (FDI) for 3-IMU and 2-IMU inertial measurement unit configurations are presented. The FDI algorithm performance was highly successful, and most types of velocity errors were detected and isolated. The algorithm also included attitude FDI, but this was not evaluated because of the lack of time and the low resolution of the gimbal angle synchro outputs. The shuttle KT-70 IMUs will have dual-speed resolvers and high-resolution gimbal angle readouts. It was demonstrated by these tests that a single computer utilizing a serial data bus can successfully control a redundant 3-IMU system and perform FDI.
Firefly Algorithm in detection of TEC seismo-ionospheric anomalies
NASA Astrophysics Data System (ADS)
Akhoondzadeh, Mehdi
2015-07-01
Anomaly detection in time series of different earthquake precursors is an essential step toward creating an early warning system with an allowable uncertainty. Since these time series are often nonlinear, complex, and massive, the applied predictor method should be able to detect discord patterns in large data volumes in a short time. This study acknowledges the Firefly Algorithm (FA) as a simple and robust predictor for detecting TEC (Total Electron Content) seismo-ionospheric anomalies around the time of some powerful earthquakes, including Chile (27 February 2010), Varzeghan (11 August 2012) and Saravan (16 April 2013). Outstanding anomalies were observed 7 and 5 days before the Chile and Varzeghan earthquakes, respectively, and 3 and 8 days prior to the Saravan earthquake.
Urbanowicz, Ryan J; Kiralis, Jeff; Sinnott-Armstrong, Nicholas A; Heberling, Tamra; Fisher, Jonathan M; Moore, Jason H
2012-10-01
Geneticists who look beyond single locus disease associations require additional strategies for the detection of complex multi-locus effects. Epistasis, a multi-locus masking effect, presents a particular challenge, and has been the target of bioinformatic development. Thorough evaluation of new algorithms calls for simulation studies in which known disease models are sought. To date, the best methods for generating simulated multi-locus epistatic models rely on genetic algorithms. However, such methods are computationally expensive, difficult to adapt to multiple objectives, and unlikely to yield models with a precise form of epistasis which we refer to as pure and strict. Purely and strictly epistatic models constitute the worst-case in terms of detecting disease associations, since such associations may only be observed if all n-loci are included in the disease model. This makes them an attractive gold standard for simulation studies considering complex multi-locus effects. We introduce GAMETES, a user-friendly software package and algorithm which generates complex biallelic single nucleotide polymorphism (SNP) disease models for simulation studies. GAMETES rapidly and precisely generates random, pure, strict n-locus models with specified genetic constraints. These constraints include heritability, minor allele frequencies of the SNPs, and population prevalence. GAMETES also includes a simple dataset simulation strategy which may be utilized to rapidly generate an archive of simulated datasets for given genetic models. We highlight the utility and limitations of GAMETES with an example simulation study using MDR, an algorithm designed to detect epistasis. GAMETES is a fast, flexible, and precise tool for generating complex n-locus models with random architectures. While GAMETES has a limited ability to generate models with higher heritabilities, it is proficient at generating the lower heritability models typically used in simulation studies evaluating new algorithms. In addition, the GAMETES modeling strategy may be flexibly combined with any dataset simulation strategy. Beyond dataset simulation, GAMETES could be employed to pursue theoretical characterization of genetic models and epistasis.
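A toy example of the penetrance-table idea underlying such models: an XOR-like two-locus table in which, at a minor allele frequency of 0.5, each SNP's marginal penetrance is flat, so the effect is visible only when both loci are considered jointly (the table values are invented, not GAMETES output):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical purely epistatic 2-locus penetrance table, indexed by
# genotypes coded 0/1/2; at MAF 0.5 every row and column averages 0.05,
# so neither SNP shows a main effect on its own.
penetrance = np.array([[0.0, 0.1, 0.0],
                       [0.1, 0.0, 0.1],
                       [0.0, 0.1, 0.0]])

def simulate(n, maf=0.5):
    g1 = rng.binomial(2, maf, n)               # SNP 1 genotypes
    g2 = rng.binomial(2, maf, n)               # SNP 2 genotypes
    case = rng.random(n) < penetrance[g1, g2]  # disease status per subject
    return g1, g2, case.astype(int)

g1, g2, y = simulate(10000)
print("prevalence:", y.mean())                 # close to 0.05
```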
A framework for detecting communities of unbalanced sizes in networks
NASA Astrophysics Data System (ADS)
Žalik, Krista Rizman; Žalik, Borut
2018-01-01
Community detection in large networks has been a focus of recent research in many fields, including biology, physics, the social sciences, and computer science. Most community detection methods partition the entire network into communities, groups of nodes that have many connections within communities and few connections between them, and do not identify the different roles that nodes can play in communities. We propose a community detection model that integrates several different measures and can quickly identify communities of different sizes and densities. We use node degree centrality, strong similarity to one node of a community, maximal similarity of a node to a community, compactness of communities, and separation between communities. Each measure has its own strengths and weaknesses; combining different measures exploits the strengths of each one and avoids the problems encountered when using any individual measure. We present a fast local expansion algorithm that uncovers communities of different sizes and densities and reveals rich information about input networks. Experimental results show that the proposed algorithm is as effective as or better than other community detection algorithms on both real-world and synthetic networks, while requiring less time.
Performances of the New Real Time Tsunami Detection Algorithm applied to tide gauges data
NASA Astrophysics Data System (ADS)
Chierici, F.; Embriaco, D.; Morucci, S.
2017-12-01
Real-time tsunami detection algorithms play a key role in any Tsunami Early Warning System. We have developed a new algorithm for tsunami detection (TDA) based on real-time tide removal and real-time band-pass filtering of seabed pressure time series acquired by Bottom Pressure Recorders. The TDA greatly increases the tsunami detection probability, shortens the detection delay, and enhances detection reliability with respect to the most widely used tsunami detection algorithm, while containing the computational cost. The algorithm is designed to be usable in autonomous early warning systems, with a set of input parameters and procedures that can be reconfigured in real time. We have also developed a methodology based on Monte Carlo simulations to test tsunami detection algorithms. The algorithm performance is estimated by defining and evaluating statistical parameters, namely the detection probability and the detection delay, which are functions of the tsunami amplitude and wavelength, and the rate of false alarms. In this work we present the performance of the TDA applied to tide gauge data. We have adapted the new tsunami detection algorithm and the Monte Carlo test methodology to tide gauges. Sea level data acquired by coastal tide gauges in different locations and environmental conditions were used in order to consider realistic working scenarios in the test. We also present an application of the algorithm to the tsunami event generated by the Tohoku earthquake on 11 March 2011, using data recorded by several tide gauges scattered across the Pacific area.
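As a rough illustration of the filtering idea described above (not the published TDA implementation: the sampling interval, band edges, and threshold below are assumptions, and the zero-phase filter is an offline stand-in for the causal real-time filter):

```python
# A minimal sketch of band-pass-based tsunami detection on a seabed
# pressure series; all parameter values here are illustrative.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def detect_tsunami(pressure, fs=1/15.0, band=(1/7200.0, 1/120.0), thresh_pa=30.0):
    """Flag samples whose band-passed pressure exceeds a threshold (Pa)."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, pressure)   # tides and high-frequency noise removed
    return filtered, np.abs(filtered) > thresh_pa

# Example: a synthetic record with a small tsunami-like pulse on a tide.
t = np.arange(0, 12 * 3600, 15.0)
tide = 2000.0 * np.sin(2 * np.pi * t / (12.42 * 3600))   # semidiurnal tide, Pa
pulse = 50.0 * np.exp(-((t - 6 * 3600) / 600.0) ** 2)    # tsunami-like signal
filtered, alarm = detect_tsunami(tide + pulse)
print("first alarm at t =", t[alarm][0] if alarm.any() else None, "s")
```

The tide, whose period lies below the passband, is suppressed by the filter, so only the tsunami-band signal can trip the threshold.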
Ansari, A H; Cherian, P J; Dereymaeker, A; Matic, V; Jansen, K; De Wispelaere, L; Dielman, C; Vervisch, J; Swarte, R M; Govaert, P; Naulaers, G; De Vos, M; Van Huffel, S
2016-09-01
After identifying the most seizure-relevant characteristics with a previously developed heuristic classifier, a data-driven post-processor using a novel set of features is applied to improve performance. The main characteristics of the outputs of the heuristic algorithm are extracted by five sets of features, covering synchronization, evolution, retention, segment, and signal features. Then, a support vector machine and a decision-making layer remove the falsely detected segments. Four datasets comprising 71 neonates (1023 h, 3493 seizures), recorded in two different university hospitals, are used to train and test the algorithm without removing the dubious seizures. The heuristic method resulted in a false alarm rate of 3.81 per hour and a good detection rate of 88% on the entire test database. The post-processor effectively reduces the false alarm rate by 34%, while the good detection rate decreases by only 2%. This post-processing technique improves the performance of the heuristic algorithm. The structure of this post-processor is generic, improves our understanding of the core visually determined EEG features of neonatal seizures, and is applicable to other neonatal seizure detectors. The post-processor significantly decreases the false alarm rate at the expense of a small reduction in the good detection rate.
Robust crop and weed segmentation under uncontrolled outdoor illumination
USDA-ARS?s Scientific Manuscript database
A new machine-vision algorithm for weed detection was developed from RGB color-model images. Processing steps included in the algorithm were excessive-green conversion, threshold value computation by statistical analysis, adaptive image segmentation by adjusting the threshold value, median filtering, ...
Tian, Xiaochun; Chen, Jiabin; Han, Yongqiang; Shang, Jianyu; Li, Nan
2016-01-01
Zero velocity update (ZUPT) plays an important role in pedestrian navigation algorithms, with the premise that the zero velocity interval (ZVI) must be detected accurately and effectively. A novel adaptive ZVI detection algorithm based on a smoothed pseudo Wigner–Ville distribution to remove multiple frequencies intelligently (SPWVD-RMFI) is proposed in this paper. The algorithm adopts the SPWVD-RMFI method to extract the pedestrian gait frequency and to calculate the optimal ZVI detection threshold in real time by establishing functional relationships between the thresholds and the gait frequency; the adaptive adjustment of thresholds with gait frequency is thus realized, improving the ZVI detection precision. To put it into practice, a ZVI detection experiment was carried out; the results show that, compared with the traditional fixed-threshold ZVI detection method, the adaptive ZVI detection algorithm effectively reduces the false and missed detection rates of ZVI, indicating that the novel algorithm has high detection precision and good robustness. Furthermore, pedestrian trajectory positioning experiments at different walking speeds were carried out to evaluate the influence of the novel algorithm on positioning precision. The results show that using the ZVIs detected by the adaptive algorithm for pedestrian trajectory calculation achieves better performance. PMID:27669266
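A minimal sketch of the adaptive-threshold idea, with an FFT peak standing in for the SPWVD-RMFI frequency estimator; the window size and the linear threshold law are assumed placeholders, not the paper's fitted values:

```python
# Simplified gait-frequency-adaptive zero-velocity-interval detection.
import numpy as np

def gait_frequency(acc_mag, fs):
    """Dominant gait frequency (Hz) from the accelerometer magnitude."""
    spec = np.abs(np.fft.rfft(acc_mag - acc_mag.mean()))
    freqs = np.fft.rfftfreq(acc_mag.size, d=1.0 / fs)
    return freqs[np.argmax(spec[1:]) + 1]   # skip the DC bin

def detect_zvi(acc_mag, fs, win=20):
    f_gait = gait_frequency(acc_mag, fs)
    thresh = 0.05 + 0.02 * f_gait            # placeholder threshold-vs-frequency law
    var = np.array([acc_mag[i:i + win].var()
                    for i in range(acc_mag.size - win)])
    return var < thresh                       # True where the foot is assumed stationary
```

The point of the adaptation is that faster gaits produce larger residual motion during stance, so the stationarity threshold is raised with gait frequency instead of being fixed.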
BgCut: automatic ship detection from UAV images.
Xu, Chao; Zhang, Dongping; Zhang, Zhengning; Feng, Zhiyong
2014-01-01
Ship detection in static UAV aerial images is a fundamental challenge in sea target detection and precise positioning. In this paper, an improved universal background model based on the Grabcut algorithm is proposed to segment foreground objects from the sea automatically. First, a sea template library including images under different natural conditions is built to provide an initial template to the model. Then the background trimap is obtained by combining template matching with a region growing algorithm. The output trimap initializes the Grabcut background instead of manual intervention, and the segmentation proceeds without iteration. The effectiveness of our proposed model is demonstrated by extensive experiments on real UAV aerial images of a certain area captured by an airborne Canon 5D Mark. The proposed algorithm is not only adaptive but also achieves good segmentation. Furthermore, the model can be applied to the automated processing of industrial images in related research. PMID:24977182
The Simultaneous Additive and Relative SysRem Algorithm
NASA Astrophysics Data System (ADS)
Ofir, A.
2011-02-01
We present the SARS algorithm, a generalization of the popular SysRem detrending technique. This generalization allows the inclusion of multiple external parameters in a simultaneous solution with the unknown effects. Using SARS, we show that the magnitude-dependent systematic effect discovered by Mazeh et al. (2009) in the CoRoT data is probably caused by an additive, rather than relative, noise source. A post-processing scheme based on SARS performs well and indeed allows the detection of new transit-like signals that were not previously detected.
Sanchez-Morillo, Daniel; Fernandez-Granero, Miguel A; Leon-Jimenez, Antonio
2016-08-01
Major reported factors associated with the limited effectiveness of home telemonitoring interventions in chronic respiratory conditions include the lack of useful early predictors, poor patient compliance, and the poor performance of conventional algorithms for detecting deteriorations. This article provides a systematic review of existing algorithms and the factors associated with their performance in detecting exacerbations and supporting clinical decisions in patients with chronic obstructive pulmonary disease (COPD) or asthma. An electronic literature search in Medline, Scopus, Web of Science and the Cochrane library was conducted to identify relevant articles published between 2005 and July 2015. A total of 20 studies (16 COPD, 4 asthma) that included research on the use of algorithms in telemonitoring interventions in asthma and COPD were selected. Differences were found in the applied definition of exacerbation, telemonitoring duration, acquired physiological signals and symptoms, type of technology deployed, and algorithms used. Predictive models with good clinical reliability have yet to be defined, and they are an important goal for the future development of telehealth in chronic respiratory conditions. New predictive models incorporating both symptoms and physiological signals are being tested in telemonitoring interventions with positive outcomes. However, the underpinning algorithms behind these models need to be validated in larger samples of patients, over longer periods of time, and with well-established protocols. In addition, further research is needed to identify novel predictors that enable the early detection of deteriorations, especially in COPD. Only then will telemonitoring achieve the aim of preventing hospital admissions, contributing to the reduction of health resource utilization and improving the quality of life of patients.
Finding topological center of a geographic space via road network
NASA Astrophysics Data System (ADS)
Gao, Liang; Miao, Yanan; Qin, Yuhao; Zhao, Xiaomei; Gao, Zi-You
2015-02-01
Previous studies show that the center of a geographic space is of great importance in urban and regional studies, including studies of population distribution, urban growth modeling, and the scaling properties of urban systems. But how to properly define, and how to efficiently extract, the center of a geographic space remains largely unknown. Recently, Jiang et al. presented a definition of the topological center based on their block detection (BD) algorithm. Although they introduced the definition and discovered the 'true center' in human minds, their algorithm leaves several redundancies in its traversal process. Here, we propose an alternative road-cycle detection (RCD) algorithm to find the topological center, which extracts the outermost road cycle recursively. To foster the application of the topological center in related research fields, we first reproduce the BD algorithm in Python (pyBD), then implement the RCD algorithm in two ways: an ArcPy implementation (arcRCD) and a pure Python implementation (pyRCD). In experiments on twenty-four typical road networks, we find that the results of our RCD algorithm are consistent with those of Jiang's BD algorithm. We also find that the RCD algorithm is at least seven times more efficient than the BD algorithm on all ten typical road networks tested for efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elmagarmid, A.K.
The availability of distributed databases is directly affected by the timely detection and resolution of deadlocks. Consequently, mechanisms are needed to make deadlock detection algorithms resilient to failures. Presented first is a centralized algorithm that allows transactions to have multiple outstanding requests. Next, a new distributed deadlock detection algorithm (DDDA) is presented, using a global detector (GD) to detect global deadlocks and local detectors (LDs) to detect local deadlocks. This algorithm essentially identifies transaction-resource interactions that may cause global (multisite) deadlocks. Third, a deadlock detection algorithm utilizing a transaction-wait-for (TWF) graph is presented. It is a fully disjoint algorithm that allows multiple outstanding requests. The proposed algorithm can achieve improved overall performance by using multiple disjoint controllers coupled with the two-phase property, while maintaining the simplicity of centralized schemes. Fourth, an algorithm that combines deadlock detection and avoidance is given. This algorithm uses concurrent transaction controllers and resource coordinators to achieve maximum distribution. The language of CSP is used to describe this algorithm. Finally, two efficient deadlock resolution protocols are given, along with guidelines for choosing a transaction for abortion.
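The TWF-graph approach reduces deadlock detection to finding a cycle in a directed graph. A minimal single-site sketch of that core step, independent of the paper's distributed controllers:

```python
# Deadlock detection on a transaction-wait-for (TWF) graph:
# a deadlock exists iff the directed graph contains a cycle.
def find_cycle(twf):
    """twf maps a transaction to the set of transactions it waits for.
    Returns one deadlocked cycle as a list, or None."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in twf}
    stack = []

    def dfs(t):
        color[t] = GRAY
        stack.append(t)
        for u in twf.get(t, ()):
            if color.get(u, WHITE) == GRAY:            # back edge: cycle found
                return stack[stack.index(u):] + [u]
            if color.get(u, WHITE) == WHITE:
                cycle = dfs(u)
                if cycle:
                    return cycle
        stack.pop()
        color[t] = BLACK
        return None

    for t in list(twf):
        if color[t] == WHITE:
            cycle = dfs(t)
            if cycle:
                return cycle
    return None

# T1 waits for T2, T2 for T3, T3 for T1: a three-transaction deadlock.
print(find_cycle({"T1": {"T2"}, "T2": {"T3"}, "T3": {"T1"}}))
```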
Revisiting negative selection algorithms.
Ji, Zhou; Dasgupta, Dipankar
2007-01-01
This paper reviews the progress of negative selection algorithms, an anomaly/change detection approach in Artificial Immune Systems (AIS). Starting from the initial model, we try to identify the fundamental characteristics of this family of algorithms and summarize their diversity. Various elements of the method, including data representation, coverage estimation, affinity measures, and matching rules, are discussed for the different variations. The various negative selection algorithms are also categorized by different criteria. Relationships and possible combinations with other AIS methods or other machine learning methods are discussed. Prospective developments, the applicability of negative selection algorithms, and their influence on related areas are then considered on the basis of this discussion.
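To make the family concrete, here is a minimal sketch of a generation-based negative selection algorithm with an r-contiguous-bits matching rule; the binary representation, parameter values, and function names are illustrative assumptions, not any specific published variant:

```python
# Minimal negative selection: generate random detectors, censor any that
# match the "self" set, then flag samples matched by surviving detectors.
import random

def matches(detector, sample, r=4):
    """r-contiguous-bits rule: True if any r consecutive bits agree."""
    run = 0
    for d, s in zip(detector, sample):
        run = run + 1 if d == s else 0
        if run >= r:
            return True
    return False

def train(self_set, n_detectors=20, length=16, r=4, seed=0):
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = [rng.randint(0, 1) for _ in range(length)]
        if not any(matches(cand, s, r) for s in self_set):  # censor self-matching
            detectors.append(cand)
    return detectors

def is_anomalous(sample, detectors, r=4):
    return any(matches(d, sample, r) for d in detectors)

self_set = [[0] * 16, [1, 0] * 8]
detectors = train(self_set)
print(is_anomalous([1] * 16, detectors))   # likely True: far from the self set
```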
NASA Astrophysics Data System (ADS)
Levchuk, Georgiy; Shabarekh, Charlotte; Furjanic, Caitlin
2011-06-01
In this paper, we present results of adversarial activity recognition using data collected in the Empire Challenge (EC09) exercise. The EC09 experiment provided an opportunity to evaluate our probabilistic spatiotemporal mission recognition algorithms using data from live airborne and ground sensors. Using ambiguous and noisy data about the locations of entities and motion events on the ground, the algorithms inferred the types and locations of OPFOR activities, including reconnaissance, cache runs, IED emplacements, logistics, and planning meetings. We present a detailed summary of the validation study and the recognition accuracy results. Our algorithms were able to detect the locations and types of over 75% of hostile activities in EC09 while producing 25% false alarms.
Improved pulse laser ranging algorithm based on high speed sampling
NASA Astrophysics Data System (ADS)
Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang
2016-10-01
Narrow pulse laser ranging achieves long-range target detection using laser pulses with low-divergence beams. Pulse laser ranging is widely used in the military, industrial, civil, engineering, and transportation fields. In this paper, an improved narrow pulse laser ranging algorithm based on high speed sampling is studied. First, theoretical simulation models of the laser emission and the pulse laser ranging algorithm were built and analyzed, and an improved pulse ranging algorithm was developed that combines the matched filter algorithm and the constant fraction discrimination (CFD) algorithm. After the algorithm simulation, a laser ranging hardware system was set up to implement the improved algorithm. The hardware system includes a laser diode, a laser detector, and a high-sample-rate data logging circuit. Subsequently, using the Verilog HDL, the improved algorithm, a fusion of the matched filter and CFD algorithms, was implemented in an FPGA chip. Finally, a laser ranging experiment was carried out on the hardware system to compare the ranging performance of the improved algorithm with that of the matched filter algorithm and the CFD algorithm alone. The test results demonstrate that the hardware system achieves high-speed processing and high-speed sampled-data transmission, and that the improved algorithm achieves 0.3 m ranging precision, meeting the expected performance and consistent with the theoretical simulation.
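A sketch of how matched filtering and a digital constant-fraction threshold can be combined for echo timing; the fraction, pulse template, and sub-sample interpolation are assumptions rather than the paper's FPGA implementation:

```python
# Matched filter + constant-fraction timing for pulse laser ranging.
import numpy as np

C = 299792458.0  # speed of light, m/s

def range_estimate(echo, template, fs, fraction=0.5):
    mf = np.correlate(echo, template, mode="full")   # matched filtering
    mf = mf[len(template) - 1:]                      # align lag 0 to echo start
    peak = mf.argmax()
    level = fraction * mf[peak]                      # CFD-style threshold
    i = np.argmax(mf[:peak + 1] >= level)            # first crossing before the peak
    if i > 0 and mf[i] != mf[i - 1]:                 # sub-sample linear interpolation
        i = i - 1 + (level - mf[i - 1]) / (mf[i] - mf[i - 1])
    t = i / fs                                       # two-way time of flight
    return 0.5 * C * t
```

Timing on a fixed fraction of the peak, rather than on a fixed absolute level, reduces the range walk caused by varying echo amplitudes, which is the motivation for combining CFD with the matched filter.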
First trimester PAPP-A in the detection of non-Down syndrome aneuploidy.
Ochshorn, Y; Kupferminc, M J; Wolman, I; Orr-Urtreger, A; Jaffa, A J; Yaron, Y
2001-07-01
Combined first trimester screening using pregnancy associated plasma protein-A (PAPP-A), free beta-human chorionic gonadotrophin, and nuchal translucency (NT) is currently accepted as probably the best combination for the detection of Down syndrome (DS). Current first trimester algorithms provide computed risks only for DS. However, low PAPP-A is also associated with other chromosome anomalies, such as trisomies 13 and 18 and sex chromosome aneuploidy. Thus, using currently available algorithms, some chromosome anomalies may not be detected. The purpose of the present study was to establish a low-end cut-off value for PAPP-A that would increase the detection rate for non-DS chromosome anomalies. The study included 1408 patients who underwent combined first trimester screening. To determine a low-end cut-off value for PAPP-A, a Receiver Operating Characteristic (ROC) curve analysis was performed. In the entire study group there were 18 cases of chromosome anomalies (trisomies 21, 13, and 18, and sex chromosome anomalies), 14 of which were among screen-positive patients, a detection rate of 77.7% for all chromosome anomalies (95% CI: 55.7-99.7%). ROC curve analysis detected a statistically significant cut-off for PAPP-A at 0.25 MoM. If the definition of screen-positive were to also include patients with PAPP-A < 0.25 MoM, the detection rate would increase to 88.8% for all chromosome anomalies (95% CI: 71.6-106%). This low cut-off value may be used until specific algorithms are implemented for non-Down syndrome aneuploidy.
A generalised significance test for individual communities in networks.
Kojaku, Sadamori; Masuda, Naoki
2018-05-09
Many empirical networks have community structure, in which nodes are densely interconnected within each community (i.e., a group of nodes) and sparsely across different communities. Like other local and meso-scale structures of networks, communities are generally heterogeneous in various aspects, such as size, density of edges, connectivity to other communities, and significance. In the present study, we propose a method to statistically test the significance of individual communities in a given network. Compared to previous methods, the present algorithm is unique in that it accepts different community-detection algorithms and the corresponding quality function for single communities. The present method requires that the quality of each community can be quantified and that community detection is performed as optimisation of such a quality function summed over the communities. Various community detection algorithms, including modularity maximisation and graph partitioning, meet this criterion. Our method estimates a distribution of the quality function for randomised networks to calculate a likelihood for each community in the given network. We illustrate our algorithm on synthetic and empirical networks.
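A simplified variant of the randomisation idea, scoring a single community by internal edge density against degree-preserving rewirings; the paper's method supports arbitrary quality functions and differs in detail, and the counts here are illustrative:

```python
# Randomisation p-value for one community under degree-preserving rewiring.
import networkx as nx

def internal_density(G, nodes):
    sub = G.subgraph(nodes)
    n = len(nodes)
    return 2.0 * sub.number_of_edges() / (n * (n - 1)) if n > 1 else 0.0

def community_p_value(G, nodes, n_rand=200, seed=0):
    observed = internal_density(G, nodes)
    count = 0
    for i in range(n_rand):
        R = G.copy()
        nx.double_edge_swap(R, nswap=4 * R.number_of_edges(),
                            max_tries=100 * R.number_of_edges(), seed=seed + i)
        if internal_density(R, nodes) >= observed:   # as dense by chance?
            count += 1
    return (count + 1) / (n_rand + 1)                # randomisation p-value

G = nx.karate_club_graph()
print(community_p_value(G, [0, 1, 2, 3, 7, 13]))
```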
Approximate Computing Techniques for Iterative Graph Algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panyala, Ajay R.; Subasi, Omer; Halappanavar, Mahantesh
Approximate computing enables processing of large-scale graphs by trading off quality for performance. Approximate computing techniques have become critical not only due to the emergence of parallel architectures but also due to the availability of large-scale datasets enabling data-driven discovery. Using two prototypical graph algorithms, PageRank and community detection, we present several approximate computing heuristics to scale the performance with minimal loss of accuracy. We present several heuristics, including loop perforation, data caching, and incomplete graph coloring and synchronization, and evaluate their efficiency. We demonstrate performance improvements of up to 83% for PageRank and up to 450x for community detection, with low impact on accuracy for both algorithms. We expect the proposed approximate techniques to enable scalable graph analytics on data of importance to several applications in science, and their subsequent adoption to scale similar graph algorithms.
Comparison of public peak detection algorithms for MALDI mass spectrometry data analysis.
Yang, Chao; He, Zengyou; Yu, Weichuan
2009-01-06
In mass spectrometry (MS) based proteomic data analysis, peak detection is an essential step for subsequent analysis. Recently, there has been significant progress in the development of various peak detection algorithms. However, neither a comprehensive survey nor an experimental comparison of these algorithms is yet available. The main objective of this paper is to provide such a survey and to compare the performance of single-spectrum based peak detection methods. In general, a peak detection procedure can be decomposed into three consecutive stages: smoothing, baseline correction, and peak finding. We first categorize existing peak detection algorithms according to the techniques used in these phases. Such a categorization reveals the differences and similarities among existing peak detection algorithms. Then, we choose five typical peak detection algorithms to conduct a comprehensive experimental study using both simulated data and real MALDI MS data. The results of the comparison show that the continuous wavelet-based algorithm provides the best average performance.
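The three-stage decomposition can be sketched as follows; the Savitzky-Golay smoother, morphological baseline, and prominence threshold are illustrative choices standing in for the surveyed algorithms, and the window sizes are assumptions:

```python
# Smoothing -> baseline correction -> peak finding on a 1-D spectrum.
import numpy as np
from scipy.signal import savgol_filter, find_peaks
from scipy.ndimage import grey_opening

def detect_peaks(intensity, smooth_win=11, baseline_win=201, prominence=5.0):
    smoothed = savgol_filter(intensity, smooth_win, polyorder=3)  # smoothing
    baseline = grey_opening(smoothed, size=baseline_win)          # baseline estimate
    corrected = smoothed - baseline                               # baseline correction
    peaks, _ = find_peaks(corrected, prominence=prominence)      # peak finding
    return peaks, corrected
```

Separating the stages this way is what makes the survey's categorization possible: each published algorithm can be described by its choice at each of the three stages.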
Application of the Trend Filtering Algorithm for Photometric Time Series Data
NASA Astrophysics Data System (ADS)
Gopalan, Giri; Plavchan, Peter; van Eyken, Julian; Ciardi, David; von Braun, Kaspar; Kane, Stephen R.
2016-08-01
Detecting transient light curves (e.g., transiting planets) requires high-precision data, and thus it is important to effectively filter systematic trends affecting ground-based wide-field surveys. We apply an implementation of the Trend Filtering Algorithm (TFA) to the 2MASS calibration catalog and selected Palomar Transient Factory (PTF) photometric time series data. TFA is successful at reducing the overall dispersion of light curves; however, it may over-filter intrinsic variables and increase “instantaneous” dispersion when the template set is not judiciously chosen. In an attempt to rectify these issues, we modify the original TFA from the literature by including measurement uncertainties in its computation, including ancillary data correlated with noise, and algorithmically selecting the template set using clustering algorithms, as suggested by various authors. This approach may be particularly useful for appropriately accounting for surveys with variable photometric precision and/or combined data sets. In summary, our contributions are to provide a MATLAB software implementation of TFA and a number of modifications tested on synthetic and real data, to summarize the performance of TFA and the various modifications on real ground-based data sets (2MASS and PTF), and to assess the efficacy of TFA and its modifications using synthetic light curve tests consisting of transiting and sinusoidal variables. While the transiting-variables test indicates that these modifications confer no advantage to transit detection, the sinusoidal-variables test indicates potential improvements in detection accuracy.
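The core TFA step is a linear least-squares fit of template light curves to each target, which can be transcribed compactly; this Python sketch includes the uncertainty weighting described above, and the function name and arguments are assumptions rather than the authors' MATLAB interface:

```python
# TFA-style detrending: subtract the best-fit linear combination of
# template light curves (plus a constant offset) from the target curve.
import numpy as np

def tfa_detrend(target, templates, sigma=None):
    """target: (n,) magnitudes; templates: (m, n) template light curves;
    sigma: optional (n,) per-point uncertainties used as inverse weights."""
    A = np.vstack([templates, np.ones_like(target)]).T   # (n, m+1) design matrix
    w = 1.0 if sigma is None else 1.0 / np.asarray(sigma)
    coeffs, *_ = np.linalg.lstsq(A * np.atleast_2d(w).T, target * w, rcond=None)
    return target - A @ coeffs                           # filtered residual curve
```

Over-filtering of intrinsic variables arises exactly here: if a template shares the target's real variability, the fit absorbs part of the signal, which is why template selection matters.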
Automatic parameter selection for feature-based multi-sensor image registration
NASA Astrophysics Data System (ADS)
DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan
2006-05-01
Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.
Noël, Peter B; Engels, Stephan; Köhler, Thomas; Muenzel, Daniela; Franz, Daniela; Rasper, Michael; Rummeny, Ernst J; Dobritz, Martin; Fingerle, Alexander A
2018-01-01
Background: The explosive growth of computed tomography (CT) has led to a growing public health concern about patient and population radiation dose. A recently introduced technique for dose reduction, which can be combined with tube-current modulation, over-beam reduction, and organ-specific dose reduction, is iterative reconstruction (IR). Purpose: To evaluate the quality, at different radiation dose levels, of three reconstruction algorithms for the diagnosis of patients with proven liver metastases under tumor follow-up. Material and Methods: A total of 40 thorax-abdomen-pelvis CT examinations acquired from 20 patients in a tumor follow-up were included. All patients were imaged using the standard-dose and a specific low-dose CT protocol. Reconstructed slices were generated using three different reconstruction algorithms: a classical filtered back projection (FBP); a first-generation iterative noise-reduction algorithm (iDose4); and a next-generation model-based IR algorithm (IMR). Results: The overall detection of liver lesions tended to be higher with the IMR algorithm than with FBP or iDose4. The IMR dataset at standard dose yielded the highest overall detectability, while the low-dose FBP dataset showed the lowest detectability. For the low-dose protocols, significantly improved detectability of the liver lesions can be reported compared to FBP or iDose4 (P = 0.01). The radiation dose decreased by an approximate factor of 5 between the standard-dose and the low-dose protocol. Conclusion: The latest generation of IR algorithms significantly improved the diagnostic image quality and provided virtually noise-free images for ultra-low-dose CT imaging.
A Real-Time System for Lane Detection Based on FPGA and DSP
NASA Astrophysics Data System (ADS)
Xiao, Jing; Li, Shutao; Sun, Bin
2016-12-01
This paper presents a real-time lane detection system, comprising an edge detection and improved Hough Transform based lane detection algorithm together with its hardware implementation on a field programmable gate array (FPGA) and a digital signal processor (DSP). First, gradient amplitude and direction information are combined to extract lane edge information. Then, this information is used to determine the region of interest. Finally, the lanes are extracted using the improved Hough Transform. The image processing module of the system consists of the FPGA and the DSP. In particular, the algorithms implemented in the FPGA are pipelined and process data in parallel so that the system can run in real time, while the DSP realizes lane line extraction and display using the improved Hough Transform. The experimental results show that the proposed system detects lanes efficiently and effectively under different road conditions.
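A single-frame sketch of the edge-plus-Hough pipeline using OpenCV; the ROI polygon and Hough parameters are placeholders, not the paper's FPGA/DSP settings, and a software prototype like this is how such a pipeline is typically validated before hardware mapping:

```python
# Edge detection -> region of interest -> probabilistic Hough lane extraction.
import cv2
import numpy as np

def detect_lanes(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)                 # gradient-based edge map
    h, w = edges.shape
    roi = np.zeros_like(edges)                       # keep only the lower road region
    poly = np.array([(0, h), (w, h), (w // 2, h // 2)], dtype=np.int32)
    cv2.fillPoly(roi, [poly], 255)
    edges = cv2.bitwise_and(edges, roi)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else [l[0] for l in lines]   # (x1, y1, x2, y2)
```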
Applied Graph-Mining Algorithms to Study Biomolecular Interaction Networks
2014-01-01
Protein-protein interaction (PPI) networks carry vital information on the organization of molecular interactions in cellular systems. The identification of functionally relevant modules in PPI networks is one of the most important applications of biological network analysis. Computational analysis is becoming an indispensable tool to understand large-scale biomolecular interaction networks. Several types of computational methods have been developed and employed for the analysis of PPI networks. Of these computational methods, graph comparison and module detection are the two most commonly used strategies. This review summarizes current literature on graph kernel and graph alignment methods for graph comparison strategies, as well as module detection approaches including seed-and-extend, hierarchical clustering, optimization-based, probabilistic, and frequent subgraph methods. Herein, we provide a comprehensive review of the major algorithms employed under each theme, including our recently published frequent subgraph method, for detecting functional modules commonly shared across multiple cancer PPI networks. PMID:24800226
2012 HIV Diagnostics Conference: the molecular diagnostics perspective.
Branson, Bernard M; Pandori, Mark
2013-04-01
2012 HIV Diagnostic Conference Atlanta, GA, USA, 12-14 December 2012. This report highlights the presentations and discussions from the 2012 National HIV Diagnostic Conference held in Atlanta (GA, USA), on 12-14 December 2012. Reflecting changes in the evolving field of HIV diagnostics, the conference provided a forum for evaluating developments in molecular diagnostics and their role in HIV diagnosis. In 2010, the HIV Diagnostics Conference concluded with the proposal of a new diagnostic algorithm which included nucleic acid testing to resolve discordant screening and supplemental antibody test results. The 2012 meeting, picking up where the 2010 meeting left off, focused on scientific presentations that assessed this new algorithm and the role played by RNA testing and new developments in molecular diagnostics, including detection of total and integrated HIV-1 DNA, detection and quantification of HIV-2 RNA, and rapid formats for detection of HIV-1 RNA.
Automated Recognition of 3D Features in GPIR Images
NASA Technical Reports Server (NTRS)
Park, Han; Stough, Timothy; Fijany, Amir
2007-01-01
A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature- extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/ pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a directed-graph data structure. Relative to past approaches, this multiaxis approach offers the advantages of more reliable detections, better discrimination of objects, and provision of redundant information, which can be helpful in filling gaps in feature recognition by one of the component algorithms. The image-processing class also includes postprocessing algorithms that enhance identified features to prepare them for further scrutiny by human analysts (see figure). Enhancement of images as a postprocessing step is a significant departure from traditional practice, in which enhancement of images is a preprocessing step.
Qi, Xin; Xing, Fuyong; Foran, David J.; Yang, Lin
2013-01-01
Automated image analysis of histopathology specimens could potentially provide support for early detection and improved characterization of breast cancer. Automated segmentation of the cells comprising imaged tissue microarrays (TMA) is a prerequisite for any subsequent quantitative analysis. Unfortunately, crowding and overlapping of cells present significant challenges for most traditional segmentation algorithms. In this paper, we propose a novel algorithm which can reliably separate touching cells in hematoxylin stained breast TMA specimens which have been acquired using a standard RGB camera. The algorithm is composed of two steps. It begins with a fast, reliable object center localization approach which utilizes single-path voting followed by mean-shift clustering. Next, the contour of each cell is obtained using a level set algorithm based on an interactive model. We compared the experimental results with those reported in the most current literature. Finally, performance was evaluated by comparing the pixel-wise accuracy provided by human experts with that produced by the new automated segmentation algorithm. The method was systematically tested on 234 image patches exhibiting dense overlap and containing more than 2200 cells. It was also tested on whole slide images including blood smears and tissue microarrays containing thousands of cells. Since the voting step of the seed detection algorithm is well suited for parallelization, a parallel version of the algorithm was implemented using graphic processing units (GPU) which resulted in significant speed-up over the C/C++ implementation. PMID:22167559
NASA Astrophysics Data System (ADS)
Bal, A.; Alam, M. S.; Aslan, M. S.
2006-05-01
Often sensor ego-motion or fast target movement causes the target to temporarily leave the field of view, leading to the reappearing-target detection problem in target tracking applications. Since the target leaves the current frame and reenters in a later frame, the reentry location and the variations in rotation, scale, and other 3D orientations of the target are not known, which complicates detection. A detection algorithm has been developed using the Fukunaga-Koontz Transform (FKT) and a distance classifier correlation filter (DCCF). The detection algorithm uses target and background information, extracted from training samples, to detect possible candidate target images. The detected candidate target images are then passed to the second algorithm, the DCCF-based clutter rejection module; once the target coordinates are determined, the tracking algorithm is initiated. The performance of the proposed FKT-DCCF based target detection algorithm has been tested using real-world forward looking infrared (FLIR) video sequences.
Adaboost multi-view face detection based on YCgCr skin color model
NASA Astrophysics Data System (ADS)
Lan, Qi; Xu, Zhiyong
2016-09-01
The traditional Adaboost face detection algorithm trains face classifiers with Haar-like features, and its detection error rate is low in face regions. However, against complex backgrounds the classifiers easily misclassify background regions whose gray-level distribution resembles that of faces, so the error detection rate of the traditional Adaboost algorithm is high. As one of the most important features of a face, skin color clusters well in the YCgCr color space, and non-face areas can be excluded quickly through a skin color model. Therefore, combining the advantages of the Adaboost algorithm and skin color detection, this paper proposes an Adaboost multi-view face detection method based on a YCgCr skin color model. Experiments show that, compared with the traditional algorithm, the proposed method improves detection accuracy and reduces false detections significantly.
NASA Astrophysics Data System (ADS)
Merk, D.; Zinner, T.
2013-02-01
In this paper a new detection scheme for Convective Initiation (CI) under day and night conditions is presented. The new algorithm combines the strengths of two existing methods for detecting CI with geostationary satellite data and uses the channels of the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG). For the new algorithm, five infrared criteria from the Satellite Convection Analysis and Tracking algorithm (SATCAST) and one high-resolution visible channel (HRV) criterion from Cb-TRAM were adapted. This set of criteria aims to identify the typical development of quickly developing convective cells in an early stage. The different criteria include time trends of the 10.8 IR channel and IR channel differences, as well as their time trends. To provide the trend fields, an optical-flow-based method is used: the pyramidal matching algorithm, which is part of Cb-TRAM. The new detection scheme is implemented in Cb-TRAM and is verified for seven days which comprise different weather situations in Central Europe. Skill scores are provided in contrast with the original early-stage detection scheme of Cb-TRAM. From the comparison against detections of later thunderstorm stages, which are also provided by Cb-TRAM, a decrease in false prior warnings (false alarm ratio) from 91 to 81% is presented, an increase of the critical success index from 7.4 to 12.7%, and a decrease of the BIAS from 320 to 146% for normal scan mode. Similar trends are found for rapid scan mode. Most obvious is the decline of false alarms found for synoptic conditions with upper cold air masses triggering convection.
NASA Astrophysics Data System (ADS)
Merk, D.; Zinner, T.
2013-08-01
In this paper a new detection scheme for convective initiation (CI) under day and night conditions is presented. The new algorithm combines the strengths of two existing methods for detecting CI with geostationary satellite data. It uses the channels of the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG). For the new algorithm, five infrared (IR) criteria from the Satellite Convection Analysis and Tracking algorithm (SATCAST) and one high-resolution visible channel (HRV) criterion from Cb-TRAM were adapted. This set of criteria aims to identify the typical development of quickly developing convective cells in an early stage. The different criteria include time trends of the 10.8 IR channel, and IR channel differences, as well as their time trends. To provide the trend fields an optical-flow-based method is used: the pyramidal matching algorithm, which is part of Cb-TRAM. The new detection scheme is implemented in Cb-TRAM, and is verified for seven days which comprise different weather situations in central Europe. Contrasted with the original early-stage detection scheme of Cb-TRAM, skill scores are provided. From the comparison against detections of later thunderstorm stages, which are also provided by Cb-TRAM, a decrease in false prior warnings (false alarm ratio) from 91 to 81% is presented, an increase of the critical success index from 7.4 to 12.7%, and a decrease of the BIAS from 320 to 146% for normal scan mode. Similar trends are found for rapid scan mode. Most obvious is the decline of false alarms found for the synoptic class of "cold air" masses.
Continuous Change Detection and Classification (CCDC) of Land Cover Using All Available Landsat Data
NASA Astrophysics Data System (ADS)
Zhu, Z.; Woodcock, C. E.
2012-12-01
A new algorithm for Continuous Change Detection and Classification (CCDC) of land cover using all available Landsat data is developed. This new algorithm is capable of detecting many kinds of land cover change as new images are collected, and at the same time provides land cover maps for any given time. To better identify land cover change, a two-step cloud, cloud shadow, and snow masking algorithm is used to eliminate "noisy" observations. Next, a time series model with components of seasonality, trend, and breaks estimates the surface reflectance and temperature. The time series model is updated continuously with newly acquired observations. Due to the high variability in spectral response for different kinds of land cover change, the CCDC algorithm uses data-driven thresholds derived from all seven Landsat bands. When the difference between observed and predicted values exceeds the thresholds three consecutive times, a pixel is identified as land cover change. Land cover classification is done after change detection. Coefficients from the time series models and the Root Mean Square Error (RMSE) from model fitting are used as classification inputs for the Random Forest Classifier (RFC). We applied this new algorithm to one Landsat scene (Path 12 Row 31) that includes all of Rhode Island as well as much of eastern Massachusetts and parts of Connecticut. A total of 532 Landsat images acquired between 1982 and 2011 were processed. During this period, 619,924 pixels were detected as changing once (91% of all changed pixels) and 60,199 pixels as changing twice (8% of all changed pixels). The most frequent land cover change category is from mixed forest to low-density residential, which accounts for more than 8% of all land cover change pixels.
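A single-band, per-pixel sketch of the fit-and-flag logic (harmonic seasonality plus trend, change declared after three consecutive large residuals); the initial window size and the threshold scale are illustrative assumptions, and the real CCDC uses all seven bands with continuous model updating:

```python
# Harmonic time series model with a consecutive-anomaly change rule.
import numpy as np

def design(t):
    """Intercept + trend + annual harmonic; t in fractional years."""
    return np.column_stack([np.ones_like(t), t,
                            np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])

def detect_change(t, refl, k=3.0, consecutive=3):
    n0 = max(12, refl.size // 4)                 # initial stable fitting window
    X, y = design(t[:n0]), refl[:n0]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rmse = np.sqrt(np.mean((y - X @ beta) ** 2))
    run = 0
    for i in range(n0, refl.size):
        pred = (design(t[i:i + 1]) @ beta)[0]
        run = run + 1 if abs(refl[i] - pred) > k * rmse else 0
        if run >= consecutive:
            return t[i]                          # date of detected change
    return None
```

Requiring several consecutive exceedances, rather than one, is what keeps single cloudy or noisy observations from being flagged as land cover change.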
Caillet, P; Oberlin, P; Monnet, E; Guillon-Grammatico, L; Métral, P; Belhassen, M; Denier, P; Banaei-Bouchareb, L; Viprey, M; Biau, D; Schott, A-M
2017-10-01
Osteoporotic hip fractures (OHF) are associated with significant morbidity and mortality. The French medico-administrative database (SNIIRAM) offers an interesting opportunity to improve the management of OHF. However, the validity of studies conducted with this database relies heavily on the quality of the algorithm used to detect OHF. The aim of the REDSIAM network is to facilitate the use of the SNIIRAM database. The main objective of this study was to present and discuss several OHF-detection algorithms that could be used with this database. A non-systematic literature search was performed. The Medline database was explored for the period January 2005-August 2016. A snowball search was then carried out from the included articles, and field experts were contacted. The extraction was conducted using the chart developed by the REDSIAM network's "Methodology" task force. The ICD-10 codes used to detect OHF are mainly S72.0, S72.1, and S72.2. The performance of these algorithms is at best partially validated. The complementary use of medical and surgical procedure codes would affect their performance. Few studies described how they dealt with fractures of non-osteoporotic origin, re-hospitalizations, and potential contralateral fracture cases. Authors in the literature encourage the use of ICD-10 codes S72.0 to S72.2 to develop algorithms for OHF detection; these are the codes most frequently used for OHF in France. Depending on the study objectives, the inclusion of other ICD-10 codes and of medical and surgical procedures in the algorithm could usefully be discussed. The detection and management of duplicates and non-osteoporotic fractures should be considered in the process. Finally, when a study is based on such an algorithm, all these points should be precisely described in the publication.
NASA Astrophysics Data System (ADS)
Trepte, Qing; Minnis, Patrick; Sun-Mack, Sunny; Trepte, Charles
Clouds and aerosol play important roles in the global climate system. Accurately detecting their presence, altitude, and properties using satellite radiance measurements is a crucial first step in determining their influence on surface and top-of-atmosphere radiative fluxes. This paper presents a comparison analysis of a new version of the Clouds and Earth's Radiant Energy System (CERES) Edition 3 cloud detection algorithms using Aqua MODIS data with the recently released Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) Version 2 Vertical Feature Mask (VFM). Improvements in CERES Edition 3 cloud mask include dust detection, thin cirrus tests, enhanced low cloud detection at night, and a smoother transition from mid-latitude to polar regions. For the CALIPSO Version 2 data set, changes to the lidar calibration can result in significant improvements to its identification of optically thick aerosol layers. The Aqua and CALIPSO satellites, part of the A-train satellite constellation, provide a unique opportunity for validating passive sensor cloud and aerosol detection using an active sensor. In this paper, individual comparison cases will be discussed for different types of clouds and aerosols over various surfaces, for daytime and nighttime conditions, and for regions ranging from the tropics to the poles. Examples will include an assessment of the CERES detection algorithm for optically thin cirrus, marine stratus, and polar night clouds as well as its ability to characterize Saharan dust plumes off the African coast. With the CALIPSO lidar's unique ability to probe the vertical structure of clouds and aerosol layers, it provides an excellent validation data set for cloud detection algorithms, especially for polar nighttime clouds.
An effective and efficient compression algorithm for ECG signals with irregular periods.
Chou, Hsiao-Hsuan; Chen, Ying-Jui; Shiau, Yu-Chien; Kuo, Te-Son
2006-06-01
This paper presents an effective and efficient preprocessing algorithm for two-dimensional (2-D) electrocardiogram (ECG) compression to better compress irregular ECG signals by exploiting their inter- and intra-beat correlations. To better reveal the correlation structure, we first convert the ECG signal into a proper 2-D representation, or image. This involves a few steps, including QRS detection and alignment, period sorting, and length equalization. The resulting 2-D ECG representation is then ready to be compressed by an appropriate image compression algorithm. We choose the state-of-the-art JPEG2000 for its high efficiency and flexibility. In this way, the proposed algorithm is shown to outperform some existing methods in the literature by simultaneously achieving high compression ratio (CR), low percent root mean squared difference (PRD), low maximum error (MaxErr), and low standard deviation of errors (StdErr). In particular, because the proposed period sorting method rearranges the detected heartbeats into a smoother image that is easier to compress, the algorithm is insensitive to irregular ECG periods. Thus both irregular ECG signals and QRS false-detection cases can be better compressed. This is a significant improvement over existing 2-D ECG compression methods. Moreover, the algorithm is not tied exclusively to JPEG2000; it can also be combined with other 2-D preprocessing methods or appropriate codecs to enhance compression performance in irregular ECG cases.
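A sketch of the 1-D-to-2-D conversion stage (QRS detection, beat segmentation, length equalization, period sorting); the peak detector here is a crude illustrative stand-in, not the paper's QRS algorithm:

```python
# Convert a 1-D ECG into a 2-D "image" of aligned, period-sorted beats.
import numpy as np
from scipy.signal import find_peaks

def ecg_to_image(ecg, fs, width=256):
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs),
                          height=np.percentile(ecg, 95))   # crude QRS detection
    beats, periods = [], []
    for a, b in zip(peaks[:-1], peaks[1:]):
        seg = ecg[a:b]
        x_old = np.linspace(0.0, 1.0, seg.size)
        x_new = np.linspace(0.0, 1.0, width)
        beats.append(np.interp(x_new, x_old, seg))          # length equalization
        periods.append(b - a)
    order = np.argsort(periods)                             # period sorting
    return np.array(beats)[order]                           # rows = aligned beats
```

Sorting rows by beat period places similar beats next to each other, so the image varies smoothly in the vertical direction, which is precisely what a wavelet codec such as JPEG2000 compresses well.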
Two novel motion-based algorithms for surveillance video analysis on embedded platforms
NASA Astrophysics Data System (ADS)
Vijverberg, Julien A.; Loomans, Marijn J. H.; Koeleman, Cornelis J.; de With, Peter H. N.
2010-05-01
This paper proposes two novel motion-vector based techniques for target detection and target tracking in surveillance videos. The algorithms are designed to operate on a resource-constrained device, such as a surveillance camera, and to reuse the motion vectors generated by the video encoder. The first novel algorithm for target detection uses motion vectors to construct a consistent motion mask, which is combined with a simple background segmentation technique to obtain a segmentation mask. The second proposed algorithm aims at multi-target tracking and uses motion vectors to assign blocks to targets employing five features. The weights of these features are adapted based on the interaction between targets. These algorithms are combined in one complete analysis application. The performance of this application for target detection has been evaluated for the i-LIDS sterile zone dataset and achieves an F1-score of 0.40-0.69. The performance of the analysis algorithm for multi-target tracking has been evaluated using the CAVIAR dataset and achieves an MOTP of around 9.7 and MOTA of 0.17-0.25. On a selection of targets in videos from other datasets, the achieved MOTP and MOTA are 8.8-10.5 and 0.32-0.49 respectively. The execution time on a PC-based platform is 36 ms. This includes the 20 ms for generating motion vectors, which are also required by the video encoder.
NASA Astrophysics Data System (ADS)
Schmidl, Marius
2017-04-01
We present a comprehensive training data set covering a large range of atmospheric conditions, including disperse volcanic ash and desert dust layers. These data sets contain all the information required for the development of volcanic ash detection algorithms based on artificial neural networks, which are urgently needed since volcanic ash in the airspace is a major concern of aviation safety authorities. Selected parts of the data are used to train the volcanic ash detection algorithm VADUGS. They contain atmospheric and surface-related quantities as well as the corresponding simulated satellite data for the channels in the infrared spectral range of the SEVIRI instrument on board MSG-2. To obtain realistic results, ECMWF, IASI-based, and GEOS-Chem data are used to calculate all parameters describing the environment, whereas the software package libRadtran is used to perform radiative transfer simulations returning the brightness temperatures for each atmospheric state. As optical properties are a prerequisite for radiative simulations accounting for aerosol layers, the development also included the computation of optical properties for a set of different aerosol types from different sources. A description of the developed software and the methods used is given, together with an overview of the resulting data sets.
An Efficient Reachability Analysis Algorithm
NASA Technical Reports Server (NTRS)
Vatan, Farrokh; Fijany, Amir
2008-01-01
A document discusses a new algorithm for generating higher-order dependencies for diagnostic and sensor placement analysis when a system is described with a causal modeling framework. This innovation will be used in diagnostic and sensor optimization and analysis tools. Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in-situ platforms. This algorithm will serve as a powerful tool for technologies that satisfy a key requirement of autonomous spacecraft, including science instruments and in-situ missions.
NASA Astrophysics Data System (ADS)
Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan
2018-03-01
False alarm rate and detection rate are still two contradictory metrics for infrared small target detection in an infrared search and track (IRST) system, despite the development of new detection algorithms. In certain circumstances, not detecting true targets is more tolerable than detecting false items as true targets. Hence, considering background clutter and detector noise as the sources of false alarms in an IRST system, this paper presents a false-alarm-aware methodology that reduces the false alarm rate while leaving the detection rate undegraded. To this end, the advantages and disadvantages of each detection algorithm are investigated and the sources of its false alarms are determined. Two target detection algorithms with independent false alarm sources are chosen, such that the disadvantages of one algorithm can be compensated by the advantages of the other. In this work, the multi-scale average absolute gray difference (AAGD) and the Laplacian of point spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed through just four different scales, the desired algorithm is well suited to real-time implementation. Simulation results in terms of signal-to-clutter ratio and background suppression factor on real and simulated images prove the effectiveness and performance of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, the proposed methodology is extensible to any pair of detection algorithms that have different false alarm sources.
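The conceptual model reduces to an AND-style fusion of two detectors whose false alarms arise from independent sources. A minimal sketch, where detect_aagd and detect_lopsf are hypothetical stand-ins for the two component algorithms:

```python
# False-alarm-aware fusion: keep only detections on which both
# algorithms agree; their false alarms come from independent sources
# (clutter vs. detector noise), so spurious detections rarely co-occur.
import numpy as np

def fuse(image, detect_aagd, detect_lopsf):
    """Each detector returns a boolean mask of candidate target pixels."""
    mask_clutter_robust = detect_aagd(image)    # false alarms mainly from clutter
    mask_noise_robust = detect_lopsf(image)     # false alarms mainly from noise
    return np.logical_and(mask_clutter_robust, mask_noise_robust)
```

True targets appear in both masks, so the intersection lowers the false alarm rate without sacrificing detections, which is the stated design goal.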
NASA Astrophysics Data System (ADS)
Martinez-Gutierrez, Genaro
Baja California Sur (Mexico), like mainland Mexico, is affected by tropical cyclones that originate in the eastern North Pacific. Historical records show that Baja has been damaged by intense summer storms. An arid to semiarid climate characterizes the study area, where precipitation occurs mainly during the summer and winter seasons. Natural and anthropogenic changes have impacted the landscape of southern Baja. The present research documents the effects of tropical storms on the southern region of Baja California over a period of approximately twenty-six years. The goal of the research is to demonstrate how remote sensing can be used to detect the important effects of tropical storms, including: (a) evaluation of change detection algorithms, and (b) delineation of changes to the landscape, including coastal modification, fluvial erosion and deposition, vegetation change, and river avulsion. Digital image processing methods with temporal Landsat satellite remotely sensed data from the North America Landscape Characterization archive (NALC), Thematic Mapper (TM), and Enhanced Thematic Mapper (ETM) images were used to document the landscape change. Two image processing methods were tested: image differencing (ID) and principal component analysis (PCA). Landscape changes identified with the NALC archive and TM images showed that the major changes included a rapid change of land use in the towns of San Jose del Cabo and Cabo San Lucas between 1973 and 1986. The features detected using the algorithms included flood deposits within the channels of active streams, erosion banks, and new channels caused by channel avulsion. Despite the 19-year period covered by the NALC data and the approximately 10-year intervals between acquisition dates, changed features could still be identified in the images. The TM images showed that flooding from Hurricane Isis (1998) produced large new deposits within the stream channels. This research has shown that remote sensing based change detection can delineate the effects of flooding on the landscape at scales down to the nominal resolution of the sensor. These findings indicate that many other applications for change detection are both viable and important, including disaster response, flood hazard planning, geomorphic studies, and water supply management in deserts.
Automatic Earthquake Detection by Active Learning
NASA Astrophysics Data System (ADS)
Bergen, K.; Beroza, G. C.
2017-12-01
In recent years, advances in machine learning have transformed fields such as image recognition, natural language processing and recommender systems. Many of these performance gains have relied on the availability of large, labeled data sets to train high-accuracy models; labeled data sets are those for which each sample includes a target class label, such as waveforms tagged as either earthquakes or noise. Earthquake seismologists are increasingly leveraging machine learning and data mining techniques to detect and analyze weak earthquake signals in large seismic data sets. One of the challenges in applying machine learning to seismic data sets is the limited labeled data problem; learning algorithms need to be given examples of earthquake waveforms, but the number of known events, taken from earthquake catalogs, may be insufficient to build an accurate detector. Furthermore, earthquake catalogs are known to be incomplete, resulting in training data that may be biased towards larger events and contain inaccurate labels. This challenge is compounded by the class imbalance problem; the events of interest, earthquakes, are infrequent relative to noise in continuous data sets, and many learning algorithms perform poorly on rare classes. In this work, we investigate the use of active learning for automatic earthquake detection. Active learning is a type of semi-supervised machine learning that uses a human-in-the-loop approach to strategically supplement a small initial training set. The learning algorithm incorporates domain expertise through interaction between a human expert and the algorithm, with the algorithm actively posing queries to the user to improve detection performance. We demonstrate the potential of active machine learning to improve earthquake detection performance with limited available training data.
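A minimal sketch of pool-based uncertainty sampling, the simplest form of the active learning loop described above (the classifier choice and the `oracle` callable standing in for the human expert are assumptions; `X_pool` is an array of waveform feature vectors):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(X_pool, oracle, X_init, y_init, n_queries=50):
    """Retrain, query the unlabeled sample the model is least certain
    about, let the expert label it, and add it to the training set."""
    X_train, y_train = list(X_init), list(y_init)
    unlabeled = list(range(len(X_pool)))
    model = LogisticRegression(max_iter=1000)
    for _ in range(n_queries):
        model.fit(np.asarray(X_train), np.asarray(y_train))
        proba = model.predict_proba(X_pool[unlabeled])
        pick = int(np.argmin(proba.max(axis=1)))   # least confident
        i = unlabeled.pop(pick)
        X_train.append(X_pool[i])
        y_train.append(oracle(i))                  # expert labels it
    return model
```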
DOE Office of Scientific and Technical Information (OSTI.GOV)
2013-03-07
LC-IMS-MS Feature Finder is a command-line software application which searches for possible molecular ion signatures in multidimensional liquid chromatography, ion mobility spectrometry, and mass spectrometry data by clustering deisotoped peaks with similar monoisotopic mass values, charge states, elution times, and drift times. The software application includes an algorithm for detecting multiple conformations and co-eluting species in the ion mobility dimension. LC-IMS-MS Feature Finder writes the detected features to an output file together with their associated information.
Jeppesen, J; Beniczky, S; Fuglsang Frederiksen, A; Sidenius, P; Johansen, P
2017-07-01
Earlier studies have shown that short-term heart rate variability (HRV) analysis of the ECG seems promising for detection of epileptic seizures. A precise and accurate automatic R-peak detection algorithm is a necessity for real-time, continuous measurement of HRV in a portable ECG device. We used the portable CE-marked ePatch® heart monitor to record the ECG of 14 patients, who were enrolled in the video-EEG long-term monitoring unit for clinical workup of epilepsy. Recordings of the first 7 patients were used as the training set for the R-peak detection algorithm, and the recordings of the last 7 patients (467.6 recording hours) were used to test the performance of the algorithm. We aimed to modify an existing QRS-detection algorithm into a more precise R-peak detection algorithm to avoid the jitter that Q- and S-peaks can create in the tachogram, which causes errors in short-term HRV analysis. The proposed R-peak detection algorithm showed a high sensitivity (Se = 99.979%) and positive predictive value (P+ = 99.976%), which was comparable with a previously published QRS-detection algorithm for the ePatch® ECG device when tested on the same dataset. The novel R-peak detection algorithm, designed to avoid jitter, has very high sensitivity and specificity and thus is a suitable tool for robust, fast, real-time HRV analysis in patients with epilepsy, creating the possibility of real-time seizure detection for these patients.
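A minimal sketch of jitter-free R-peak detection in the spirit described above (filter band, window width and thresholds are illustrative assumptions, not the published algorithm): candidates are found on a band-passed copy of the signal and then snapped to the local maximum of the raw ECG, so Q- and S-peaks cannot perturb the tachogram.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_r_peaks(ecg: np.ndarray, fs: float) -> np.ndarray:
    """Find QRS candidates on a band-passed ECG, then refine each
    candidate to the maximum of the raw signal in a small window."""
    b, a = butter(2, [5 / (fs / 2), 25 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    candidates, _ = find_peaks(filtered, distance=int(0.3 * fs),
                               height=np.percentile(filtered, 95))
    w = int(0.05 * fs)                  # +/- 50 ms refinement window
    r_peaks = []
    for c in candidates:
        lo, hi = max(c - w, 0), min(c + w, len(ecg))
        r_peaks.append(lo + int(np.argmax(ecg[lo:hi])))
    return np.asarray(r_peaks)
```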
Pattern recognition for passive polarimetric data using nonparametric classifiers
NASA Astrophysics Data System (ADS)
Thilak, Vimal; Saini, Jatinder; Voelz, David G.; Creusere, Charles D.
2005-08-01
Passive polarization-based imaging is a useful tool in computer vision and pattern recognition. A passive polarization imaging system forms a polarimetric image from the reflection of ambient light that contains useful information for computer vision tasks such as object detection (classification) and recognition. Applications of polarization-based pattern recognition include material classification and automatic shape recognition. In this paper, we present two target detection algorithms for images captured by a passive polarimetric imaging system. The proposed detection algorithms are based on Bayesian decision theory. In these approaches, an object can belong to one of a given number of classes, and classification involves making decisions that minimize the average probability of making incorrect decisions. This minimum is achieved by assigning an object to the class that maximizes the a posteriori probability. Computing a posteriori probabilities requires estimates of class-conditional probability density functions (likelihoods) and prior probabilities. A probabilistic neural network (PNN), a nonparametric method that can compute Bayes-optimal boundaries, and a K-nearest neighbor (KNN) classifier are used for density estimation and classification. The proposed algorithms are applied to polarimetric image data gathered in the laboratory with a liquid crystal-based system. The experimental results validate the effectiveness of the above algorithms for target detection from polarimetric data.
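A PNN can be sketched compactly as a Parzen-window estimate of the class-conditional likelihoods combined with the class priors (the kernel width `sigma` is an assumed smoothing parameter; `train_X` and `train_y` are NumPy arrays):

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.1):
    """Probabilistic neural network: Gaussian Parzen-window estimates
    of each class-conditional density, weighted by the class priors,
    give the a posteriori probabilities; the class with the maximum
    a posteriori probability is returned."""
    posteriors = {}
    for c in np.unique(train_y):
        Xc = train_X[train_y == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)
        likelihood = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
        prior = np.mean(train_y == c)
        posteriors[c] = prior * likelihood
    return max(posteriors, key=posteriors.get)
```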
Elbert, Yevgeniy; Burkom, Howard S
2009-11-20
This paper discusses further advances in making robust predictions with the Holt-Winters forecasts for a variety of syndromic time series behaviors and introduces a control-chart detection approach based on these forecasts. Using three collections of time series data, we compare biosurveillance alerting methods with quantified measures of forecast agreement, signal sensitivity, and time-to-detect. The study presents practical rules for initialization and parameterization of biosurveillance time series. Several outbreak scenarios are used for detection comparison. We derive an alerting algorithm from forecasts using Holt-Winters-generalized smoothing for prospective application to daily syndromic time series. The derived algorithm is compared with simple control-chart adaptations and to more computationally intensive regression modeling methods. The comparisons are conducted on background data from both authentic and simulated data streams. Both types of background data include time series that vary widely by both mean value and cyclic or seasonal behavior. Plausible, simulated signals are added to the background data for detection performance testing at signal strengths calculated to be neither too easy nor too hard to separate the compared methods. Results show that both the sensitivity and the timeliness of the Holt-Winters-based algorithm proved to be comparable or superior to that of the more traditional prediction methods used for syndromic surveillance.
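A minimal sketch of a Holt-Winters-based control chart for daily counts (the library choice, warm-up length and k-sigma limit are assumptions; the published initialization and parameterization rules are not reproduced):

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def hw_control_chart(counts, season=7, k=3.0, warmup=56):
    """One-step-ahead Holt-Winters forecasts of a daily syndromic
    series; an alert fires when the observation exceeds the forecast
    by more than k standard deviations of recent forecast errors."""
    alerts, errors = [], []
    for t in range(warmup, len(counts)):
        fit = ExponentialSmoothing(counts[:t], trend="add",
                                   seasonal="add",
                                   seasonal_periods=season).fit()
        err = counts[t] - fit.forecast(1)[0]
        if len(errors) >= season and err > k * np.std(errors):
            alerts.append(t)            # day t triggers an alert
        errors.append(err)
    return alerts
```

Refitting the model at every time step, as here, is computationally wasteful; an operational implementation would update the smoothing states incrementally.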
[Algorithm of toxigenic genetically altered Vibrio cholerae El Tor biovar strain identification].
Smirnova, N I; Agafonov, D A; Zadnova, S P; Cherkasov, A V; Kutyrev, V V
2014-01-01
The aim of this work was to develop an algorithm for identification of genetically altered Vibrio cholerae El Tor biovar strains that ensures determination of the serogroup, serovar and biovar of the studied isolate based on pheno- and genotypic properties, detection of genetically altered cholera El Tor causative agents, their differentiation by epidemic potential, and evaluation of the variability of key pathogenicity genes. Complex analysis of 28 natural V. cholerae strains was carried out using traditional microbiological methods, PCR and fragmentary sequencing. An algorithm for identification of toxigenic genetically altered V. cholerae El Tor biovar strains was developed that includes 4 stages: determination of serogroup, serovar and biovar based on phenotypic properties; confirmation of serogroup and biovar based on molecular-genetic properties; determination of strains as genetically altered; and differentiation of genetically altered strains by their epidemic potential together with detection of polymorphisms in the ctxB and tcpA key pathogenicity genes. The algorithm is based on the use of traditional microbiological methods, PCR and sequencing of gene fragments. The use of the developed algorithm will increase the effectiveness of detection of genetically altered variants of the cholera El Tor causative agent and their differentiation by epidemic potential, and will ensure establishment of polymorphisms in genes that code for key pathogenicity factors, for determination of the origins of strains and possible routes of introduction of the infection.
NASA Astrophysics Data System (ADS)
Tartakovsky, A.; Brown, A.; Brown, J.
The paper describes the development and evaluation of a suite of advanced algorithms which provide significantly-improved capabilities for finding, fixing, and tracking multiple ballistic and flying low observable objects in highly stressing cluttered environments. The algorithms have been developed for use in satellite-based staring and scanning optical surveillance suites for applications including theatre and intercontinental ballistic missile early warning, trajectory prediction, and multi-sensor track handoff for midcourse discrimination and intercept. The functions performed by the algorithms include electronic sensor motion compensation providing sub-pixel stabilization (to 1/100 of a pixel), as well as advanced temporal-spatial clutter estimation and suppression to below sensor noise levels, followed by statistical background modeling and Bayesian multiple-target track-before-detect filtering. The multiple-target tracking is performed in physical world coordinates to allow for multi-sensor fusion, trajectory prediction, and intercept. Output of detected object cues and data visualization are also provided. The algorithms are designed to handle a wide variety of real-world challenges. Imaged scenes may be highly complex and infinitely varied -- the scene background may contain significant celestial, earth limb, or terrestrial clutter. For example, when viewing combined earth limb and terrestrial scenes, a combination of stationary and non-stationary clutter may be present, including cloud formations, varying atmospheric transmittance and reflectance of sunlight and other celestial light sources, aurora, glint off sea surfaces, and varied natural and man-made terrain features. The targets of interest may also appear to be dim, relative to the scene background, rendering much of the existing deployed software useless for optical target detection and tracking. Additionally, it may be necessary to detect and track a large number of objects in the threat cloud, and these objects may not always be resolvable in individual data frames. In the present paper, the performance of the developed algorithms is demonstrated using real-world data containing resident space objects observed from the MSX platform, with backgrounds varying from celestial to combined celestial and earth limb, with instances of extremely bright aurora clutter. Simulation results are also presented for parameterized variations in signal-to-clutter levels (down to 1/1000) and signal-to-noise levels (down to 1/6) for simulated targets against real-world terrestrial clutter backgrounds. We also discuss algorithm processing requirements and C++ software processing capabilities from our on-going MDA- and AFRL-sponsored development of an image processing toolkit (iPTK). In the current effort, the iPTK is being developed to a Technology Readiness Level (TRL) of 6 by mid-2010, in preparation for possible integration with STSS-like, SBIRS high-like and SBSS-like surveillance suites.
Rooijakkers, Michiel; Rabotti, Chiara; Bennebroek, Martijn; van Meerbergen, Jef; Mischi, Massimo
2011-01-01
Non-invasive fetal health monitoring during pregnancy has become increasingly important. Recent advances in signal processing technology have enabled fetal monitoring during pregnancy using abdominal ECG recordings. Ubiquitous ambulatory monitoring for continuous fetal health measurement is, however, still unfeasible due to the computational complexity of noise-robust solutions. In this paper, an ECG R-peak detection algorithm for ambulatory R-peak detection is proposed as part of a fetal ECG detection algorithm. The proposed algorithm is optimized to reduce computational complexity while increasing the R-peak detection quality compared to existing R-peak detection schemes. Validation of the algorithm is performed on two manually annotated datasets, the MIT/BIH Arrhythmia database and an in-house abdominal database. Both R-peak detection quality and computational complexity are compared to state-of-the-art algorithms described in the literature. With detection error rates of 0.22% and 0.12% on the MIT/BIH Arrhythmia and in-house databases, respectively, the quality of the proposed algorithm is comparable to the best state-of-the-art algorithms, at a reduced computational complexity.
Huang, Tao; Li, Xiao-yu; Xu, Meng-ling; Jin, Rui; Ku, Jing; Xu, Sen-miao; Wu, Zhen-zhong
2015-01-01
The quality of potatoes is directly related to their edible value and industrial value. Hollow heart of potato, a physiological disease occurring inside the tuber, is difficult to detect. This paper puts forward a non-destructive detection method using semi-transmission hyperspectral imaging with a support vector machine (SVM) to detect hollow heart of potato. Compared to reflection and transmission hyperspectral imaging, semi-transmission hyperspectral imaging can produce a clearer image which contains the internal quality information of agricultural products. In this study, 224 potato samples (149 normal and 75 hollow) were selected as the research object, and a semi-transmission hyperspectral image acquisition system was constructed to acquire the hyperspectral images (390-1040 nm) of the potato samples; the average spectrum of the region of interest was then extracted for spectral characteristics analysis. Normalization was used to preprocess the original spectra, and a prediction model was developed based on SVM using all wave bands; the recognition rate on the test set was only 87.5%. In order to simplify the model, the competitive adaptive reweighted sampling algorithm (CARS) and the successive projection algorithm (SPA) were utilized to select important variables from all 520 spectral variables, and 8 variables were selected (454, 601, 639, 664, 748, 827, 874 and 936 nm). A recognition rate of 94.64% on the test set was obtained by using the 8 variables to develop the SVM model. Parameter optimization algorithms, including the artificial fish swarm algorithm (AFSA), the genetic algorithm (GA) and the grid search algorithm, were used to optimize the SVM model parameters: penalty parameter c and kernel parameter g. After comparative analysis, AFSA, a new bionic optimization algorithm based on the foraging behavior of fish swarms, was shown to yield the optimal model parameters (c=10.6591, g=0.3497), and a recognition accuracy of 100% was obtained for the AFSA-SVM model. The results indicate that combining semi-transmission hyperspectral imaging technology with CARS-SPA and AFSA-SVM can accurately detect hollow heart of potato, and also provide technical support for rapid non-destructive detection of hollow heart of potato.
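A minimal sketch of the SVM modeling step on the selected bands, using grid search for the penalty and kernel parameters (the grids and cross-validation settings are assumptions; the AFSA optimizer itself is not reproduced):

```python
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_hollow_heart_svm(X_bands, y):
    """Fit an RBF-kernel SVM on the selected band intensities, tuning
    the penalty parameter C and kernel parameter gamma by grid search
    with 5-fold cross-validation."""
    pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    grid = {"svc__C": [0.1, 1, 10, 100],
            "svc__gamma": [0.01, 0.1, 1.0]}
    search = GridSearchCV(pipe, grid, cv=5)
    search.fit(X_bands, y)     # X_bands: (n_samples, 8 selected bands)
    return search.best_estimator_
```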
NASA Technical Reports Server (NTRS)
Britt, Charles L.; Bracalente, Emedio M.
1992-01-01
The algorithms used in the NASA experimental wind shear radar system for detection, characterization, and determination of wind shear hazard are discussed. The performance of the algorithms in the detection of wet microbursts near Orlando is presented. Various suggested algorithms that are currently being evaluated using the flight test results from Denver and Orlando are reviewed.
Detection and clustering of features in aerial images by neuron network-based algorithm
NASA Astrophysics Data System (ADS)
Vozenilek, Vit
2015-12-01
The paper presents an algorithm for detection and clustering of features in aerial photographs based on artificial neural networks. The presented approach is not focused on the detection of specific topographic features, but on the combination of general feature analysis and its use for clustering and backward projection of clusters to the aerial image. The basis of the algorithm is the calculation of the total error of the network and the adjustment of the network weights to minimize this error. A classic bipolar sigmoid was used as the activation function of the neurons, and the basic method of backpropagation was used for learning. To verify that a set of features is able to represent the image content from the user's perspective, a web application was built (ASP.NET on the Microsoft .NET platform). The main achievements include the finding that man-made objects in aerial images can be successfully identified by detection of shapes and anomalies. It was also found that an appropriate combination of comprehensive features describing the colors and selected shapes of individual areas can be useful for image analysis.
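A minimal sketch of one such training step, assuming a single hidden layer and writing the bipolar sigmoid explicitly (layer sizes and learning rate are illustrative):

```python
import numpy as np

def bipolar_sigmoid(x):
    return 2.0 / (1.0 + np.exp(-x)) - 1.0        # output in (-1, 1)

def backprop_step(X, target, W1, W2, lr=0.01):
    """One backpropagation step for a single-hidden-layer network with
    bipolar sigmoid activations: compute the total error and shift the
    weights along the negative gradient to reduce it."""
    h = bipolar_sigmoid(X @ W1)                  # hidden activations
    y = bipolar_sigmoid(h @ W2)                  # network output
    err = y - target
    total_error = 0.5 * np.sum(err ** 2)
    # derivative of the bipolar sigmoid f is 0.5 * (1 + f) * (1 - f)
    dy = err * 0.5 * (1 + y) * (1 - y)
    dh = (dy @ W2.T) * 0.5 * (1 + h) * (1 - h)
    W2 -= lr * h.T @ dy
    W1 -= lr * X.T @ dh
    return total_error
```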
NASA Technical Reports Server (NTRS)
Platnick, Steven; King, Michael D.; Wind, Gala; Holz, Robert E.; Ackerman, Steven A.; Nagle, Fred W.
2008-01-01
CALIPSO and CloudSat, launched in June 2006, provide global active remote sensing measurements of clouds and aerosols that can be used for validation of a variety of passive imager retrievals derived from instruments flying on the Aqua spacecraft and other A-Train platforms. The most recent processing effort for the MODIS Atmosphere Team, referred to as the "Collection 5" stream, includes a research-level multilayer cloud detection algorithm that uses thermodynamic phase information derived from a combination of solar and thermal emission bands to discriminate layers of different phases, as well as true layer separation discrimination using a moderately absorbing water vapor band. The multilayer detection algorithm is designed to provide a means of assessing the applicability of the 1D cloud models used in the MODIS cloud optical and microphysical product retrievals, which are generated at a 1 km resolution. Using pixel-level collocations of MODIS Aqua, CALIOP, and CloudSat radar measurements, we investigate the global performance of the thermodynamic phase and multilayer cloud detection algorithms.
NASA Astrophysics Data System (ADS)
Poluyan, A. Y.; Fugarov, D. D.; Purchina, O. A.; Nesterchuk, V. V.; Smirnova, O. V.; Petrenkova, S. B.
2018-05-01
To date, the problems associated with the detection of errors in digital equipment (DE) systems for the automation of explosive objects of the oil and gas complex are extremely pressing. This problem is especially acute for facilities where a violation of the accuracy of the DE will inevitably lead to man-made disasters and substantial material damage; at such facilities, diagnostics of the accuracy of DE operation is one of the main elements of the industrial safety management system. In this work, the problem of selecting the optimal variant of the error detection system according to a validation criterion is solved. Known methods for solving such problems have exponential estimates of labor intensity. Thus, to reduce the time needed to solve the problem, the validation criterion is realized as an adaptive bionic algorithm. Bionic algorithms (BA) have proven effective in solving optimization problems. The advantages of bionic search include adaptability, learning ability, parallelism, and the ability to build hybrid systems based on their combination [1].
Statistical analysis and machine learning algorithms for optical biopsy
NASA Astrophysics Data System (ADS)
Wu, Binlin; Liu, Cheng-hui; Boydston-White, Susie; Beckman, Hugh; Sriramoju, Vidyasagar; Sordillo, Laura; Zhang, Chunyuan; Zhang, Lin; Shi, Lingyan; Smith, Jason; Bailin, Jacob; Alfano, Robert R.
2018-02-01
Analyzing spectral or imaging data collected with various optical biopsy methods is often difficult due to the complexity of the underlying biology. Robust methods that can utilize the spectral or imaging data and detect the characteristic spectral or spatial signatures of different types of tissue are challenging to develop but highly desired. In this study, we used various machine learning algorithms to analyze a spectral dataset acquired from normal and cancerous human skin tissue samples using resonance Raman spectroscopy with 532 nm excitation. The algorithms, including principal component analysis, nonnegative matrix factorization, and an autoencoder artificial neural network, are used to reduce the dimension of the dataset and detect features. A support vector machine with a linear kernel is used to classify the normal tissue and cancerous tissue samples. The efficacies of the methods are compared.
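A minimal sketch of one of the compared pipelines, PCA for dimension reduction followed by a linear-kernel SVM (the component count and variable names are assumptions):

```python
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# `spectra` is an (n_samples, n_wavenumbers) matrix of Raman
# intensities and `labels` marks normal vs. cancerous tissue
# (hypothetical variable names).
model = make_pipeline(StandardScaler(),
                      PCA(n_components=10),     # dimension reduction
                      SVC(kernel="linear"))     # linear-kernel SVM
# model.fit(spectra_train, labels_train)
# accuracy = model.score(spectra_test, labels_test)
```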
Using Block-local Atomicity to Detect Stale-value Concurrency Errors
NASA Technical Reports Server (NTRS)
Artho, Cyrille; Havelund, Klaus; Biere, Armin
2004-01-01
Data races do not cover all kinds of concurrency errors. This paper presents a data-flow-based technique to find stale-value errors, which are not found by low-level and high-level data race algorithms. Stale values denote copies of shared data where the copy is no longer synchronized. The algorithm to detect such values works as a consistency check that does not require any assumptions or annotations of the program. It has been implemented as a static analysis in JNuke. The analysis is sound and requires only a single execution trace if implemented as a run-time checking algorithm. Being based on an analysis of Java bytecode, it encompasses the full program semantics, including arbitrarily complex expressions. Related techniques are more complex and more prone to over-reporting.
Blind adaptive equalization of polarization-switched QPSK modulation.
Millar, David S; Savory, Seb J
2011-04-25
Coherent detection in combination with digital signal processing has recently enabled significant progress in the capacity of optical communications systems. This improvement has enabled detection of optimum constellations for optical signals in four dimensions. In this paper, we propose and investigate an algorithm for the blind adaptive equalization of one such modulation format: polarization-switched quaternary phase shift keying (PS-QPSK). The proposed algorithm, which includes both blind initialization and adaptation of the equalizer, is found to be insensitive to the input polarization state and demonstrates highly robust convergence in the presence of PDL, DGD and polarization rotation.
The artificial object detection and current velocity measurement using SAR ocean surface images
NASA Astrophysics Data System (ADS)
Alpatov, Boris; Strotov, Valery; Ershov, Maksim; Muraviev, Vadim; Feldman, Alexander; Smirnov, Sergey
2017-10-01
Due to the fact that water surfaces cover wide areas, remote sensing is the most appropriate way of getting information about the ocean environment for vessel tracking, security purposes, ecological studies and others. Processing of synthetic aperture radar (SAR) images is extensively used for control and monitoring of the ocean surface. Image data can be acquired from Earth observation satellites, such as TerraSAR-X, ERS, and COSMO-SkyMed. Thus, SAR image processing can be used to solve many problems arising in this field of research. This paper discusses some of them, including ship detection, oil pollution control and ocean currents mapping. Due to the complexity of the problem, several specialized algorithms need to be developed. The oil spill detection algorithm consists of the following main steps: image preprocessing, detection of dark areas, parameter extraction and classification. The ship detection algorithm consists of the following main steps: prescreening, land masking, image segmentation combined with parameter measurement, ship orientation estimation and object discrimination. The proposed approach to ocean currents mapping is based on Doppler's law. The results of computer modeling on real SAR images are presented. Based on these results it is concluded that the proposed approaches can be used in maritime applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nekoogar, F; Martz, Jr., H E
2009-09-23
The purpose of this statement of work is for third party collaborators to train, validate and have Lawrence Livermore National Security, LLC (LLNS) evaluate algorithms to detect liquid threats in digital radiography (DR)/TIP Ready X-ray (TRX) images that will be provided by LLNS through the Transportation and Security Administration (TSA). LLNS will provide a set of images with threat(s) to determine detection rates and non-threat images from airports to determine false alarm rates. A key including a bounding box showing the locations of the threats and non-threats will be provided for the images. It is expected that the Subcontractor shall use half of the images with their keys for training the algorithms, and the other half shall be used for validation (third party evaluation) purposes. The Subcontractor shall not use the key to the second half of the data other than for the validation and reporting of the performance of its algorithm (not for training). The Subcontractor has 45 business days from the receipt of datasets and the Subcontract to: (1) run their detection/classification algorithms on the data; (2) deliver a final report describing their performance by generating Receiver Operating Characteristic (ROC) curves using their algorithm; and (3) deliver a copy of the third party's executable software (already trained and validated on the datasets) to LLNL, accompanied by a user manual. LLNS will evaluate the performance of the same algorithm on another separate set of data. LLNS's evaluation of the Subcontractor's algorithm will be documented in a final report within 30 days of receiving the executable code. This report will be sent to TSA, and the report may be disseminated to the Subcontractor at TSA's discretion.
NASA Astrophysics Data System (ADS)
Zhou, T.; Popescu, S. C.; Krause, K.
2016-12-01
Waveform Light Detection and Ranging (LiDAR) data have advantages over discrete-return LiDAR data in accurately characterizing vegetation structure. However, we lack a comprehensive understanding of waveform data processing approaches under different topography and vegetation conditions. The objective of this paper is to highlight a novel deconvolution algorithm, the Gold algorithm, for processing waveform LiDAR data with optimal deconvolution parameters. Further, we present a comparative study of waveform processing methods to provide insight into selecting an approach for a given combination of vegetation and terrain characteristics. We employed two waveform processing methods: 1) direct decomposition, and 2) deconvolution and decomposition. In the second method, we utilized two deconvolution algorithms - the Richardson-Lucy (RL) algorithm and the Gold algorithm. Comprehensive and quantitative comparisons were conducted in terms of the number of detected echoes, position accuracy, the bias of the end products (such as the digital terrain model (DTM) and canopy height model (CHM)) relative to discrete-return LiDAR data, along with parameter uncertainty for these end products obtained from the different methods. This study was conducted at three study sites that include diverse ecological regions and vegetation and elevation gradients. Results demonstrate that the two deconvolution algorithms are sensitive to the pre-processing steps of the input data. The deconvolution and decomposition method is more capable of detecting hidden echoes with a lower false echo detection rate, especially for the Gold algorithm. Compared to the reference data, all approaches generate satisfactory accuracy assessment results with small mean spatial differences (<1.22 m for DTMs, <0.77 m for CHMs) and root mean square error (RMSE) (<1.26 m for DTMs, <1.93 m for CHMs). More specifically, the Gold algorithm is superior to the others with a smaller RMSE (<1.01 m), while the direct decomposition approach works better in terms of the percentage of spatial differences within 0.5 and 1 m. The parameter uncertainty analysis demonstrates that the Gold algorithm outperforms the other approaches in dense vegetation areas, with the smallest RMSE, and the RL algorithm performs better in sparse vegetation areas in terms of RMSE.
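The RL deconvolution step can be sketched in one dimension as follows (the iteration count and uniform initial estimate are assumptions; the Gold algorithm differs in its update rule and is not shown):

```python
import numpy as np

def richardson_lucy_1d(waveform, psf, n_iter=30):
    """Richardson-Lucy deconvolution of a recorded waveform by the
    emitted system pulse: each iteration re-blurs the estimate,
    compares it to the observation, and multiplies the estimate by
    the PSF-correlated ratio, sharpening overlapping echoes."""
    psf = psf / psf.sum()                      # normalized pulse model
    estimate = np.full_like(waveform, waveform.mean(), dtype=float)
    for _ in range(n_iter):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = waveform / np.maximum(reblurred, 1e-12)
        estimate *= np.convolve(ratio, psf[::-1], mode="same")
    return estimate
```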
Memetic algorithms for de novo motif-finding in biomedical sequences.
Bi, Chengpeng
2012-09-01
The objectives of this study are to design and implement a new memetic algorithm for de novo motif discovery, which is then applied to detect important signals hidden in various biomedical molecular sequences. In this paper, memetic algorithms are developed and tested on de novo motif-finding problems. Several strategies are employed in the algorithm design not only to efficiently explore the multiple sequence local alignment space, but also to effectively uncover the molecular signals. As a result, there are a number of key features in the implementation of the memetic motif-finding algorithm (MaMotif), including a chromosome replacement operator, a chromosome alteration-aware local search operator, a truncated local search strategy, and a stochastic operation of local search imposed on individual learning. To test the new algorithm, we compare MaMotif with a few other similar algorithms using simulated and experimental data including genomic DNA, primary microRNA sequences (the let-7 family), and transmembrane protein sequences. The new memetic motif-finding algorithm is successfully implemented in C++ and exhaustively tested with various simulated and real biological sequences. In the simulation, MaMotif is the most time-efficient algorithm compared with the others; that is, it runs 2 times faster than the expectation maximization (EM) method and 16 times faster than the genetic algorithm-based EM hybrid. In both simulated and experimental testing, results show that the new algorithm compares favorably with or is superior to the other algorithms. Notably, MaMotif is able to successfully discover transcription factors' binding sites in chromatin immunoprecipitation followed by massively parallel sequencing (ChIP-Seq) data, correctly uncover the RNA splicing signals in gene expression, and precisely find the highly conserved helix motif in the transmembrane protein sequences, as well as rightly detect the palindromic segments in the primary microRNA sequences. The memetic motif-finding algorithm is effectively designed and implemented, and its applications demonstrate that it is not only time-efficient, but also exhibits excellent performance compared with other popular algorithms. Copyright © 2012 Elsevier B.V. All rights reserved.
Reliable motion detection of small targets in video with low signal-to-clutter ratios
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nichols, S.A.; Naylor, R.B.
1995-07-01
Studies show that vigilance decreases rapidly after several minutes when human operators are required to search live video for infrequent intrusion detections. Therefore, there is a need for systems which can automatically detect targets in live video and reserve the operator's attention for assessment only. Thus far, automated systems have not simultaneously provided adequate detection sensitivity, false alarm suppression, and ease of setup when used in external, unconstrained environments. This unsatisfactory performance can be exacerbated by poor video imagery with low contrast, high noise, dynamic clutter, image misregistration, and/or the presence of small, slow, or erratically moving targets. This paper describes a highly adaptive video motion detection and tracking algorithm which has been developed as part of Sandia's Advanced Exterior Sensor (AES) program. The AES is a wide-area detection and assessment system for use in unconstrained exterior security applications. The AES detection and tracking algorithm provides good performance under stressing data and environmental conditions. Features of the algorithm include: reliable detection, with negligible false alarm rate, of variable velocity targets having low signal-to-clutter ratios; reliable tracking of targets that exhibit non-inertial motion, i.e., motion that varies in direction and velocity; automatic adaptation to both infrared and visible imagery of variable quality; and suppression of false alarms caused by sensor flaws and/or cutouts.
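The specific AES algorithm is not spelled out in the abstract; a generic running-average background model with an adaptive noise estimate illustrates the kind of self-adapting motion detection described (the learning rate and threshold are assumptions):

```python
import numpy as np

def adaptive_motion_masks(frames, alpha=0.05, k=4.0):
    """Running-average background model: a pixel is flagged as moving
    when it deviates from the slowly adapting background by more than
    k times the locally estimated noise level; both the background
    and the noise estimate adapt to illumination and clutter changes."""
    bg = frames[0].astype(float)
    var = np.full(bg.shape, 25.0)             # initial noise variance
    masks = []
    for frame in frames[1:]:
        diff = frame.astype(float) - bg
        masks.append(diff ** 2 > (k ** 2) * var)
        bg += alpha * diff                    # adapt background
        var += alpha * (diff ** 2 - var)      # adapt noise variance
    return masks
```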
NASA Astrophysics Data System (ADS)
Pinales, J. C.; Graber, H. C.; Hargrove, J. T.; Caruso, M. J.
2016-02-01
Previous studies have demonstrated the ability to detect and classify marine hydrocarbon films with spaceborne synthetic aperture radar (SAR) imagery. The dampening effect of hydrocarbon discharges on small surface capillary-gravity waves renders the ocean surface "radar dark" compared with standard wind-roughened ocean surfaces. Given the scope and impact of events like the Deepwater Horizon oil spill, the need for improved, automated and expedient monitoring of hydrocarbon-related marine anomalies has become a pressing and complex issue for governments and the extraction industry. The research presented here describes the development, training, and utilization of an algorithm that detects marine oil spills in an automated, semi-supervised manner, utilizing X-, C-, or L-band SAR data as the primary input. Ancillary datasets include related radar-borne variables (incidence angle, etc.), environmental data (wind speed, etc.) and textural descriptors. Shapefiles produced by an experienced human analyst served as targets (validation) during the training portion of the investigation. Training and testing datasets were chosen for development and assessment of algorithm effectiveness, as well as of optimal conditions for oil detection in SAR data. The algorithm detects oil spills by following a 3-step methodology: object detection, feature extraction, and classification. Previous oil spill detection and classification methodologies such as machine learning algorithms, artificial neural networks (ANN), and multivariate classification methods like partial least squares-discriminant analysis (PLS-DA) are evaluated and compared. Statistical, transform, and model-based image texture techniques, commonly used for object mapping directly or as inputs for more complex methodologies, are explored to determine optimal textures for an oil spill detection system. The influence of the ancillary variables is explored, with a particular focus on the role of strong vs. weak wind forcing.
2015-01-01
Color is one of the most prominent features of an image and is used in many skin and face detection applications. Color space transformation is widely used by researchers to improve face and skin detection performance. Despite the substantial research efforts in this area, choosing a proper color space for skin and face classification that can address issues like illumination variations, varying camera characteristics and diversity in skin color tones has remained an open problem. This research proposes a new three-dimensional hybrid color space termed SKN, built by employing the Genetic Algorithm heuristic and Principal Component Analysis to find the optimal representation of human skin color across over seventeen existing color spaces. The Genetic Algorithm heuristic is used to find the optimal color component combination in terms of skin detection accuracy, while Principal Component Analysis projects the optimal Genetic Algorithm solution onto a less complex dimension. Pixel-wise skin detection was used to evaluate the performance of the proposed color space. We employed four classifiers, including Random Forest, Naïve Bayes, Support Vector Machine and Multilayer Perceptron, to generate the human skin color predictive model. The proposed color space was compared to existing color spaces and shows superior results in terms of pixel-wise skin detection accuracy. Experimental results show that using the Random Forest classifier, the proposed SKN color space obtained an average F-score and True Positive Rate of 0.953 and a False Positive Rate of 0.0482, outperforming the existing color spaces in terms of pixel-wise skin detection accuracy. The results also indicate that among the classifiers used in this study, Random Forest is the most suitable classifier for pixel-wise skin detection applications. PMID:26267377
Online Adaboost-Based Parameterized Methods for Dynamic Distributed Network Intrusion Detection.
Hu, Weiming; Gao, Jun; Wang, Yanguo; Wu, Ou; Maybank, Stephen
2014-01-01
Current network intrusion detection systems lack adaptability to the frequently changing network environments. Furthermore, intrusion detection in the new distributed architectures is now a major requirement. In this paper, we propose two online Adaboost-based intrusion detection algorithms. In the first algorithm, a traditional online Adaboost process is used, with decision stumps as weak classifiers. In the second algorithm, an improved online Adaboost process is proposed, and online Gaussian mixture models (GMMs) are used as weak classifiers. We further propose a distributed intrusion detection framework, in which a local parameterized detection model is constructed in each node using the online Adaboost algorithm. A global detection model is constructed in each node by combining the local parametric models using a small number of samples in the node. This combination is achieved using an algorithm based on particle swarm optimization (PSO) and support vector machines. The global model in each node is used to detect intrusions. Experimental results show that the improved online Adaboost process with GMMs obtains a higher detection rate and a lower false alarm rate than the traditional online Adaboost process that uses decision stumps. Both algorithms outperform existing intrusion detection algorithms. It is also shown that our PSO- and SVM-based algorithm effectively combines the local detection models into the global model in each node; the global model in a node can handle the intrusion types that are found in other nodes, without sharing the samples of these intrusion types.
NASA Technical Reports Server (NTRS)
Das, Santanu; Srivastava, Ashok N.; Matthews, Bryan L.; Oza, Nikunj C.
2010-01-01
The world-wide aviation system is one of the most complex dynamical systems ever developed and is generating data at an extremely rapid rate. Most modern commercial aircraft record several hundred flight parameters, including information from the guidance, navigation, and control systems, the avionics and propulsion systems, and the pilot inputs into the aircraft. These parameters may be continuous measurements or binary or categorical measurements recorded in one-second intervals for the duration of the flight. Currently, most approaches to aviation safety are reactive, meaning that they are designed to react to an aviation safety incident or accident. In this paper, we discuss a novel approach based on the theory of multiple kernel learning to detect potential safety anomalies in very large databases of discrete and continuous data from world-wide operations of commercial fleets. We pose a general anomaly detection problem which includes both discrete and continuous data streams, where we assume that the discrete streams have a causal influence on the continuous streams. We also assume that an atypical sequence of events in the discrete streams can lead to off-nominal system performance. We discuss the application domain, novel algorithms, and results on real-world data sets. Our algorithm uncovers operationally significant events in high-dimensional data streams in the aviation industry which are not detectable using state-of-the-art methods.
Machine Learning Methods for Attack Detection in the Smart Grid.
Ozay, Mete; Esnaola, Inaki; Yarman Vural, Fatos Tunay; Kulkarni, Sanjeev R; Poor, H Vincent
2016-08-01
Attack detection problems in the smart grid are posed as statistical learning problems for different attack scenarios in which the measurements are observed in batch or online settings. In this approach, machine learning algorithms are used to classify measurements as being either secure or attacked. An attack detection framework is provided to exploit any available prior knowledge about the system and surmount constraints arising from the sparse structure of the problem in the proposed approach. Well-known batch and online learning algorithms (supervised and semisupervised) are employed with decision- and feature-level fusion to model the attack detection problem. The relationships between statistical and geometric properties of attack vectors employed in the attack scenarios and learning algorithms are analyzed to detect unobservable attacks using statistical learning methods. The proposed algorithms are examined on various IEEE test systems. Experimental analyses show that machine learning algorithms can detect attacks with performances higher than attack detection algorithms that employ state vector estimation methods in the proposed attack detection framework.
Low-complexity R-peak detection for ambulatory fetal monitoring.
Rooijakkers, Michael J; Rabotti, Chiara; Oei, S Guid; Mischi, Massimo
2012-07-01
Non-invasive fetal health monitoring during pregnancy is becoming increasingly important because of the increasing number of high-risk pregnancies. Despite recent advances in signal-processing technology, which have enabled fetal monitoring during pregnancy using abdominal electrocardiogram (ECG) recordings, ubiquitous fetal health monitoring is still unfeasible due to the computational complexity of noise-robust solutions. In this paper, an ECG R-peak detection algorithm for ambulatory R-peak detection is proposed, as part of a fetal ECG detection algorithm. The proposed algorithm is optimized to reduce computational complexity, without reducing the R-peak detection performance compared to the existing R-peak detection schemes. Validation of the algorithm is performed on three manually annotated datasets. With a detection error rate of 0.23%, 1.32% and 9.42% on the MIT/BIH Arrhythmia and in-house maternal and fetal databases, respectively, the detection rate of the proposed algorithm is comparable to the best state-of-the-art algorithms, at a reduced computational complexity.
A joint swarm intelligence algorithm for multi-user detection in MIMO-OFDM system
NASA Astrophysics Data System (ADS)
Hu, Fengye; Du, Dakun; Zhang, Peng; Wang, Zhijun
2014-11-01
In the multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) system, traditional multi-user detection (MUD) algorithms, usually used to suppress multiple access interference, find it difficult to balance system detection performance against the complexity of the algorithm. To solve this problem, this paper proposes a joint swarm intelligence algorithm called Ant Colony and Particle Swarm Optimisation (AC-PSO) that integrates the particle swarm optimisation (PSO) and ant colony optimisation (ACO) algorithms. Simulation results show that, with low computational complexity, MUD for the MIMO-OFDM system based on the AC-PSO algorithm achieves MUD performance comparable to the maximum likelihood algorithm. Thus, the proposed AC-PSO algorithm provides a satisfactory trade-off between computational complexity and detection performance.
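The PSO half of such a hybrid can be sketched canonically as follows (the inertia and acceleration constants are conventional values, and the ACO coupling is not shown):

```python
import numpy as np

def pso_minimize(fitness, dim, n_particles=30, n_iter=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical particle swarm optimisation: each particle's velocity
    is pulled toward its personal best and the swarm's global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([fitness(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest
```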
A new real-time tsunami detection algorithm
NASA Astrophysics Data System (ADS)
Chierici, F.; Embriaco, D.; Pignagnoli, L.
2016-12-01
Real-time tsunami detection algorithms play a key role in any Tsunami Early Warning System. We have developed a new algorithm for tsunami detection based on real-time tide removal and real-time band-pass filtering of sea-bed pressure recordings. The algorithm greatly increases the tsunami detection probability, shortens the detection delay and enhances detection reliability, at low computational cost. The algorithm is designed to be used also in autonomous early warning systems, with a set of input parameters and procedures which can be reconfigured in real time. We have also developed a methodology based on Monte Carlo simulations to test tsunami detection algorithms. The algorithm performance is estimated by defining and evaluating statistical parameters, namely the detection probability and the detection delay, which are functions of the tsunami amplitude and wavelength, and the rate of false alarms. Pressure data sets acquired by Bottom Pressure Recorders in different locations and environmental conditions have been used in order to consider real working scenarios in the test. We also present an application of the algorithm to the tsunami event which occurred at Haida Gwaii on October 28th, 2012, using data recorded by the Bullseye underwater node of Ocean Networks Canada. The algorithm successfully ran for test purposes in year-long missions onboard the GEOSTAR stand-alone multidisciplinary abyssal observatory, deployed in the Gulf of Cadiz during the EC project NEAREST, and on the NEMO-SN1 cabled observatory deployed in the Western Ionian Sea, an operational node of the European research infrastructure EMSO.
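An offline sketch of the underlying idea, band-pass filtering the pressure record to isolate tsunami periods and thresholding the residual (the pass band, filter order and threshold are assumptions; the published algorithm performs tide removal and filtering in real time):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def tsunami_trigger(pressure, fs, threshold):
    """Band-pass removes the tide (very long periods) and wind waves
    (short periods), keeping roughly 2-minute to 2-hour periods;
    samples whose filtered amplitude exceeds the threshold raise a
    detection."""
    low, high = 1.0 / 7200.0, 1.0 / 120.0        # pass band in Hz
    b, a = butter(2, [low / (fs / 2), high / (fs / 2)], btype="band")
    residual = filtfilt(b, a, pressure)
    return np.flatnonzero(np.abs(residual) > threshold)
```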
NASA Astrophysics Data System (ADS)
Lin, Y. H.; Bai, R.; Qian, Z. H.
2018-03-01
Vehicle detection systems are applied to obtain real-time information about vehicles, realize traffic control and reduce traffic pressure. This paper reviews geomagnetic sensors as well as the research status of vehicle detection systems. Also presented is our work on a vehicle detection system, including detection algorithms and experimental results. It is found that the GMR-based vehicle detection system has a detection accuracy of up to 98%, with high potential for application in road traffic control.
Raghunathan, Shriram; Gupta, Sumeet K; Markandeya, Himanshu S; Roy, Kaushik; Irazoqui, Pedro P
2010-10-30
Implantable neural prostheses that deliver focal electrical stimulation upon demand are rapidly emerging as an alternate therapy for roughly a third of the epileptic patient population that is medically refractory. Seizure detection algorithms enable feedback mechanisms to provide focally and temporally specific intervention. Real-time feasibility and computational complexity often limit most reported detection algorithms to implementations using computers for bedside monitoring or external devices communicating with the implanted electrodes. A comparison of algorithms based on detection efficacy does not present a complete picture of the feasibility of the algorithm with limited computational power, as is the case with most battery-powered applications. We present a two-dimensional design optimization approach that takes into account both detection efficacy and hardware cost in evaluating algorithms for their feasibility in an implantable application. Detection features are first compared for their ability to detect electrographic seizures from micro-electrode data recorded from kainate-treated rats. Circuit models are then used to estimate the dynamic and leakage power consumption of the compared features. A score is assigned based on detection efficacy and the hardware cost for each of the features, then plotted on a two-dimensional design space. An optimal combination of compared features is used to construct an algorithm that provides maximal detection efficacy per unit hardware cost. The methods presented in this paper would facilitate the development of a common platform to benchmark seizure detection algorithms for comparison and feasibility analysis in the next generation of implantable neuroprosthetic devices to treat epilepsy. Copyright © 2010 Elsevier B.V. All rights reserved.
2018-01-01
An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform. Kwok F Tom, US Army Research Laboratory (Sensors and Electron Devices Directorate), ARL-TR-8270, January 2018; reporting period 1 October 2016-30 September 2017.
2011-01-01
Background: Gene regulatory networks play essential roles in living organisms to control growth, keep internal metabolism running and respond to external environmental changes. Understanding the connections and the activity levels of regulators is important for the research of gene regulatory networks. While relevance-score-based algorithms that reconstruct gene regulatory networks from transcriptome data can infer genome-wide gene regulatory networks, they are unfortunately prone to false positive results. Transcription factor activities (TFAs) quantitatively reflect the ability of a transcription factor to regulate target genes. However, classic relevance-score-based gene regulatory network reconstruction algorithms use models that do not include the TFA layer, thus missing a key regulatory element. Results: This work integrates TFA prediction algorithms with relevance-score-based network reconstruction algorithms to reconstruct gene regulatory networks with improved accuracy over classic relevance-score-based algorithms. This method is called Gene expression and Transcription factor activity based Relevance Network (GTRNetwork). Different combinations of TFA prediction algorithms and relevance score functions have been applied to find the most efficient combination. When the integrated GTRNetwork method was applied to E. coli data, the reconstructed genome-wide gene regulatory network predicted 381 new regulatory links. The reconstructed gene regulatory network, including the predicted new regulatory links, shows promising biological significance. Many of the new links are verified by known TF binding site information, and many other links can be verified from the literature and databases such as EcoCyc. The reconstructed gene regulatory network was applied to a recent transcriptome analysis of E. coli during isobutanol stress. In addition to the 16 significantly changed TFAs detected in the original paper, another 7 significantly changed TFAs were detected by using our reconstructed network. Conclusions: The GTRNetwork algorithm introduces the hidden TFA layer into classic relevance-score-based gene regulatory network reconstruction processes. Integrating TFA biological information with regulatory network reconstruction algorithms significantly improves detection of new links and reduces the rate of false positives. The application of GTRNetwork to E. coli gene transcriptome data gives a set of potential regulatory links with promising biological significance for isobutanol stress and other conditions. PMID:21668997
Estimation of Temporal Gait Parameters Using a Wearable Microphone-Sensor-Based System
Wang, Cheng; Wang, Xiangdong; Long, Zhou; Yuan, Jing; Qian, Yueliang; Li, Jintao
2016-01-01
Most existing wearable gait analysis methods focus on the analysis of data obtained from inertial sensors. This paper proposes a novel, low-cost, wireless and wearable gait analysis system which uses microphone sensors to collect footstep sound signals during walking. To our knowledge, this is the first time a microphone sensor has been used as a wearable gait analysis device. Based on this system, a gait analysis algorithm for estimating the temporal parameters of gait is presented. The algorithm makes full use of the fusion of the footstep sound signals from both feet and includes three stages: footstep detection, heel-strike and toe-on event detection, and calculation of gait temporal parameters. Experimental results show that, with a total of 240 data sequences and 1732 steps collected using three different gait data collection strategies from 15 healthy subjects, the proposed system achieves an average 0.955 F1-measure for footstep detection, an average 94.52% accuracy rate for heel-strike detection and a 94.25% accuracy rate for toe-on detection. Using these detection results, nine temporal gait parameters are calculated, and these parameters are consistent with their corresponding normal gait temporal parameters and with calculations from labeled data. The results verify the effectiveness of our proposed system and algorithm for temporal gait parameter estimation. PMID:27999321
Irsch, Kristina; Gramatikov, Boris; Wu, Yi-Kai; Guyton, David
2011-01-01
Utilizing the measured corneal birefringence from a data set of 150 eyes of 75 human subjects, an algorithm and related computer program, based on Müller-Stokes matrix calculus, were developed in MATLAB for assessing the influence of corneal birefringence on retinal birefringence scanning (RBS) and for converging upon an optical/mechanical design using wave plates (“wave-plate-enhanced RBS”) that allows foveal fixation detection essentially independently of corneal birefringence. The RBS computer model, and in particular the optimization algorithm, were verified with experimental human data using an available monocular RBS-based eye fixation monitor. Fixation detection using wave-plate-enhanced RBS is adaptable to less cooperative subjects, including young children at risk for developing amblyopia. PMID:21750772
Change Detection Algorithms for Surveillance in Visual IoT: A Comparative Study
NASA Astrophysics Data System (ADS)
Akram, Beenish Ayesha; Zafar, Amna; Akbar, Ali Hammad; Wajid, Bilal; Chaudhry, Shafique Ahmad
2018-01-01
The VIoT (Visual Internet of Things) connects the virtual information world with real-world objects using sensors and pervasive computing. For video surveillance in the VIoT, ChD (Change Detection) is a critical component. ChD algorithms identify regions of change in multiple images of the same scene recorded at different time intervals for video surveillance. This paper presents a performance comparison of histogram thresholding and classification ChD algorithms using quantitative measures for video surveillance in the VIoT, based on salient features of the datasets. The thresholding algorithms Otsu, Kapur and Rosin and the classification methods k-means and EM (Expectation Maximization) were simulated in MATLAB using diverse datasets. For performance evaluation, the quantitative measures used include OSR (Overall Success Rate), YC (Yule's Coefficient), JC (Jaccard's Coefficient), execution time and memory consumption. Experimental results showed that Kapur's algorithm performed better for both indoor and outdoor environments with illumination changes, shadowing, and medium to fast moving objects. However, it showed degraded performance for small object sizes with minor changes. The Otsu algorithm showed better results for indoor environments with slow to medium changes and nomadic object mobility. k-means showed good results in indoor environments with small object sizes producing slow change, no shadowing and scarce illumination changes.
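A minimal sketch of the histogram-thresholding flavor of ChD, here with Otsu's criterion applied to a difference image (applying it to the absolute frame difference is an assumption about the setup):

```python
import numpy as np
from skimage.filters import threshold_otsu

def otsu_change_mask(frame_t1, frame_t2):
    """Histogram-thresholding ChD: threshold the absolute difference
    image with Otsu's criterion (maximum between-class variance) to
    separate 'changed' from 'unchanged' pixels."""
    diff = np.abs(frame_t2.astype(float) - frame_t1.astype(float))
    return diff > threshold_otsu(diff)
```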
NASA Technical Reports Server (NTRS)
Wind, Galina (Gala); Platnick, Steven; Riedi, Jerome
2011-01-01
The MODIS cloud optical properties algorithm (MOD06/MYD06 for Terra and Aqua MODIS, respectively) slated for production in Data Collection 6 has been adapted to execute using available channels on MSG SEVIRI. Available MODIS-style retrievals include IR-window-derived cloud top properties, using the new Collection 6 cloud top properties algorithm, cloud optical thickness from VIS/NIR bands, cloud effective radius from 1.6 and 3.7 μm, and cloud ice/water path. We also provide a pixel-level uncertainty estimate for successful retrievals. It was found that at nighttime the SEVIRI cloud mask tends to report unnaturally low cloud fractions for marine stratocumulus clouds. A correction algorithm that improves detection of such clouds has been developed. We will discuss the improvements to nighttime low cloud detection for SEVIRI and show examples and comparisons with MODIS and CALIPSO. We will also show examples of MODIS-style pixel-level (Level-2) cloud retrievals for SEVIRI with comparisons to MODIS.
Implementation of a Smart Phone for Motion Analysis.
Yodpijit, Nantakrit; Songwongamarit, Chalida; Tavichaiyuth, Nicha
2015-01-01
In today's information-rich environment, one of the most popular devices is the smartphone. Research has shown significant growth in the use of smartphones and apps all over the world. The accelerometer within a smartphone is a motion sensor that can be used to detect human movements. Compared with other major vital signs, gait characteristics represent general health status and can be determined using smartphones. The objective of the current study is to design and develop an alternative technology that can potentially predict health status and reduce healthcare costs. This study uses a smartphone as a wireless accelerometer for quantifying human motion characteristics, following four steps of system design and development (data acquisition, feature extraction algorithm, classifier design, and decision making strategy). Findings indicate that it is possible to extract features from a smartphone's accelerometer using a peak detection algorithm. Gait characteristics obtained from the peak detection algorithm include stride time, stance time, swing time and cadence. Applications and limitations of this study are also discussed.
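A minimal sketch of peak-detection-based gait feature extraction from the acceleration magnitude (the minimum step interval and prominence values are assumptions):

```python
import numpy as np
from scipy.signal import find_peaks

def gait_from_accelerometer(acc_magnitude, fs):
    """Steps appear as prominent peaks in the acceleration magnitude;
    peak spacing yields stride time and cadence."""
    peaks, _ = find_peaks(acc_magnitude,
                          distance=int(0.4 * fs),   # min step interval
                          prominence=0.5)
    step_intervals = np.diff(peaks) / fs            # seconds per step
    cadence = 60.0 / step_intervals.mean()          # steps per minute
    stride_time = 2.0 * step_intervals.mean()       # two steps = stride
    return peaks, stride_time, cadence
```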
NASA Astrophysics Data System (ADS)
Lv, Zheng; Sui, Haigang; Zhang, Xilin; Huang, Xianfeng
2007-11-01
As one of the most important geo-spatial objects and military establishments, the airport is always a key target in the fields of transportation and military affairs. Therefore, automatic recognition and extraction of airports from remote sensing images is important and urgent for civil aviation updating and military applications. In this paper, a new multi-source data fusion approach for automatic airport information extraction, updating, and 3D modeling is addressed. The corresponding key technologies are discussed in detail, including feature extraction of airport information based on a modified Otsu algorithm, automatic change detection based on a new parallel-lines-based buffer detection algorithm, 3D modeling based on a gradual elimination of non-building points algorithm, 3D change detection between the old airport model and LIDAR data, and the import of typical CAD models. Finally, based on these technologies, we developed a prototype system, and the results show that our method can achieve good effects.
Development of a Guide-Dog Robot: Leading and Recognizing a Visually-Handicapped Person using a LRF
NASA Astrophysics Data System (ADS)
Saegusa, Shozo; Yasuda, Yuya; Uratani, Yoshitaka; Tanaka, Eiichirou; Makino, Toshiaki; Chang, Jen-Yuan (James)
A conceptual Guide-Dog Robot prototype to lead and to recognize a visually-handicapped person is developed and discussed in this paper. Key design features of the robot include a movable platform, a human-machine interface, and the capability of avoiding obstacles. A novel algorithm enabling the robot to recognize its follower's locomotion as well as to detect the center of the corridor is proposed and implemented in the robot's human-machine interface. It is demonstrated that using the proposed novel leading and detecting algorithm along with a rapid scanning laser range finder (LRF) sensor, the robot is able to successfully and effectively lead a human walking in a corridor without running into obstacles such as trash boxes or adjacent walking persons. The position and trajectory of the robot leading a human maneuvering in a common corridor environment are measured by an independent LRF observer. The measured data suggest that the proposed algorithms are effective in enabling the robot to detect the center of the corridor and the position of its follower correctly.
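For illustration only, a minimal sketch of one way a corridor-center estimate can be derived from a single LRF scan; the ±90° wall-lookup heuristic and the function name are assumptions and do not reproduce the paper's algorithm, which also recognizes the follower's locomotion:

    import numpy as np

    def corridor_center_offset(ranges, angles):
        """Estimate the robot's lateral offset from the corridor center
        from one LRF scan (ranges in meters, angles in radians,
        counterclockwise-positive, scan assumed to cover +/-90 deg).
        The nearest returns near +90 and -90 degrees are taken as the
        left and right wall distances."""
        left = ranges[np.abs(angles - np.pi / 2) < np.radians(10)].min()
        right = ranges[np.abs(angles + np.pi / 2) < np.radians(10)].min()
        # Positive offset: robot sits right of the corridor centerline.
        return (left - right) / 2.0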
Carreer, William J.; Flight, Robert M.; Moseley, Hunter N. B.
2013-01-01
New metabolomics applications of ultra-high resolution and accuracy mass spectrometry can provide thousands of detectable isotopologues, with the number of potentially detectable isotopologues increasing exponentially with the number of stable isotopes used in newer isotope tracing methods like stable isotope-resolved metabolomics (SIRM) experiments. This huge increase in usable data requires software capable of correcting the large number of isotopologue peaks resulting from SIRM experiments in a timely manner. We describe the design of a new algorithm and software system capable of handling these high volumes of data, while including quality control methods for maintaining data quality. We validate this new algorithm against a previous single isotope correction algorithm in a two-step cross-validation. Next, we demonstrate the algorithm and correct for the effects of natural abundance for both 13C and 15N isotopes on a set of raw isotopologue intensities of UDP-N-acetyl-D-glucosamine derived from a 13C/15N-tracing experiment. Finally, we demonstrate the algorithm on a full omics-level dataset. PMID:24404440
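A hedged sketch of the classical single-isotope (13C-only) natural abundance correction of the kind the authors cross-validate against; the matrix construction below is the textbook binomial model, not the paper's multi-isotope algorithm, and the function names are illustrative:

    import numpy as np
    from scipy.special import comb

    def correction_matrix(n_atoms, nat_abundance=0.0107):
        """Build M with M[i, j] = probability that a species carrying
        j tracer-labeled carbons is observed as the i-th isotopologue,
        given natural 13C abundance in its n-j unlabeled positions."""
        a = nat_abundance
        M = np.zeros((n_atoms + 1, n_atoms + 1))
        for j in range(n_atoms + 1):
            for i in range(j, n_atoms + 1):
                k = i - j  # extra 13C picked up from natural abundance
                M[i, j] = comb(n_atoms - j, k) * a**k * (1 - a)**(n_atoms - j - k)
        return M

    def correct_isotopologues(observed):
        """Solve M x = observed for the corrected intensities x,
        clipping small negative values caused by noise."""
        M = correction_matrix(len(observed) - 1)
        x, *_ = np.linalg.lstsq(M, np.asarray(observed, float), rcond=None)
        return np.clip(x, 0.0, None)

A dual 13C/15N correction, as in the paper's tracing example, would extend this to a product of per-element binomial factors indexed by labeled carbon and nitrogen counts.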
Paiton, Dylan M.; Kenyon, Garrett T.; Brumby, Steven P.; Schultz, Peter F.; George, John S.
2015-07-28
An approach to detecting objects in an image dataset may combine texture/color detection, shape/contour detection, and/or motion detection using sparse, generative, hierarchical models with lateral and top-down connections. A first independent representation of objects in an image dataset may be produced using a color/texture detection algorithm. A second independent representation of objects in the image dataset may be produced using a shape/contour detection algorithm. A third independent representation of objects in the image dataset may be produced using a motion detection algorithm. The first, second, and third independent representations may then be combined into a single coherent output using a combinatorial algorithm.
Olson, Eric J.
2013-06-11
An apparatus, program product, and method that run an algorithm on a hardware-based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware-based processor to heat to a degree that increases the likelihood that hardware errors will manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run and that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
NASA Technical Reports Server (NTRS)
Generazio, Edward R. (Inventor)
2012-01-01
A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.
Spectrum sensing algorithm based on autocorrelation energy in cognitive radio networks
NASA Astrophysics Data System (ADS)
Ren, Shengwei; Zhang, Li; Zhang, Shibing
2016-10-01
Cognitive radio networks have wide applications in the smart home, personal communications, and other wireless communications. Spectrum sensing is the main challenge in cognitive radios. This paper proposes a new spectrum sensing algorithm based on the autocorrelation energy of the received signal. By taking the autocorrelation energy of the received signal as the statistic for spectrum sensing, the effect of channel noise on detection performance is reduced. Simulation results show that the algorithm is effective and performs well at low signal-to-noise ratio. Compared with the maximum generalized eigenvalue detection (MGED) algorithm, the function of covariance matrix based detection (FMD) algorithm, and the autocorrelation-based detection (AD) algorithm, the proposed algorithm has a 2-11 dB advantage.
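A minimal sketch of an autocorrelation-energy test statistic of the kind described, assuming complex baseband samples; the lag range, threshold, and names are illustrative assumptions, and the paper's exact statistic may differ:

    import numpy as np

    def autocorr_energy_statistic(x, max_lag):
        """Energy of the sample autocorrelation over lags 1..max_lag;
        correlated primary-user signals raise it, while white noise
        keeps it near zero."""
        x = x - x.mean()
        n = len(x)
        r = np.array([np.vdot(x[:n - l], x[l:]) / (n - l)
                      for l in range(1, max_lag + 1)])
        return float(np.sum(np.abs(r) ** 2))

    def sense_spectrum(x, max_lag=8, threshold=1e-3):
        """Declare the channel occupied when the statistic exceeds a
        threshold calibrated offline for a target false-alarm rate."""
        return autocorr_energy_statistic(x, max_lag) > threshold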
Fire detection system using random forest classification for image sequences of complex background
NASA Astrophysics Data System (ADS)
Kim, Onecue; Kang, Dong-Joong
2013-06-01
We present a fire alarm system based on image processing that detects fire accidents in various environments. To reduce the false alarms that frequently appeared in earlier systems, we combined image features including color, motion, and blinking information. We specifically define the color conditions of fires in the hue, saturation, and value (HSV) and RGB color spaces. Fire features are represented as intensity variation, color mean and variance, motion, and image differences. Moreover, blinking fire features are modeled by using crossing patches. We propose an algorithm that classifies patches into fire or nonfire areas by using random forest supervised learning. We designed an embedded surveillance device with acrylonitrile butadiene styrene housing for stable fire detection in outdoor environments. The experimental results show that our algorithm works robustly in complex environments and is able to detect fires in real time.
Lining seam elimination algorithm and surface crack detection in concrete tunnel lining
NASA Astrophysics Data System (ADS)
Qu, Zhong; Bai, Ling; An, Shi-Quan; Ju, Fang-Rong; Liu, Ling
2016-11-01
Due to the particularity of the surface of concrete tunnel lining and the diversity of detection environments, such as uneven illumination, smudges, localized rock falls, water leakage, and the inherent seams of the lining structure, existing crack detection algorithms cannot detect real cracks accurately. This paper proposes an algorithm that combines lining seam elimination with an improved percolation detection algorithm based on grid cell analysis for surface crack detection in concrete tunnel lining. First, the characteristics of pixels within overlapping grid cells are checked to remove background noise and generate the percolation seed map (PSM). Second, cracks are detected from the PSM by an accelerated percolation algorithm, so that fracture unit areas can be scanned and connected. Finally, the real surface cracks in the concrete tunnel lining are obtained by removing the lining seams and performing percolation denoising. Experimental results show that the proposed algorithm can detect real surface cracks accurately, quickly, and effectively. Furthermore, it fills a gap in existing concrete tunnel lining surface crack detection by removing the lining seams.
Li, Zhixi; He, Yifan; Keel, Stuart; Meng, Wei; Chang, Robert T; He, Mingguang
2018-03-02
To assess the performance of a deep learning algorithm for detecting referable glaucomatous optic neuropathy (GON) based on color fundus photographs. A deep learning system was developed for automated classification of GON on color fundus photographs. We retrospectively included 48,116 fundus photographs for the development and validation of the deep learning algorithm. This study recruited 21 trained ophthalmologists to classify the photographs. Referable GON was defined as a vertical cup-to-disc ratio of 0.7 or more and other typical changes of GON. The reference standard was established when 3 graders achieved agreement. A separate validation dataset of 8000 fully gradable fundus photographs was used to assess the performance of the algorithm. The area under the receiver operating characteristic curve (AUC) with sensitivity and specificity was applied to evaluate the efficacy of the deep learning algorithm in detecting referable GON. In the validation dataset, the deep learning system achieved an AUC of 0.986 with sensitivity of 95.6% and specificity of 92.0%. The most common reasons for false-negative grading (n = 87) were GON with coexisting eye conditions (n = 44 [50.6%]), including pathologic or high myopia (n = 37 [42.6%]), diabetic retinopathy (n = 4 [4.6%]), and age-related macular degeneration (n = 3 [3.4%]). The leading reason for false-positive results (n = 480) was having other eye conditions (n = 458 [95.4%]), mainly physiologic cupping (n = 267 [55.6%]). Misclassification as a false-positive result amidst a normal-appearing fundus occurred in only 22 eyes (4.6%). A deep learning system can detect referable GON with high sensitivity and specificity. Coexistence of high or pathologic myopia is the most common cause of false-negative results. Physiologic cupping and pathologic myopia were the most common reasons for false-positive results. Copyright © 2018 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
A community detection algorithm based on structural similarity
NASA Astrophysics Data System (ADS)
Guo, Xuchao; Hao, Xia; Liu, Yaqiong; Zhang, Li; Wang, Lu
2017-09-01
In order to further improve the efficiency and accuracy of community detection, a new algorithm named SSTCA (community detection algorithm based on structural similarity with threshold) is proposed. In this algorithm, structural similarities are taken as the weights of edges, and edges whose weights fall below a threshold are removed, which improves the computational efficiency. Tests were done on Zachary's karate club network, the Dolphins social network, and the Football dataset with the proposed algorithm, and the results were compared with the GN and SSNCA algorithms. The results show that the new algorithm is superior to the other algorithms in accuracy on dense networks, and its operating efficiency is improved obviously.
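A minimal sketch of the structural-similarity weighting and threshold pruning described here, using a plain adjacency-set representation; SSTCA's exact similarity definition and its subsequent community extraction step are not reproduced:

    import math

    def structural_similarity(adj, u, v):
        """Cosine-style structural similarity between nodes u and v:
        |N(u) & N(v)| / sqrt(|N(u)| * |N(v)|), with each node
        included in its own neighborhood."""
        nu = adj[u] | {u}
        nv = adj[v] | {v}
        return len(nu & nv) / math.sqrt(len(nu) * len(nv))

    def prune_graph(adj, threshold):
        """Weight every edge by structural similarity and drop edges
        below the threshold, as preprocessing for community detection."""
        pruned = {u: set() for u in adj}
        for u in adj:
            for v in adj[u]:
                if u < v and structural_similarity(adj, u, v) >= threshold:
                    pruned[u].add(v)
                    pruned[v].add(u)
        return pruned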
Data Mining: The Art of Automated Knowledge Extraction
NASA Astrophysics Data System (ADS)
Karimabadi, H.; Sipes, T.
2012-12-01
Data mining algorithms are used routinely in a wide variety of fields and are gaining adoption in the sciences. The realities of real-world data analysis are that (a) data have flaws, and (b) the models and assumptions that we bring to the data are inevitably flawed, biased, or misspecified in some way. Data mining can improve data analysis by detecting anomalies in the data, checking the consistency of the user's model assumptions, and deciphering complex patterns and relationships that would not be accessible otherwise. The common form of data collected from in situ spacecraft measurements is the multivariate time series, which represents one of the most challenging problems in data mining. We have successfully developed algorithms to deal with such data and have extended the algorithms to handle streaming data. In this talk, we illustrate the utility of our algorithms through several examples, including automated detection of reconnection exhausts in the solar wind and flux ropes in the magnetotail. We also show examples from successful applications of our technique to the analysis of 3D kinetic simulations. With an eye to the future, we provide an overview of our upcoming plans, which include collaborative data mining, expert outsourcing data mining, and computer vision for image analysis, among others. Finally, we discuss the integration of data mining algorithms with web-based services such as VxOs and other Heliophysics data centers and the resulting capabilities that it would enable.
A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots.
Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il Dan
2016-03-01
This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%.
Extraction and classification of 3D objects from volumetric CT data
NASA Astrophysics Data System (ADS)
Song, Samuel M.; Kwon, Junghyun; Ely, Austin; Enyeart, John; Johnson, Chad; Lee, Jongkyu; Kim, Namho; Boyd, Douglas P.
2016-05-01
We propose an Automatic Threat Detection (ATD) algorithm for Explosive Detection Systems (EDS) using a multi-stage Segmentation and Carving (SC) step followed by a Support Vector Machine (SVM) classifier. The multi-stage SC step extracts all suspect 3-D objects. A feature vector is then constructed for each extracted object and classified by an SVM previously trained on a set of ground-truth threat and benign objects. The learned SVM classifier has been shown to be effective in classifying different types of threat materials. The proposed ATD algorithm robustly deals with CT data that are prone to artifacts due to scatter and beam hardening, as well as other systematic idiosyncrasies of the CT data. Furthermore, the proposed ATD algorithm is amenable to including newly emerging threat materials as well as to accommodating data from newly developing sensor technologies. The efficacy of the proposed ATD algorithm with the SVM classifier is demonstrated by the Receiver Operating Characteristic (ROC) curve, which relates the Probability of Detection (PD) to the Probability of False Alarm (PFA). Tests performed using CT data of passenger bags show excellent performance characteristics.
Investigation of Coherent and Incoherent Change Detection Algorithms
2017-12-01
An investigation of coherent and incoherent change detection algorithms for synthetic aperture radar (SAR) is presented in this thesis. The investigation utilizes data gathered from the Air Force Research Laboratory (AFRL) Sensor Data Management System (SDMS) in order to compare the various change detection techniques.
Detection of dominant flow and abnormal events in surveillance video
NASA Astrophysics Data System (ADS)
Kwak, Sooyeong; Byun, Hyeran
2011-02-01
We propose an algorithm for abnormal event detection in surveillance video. The proposed algorithm is based on a semi-unsupervised learning method, a feature-based approach that does not detect moving objects individually. The algorithm identifies dominant flow without individual object tracking, using a latent Dirichlet allocation model in crowded environments. It can also automatically detect and localize an abnormally moving object in real-life video. Performance tests were carried out on several real-life databases, and their results show that the proposed algorithm can efficiently detect abnormally moving objects in real time. The proposed algorithm can be applied to any situation in which abnormal directions or abnormal speeds are to be detected.
Characterizing volcanic activity: Application of freely-available webcams
NASA Astrophysics Data System (ADS)
Dehn, J.; Harrild, M.; Webley, P. W.
2017-12-01
In recent years, freely-available web-based cameras, or webcams, have become more readily available, allowing an increased level of monitoring at active volcanoes across the globe. While these cameras have been used extensively as qualitative tools, they provide a unique dataset for performing quantitative analyses of the changing behavior of the particular volcano within the camera's field of view. We focus on the multitude of these freely-available webcams and present a new algorithm to detect changes in volcanic activity using nighttime webcam data. Our approach uses a quick, efficient, and fully automated algorithm to identify changes in webcam data in near real-time, including techniques such as edge detection, Gaussian mixture models, and temporal/spatial statistical tests, which are applied to each target image. Often the image metadata (exposure, gain settings, aperture, focal length, etc.) are unknown, so we designed our algorithm to identify the quantity of volcanically incandescent pixels, together with the number of specific algorithm tests needed to detect thermal activity, instead of directly correlating webcam brightness to eruption temperatures. We compared our algorithm results to a manual analysis of webcam data for several volcanoes and determined a false detection rate of less than 3% for the automated approach. In our presentation, we describe the different tests integrated into our algorithm, lessons learned, and how we applied our method to several volcanoes across the North Pacific during its development and implementation. We will finish with a discussion of the global applicability of our approach and how to build a 24/7, 365-days-a-year tool that can be used as an additional data source for real-time analysis of volcanic activity.
Liu, Min-Yin; Huang, Adam; Huang, Norden E.
2017-01-01
Sleep spindles are brief bursts of brain activity in the sigma frequency range (11–16 Hz) measured by electroencephalography (EEG), mostly during non-rapid eye movement (NREM) stage 2 sleep. These oscillations are of great biological and clinical interest because they potentially play an important role in identifying and characterizing the processes of various neurological disorders. Conventionally, sleep spindles are identified by expert sleep clinicians via visual inspection of EEG signals. The process is laborious and the results are inconsistent among different experts. To resolve the problem, numerous computerized methods have been developed to automate the process of sleep spindle identification. Still, the performance of these automated sleep spindle detection methods varies from study to study. There are two reasons: (1) the lack of common benchmark databases, and (2) the lack of commonly accepted evaluation metrics. In this study, we focus on tackling the second problem by proposing to evaluate the performance of a spindle detector in a multi-objective optimization context, and we hypothesize that using the resultant Pareto fronts to derive evaluation metrics will improve automatic sleep spindle detection. We use a popular multi-objective evolutionary algorithm (MOEA), the Strength Pareto Evolutionary Algorithm (SPEA2), to optimize six existing frequency-based sleep spindle detection algorithms: three Fourier-based, one continuous wavelet transform (CWT)-based, and two Hilbert-Huang transform (HHT)-based algorithms. We also explore three hybrid approaches. Trained and tested on the open-access DREAMS and MASS databases, two new hybrid methods combining Fourier with HHT algorithms show significant performance improvement, with F1-scores of 0.726–0.737. PMID:28572762
Ramsperger, Robert; Meckler, Stefan; Heger, Tanja; van Uem, Janet; Hucker, Svenja; Braatz, Ulrike; Graessner, Holm; Berg, Daniela; Manoli, Yiannos; Serrano, J Artur; Ferreira, Joaquim J; Hobert, Markus A; Maetzler, Walter
2016-05-01
Dyskinesias in Parkinson's disease (PD) patients are a common side effect of long-term dopaminergic therapy and are associated with motor dysfunctions, including gait and balance deficits. Although promising compounds have been developed to treat these symptoms, clinical trials have failed. This failure may, at least partly, be explained by the lack of objective and continuous assessment strategies. This study tested the clinical validity and ecological effect of an algorithm that detects and quantifies dyskinesias of the legs using a single ankle-worn sensor. Twenty-three PD patients (seven with leg dyskinesias) and 13 control subjects were investigated in the lab. Participants performed purposeful daily activity-like tasks while being video-taped. Clinical evaluation was performed using the leg dyskinesia item of the Unified Dyskinesia Rating Scale. The ecological effect of the developed algorithm was investigated in a multi-center, 12-week, home-based sub-study that included three patients with and seven without dyskinesias. In the lab-based sub-study, the sensor-based algorithm exhibited a specificity of 98%, a sensitivity of 85%, and an accuracy of 0.96 for the detection of dyskinesias and a correlation level of 0.61 (p < 0.001) with the clinical severity score. In the home-based sub-study, all patients could be correctly classified regarding the presence or absence of leg dyskinesias, supporting the ecological relevance of the algorithm. This study provides evidence of clinical validity and ecological effect of an algorithm derived from a single sensor on the ankle for detecting leg dyskinesias in PD patients. These results should motivate the investigation of leg dyskinesias in larger studies using wearable sensors. Copyright © 2016 Elsevier Ltd. All rights reserved.
Detection of illicit substances in fingerprints by infrared spectral imaging.
Ng, Ping Hei Ronnie; Walker, Sarah; Tahtouh, Mark; Reedy, Brian
2009-08-01
FTIR and Raman spectral imaging can be used to simultaneously image a latent fingerprint and detect exogenous substances deposited within it. These substances might include drugs of abuse or traces of explosives or gunshot residue. In this work, spectral searching algorithms were tested for their efficacy in finding targeted substances deposited within fingerprints. "Reverse" library searching, where a large number of possibly poor-quality spectra from a spectral image are searched against a small number of high-quality reference spectra, poses problems for common search algorithms as they are usually implemented. Out of a range of algorithms which included conventional Euclidean distance searching, the spectral angle mapper (SAM) and correlation algorithms gave the best results when used with second-derivative image and reference spectra. All methods tested gave poorer performances with first derivative and undifferentiated spectra. In a search against a caffeine reference, the SAM and correlation methods were able to correctly rank a set of 40 confirmed but poor-quality caffeine spectra at the top of a dataset which also contained 4,096 spectra from an image of an uncontaminated latent fingerprint. These methods also successfully and individually detected aspirin, diazepam and caffeine that had been deposited together in another fingerprint, and they did not indicate any of these substances as a match in a search for another substance which was known not to be present. The SAM was used to successfully locate explosive components in fingerprints deposited on silicon windows. The potential of other spectral searching algorithms used in the field of remote sensing is considered, and the applicability of the methods tested in this work to other modes of spectral imaging is discussed.
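A minimal sketch of spectral angle mapper (SAM) matching on second-derivative spectra, the combination the results above favor; the derivative operator (a simple second difference) and the angle cutoff are illustrative assumptions:

    import numpy as np

    def spectral_angle(spectrum, reference):
        """Spectral angle mapper (SAM): the angle between two spectra
        treated as vectors; smaller angles mean closer matches."""
        s = np.asarray(spectrum, float)
        r = np.asarray(reference, float)
        cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
        return float(np.arccos(np.clip(cos, -1.0, 1.0)))

    def match_image(image_spectra, reference, angle_max=0.15):
        """Flag pixels in an (n_pixels, n_bands) spectral image whose
        second-derivative spectra fall within angle_max radians of the
        second-derivative reference spectrum."""
        d2_ref = np.diff(reference, n=2)
        d2_img = np.diff(image_spectra, n=2, axis=1)
        angles = np.array([spectral_angle(px, d2_ref) for px in d2_img])
        return angles < angle_max

Working in second derivatives suppresses baseline offsets and broad background variation, which is consistent with the poorer performance reported for undifferentiated spectra.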
Social Circles Detection from Ego Network and Profile Information
2014-12-19
... the algorithm used to infer k-clique communities is exponential, which makes this technique unfeasible when treating egonets with a large number of users ... when considering RBMs. This inconvenience was solved by implementing a sparsity treatment with the RBM algorithm. (ii) The ground truth was ...
False alarm reduction by the And-ing of multiple multivariate Gaussian classifiers
NASA Astrophysics Data System (ADS)
Dobeck, Gerald J.; Cobb, J. Tory
2003-09-01
The high-resolution sonar is one of the principal sensors used by the Navy to detect and classify sea mines in minehunting operations. For such sonar systems, substantial effort has been devoted to the development of automated detection and classification (D/C) algorithms. These have been spurred by several factors including (1) aids for operators to reduce work overload, (2) more optimal use of all available data, and (3) the introduction of unmanned minehunting systems. The environments where sea mines are typically laid (harbor areas, shipping lanes, and the littorals) give rise to many false alarms caused by natural, biologic, and man-made clutter. The objective of the automated D/C algorithms is to eliminate most of these false alarms while still maintaining a very high probability of mine detection and classification (PdPc). In recent years, the benefits of fusing the outputs of multiple D/C algorithms have been studied. We refer to this as Algorithm Fusion. The results have been remarkable, including reliable robustness to new environments. This paper describes a method for training several multivariate Gaussian classifiers such that their And-ing dramatically reduces false alarms while maintaining a high probability of classification. This training approach is referred to as the Focused-Training method. This work extends our 2001-2002 work where the Focused-Training method was used with three other types of classifiers: the Attractor-based K-Nearest Neighbor Neural Network (a type of radial-basis, probabilistic neural network), the Optimal Discrimination Filter Classifier (based on linear discrimination theory), and the Quadratic Penalty Function Support Vector Machine (QPFSVM). Although our experience has been gained in the area of sea mine detection and classification, the principles described herein are general and can be applied to a wide range of pattern recognition and automatic target recognition (ATR) problems.
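A hedged sketch of the And-ing structure described here: several multivariate Gaussian classifiers, each on its own feature subset, whose acceptances are AND-ed so that false-alarm rates multiply down. The Focused-Training procedure itself is not reproduced; thresholds, feature subsets, and class names are illustrative:

    import numpy as np
    from scipy.stats import multivariate_normal

    class GaussianAndClassifier:
        """AND-combination of multivariate Gaussian classifiers: a
        contact is called a mine only if every member accepts it."""

        def __init__(self, feature_subsets, thresholds):
            self.subsets = feature_subsets  # list of column-index lists
            self.thresholds = thresholds    # per-classifier log-density cutoffs
            self.models = []

        def fit(self, X_mine):
            # Fit one Gaussian per feature subset on confirmed mine contacts.
            for cols in self.subsets:
                Xs = X_mine[:, cols]
                self.models.append(
                    multivariate_normal(mean=Xs.mean(axis=0),
                                        cov=np.cov(Xs, rowvar=False)))
            return self

        def predict(self, X):
            votes = [m.logpdf(X[:, c]) > t
                     for m, c, t in zip(self.models, self.subsets, self.thresholds)]
            return np.logical_and.reduce(votes)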
Quantum machine learning for quantum anomaly detection
NASA Astrophysics Data System (ADS)
Liu, Nana; Rebentrost, Patrick
2018-04-01
Anomaly detection is used for identifying data that deviate from "normal" data patterns. Its usage on classical data finds diverse applications in many important areas such as finance, fraud detection, medical diagnoses, data cleaning, and surveillance. With the advent of quantum technologies, anomaly detection of quantum data, in the form of quantum states, may become an important component of quantum applications. Machine-learning algorithms are playing pivotal roles in anomaly detection using classical data. Two widely used algorithms are the kernel principal component analysis and the one-class support vector machine. We find corresponding quantum algorithms to detect anomalies in quantum states. We show that these two quantum algorithms can be performed using resources that are logarithmic in the dimensionality of quantum states. For pure quantum states, these resources can also be logarithmic in the number of quantum states used for training the machine-learning algorithm. This makes these algorithms potentially applicable to big quantum data applications.
A Formally Verified Conflict Detection Algorithm for Polynomial Trajectories
NASA Technical Reports Server (NTRS)
Narkawicz, Anthony; Munoz, Cesar
2015-01-01
In air traffic management, conflict detection algorithms are used to determine whether or not aircraft are predicted to lose horizontal and vertical separation minima within a time interval assuming a trajectory model. In the case of linear trajectories, conflict detection algorithms have been proposed that are both sound, i.e., they detect all conflicts, and complete, i.e., they do not present false alarms. In general, for arbitrary nonlinear trajectory models, it is possible to define detection algorithms that are either sound or complete, but not both. This paper considers the case of nonlinear aircraft trajectory models based on polynomial functions. In particular, it proposes a conflict detection algorithm that precisely determines whether, given a lookahead time, two aircraft flying polynomial trajectories are in conflict. That is, it has been formally verified that, assuming that the aircraft trajectories are modeled as polynomial functions, the proposed algorithm is both sound and complete.
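For intuition, a floating-point sketch of the underlying decision: two aircraft on polynomial trajectories are in horizontal conflict iff the squared relative distance minus D² is negative somewhere in the lookahead window. The paper's algorithm is formally verified and exact; the numpy root-finding below is only an illustrative approximation:

    import numpy as np

    def horizontal_conflict(px, py, D, T):
        """Decide whether aircraft with relative horizontal position
        given by polynomials px(t), py(t) (coefficient lists, highest
        degree first) lose separation D within lookahead [0, T].
        Conflict iff d(t) = px^2 + py^2 - D^2 < 0 somewhere on [0, T]."""
        d = np.polyadd(np.polymul(px, px), np.polymul(py, py))
        d = np.polyadd(d, [-D**2])
        # Candidate times: endpoints, real roots of d in [0, T], and a
        # point inside each interval between them; sampling these
        # decides negativity because d changes sign only at roots.
        roots = [r.real for r in np.roots(d)
                 if abs(r.imag) < 1e-9 and 0.0 <= r.real <= T]
        pts = sorted([0.0, T] + roots)
        samples = pts + [(a + b) / 2 for a, b in zip(pts, pts[1:])]
        return any(np.polyval(d, t) < 0.0 for t in samples)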
Health management system for rocket engines
NASA Technical Reports Server (NTRS)
Nemeth, Edward
1990-01-01
The functional framework of a failure detection algorithm for the Space Shuttle Main Engine (SSME) is developed. The basic algorithm is based only on existing SSME measurements. Supplemental measurements, expected to enhance failure detection effectiveness, are identified. To support the algorithm development, a figure of merit is defined to estimate the likelihood of SSME criticality 1 failure modes and the failure modes are ranked in order of likelihood of occurrence. Nine classes of failure detection strategies are evaluated and promising features are extracted as the basis for the failure detection algorithm. The failure detection algorithm provides early warning capabilities for a wide variety of SSME failure modes. Preliminary algorithm evaluation, using data from three SSME failures representing three different failure types, demonstrated indications of imminent catastrophic failure well in advance of redline cutoff in all three cases.
A bioinspired collision detection algorithm for VLSI implementation
NASA Astrophysics Data System (ADS)
Cuadri, J.; Linan, G.; Stafford, R.; Keil, M. S.; Roca, E.
2005-06-01
In this paper a bioinspired algorithm for collision detection is proposed, based on previous models of the locust (Locusta migratoria) visual system reported by F.C. Rind and her group, in the University of Newcastle-upon-Tyne. The algorithm is suitable for VLSI implementation in standard CMOS technologies as a system-on-chip for automotive applications. The working principle of the algorithm is to process a video stream that represents the current scenario, and to fire an alarm whenever an object approaches on a collision course. Moreover, it establishes a scale of warning states, from no danger to collision alarm, depending on the activity detected in the current scenario. In the worst case, the minimum time before collision at which the model fires the collision alarm is 40 msec (1 frame before, at 25 frames per second). Since the average time to successfully fire an airbag system is 2 msec, even in the worst case, this algorithm would be very helpful to more efficiently arm the airbag system, or even take some kind of collision avoidance countermeasures. Furthermore, two additional modules have been included: a "Topological Feature Estimator" and an "Attention Focusing Algorithm". The former takes into account the shape of the approaching object to decide whether it is a person, a road line or a car. This helps to take more adequate countermeasures and to filter false alarms. The latter centres the processing power into the most active zones of the input frame, thus saving memory and processing time resources.
Combining Visible and Infrared Spectral Tests for Dust Identification
NASA Technical Reports Server (NTRS)
Zhou, Yaping; Levy, Robert; Kleidman, Richard; Remer, Lorraine; Mattoo, Shana
2016-01-01
The MODIS Dark Target aerosol algorithm over ocean (DT-O) uses spectral reflectance in the visible, near-IR, and SWIR wavelengths to determine aerosol optical depth (AOD) and Angstrom Exponent (AE). Even though DT-O does have "dust-like" models to choose from, dust is not identified a priori before inversion. The "dust-like" models are not true dust models, as they are spherical and do not have enough absorption at short wavelengths, so retrieved AOD and AE for dusty regions tend to be biased. The inference of "dust" is based on users' postprocessing criteria for AOD and AE. Dust aerosol has known spectral signatures in the near-UV (deep blue), visible, and thermal infrared (TIR) wavelength regions. Multiple dust detection algorithms have been developed over the years with varying detection capabilities. Here, we test a few of these dust detection algorithms to determine whether they can help inform the choices made by the DT-O algorithm. We evaluate the following methods: the multichannel imager (MCI) algorithm, which uses spectral threshold tests in the 0.47, 0.64, 0.86, 1.38, 2.26, 3.9, 11.0, and 12.0 μm channels and a spatial uniformity test [Zhao et al., 2010], and the NOAA dust aerosol index (DAI), which uses spectral contrast in the blue channels (412 nm and 440 nm) [Ciren and Kundragunta, 2014]. The MCI is already included as tests within the "Wisconsin" (MOD35) cloud mask algorithm.
Automatic detection of multi-level acetowhite regions in RGB color images of the uterine cervix
NASA Astrophysics Data System (ADS)
Lange, Holger
2005-04-01
Uterine cervical cancer is the second most common cancer among women worldwide. Colposcopy is a diagnostic method used to detect cancer precursors and cancer of the uterine cervix, whereby a physician (colposcopist) visually inspects the metaplastic epithelium on the cervix for certain distinctly abnormal morphologic features. A contrast agent, a 3-5% acetic acid solution, is used, causing abnormal and metaplastic epithelia to turn white. The colposcopist considers diagnostic features such as the acetowhite, blood vessel structure, and lesion margin to derive a clinical diagnosis. STI Medical Systems is developing a Computer-Aided-Diagnosis (CAD) system for colposcopy -- ColpoCAD, a complex image analysis system that at its core assesses the same visual features as used by colposcopists. The acetowhite feature has been identified as one of the most important individual predictors of lesion severity. Here, we present the details and preliminary results of a multi-level acetowhite region detection algorithm for RGB color images of the cervix, including the detection of the anatomic features: cervix, os and columnar region, which are used for the acetowhite region detection. The RGB images are assumed to be glare free, either obtained by cross-polarized image acquisition or glare removal pre-processing. The basic approach of the algorithm is to extract a feature image from the RGB image that provides a good acetowhite to cervix background ratio, to segment the feature image using novel pixel grouping and multi-stage region-growing algorithms that provide region segmentations with different levels of detail, to extract the acetowhite regions from the region segmentations using a novel region selection algorithm, and then finally to extract the multi-levels from the acetowhite regions using multiple thresholds. The performance of the algorithm is demonstrated using human subject data.
Multi-object Detection and Discrimination Algorithms
2015-03-26
with an algorithm similar to a depth-‐first search . This stage of the algorithm is O(CN). From...Multi-object Detection and Discrimination Algorithms This document contains an overview of research and work performed and published at the University...of Florida from October 1, 2009 to October 31, 2013 pertaining to proposal 57306CS: Multi-object Detection and Discrimination Algorithms
Laser-Induced Breakdown Spectroscopy: A Review of Applied Explosive Detection
2013-09-01
... proximal trace detection. We show that the algorithms for material identification could be improved by including the critical signatures (e.g., C2 ...). ... ion mobility spectrometry (IMS), desorption electrospray ionization (DESI), laser electrospray mass spectrometry (LEMS), and emerging antibody/antigen-based efforts.
Video Shot Boundary Detection Using QR-Decomposition and Gaussian Transition Detection
NASA Astrophysics Data System (ADS)
Amiri, Ali; Fathy, Mahmood
2010-12-01
This article explores the problem of video shot boundary detection and examines a novel shot boundary detection algorithm using QR-decomposition and modeling of gradual transitions by Gaussian functions. Specifically, the authors attend to the challenges of detecting gradual shots and extracting appropriate spatiotemporal features that affect the ability of algorithms to efficiently detect shot boundaries. The algorithm utilizes the properties of QR-decomposition and extracts a block-wise probability function that indicates the probability of video frames being in shot transitions. The probability function has abrupt changes at hard cut transitions and semi-Gaussian behavior in gradual transitions. The algorithm detects these transitions by analyzing the probability function. Finally, the authors report the results of experiments using the large-scale test set provided by TRECVID 2006, which has assessments for hard cut and gradual shot boundary detection. These results confirm the high performance of the proposed algorithm.
Improved detection and false alarm rejection for chemical vapors using passive hyperspectral imaging
NASA Astrophysics Data System (ADS)
Marinelli, William J.; Miyashiro, Rex; Gittins, Christopher M.; Konno, Daisei; Chang, Shing; Farr, Matt; Perkins, Brad
2013-05-01
Two AIRIS sensors were tested at Dugway Proving Grounds against chemical agent vapor simulants. The primary objectives of the test were to: 1) assess the performance of algorithm improvements designed to reduce false alarm rates, with special emphasis on solar effects, and 2) evaluate target detection performance at 5 km. The tests included 66 total releases comprising alternating 120 kg glacial acetic acid (GAA) and 60 kg triethyl phosphate (TEP) events. The AIRIS sensors had common algorithms, detection thresholds, and sensor parameters. The sensors used the target set defined for the Joint Service Lightweight Chemical Agent Detector (JSLSCAD), with TEP substituted for GA and GAA substituted for VX. They were exercised at two sites located either 3 km or 5 km from the release point. Data from the tests will be presented showing that: 1) excellent detection capability was obtained at both ranges, with significantly shorter alarm times at 5 km; 2) inter-sensor comparison revealed very comparable performance; 3) false alarm rates of < 1 incident per 10 hours of running time were achieved over 143 hours of sensor operations; and 4) algorithm improvements eliminated both solar and cloud false alarms. The algorithms enabling the improved false alarm rejection will be discussed. The sensor technology has recently been extended to address the detection of liquid and solid chemical agents and toxic industrial chemicals on surfaces. The phenomenology and applicability of passive infrared hyperspectral imaging to this problem will be discussed and demonstrated.
Automated aortic calcification detection in low-dose chest CT images
NASA Astrophysics Data System (ADS)
Xie, Yiting; Htwe, Yu Maw; Padgett, Jennifer; Henschke, Claudia; Yankelevitz, David; Reeves, Anthony P.
2014-03-01
The extent of aortic calcification has been shown to be a risk indicator for vascular events, including cardiac events. We have developed a fully automated computer algorithm to segment and measure aortic calcification in low-dose, non-contrast, non-ECG-gated chest CT scans. The algorithm first segments the aorta using a pre-computed Anatomy Label Map (ALM). Then, based on the segmented aorta, aortic calcification is detected and measured in terms of the Agatston score, mass score, and volume score. The automated scores are compared with reference scores obtained from manual markings. For aorta segmentation, the aorta is modeled as a series of discrete overlapping cylinders and the aortic centerline is determined using a cylinder-tracking algorithm. The aortic surface location is then detected using the centerline and a triangular mesh model. The segmented aorta is used as a mask for the detection of aortic calcification. For calcification detection, the image is first filtered, then an elevated threshold of 160 Hounsfield units (HU) is used within the aorta mask region to reduce the effect of noise in low-dose scans, and finally non-aortic calcification voxels (bony structures, calcification in other organs) are eliminated. The remaining candidates are considered true aortic calcification. The computer algorithm was evaluated on 45 low-dose non-contrast CT scans. Using linear regression, the automated Agatston score is 98.42% correlated with the reference Agatston score. The automated mass and volume scores are 98.46% and 98.28% correlated with the reference mass and volume scores, respectively.
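A hedged sketch of the per-slice scoring step, assuming the standard Agatston density weights and a pre-computed aorta mask; the paper's filtering and non-aortic-calcification removal steps are omitted, and names are illustrative:

    import numpy as np
    from scipy import ndimage

    def agatston_score(ct_slice, aorta_mask, pixel_area_mm2, hu_thresh=160):
        """Score calcification in one CT slice: threshold inside the
        aorta mask, label connected lesions, and accumulate lesion
        area times the standard Agatston weight of each lesion's
        peak HU (130-199 -> 1, 200-299 -> 2, 300-399 -> 3, >=400 -> 4)."""
        cand = (ct_slice >= hu_thresh) & aorta_mask
        labels, n = ndimage.label(cand)
        score = 0.0
        for lesion in range(1, n + 1):
            voxels = ct_slice[labels == lesion]
            weight = 1 + min(int(voxels.max() // 100) - 1, 3)
            score += voxels.size * pixel_area_mm2 * weight
        return score

The volume score would instead sum lesion volumes directly, and the mass score would weight each lesion by its mean HU; both are straightforward variants of the same loop.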
NASA Astrophysics Data System (ADS)
Fotin, Sergei V.; Yin, Yin; Haldankar, Hrishikesh; Hoffmeister, Jeffrey W.; Periaswamy, Senthil
2016-03-01
Computer-aided detection (CAD) has been used in screening mammography for many years and is likely to be utilized for digital breast tomosynthesis (DBT). Higher detection performance is desirable as it may have an impact on radiologists' decisions and clinical outcomes. Recently, algorithms based on deep convolutional architectures have been shown to achieve state-of-the-art performance in object classification and detection. Similarly, we trained a deep convolutional neural network directly on patches sampled from two-dimensional mammography and reconstructed DBT volumes and compared its performance to a conventional CAD algorithm based on computation and classification of hand-engineered features. The detection performance was evaluated on an independent test set of 344 DBT reconstructions (GE SenoClaire 3D, iterative reconstruction algorithm) containing 328 suspicious and 115 malignant soft tissue densities, including masses and architectural distortions. Detection sensitivity was measured on a region of interest (ROI) basis at a rate of five detection marks per volume. Moving from the conventional to the deep learning approach increased ROI sensitivity from 0.832 ± 0.040 to 0.893 ± 0.033 for suspicious ROIs, and from 0.852 ± 0.065 to 0.930 ± 0.046 for malignant ROIs. These results indicate the high utility of deep feature learning in the analysis of DBT data and the high potential of the method for broader medical image analysis tasks.
Human body motion tracking based on quantum-inspired immune cloning algorithm
NASA Astrophysics Data System (ADS)
Han, Hong; Yue, Lichuan; Jiao, Licheng; Wu, Xing
2009-10-01
In a static monocular camera system, obtaining a perfect 3D human body posture is a great challenge for computer vision. This paper presents human posture recognition from video sequences using the Quantum-Inspired Immune Cloning Algorithm (QICA). The algorithm includes three parts. First, prior knowledge of human anatomy is used: the key joint points of the human body are detected automatically from the human contours and from skeletons thinned from those contours. Second, due to the complexity of human movement, a forecasting mechanism for occluded joint points is introduced to obtain optimal 2D key joint points of the human body. Finally, pose estimation is recovered by optimizing the match between the 2D projection of the 3D human key joint points and the detected 2D key joint points using QICA, which recovers the movement of the human body well because the algorithm can find not only locally but also globally optimal solutions.
Applied algorithm in the liner inspection of solid rocket motors
NASA Astrophysics Data System (ADS)
Hoffmann, Luiz Felipe Simões; Bizarria, Francisco Carlos Parquet; Bizarria, José Walter Parquet
2018-03-01
In rocket motors, the bonding between the solid propellant and thermal insulation is accomplished by a thin adhesive layer, known as liner. The liner application method involves a complex sequence of tasks, which includes in its final stage, the surface integrity inspection. Nowadays in Brazil, an expert carries out a thorough visual inspection to detect defects on the liner surface that may compromise the propellant interface bonding. Therefore, this paper proposes an algorithm that uses the photometric stereo technique and the K-nearest neighbor (KNN) classifier to assist the expert in the surface inspection. Photometric stereo allows the surface information recovery of the test images, while the KNN method enables image pixels classification into two classes: non-defect and defect. Tests performed on a computer vision based prototype validate the algorithm. The positive results suggest that the algorithm is feasible and when implemented in a real scenario, will be able to help the expert in detecting defective areas on the liner surface.
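A minimal sketch of the KNN pixel-classification stage, assuming photometric-stereo surface normals as per-pixel features; the feature choice, k, and function names are illustrative assumptions rather than the authors' implementation:

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def train_liner_classifier(features, labels, k=5):
        """Fit a KNN classifier mapping per-pixel feature vectors
        (e.g., surface normals from photometric stereo) to the two
        classes: 0 = non-defect, 1 = defect."""
        return KNeighborsClassifier(n_neighbors=k).fit(features, labels)

    def classify_surface(model, normal_map):
        """Classify every pixel of an (H, W, 3) photometric-stereo
        normal map and return a binary defect mask."""
        h, w, c = normal_map.shape
        preds = model.predict(normal_map.reshape(-1, c))
        return preds.reshape(h, w).astype(bool)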
NASA Astrophysics Data System (ADS)
Zhang, Yu; Li, Fei; Zhang, Shengkai; Zhu, Tingting
2017-04-01
Synthetic Aperture Radar (SAR) is significantly important for polar remote sensing, since it provides continuous observations in all days and all weather. SAR can be used to extract surface roughness information characterized by the variance of dielectric properties across different polarization channels, which makes it possible to observe different ice types and surface structure for deformation analysis. In November 2016, the 33rd Chinese National Antarctic Research Expedition (CHINARE) cruise set sail into the Antarctic sea ice zone. An accurate spatial distribution of leads in the sea ice zone is essential for routine planning of ship navigation. In this study, the semantic relationship between leads and sea ice categories is described by a Conditional Random Field (CRF) model, and lead characteristics are modeled by statistical distributions in SAR imagery. In the proposed algorithm, a mixture statistical distribution based CRF is developed by considering contextual information and the statistical characteristics of sea ice to improve lead detection in Sentinel-1A dual-polarization SAR imagery. The unary and pairwise potentials in the CRF model are constructed by integrating the posterior probabilities estimated from the statistical distributions. For mixture statistical distribution parameter estimation, the Method of Logarithmic Cumulants (MoLC) is exploited to estimate the parameters of each single statistical distribution, and an iterative Expectation Maximization (EM) algorithm is investigated to calculate the parameters of the mixture statistical distribution based CRF model. For posterior probability inference, a graph-cut energy minimization method is adopted in the initial lead detection. Post-processing procedures, including an aspect ratio constraint and spatial smoothing, are utilized to improve the visual result. The proposed method is validated on Sentinel-1A SAR C-band Extra Wide Swath (EW) Ground Range Detected (GRD) imagery with a pixel spacing of 40 meters near the Prydz Bay area, East Antarctica. The main work is as follows: 1) a mixture statistical distribution based CRF algorithm has been developed for lead detection from Sentinel-1A dual-polarization images; 2) an assessment of the proposed mixture statistical distribution based CRF method against the single distribution based CRF algorithm has been presented; 3) preferable parameter sets, including the statistical distributions, the aspect ratio threshold, and the spatial smoothing window size, have been provided. In the future, the proposed algorithm will be developed for operational processing of the Sentinel series data sets owing to its low time cost and high accuracy in lead detection.
Detecting Pulsing Denial-of-Service Attacks with Nondeterministic Attack Intervals
NASA Astrophysics Data System (ADS)
Luo, Xiapu; Chan, Edmond W. W.; Chang, Rocky K. C.
2009-12-01
This paper addresses the important problem of detecting pulsing denial of service (PDoS) attacks which send a sequence of attack pulses to reduce TCP throughput. Unlike previous works which focused on a restricted form of attacks, we consider a very broad class of attacks. In particular, our attack model admits any attack interval between two adjacent pulses, whether deterministic or not. It also includes the traditional flooding-based attacks as a limiting case (i.e., zero attack interval). Our main contribution is Vanguard, a new anomaly-based detection scheme for this class of PDoS attacks. The Vanguard detection is based on three traffic anomalies induced by the attacks, and it detects them using a CUSUM algorithm. We have prototyped Vanguard and evaluated it on a testbed. The experiment results show that Vanguard is more effective than the previous methods that are based on other traffic anomalies (after a transformation using wavelet transform, Fourier transform, and autocorrelation) and detection algorithms (e.g., dynamic time warping).
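A minimal sketch of a one-sided CUSUM detector of the kind Vanguard applies to its traffic anomalies; the statistic fed to it, the drift allowance, and the threshold are deployment-specific assumptions:

    def cusum_detect(x, target_mean, drift, threshold):
        """One-sided CUSUM test: accumulate deviations of each traffic
        sample above the expected mean (minus a drift allowance) and
        raise an alarm when the cumulative sum crosses the threshold."""
        s = 0.0
        for i, xi in enumerate(x):
            s = max(0.0, s + (xi - target_mean - drift))
            if s > threshold:
                return i   # index of first alarm
        return -1          # no attack detected

Because the sum resets to zero whenever samples look normal, CUSUM stays sensitive to attack pulses regardless of whether the intervals between them are deterministic, which matches the broad attack model considered here.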
Trace gas detection in hyperspectral imagery using the wavelet packet subspace
NASA Astrophysics Data System (ADS)
Salvador, Mark A. Z.
This dissertation describes research into a new remote sensing method to detect trace gases in hyperspectral and ultra-spectral data. The new method is based on the wavelet packet transform and attempts to improve both the computational tractability and the detection of trace gases in airborne and spaceborne spectral imagery. Atmospheric trace gas research supports various Earth science disciplines, including climatology, vulcanology, pollution monitoring, natural disasters, and intelligence and military applications. Hyperspectral and ultra-spectral data significantly add to the data glut of existing Earth science data sets; spaceborne spectral data in particular significantly increase spectral resolution while performing daily global collections of the Earth. Application of the wavelet packet transform to the spectral space of hyperspectral and ultra-spectral imagery data potentially improves remote sensing detection algorithms. It also facilitates the parallelization of these methods for high-performance computing. This research seeks two science goals: (1) developing a new spectral imagery detection algorithm, and (2) facilitating the parallelization of trace gas detection in spectral imagery data.
Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael
2009-01-01
Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
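A minimal sketch of the classic ABFT checksum scheme for matrix multiplication referenced above; the checksum layout is the textbook row/column-sum construction, not necessarily the exact variant integrated into the onboard analysis code:

    import numpy as np

    def abft_matmul(A, B, tol=1e-6):
        """Multiply with algorithm-based fault tolerance: extend A with
        a column-checksum row and B with a row-checksum column,
        multiply, and verify that the checksums of the product still
        hold. A radiation-induced bit flip in the product breaks a
        checksum and is flagged."""
        Ac = np.vstack([A, A.sum(axis=0)])                  # column checksums
        Br = np.hstack([B, B.sum(axis=1, keepdims=True)])   # row checksums
        C = Ac @ Br
        body = C[:-1, :-1]                                  # the true product A @ B
        ok = (np.allclose(C[-1, :-1], body.sum(axis=0), atol=tol) and
              np.allclose(C[:-1, -1], body.sum(axis=1), atol=tol))
        return body, ok

The check works because the last row of Ac @ Br must equal the column sums of A @ B and the last column must equal its row sums; any single corrupted entry violates at least one of these identities.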
Fast and accurate image recognition algorithms for fresh produce food safety sensing
NASA Astrophysics Data System (ADS)
Yang, Chun-Chieh; Kim, Moon S.; Chao, Kuanglin; Kang, Sukwon; Lefcourt, Alan M.
2011-06-01
This research developed and evaluated the multispectral algorithms derived from hyperspectral line-scan fluorescence imaging under violet LED excitation for detection of fecal contamination on Golden Delicious apples. The algorithms utilized the fluorescence intensities at four wavebands, 680 nm, 684 nm, 720 nm, and 780 nm, for computation of simple functions for effective detection of contamination spots created on the apple surfaces using four concentrations of aqueous fecal dilutions. The algorithms detected more than 99% of the fecal spots. The effective detection of feces showed that a simple multispectral fluorescence imaging algorithm based on violet LED excitation may be appropriate to detect fecal contamination on fast-speed apple processing lines.
Luo, Wei; Davis, Geoff; Li, LiXia; Shriver, M Kathleen; Mei, Joanne; Styer, Linda M; Parker, Monica M; Smith, Amanda; Paz-Bailey, Gabriela; Ethridge, Steve; Wesolowski, Laura; Owen, S Michele; Masciotra, Silvina
2017-06-01
FDA-approved antigen/antibody combo and HIV-1/2 differentiation supplemental tests do not have claims for dried blood spot (DBS) use. We compared two DBS-modified protocols, the Bio-Rad GS HIV Combo Ag/Ab (BRC) EIA and the Geenius™ HIV-1/2 (Geenius) Supplemental Assay, to plasma protocols and evaluated them in the CDC/APHL HIV diagnostic algorithm. BRC-DBS p24 analytical sensitivity was calculated from serial dilutions of p24. DBS specimens included 11 HIV-1 seroconverters, 151 HIV-1-positive individuals (including 20 on antiretroviral therapy), 31 HIV-2-positive individuals, and one HIV-1/HIV-2-positive individual. BRC-reactive specimens were tested with Geenius using the same DBS eluate. Matched plasma specimens were tested with BRC, an IgG/IgM immunoassay, and Geenius. DBS and plasma results were compared using McNemar's test. A DBS algorithm applied to 348 DBS from high-risk individuals who participated in surveillance was compared to HIV status based on local testing algorithms. BRC-DBS detects p24 at a concentration 18 times higher than in plasma. In seroconverters, BRC-DBS detected more infections than the IgG/IgM immunoassay in plasma (p=0.0133), but fewer infections than BRC-plasma (p=0.0133). In addition, the BRC/Geenius-plasma algorithm identified more HIV-1 infections than the BRC/Geenius-DBS algorithm (p=0.0455). The DBS protocols correctly identified HIV status for established HIV-1 infections, including those on therapy, HIV-2 infections, and surveillance specimens. The DBS protocols exhibited promising performance and allowed rapid supplemental testing. Although the DBS algorithm missed some early infections, it showed similar results when applied to specimens from a high-risk population. Implementation of a DBS algorithm would benefit testing programs without capacity for venipuncture. Published by Elsevier B.V.
Gas leak detection in infrared video with background modeling
NASA Astrophysics Data System (ADS)
Zeng, Xiaoxia; Huang, Likun
2018-03-01
Background modeling plays an important role in the task of gas detection based on infrared video. The ViBe algorithm has been a widely used background modeling algorithm in recent years. However, its processing speed sometimes cannot meet the requirements of real-time detection applications. Therefore, building on the traditional ViBe algorithm, we propose a fast foreground model and refine the results by combining a connected-domain algorithm and the nine-spaces algorithm in the subsequent processing steps. Experiments show the effectiveness of the proposed method.
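For orientation, here is a compact sketch of a ViBe-style per-pixel sample model; the parameter values and the frame-replication initialization are simplifications (published ViBe initializes each pixel's samples from its spatial neighborhood), and the paper's fast foreground model and post-processing steps are not reproduced.

```python
import numpy as np

N, R, MIN_MATCHES, SUBSAMPLE = 20, 20, 2, 16   # illustrative ViBe-style values

def init_model(frame):
    """Replicate the first frame N times as the initial sample set."""
    return np.repeat(frame[None, :, :], N, axis=0).astype(np.int16)

def segment_and_update(model, frame, rng=np.random.default_rng()):
    dist = np.abs(model - frame.astype(np.int16))   # N x H x W sample distances
    matches = (dist < R).sum(axis=0)
    fg = matches < MIN_MATCHES                      # foreground (gas plume) mask
    # Conservative, stochastic update: background pixels occasionally replace
    # one of their N stored samples with the current value.
    upd = (~fg) & (rng.integers(0, SUBSAMPLE, frame.shape) == 0)
    idx = rng.integers(0, N, frame.shape)
    ys, xs = np.nonzero(upd)
    model[idx[ys, xs], ys, xs] = frame[ys, xs]
    return fg
```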
NASA Astrophysics Data System (ADS)
Salamatova, T.; Zhukov, V.
2017-02-01
The paper presents the application of the artificial immune systems apparatus as a heuristic method of network intrusion detection for the algorithmic provision of intrusion detection systems. A coevolutionary immune algorithm with clonal selection was elaborated. Empirical results evaluating the algorithm's effectiveness were obtained by testing on different datasets. To gauge its efficiency, the algorithm was compared with analogous methods. The fundamental rules underlying the solutions generated by the algorithm are also described in the article.
Early Results from the Global Precipitation Measurement (GPM) Mission in Japan
NASA Astrophysics Data System (ADS)
Kachi, Misako; Kubota, Takuji; Masaki, Takeshi; Kaneko, Yuki; Kanemaru, Kaya; Oki, Riko; Iguchi, Toshio; Nakamura, Kenji; Takayabu, Yukari N.
2015-04-01
The Global Precipitation Measurement (GPM) mission is an international collaboration to achieve highly accurate and highly frequent global precipitation observations. The GPM mission consists of the GPM Core Observatory, jointly developed by the U.S. and Japan, and Constellation Satellites that carry microwave radiometers and are provided by the GPM partner agencies. The Dual-frequency Precipitation Radar (DPR) was developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT) and installed on the GPM Core Observatory. The GPM Core Observatory flies in a non-sun-synchronous orbit to continue the diurnal-cycle observations of rainfall begun by the Tropical Rainfall Measuring Mission (TRMM) satellite; it was successfully launched at 3:37 a.m. on February 28, 2014 (JST). The Constellation Satellites, including JAXA's Global Change Observation Mission (GCOM) - Water (GCOM-W1), or "SHIZUKU," are launched by each partner agency around 2014 and contribute to expanding observation coverage and increasing observation frequency. JAXA develops the DPR Level 1 algorithm, and the NASA-JAXA Joint Algorithm Team develops the DPR Level 2 and DPR-GMI combined Level 2 algorithms. JAXA also develops the Global Rainfall Map (GPM-GSMaP) algorithm, the latest version of the Global Satellite Mapping of Precipitation (GSMaP), as a national product that distributes hourly rainfall maps at 0.1-degree horizontal resolution. Major improvements in the GPM-GSMaP algorithm are: 1) improvements in the microwave imager algorithm based on the AMSR2 precipitation standard algorithm, including a new land algorithm and a new coast detection scheme; 2) development of an orographic rainfall correction method for warm rainfall in coastal areas (Taniguchi et al., 2012); 3) updates of databases, including rainfall detection over land and the land surface emission database; 4) development of a microwave sounder algorithm over land (Kida et al., 2012); and 5) development of a gauge-calibrated GSMaP algorithm (Ushio et al., 2013). In addition to these algorithm improvements, the number of passive microwave imagers and/or sounders used in GPM-GSMaP was increased compared to the previous version. After the early calibration and validation of the products and evaluation that all products achieved the release criteria, all GPM standard products and the GPM-GSMaP product have been released to the public since September 2014. The GPM products can be downloaded via the internet through the JAXA G-Portal (https://www.gportal.jaxa.jp).
The SARS algorithm: detrending CoRoT light curves with Sysrem using simultaneous external parameters
NASA Astrophysics Data System (ADS)
Ofir, Aviv; Alonso, Roi; Bonomo, Aldo Stefano; Carone, Ludmila; Carpano, Stefania; Samuel, Benjamin; Weingrill, Jörg; Aigrain, Suzanne; Auvergne, Michel; Baglin, Annie; Barge, Pierre; Borde, Pascal; Bouchy, Francois; Deeg, Hans J.; Deleuil, Magali; Dvorak, Rudolf; Erikson, Anders; Mello, Sylvio Ferraz; Fridlund, Malcolm; Gillon, Michel; Guillot, Tristan; Hatzes, Artie; Jorda, Laurent; Lammer, Helmut; Leger, Alain; Llebaria, Antoine; Moutou, Claire; Ollivier, Marc; Päetzold, Martin; Queloz, Didier; Rauer, Heike; Rouan, Daniel; Schneider, Jean; Wuchterl, Guenther
2010-05-01
Surveys for exoplanetary transits are usually limited not by photon noise but rather by the amount of red noise in their data. In particular, although the CoRoT space-based survey data are being carefully scrutinized, significant new sources of systematic noise are still being discovered. Recently, a magnitude-dependent systematic effect was discovered in the CoRoT data by Mazeh et al. and a phenomenological correction was proposed. Here we tie the observed effect to a particular type of systematic effect and, in the process, generalize the popular Sysrem algorithm to include external parameters in a simultaneous solution with the unknown effects. We show that a post-processing scheme based on this algorithm performs well and indeed allows for the detection of new transit-like signals that were not previously detected.
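For reference, the classical Sysrem iteration that this paper generalizes alternates two weighted least-squares updates; the sketch below implements that baseline (the SARS generalization would instead tie the per-image term to simultaneously fitted external parameters). Array names are illustrative.

```python
import numpy as np

def sysrem_effect(r, sig, n_iter=20):
    """Remove one Sysrem effect from residual light curves.

    r[i, j]: residual of star i in image j; sig: matching uncertainties.
    Alternately solves for per-star coefficients c and per-image effects a
    minimizing sum((r - c*a)^2 / sig^2), as in Tamuz, Mazeh & Zucker (2005).
    """
    w = 1.0 / sig**2
    a = np.ones(r.shape[1])    # per-image "airmass-like" effect
    c = np.zeros(r.shape[0])   # per-star response coefficient
    for _ in range(n_iter):
        c = (r * a[None, :] * w).sum(1) / ((a[None, :]**2 * w).sum(1))
        a = (r * c[:, None] * w).sum(0) / ((c[:, None]**2 * w).sum(0))
    return r - np.outer(c, a), c, a
```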
Gold - A novel deconvolution algorithm with optimization for waveform LiDAR processing
NASA Astrophysics Data System (ADS)
Zhou, Tan; Popescu, Sorin C.; Krause, Keith; Sheridan, Ryan D.; Putman, Eric
2017-07-01
Waveform Light Detection and Ranging (LiDAR) data have advantages over discrete-return LiDAR data in accurately characterizing vegetation structure. However, we lack a comprehensive understanding of waveform data processing approaches under different topography and vegetation conditions. The objective of this paper is to highlight a novel deconvolution algorithm, the Gold algorithm, for processing waveform LiDAR data with optimal deconvolution parameters. Further, we present a comparative study of waveform processing methods to provide insight into selecting an approach for a given combination of vegetation and terrain characteristics. We employed two waveform processing methods: (1) direct decomposition, and (2) deconvolution followed by decomposition. In the second method, we utilized two deconvolution algorithms, the Richardson-Lucy (RL) algorithm and the Gold algorithm. Comprehensive and quantitative comparisons were conducted in terms of the number of detected echoes, position accuracy, and the bias of the end products (such as the digital terrain model (DTM) and canopy height model (CHM)) from the corresponding reference data, along with parameter uncertainty for these end products obtained from the different methods. This study was conducted at three study sites spanning diverse ecological regions and vegetation and elevation gradients. Results demonstrate that the two deconvolution algorithms are sensitive to the pre-processing steps of the input data. The deconvolution and decomposition method is more capable of detecting hidden echoes with a lower false echo detection rate, especially for the Gold algorithm. Compared to the reference data, all approaches generate satisfactory accuracy with small mean spatial differences (<1.22 m for DTMs, <0.77 m for CHMs) and root mean square errors (RMSE) (<1.26 m for DTMs, <1.93 m for CHMs). More specifically, the Gold algorithm is superior to the others with a smaller RMSE (<1.01 m), while the direct decomposition approach works better in terms of the percentage of spatial differences within 0.5 and 1 m. The parameter uncertainty analysis demonstrates that the Gold algorithm outperforms the other approaches in dense vegetation areas, with the smallest RMSE, while the RL algorithm performs better in sparse vegetation areas in terms of RMSE. Additionally, higher uncertainty occurs in areas with steep slopes and dense vegetation. This study provides an alternative and innovative approach for waveform processing that will benefit high-fidelity processing of waveform LiDAR data to characterize vegetation structure.
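A minimal sketch of the Gold ratio iteration for nonnegative 1D deconvolution is given below, applied to a toy two-echo waveform; the Gaussian system response, iteration count, and initialization are illustrative, not the paper's optimized settings.

```python
import numpy as np

def gold_deconvolve(y, H, n_iter=200):
    """Gold iteration: x <- x * (H^T y) / (H^T H x); preserves nonnegativity."""
    x = np.full(H.shape[1], max(y.mean(), 1e-6))
    Hty = H.T @ y
    for _ in range(n_iter):
        x *= Hty / np.maximum(H.T @ (H @ x), 1e-12)
    return x

# Toy waveform: two overlapping returns blurred by a Gaussian system pulse.
n = 256
t = np.arange(n)
pulse = np.exp(-0.5 * ((t - n // 2) / 3.0) ** 2)
H = np.array([np.roll(pulse, k - n // 2) for k in range(n)]).T  # convolution matrix
truth = np.zeros(n); truth[[100, 110]] = [1.0, 0.6]
x_hat = gold_deconvolve(H @ truth, H)   # hidden echoes re-sharpened into two peaks
```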
Effect of segmentation algorithms on the performance of computerized detection of lung nodules in CT
Guo, Wei; Li, Qiang
2014-01-01
Purpose: The purpose of this study is to reveal how the performance of a lung nodule segmentation algorithm impacts the performance of lung nodule detection, and to provide guidelines for choosing an appropriate segmentation algorithm with appropriate parameters in a computer-aided detection (CAD) scheme. Methods: The database consisted of 85 CT scans with 111 nodules of 3 mm or larger in diameter from the standard CT lung nodule database created by the Lung Image Database Consortium. The initial nodule candidates were identified as those with strong response to a selective nodule enhancement filter. A uniform viewpoint reformation technique was applied to a three-dimensional nodule candidate to generate 24 two-dimensional (2D) reformatted images, which would be used to effectively distinguish between true nodules and false positives. Six different algorithms were employed to segment the initial nodule candidates in the 2D reformatted images. Finally, 2D features from the segmented areas in the 24 reformatted images were determined, selected, and classified for removal of false positives. Therefore, there were six similar CAD schemes, in which only the segmentation algorithms were different. The six segmentation algorithms included fixed thresholding (FT), Otsu thresholding (OTSU), fuzzy C-means (FCM), Gaussian mixture model (GMM), the Chan and Vese model (CV), and local binary fitting (LBF). The mean Jaccard index and the mean absolute distance (Dmean) were employed to evaluate the performance of the segmentation algorithms, and the number of false positives at a fixed sensitivity was employed to evaluate the performance of the CAD schemes. Results: For the segmentation algorithms FT, OTSU, FCM, GMM, CV, and LBF, the highest mean Jaccard indices between the segmented nodule and the ground truth were 0.601, 0.586, 0.588, 0.563, 0.543, and 0.553, respectively, and the corresponding Dmean values were 1.74, 1.80, 2.32, 2.80, 3.48, and 3.18 pixels, respectively. With these segmentation results, the six CAD schemes reported 4.4, 8.8, 3.4, 9.2, 13.6, and 10.4 false positives per CT scan at a sensitivity of 80%, respectively. Conclusions: When multiple algorithms are available for segmenting nodule candidates in a CAD scheme, the “optimal” segmentation algorithm did not necessarily lead to the “optimal” CAD detection performance. PMID:25186393
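As one concrete example of the six candidates, the following is a minimal sketch of Otsu thresholding for a grayscale (uint8) candidate patch, chosen here because it is fully data-driven; the patch itself is a stand-in.

```python
import numpy as np

def otsu_threshold(img):
    """img: 2-D uint8 array; returns the threshold maximizing between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                     # probability of the low class
    mu = np.cumsum(p * np.arange(256))       # cumulative first moment
    mu_t = mu[-1]                            # global mean intensity
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0     # empty classes carry no variance
    return int(np.argmax(sigma_b))

patch = (np.random.rand(32, 32) * 255).astype(np.uint8)  # stand-in candidate patch
mask = patch > otsu_threshold(patch)                     # binary segmentation
```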
Computerized classification of proximal occlusion in the left anterior descending coronary artery.
Gregg, Richard E; Nikus, Kjell C; Zhou, Sophia H; Startt Selvester, Ronald H; Barbara, Victoria
2010-01-01
Proximal occlusion within the left anterior descending (LAD) coronary artery in patients with acute myocardial infarction leads to higher mortality than does nonproximal occlusion. We evaluated an automated program to detect proximal LAD occlusion. All patients with suspected acute coronary syndrome (n = 7,710) presenting consecutively to the emergency department of a local hospital with a coronary angiogram–confirmed flow-limiting lesion and notation of occlusion site were included in the study (n = 711). Electrocardiograms (ECGs) that met ST-segment elevation myocardial infarction (STEMI) criteria were included in the training set (n = 183). Paired angiographic location of proximal LAD and ECGs with ST elevation in the anterolateral region were used for the computer program development (n = 36). The test set was based on ECG criteria for anterolateral STEMI only, without angiographic reports (n = 162). Tested against 2 expert cardiologists' agreed reading of proximal LAD occlusion, the algorithm has a sensitivity of 95% and a specificity of 82%. The algorithm is designed to have high sensitivity rather than high specificity for the purpose of not missing any proximal LAD occlusion in the STEMI population. Our preliminary evaluation suggests that the algorithm can detect proximal LAD occlusion as an additional interpretation to STEMI detection with accuracy similar to that of cardiologist readers.
NASA Astrophysics Data System (ADS)
Schlueter, Kristy; Dabiri, John
2016-11-01
Coherent structure identification is important in many fluid dynamics applications, including transport phenomena in ocean flows and mixing and diffusion in turbulence. However, many of the techniques currently available for measuring such flows, including ocean drifter datasets and particle tracking velocimetry, only result in sparse velocity data. This is often insufficient for the use of current coherent structure detection algorithms based on analysis of the deformation gradient. Here, we present a frame-invariant method for detecting coherent structures from Lagrangian flow trajectories that can be sparse in number. The method, based on principles used in graph coloring algorithms, examines a measure of the kinematic dissimilarity of all pairs of flow trajectories, either measured experimentally, e.g. using particle tracking velocimetry; or numerically, by advecting fluid particles in the Eulerian velocity field. Coherence is assigned to groups of particles whose kinematics remain similar throughout the time interval for which trajectory data is available, regardless of their physical proximity to one another. Through the use of several analytical and experimental validation cases, this algorithm is shown to robustly detect coherent structures using significantly less flow data than is required by existing methods. This research was supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program.
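The following is a loose sketch of the general recipe under stated assumptions: one plausible kinematic dissimilarity (the RMS difference of frame-to-frame displacement vectors) and grouping by connected components of the similarity graph, which is a simplification of the graph-coloring machinery the authors describe.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def coherent_groups(traj, eps):
    """traj: (P, T, 2) particle positions; returns one group label per particle.

    Particles whose kinematics stay within eps of each other over the whole
    interval are linked, regardless of their physical separation.
    """
    vel = np.diff(traj, axis=1)                 # (P, T-1, 2) displacements
    P = traj.shape[0]
    D = np.zeros((P, P))
    for i in range(P):                          # pairwise kinematic dissimilarity
        D[i] = np.sqrt(((vel - vel[i]) ** 2).sum(axis=(1, 2)) / vel.shape[1])
    adj = csr_matrix(D < eps)                   # "kinematically similar" graph
    _, labels = connected_components(adj, directed=False)
    return labels
```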
Multiagency Urban Search Experiment Detector and Algorithm Test Bed
NASA Astrophysics Data System (ADS)
Nicholson, Andrew D.; Garishvili, Irakli; Peplow, Douglas E.; Archer, Daniel E.; Ray, William R.; Swinney, Mathew W.; Willis, Michael J.; Davidson, Gregory G.; Cleveland, Steven L.; Patton, Bruce W.; Hornback, Donald E.; Peltz, James J.; McLean, M. S. Lance; Plionis, Alexander A.; Quiter, Brian J.; Bandstra, Mark S.
2017-07-01
In order to provide benchmark data sets for radiation detector and algorithm development, a particle transport test bed has been created using experimental data as model input and validation. A detailed radiation measurement campaign at the Combined Arms Collective Training Facility in Fort Indiantown Gap, PA (FTIG), USA, provides sample background radiation levels for a variety of materials present at the site (including cinder block, gravel, asphalt, and soil) using long dwell high-purity germanium (HPGe) measurements. In addition, detailed light detection and ranging data and ground-truth measurements inform model geometry. This paper describes the collected data and the application of these data to create background and injected source synthetic data for an arbitrary gamma-ray detection system using particle transport model detector response calculations and statistical sampling. In the methodology presented here, HPGe measurements inform model source terms while detector response calculations are validated via long dwell measurements using 2"×4"×16" NaI(Tl) detectors at a variety of measurement points. A collection of responses, along with sampling methods and interpolation, can be used to create data sets to gauge radiation detector and algorithm (including detection, identification, and localization) performance under a variety of scenarios. Data collected at the FTIG site are available for query, filtering, visualization, and download at muse.lbl.gov.
Automatic red eye correction and its quality metric
NASA Astrophysics Data System (ADS)
Safonov, Ilia V.; Rychagov, Michael N.; Kang, KiMin; Kim, Sang Ho
2008-01-01
Red eye artifacts are a troublesome defect of amateur photos. Correcting red eyes during printing without user intervention, making photos more pleasant for the observer, is an important task. A novel, efficient technique for automatic red eye correction aimed at photo printers is proposed. The algorithm is independent of face orientation and capable of detecting paired red eyes as well as single red eyes. The approach is based on 3D tables with typicalness levels for red-eye and human skin tones, and on directional edge detection filters for processing the redness image. Machine learning is applied for feature selection. For classification of red eye regions, a cascade of classifiers including a Gentle AdaBoost committee of Classification and Regression Trees (CART) is applied. The retouching stage includes desaturation, darkening, and blending with the initial image. Several implementation variants are possible, trading off detection and correction quality, processing time, and memory consumption. A numeric quality criterion for automatic red eye correction is also proposed. This quality metric is constructed by applying the Analytic Hierarchy Process (AHP) to consumer opinions about correction outcomes. The proposed metric helped to choose algorithm parameters via an optimization procedure. Experimental results demonstrate the high accuracy and efficiency of the proposed algorithm in comparison with existing solutions.
NASA Astrophysics Data System (ADS)
Trepte, Q.; Minnis, P.; Palikonda, R.; Yost, C. R.; Rodier, S. D.; Trepte, C. R.; McGill, M. J.
2016-12-01
Geostationary satellites provide continuous cloud and meteorological observations important for weather forecasting and for understanding climate processes. The Himawari-8 satellite represents a new generation of measurement capabilities with significantly improved resolution and enhanced spectral information. The satellite was launched in October 2014 by the Japan Meteorological Agency and is centered at 140° E to provide coverage over eastern Asia and the western Pacific region. A cloud detection algorithm was developed as part of the CERES Cloud Mask algorithm using the Advanced Himawari Imager (AHI), a 16-channel multi-spectral imager. The algorithm was originally designed for use with Meteosat Second Generation (MSG) data and has been adapted for Himawari-8 AHI measurements. This paper will describe the improvements in the Himawari cloud mask, including daytime ocean low cloud and aerosol discrimination, nighttime thin cirrus detection, and Australian desert and coastal cloud detection. Statistics from CERES Himawari cloud mask results matched with CALIPSO lidar data and with new observations from the CATS lidar will also be presented. A feature of the CATS instrument on board the International Space Station is that it provides observations at different solar viewing times, making it possible to examine the diurnal variation of clouds and to evaluate the performance of the cloud mask at different sun angles.
Algorithm and assessment work of active fire detection based on FengYun-3C/VIRR
NASA Astrophysics Data System (ADS)
Lin, Z.; Chen, F.
2017-12-01
The wildfire is one of the most destructive and uncontrollable disasters and causes huge environmental, ecological, and social effects. To better serve scientific research and practical fire management, an algorithm and corresponding validation work for active fire detection based on FengYun-3C/VIRR data, an optical sensor onboard the Chinese polar-orbiting sun-synchronous meteorological satellite, are introduced here. While the main structure inherits from the 'contextual algorithm', some new concepts, including an 'infrared channel slope', are introduced for better adaptation to different situations. The validation work contains three parts: 1) comparison with the current FengYun-3C fire product GFR; 2) comparison with MODIS fire products; and 3) comparison with Landsat series data. Study areas were selected from different places all over the world from 2014 to 2016. The results showed a great improvement over the GFR product in both positioning accuracy and detection rate. In most study areas, the results match well with MODIS products and Landsat series data (with over 85% match degree) despite differences in imaging time. However, detection rates and match degrees in Africa and South-east Asia are not yet satisfactory (around 70%), where numerous small fire events and the corresponding smoke may strongly affect the results of the algorithm. This remains our future research direction and one of the main improvements still to be achieved.
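A hedged sketch of a generic contextual fire test is shown below: a pixel is flagged when its mid-infrared brightness temperature and the MIR-TIR difference stand out from a local background window. The window size, k factor, and thresholds are illustrative, not the VIRR algorithm's values, and the 'infrared channel slope' concept is not reproduced.

```python
import numpy as np

def contextual_fire_mask(bt_mir, bt_tir, win=7, k=3.0, dt_min=10.0):
    """bt_mir, bt_tir: brightness-temperature images (K) in MIR and TIR bands."""
    h, w = bt_mir.shape
    r = win // 2
    fire = np.zeros((h, w), dtype=bool)
    dt = bt_mir - bt_tir                       # MIR-TIR difference channel
    for y in range(r, h - r):
        for x in range(r, w - r):
            bg = bt_mir[y - r:y + r + 1, x - r:x + r + 1]
            bg_dt = dt[y - r:y + r + 1, x - r:x + r + 1]
            # Fire pixel: hot relative to its background in both tests,
            # and with an absolute MIR-TIR contrast above dt_min.
            fire[y, x] = (bt_mir[y, x] > bg.mean() + k * bg.std() and
                          dt[y, x] > bg_dt.mean() + k * bg_dt.std() and
                          dt[y, x] > dt_min)
    return fire
```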
Infrared dim and small target detecting and tracking method inspired by Human Visual System
NASA Astrophysics Data System (ADS)
Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Shen, Lurong; Bai, Shengjian
2014-01-01
Detecting and tracking dim and small targets in infrared images and videos is one of the most important techniques in many computer vision applications, such as video surveillance and infrared imaging precise guidance. Recently, more and more algorithms based on the Human Visual System (HVS) have been proposed to detect and track infrared dim and small targets. In general, HVS involves at least three mechanisms: the contrast mechanism, visual attention, and eye movement. However, most existing algorithms simulate only a single one of these mechanisms, resulting in many drawbacks. A novel method which combines the three mechanisms of HVS is proposed in this paper. First, a group of Difference of Gaussians (DOG) filters, which simulate the contrast mechanism, are used to filter the input image. Second, visual attention, simulated by a Gaussian window, is added at a point near the target (the attention point) in order to further enhance the dim small target. Finally, the Proportional-Integral-Derivative (PID) algorithm is introduced, for the first time, to predict the attention point of the next frame, simulating human eye movement. Experimental results on infrared images with different types of backgrounds demonstrate the high efficiency and accuracy of the proposed method in detecting and tracking dim and small targets.
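A minimal sketch of the first two stages, DoG contrast filtering and a Gaussian attention window, is given below; the sigmas, window width, and assumed attention point are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_enhance(frame, sigma_c=1.0, sigma_s=3.0):
    """Difference of Gaussians: center response minus surround response."""
    return gaussian_filter(frame, sigma_c) - gaussian_filter(frame, sigma_s)

def attention_weight(shape, point, sigma=15.0):
    """Gaussian window centered on the (predicted) attention point."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((yy - point[0])**2 + (xx - point[1])**2) / (2 * sigma**2))

frame = np.random.rand(128, 128)                       # stand-in IR frame
saliency = dog_enhance(frame) * attention_weight(frame.shape, (64, 64))
target = np.unravel_index(np.argmax(saliency), saliency.shape)
```

In the full method, the detected target location would feed a PID predictor that moves the attention point for the next frame.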
Soft-Fault Detection Technologies Developed for Electrical Power Systems
NASA Technical Reports Server (NTRS)
Button, Robert M.
2004-01-01
The NASA Glenn Research Center, partner universities, and defense contractors are working to develop intelligent power management and distribution (PMAD) technologies for future spacecraft and launch vehicles. The goals are to provide higher performance (efficiency, transient response, and stability), higher fault tolerance, and higher reliability through the application of digital control and communication technologies. It is also expected that these technologies will eventually reduce the design, development, manufacturing, and integration costs for large electrical power systems for space vehicles. The main focus of this research has been to incorporate digital control, communications, and intelligent algorithms into power electronic devices such as direct-current to direct-current (dc-dc) converters and protective switchgear. These technologies, in turn, will enable revolutionary changes in the way electrical power systems are designed, developed, configured, and integrated in aerospace vehicles and satellites. Initial successes in integrating modern digital controllers have proven that transient response performance can be improved using advanced nonlinear control algorithms. One technology being developed is the detection of "soft faults," those not typically covered by current systems in use today. Soft faults include arcing faults, corona discharge faults, and undetected leakage currents. Using digital control and advanced signal analysis algorithms, we have shown that it is possible to reliably detect arcing faults in high-voltage dc power distribution systems. Another research effort has shown that low-level leakage faults and cable degradation can be detected by analyzing power system parameters over time. This additional fault detection capability will result in higher reliability for long-lived power systems such as reusable launch vehicles and space exploration missions.
Railway obstacle detection algorithm using neural network
NASA Astrophysics Data System (ADS)
Yu, Mingyang; Yang, Peng; Wei, Sen
2018-05-01
To address the difficulty of obstacle detection in outdoor railway scenes, a data-oriented method based on a neural network is proposed. First, we annotate objects (such as people, trains, and animals) in images acquired from the Internet, and then use residual learning units to build a Fast R-CNN framework. The network is then trained to learn target image characteristics using the stochastic gradient descent algorithm. Finally, the well-trained model is used to analyze an outdoor railway image; if the image includes trains or other objects, an alert is issued. Experiments show that the warning accuracy reached 94.85%.
NASA Astrophysics Data System (ADS)
Weber, Bruce A.
2005-07-01
We have performed an experiment that compares the performance of human observers with that of a robust algorithm for the detection of targets in difficult, nonurban forward-looking infrared imagery. Our purpose was to benchmark the comparison and document performance differences for future algorithm improvement. The scale-insensitive detection algorithm, used as a benchmark by the Night Vision Electronic Sensors Directorate for algorithm evaluation, employed a combination of contrast-like features to locate targets. Detection receiver operating characteristic curves and observer-confidence analyses were used to compare human and algorithmic responses and to gain insight into differences. The test database contained ground targets, in natural clutter, whose detectability, as judged by human observers, ranged from easy to very difficult. In general, as compared with human observers, the algorithm detected most of the same targets but correlated confidence with correct detections poorly and produced many more false alarms at any useful level of performance. Though characterizing human performance was not the intent of this study, the results suggest that previous observational experience was not a strong predictor of human performance, and that combining individual human observations by majority vote significantly reduced false-alarm rates.
Carvalho, Gustavo A; Minnett, Peter J; Fleming, Lora E; Banzon, Viva F; Baringer, Warner
2010-06-01
In a continuing effort to develop suitable methods for the surveillance of Harmful Algal Blooms (HABs) of Karenia brevis using satellite radiometers, a new multi-algorithm method was developed to explore whether improvements in the remote sensing detection of the Florida Red Tide was possible. A Hybrid Scheme was introduced that sequentially applies the optimized versions of two pre-existing satellite-based algorithms: an Empirical Approach (using water-leaving radiance as a function of chlorophyll concentration) and a Bio-optical Technique (using particulate backscatter along with chlorophyll concentration). The long-term evaluation of the new multi-algorithm method was performed using a multi-year MODIS dataset (2002 to 2006; during the boreal Summer-Fall periods, July to December) along the Central West Florida Shelf between 25.75°N and 28.25°N. Algorithm validation was done with in situ measurements of the abundances of K. brevis; cell counts ≥1.5×10⁴ cells l⁻¹ defined a detectable HAB. Encouraging statistical results were derived when either or both algorithms correctly flagged known samples. The majority of the valid match-ups were correctly identified (~80% of both HABs and non-blooming conditions) and few false negatives or false positives were produced (~20% of each). Additionally, most of the HAB-positive identifications in the satellite data were indeed HAB samples (positive predictive value: ~70%) and those classified as HAB-negative were almost all non-bloom cases (negative predictive value: ~86%). These results demonstrate an excellent detection capability, on average ~10% more accurate than the individual algorithms used separately. Thus, the new Hybrid Scheme could become a powerful tool for environmental monitoring of K. brevis blooms, with valuable consequences including leading to the more rapid and efficient use of ships to make in situ measurements of HABs.
SA-SOM algorithm for detecting communities in complex networks
NASA Astrophysics Data System (ADS)
Chen, Luogeng; Wang, Yanran; Huang, Xiaoming; Hu, Mengyu; Hu, Fang
2017-10-01
Currently, community detection is a hot topic. Building on the self-organizing map (SOM) algorithm, this paper introduces the idea of self-adaptation (SA), whereby the number of communities can be identified automatically, and proposes a novel SA-SOM algorithm for detecting communities in complex networks. Several representative real-world networks and a set of computer-generated networks from the LFR benchmark are utilized to verify the accuracy and efficiency of this algorithm. The experimental findings demonstrate that the algorithm can identify communities automatically, accurately, and efficiently. Furthermore, it also achieves higher values of modularity, NMI, and density than the SOM algorithm does.
A novel adaptive, real-time algorithm to detect gait events from wearable sensors.
Chia Bejarano, Noelia; Ambrosini, Emilia; Pedrocchi, Alessandra; Ferrigno, Giancarlo; Monticone, Marco; Ferrante, Simona
2015-05-01
A real-time, adaptive algorithm based on two inertial and magnetic sensors placed on the shanks was developed for gait-event detection. For each leg, the algorithm detected the Initial Contact (IC), as the minimum of the flexion/extension angle, and the End Contact (EC) and the Mid-Swing (MS), as minimum and maximum of the angular velocity, respectively. The algorithm consisted of calibration, real-time detection, and step-by-step update. Data collected from 22 healthy subjects (21 to 85 years) walking at three self-selected speeds were used to validate the algorithm against the GaitRite system. Comparable levels of accuracy and significantly lower detection delays were achieved with respect to other published methods. The algorithm robustness was tested on ten healthy subjects performing sudden speed changes and on ten stroke subjects (43 to 89 years). For healthy subjects, F1-scores of 1 and mean detection delays lower than 14 ms were obtained. For stroke subjects, F1-scores of 0.998 and 0.944 were obtained for IC and EC, respectively, with mean detection delays always below 31 ms. The algorithm accurately detected gait events in real time from a heterogeneous dataset of gait patterns and paves the way for the design of closed-loop controllers for customized gait trainings and/or assistive devices.
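A minimal sketch of the stated event rules follows, using scipy's peak finder; the sampling rate and refractory distance are illustrative stand-ins for the thresholds the algorithm learns in its calibration phase.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_gait_events(angle, gyro, fs=100.0, min_step_s=0.4):
    """angle: shank flexion/extension angle; gyro: shank angular velocity.

    IC at angle minima, EC at angular-velocity minima, MS at maxima, per the
    rules described above; returns event times in seconds.
    """
    dist = int(min_step_s * fs)                  # refractory distance, samples
    ic, _ = find_peaks(-angle, distance=dist)    # Initial Contact
    ec, _ = find_peaks(-gyro, distance=dist)     # End Contact
    ms, _ = find_peaks(gyro, distance=dist)      # Mid-Swing
    return ic / fs, ec / fs, ms / fs
```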
NASA Astrophysics Data System (ADS)
Chang, Chun; Huang, Benxiong; Xu, Zhengguang; Li, Bin; Zhao, Nan
2018-02-01
Three soft-input-soft-output (SISO) detection methods for dual-polarized quadrature duobinary (DP-QDB), including maximum-logarithmic-maximum-a-posteriori-probability-algorithm (Max-log-MAP)-based detection, soft-output-Viterbi-algorithm (SOVA)-based detection, and a proposed SISO detection, all of which can be combined with SISO decoding, are presented. The three detection methods are investigated by simulation at 128 Gb/s in five-channel wavelength-division-multiplexing uncoded and low-density-parity-check (LDPC) coded DP-QDB systems. Max-log-MAP-based detection needs the returning-to-initial-states (RTIS) process despite having the best performance. When an LDPC code with a code rate of 0.83 is used, the detecting-and-decoding scheme with the proposed SISO detection does not need RTIS and has better bit error rate (BER) performance than the scheme with SOVA-based detection. The former can reduce the optical signal-to-noise ratio (OSNR) requirement (at BER = 10⁻⁵) by 2.56 dB relative to the latter. The application of the SISO iterative detection in LDPC-coded DP-QDB systems makes a good trade-off among transmission efficiency, OSNR requirement, and transmission distance, compared with the other two SISO methods.
Fuzzy automata and pattern matching
NASA Technical Reports Server (NTRS)
Setzer, C. B.; Warsi, N. A.
1986-01-01
A wide-ranging search for articles and books concerned with fuzzy automata and syntactic pattern recognition is presented. A number of survey articles on image processing and feature detection were included. Hough's algorithm is presented to illustrate the way in which knowledge about an image can be used to interpret the details of the image. It was found that on hand-generated pictures the algorithm worked well at following straight lines but had great difficulty turning corners. An algorithm was developed which produces a minimal finite automaton recognizing a given finite set of strings. One difficulty of the construction is that, in some cases, this minimal automaton is not unique for a given set of strings and a given maximum length. This algorithm compares favorably with other inference algorithms. More importantly, the algorithm produces an automaton with a rigorously described relationship to the original set of strings that does not depend on the algorithm itself.
AdaBoost-based algorithm for network intrusion detection.
Hu, Weiming; Hu, Wei; Maybank, Steve
2008-04-01
Network intrusion detection aims at distinguishing the attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack fashions, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers. The decision rules are provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data.
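A minimal sketch of the weak-learner setup, AdaBoost over decision stumps, is shown below using scikit-learn; the synthetic features and labels merely stand in for the mixed categorical/continuous connection features described in the paper.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 10))                    # stand-in connection features
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)     # stand-in attack/normal labels

# Decision stumps (depth-1 trees) as weak classifiers; note that scikit-learn
# versions before 1.2 name this keyword base_estimator rather than estimator.
clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                         n_estimators=100)
clf.fit(X[:800], y[:800])
print("held-out accuracy:", clf.score(X[800:], y[800:]))
```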
Corner detection and sorting method based on improved Harris algorithm in camera calibration
NASA Astrophysics Data System (ADS)
Xiao, Ying; Wang, Yonghong; Dan, Xizuo; Huang, Anqi; Hu, Yue; Yang, Lianxiang
2016-11-01
In the traditional Harris corner detection algorithm, the threshold used to eliminate false corners is selected manually. In order to detect corners automatically, an improved algorithm combining Harris with the circular boundary theory of corners is proposed in this paper. After detecting accurate corner coordinates using the Harris and Forstner algorithms, false corners within the chessboard pattern of the calibration plate can be eliminated automatically using circular boundary theory. Moreover, a corner sorting method based on an improved calibration plate is proposed to eliminate false background corners and sort the remaining corners in order. Experimental results show that the proposed algorithms can eliminate all false corners and sort the remaining corners correctly and automatically.
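For reference, a minimal sketch of the Harris response with a data-driven (rather than manual) threshold is given below; the relative threshold fraction and neighborhood size are illustrative, and the circular boundary test and Forstner refinement are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel, maximum_filter

def harris_corners(img, sigma=1.5, k=0.04, rel_thresh=0.01):
    """img: 2-D float image; returns (row, col) corner coordinates."""
    Ix, Iy = sobel(img, axis=1), sobel(img, axis=0)     # image gradients
    Sxx = gaussian_filter(Ix * Ix, sigma)               # structure tensor
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    R = (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2   # Harris response
    # Keep local maxima above a fraction of the global peak response.
    peaks = (R == maximum_filter(R, size=5)) & (R > rel_thresh * R.max())
    return np.argwhere(peaks)
```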
QRS Detection Algorithm for Telehealth Electrocardiogram Recordings.
Khamis, Heba; Weiss, Robert; Xie, Yang; Chang, Chan-Wei; Lovell, Nigel H; Redmond, Stephen J
2016-07-01
QRS detection algorithms are needed to analyze electrocardiogram (ECG) recordings generated in telehealth environments. However, the numerous published QRS detectors focus on clean clinical data. Here, a "UNSW" QRS detection algorithm is described that is suitable for clinical ECG and also poorer quality telehealth ECG. The UNSW algorithm generates a feature signal containing information about ECG amplitude and derivative, which is filtered according to its frequency content and an adaptive threshold is applied. The algorithm was tested on clinical and telehealth ECG and the QRS detection performance is compared to the Pan-Tompkins (PT) and Gutiérrez-Rivas (GR) algorithm. For the MIT-BIH Arrhythmia database (virtually artifact free, clinical ECG), the overall sensitivity (Se) and positive predictivity (+P) of the UNSW algorithm was >99%, which was comparable to PT and GR. When applied to the MIT-BIH noise stress test database (clinical ECG with added calibrated noise) after artifact masking, all three algorithms had overall Se >99%, and the UNSW algorithm had higher +P (98%, p < 0.05) than PT and GR. For 250 telehealth ECG records (unsupervised recordings; dry metal electrodes), the UNSW algorithm had 98% Se and 95% +P which was superior to PT (+P: p < 0.001) and GR (Se and +P: p < 0.001). This is the first study to describe a QRS detection algorithm for telehealth data and evaluate it on clinical and telehealth ECG with superior results to published algorithms. The UNSW algorithm could be used to manage increasing telehealth ECG analysis workloads.
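In the spirit of the description above, the sketch below builds a feature signal from amplitude and derivative information, band-limits it, and thresholds it; all constants are illustrative assumptions, and the published UNSW algorithm's adaptive threshold is only loosely approximated by the robust global level used here.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_qrs(ecg, fs=500.0):
    """Return approximate QRS times (s) from a single-lead ECG array."""
    b, a = butter(2, [5 / (fs / 2), 25 / (fs / 2)], btype='band')
    filt = filtfilt(b, a, ecg)                               # band-limited ECG
    feature = np.abs(filt) * np.abs(np.gradient(filt))       # amplitude x slope
    w = int(0.08 * fs)
    smooth = np.convolve(feature, np.ones(w) / w, mode='same')
    # Threshold at a fraction of a robust estimate of recent peak height.
    top = np.sort(smooth)[-max(len(smooth) // 50, 1):]
    peaks, _ = find_peaks(smooth, distance=int(0.25 * fs),
                          height=0.3 * np.median(top))
    return peaks / fs
```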
Peteye detection and correction
NASA Astrophysics Data System (ADS)
Yen, Jonathan; Luo, Huitao; Tretter, Daniel
2007-01-01
Redeyes are caused by the camera flash light reflecting off the retina. Peteyes refer to similar artifacts in the eyes of other mammals caused by camera flash. In this paper we present a peteye removal algorithm for detecting and correcting peteye artifacts in digital images. Peteye removal for animals is significantly more difficult than redeye removal for humans, because peteyes can be any of a variety of colors, and human face detection cannot be used to localize the animal eyes. In many animals, including dogs and cats, the retina has a special reflective layer that can cause a variety of peteye colors, depending on the animal's breed, age, or fur color, etc. This makes the peteye correction more challenging. We have developed a semi-automatic algorithm for peteye removal that can detect peteyes based on the cursor position provided by the user and correct them by neutralizing the colors with glare reduction and glint retention.
An Evaluation of Pixel-Based Methods for the Detection of Floating Objects on the Sea Surface
NASA Astrophysics Data System (ADS)
Borghgraef, Alexander; Barnich, Olivier; Lapierre, Fabian; Van Droogenbroeck, Marc; Philips, Wilfried; Acheroy, Marc
2010-12-01
Ship-based automatic detection of small floating objects on an agitated sea surface remains a hard problem. Our main concern is the detection of floating mines, which proved a real threat to shipping in confined waterways during the first Gulf War, but applications include salvaging, search-and-rescue operations, and perimeter or harbour defense. Detection in the infrared (IR) is challenging because a rough sea appears as a dynamic background of moving objects with size, shape, and temperature similar to those of a floating mine. In this paper we apply a selection of background subtraction algorithms to the problem, and we show that recent algorithms such as ViBe and behaviour subtraction, which take into account spatial and temporal correlations within the dynamic scene, significantly outperform the more conventional parametric techniques, with only few prior assumptions about the physical properties of the scene.
A new approach based on the median filter to T-wave detection in ECG signal.
Kholkhal, Mourad; Bereksi Reguig, Fethi
2014-07-01
The electrocardiogram (ECG) is one of the most used signals in the diagnosis of heart disease. It contains different waves which directly correlate with heart activity. Different methods have been used to detect these waves and consequently support the diagnosis of heart activity. This paper focuses in particular on the detection of the T-wave, which represents the repolarization phase of cardiac activity. The proposed approach is based on an algorithmic procedure that detects the T-wave using several filters, including mean and median filters. The proposed algorithm is implemented and tested on a set of ECG recordings taken from the European ST-T, MIT-BIH, and MIT-BIH ST databases. The results are found to be very satisfactory in terms of sensitivity, predictivity, and error compared to other works in the field.
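A minimal sketch of the filtering idea is given below: a wide median filter estimates and removes baseline wander, a short mean filter smooths the result, and the largest deflection in a post-R window is taken as the T-wave candidate. The window lengths and search window are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np
from scipy.signal import medfilt

def detrend_and_smooth(ecg, fs=250):
    k = int(0.6 * fs) | 1                        # odd median window, ~0.6 s
    baseline = medfilt(ecg, kernel_size=k)       # slow baseline wander estimate
    flat = ecg - baseline
    w = max(int(0.04 * fs), 1)                   # short mean (smoothing) filter
    return np.convolve(flat, np.ones(w) / w, mode='same')

def t_wave_peaks(ecg, r_peaks, fs=250):
    """r_peaks: sample indices of detected R waves; returns T-wave candidates."""
    sig = detrend_and_smooth(np.asarray(ecg, dtype=float), fs)
    lo, hi = int(0.12 * fs), int(0.40 * fs)      # ST-T search window after R
    return [r + lo + int(np.argmax(np.abs(sig[r + lo:r + hi])))
            for r in r_peaks if r + hi < len(sig)]
```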
Detection of disease outbreaks by the use of oral manifestations.
Torres-Urquidy, M H; Wallstrom, G; Schleyer, T K L
2009-01-01
Oral manifestations of diseases caused by bioterrorist agents could be a potential data source for biosurveillance. This study had the objectives of determining the oral manifestations of diseases caused by bioterrorist agents, measuring the prevalence of these manifestations in emergency department reports, and constructing and evaluating a detection algorithm based on them. We developed a software application to detect oral manifestations in free text and identified positive reports over three years of data. The normal frequency in reports for oral manifestations related to anthrax (including buccal ulcers-sore throat) was 7.46%. The frequency for tularemia was 6.91%. For botulism and smallpox, the frequencies were 0.55% and 0.23%. We simulated outbreaks for these bioterrorism diseases and evaluated the performance of our system. The detection algorithm performed better for smallpox and botulism than for anthrax and tularemia. We found that oral manifestations can be a valuable tool for biosurveillance.
Comparative analysis of peak-detection techniques for comprehensive two-dimensional chromatography.
Latha, Indu; Reichenbach, Stephen E; Tao, Qingping
2011-09-23
Comprehensive two-dimensional gas chromatography (GC×GC) is a powerful technology for separating complex samples. The typical goal of GC×GC peak detection is to aggregate data points of analyte peaks based on their retention times and intensities. Two techniques commonly used for two-dimensional peak detection are the two-step algorithm and the watershed algorithm. A recent study [4] compared the performance of the two-step and watershed algorithms for GC×GC data with retention-time shifts in the second-column separations. In that analysis, the peak retention-time shifts were corrected while applying the two-step algorithm but the watershed algorithm was applied without shift correction. The results indicated that the watershed algorithm has a higher probability of erroneously splitting a single two-dimensional peak than the two-step approach. This paper reconsiders the analysis by comparing peak-detection performance for resolved peaks after correcting retention-time shifts for both the two-step and watershed algorithms. Simulations with wide-ranging conditions indicate that when shift correction is employed with both algorithms, the watershed algorithm detects resolved peaks with greater accuracy than the two-step method. Copyright © 2011 Elsevier B.V. All rights reserved.
Christiansen, Peter; Nielsen, Lars N; Steen, Kim A; Jørgensen, Rasmus N; Karstoft, Henrik
2016-11-11
Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained to detect and classify a predefined set of object types. These algorithms have difficulties detecting distant and heavily occluded objects and are, by definition, not capable of detecting unknown object types or unusual scenarios. The visual characteristics of an agricultural field are homogeneous, and obstacles, like people, animals and other objects, occur rarely and are of distinct appearance compared to the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection to exploit the homogeneous characteristics of a field to perform anomaly detection. We demonstrate DeepAnomaly as a fast state-of-the-art detector for obstacles that are distant, heavily occluded and unknown. DeepAnomaly is compared to state-of-the-art obstacle detectors including "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" (RCNN). In a human detector test case, we demonstrate that DeepAnomaly detects humans at longer ranges (45-90 m) than RCNN, while RCNN has similar performance at short range (0-30 m). However, DeepAnomaly has far fewer model parameters and (182 ms/25 ms =) a 7.28-times faster processing time per image. Unlike most CNN-based methods, the high accuracy, the low computation time and the low memory footprint make it suitable for a real-time system running on an embedded GPU (Graphics Processing Unit).
Nemoto, Mitsutaka; Hayashi, Naoto; Hanaoka, Shouhei; Nomura, Yukihiro; Miki, Soichiro; Yoshikawa, Takeharu
2017-10-01
We propose a generalized framework for developing computer-aided detection (CADe) systems whose characteristics depend only on those of the training dataset. The purpose of this study is to show the feasibility of the framework. Two different CADe systems were experimentally developed with a prototype of the framework, using different training datasets. The CADe systems include four components: preprocessing, candidate area extraction, candidate detection, and candidate classification. Four pretrained algorithms with dedicated optimization/setting methods corresponding to the respective components were prepared in advance. The pretrained algorithms were sequentially trained in the order of processing of the components. In this study, two different datasets, brain MRA with cerebral aneurysms and chest CT with lung nodules, were collected to develop the two different types of CADe systems in the framework. The performance of the developed CADe systems was evaluated by threefold cross-validation. The CADe systems for detecting cerebral aneurysms in brain MRAs and for detecting lung nodules in chest CTs were successfully developed using the respective datasets. The framework was shown to be feasible by the successful development of the two different types of CADe systems. The feasibility of this framework shows promise for a new paradigm in the development of CADe systems: development of CADe systems without any lesion-specific algorithm design.
Advances in multi-sensor data fusion: algorithms and applications.
Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying
2009-01-01
With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. This paper presents an overview of recent advances in multi-sensor satellite image fusion. Firstly, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection, and maneuvering target tracking, are then described. Both the advantages and limitations of these applications are discussed. Finally, recommendations are offered, including: (1) improvement of fusion algorithms; (2) development of "algorithm fusion" methods; and (3) establishment of an automatic quality assessment scheme.
STREAMFINDER - I. A new algorithm for detecting stellar streams
NASA Astrophysics Data System (ADS)
Malhan, Khyati; Ibata, Rodrigo A.
2018-07-01
We have designed a powerful new algorithm to detect stellar streams in an automated and systematic way. The algorithm, which we call the STREAMFINDER, is well suited for finding dynamically cold and thin stream structures that may lie along any simple or complex orbits in Galactic stellar surveys containing any combination of positional and kinematic information. In the present contribution, we introduce the algorithm, lay out the ideas behind it, explain the methodology adopted to detect streams, and detail its workings by running it on a suite of simulations of mock Galactic survey data of similar quality to that expected from the European Space Agency/Gaia mission. We show that our algorithm is able to detect even ultra-faint stream features lying well below previous detection limits. Tests show that our algorithm will be able to detect distant halo stream structures >10° long containing as few as ∼15 members (ΣG ∼ 33.6 mag arcsec⁻²) in the Gaia data set.
Distributed learning automata-based algorithm for community detection in complex networks
NASA Astrophysics Data System (ADS)
Khomami, Mohammad Mehdi Daliri; Rezvanian, Alireza; Meybodi, Mohammad Reza
2016-03-01
Community structure is an important and universal topological property of many complex networks such as social and information networks. The detection of communities in a network is a significant technique for understanding the structure and function of networks. In this paper, we propose an algorithm based on distributed learning automata for community detection (DLACD) in complex networks. In the proposed algorithm, each vertex of the network is equipped with a learning automaton. Through cooperation among the network of learning automata and updates of each automaton's action probabilities, the algorithm iteratively tries to identify high-density local communities. The performance of the proposed algorithm is investigated through a number of simulations on popular synthetic and real networks. Experimental results, in comparison with popular community detection algorithms such as Walktrap, Danon greedy optimization, fuzzy community detection, multi-resolution community detection and label propagation, demonstrate the superiority of DLACD in terms of modularity, NMI, performance, min-max-cut and coverage.
An Improved Harmonic Current Detection Method Based on Parallel Active Power Filter
NASA Astrophysics Data System (ADS)
Zeng, Zhiwu; Xie, Yunxiang; Wang, Yingpin; Guan, Yuanpeng; Li, Lanfang; Zhang, Xiaoyu
2017-05-01
Harmonic detection technology plays an important role in applications of the active power filter. The accuracy and real-time performance of harmonic detection are preconditions for ensuring the compensation performance of an Active Power Filter (APF). This paper proposes an improved instantaneous-reactive-power harmonic current detection algorithm that combines an improved ip-iq algorithm with a moving-average filter. The proposed ip-iq algorithm removes the αβ and dq coordinate transformations, decreasing the computational cost, simplifying the extraction of the fundamental components of the load currents, and improving the detection speed. The traditional low-pass filter is replaced by a moving-average filter, which detects the harmonic currents more precisely and quickly. Compared with the traditional algorithm, the THD (Total Harmonic Distortion) of the grid currents is reduced from 4.41% to 3.89% in simulations and from 8.50% to 4.37% in experiments after the improvement. The results show the proposed algorithm is more accurate and efficient.
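The sketch below illustrates the detection chain on a synthetic single-phase current: projection onto a synchronous reference frame, a one-cycle moving average in place of the low-pass filter, and subtraction to form the harmonic compensation reference. The signal construction and 50 Hz grid frequency are illustrative, and the paper's full three-phase ip-iq formulation is simplified here.

```python
import numpy as np

fs, f1 = 10_000, 50                       # sample rate and grid frequency (Hz)
t = np.arange(0, 0.2, 1 / fs)
# Stand-in load current: fundamental plus 5th and 7th harmonics.
i_load = (np.sin(2*np.pi*f1*t) + 0.20*np.sin(2*np.pi*5*f1*t)
          + 0.14*np.sin(2*np.pi*7*f1*t))

sin_wt, cos_wt = np.sin(2*np.pi*f1*t), np.cos(2*np.pi*f1*t)
ip_raw, iq_raw = i_load * sin_wt, i_load * cos_wt   # rotating-frame projections

def moving_average(x, n):
    """One-cycle moving average: extracts the DC (fundamental) component."""
    return np.convolve(x, np.ones(n) / n, mode='same')

n_cycle = fs // f1
ip_dc, iq_dc = moving_average(ip_raw, n_cycle), moving_average(iq_raw, n_cycle)
i_fund = 2 * (ip_dc * sin_wt + iq_dc * cos_wt)      # reconstructed fundamental
i_harm_ref = i_load - i_fund                        # APF compensation reference
```

The one-cycle moving average nulls every integer harmonic of the rotating-frame ripple exactly, which is why it can respond faster than a conventional low-pass filter of comparable attenuation.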
Dudik, Joshua M; Kurosu, Atsuko; Coyle, James L; Sejdić, Ervin
2015-04-01
Cervical auscultation with high resolution sensors is currently under consideration as a method of automatically screening for specific swallowing abnormalities. To be clinically useful without human involvement, any devices based on cervical auscultation should be able to detect specified swallowing events in an automatic manner. In this paper, we comparatively analyze the density-based spatial clustering of applications with noise algorithm (DBSCAN), a k-means based algorithm, and an algorithm based on quadratic variation as methods of differentiating periods of swallowing activity from periods of time without swallows. These algorithms utilized swallowing vibration data exclusively and compared the results to a gold standard measure of swallowing duration. Data were collected from 23 subjects who were actively suffering from swallowing difficulties. Comparing the performance of the DBSCAN algorithm with a proven segmentation algorithm that utilizes k-means clustering demonstrated that the DBSCAN algorithm had a higher sensitivity and correctly segmented more swallows. Comparing its performance with a threshold-based algorithm that utilized the quadratic variation of the signal showed that the DBSCAN algorithm offered no direct increase in performance. However, it offered several other benefits including a faster run time and more consistent performance between patients. All algorithms showed noticeable differentiation from the endpoints provided by a videofluoroscopy examination as well as reduced sensitivity. In summary, we showed that the DBSCAN algorithm is a viable method for detecting the occurrence of a swallowing event using cervical auscultation signals, but significant work must be done to improve its performance before it can be implemented in an unsupervised manner.
A Multiscale pipeline for the search of string-induced CMB anisotropies
NASA Astrophysics Data System (ADS)
Vafaei Sadr, A.; Movahed, S. M. S.; Farhang, M.; Ringeval, C.; Bouchet, F. R.
2018-03-01
We propose a multiscale edge-detection algorithm to search for the Gott-Kaiser-Stebbins imprints of a cosmic string (CS) network on the cosmic microwave background (CMB) anisotropies. Curvelet decomposition and the extended Canny algorithm are used to enhance string detectability. Various statistical tools are then applied to quantify the deviation of CMB maps having a CS contribution with respect to pure Gaussian anisotropies of inflationary origin. These statistical measures include the one-point probability density function, the weighted two-point correlation function (TPCF) of the anisotropies, the unweighted TPCF of the peaks and of the up-crossing map, as well as their cross-correlation. We apply this algorithm to a hundred simulated Nambu-Goto CMB flat sky maps, covering approximately 10 per cent of the sky, for different string tensions Gμ. On noiseless sky maps with an angular resolution of 0.9 arcmin, we show that our pipeline detects CSs with string tensions as low as Gμ ≳ 4.3 × 10^-10. At the same resolution, but with a noise level typical of a CMB-S4 phase II experiment, the detection threshold rises to Gμ ≳ 1.2 × 10^-7.
An Investigation of State-Space Model Fidelity for SSME Data
NASA Technical Reports Server (NTRS)
Martin, Rodney Alexander
2008-01-01
In previous studies, a variety of unsupervised anomaly detection techniques were applied to SSME (Space Shuttle Main Engine) data. The observed results indicated that the identification of certain anomalies was specific to the algorithmic method under consideration. For this reason, one of the follow-on goals of these previous investigations was to build an architecture to support the best capabilities of all algorithms. We appeal to that goal here by investigating a cascade, serial architecture for the best performing and most suitable candidates from previous studies. As a precursor to a formal ROC (Receiver Operating Characteristic) curve analysis for validation of the resulting anomaly detection algorithms, our primary focus here is to investigate model fidelity as measured by variants of the AIC (Akaike Information Criterion) for state-space based models. We show that placing constraints on a state-space model during or after the training of the model introduces a modest level of suboptimality. Furthermore, we compare the fidelity of all candidate models, including those embodying the cascade, serial architecture. We make recommendations on the most suitable candidates for application to subsequent anomaly detection studies as measured by AIC-based criteria.
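For reference, the AIC trades goodness of fit against parameter count. A minimal sketch using the standard formulas follows; this is textbook material, not code from the report, and the small-sample AICc shown is one plausible reading of "variants of the AIC".

    import numpy as np

    def aic(log_likelihood, k):
        # Akaike Information Criterion: lower means better fidelity
        # for a given amount of model complexity (k parameters).
        return 2.0 * k - 2.0 * log_likelihood

    def aicc(log_likelihood, k, n):
        # Small-sample corrected variant for n training samples.
        return aic(log_likelihood, k) + 2.0 * k * (k + 1.0) / (n - k - 1.0)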
Development of a novel constellation based landmark detection algorithm
NASA Astrophysics Data System (ADS)
Ghayoor, Ali; Vaidya, Jatin G.; Johnson, Hans J.
2013-03-01
Anatomical landmarks such as the anterior commissure (AC) and posterior commissure (PC) are commonly used by researchers for co-registration of images. In this paper, we present a novel, automated approach for landmark detection that combines morphometric constraining and statistical shape models to provide accurate estimation of landmark points. The method is made robust to large rotations in initial head orientation by extracting additional information about the eye centers using a radial Hough transform and by exploiting the centroid of head mass (CM) using a novel estimation approach. To evaluate the effectiveness of this method, the algorithm is trained on a set of 20 images with manually selected landmarks, and a test dataset is used to compare the automatically detected against the manually detected landmark locations of the AC, PC, midbrain-pons junction (MPJ), and fourth ventricle notch (VN4). The results show that the proposed method is accurate, as the average error between the automatically and manually labeled landmark points is less than 1 mm. The algorithm is also highly robust, as it ran successfully on a large dataset that included different kinds of images with various orientations, spacings, and origins.
Lee, Youngbum; Kim, Jinkwon; Son, Muntak; Lee, Myoungho
2007-01-01
This research implements a wireless accelerometer sensor module and an algorithm to determine the wearer's posture, activity, and falls. The wireless accelerometer sensor module uses the ADXL202, a 2-axis accelerometer (Analog Devices). Using a wireless RF module, the sensor measures the acceleration signal and displays it in the 'Acceloger' viewer program on a PC. The ADL algorithm determines posture, activity, and falls: activity is determined from the AC component of the accelerometer signal, and posture from the DC component. The recognized postures and activities include standing, sitting, lying, walking, and running. The performance of the implemented algorithm was assessed in an experiment with 30 subjects, and detection rates for postures, motions, and subjects were calculated. Finally, a monitoring system for subjects' postures, motions, and falls was implemented using a wireless sensor network in an experimental space. In a simulated experiment in which 30 subjects each performed 4 kinds of activity 3 times, the fall detection rate was calculated. In conclusion, this system can be applied to activity monitoring and fall detection for patients and the elderly, as well as to exercise measurement and pattern analysis for athletes, and may also be useful for general exercise training and entertainment.
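A minimal sketch of the AC/DC decomposition described above, assuming a sampled acceleration signal and using a Butterworth low-pass filter as the separator; the cutoff frequency and function name are our assumptions, not values from the paper.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def split_posture_activity(acc, fs, cutoff_hz=0.5):
        # Low-pass keeps the slowly varying gravity (DC) component;
        # the residual is the motion (AC) component.
        b, a = butter(2, cutoff_hz / (fs / 2.0), btype='low')
        dc = filtfilt(b, a, acc)
        return dc, acc - dc

Posture then follows from the orientation of the DC vector, activity level from the AC energy, and a fall can be flagged as a large AC transient followed by a posture change.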
Thermal tracking in mobile robots for leak inspection activities.
Ibarguren, Aitor; Molina, Jorge; Susperregi, Loreto; Maurtua, Iñaki
2013-10-09
Maintenance tasks are crucial for all kinds of industries, especially in extensive industrial plants like solar thermal power plants. The incorporation of robots is a key issue for automating inspection activities, as it will allow constant and regular control over the whole plant. This paper presents an autonomous robotic system to perform pipeline inspection for early detection and prevention of leakages in thermal power plants, based on the work developed within the MAINBOT (http://www.mainbot.eu) European project. Based on the information provided by a thermographic camera, the system is able to detect leakages in the collectors and pipelines. Besides the leakage detection algorithms, the system includes a particle filter-based tracking algorithm to keep the target in the field of view of the camera and to compensate for the irregularities of the terrain while the robot patrols the plant. The information provided by the particle filter is further used to command a robot arm, which handles the camera and ensures that the target is always within the image. The obtained results show the suitability of the proposed approach, with the tracking algorithm improving the performance of the leakage detection system.
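To make the tracking step concrete, here is a minimal particle filter sketch for a 2-D target position in image coordinates; the random-walk motion model, Gaussian likelihood, and noise scales are our assumptions, since the abstract does not give the MAINBOT system's actual models.

    import numpy as np

    rng = np.random.default_rng(0)

    def pf_step(particles, weights, measurement, motion_std=5.0, meas_std=10.0):
        # Predict: random-walk motion model for the target's (x, y) position.
        particles = particles + rng.normal(0.0, motion_std, particles.shape)
        # Update: reweight by the Gaussian likelihood of the new measurement
        # (e.g., the hottest-spot location found in the thermographic frame).
        d2 = np.sum((particles - measurement) ** 2, axis=1)
        weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)
        weights = weights / weights.sum()
        # Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
            idx = rng.choice(len(particles), size=len(particles), p=weights)
            particles = particles[idx]
            weights = np.full(len(particles), 1.0 / len(particles))
        return particles, weights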
A Space Object Detection Algorithm using Fourier Domain Likelihood Ratio Test
NASA Astrophysics Data System (ADS)
Becker, D.; Cain, S.
Space object detection is of great importance in the highly dependent yet competitive and congested space domain. Detection algorithms employed play a crucial role in fulfilling the detection component in the situational awareness mission to detect, track, characterize and catalog unknown space objects. Many current space detection algorithms use a matched filter or a spatial correlator to make a detection decision at a single pixel point of a spatial image based on the assumption that the data follows a Gaussian distribution. This paper explores the potential for detection performance advantages when operating in the Fourier domain of long exposure images of small and/or dim space objects from ground based telescopes. A binary hypothesis test is developed based on the joint probability distribution function of the image under the hypothesis that an object is present and under the hypothesis that the image only contains background noise. The detection algorithm tests each pixel point of the Fourier transformed images to make the determination if an object is present based on the criteria threshold found in the likelihood ratio test. Using simulated data, the performance of the Fourier domain detection algorithm is compared to the current algorithm used in space situational awareness applications to evaluate its value.
Commonality of drug-associated adverse events detected by 4 commonly used data mining algorithms.
Sakaeda, Toshiyuki; Kadoyama, Kaori; Minami, Keiko; Okuno, Yasushi
2014-01-01
Data mining algorithms have been developed for the quantitative detection of drug-associated adverse events (signals) from a large database on spontaneously reported adverse events. In the present study, the commonality of signals detected by 4 commonly used data mining algorithms was examined. A total of 2,231,029 reports were retrieved from the public release of the US Food and Drug Administration Adverse Event Reporting System database between 2004 and 2009. The deletion of duplicated submissions and revision of arbitrary drug names resulted in a reduction in the number of reports to 1,644,220. Associations with adverse events were analyzed for 16 unrelated drugs, using the proportional reporting ratio (PRR), reporting odds ratio (ROR), information component (IC), and empirical Bayes geometric mean (EBGM). All EBGM-based signals were included in the PRR-based signals as well as IC- or ROR-based ones, and PRR- and IC-based signals were included in ROR-based ones. The PRR scores of PRR-based signals were significantly larger for 15 of 16 drugs when adverse events were also detected as signals by the EBGM method, as were the IC scores of IC-based signals for all drugs; however, no such effect was observed in the ROR scores of ROR-based signals. The EBGM method was the most conservative among the 4 methods examined, which suggested its better suitability for pharmacoepidemiological studies. Further examinations should be performed on the reproducibility of clinical observations, especially for EBGM-based signals.
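The first three scores are simple functions of a 2x2 contingency table of report counts; a sketch using the standard textbook formulas follows (EBGM is omitted because it requires an empirical-Bayes shrinkage model fitted over the whole database).

    import numpy as np

    def disproportionality(a, b, c, d):
        # 2x2 table of report counts:
        #   a: target drug & target event   b: target drug & other events
        #   c: other drugs & target event   d: other drugs & other events
        n = a + b + c + d
        prr = (a / (a + b)) / (c / (c + d))        # proportional reporting ratio
        ror = (a * d) / (b * c)                    # reporting odds ratio
        ic = np.log2(a * n / ((a + b) * (a + c)))  # information component (point estimate)
        return prr, ror, ic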
A Region Tracking-Based Vehicle Detection Algorithm in Nighttime Traffic Scenes
Wang, Jianqiang; Sun, Xiaoyan; Guo, Junbin
2013-01-01
Detection of preceding vehicles in nighttime traffic scenes is an important part of the advanced driver assistance system (ADAS). This paper proposes a region tracking-based vehicle detection algorithm based on image processing techniques. First, the brightness of the taillights during nighttime is used as the typical feature, and we use an existing global detection algorithm to detect and pair the taillights. When the vehicle is detected, a time series analysis model is introduced to predict vehicle positions and the possible region (PR) of the vehicle in the next frame. Then, the vehicle is only detected in the PR. This reduces the detection time and avoids false pairings between bright spots inside the PR and bright spots outside the PR. Additionally, we present a threshold-updating method to make the thresholds adaptive. Finally, experimental studies are provided to demonstrate the application and substantiate the superiority of the proposed algorithm. The results show that the proposed algorithm can simultaneously reduce both the false negative detection rate and the false positive detection rate.
Expert system constant false alarm rate processor
NASA Astrophysics Data System (ADS)
Baldygo, William J., Jr.; Wicks, Michael C.
1993-10-01
The requirements for high detection probability and low false alarm probability in modern wide area surveillance radars are rarely met due to spatial variations in clutter characteristics. Many filtering and CFAR detection algorithms have been developed to effectively deal with these variations; however, any single algorithm is likely to exhibit excessive false alarms and intolerably low detection probabilities in a dynamically changing environment. A great deal of research has led to advances in the state of the art in Artificial Intelligence (AI) and numerous areas have been identified for application to radar signal processing. The approach suggested here, discussed in a patent application submitted by the authors, is to intelligently select the filtering and CFAR detection algorithms being executed at any given time, based upon the observed characteristics of the interference environment. This approach requires sensing the environment, employing the most suitable algorithms, and applying an appropriate multiple algorithm fusion scheme or consensus algorithm to produce a global detection decision.
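As background for the detectors being selected among, a minimal sketch of the classic cell-averaging CFAR detector over a 1-D range profile follows; the window sizes and design false-alarm rate are illustrative, and the expert system described above would switch among several such filtering/detection schemes.

    import numpy as np

    def ca_cfar(x, num_train=16, num_guard=2, pfa=1e-4):
        # Cell-averaging CFAR: the threshold scales the local noise estimate
        # so the design false-alarm probability holds under exponential noise.
        n = num_train
        alpha = n * (pfa ** (-1.0 / n) - 1.0)     # CA-CFAR scaling factor
        half = num_train // 2 + num_guard
        detections = np.zeros_like(x, dtype=bool)
        for i in range(half, len(x) - half):
            left = x[i - half:i - num_guard]      # training cells before the CUT
            right = x[i + num_guard + 1:i + half + 1]  # and after it
            noise = np.concatenate([left, right]).mean()
            detections[i] = x[i] > alpha * noise
        return detections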
Toward an Objective Enhanced-V Detection Algorithm
NASA Technical Reports Server (NTRS)
Moses, John F.; Brunner, Jason C.; Feltz, Wayne F.; Ackerman, Steven A.; Rabin, Robert M.
2007-01-01
The area of coldest cloud tops above thunderstorms sometimes has a distinct V or U shape. This pattern, often referred to as an "enhanced-V" signature, has been observed to occur during and preceding severe weather. This study describes an algorithmic approach to objectively detect overshooting tops, temperature couplets, and enhanced-V features with observations from the Geostationary Operational Environmental Satellite and Low Earth Orbit data. The methodology uses temperature, temperature-difference, and distance thresholds for the overshooting-top and temperature-couplet detection components of the algorithm, and cross-correlation statistics of pixels for the enhanced-V detection component. The effectiveness of the overshooting-top and temperature-couplet detection components of the algorithm is examined using GOES and MODIS image data for case studies from the 2003-2006 seasons. The main goal is for the algorithm to be useful for operations with future sensors, such as GOES-R.
Evaluation of Anomaly Detection Capability for Ground-Based Pre-Launch Shuttle Operations. Chapter 8
NASA Technical Reports Server (NTRS)
Martin, Rodney Alexander
2010-01-01
This chapter will provide a thorough end-to-end description of the process for evaluating three different data-driven algorithms for anomaly detection, in order to select the best candidate for deployment as part of a suite of IVHM (Integrated Vehicle Health Management) technologies. These algorithms were deemed sufficiently mature to be considered viable candidates for deployment in support of the maiden launch of Ares I-X, the successor to the Space Shuttle for NASA's Constellation program. Data-driven algorithms are just one of three different types being deployed. The other two types are a "rule-based" expert system and a "model-based" system. Within these two categories, the deployable candidates have already been selected based upon qualitative factors such as flight heritage. For the rule-based system, SHINE (Spacecraft High-speed Inference Engine) has been selected for deployment; it is a component of BEAM (Beacon-based Exception Analysis for Multimissions), a patented technology developed at NASA's JPL (Jet Propulsion Laboratory), and serves to aid in the management and identification of operational modes. For the "model-based" system, a commercially available package developed by QSI (Qualtech Systems, Inc.), TEAMS (Testability Engineering and Maintenance System), has been selected for deployment to aid in diagnosis. In the context of this particular deployment, distinctions among the use of the terms "data-driven," "rule-based," and "model-based" can be found in the cited literature. Although there are three different categories of algorithms that have been selected for deployment, our main focus in this chapter will be on the evaluation of the three candidates for data-driven anomaly detection. These algorithms will be evaluated upon their capability for robustly detecting incipient faults or failures in the ground-based phase of pre-launch Space Shuttle operations, rather than based on heritage as in previous studies. Robust detection will allow for the achievement of pre-specified minimum false alarm and/or missed detection rates in the selection of alert thresholds. All algorithms will also be optimized with respect to an aggregation of these same criteria. Our study relies upon Shuttle data as a proxy for, and in preparation for application to, Ares I-X data, which uses a very similar hardware platform for the targeted subsystems (the TVC, Thrust Vector Control, subsystem of the SRB, Solid Rocket Booster).
Talbot, Thomas R; Schaffner, William; Bloch, Karen C; Daniels, Titus L; Miller, Randolph A
2011-01-01
Objective The authors evaluated algorithms commonly used in syndromic surveillance for use as screening tools to detect potentially clonal outbreaks for review by infection control practitioners. Design Study phase 1 applied four aberrancy detection algorithms (CUSUM, EWMA, space-time scan statistic, and WSARE) to retrospective microbiologic culture data, producing a list of past candidate outbreak clusters. In phase 2, four infectious disease physicians categorized the phase 1 algorithm-identified clusters to ascertain algorithm performance. In phase 3, project members combined the algorithms to create a unified screening system and conducted a retrospective pilot evaluation. Measurements The study calculated recall and precision for each algorithm, and created precision-recall curves for various methods of combining the algorithms into a unified screening tool. Results Individual algorithm recall and precision ranged from 0.21 to 0.31 and from 0.053 to 0.29, respectively. Few candidate outbreak clusters were identified by more than one algorithm. The best method of combining the algorithms yielded an area under the precision-recall curve of 0.553. The phase 3 combined system detected all infection control-confirmed outbreaks during the retrospective evaluation period. Limitations Lack of phase 2 reviewers' agreement indicates that subjective expert review was an imperfect gold standard. Less conservative filtering of culture results and alternate parameter selection for each algorithm might have improved algorithm performance. Conclusion Hospital outbreak detection presents different challenges than traditional syndromic surveillance. Nevertheless, algorithms developed for syndromic surveillance have potential to form the basis of a combined system that might perform clinically useful hospital outbreak screening. PMID:21606134
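Of the four detectors, CUSUM and EWMA are simple enough to sketch; below is a minimal EWMA control chart over daily event counts, with the baseline length, smoothing weight, and control limit chosen for illustration rather than taken from the study.

    import numpy as np

    def ewma_alerts(counts, baseline_days=28, lam=0.2, L=3.0):
        # Baseline mean/std from an assumed alert-free training window.
        counts = np.asarray(counts, dtype=float)
        mu0 = counts[:baseline_days].mean()
        sd0 = counts[:baseline_days].std(ddof=1)
        limit = mu0 + L * sd0 * np.sqrt(lam / (2.0 - lam))  # asymptotic EWMA limit
        s, alerts = mu0, []
        for c in counts[baseline_days:]:
            s = lam * c + (1.0 - lam) * s    # exponentially weighted average
            alerts.append(s > limit)
        return np.array(alerts)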
Evaluation schemes for video and image anomaly detection algorithms
NASA Astrophysics Data System (ADS)
Parameswaran, Shibin; Harguess, Josh; Barngrover, Christopher; Shafer, Scott; Reese, Michael
2016-05-01
Video anomaly detection is a critical research area in computer vision. It is a natural first step before applying object recognition algorithms. Many algorithms that detect anomalies (outliers) in videos and images have been introduced in recent years. However, these algorithms behave and perform differently based on differences in the domains and tasks to which they are subjected. In order to better understand the strengths and weaknesses of outlier algorithms and their applicability in a particular domain/task of interest, it is important to measure and quantify their performance using appropriate evaluation metrics. Many evaluation metrics have been used in the literature, such as precision curves, precision-recall curves, and receiver operating characteristic (ROC) curves. In order to construct these different metrics, it is also important to choose an appropriate evaluation scheme that decides when a proposed detection is considered a true or a false detection. Choosing the right evaluation metric and the right scheme is critical, since the choice can introduce positive or negative bias in the measuring criterion and may favor (or work against) a particular algorithm or task. In this paper, we review evaluation metrics and popular evaluation schemes that are used to measure the performance of anomaly detection algorithms on videos and imagery with one or more anomalies. We analyze the biases introduced by these choices by measuring the performance of an existing anomaly detection algorithm.
Andersson, Richard; Larsson, Linnea; Holmqvist, Kenneth; Stridh, Martin; Nyström, Marcus
2017-04-01
Almost all eye-movement researchers use algorithms to parse raw data and detect distinct types of eye movement events, such as fixations, saccades, and pursuit, and then base their results on these. Surprisingly, these algorithms are rarely evaluated. We evaluated the classifications of ten eye-movement event detection algorithms, on data from an SMI HiSpeed 1250 system, and compared them to manual ratings by two human experts. The evaluation focused on fixations, saccades, and post-saccadic oscillations. The evaluation used both event duration parameters and sample-by-sample comparisons to rank the algorithms. The resulting event durations varied substantially as a function of which algorithm was used. This evaluation differed from previous evaluations by considering a relatively large set of algorithms, multiple event types, and data from both static and dynamic stimuli. The main conclusion is that current detectors of only fixations and saccades work reasonably well for static stimuli, but barely better than chance for dynamic stimuli. Differing results across evaluation methods make it difficult to select one winner for fixation detection. For saccade detection, however, the algorithm by Larsson, Nyström and Stridh (IEEE Transactions on Biomedical Engineering, 60(9):2484-2493, 2013) outperforms all algorithms in data from both static and dynamic stimuli. The data also show how improperly selected algorithms applied to dynamic data misestimate fixation and saccade properties.
Application of the LDM algorithm to identify small lung nodules on low-dose MSCT scans
NASA Astrophysics Data System (ADS)
Zhao, Binsheng; Ginsberg, Michelle S.; Lefkowitz, Robert A.; Jiang, Li; Cooper, Cathleen; Schwartz, Lawrence H.
2004-05-01
In this work, we present a computer-aided detection (CAD) algorithm for small lung nodules on low-dose MSCT images. With this technique, identification of potential lung nodules is carried out with a local density maximum (LDM) algorithm, followed by reduction of false positives from the nodule candidates using task-specific 2-D/3-D features along with a knowledge-based nodule inclusion/exclusion strategy. Twenty-eight MSCT scans (40/80 mAs, 120 kVp, 5 mm collimation/2.5 mm reconstruction) from our lung cancer screening program that included at least one lung nodule were selected for this study. Two radiologists independently interpreted these cases. Subsequently, a consensus reading by both radiologists and CAD was generated to define a "gold standard". In total, 165 nodules were considered as the "gold standard" (average: 5.9 nodules/case; range: 1-22 nodules/case). The two radiologists detected 146 nodules (88.5%) and CAD detected 100 nodules (60.6%) with 8.7 false positives/case. CAD detected an additional 19 nodules (6 nodules > 3 mm and 13 nodules < 3 mm) that had been missed by both radiologists. Preliminary results show that the CAD is capable of detecting small lung nodules with an acceptable number of false positives on low-dose MSCT scans, and that it can detect nodules that are otherwise missed by radiologists, though a majority are small nodules (< 3 mm).
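The candidate-generation step can be approximated as follows; this is a generic local-maximum detector in the spirit of the LDM stage, with the smoothing scale, neighborhood size, and Hounsfield-unit floor chosen for illustration rather than taken from the paper.

    import numpy as np
    from scipy import ndimage

    def ldm_candidates(volume, sigma=1.0, size=5, floor=-600.0):
        # Smooth, then keep voxels that are the maximum of their local
        # neighborhood and denser than an air-like floor (values in HU).
        smoothed = ndimage.gaussian_filter(volume.astype(float), sigma)
        is_peak = smoothed == ndimage.maximum_filter(smoothed, size=size)
        return np.argwhere(is_peak & (volume > floor))  # candidate voxel coords

The task-specific 2-D/3-D features and knowledge-based rules would then prune this candidate list.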
Bogaarts, J G; Hilkman, D M W; Gommer, E D; van Kranen-Mastenbroek, V H J M; Reulen, J P H
2016-12-01
Continuous electroencephalographic monitoring of critically ill patients is an established procedure in intensive care units. Seizure detection algorithms, such as support vector machines (SVM), play a prominent role in this procedure. To correct for inter-human differences in EEG characteristics, as well as for intra-human EEG variability over time, dynamic EEG feature normalization is essential. Recently, the median decaying memory (MDM) approach was determined to be the best method of normalization. MDM uses a sliding baseline buffer of EEG epochs to calculate feature normalization constants. However, while this method does include non-seizure EEG epochs, it also includes EEG activity that can have a detrimental effect on the normalization and subsequent seizure detection performance. In this study, EEG data that is to be incorporated into the baseline buffer are automatically selected based on a novelty detection algorithm (Novelty-MDM). Performance of an SVM-based seizure detection framework is evaluated in 17 long-term ICU registrations using the area under the sensitivity-specificity ROC curve. This evaluation compares three different EEG normalization methods, namely a fixed baseline buffer (FB), the median decaying memory (MDM) approach, and our novelty median decaying memory (Novelty-MDM) method. It is demonstrated that MDM did not improve overall performance compared to FB (p < 0.27), partly because seizure like episodes were included in the baseline. More importantly, Novelty-MDM significantly outperforms both FB (p = 0.015) and MDM (p = 0.0065).
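A minimal sketch of the normalization idea: features are divided by the median over a sliding baseline buffer, and the Novelty-MDM variant simply gates which epochs are allowed into that buffer. The fixed-length FIFO below stands in for the paper's decaying memory, and all names and sizes are our assumptions.

    import numpy as np
    from collections import deque

    class NoveltyMDM:
        """Per-feature normalization by the median of a sliding baseline
        buffer; a fixed-length FIFO approximates the decaying memory."""
        def __init__(self, maxlen=300):
            self.buffer = deque(maxlen=maxlen)

        def normalize(self, epoch_features, is_novel=False):
            # Novelty-MDM: only non-novel (baseline-like) epochs enter the
            # buffer, keeping seizure-like activity out of the baseline.
            if not is_novel or not self.buffer:
                self.buffer.append(epoch_features)
            med = np.median(np.stack(self.buffer), axis=0)
            return epoch_features / (med + 1e-12)

Plain MDM corresponds to always passing is_novel=False, which is exactly how seizure-like epochs can contaminate the baseline.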
NASA Technical Reports Server (NTRS)
Ramirez, Daniel Perez; Lyamani, H.; Olmo, F. J.; Whiteman, D. N.; Navas-Guzman, F.; Alados-Arboledas, L.
2012-01-01
This paper presents the development and setup of a cloud screening and data quality control algorithm for a star photometer based on a CCD camera as detector. These algorithms are necessary for passive remote sensing techniques to retrieve the columnar aerosol optical depth, δAe(λ), and precipitable water vapor content, W, at nighttime. The cloud screening procedure consists of calculating moving averages of δAe(λ) and W under different time windows, combined with a procedure for detecting outliers. Additionally, to avoid undesirable δAe(λ) and W fluctuations caused by atmospheric turbulence, the data are averaged over 30 min. The algorithm is applied to the star photometer deployed in the city of Granada (37.16 N, 3.60 W, 680 m a.s.l.; south-east of Spain) for measurements acquired between March 2007 and September 2009. The algorithm is evaluated with correlative measurements registered by a lidar system and also with all-sky images obtained at the sunset and sunrise of the previous and following days. Promising results are obtained in detecting cloud-affected data. Additionally, the cloud screening algorithm has been evaluated under different aerosol conditions, including Saharan dust intrusions, biomass burning and pollution events.
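A minimal sketch of the moving-average-plus-outlier idea, assuming time stamps in minutes and a k-sigma rejection rule; the window length and threshold are illustrative, not the paper's tuned values.

    import numpy as np

    def screen_outliers(t_min, y, half_window=30.0, k=3.0):
        # Flag points deviating more than k sigma from the moving average
        # taken over a +/- half_window (minutes) neighborhood.
        t_min, y = np.asarray(t_min, float), np.asarray(y, float)
        flagged = np.zeros(y.size, dtype=bool)
        for i in range(y.size):
            near = np.abs(t_min - t_min[i]) <= half_window
            if near.sum() < 3:
                continue                      # too few neighbors to judge
            mu, sd = y[near].mean(), y[near].std(ddof=1)
            flagged[i] = abs(y[i] - mu) > k * sd
        return flagged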
Gilman, Robert H; Tielsch, James M; Steinhoff, Mark; Figueroa, Dante; Rodriguez, Shalim; Caffo, Brian; Tracey, Brian; Elhilali, Mounya; West, James; Checkley, William
2012-01-01
Introduction The WHO case management algorithm for paediatric pneumonia relies solely on symptoms of shortness of breath or cough and tachypnoea to guide treatment; it has poor diagnostic specificity and tends to increase antibiotic resistance. Alternatives, including oxygen saturation measurement, chest ultrasound and chest auscultation, exist but have potential disadvantages. Electronic auscultation has potential for improved detection of paediatric pneumonia but has yet to be standardised. The authors aim to investigate the use of electronic auscultation to improve the specificity of the current WHO algorithm in developing countries. Methods This study is designed to test the hypothesis that pulmonary pathology can be differentiated from normal using computerised lung sound analysis (CLSA). The authors will record lung sounds from 600 children aged ≤5 years, 100 each with consolidative pneumonia, diffuse interstitial pneumonia, asthma, bronchiolitis, upper respiratory infections and normal lungs, at a children's hospital in Lima, Peru. The authors will compare CLSA with the WHO algorithm and other detection approaches, including physical exam findings, chest ultrasound and microbiologic testing, to construct an improved algorithm for pneumonia diagnosis. Discussion This study will develop standardised methods for electronic auscultation and chest ultrasound and compare their utility for detection of pneumonia to standard approaches. Utilising signal processing techniques, the authors aim to characterise lung sounds and, through machine learning, develop a classification system to distinguish pathologic sounds. The data will allow a better understanding of the benefits and limitations of novel diagnostic techniques in paediatric pneumonia. PMID:22307098
Advancement and results in hostile fire indication using potassium line missile warning sensors
NASA Astrophysics Data System (ADS)
Montgomery, Joel; Montgomery, Marjorie; Hardie, Russell
2014-06-01
M&M Aviation has been developing and conducting Hostile Fire Indication (HFI) tests using potassium line emission sensors for the Air Force Visible Missile Warning System (VMWS) to advance both algorithm and sensor technologies for UAVs and other airborne systems for self-protection and intelligence purposes. Work began in 2008 as an outgrowth of detecting and classifying false alarm sources for the VMWS using the same K-line spectral discrimination region, but soon became a focus of research due to the high interest in both machine-gun fire and sniper geo-location via airborne systems. Several initial tests were accomplished in 2009 using small and medium caliber weapons, including rifles. Based on these results, the Air Force Research Laboratory (AFRL) funded the Falcon Sentinel program in 2010 to provide for additional development of the sensor concept, algorithm suite changes, and verification of basic phenomenology, including variance based on ammunition type for a given weapons platform. Results from testing over the past 3 years have shown that the system would be able to detect and declare a sniper rifle at upwards of 3 km, a medium machine gun at 5 km, and explosive events like hand grenades at greater than 5 km. This paper will outline the development of the sensor systems, the algorithms used for detection and classification, and test results from VMWS prototypes. The Falcon Sentinel program will be outlined and results shown. Finally, the paper will show the future work for ATD and transition efforts after the Falcon Sentinel program completed.
Revolution in nuclear detection affairs
NASA Astrophysics Data System (ADS)
Stern, Warren M.
2014-05-01
The detection of nuclear or radioactive materials for homeland or national security purposes is inherently difficult. This is one reason detection efforts must be seen as just one part of an overall nuclear defense strategy which includes, inter alia, material security, detection, interdiction, consequence management and recovery. Nevertheless, one could argue that there has been a revolution in detection affairs in the past several decades as the innovative application of new technology has changed the character and conduct of detection operations. This revolution will likely be most effectively reinforced in the coming decades with the networking of detectors and innovative application of anomaly detection algorithms.
NASA Technical Reports Server (NTRS)
Nechyba, Michael C.; Ettinger, Scott M.; Ifju, Peter G.; Wazak, Martin
2002-01-01
Recently, substantial progress has been made towards designing, building, and flight-testing remotely piloted Micro Air Vehicles (MAVs). This progress in overcoming the aerodynamic obstacles to flight at very small scales has, unfortunately, not been matched by similar progress in autonomous MAV flight. Thus, we propose a robust, vision-based horizon detection algorithm as the first step towards autonomous MAVs. In this paper, we first motivate the use of computer vision for the horizon detection task by examining the flight of birds (biological MAVs) and considering other practical factors. We then describe our vision-based horizon detection algorithm, which has been demonstrated at 30 Hz with over 99.9% correct horizon identification, over terrain that includes roads, buildings large and small, meadows, wooded areas, and a lake. We conclude with some sample horizon detection results and preview a companion paper, where the work discussed here forms the core of a complete autonomous flight stability system.
Stride search: A general algorithm for storm detection in high resolution climate data
Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.; ...
2015-09-08
This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.
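The key difference from a grid-point search can be sketched in a few lines: search-sector centers are laid out with approximately uniform great-circle spacing, so the longitude stride widens toward the poles instead of the sectors piling up there. The function below is our illustration, not the authors' code.

    import numpy as np

    def stride_search_centers(radius_km, earth_radius_km=6371.0):
        # Latitude rows are spaced by the sector radius; within a row the
        # longitude stride grows by 1/cos(lat), keeping roughly constant
        # great-circle spacing instead of oversampling the poles.
        dlat = np.degrees(radius_km / earth_radius_km)
        centers = []
        lat = -90.0 + dlat / 2.0
        while lat < 90.0:
            dlon = dlat / max(np.cos(np.radians(lat)), 1e-6)
            centers += [(lat, lon) for lon in np.arange(0.0, 360.0, dlon)]
            lat += dlat
        return centers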
Agurto, Carla; Barriga, E. Simon; Murray, Victor; Nemeth, Sheila; Crammer, Robert; Bauman, Wendall; Zamora, Gilberto; Pattichis, Marios S.; Soliz, Peter
2011-01-01
Purpose. To describe and evaluate the performance of an algorithm that automatically classifies images with pathologic features commonly found in diabetic retinopathy (DR) and age-related macular degeneration (AMD). Methods. Retinal digital photographs (N = 2247) of three fields of view (FOV) were obtained of the eyes of 822 patients at two centers: The Retina Institute of South Texas (RIST, San Antonio, TX) and The University of Texas Health Science Center San Antonio (UTHSCSA). Ground truth was provided for the presence of pathologic conditions, including microaneurysms, hemorrhages, exudates, neovascularization in the optic disc and elsewhere, drusen, abnormal pigmentation, and geographic atrophy. The algorithm was used to report on the presence or absence of disease. A detection threshold was applied to obtain different values of sensitivity and specificity with respect to ground truth and to construct a receiver operating characteristic (ROC) curve. Results. The system achieved an average area under the ROC curve (AUC) of 0.89 for detection of DR and of 0.92 for detection of sight-threatening DR (STDR). With a fixed specificity of 0.50, the system's sensitivity ranged from 0.92 for all DR cases to 1.00 for clinically significant macular edema (CSME). Conclusions. A computer-aided algorithm was trained to detect different types of pathologic retinal conditions. The cases of hard exudates within 1 disc diameter (DD) of the fovea (surrogate for CSME) were detected with very high accuracy (sensitivity = 1, specificity = 0.50), whereas mild nonproliferative DR was the most challenging condition (sensitivity = 0.92, specificity = 0.50). The algorithm was also tested on images with signs of AMD, achieving a performance of AUC of 0.84 (sensitivity = 0.94, specificity = 0.50). PMID:21666234
A scale-invariant keypoint detector in log-polar space
NASA Astrophysics Data System (ADS)
Tao, Tao; Zhang, Yun
2017-02-01
The scale-invariant feature transform (SIFT) algorithm is devised to detect keypoints via the difference of Gaussian (DoG) images. However, the DoG data lack high-frequency information, which can lead to a performance drop for the algorithm. To address this issue, this paper proposes a novel log-polar feature detector (LPFD) to detect scale-invariant blobs (keypoints) in log-polar space, which, in contrast, retains all the image information. The algorithm consists of three components, viz. keypoint detection, descriptor extraction and descriptor matching. The algorithm is evaluated on keypoint detection using the INRIA dataset, in comparison with the SIFT algorithm and one of its fast variants, the speeded-up robust features (SURF) algorithm, in terms of four performance measures, viz. correspondences, repeatability, correct matches and matching score.
CONEDEP: COnvolutional Neural network based Earthquake DEtection and Phase Picking
NASA Astrophysics Data System (ADS)
Zhou, Y.; Huang, Y.; Yue, H.; Zhou, S.; An, S.; Yun, N.
2017-12-01
We developed an automatic local earthquake detection and phase picking algorithm based on a Fully Convolutional Neural Network (FCN). The FCN algorithm detects and segments certain features (phases) in 3-component seismograms to realize efficient picking. We use the STA/LTA algorithm and a template matching algorithm to construct the training set from seismograms recorded one month before and after the Wenchuan earthquake. Precise P and S phases are identified and labeled to construct the training set. Noise data are produced by combining background noise and artificial synthetic noise so that the noise set matches the signal set in size. Training is performed on GPUs to achieve efficient convergence. Our algorithm shows significantly improved performance in terms of detection rate and precision in comparison with the STA/LTA and template matching algorithms.
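Since STA/LTA is the workhorse used to bootstrap the training labels, a minimal sketch is given below; the window lengths and trigger threshold are typical textbook values, not the study's settings.

    import numpy as np

    def sta_lta_triggers(trace, fs, sta_sec=1.0, lta_sec=30.0, on=3.5):
        # Short-term / long-term average ratio of signal energy; a trigger
        # fires where the ratio exceeds the 'on' threshold.
        nsta, nlta = int(sta_sec * fs), int(lta_sec * fs)
        csum = np.cumsum(trace.astype(float) ** 2)
        sta = (csum[nsta:] - csum[:-nsta]) / nsta
        lta = (csum[nlta:] - csum[:-nlta]) / nlta
        ratio = sta[nlta - nsta:] / (lta + 1e-12)   # align window end points
        return np.flatnonzero(ratio > on) + nlta    # trigger sample indices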
Improvement and implementation for Canny edge detection algorithm
NASA Astrophysics Data System (ADS)
Yang, Tao; Qiu, Yue-hong
2015-07-01
Edge detection is necessary for image segmentation and pattern recognition. In this paper, an improved Canny edge detection approach is proposed to address the defects of the traditional algorithm. A modified bilateral filter with a compensation function based on pixel intensity similarity judgment is used to smooth the image instead of a Gaussian filter, which preserves edge features and removes noise effectively. In order to reduce sensitivity to noise in the gradient calculation, the algorithm uses gradient templates in four directions. Finally, the Otsu algorithm adaptively obtains the dual thresholds. The algorithm was implemented with the OpenCV 2.4.0 library in Visual Studio 2010, and experimental analysis shows that the improved algorithm detects edge details more effectively and with more adaptability.
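The adaptive dual-threshold idea can be sketched with OpenCV's stock building blocks; the bilateral-filter parameters and the low = 0.5 x high convention below are common defaults rather than the paper's values, and the paper's compensation function and 4-direction gradient templates are not reproduced here.

    import cv2

    def canny_with_otsu(gray):
        # Edge-preserving smoothing in place of the Gaussian filter.
        smoothed = cv2.bilateralFilter(gray, 9, 75, 75)
        # Otsu picks the high threshold adaptively; low is set to half of it.
        high, _ = cv2.threshold(smoothed, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return cv2.Canny(smoothed, 0.5 * high, high)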
Skeletonization with hollow detection on gray image by gray weighted distance transform
NASA Astrophysics Data System (ADS)
Bhattacharya, Prabir; Qian, Kai; Cao, Siqi; Qian, Yi
1998-10-01
A skeletonization algorithm that can process non-uniformly distributed gray-scale images with hollows is presented. The algorithm is based on the Gray Weighted Distance Transformation. The process includes a preliminary phase that investigates the hollows in the gray-scale image; whether these hollows are treated as topological constraints for the skeleton structure depends on their statistically significant depth. We then extract the resulting skeleton, which carries meaningful information for understanding the object in the image. This improved algorithm can overcome possible misinterpretations of complicated images in the extracted skeleton, especially in images with asymmetric hollows and asymmetric features. The algorithm can be executed on a parallel machine, as all operations are local. Some examples are discussed to illustrate the algorithm.
An Integrated Intrusion Detection Model of Cluster-Based Wireless Sensor Network
Sun, Xuemei; Yan, Bo; Zhang, Xinzhong; Rong, Chuitian
2015-01-01
Considering wireless sensor network characteristics, this paper combines anomaly and misuse detection and proposes an integrated detection model for cluster-based wireless sensor networks, aiming at enhancing the detection rate and reducing the false rate. An Adaboost algorithm with hierarchical structure is used for anomaly detection at sensor nodes, cluster-head nodes and Sink nodes. Back Propagation optimized by a Cultural Algorithm and an Artificial Fish Swarm Algorithm is applied to misuse detection at the Sink node. Extensive simulations demonstrate that this integrated model achieves strong intrusion detection performance. PMID:26447696
Automatic target detection using binary template matching
NASA Astrophysics Data System (ADS)
Jun, Dong-San; Sun, Sun-Gu; Park, HyunWook
2005-03-01
This paper presents a new automatic target detection (ATD) algorithm to detect targets such as battle tanks and armored personnel carriers in ground-to-ground scenarios. Whereas most ATD algorithms were developed for forward-looking infrared (FLIR) images, we have developed an ATD algorithm for charge-coupled device (CCD) images, which have superior quality to FLIR images in daylight. The proposed algorithm uses fast binary template matching with an adaptive binarization, which is robust to various light conditions in CCD images and saves computation time. Experimental results show that the proposed method has good detection performance.
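A minimal sketch of the adaptive-binarize-then-match idea using OpenCV; the adaptive-threshold block size, offset, and the normalized cross-correlation score are our choices for illustration, not the paper's exact formulation.

    import cv2

    def detect_binary_template(gray, template_bin, block=31, offset=5):
        # Adaptive (local-mean) binarization copes with uneven lighting.
        img_bin = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                        cv2.THRESH_BINARY, block, offset)
        # Normalized cross-correlation between the two binary images.
        score = cv2.matchTemplate(img_bin, template_bin, cv2.TM_CCORR_NORMED)
        _, best, _, top_left = cv2.minMaxLoc(score)
        return top_left, best    # best-match location and its score

Matching on 0/255 images rather than full gray levels is what makes the comparison both illumination-robust and cheap.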
Frequency hopping signal detection based on wavelet decomposition and Hilbert-Huang transform
NASA Astrophysics Data System (ADS)
Zheng, Yang; Chen, Xihao; Zhu, Rui
2017-07-01
Frequency hopping (FH) signals are widely adopted in military communications as a kind of low probability of interception signal. Therefore, it is important to develop effective FH signal detection algorithms. Existing FH detection algorithms based on time-frequency analysis cannot satisfy the time and frequency resolution requirements simultaneously due to the influence of the window function. In order to solve this problem, an algorithm based on wavelet decomposition and the Hilbert-Huang transform (HHT) is proposed. The proposed algorithm removes noise from the received signals by wavelet decomposition and detects the FH signals via the Hilbert-Huang transform. Simulation results show that the proposed algorithm achieves both good time resolution and good frequency resolution, improving the accuracy of FH signal detection.
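A simplified sketch of the denoise-then-track idea follows. Note the substitution: a full HHT first decomposes the signal into intrinsic mode functions via empirical mode decomposition, whereas this sketch applies the Hilbert transform directly to the wavelet-denoised signal; the wavelet family, level, and universal threshold are illustrative choices.

    import numpy as np
    import pywt
    from scipy.signal import hilbert

    def denoised_inst_freq(x, fs, wavelet='db4', level=4):
        # Wavelet-threshold denoising (universal threshold estimated from
        # the finest detail band), then Hilbert instantaneous frequency.
        coeffs = pywt.wavedec(x, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2.0 * np.log(len(x)))
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, 'soft') for c in coeffs[1:]]
        clean = pywt.waverec(coeffs, wavelet)[:len(x)]
        phase = np.unwrap(np.angle(hilbert(clean)))
        return np.diff(phase) * fs / (2.0 * np.pi)   # Hz; hops appear as jumps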
The knowledge instinct, cognitive algorithms, modeling of language and cultural evolution
NASA Astrophysics Data System (ADS)
Perlovsky, Leonid I.
2008-04-01
The talk discusses mechanisms of the mind and their engineering applications. Past attempts at designing "intelligent systems" encountered mathematical difficulties related to algorithmic complexity. The culprit turned out to be logic, which in one way or another was used not only in logic rule systems, but also in statistical, neural, and fuzzy systems. Algorithmic complexity is related to Gödel's theory, a most fundamental mathematical result. These difficulties were overcome by replacing logic with a dynamic process "from vague to crisp," dynamic logic. It leads to algorithms that overcome combinatorial complexity and result in orders-of-magnitude improvements in classical problems of detection, tracking, fusion, and prediction in noise. I present engineering applications to pattern recognition, detection, tracking, fusion, financial predictions, and Internet search engines. The mathematical and engineering efficiency of dynamic logic can also be understood as a cognitive algorithm, which describes a fundamental property of the mind, the knowledge instinct responsible for all our higher cognitive functions: concepts, perception, cognition, instincts, imagination, intuition, and emotions, including emotions of the beautiful. I present our latest results in modeling the evolution of languages and cultures, their interactions in these processes, and the role of music in cultural evolution. Experimental data are presented that support the theory. Future directions are outlined.
Fernández, Roemi; Salinas, Carlota; Montes, Héctor; Sarria, Javier
2014-01-01
The motivation of this research was to explore the feasibility of detecting and locating fruits from different kinds of crops in natural scenarios. To this end, a unique, modular and easily adaptable multisensory system and a set of associated pre-processing algorithms are proposed. The proposed multisensory rig combines a high resolution colour camera and a multispectral system for the detection of fruits, as well as for the discrimination of the different elements of the plants, and a Time-Of-Flight (TOF) camera that provides fast acquisition of distances, enabling the localisation of the targets in the coordinate space. A controlled lighting system completes the set-up, increasing its flexibility for use in different working conditions. The pre-processing algorithms designed for the proposed multisensory system include a pixel-based classification algorithm that labels areas of interest belonging to fruits, and a registration algorithm that combines the results of the aforementioned classification algorithm with the data provided by the TOF camera for the 3D reconstruction of the desired regions. Several experimental tests have been carried out in outdoor conditions in order to validate the capabilities of the proposed system. PMID:25615730
An automatic system to detect and extract texts in medical images for de-identification
NASA Astrophysics Data System (ADS)
Zhu, Yingxuan; Singh, P. D.; Siddiqui, Khan; Gillam, Michael
2010-03-01
Recently, there is an increasing need to share medical images for research purposes. In order to respect and preserve patient privacy, most medical images are de-identified by removing protected health information (PHI) before research sharing. Since manual de-identification is time-consuming and tedious, an automatic de-identification system is necessary and helpful for doctors to remove text from medical images. Many papers have been written on algorithms for text detection and extraction; however, little of this work has been applied to the de-identification of medical images. Since the de-identification system is designed for end users, it should be effective, accurate and fast. This paper proposes an automatic system to detect and extract text from medical images for de-identification purposes, while keeping the anatomic structures intact. First, considering that text has a strong contrast with the background, a region-variance-based algorithm is used to detect the text regions. In post-processing, geometric constraints are applied to the detected text regions to eliminate over-segmentation, e.g., lines and anatomic structures. After that, a region-based level set method is used to extract text from the detected text regions. A GUI for the prototype application of the text detection and extraction system has been implemented, showing that our method can detect most of the text in the images. Experimental results validate that our method can detect and extract text in medical images with a 99% recall rate. Future work on this system includes algorithm improvement, performance evaluation, and computation optimization.
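A minimal sketch of the variance-plus-geometry stage: local variance flags high-contrast regions, and a crude elongation test stands in for the paper's geometric constraints. The window size, variance threshold, and 20:1 aspect test are illustrative values, not the paper's.

    import numpy as np
    from scipy import ndimage

    def text_region_mask(image, win=15, var_thresh=500.0):
        img = image.astype(float)
        mean = ndimage.uniform_filter(img, win)
        var = ndimage.uniform_filter(img ** 2, win) - mean ** 2  # local variance
        mask = var > var_thresh
        labels, n = ndimage.label(mask)
        for i, s in enumerate(ndimage.find_objects(labels), start=1):
            h, w = s[0].stop - s[0].start, s[1].stop - s[1].start
            if max(h, w) > 20 * min(h, w):   # too elongated: a line or edge,
                mask[labels == i] = False    # not a text block
        return mask

The surviving regions would then be passed to the level set extraction step described above.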
The West Midlands breast cancer screening status algorithm - methodology and use as an audit tool.
Lawrence, Gill; Kearins, Olive; O'Sullivan, Emma; Tappenden, Nancy; Wallis, Matthew; Walton, Jackie
2005-01-01
To illustrate the ability of the West Midlands breast screening status algorithm to assign a screening status to women with malignant breast cancer, and its uses as a quality assurance and audit tool. Breast cancers diagnosed between the introduction of the National Health Service (NHS) Breast Screening Programme and 31 March 2001 were obtained from the West Midlands Cancer Intelligence Unit (WMCIU). Screen-detected tumours were identified via breast screening units, and the remaining cancers were assigned to one of eight screening status categories. Multiple primaries and recurrences were excluded. A screening status was assigned to 14,680 women (96% of the cohort examined); 110 cancers were not registered at the WMCIU, and the cohort included 120 screen-detected recurrences. The West Midlands breast screening status algorithm is a robust, simple tool that can be used to derive data to evaluate the efficacy and impact of the NHS Breast Screening Programme.
Multimodality imaging of ovarian cystic lesions: Review with an imaging based algorithmic approach
Wasnik, Ashish P; Menias, Christine O; Platt, Joel F; Lalchandani, Usha R; Bedi, Deepak G; Elsayes, Khaled M
2013-01-01
Ovarian cystic masses include a spectrum of benign, borderline and high grade malignant neoplasms. Imaging plays a crucial role in characterization and pretreatment planning of incidentally detected or suspected adnexal masses, as diagnosis of ovarian malignancy at an early stage is correlated with a better prognosis. Knowledge of differential diagnosis, imaging features, management trends and an algorithmic approach of such lesions is important for optimal clinical management. This article illustrates a multi-modality approach in the diagnosis of a spectrum of ovarian cystic masses and also proposes an algorithmic approach for the diagnosis of these lesions. PMID:23671748
Integrated segmentation of cellular structures
NASA Astrophysics Data System (ADS)
Ajemba, Peter; Al-Kofahi, Yousef; Scott, Richard; Donovan, Michael; Fernandez, Gerardo
2011-03-01
Automatic segmentation of cellular structures is an essential step in image cytology and histology. Despite substantial progress, better automation and improvements in accuracy and adaptability to novel applications are needed. In applications utilizing multi-channel immuno-fluorescence images, challenges include misclassification of epithelial and stromal nuclei, irregular nuclei and cytoplasm boundaries, and over- and under-segmentation of clustered nuclei. Variations in image acquisition conditions and artifacts from nuclei and cytoplasm images often confound existing algorithms in practice. In this paper, we present a robust and accurate algorithm for jointly segmenting cell nuclei and cytoplasm using a combination of ideas to reduce the aforementioned problems. First, an adaptive process that includes top-hat filtering, Eigenvalues-of-Hessian blob detection and distance transforms is used to estimate the inverse illumination field and correct for intensity non-uniformity in the nuclei channel. Next, a minimum-error-thresholding-based binarization process and seed detection combining Laplacian-of-Gaussian filtering constrained by a distance-map-based scale selection are used to identify candidate seeds for nuclei segmentation. The initial segmentation using a local maximum clustering algorithm is refined using a minimum-error-thresholding technique. Final refinements include an artifact removal process specifically targeted at lumens and other problematic structures and a systematic decision process to reclassify nuclei objects near the cytoplasm boundary as epithelial or stromal. Segmentation results were evaluated using 48 realistic phantom images with known ground-truth. The overall segmentation accuracy exceeds 94%. The algorithm was further tested on 981 images of actual prostate cancer tissue. The artifact removal process worked in 90% of cases. The algorithm has now been deployed in a high-volume histology analysis application.
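For illustration, a minimal Python sketch of the seed-detection stage described above, using scikit-image; the top-hat radius, sigma range, and threshold are assumptions, not the deployed system's settings:

import numpy as np
from skimage.morphology import white_tophat, disk
from skimage.feature import blob_log

def nuclei_seeds(nuclei_channel, tophat_radius=25):
    """Top-hat background suppression followed by LoG blob detection."""
    img = nuclei_channel.astype(np.float64)
    img /= img.max() + 1e-9
    flat = white_tophat(img, disk(tophat_radius))  # suppress slow illumination field
    # LoG blob detection; the sigma range plays the role of a scale prior
    blobs = blob_log(flat, min_sigma=3, max_sigma=12, num_sigma=10, threshold=0.05)
    return blobs[:, :2]                            # (row, col) seed coordinates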
NASA Astrophysics Data System (ADS)
Davis, Jeremy E.; Bednar, Amy E.; Goodin, Christopher T.; Durst, Phillip J.; Anderson, Derek T.; Bethel, Cindy L.
2017-05-01
Particle swarm optimization (PSO) and genetic algorithms (GAs) are two optimization techniques from the field of computational intelligence (CI) for search problems where a direct solution cannot easily be obtained. One such problem is finding an optimal set of parameters for the maximally stable extremal region (MSER) algorithm to detect areas of interest in imagery. Specifically, this paper describes the design of a GA and PSO for optimizing MSER parameters to detect stop signs in imagery produced via simulation for use in an autonomous vehicle navigation system. Several additions to the GA and PSO are required to successfully detect stop signs in simulated images. These additions are a primary focus of this paper and include: the identification of an appropriate fitness function, the creation of a variable mutation operator for the GA, an anytime algorithm modification to allow the GA to compute a solution quickly, the addition of an exponential velocity decay function to the PSO, the addition of an "execution best" omnipresent particle to the PSO, and the addition of an attractive force component to the PSO velocity update equation. Experimentation was performed with the GA using various combinations of selection, crossover, and mutation operators, and with the PSO using various combinations of neighborhood topologies, swarm sizes, cognitive influence scalars, and social influence scalars. The results of both the GA and PSO optimized parameter sets are presented. This paper details the benefits and drawbacks of each algorithm in terms of detection accuracy, execution speed, and additions required to generate successful problem-specific parameter sets.
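A minimal NumPy sketch of a PSO loop with the exponential velocity (inertia) decay described above; the fitness function, bounds, and decay rate are placeholders, and the paper's other additions (omnipresent particle, attractive force term) are omitted:

import numpy as np

def pso(fitness, dim, n_particles=30, iters=100, w0=0.9, decay=0.97,
        c1=1.5, c2=1.5, lo=0.0, hi=1.0):
    """Maximize `fitness` over a box-bounded search space."""
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    g = pbest[pbest_f.argmax()]
    for t in range(iters):
        w = w0 * decay**t                       # exponential velocity decay
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmax()]
    return g, pbest_f.max()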
Automated detection of tuberculosis on sputum smeared slides using stepwise classification
NASA Astrophysics Data System (ADS)
Divekar, Ajay; Pangilinan, Corina; Coetzee, Gerrit; Sondh, Tarlochan; Lure, Fleming Y. M.; Kennedy, Sean
2012-03-01
Routine visual slide screening for identification of tuberculosis (TB) bacilli in stained sputum slides under a microscope is a tedious, labor-intensive task and can miss up to 50% of TB. Based on the Shannon cofactor expansion of Boolean functions for classification, a stepwise classification (SWC) algorithm is developed to remove different types of false positives, one type at a time, and to increase the detection of TB bacilli at different concentrations. Both bacilli and non-bacilli objects are first analyzed and classified into several different categories, including scanty positive, high-concentration positive, and several non-bacilli categories: small bright objects, beaded objects, dim elongated objects, etc. The morphological and contrast features are extracted based on a priori clinical knowledge. The SWC is composed of several individual classifiers. The classifier that increases the bacilli counts utilizes an adaptive algorithm based on a microbiologist's statistical heuristic decision process. The classifier that reduces false positives is developed through minimization of a binary decision tree to classify different types of true and false positives based on feature vectors. Finally, the detection algorithm was tested on 102 independent confirmed negative and 74 positive cases. A multi-class task analysis shows high accordance rates for negative, scanty, and high-concentration cases of 88.24%, 56.00%, and 97.96%, respectively. A binary-class task analysis using a receiver operating characteristic method with the area under the curve (Az) is also utilized to analyze the performance of this detection algorithm, showing superior detection performance on the high-concentration cases (Az=0.913) and cases mixed with high-concentration and scanty cases (Az=0.878).
Huynh, Phat; Do, Trong-Hop; Yoo, Myungsik
2017-02-10
This paper proposes a probability-based algorithm to track the LEDs in vehicle visible light communication systems using a camera. In this system, the transmitters are the vehicles' front and rear LED lights. The receivers are high-speed cameras that take a series of images of the LEDs. The data embedded in the light is extracted by first detecting the position of the LEDs in these images. Traditionally, LEDs are detected according to pixel intensity. However, when the vehicle is moving, motion blur occurs in the LED images, making it difficult to detect the LEDs. Particularly at high speeds, some frames are blurred to a high degree, which makes it impossible to detect the LEDs or extract the information embedded in these frames. The proposed algorithm relies not only on the pixel intensity, but also on the optical flow of the LEDs and on statistical information obtained from previous frames. Based on this information, the conditional probability that a pixel belongs to an LED is calculated. Then, the position of the LED is determined based on this probability. To verify the suitability of the proposed algorithm, simulations are conducted considering the incidents that can happen in a real-world situation, including a change in the position of the LEDs at each frame, as well as motion blur due to the vehicle speed.
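A hedged NumPy sketch of the fusion idea: each pixel's LED probability combines a brightness likelihood, an optical-flow likelihood, and a prior from earlier frames. The Gaussian models and weights here are assumptions for illustration, not the paper's exact formulation:

import numpy as np

def led_probability(intensity, flow_mag, prior_mask,
                    i_mean=220.0, i_std=25.0, f_mean=2.0, f_std=1.0):
    """Per-pixel probability map; all model parameters are illustrative."""
    # Gaussian likelihoods for brightness and for flow magnitude consistent
    # with the LED motion estimated from previous frames.
    p_int = np.exp(-0.5 * ((intensity - i_mean) / i_std) ** 2)
    p_flow = np.exp(-0.5 * ((flow_mag - f_mean) / f_std) ** 2)
    p = p_int * p_flow * (0.2 + 0.8 * prior_mask)  # prior from earlier frames
    return p / (p.max() + 1e-9)

# LED position = argmax of the fused probability map:
# r, c = np.unravel_index(led_probability(I, F, M).argmax(), I.shape)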
A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots
Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il “Dan”
2016-01-01
This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%. PMID:26938540
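A minimal OpenCV sketch of inverse perspective mapping (IPM) as used here: the floor is warped to a bird's-eye view with a homography, so anything violating the planar-floor assumption appears distorted. The four point correspondences are placeholders that would come from camera calibration:

import cv2
import numpy as np

def ipm_warp(frame):
    """Warp the bottom region of interest to a rectified floor-plane view."""
    h, w = frame.shape[:2]
    src = np.float32([[0, h], [w, h], [w * 0.65, h * 0.6], [w * 0.35, h * 0.6]])
    dst = np.float32([[0, h], [w, h], [w, 0], [0, 0]])  # bird's-eye floor plane
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, H, (w, h))

# Obstacles violate the planar-floor assumption, so they appear stretched in
# the warped image; segmentation on the IPM result then separates them from
# the floor appearance model.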
Image based book cover recognition and retrieval
NASA Astrophysics Data System (ADS)
Sukhadan, Kalyani; Vijayarajan, V.; Krishnamoorthi, A.; Bessie Amali, D. Geraldine
2017-11-01
In this work, we develop a graphical user interface in MATLAB that lets users retrieve information about books in real time. A photo of the book cover is taken through the GUI; the MSER algorithm then automatically detects features in the input image and filters out non-text regions based on morphological differences between text and non-text regions. We implemented a text-character alignment algorithm that improves the accuracy of the original text detection. We also examine the built-in MATLAB OCR algorithm and a commonly used open-source OCR engine; a post-detection algorithm and natural language processing are then applied to perform word correction and suppress false detections. Finally, the detection result is linked to the internet to perform online matching. The algorithm achieves an accuracy of more than 86%.
Automatic tracking of wake vortices using ground-wind sensor data
DOT National Transportation Integrated Search
1977-01-03
Algorithms for automatic tracking of wake vortices using ground-wind anemometer data are developed. Methods of bad-data suppression, track initiation, and track termination are included. An effective sensor-failure detection-and-identification ...
Hazardous gas detection for FTIR-based hyperspectral imaging system using DNN and CNN
NASA Astrophysics Data System (ADS)
Kim, Yong Chan; Yu, Hyeong-Geun; Lee, Jae-Hoon; Park, Dong-Jo; Nam, Hyun-Woo
2017-10-01
Recently, the hyperspectral imaging system (HIS) with a Fourier transform infrared (FTIR) spectrometer has been widely used due to its strengths in detecting gaseous fumes. Even though numerous algorithms for detecting gaseous fumes have already been studied, it is still difficult to detect target gases properly because of atmospheric interference substances and the unclear characteristics of low-concentration gases. In this paper, we propose detection algorithms for classifying hazardous gases using a deep neural network (DNN) and a convolutional neural network (CNN). In both the DNN and CNN, spectral signal preprocessing, e.g., offset, noise, and baseline removal, is carried out. In the DNN algorithm, the preprocessed spectral signals are used as feature maps of a five-layer DNN, which is trained by a stochastic gradient descent (SGD) algorithm (batch size of 50) with dropout regularization (ratio of 0.7). In the CNN algorithm, preprocessed spectral signals are trained with 1 × 3 convolution layers and 1 × 2 max-pooling layers. As a result, the proposed algorithms improve the classification accuracy rate by 1.5% over the existing support vector machine (SVM) algorithm for detecting and classifying hazardous gases.
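A hedged PyTorch sketch of the CNN branch with 1 × 3 convolutions and 1 × 2 max-pooling over preprocessed spectra; the layer widths, spectrum length, and number of gas classes are assumptions for illustration:

import torch
import torch.nn as nn

class GasCNN(nn.Module):
    """1D CNN over a preprocessed spectrum; sizes are illustrative."""
    def __init__(self, n_classes=5, spectrum_len=1024):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),                      # 1 x 2 max-pooling
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (spectrum_len // 4), n_classes)

    def forward(self, x):                         # x: (batch, 1, spectrum_len)
        f = self.features(x)
        return self.classifier(f.flatten(1))

# model = GasCNN(); logits = model(torch.randn(8, 1, 1024))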
A Motion Detection Algorithm Using Local Phase Information
Lazar, Aurel A.; Ukani, Nikul H.; Zhou, Yiyin
2016-01-01
Previous research demonstrated that global phase alone can be used to faithfully represent visual scenes. Here we provide a reconstruction algorithm using only local phase information. We also demonstrate that local phase alone can be effectively used to detect local motion. The local phase-based motion detector is akin to models employed to detect motion in biological vision, for example, the Reichardt detector. The local phase-based motion detection algorithm introduced here consists of two building blocks. The first building block measures the temporal change of the local phase. The temporal derivative of the local phase is shown to exhibit the structure of a second-order Volterra kernel with two normalized inputs. We provide an efficient, FFT-based algorithm for computing the change of the local phase. The second building block implements the detector; it compares the maximum of the Radon transform of the local phase derivative with a chosen threshold. We demonstrate examples of applying the local phase-based motion detection algorithm to several video sequences. We also show how the locally detected motion can be used for segmenting moving objects in video scenes and compare our local phase-based algorithm to segmentation achieved with a widely used optic flow algorithm. PMID:26880882
Detection of dechallenge in spontaneous reporting systems: a comparison of Bayes methods.
Banu, A Bazila; Alias Balamurugan, S Appavu; Thirumalaikolundusubramanian, Ponniah
2014-01-01
Dechallenge is a response observed as the reduction or disappearance of adverse drug reactions (ADR) on withdrawal of a drug from a patient. Currently available algorithms to detect dechallenge have limitations; hence, there is a need to compare available new methods. To detect dechallenge in spontaneous reporting systems, data-mining algorithms like Naive Bayes and Improved Naive Bayes were applied, and their performance was compared in terms of accuracy and error. Analyzing the factors of dechallenge, such as outcome and disease category, will help medical practitioners and pharmaceutical industries determine the reasons for dechallenge in order to take essential steps toward drug safety. Adverse drug reactions for the years 2011 and 2012 were downloaded from the United States Food and Drug Administration's database. The outcome of the classification algorithms showed that the Improved Naive Bayes algorithm outperformed Naive Bayes, with an accuracy of 90.11% and an error of 9.8% in detecting dechallenge. Detecting dechallenge for unknown samples is essential for proper prescription. To overcome the issues exposed by the Naive Bayes algorithm, the Improved Naive Bayes algorithm can be used to detect dechallenge with higher accuracy and minimal error.
Research on improved edge extraction algorithm of rectangular piece
NASA Astrophysics Data System (ADS)
He, Yi-Bin; Zeng, Ya-Jun; Chen, Han-Xin; Xiao, San-Xia; Wang, Yan-Wei; Huang, Si-Yu
Traditional edge detection operators such as the Prewitt, LoG, and Canny operators cannot meet the requirements of modern industrial measurement. This paper proposes an image edge detection algorithm based on an improved morphological gradient. The image is probed with structuring elements, which deal with the characteristic information of the image directly. By combining structuring elements of different shapes and sizes, the ideal image edge information can be detected. The experimental results show that the algorithm extracts image edges well in the presence of noise, yielding clearer and more detailed edges than previous edge detection algorithms.
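A minimal OpenCV sketch of morphological-gradient edge detection with structuring elements of several shapes and sizes, in the spirit of the improved operator; the element choices and the max-fusion rule are illustrative assumptions:

import cv2
import numpy as np

def morph_gradient_edges(gray):
    """Fuse morphological gradients from several structuring elements."""
    elems = [cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)),
             cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)),
             cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))]
    grads = [cv2.morphologyEx(gray, cv2.MORPH_GRADIENT, k) for k in elems]
    fused = np.maximum.reduce(grads)   # combine multi-element responses
    _, edges = cv2.threshold(fused, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return edges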
Control optimization, stabilization and computer algorithms for aircraft applications
NASA Technical Reports Server (NTRS)
1975-01-01
Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.
Graph Based Models for Unsupervised High Dimensional Data Clustering and Network Analysis
2015-01-01
We propose graph-based models for unsupervised high-dimensional data clustering and network analysis. The proposed graph-based clustering algorithms improve time efficiency significantly for large-scale datasets. In the last chapter, we also propose an incremental reseeding algorithm, with applications including plume detection in hyper-spectral video data.
Solar Demon: near real-time solar eruptive event detection on SDO/AIA images
NASA Astrophysics Data System (ADS)
Kraaikamp, Emil; Verbeeck, Cis
Solar flares, dimmings and EUV waves have been observed routinely in extreme ultra-violet (EUV) images of the Sun since 1996. These events are closely associated with coronal mass ejections (CMEs), and therefore provide useful information for early space weather alerts. The Solar Dynamics Observatory/Atmospheric Imaging Assembly (SDO/AIA) generates such a massive dataset that it becomes impossible to find most of these eruptive events manually. Solar Demon is a set of automatic detection algorithms that attempts to solve this problem by providing both near real-time warnings of eruptive events and a catalog of characterized events. Solar Demon has been designed to detect and characterize dimmings, EUV waves, as well as solar flares in near real-time on SDO/AIA data. The detection modules are running continuously at the Royal Observatory of Belgium on both quick-look data and synoptic science data. The output of Solar Demon can be accessed in near real-time on the Solar Demon website, and includes images, movies, light curves, and the numerical evolution of several parameters. Solar Demon is the result of collaboration between the FP7 projects AFFECTS and COMESEP. Flare detections of Solar Demon are integrated into the COMESEP alert system. Here we present the Solar Demon detection algorithms and their output. We will focus on the algorithm and its operational implementation. Examples of interesting flare, dimming and EUV wave events, and general statistics of the detections made so far during solar cycle 24 will be presented as well.
Toward development of mobile application for hand arthritis screening.
Akhbardeh, Farhad; Vasefi, Fartash; Tavakolian, Kouhyar; Bradley, David; Fazel-Rezai, Reza
2015-01-01
Arthritis is one of the most common health problems affecting people throughout the world. The goal of the work presented in this paper is to provide individuals, who may be developing or have developed arthritis, with a mobile application to assess and monitor the progress of their disease using their smartphone. The image processing pipeline includes a finger border detection algorithm to monitor joint thickness and angular deviation abnormalities, which are common symptoms of arthritis. In this work, we have analyzed and compared gradient, thresholding, and Canny algorithms for border detection. The effect of image spatial resolution (down-sampling) is also investigated. The results, calculated based on 36 joint measurements, show that the mean errors for the gradient, thresholding, and Canny methods are 0.20, 2.13, and 2.03 mm, respectively. In addition, the average error for different image resolutions is analyzed and the minimum required resolution is determined for each method. The results confirm that recent smartphone imaging capabilities can provide enough accuracy for hand border detection and finger joint analysis based on the gradient method.
Testing and Validating Machine Learning Classifiers by Metamorphic Testing☆
Xie, Xiaoyuan; Ho, Joshua W. K.; Murphy, Christian; Kaiser, Gail; Xu, Baowen; Chen, Tsong Yueh
2011-01-01
Machine learning algorithms provide core functionality to many application domains, such as bioinformatics and computational linguistics. However, it is difficult to detect faults in such applications because often there is no "test oracle" to verify the correctness of the computed outputs. To help address software quality, in this paper we present a technique for testing the implementations of the machine learning classification algorithms which support such applications. Our approach is based on the technique of "metamorphic testing", which has been shown to be effective in alleviating the oracle problem. Also presented are a case study on a real-world machine learning application framework and a discussion of how programmers implementing machine learning algorithms can avoid the common pitfalls discovered in our study. We also conduct mutation analysis and cross-validation, which reveal that our method is highly effective in killing mutants, and that observing an expected cross-validation result alone is not sufficiently effective to detect faults in a supervised classification program. The effectiveness of metamorphic testing is further confirmed by the detection of real faults in a popular open-source classification program. PMID:21532969
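As an illustration of a metamorphic relation of the kind used in such testing, the sketch below checks that permuting the training set leaves a k-NN classifier's predictions unchanged (assuming no exact distance ties); scikit-learn's classifier stands in for the program under test:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def metamorphic_permutation_test(X, y, X_test, seed=0):
    """Source and follow-up executions must agree; a mismatch signals a fault."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(X))
    base = KNeighborsClassifier(n_neighbors=3).fit(X, y).predict(X_test)
    follow = KNeighborsClassifier(n_neighbors=3).fit(X[perm], y[perm]).predict(X_test)
    # Holds in exact arithmetic when no distance ties occur.
    assert np.array_equal(base, follow), "metamorphic relation violated: possible fault"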
Integrated System Health Management (ISHM) for Test Stand and J-2X Engine: Core Implementation
NASA Technical Reports Server (NTRS)
Figueroa, Jorge F.; Schmalzel, John L.; Aguilar, Robert; Shwabacher, Mark; Morris, Jon
2008-01-01
ISHM capability enables a system to detect anomalies, determine causes and effects, predict future anomalies, and provide an integrated awareness of the health of the system to users (operators, customers, management, etc.). NASA Stennis Space Center, NASA Ames Research Center, and Pratt & Whitney Rocketdyne have implemented a core ISHM capability that encompasses the A1 Test Stand and the J-2X Engine. The implementation incorporates all aspects of ISHM, from anomaly detection (e.g., leaks), to root-cause analysis based on failure mode and effects analysis (FMEA), to a user interface for an integrated visualization of the health of the system (test stand and engine). The implementation provides a low functional capability level (FCL) in that it is populated with few algorithms and approaches for anomaly detection, and root-cause trees from a limited FMEA effort. However, it is a demonstration of a credible ISHM capability, and it is inherently designed for continuous and systematic augmentation of that capability. The ISHM capability is grounded on an integrating software environment used to create an ISHM model of the system. The ISHM model follows an object-oriented approach: it includes all elements of the system (from schematics) and provides for compartmentalized storage of information associated with each element. For instance, a sensor object contains a transducer electronic data sheet (TEDS) with information that might be used by algorithms and approaches for anomaly detection, diagnostics, etc. Similarly, a component, such as a tank, contains a component electronic data sheet (CEDS). Each element also includes a health electronic data sheet (HEDS) that contains health-related information such as anomalies and health state. Some practical aspects of the implementation include: (1) near real-time data flow from the test stand data acquisition system through the ISHM model, for near real-time detection of anomalies and diagnostics; (2) insertion of the J-2X predictive model, providing predicted sensor values for comparison with measured values and use in anomaly detection and diagnostics; and (3) insertion of third-party anomaly detection algorithms into the integrated ISHM model.
Heterogeneous Vision Data Fusion for Independently Moving Cameras
2010-03-01
The project addresses target detection, tracking, and identification over a large terrain. Its goals are to investigate and evaluate existing image fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms to moving target detection and tracking. Subject terms: image fusion, target detection, moving cameras, IR camera, EO camera.
An Automated Energy Detection Algorithm Based on Consecutive Mean Excision
2018-01-01
An automated energy detection algorithm based on consecutive mean excision is developed for setting detection thresholds on signals present in the RF spectrum. The approach draws on order statistics such as the median, a rank order filter (ROF), and the crest factor (CF), and builds on earlier work on energy detection based on morphological filter processing with a semi-disk structure (Army Research Laboratory, Adelphi, MD, 2018). Subject terms: RF spectrum, detection threshold algorithm, consecutive mean excision, rank order filter.
Sign Language Recognition System using Neural Network for Digital Hardware Implementation
NASA Astrophysics Data System (ADS)
Vargas, Lorena P.; Barba, Leiner; Torres, C. O.; Mattos, L.
2011-01-01
This work presents an image pattern recognition system that uses a neural network to identify sign language for deaf people. The system stores several images showing the specific symbols of this language, which are used to train a multilayer neural network with a back-propagation algorithm. Initially, the images are processed to adapt them and to improve the discriminating performance of the network; this processing includes filtering, noise reduction and elimination algorithms, as well as edge detection. The system is evaluated using signs whose representation does not include movement.
Texture orientation-based algorithm for detecting infrared maritime targets.
Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Xu, Wenhai
2015-05-20
Infrared maritime target detection is a key technology for maritime target searching systems. However, in infrared maritime images (IMIs) taken under complicated sea conditions, background clutters such as ocean waves, clouds or sea fog usually have high intensity that can easily overwhelm the brightness of real targets, which is difficult for traditional target detection algorithms to deal with. To mitigate this problem, this paper proposes a novel target detection algorithm based on texture orientation. This algorithm first extracts suspected targets by analyzing the intersubband correlation between horizontal and vertical wavelet subbands of the original IMI on the first scale. Then self-adaptive wavelet threshold denoising and local singularity analysis of the original IMI are combined to remove remaining false alarms. Experiments show that compared with traditional algorithms, this algorithm can suppress background clutter much better and realize better single-frame detection for infrared maritime targets. Besides, in order to guarantee accurate target extraction, the pipeline-filtering algorithm is adopted to eliminate residual false alarms. The high practical value and applicability of this proposed strategy are backed strongly by experimental data acquired under different environmental conditions.
Investigating prior probabilities in a multiple hypothesis test for use in space domain awareness
NASA Astrophysics Data System (ADS)
Hardy, Tyler J.; Cain, Stephen C.
2016-05-01
The goal of this research effort is to improve Space Domain Awareness (SDA) capabilities of current telescope systems through improved detection algorithms. Ground-based optical SDA telescopes are often spatially under-sampled, or aliased. This fact negatively impacts the detection performance of traditionally proposed binary and correlation-based detection algorithms. A Multiple Hypothesis Test (MHT) algorithm has been previously developed to mitigate the effects of spatial aliasing. This is done by testing potential Resident Space Objects (RSOs) against several sub-pixel shifted Point Spread Functions (PSFs). A MHT has been shown to increase detection performance for the same false alarm rate. In this paper, the assumption of a priori probability used in a MHT algorithm is investigated. First, an analysis of the pixel decision space is completed to determine alternate hypothesis prior probabilities. These probabilities are then implemented into a MHT algorithm, and the algorithm is then tested against previous MHT algorithms using simulated RSO data. Results are reported with Receiver Operating Characteristic (ROC) curves and probability of detection, Pd, analysis.
A wavelet transform algorithm for peak detection and application to powder x-ray diffraction data.
Gregoire, John M; Dale, Darren; van Dover, R Bruce
2011-01-01
Peak detection is ubiquitous in the analysis of spectral data. While many noise-filtering algorithms and peak identification algorithms have been developed, recent work [P. Du, W. Kibbe, and S. Lin, Bioinformatics 22, 2059 (2006); A. Wee, D. Grayden, Y. Zhu, K. Petkovic-Duran, and D. Smith, Electrophoresis 29, 4215 (2008)] has demonstrated that both of these tasks are efficiently performed through analysis of the wavelet transform of the data. In this paper, we present a wavelet-based peak detection algorithm with user-defined parameters that can be readily applied to the application of any spectral data. Particular attention is given to the algorithm's resolution of overlapping peaks. The algorithm is implemented for the analysis of powder diffraction data, and successful detection of Bragg peaks is demonstrated for both low signal-to-noise data from theta-theta diffraction of nanoparticles and combinatorial x-ray diffraction data from a composition spread thin film. These datasets have different types of background signals which are effectively removed in the wavelet-based method, and the results demonstrate that the algorithm provides a robust method for automated peak detection.
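In the same spirit (though not the paper's own implementation), SciPy ships a wavelet-based peak finder; in this usage sketch the `widths` range and the stand-in data are illustrative:

import numpy as np
from scipy.signal import find_peaks_cwt

# Stand-in diffraction pattern; a real pattern would replace this noise.
two_theta = np.linspace(10, 80, 3500)
intensity = np.random.default_rng(1).normal(100, 5, two_theta.size)

# `widths` spans the expected Bragg-peak widths, in samples.
peak_idx = find_peaks_cwt(intensity, widths=np.arange(3, 40))
print(two_theta[peak_idx])   # candidate Bragg-peak positions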
Oginosawa, Yasushi; Kohno, Ritsuko; Honda, Toshihiro; Kikuchi, Kan; Nozoe, Masatsugu; Uchida, Takayuki; Minamiguchi, Hitoshi; Sonoda, Koichiro; Ogawa, Masahiro; Ideguchi, Takeshi; Kizaki, Yoshihisa; Nakamura, Toshihiro; Oba, Kageyuki; Higa, Satoshi; Yoshida, Keiki; Tsunoda, Soichi; Fujino, Yoshihisa; Abe, Haruhiko
2017-08-25
Shocks delivered by implanted anti-tachyarrhythmia devices, even when appropriate, lower quality of life and survival. The new SmartShock Technology® (SST) discrimination algorithm was developed to prevent the delivery of inappropriate shocks. This prospective, multicenter, observational study compared the rate of inaccurate detection of ventricular tachyarrhythmia using the SST vs. a conventional discrimination algorithm. Methods and Results: Recipients of implantable cardioverter defibrillators (ICD) or cardiac resynchronization therapy defibrillators (CRT-D) equipped with the SST algorithm were enrolled and followed up every 6 months. The tachycardia detection rate was set at ≥150 beats/min with the SST algorithm. The primary endpoint was the time to first inaccurate detection of ventricular tachycardia (VT) with the conventional vs. the SST discrimination algorithm, up to 2 years of follow-up. Between March 2012 and September 2013, 185 patients (mean age, 64.0±14.9 years; men, 74%; secondary prevention indication, 49.5%) were enrolled at 14 Japanese medical centers. Inaccurate detection was observed in 32 patients (17.6%) with the conventional algorithm vs. 19 patients (10.4%) with the SST algorithm. SST significantly lowered the rate of inaccurate detection by dual-chamber devices (HR, 0.50; 95% CI: 0.263-0.950; P=0.034). Compared with previous algorithms, the SST discrimination algorithm significantly lowered the rate of inaccurate detection of VT in recipients of dual-chamber ICD or CRT-D.
Acquisition and use of Orlando, Florida and Continental Airbus radar flight test data
NASA Technical Reports Server (NTRS)
Eide, Michael C.; Mathews, Bruce
1992-01-01
Westinghouse is developing a lookdown pulse Doppler radar for production as the sensor and processor of a forward-looking hazardous windshear detection and avoidance system. A data collection prototype of that product was readied for flight testing in Orlando to encounter low-level windshear in corroboration with the FAA Terminal Doppler Weather Radar (TDWR). Airborne real-time processing and display of the hazard factor were demonstrated with TDWR-facilitated intercepts and penetrations of over 80 microbursts in a three-day period, including microbursts with hazard factors in excess of .16 (with 500 ft PIREP altitude loss) and the hazard factor display at 6 n.mi. of a visually transparent ('dry') microburst with TDWR-corroborated outflow reflectivities of +5 dBz. Range-gated Doppler spectrum data was recorded for subsequent development and refinement of hazard factor detection and urban clutter rejection algorithms. Following Orlando, the data collection radar was supplemental-type certified for revenue service on a Continental Airlines Airbus, operating automatically and on a non-interfering basis with the aircraft's ARINC 708 radar, to allow Westinghouse to confirm its understanding of commercial aircraft installation, interface realities, and urban airport clutter. A number of software upgrades, all of which were verified at the Receiver-Transmitter-Processor (RTP) hardware bench with Orlando microburst data to produce the desired advanced-warning hazard factor detection, included preliminary loads with automatic (sliding-window average hazard factor) detection and annunciation recording. The current (14-APR-92) configured software is free from false and/or nuisance alerts (CAUTIONS, WARNINGS, etc.) for all take-offs and landing approaches, under 2500 ft altitude to weight-on-wheels, into all encountered airports, including Newark (NJ), LAX, Denver, Houston, Cleveland, etc. Using the Orlando data collected on hazardous microbursts, Westinghouse has developed a lookdown pulse Doppler radar product with signal and data processing algorithms which detect realistic microburst hazards, and has demonstrated that those algorithms produce no false or nuisance alerts in urban airport ground moving vehicle (GMTI) and/or clutter environments.
Li, Ye; Whelan, Michael; Hobbs, Leigh; Fan, Wen Qi; Fung, Cecilia; Wong, Kenny; Marchand-Austin, Alex; Badiani, Tina; Johnson, Ian
2016-06-27
In 2014/2015, Public Health Ontario developed disease-specific, cumulative sum (CUSUM)-based statistical algorithms for detecting aberrant increases in reportable infectious disease incidence in Ontario. The objective of this study was to determine whether the prospective application of these CUSUM algorithms, based on historical patterns, improves specificity and sensitivity compared to the currently used Early Aberration Reporting System (EARS) algorithm developed by the US Centers for Disease Control and Prevention. A total of seven algorithms were developed for the following diseases: cyclosporiasis, giardiasis, influenza (one each for type A and type B), mumps, pertussis, and invasive pneumococcal disease. Historical data were used as a baseline to assess known outbreaks. Regression models were used to model seasonality, and CUSUM was applied to the difference between observed and expected counts. An interactive web application was developed allowing program staff to interact directly with the data and tune the parameters of the CUSUM algorithms using their expertise on the epidemiology of each disease. Using these parameters, a CUSUM detection system was applied prospectively and the results were compared to the outputs generated by EARS. The outcomes were the detection of outbreaks or of the start of a known seasonal increase, and the prediction of the peak in activity. The CUSUM algorithms detected provincial outbreaks earlier than the EARS algorithm, identified the start of the influenza season in advance of traditional methods, and had fewer false positive alerts. Additionally, having staff involved in the creation of the algorithms improved their understanding of the algorithms and their use in practice. Using interactive web-based technology to tune CUSUM improved the sensitivity and specificity of the detection algorithms.
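A minimal sketch of the approach described: a harmonic regression models seasonality, and a one-sided CUSUM runs on the standardized observed-minus-expected counts. The weekly period, slack k, and threshold h are illustrative tuning choices, not Public Health Ontario's:

import numpy as np

def seasonal_cusum(counts, k=0.5, h=5.0):
    """Return indices where the upper CUSUM crosses the alarm threshold."""
    counts = np.asarray(counts, dtype=float)
    t = np.arange(len(counts))
    # Simple harmonic regression for the seasonal baseline (illustrative,
    # assuming weekly counts with an annual period of 52).
    X = np.column_stack([np.ones_like(t), np.sin(2*np.pi*t/52), np.cos(2*np.pi*t/52)])
    beta, *_ = np.linalg.lstsq(X, counts, rcond=None)
    expected = X @ beta
    resid = (counts - expected) / (counts.std() + 1e-9)  # standardized residuals
    s, alarms = 0.0, []
    for i, r in enumerate(resid):
        s = max(0.0, s + r - k)        # one-sided upper CUSUM
        if s > h:
            alarms.append(i)
            s = 0.0                    # reset after an alarm
    return alarms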
Detecting multiple moving objects in crowded environments with coherent motion regions
Cheriyadat, Anil M.; Radke, Richard J.
2013-06-11
Coherent motion regions extend in time as well as space, enforcing consistency in detected objects over long time periods and making the algorithm robust to noisy or short point tracks. The algorithm enforces the constraint that selected coherent motion regions contain disjoint sets of tracks, defined in a three-dimensional space that includes a time dimension. It operates directly on raw, unconditioned low-level feature point tracks and minimizes a global measure of the coherent motion regions. At least one discrete moving object is identified in a time series of video images based on trajectory similarity factors, a measure of the maximum distance between a pair of feature point tracks.
Natural Inspired Intelligent Visual Computing and Its Application to Viticulture.
Ang, Li Minn; Seng, Kah Phooi; Ge, Feng Lu
2017-05-23
This paper presents an investigation of nature-inspired intelligent computing and its application to visual information processing systems for viticulture. The paper has three contributions: (1) a review of visual information processing applications for viticulture; (2) the development of nature-inspired computing algorithms based on artificial immune system (AIS) techniques for grape berry detection; and (3) the application of the developed algorithms to real-world grape berry images captured in natural conditions from vineyards in Australia. The AIS algorithms in (2) were developed based on a nature-inspired clonal selection algorithm (CSA) which is able to detect the arcs in the berry images with precision, based on a fitness model. The detected arcs are then extended to multiple-arc and ring detectors for the berry detection application. The performance of the developed algorithms was compared with traditional image processing algorithms like the circular Hough transform (CHT) and other well-known circle detection methods. The proposed AIS approach gave an F-score of 0.71, compared with F-scores of 0.28 and 0.30 for the CHT and a parameter-free circle detection technique (RPCD), respectively.
Research on Abnormal Detection Based on Improved Combination of K - means and SVDD
NASA Astrophysics Data System (ADS)
Hao, Xiaohong; Zhang, Xiaofeng
2018-01-01
In order to improve the efficiency of network intrusion detection and reduce the false alarm rate, this paper proposes an anomaly detection algorithm based on improved K-means and SVDD. The algorithm first uses the improved K-means algorithm to cluster the training samples of each class, so that each class is independent and internally compact. Then the SVDD algorithm is used to construct a minimal hypersphere around each class of training samples. The class membership of a test sample is determined by calculating its distance to each minimal hypersphere constructed by SVDD: if the distance from the test sample to the center of a hypersphere is less than its radius, the test sample belongs to that class; otherwise it does not. After several such comparisons, the class of the test sample is effectively detected. We use the KDD CUP99 data set to evaluate the proposed anomaly detection algorithm. The results show that the algorithm has a high detection rate and a low false alarm rate, making it an effective network security protection method.
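A hedged scikit-learn sketch of the two-stage idea: cluster normal training traffic with k-means, then fit one boundary per cluster. OneClassSVM stands in for SVDD here (the two are closely related for RBF kernels); all parameters are illustrative:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

def fit_detectors(X_train, n_clusters=4):
    """Cluster the training data, then fit one boundary model per cluster."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_train)
    models = [OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
              .fit(X_train[km.labels_ == c]) for c in range(n_clusters)]
    return km, models

def is_normal(km, models, x):
    # A sample is normal if it falls inside the boundary of its nearest cluster.
    c = km.predict(x.reshape(1, -1))[0]
    return models[c].predict(x.reshape(1, -1))[0] == 1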
The analysis of the pilot's cognitive and decision processes
NASA Technical Reports Server (NTRS)
Curry, R. E.
1975-01-01
Articles are presented on pilot performance in zero-visibility precision approach, failure detection by pilots during automatic landing, experiments in pilot decision-making during simulated low visibility approaches, a multinomial maximum likelihood program, and a random search algorithm for laboratory computers. Other topics discussed include detection of system failures in multi-axis tasks and changes in pilot workload during an instrument landing.
Reducing false-positive detections by combining two stage-1 computer-aided mass detection algorithms
NASA Astrophysics Data System (ADS)
Bedard, Noah D.; Sampat, Mehul P.; Stokes, Patrick A.; Markey, Mia K.
2006-03-01
In this paper we present a strategy for reducing the number of false-positives in computer-aided mass detection. Our approach is to only mark "consensus" detections from among the suspicious sites identified by different "stage-1" detection algorithms. By "stage-1" we mean that each of the Computer-aided Detection (CADe) algorithms is designed to operate with high sensitivity, allowing for a large number of false positives. In this study, two mass detection methods were used: (1) Heath and Bowyer's algorithm based on the average fraction under the minimum filter (AFUM) and (2) a low-threshold bi-lateral subtraction algorithm. The two methods were applied separately to a set of images from the Digital Database for Screening Mammography (DDSM) to obtain paired sets of mass candidates. The consensus mass candidates for each image were identified by a logical "and" operation of the two CADe algorithms so as to eliminate regions of suspicion that were not independently identified by both techniques. It was shown that by combining the evidence from the AFUM filter method with that obtained from bi-lateral subtraction, the same sensitivity could be reached with fewer false-positives per image relative to using the AFUM filter alone.
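A minimal sketch of the consensus rule: only regions flagged by both stage-1 detectors survive, implemented as a logical "and" of boolean masks followed by connected-component labeling; the mask inputs are assumed given:

import numpy as np
from scipy import ndimage

def consensus_regions(mask_afum, mask_bilateral):
    """Keep only suspicious regions marked by both stage-1 detectors."""
    both = np.logical_and(mask_afum, mask_bilateral)  # logical "and" of detectors
    labels, n = ndimage.label(both)                   # group surviving pixels
    return labels, n                                  # n consensus candidates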
Improvement of retinal blood vessel detection using morphological component analysis.
Imani, Elaheh; Javidi, Malihe; Pourreza, Hamid-Reza
2015-03-01
Detection and quantitative measurement of variations in the retinal blood vessels can help diagnose several diseases, including diabetic retinopathy. Intrinsic characteristics of abnormal retinal images make blood vessel detection difficult. The major problem with traditional vessel segmentation algorithms is producing false positive vessels in the presence of diabetic retinopathy lesions. To overcome this problem, a novel scheme for extracting retinal blood vessels based on the morphological component analysis (MCA) algorithm is presented in this paper. MCA was developed based on sparse representation of signals. This algorithm assumes that each signal is a linear combination of several morphologically distinct components. In the proposed method, the MCA algorithm with appropriate transforms is adopted to separate vessels and lesions from each other. Afterwards, the Morlet wavelet transform is applied to enhance the retinal vessels. The final vessel map is obtained by adaptive thresholding. The performance of the proposed method is measured on the publicly available DRIVE and STARE datasets and compared with several state-of-the-art methods. Accuracies of 0.9523 and 0.9590 have been achieved on the DRIVE and STARE datasets, respectively, which are not only greater than those of most methods, but are also superior to the second human observer's performance. The results show that the proposed method can achieve improved detection in abnormal retinal images and decrease false positive vessels in pathological regions compared to other methods. The robustness of the method in the presence of noise is also shown via experimental results.
Huang, Chengqiang; Yang, Youchang; Wu, Bo; Yu, Weize
2018-06-01
The sub-pixel arrangement of the RGBG panel differs from that of images in RGB format, so an algorithm that converts RGB to RGBG is needed to display an RGB image on an RGBG panel. However, in published studies of this conversion, information loss remains large even though color fringing artifacts are weakened. In this paper, an RGB-to-RGBG conversion algorithm with adaptive weighting factors based on edge detection and minimal square error (EDMSE) is proposed. The main points of innovation include the following: (1) edge detection is first proposed to distinguish image details with serious color fringing artifacts from image details that are prone to be lost in the process of RGB-RGBG conversion; (2) for image details with serious color fringing artifacts, a weighting factor of 0.5 is applied to weaken the color fringing artifacts; and (3) for image details that are prone to be lost in the process of RGB-RGBG conversion, a special mechanism to minimize the square error is proposed. The experiments show that color fringing artifacts are slightly improved by EDMSE, and the MSE values of the processed image are 19.6% and 7% smaller than those of images processed by the direct assignment and weighting factor algorithms, respectively. The proposed algorithm is implemented on a field programmable gate array to enable image display on the RGBG panel.
Lieb, Florian; Stark, Hans-Georg; Thielemann, Christiane
2017-06-01
Spike detection from extracellular recordings is a crucial preprocessing step when analyzing neuronal activity. The decision whether a specific part of the signal is a spike or not is important for any subsequent preprocessing step, like spike sorting or burst detection, in order to reduce the classification of erroneously identified spikes. Many spike detection algorithms have already been suggested, all working reasonably well whenever the signal-to-noise ratio is large enough. When the noise level is high, however, these algorithms perform poorly. In this paper we present two new spike detection algorithms. The first is based on a stationary wavelet energy operator and the second is based on the time-frequency representation of spikes. Both algorithms are more reliable than the most commonly used methods. The performance of the algorithms is confirmed using simulated data resembling original data recorded from cortical neurons with multielectrode arrays. In order to demonstrate that the performance of the algorithms is not restricted to one specific set of data, we also verify the performance using a publicly available simulated data set. We show that both proposed algorithms have the best performance among all tested methods, regardless of the signal-to-noise ratio, in both data sets. This contribution will benefit electrophysiological investigations of human cells; in particular, the spatial and temporal analysis of neural network communications is improved by using the proposed spike detection algorithms.
Sankari, Ziad; Adeli, Hojjat
2011-04-01
A mobile medical device, dubbed HeartSaver, is developed for real-time monitoring of a patient's electrocardiogram (ECG) and automatic detection of several cardiac pathologies, including atrial fibrillation, myocardial infarction and atrio-ventricular block. HeartSaver is based on the adroit integration of four modern technologies in the service of medicine: electronics, wireless communication, computing, and information technology. The physical device consists of four modules: a sensor and ECG processing unit, a microcontroller, a link between the microcontroller and the cell phone, and the mobile software associated with the system. HeartSaver includes automated cardiac pathology detection algorithms. These algorithms are simple enough to be implemented on a low-cost, limited-power microcontroller but powerful enough to detect the relevant cardiac pathologies. When an abnormality is detected, the microcontroller sends a signal to a cell phone. This operation triggers application software on the cell phone that promptly sends a text message transmitting information about the patient's physiological condition and location to a physician or a guardian. HeartSaver can be used by millions of cardiac patients, with the potential to transform cardiac diagnosis, care, and treatment and save thousands of lives.
Vanishing points detection using combination of fast Hough transform and deep learning
NASA Astrophysics Data System (ADS)
Sheshkus, Alexander; Ingacheva, Anastasia; Nikolaev, Dmitry
2018-04-01
In this paper we propose a novel method for vanishing point detection based on a convolutional neural network (CNN) and the fast Hough transform algorithm. We show how to define a fast Hough transform neural network layer and how to use it to increase the usability of the neural network approach for the vanishing point detection task. Our algorithm consists of a CNN with a sequence of convolutional and fast Hough transform layers. We build an estimator for the distribution of possible vanishing points in the image, and this distribution can be used to find vanishing point candidates. We provide experimental results from tests of the suggested method using images collected from videos of road trips. Our approach shows stable results on test images with different projective distortions and noise. The described approach can be efficiently implemented for mobile GPUs and CPUs.
Radiation Detection at Borders for Homeland Security
NASA Astrophysics Data System (ADS)
Kouzes, Richard
2004-05-01
Countries around the world are deploying radiation detection instrumentation to interdict the illegal shipment of radioactive material crossing international borders at land, rail, air, and sea ports of entry. These efforts include deployments in the US and a number of European and Asian countries by governments and international agencies. Items of concern include radiation dispersal devices (RDD), nuclear warheads, and special nuclear material (SNM). Radiation portal monitors (RPMs) are used as the main screening tool for vehicles and cargo at borders, supplemented by handheld detectors, personal radiation detectors, and x-ray imaging systems. Some cargo contains naturally occurring radioactive material (NORM) that triggers "nuisance" alarms in RPMs at these border crossings. Individuals treated with medical radiopharmaceuticals also produce nuisance alarms and can produce cross-talk between adjacent lanes of a multi-lane deployment. The operational impact of nuisance alarms can be significant at border crossings. Methods have been developed for reducing this impact without negatively affecting the requirements for interdiction of radioactive materials of interest. Plastic scintillator material is commonly used in RPMs for the detection of gamma rays from radioactive material, primarily due to the efficiency per unit cost compared to other detection materials. The resolution and lack of full-energy peaks in the plastic scintillator material prohibits detailed spectroscopy. However, the limited spectroscopic information from plastic scintillator can be exploited to provide some discrimination. Energy-based algorithms used in RPMs can effectively exploit the crude energy information available from a plastic scintillator to distinguish some NORM. Whenever NORM cargo limits the level of the alarm threshold, energy-based algorithms produce significantly better detection probabilities for small SNM sources than gross-count algorithms. This presentation discusses experience with RPMs for interdiction of radioactive materials at borders.
Acoustic change detection algorithm using an FM radio
NASA Astrophysics Data System (ADS)
Goldman, Geoffrey H.; Wolfe, Owen
2012-06-01
The U.S. Army is interested in developing low-cost, low-power, non-line-of-sight sensors for monitoring human activity. One modality that is often overlooked is active acoustics using sources of opportunity such as speech or music. Active acoustics can be used to detect human activity by generating acoustic images of an area at different times, then testing for changes among the imagery. A change detection algorithm was developed to detect physical changes in a building, such as a door changing positions or a large box being moved using acoustics sources of opportunity. The algorithm is based on cross correlating the acoustic signal measured from two microphones. The performance of the algorithm was shown using data generated with a hand-held FM radio as a sound source and two microphones. The algorithm could detect a door being opened in a hallway.
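A minimal NumPy sketch of the idea: cross-correlate the two microphone signals at a reference time and again later, and flag a change when the correlation peak drops or shifts; the thresholding is an illustrative assumption:

import numpy as np

def xcorr_peak(a, b):
    """Normalized cross-correlation peak value and its lag in samples."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    c = np.correlate(a, b, mode="full") / len(a)
    return c.max(), c.argmax() - (len(a) - 1)

def changed(ref_pair, new_pair, drop_thresh=0.3):
    """Compare reference-time and later-time microphone pairs."""
    ref_peak, ref_lag = xcorr_peak(*ref_pair)
    new_peak, new_lag = xcorr_peak(*new_pair)
    # A physical change along the acoustic paths lowers or shifts the peak.
    return (ref_peak - new_peak) > drop_thresh or ref_lag != new_lag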
SOFIA: a flexible source finder for 3D spectral line data
NASA Astrophysics Data System (ADS)
Serra, Paolo; Westmeier, Tobias; Giese, Nadine; Jurek, Russell; Flöer, Lars; Popping, Attila; Winkel, Benjamin; van der Hulst, Thijs; Meyer, Martin; Koribalski, Bärbel S.; Staveley-Smith, Lister; Courtois, Hélène
2015-04-01
We introduce SOFIA, a flexible software application for the detection and parametrization of sources in 3D spectral line data sets. SOFIA combines for the first time in a single piece of software a set of new source-finding and parametrization algorithms developed on the way to future H I surveys with ASKAP (WALLABY, DINGO) and APERTIF. It is designed to enable the general use of these new algorithms by the community on a broad range of data sets. The key advantages of SOFIA are the ability to: search for line emission on multiple scales to detect 3D sources in a complete and reliable way, taking into account noise level variations and the presence of artefacts in a data cube; estimate the reliability of individual detections; look for signal in arbitrarily large data cubes using a catalogue of 3D coordinates as a prior; provide a wide range of source parameters and output products which facilitate further analysis by the user. We highlight the modularity of SOFIA, which makes it a flexible package allowing users to select and apply only the algorithms useful for their data and science questions. This modularity makes it also possible to easily expand SOFIA in order to include additional methods as they become available. The full SOFIA distribution, including a dedicated graphical user interface, is publicly available for download.
Tanis, Wilco; Habets, Jesse; van den Brink, Renee B A; Symersky, Petr; Budde, Ricardo P J; Chamuleau, Steven A J
2014-02-01
For acquired mechanical prosthetic heart valve (PHV) obstruction with suspicion of thrombosis, recently updated European Society of Cardiology guidelines advocate the confirmation of thrombus by transthoracic echocardiography, transesophageal echocardiography (TEE), and fluoroscopy. However, no evidence-based diagnostic algorithm is available for correct thrombus detection, although this is clinically important as fibrinolysis is contraindicated in non-thrombotic obstruction (isolated pannus). Here, we performed a review of the literature in order to propose a diagnostic algorithm. We performed a systematic search in PubMed and Embase. Included publications were assessed for methodological quality based on the validated Quality Assessment of Diagnostic Accuracy Studies (QUADAS) II checklist. Studies were scarce (n = 15) and the majority were of moderate methodological quality. In total, 238 mechanical PHVs with acquired obstruction and a reliable reference standard were included for the evaluation of the role of fluoroscopy, echocardiography, or multidetector-row computed tomography (MDCT). In acquired PHV obstruction caused by thrombosis, mass detection by TEE and leaflet restriction detected by fluoroscopy were observed in the majority of cases (96% and 100%, respectively). In contrast, in acquired PHV obstruction free of thrombosis (pannus), leaflet restriction detected by fluoroscopy was absent in some cases (17%) and mass detection by TEE was absent in the majority of cases (66%). In case of mass detection by TEE, predictors of obstructive thrombus masses (compared with pannus masses) were leaflet restriction, soft echo density, and increased mass length. When echocardiography is inconclusive, MDCT may correctly detect pannus/thrombus based on morphological aspects and localization. In acquired mechanical PHV obstruction without leaflet restriction and without a mass on TEE, obstructive PHV thrombosis cannot be confirmed and, consequently, fibrinolysis is not advised. Based on the literature search and our opinion, a diagnostic algorithm is provided to correctly identify non-thrombotic PHV obstruction, which is highly relevant in daily clinical practice.
An Algorithm for Pedestrian Detection in Multispectral Image Sequences
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Fedorenko, V. V.
2017-05-01
The growing interest in self-driving cars creates a demand for scene understanding and obstacle detection algorithms. One of the most challenging problems in this field is pedestrian detection. The main difficulties arise from the diverse appearance of pedestrians. Poor visibility conditions such as fog and low light also significantly decrease the quality of pedestrian detection. This paper presents a new optical-flow-based algorithm, BipedDetect, that provides robust pedestrian detection on a single-board computer. The algorithm is based on the idea of simplified Kalman filtering suitable for realization on modern single-board computers. To detect a pedestrian, a synthetic optical flow of the scene without pedestrians is generated using a slanted-plane model. The estimate of the real optical flow is generated using a multispectral image sequence. The difference between the synthetic and real optical flows yields the optical flow induced by pedestrians. The final detection of pedestrians is done by segmenting this difference. To evaluate the BipedDetect algorithm, a multispectral dataset was collected using a mobile robot.
Infrared small target detection technology based on OpenCV
NASA Astrophysics Data System (ADS)
Liu, Lei; Huang, Zhijian
2013-05-01
Accurate and fast detection of dim infrared (IR) targets is of great importance for infrared precise guidance, early warning, video surveillance, etc. In this paper, the basic principles and implementation flow charts of a series of target detection algorithms are described. These algorithms are the traditional two-frame difference method, an improved three-frame difference method, a method fusing background estimation with frame differencing, and background construction with a neighborhood mean. Building on this work, an infrared target detection software platform developed with OpenCV and MFC is introduced. Three kinds of tracking algorithms are integrated in this software, and its framework and functions are described. Finally, experiments are performed on real-life IR images. The complete processing pipelines and results are analyzed, and the detection algorithms are evaluated both subjectively and objectively. The results show that the proposed method is effective and robust, and its efficiency makes it suitable for real-time detection.
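As an illustration of the frame-differencing family this paper surveys, here is a minimal three-frame difference in OpenCV; the threshold value is an arbitrary assumption, not one tuned in the paper.

```python
import cv2

def three_frame_difference(f1, f2, f3, thresh=25):
    """Classic improved three-frame difference on grayscale frames: a pixel
    is declared 'moving' only if it changes in both consecutive frame pairs,
    which suppresses the ghosting artifact of the two-frame method."""
    d12 = cv2.absdiff(f2, f1)
    d23 = cv2.absdiff(f3, f2)
    _, b12 = cv2.threshold(d12, thresh, 255, cv2.THRESH_BINARY)
    _, b23 = cv2.threshold(d23, thresh, 255, cv2.THRESH_BINARY)
    return cv2.bitwise_and(b12, b23)
```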
Wanting Wang; John J. Qu; Xianjun Hao; Yongqiang Liu; William T. Sommers
2006-01-01
Traditional fire detection algorithms mainly rely on hot spot detection using thermal infrared (TIR) channels with fixed or contextual thresholds. Three solar reflectance channels (0.65 μm, 0.86 μm, and 2.1 μm) were recently adopted into the MODIS version 4 contextual algorithm to improve active fire detection. In the southeastern United...
Securing mobile ad hoc networks using danger theory-based artificial immune algorithm.
Abdelhaq, Maha; Alsaqour, Raed; Abdelhaq, Shawkat
2015-01-01
A mobile ad hoc network (MANET) is a set of mobile, decentralized, and self-organizing nodes that are used in special cases, such as in the military. MANET properties render the environment of this network vulnerable to different types of attacks, including black hole, wormhole and flooding-based attacks. Flooding-based attacks are among the most dangerous, aiming to consume all network resources and thus paralyze the functionality of the whole network. Therefore, the objective of this paper is to investigate the capability of a danger theory-based artificial immune algorithm called the mobile dendritic cell algorithm (MDCA) to detect flooding-based attacks in MANETs. The MDCA applies the dendritic cell algorithm (DCA) to secure the MANET with additional improvements. The MDCA is tested and validated using the Qualnet v7.1 simulation tool. This work also introduces a new simulation module for a flooding attack called the resource consumption attack (RCA) using Qualnet v7.1. The results highlight the high efficiency of the MDCA in detecting RCAs in MANETs.
NASA Astrophysics Data System (ADS)
Kachach, Redouane; Cañas, José María
2016-05-01
Using video in traffic monitoring is one of the most active research domains in the computer vision community. TrafficMonitor, a system that employs a hybrid approach for automatic vehicle tracking and classification on highways using a simple stationary calibrated camera, is presented. The proposed system consists of three modules: vehicle detection, vehicle tracking, and vehicle classification. Moving vehicles are detected by an enhanced Gaussian mixture model background estimation algorithm. The design includes a technique to resolve occlusions by combining a two-dimensional proximity tracking algorithm with the Kanade-Lucas-Tomasi (KLT) feature tracker. The last module classifies the detected shapes into five vehicle categories (motorcycle, car, van, bus, and truck) using three-dimensional templates and an algorithm based on histogram of oriented gradients features and a support vector machine classifier. Several experiments have been performed using both real and simulated traffic in order to validate the system. The experiments were conducted on the GRAM-RTM dataset and on a new real-world video dataset that is made publicly available as part of this work.
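The enhanced GMM background model itself is not public, but OpenCV's stock MOG2 subtractor sketches the detection stage of such a pipeline; the input file name and all parameter values below are illustrative assumptions.

```python
import cv2

# OpenCV's built-in Gaussian mixture background model (MOG2) stands in for
# the paper's enhanced GMM variant.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)

cap = cv2.VideoCapture("highway.mp4")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = subtractor.apply(frame)                  # 255 = foreground, 127 = shadow
    fg = cv2.medianBlur(fg, 5)                    # remove speckle noise
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    vehicles = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 400]
```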
NASA Astrophysics Data System (ADS)
Chandra, Malavika; Scheiman, James; Simeone, Diane; McKenna, Barbara; Purdy, Julianne; Mycek, Mary-Ann
2010-01-01
Pancreatic adenocarcinoma is one of the leading causes of cancer death, in part because of the inability of current diagnostic methods to reliably detect early-stage disease. We present the first assessment of the diagnostic accuracy of algorithms developed for pancreatic tissue classification using data from fiber optic probe-based bimodal optical spectroscopy, a real-time approach that would be compatible with minimally invasive diagnostic procedures for early cancer detection in the pancreas. A total of 96 fluorescence and 96 reflectance spectra are considered from 50 freshly excised tissue sites (including human pancreatic adenocarcinoma, chronic pancreatitis (inflammation), and normal tissue) in nine patients. Classification algorithms using linear discriminant analysis are developed to distinguish among tissues, and leave-one-out cross-validation is employed to assess the classifiers' performance. The spectral areas and ratios classifier (SpARC) algorithm employs a combination of reflectance and fluorescence data and has the best performance, with sensitivity, specificity, negative predictive value, and positive predictive value for correctly identifying adenocarcinoma of 85, 89, 92, and 80%, respectively.
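A minimal sketch of the evaluation protocol described above (linear discriminant analysis with leave-one-out cross-validation) using scikit-learn; the feature files, label encoding, and class indices are hypothetical placeholders, not the study's data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# X: rows of spectral features (e.g., areas and ratios from reflectance and
# fluorescence spectra); y: tissue labels. Both file names are placeholders.
X = np.load("spectral_features.npy")
y = np.load("tissue_labels.npy")   # 0 = normal, 1 = pancreatitis, 2 = adenocarcinoma

lda = LinearDiscriminantAnalysis()
y_pred = cross_val_predict(lda, X, y, cv=LeaveOneOut())   # one held-out site per fold

is_cancer, pred_cancer = (y == 2), (y_pred == 2)
sensitivity = (pred_cancer & is_cancer).sum() / is_cancer.sum()
specificity = (~pred_cancer & ~is_cancer).sum() / (~is_cancer).sum()
```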
Eggerth, Alphons; Modre-Osprian, Robert; Hayn, Dieter; Kastner, Peter; Pölzl, Gerhard; Schreier, Günter
2017-01-01
Automatic event detection is used in telemedicine-based heart failure (HF) disease management programs, supporting physicians and nurses in monitoring patients' health data. We analyse the performance of automatic event detection algorithms for the prediction of HF-related hospitalisations or diuretic dose increases. The Rule-of-Thumb (RoT) and Moving Average Convergence Divergence (MACD) algorithms were applied to body weight data from 106 heart failure patients of the HerzMobil-Tirol disease management program. The evaluation criteria were based on the Youden index and ROC curves. Analysis of data from 1460 monitoring weeks with 54 events showed a maximum Youden index of 0.19 for MACD and RoT at a specificity > 0.90. Comparison of the two algorithms on real-world monitoring data showed similar results regarding total and limited AUC. An improvement in sensitivity might be possible by including additional health data (e.g., vital signs and self-reported well-being), because body weight variations are evidently not the only precursor of HF-related hospitalisations or diuretic dose increases.
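To make the MACD idea concrete for telemonitoring data, here is a sketch applying it to a daily body-weight series with pandas; the span parameters are conventional MACD defaults, not the values tuned in the study.

```python
import pandas as pd

def macd_alerts(weight: pd.Series, fast=12, slow=26, signal=9):
    """MACD applied to a daily body-weight series: raise an alert when the
    MACD line crosses above its signal line, suggesting sustained weight gain."""
    ema_fast = weight.ewm(span=fast, adjust=False).mean()
    ema_slow = weight.ewm(span=slow, adjust=False).mean()
    macd = ema_fast - ema_slow
    signal_line = macd.ewm(span=signal, adjust=False).mean()
    # Upward crossing: above the signal line today, at or below it yesterday.
    crossed = (macd > signal_line) & (macd.shift(1) <= signal_line.shift(1))
    return weight.index[crossed]   # dates on which an alert would fire
```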
Spreco, A; Eriksson, O; Dahlström, Ö; Timpka, T
2017-07-01
Methods for the detection of influenza epidemics and prediction of their progress have seldom been comparatively evaluated using prospective designs. This study aimed to perform a prospective comparative trial of algorithms for the detection and prediction of increased local influenza activity. Data on clinical influenza diagnoses recorded by physicians and syndromic data from a telenursing service were used. Five detection and three prediction algorithms previously evaluated in public health settings were calibrated and then evaluated over 3 years. When applied to diagnostic data, only detection using the Serfling regression method and prediction using the non-adaptive log-linear regression method showed acceptable performance during winter influenza seasons. For the syndromic data, none of the detection algorithms displayed satisfactory performance, while non-adaptive log-linear regression was the best performing prediction method. We conclude that the available algorithms for influenza detection and prediction display satisfactory performance when applied to local diagnostic data during winter influenza seasons. When applied to local syndromic data, the evaluated algorithms did not display consistent performance. Further evaluations and research on combining methods of these types in public health information infrastructures for 'nowcasting' (integrated detection and prediction) of influenza activity are warranted.
Image classification of unlabeled malaria parasites in red blood cells.
Zheng Zhang; Ong, L L Sharon; Kong Fang; Matthew, Athul; Dauwels, Justin; Ming Dao; Asada, Harry
2016-08-01
This paper presents a method to detect unlabeled malaria parasites in red blood cells. The current "gold standard" for malaria diagnosis is microscopic examination of thick blood smears, a time-consuming process requiring extensive training. Our goal is to develop an automated process to identify malaria-infected red blood cells. Major issues in automated analysis of microscopy images of unstained blood smears include overlapping cells and oddly shaped cells. Our approach creates robust templates to detect infected and uninfected red cells. Histogram of Oriented Gradients (HOG) features are extracted from templates and used to train a classifier offline. Next, the Viola-Jones object detection framework is applied to detect infected and uninfected red cells and the image background. Results show our approach outperforms classification approaches with PCA features by 50% and cell detection algorithms applying Hough transforms by 24%. The majority of related work is designed to automatically detect stained parasites in blood smears where the cells are fixed. Although it is more challenging to design algorithms for unstained parasites, our methods will allow analysis of parasite progression in live cells under different drug treatments.
Using Gaussian mixture models to detect and classify dolphin whistles and pulses.
Peso Parada, Pablo; Cardenal-López, Antonio
2014-06-01
In recent years, a number of automatic detection systems for free-ranging cetaceans have been proposed that aim to detect not just surfaced, but also submerged, individuals. These systems are typically based on pattern-recognition techniques applied to underwater acoustic recordings. Using a Gaussian mixture model, a classification system was developed that detects sounds in recordings and classifies them as one of four types: background noise, whistles, pulses, and combined whistles and pulses. The classifier was tested using a database of underwater recordings made off the Spanish coast during 2011. Using cepstral-coefficient-based parameterization, a sound detection rate of 87.5% was achieved for a 23.6% classification error rate. To improve these results, two parameters computed using the multiple signal classification algorithm and an unpredictability measure were included in the classifier. These parameters, which helped to classify the segments containing whistles, increased the detection rate to 90.3% and reduced the classification error rate to 18.1%. Finally, the potential of the multiple signal classification algorithm and unpredictability measure for estimating whistle contours and classifying cetacean species was also explored, with promising results.
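A compact sketch of the GMM classification scheme with scikit-learn: one mixture per sound class, fitted on cepstral-coefficient feature vectors, with segments assigned by maximum total log-likelihood. The class names, component count, and covariance type are assumptions for illustration, not the paper's configuration.

```python
from sklearn.mixture import GaussianMixture

CLASSES = ["noise", "whistle", "pulse", "whistle+pulse"]

def train_gmms(features_by_class, n_components=8):
    """Fit one GMM per sound class on cepstral feature vectors
    (each value in features_by_class has shape (n_frames, n_ceps))."""
    models = {}
    for name, feats in features_by_class.items():
        gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
        models[name] = gmm.fit(feats)
    return models

def classify(models, segment_feats):
    """Assign a segment to the class whose GMM gives the highest total
    log-likelihood over the segment's frames."""
    scores = {name: m.score_samples(segment_feats).sum() for name, m in models.items()}
    return max(scores, key=scores.get)
```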
Breadth-First Search-Based Single-Phase Algorithms for Bridge Detection in Wireless Sensor Networks
Akram, Vahid Khalilpour; Dagdeviren, Orhan
2013-01-01
Wireless sensor networks (WSNs) are promising technologies for exploring harsh environments, such as oceans, wild forests, volcanic regions and outer space. Since sensor nodes may have limited transmission range, application packets may be transmitted by multi-hop communication. Thus, connectivity is a very important issue. A bridge is a critical edge whose removal breaks the connectivity of the network. Hence, it is crucial to detect bridges and take preventive action. Since sensor nodes are battery-powered, services running on nodes should consume little energy. In this paper, we propose energy-efficient and distributed bridge detection algorithms for WSNs. Our algorithms run in a single phase and are integrated with the Breadth-First Search (BFS) algorithm, which is a popular routing algorithm. Our first algorithm is an extended version of Milic's algorithm, designed to reduce the message length. Our second algorithm is novel and uses ancestral knowledge to detect bridges. We explain the operation of the algorithms, prove their correctness, and analyze their message, time, space and computational complexities. To evaluate practical importance, we provide testbed experiments and extensive simulations. We show that our proposed algorithms consume fewer resources, with energy savings of up to 5.5 times. PMID:23845930
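For context, the sketch below shows the classical centralized DFS-based bridge-finding algorithm (Tarjan's low-link test). The paper's contribution is distributed, single-phase, BFS-integrated variants of this idea; the sketch only makes the notion of a bridge concrete and does not reproduce those algorithms.

```python
def find_bridges(adj):
    """Classical DFS bridge finding (Tarjan): an edge (u, v) is a bridge if
    no back edge from v's subtree reaches u or above. adj: {node: [neighbors]}."""
    disc, low, bridges = {}, {}, []
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge to an ancestor
                low[u] = min(low[u], disc[v])
            else:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:           # nothing in v's subtree bypasses (u, v)
                    bridges.append((u, v))

    for node in adj:
        if node not in disc:
            dfs(node, None)
    return bridges
```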
Data Driven, Force Based Interaction for Quadrotors
NASA Astrophysics Data System (ADS)
McKinnon, Christopher D.
Quadrotors are small and agile, and are becoming more capable for their compact size. They are expected to perform a wide variety of tasks including inspection, physical interaction, and formation flight. In all of these tasks, quadrotors can come into close proximity with infrastructure or other quadrotors, and may experience significant external forces and torques. Reacting properly in each case is essential to completing the task safely and effectively. In this thesis, we develop an algorithm, based on the Unscented Kalman Filter, to estimate such forces and torques without making assumptions about their source. We then show in experiments how the proposed estimation algorithm can be used in conjunction with control and machine learning to choose appropriate actions in a wide variety of tasks, including detecting downwash, tracking the wind induced by a fan, and detecting proximity to a wall.
Video fingerprinting for copy identification: from research to industry applications
NASA Astrophysics Data System (ADS)
Lu, Jian
2009-02-01
Research that began a decade ago in video copy detection has developed into a technology known as "video fingerprinting". Today, video fingerprinting is an essential and enabling tool adopted by the industry for video content identification and management in online video distribution. This paper provides a comprehensive review of video fingerprinting technology and its applications in identifying, tracking, and managing copyrighted content on the Internet. The review includes a survey on video fingerprinting algorithms and some fundamental design considerations, such as robustness, discriminability, and compactness. It also discusses fingerprint matching algorithms, including complexity analysis, and approximation and optimization for fast fingerprint matching. On the application side, it provides an overview of a number of industry-driven applications that rely on video fingerprinting. Examples are given based on real-world systems and workflows to demonstrate applications in detecting and managing copyrighted content, and in monitoring and tracking video distribution on the Internet.
NASA Astrophysics Data System (ADS)
Chen, Xinjia; Lacy, Fred; Carriere, Patrick
2015-05-01
Sequential test algorithms are playing increasingly important roles in the quick detection of network intrusions such as port scanners. Because such algorithms are usually analyzed using intuitive approximations or asymptotic analysis, we develop an exact computational method for their performance analysis. Our method can be used to calculate the probability of false alarm and the average detection time to arbitrary pre-specified accuracy.
A robust human face detection algorithm
NASA Astrophysics Data System (ADS)
Raviteja, Thaluru; Karanam, Srikrishna; Yeduguru, Dinesh Reddy V.
2012-01-01
Human face detection plays a vital role in many applications, such as video surveillance, managing face image databases, and human-computer interfaces. This paper proposes a robust algorithm for face detection in still color images that works well even in a crowded environment. The algorithm uses a conjunction of skin color histograms, morphological processing, and geometrical analysis to detect human faces. To reinforce the accuracy of face detection, we further identify mouth and eye regions to establish the presence or absence of a face in a particular region of interest.
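A minimal sketch of a skin-color-plus-geometry pipeline in OpenCV. The YCrCb skin range is a commonly quoted heuristic, and the area and aspect-ratio limits are assumptions, not the paper's tuned values; a full system would additionally verify eye and mouth regions inside each candidate.

```python
import cv2

def face_candidates(bgr):
    """Skin segmentation in YCrCb, morphological cleanup, then geometric
    filtering of the resulting blobs."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # heuristic Cr/Cb range
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    skin = cv2.morphologyEx(skin, cv2.MORPH_OPEN, kernel)
    skin = cv2.morphologyEx(skin, cv2.MORPH_CLOSE, kernel)

    faces = []
    contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        aspect = h / float(w)
        if cv2.contourArea(c) > 1000 and 0.8 < aspect < 2.0:   # face-like blob
            faces.append((x, y, w, h))
    return faces
```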
Lee, Junghoon; Lee, Joosung; Song, Sangha; Lee, Hyunsook; Lee, Kyoungjoung; Yoon, Youngro
2008-01-01
Automatic detection of suspicious pain regions is very useful in medical digital infrared thermal imaging research. To detect those regions, we use the SOFES (Survival Of the Fittest kind of the Evolution Strategy) algorithm, which is one of the multimodal function optimization methods. We apply this algorithm to common diseases, such as the diabetic foot, degenerative arthritis, and varicose veins. The SOFES algorithm is able to detect hot spots and warm lines such as veins, and across one hundred trials it converged quickly.
An improved algorithm for wildfire detection
NASA Astrophysics Data System (ADS)
Nakau, K.
2010-12-01
Satellite information on wildfire locations is in strong demand from society, so understanding these demands is important when considering how to improve wildfire detection algorithms. Interviews and subsequent analysis suggest that the most important improvements are the geographical resolution of the wildfire product and the classification of fire as smoldering or flaming. Discussions were held with fire service agencies in Alaska and fire service volunteer groups in Indonesia. The Alaska Fire Service (AFS) produces a 3D map overlaid with fire locations every morning, which leaders of fire service teams examine to decide their firefighting strategy. In particular, firefighters of both agencies seek the best walking path to approach a fire. Because of the mountainous landscape, geospatial resolution is critical for them: walking through bush for 1 km, the size of one pixel of the fire product, is very demanding. In the case of remote wildfires, fire service agencies also use satellite information to decide when to fly an observation mission to confirm fire status: expanding, flaming, smoldering, or out. It is therefore also important to provide a classification of fire as flaming or smoldering. Beyond disaster management, wildfire emits a huge amount of carbon into the atmosphere, as much as one quarter to one half of the CO2 from fuel combustion (IPCC AR4), so reducing CO2 emissions from human-caused wildfire matters, and estimating carbon emissions from wildfire again requires high spatial resolution. To improve the sensitivity of wildfire detection, the author adopts radiance-based wildfire detection. Unlike the existing brightness-temperature approach, this makes it easy to account for the reflectance of the background land cover. For GCOM-C1/SGLI in particular, the band used to detect fire at 250 m resolution is at 1.6 μm wavelength, where sunlight reflection is much stronger, so a way to cancel sunlight reflection is needed. In this study, the author uses a simple linear correction to estimate infrared emission in the presence of sunlight reflection. In addition to this brand-new core of the wildfire algorithm, bright reflective surfaces, including cloud, desert, and sun glint, must be eliminated, as must false alarms in coastal areas caused by the difference in surface temperature between land and ocean. The existing MOD14 algorithm has similar procedures; however, some of these ancillary parts are newly introduced or improved here. A snow mask is newly introduced to reduce bright reflectance over snow- and ice-covered areas, and the improved ancillary parts include candidate selection of fire pixels, the cloud mask, and the water body mask. With these improvements, wildfires with dense smoke or under thin cloud can be detected. This wildfire product has not yet been validated against ground observations; however, its distribution corresponds well with wildfire locations in the same periods. Unfortunately, the algorithm still produces false alarms in urban areas, as the existing one does, which should be corrected using other bands. The current algorithm will be run on the JASMES website.
Biased normalized cuts for target detection in hyperspectral imagery
NASA Astrophysics Data System (ADS)
Zhang, Xuewen; Dorado-Munoz, Leidy P.; Messinger, David W.; Cahill, Nathan D.
2016-05-01
The Biased Normalized Cuts (BNC) algorithm is a useful technique for detecting targets or objects in RGB imagery. In this paper, we propose modifying BNC for the purpose of target detection in hyperspectral imagery. As opposed to other target detection algorithms that typically encode target information prior to dimensionality reduction, our proposed algorithm encodes target information after dimensionality reduction, enabling a user to detect different targets in interactive mode. To assess the proposed BNC algorithm, we utilize hyperspectral imagery (HSI) from the SHARE 2012 data campaign, and we explore the relationship between the number and the position of expert-provided target labels and the precision/recall of the remaining targets in the scene.
Jung, Jaehoon; Yoon, Inhye; Paik, Joonki
2016-01-01
This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect object occlusions, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors, using only a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems. PMID:27347978
Sokoll, Stefan; Tönnies, Klaus; Heine, Martin
2012-01-01
In this paper we present an algorithm for the detection of spontaneous activity at individual synapses in microscopy images. By employing the optical marker pHluorin, we are able to visualize synaptic vesicle release with a spatial resolution in the nm range in a non-invasive manner. We compute individual synaptic signals from automatically segmented regions of interest and detect peaks that represent synaptic activity using a continuous wavelet transform based algorithm. As opposed to standard peak detection algorithms, we employ multiple wavelets to match all relevant features of the peak. We evaluate our multiple wavelet algorithm (MWA) on real data and assess the performance on synthetic data over a wide range of signal-to-noise ratios.
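SciPy ships a CWT-based peak detector in the same spirit (though it correlates with a single ricker wavelet family across widths, rather than the paper's multiple wavelets). The file name, width range, and SNR threshold below are assumptions for illustration.

```python
import numpy as np
from scipy.signal import find_peaks_cwt

# signal: a background-subtracted pHluorin fluorescence trace for one
# segmented ROI (hypothetical file). find_peaks_cwt keeps ridge lines that
# persist across wavelet scales, which is the standard SciPy analogue of
# CWT-based peak detection.
signal = np.load("roi_trace.npy")
widths = np.arange(2, 20)              # candidate event durations, in samples
peak_indices = find_peaks_cwt(signal, widths, min_snr=2.0)
```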
A service relation model for web-based land cover change detection
NASA Astrophysics Data System (ADS)
Xing, Huaqiao; Chen, Jun; Wu, Hao; Zhang, Jun; Li, Songnian; Liu, Boyu
2017-10-01
Change detection with remotely sensed imagery is a critical step in land cover monitoring and updating. Although a variety of algorithms and models have been developed, none of them is universal for all cases. The selection of appropriate algorithms and the construction of processing workflows depend largely on expert knowledge of the "algorithm-data" relations between change detection algorithms and the imagery data used. This paper presents a service relation model for land cover change detection that integrates experts' knowledge about these "algorithm-data" relations into web-based geo-processing. The "algorithm-data" relations are mapped into a set of web service relations through the analysis of functional and non-functional service semantics. These service relations are further classified into three levels: interface, behavior and execution. A service relation model is then established using the Object and Relation Diagram (ORD) approach to represent the multi-granularity services and their relations for change detection. A set of semantic matching rules is built and used for deriving on-demand change detection service chains from the service relation model. A web-based prototype system is developed in the .NET development environment, which encapsulates nine change detection and pre-processing algorithms and represents their service relations as an ORD. Three test areas in Shandong and Hebei provinces, China, with different imagery conditions were selected for online change detection experiments, and the results indicate that on-demand service chains can be generated according to different users' demands.
Árbol, Javier Rodríguez; Perakakis, Pandelis; Garrido, Alba; Mata, José Luis; Fernández-Santaella, M Carmen; Vila, Jaime
2017-03-01
The preejection period (PEP) is an index of left ventricle contractility widely used in psychophysiological research. Its computation requires detecting the moment when the aortic valve opens, which coincides with the B point in the first derivative of the impedance cardiogram (ICG). Although this operation has traditionally been performed via visual inspection, several algorithms based on derivative calculations have been developed to automate the task. However, despite their popularity, data about their empirical validation are not always available. The present study analyzes the performance in estimating the aortic valve opening of three popular algorithms, comparing them against the visual detection of the B point made by two independent scorers. Algorithm 1 is based on the first derivative of the ICG, Algorithm 2 on the second derivative, and Algorithm 3 on the third derivative. Algorithm 3 showed the highest accuracy rate (78.77%), followed by Algorithm 1 (24.57%) and Algorithm 2 (13.82%). In the automatic computation of PEP, Algorithm 2 resulted in significantly more missed cycles (48.57%) than Algorithm 1 (6.3%) and Algorithm 3 (3.5%). Algorithm 2 also estimated a significantly lower average PEP (70 ms), compared with the values obtained by Algorithm 1 (119 ms) and Algorithm 3 (113 ms). Our findings indicate that the algorithm based on the third derivative of the ICG performs significantly better. Nevertheless, visual inspection of the signal remains indispensable, and this article provides a novel visual guide to facilitate the manual detection of the B point. © 2016 Society for Psychophysiological Research.
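A sketch of the best-performing idea, locating the B point from a third-derivative peak of the dZ/dt signal after the R peak. The 150 ms search window and the simple argmax rule are simplifying assumptions, not the exact published procedure.

```python
import numpy as np

def detect_b_point(dzdt, r_peak, fs, search_ms=150):
    """Locate the B point as the largest third-derivative peak of dZ/dt in a
    fixed window after the R peak (window length is an assumption)."""
    window = int(search_ms * fs / 1000)
    segment = dzdt[r_peak:r_peak + window]
    d3 = np.gradient(np.gradient(np.gradient(segment)))   # third derivative
    return r_peak + int(np.argmax(d3))

# PEP is then the R-peak-to-B-point interval:
# pep_ms = (detect_b_point(dzdt, r, fs) - r) / fs * 1000
```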
An Improved Vision-based Algorithm for Unmanned Aerial Vehicles Autonomous Landing
NASA Astrophysics Data System (ADS)
Zhao, Yunji; Pei, Hailong
In a vision-based autonomous landing system for UAVs, the efficiency of target detection and tracking directly affects the control system. An improved SURF (Speeded-Up Robust Features) algorithm resolves the inefficiency of the standard SURF algorithm in the autonomous landing system. The improved algorithm is composed of three steps: first, detect the target region using Camshift; second, detect feature points within the acquired region using the SURF algorithm; third, match the target template against the target region in each frame. Experimental results and theoretical analysis confirm the efficiency of the algorithm.
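A rough sketch of the detect-then-match steps. ORB stands in for SURF here because SURF is non-free in many OpenCV builds, and the ROI is assumed to come from a Camshift tracker, so this is an analogue of the method rather than the authors' code.

```python
import cv2

def detect_target(frame, template, roi):
    """Match features only inside the tracking window. template is a
    grayscale image of the landing target; roi is the (x, y, w, h) window
    produced by a Camshift tracker."""
    x, y, w, h = roi
    gray = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(nfeatures=500)
    kp_t, des_t = orb.detectAndCompute(template, None)
    kp_r, des_r = orb.detectAndCompute(gray, None)
    if des_t is None or des_r is None:
        return []

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_t, des_r), key=lambda m: m.distance)
    return matches[:50]   # best correspondences for position estimation
```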
Spatial and Temporal Varying Thresholds for Cloud Detection in Satellite Imagery
NASA Technical Reports Server (NTRS)
Jedlovec, Gary; Haines, Stephanie
2007-01-01
A new cloud detection technique has been developed and applied to both geostationary and polar orbiting satellite imagery having channels in the thermal infrared and shortwave infrared spectral regions. The bispectral composite threshold (BCT) technique uses only the 11 micron and 3.9 micron channels, and composite imagery generated from these channels, in a four-step cloud detection procedure to produce a binary cloud mask at single pixel resolution. A unique aspect of this algorithm is the use of 20-day composites of the 11 micron and the 11 - 3.9 micron channel difference imagery to represent spatially and temporally varying clear-sky thresholds for the bispectral cloud tests. The BCT cloud detection algorithm has been applied to GOES and MODIS data over the continental United States over the last three years with good success. The resulting products have been validated against "truth" datasets (generated by manual determination of the sky conditions from available satellite imagery) for various seasons during the 2003-2005 period. The day and night algorithm has been shown to determine the correct sky conditions 80-90% of the time (on average) over land and ocean areas. Only a small variation in algorithm performance occurs between day and night, land and ocean, and between seasons. The algorithm performs least well during the winter season, with only 80% of the sky conditions determined correctly. The algorithm was found to under-detect clouds at night and during times of low sun angle (in geostationary satellite data), and tends to over-detect clouds during the day, particularly in summer. Since the spectral tests use only the short- and long-wave channels common to most multispectral scanners, the application of the BCT technique to a variety of satellite sensors, including SEVIRI, should be straightforward and produce similar performance results.
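A simplified sketch of the bispectral threshold tests in NumPy. The fixed offsets stand in for the spatially and temporally varying thresholds derived from the 20-day clear-sky composites, so treat them as placeholders rather than the operational values.

```python
import numpy as np

def bct_cloud_mask(t11, t39, t11_clear, diff_clear, dt11=4.0, ddiff=4.0):
    """Simplified bispectral composite threshold test: compare the 11 um
    brightness temperature and the 11 - 3.9 um difference against clear-sky
    composite fields of the same shape. Offsets are illustrative."""
    cold_test = t11 < (t11_clear - dt11)                 # colder than clear sky
    diff = t11 - t39
    diff_test = np.abs(diff - diff_clear) > ddiff        # anomalous band difference
    return cold_test | diff_test                         # True = cloudy pixel
```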
Cao, Jianfang; Chen, Lichao; Wang, Min; Tian, Yun
2018-01-01
The Canny operator is widely used to detect edges in images. However, as the size of the image dataset increases, the edge detection performance of the Canny operator decreases and its runtime becomes excessive. To improve the runtime and edge detection performance of the Canny operator, in this paper, we propose a parallel design and implementation of an Otsu-optimized Canny operator using the MapReduce parallel programming model running on the Hadoop platform. The Otsu algorithm is used to optimize the Canny operator's dual threshold and improve edge detection performance, while the MapReduce parallel programming model facilitates parallel processing of the Canny operator, addressing the processing speed and communication cost problems that occur when Canny edge detection is applied to big data. For the experiments, we constructed datasets of different scales from the Pascal VOC2012 image database. The proposed parallel Otsu-Canny edge detection algorithm performs better than other traditional edge detection algorithms. The parallel approach reduced the running time by approximately 67.2% on a Hadoop cluster architecture consisting of 5 nodes with a dataset of 60,000 images. Overall, our approach speeds up processing by approximately 3.4 times on large-scale datasets, demonstrating the clear superiority of our method. The proposed algorithm demonstrates both better edge detection performance and improved runtime.
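The Otsu-Canny coupling (without the MapReduce parallelization) can be sketched in a few lines of OpenCV; using half the Otsu threshold as the lower hysteresis bound is a common convention assumed here, not necessarily the paper's exact rule.

```python
import cv2

def otsu_canny(gray):
    """Use Otsu's global threshold to set Canny's dual hysteresis
    thresholds automatically on a grayscale image."""
    otsu_thresh, _ = cv2.threshold(gray, 0, 255,
                                   cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.Canny(gray, 0.5 * otsu_thresh, otsu_thresh)
```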
Algorithm for Identifying Erroneous Rain-Gauge Readings
NASA Technical Reports Server (NTRS)
Rickman, Doug
2005-01-01
An algorithm analyzes rain-gauge data to identify statistical outliers that could be deemed to be erroneous readings. Heretofore, analyses of this type have been performed in burdensome manual procedures that have involved subjective judgements. Sometimes, the analyses have included computational assistance for detecting values falling outside of arbitrary limits. The analyses have been performed without statistically valid knowledge of the spatial and temporal variations of precipitation within rain events. In contrast, the present algorithm makes it possible to automate such an analysis, makes the analysis objective, takes account of the spatial distribution of rain gauges in conjunction with the statistical nature of spatial variations in rainfall readings, and minimizes the use of arbitrary criteria. The algorithm implements an iterative process that involves nonparametric statistics.
Detection and classification of concealed weapons using a magnetometer-based portal
NASA Astrophysics Data System (ADS)
Kotter, Dale K.; Roybal, Lyle G.; Polk, Robert E.
2002-08-01
A concealed weapons detection technology was developed with the support of the National Institute of Justice (NIJ) to provide a nonintrusive means for rapid detection, location, and archiving of data (including visual data) on potential suspects and weapon threats. This technology, developed by the Idaho National Engineering and Environmental Laboratory (INEEL), has been applied in a portal-style weapons detection system using passive magnetic sensors as its basis. This paper reports on enhancements to the weapon detection system that enable weapon classification and discriminate threats from non-threats. Advanced signal processing algorithms were used to analyze the magnetic spectrum generated when a person passes through a portal. These algorithms analyzed multiple variables, including variance in the magnetic signature from random weapon placement and/or orientation. They perform pattern recognition and calculate the probability that the collected magnetic signature correlates with a known database of weapon versus non-weapon responses. Neural networks were used to further discriminate weapon type and identify controlled electronic items such as cell phones and pagers. False alarms were further reduced by analyzing the magnetic detector response with a joint time-frequency analysis digital signal processing technique, from which the frequency components and power spectrum of a given sensor response were derived. This unique fingerprint provided additional information to aid signal analysis. This technology has the potential to produce major improvements in weapon detection and classification.
Leveraging disjoint communities for detecting overlapping community structure
NASA Astrophysics Data System (ADS)
Chakraborty, Tanmoy
2015-05-01
Network communities represent mesoscopic structure for understanding the organization of real-world networks, where nodes often belong to multiple communities and form overlapping community structure. Because finding the exact boundaries of such overlapping communities is non-trivial, this problem has become challenging, and huge effort has been devoted to detecting overlapping communities in networks. In this paper, we present PVOC (Permanence based Vertex-replication algorithm for Overlapping Community detection), a two-stage framework to detect overlapping community structure. We build on a novel observation that the non-overlapping community structure detected by a standard disjoint community detection algorithm closely resembles the actual overlapping community structure, except in the overlapping part. Based on this observation, we posit that there is perhaps no need to build yet another overlapping community finding algorithm; instead, one can efficiently post-process the output of any existing disjoint community finding algorithm to obtain the required overlapping structure. We propose a new post-processing technique that, combined with any existing disjoint community detection algorithm, processes each vertex using a new vertex-based metric, called permanence, and thereby finds overlapping candidates with their community memberships. Experimental results on both synthetic and large real-world networks show that PVOC significantly outperforms six state-of-the-art overlapping community detection algorithms in terms of the similarity of the output to the ground-truth structure. Thus our framework not only finds meaningful overlapping communities, but also allows us to put an end to the constant effort of building yet another overlapping community detection algorithm.
NASA Astrophysics Data System (ADS)
Lee, Sangkyu
Illicit trafficking and smuggling of radioactive materials and special nuclear materials (SNM) are considered among the most important recent global nuclear threats. Monitoring the transport and safety of radioisotopes and SNM is challenging due to their weak signals and easy shielding. Great effort worldwide is focused on developing and improving detection technologies and algorithms for accurate and reliable detection of radioisotopes of interest, thereby better securing borders against nuclear threats. In general, radiation portal monitors enable detection of gamma- and neutron-emitting radioisotopes. Passive and active interrogation techniques, both existing and under development, aim at increasing accuracy and reliability while shortening interrogation time and reducing equipment cost. Equally important efforts are aimed at advancing algorithms that process imaging data efficiently, providing reliable "readings" of the interiors of examined volumes of various sizes, ranging from cargos to suitcases. The main objective of this thesis is to develop two synergistic algorithms with the goal of providing highly reliable, low-noise identification of radioisotope signatures. These algorithms combine analysis of a passive radioactive detection technique with active interrogation imaging techniques such as gamma radiography or muon tomography. One algorithm combines gamma spectroscopy and cosmic muon tomography, and the other is based on gamma spectroscopy and gamma radiography. The purpose of fusing two detection methodologies per algorithm is to find both heavy-Z radioisotopes and shielding materials, since radionuclides can be identified with gamma spectroscopy, and shielding materials can be detected using muon tomography or gamma radiography. These combined algorithms are created and analyzed based on numerically generated images of various cargo sizes and materials. In summary, the three detection methodologies are fused into two algorithms with mathematical functions providing reliable identification of radioisotopes in gamma spectroscopy, noise reduction and precision enhancement in muon tomography, and atomic number and density estimation in gamma radiography. It is expected that these new algorithms may be implemented in portal scanning systems to enhance the accuracy and reliability of detecting nuclear materials inside cargo containers.
Searching Information Sources in Networks
2017-06-14
During the course of this project, we made significant progress in multiple directions of information source detection, including (1) results on information source detection in non-tree networks, and (2) the development of information source localization algorithms to detect multiple information sources. The algorithms have provable performance guarantees and outperform existing algorithms.
Detecting Statistically Significant Communities of Triangle Motifs in Undirected Networks
2015-03-16
...moderately-sized networks. As a consequence, throughout this effort, a simulated annealing (SA) algorithm will be employed to effectively search the ... then increment k by 1 and repeat the search to find z*_3. One can continue to increment k until W < z_δ, at which point the algorithm will stop.
NASA Astrophysics Data System (ADS)
Gui, Chun; Zhang, Ruisheng; Zhao, Zhili; Wei, Jiaxuan; Hu, Rongjing
To deal with stochasticity in center node selection and instability in the community detection of the label propagation algorithm, this paper proposes an improved label propagation algorithm, named label propagation algorithm based on community belonging degree (LPA-CBD), that employs community belonging degree to determine the number of communities and their centers. The general process of LPA-CBD is that initial communities are identified by the nodes with maximum degree and then optimized or expanded using community belonging degree. After obtaining the rough community structure of the network, the remaining nodes are labeled using label propagation. The experimental results on 10 real-world networks and three synthetic networks show that LPA-CBD achieves a reasonable number of communities, better accuracy, and higher modularity compared with four other prominent algorithms. Moreover, the proposed algorithm not only has lower algorithmic complexity and higher community detection quality, but also improves the stability of the original label propagation algorithm.
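For comparison, NetworkX provides the baseline (unseeded) label propagation. LPA-CBD's community-belonging-degree seeding is not available in the library, so this sketch only reproduces the final propagation stage on a standard benchmark graph.

```python
import networkx as nx
from networkx.algorithms.community import label_propagation_communities

# Plain label propagation on Zachary's karate club network; in LPA-CBD the
# initial labels would instead be seeded from high-degree community centers.
G = nx.karate_club_graph()
communities = list(label_propagation_communities(G))
print(len(communities), [sorted(c) for c in communities])
```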
A novel algorithm for notch detection
NASA Astrophysics Data System (ADS)
Acosta, C.; Salazar, D.; Morales, D.
2013-06-01
It is common knowledge that DFM guidelines require revisions to design data. These guidelines impose the need for corrections inserted into areas within the design data flow. At times, this requires rather drastic modifications to the data, both during the layer derivation or DRC phase and especially within the RET phase, for example during OPC. During such data transformations, several polygon geometry changes are introduced, which can substantially increase shot count and geometry complexity, and complicate conversion to mask writer machine formats. In the resulting complex data, notches may be found that do not significantly contribute to the final manufacturing results, but do contribute to the complexity of the surrounding geometry, and are therefore undesirable. Additionally, there are cases in which the overall figure count can be reduced with minimal impact on the quality of the corrected data if notches are detected and corrected, and cases where data quality could be improved if specific valley notches are filled in or peak notches are cut out. Such cases generally satisfy specific geometrical restrictions in order to be valid candidates for notch correction. Traditional notch detection has been done for rectilinear (Manhattan-style) data and only in axis-parallel directions. The traditional approaches employ dimensional measurement algorithms that measure edge distances along the outside of polygons. These approaches are in general adaptations, and therefore ill-suited to generalized detection of notches with unusual shapes or rotations. This paper covers a novel algorithm developed for the CATS MRCC tool that finds valley and/or peak notches that are candidates for removal. The algorithm is generalized and invariant to data rotation, so it can find notches in data rotated at any angle. It includes parameters to control the dimensions of detected notches, as well as algorithm tolerances and data reach.
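To make the rotation-invariant idea concrete, here is a hypothetical vertex-turn test in NumPy. It uses only relative edge vectors (cross products and edge lengths), so it behaves identically at any rotation; the classification rule and parameters are illustrative and are not the CATS MRCC algorithm.

```python
import numpy as np

def find_notches(poly, max_edge, ccw=True):
    """Flag candidate notch vertices in a closed polygon given as an (N, 2)
    array. A vertex is a valley candidate when the turn there opposes the
    polygon orientation (a reflex vertex) and both incident edges are short;
    a peak candidate when the turn matches the orientation."""
    notches = []
    n = len(poly)
    for i in range(n):
        p0, p1, p2 = poly[i - 1], poly[i], poly[(i + 1) % n]
        e1, e2 = p1 - p0, p2 - p1
        cross = e1[0] * e2[1] - e1[1] * e2[0]       # turn direction at p1
        short = np.linalg.norm(e1) < max_edge and np.linalg.norm(e2) < max_edge
        if short and cross != 0:
            is_reflex = (cross < 0) if ccw else (cross > 0)
            notches.append((i, "valley" if is_reflex else "peak"))
    return notches
```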
QuateXelero: An Accelerated Exact Network Motif Detection Algorithm
Khakabimamaghani, Sahand; Sharafuddin, Iman; Dichter, Norbert; Koch, Ina; Masoudi-Nejad, Ali
2013-01-01
Finding motifs in biological, social, technological, and other types of networks has become a widespread method to gain more knowledge about these networks' structure and function. However, this task is very computationally demanding, because it is closely tied to graph isomorphism, a problem not yet known to be in P or to be NP-complete. Accordingly, this research endeavors to decrease the need to call the NAUTY isomorphism detection method, which is the most time-consuming step in many existing algorithms. The work provides an extremely fast motif detection algorithm called QuateXelero, which has a quaternary tree data structure at its heart. The proposed algorithm is based on the well-known ESU (FANMOD) motif detection algorithm. The results of experiments on some standard model networks confirm the overall superiority of the proposed algorithm, QuateXelero, compared with two of the fastest existing algorithms, G-Tries and Kavosh. QuateXelero is especially fast in constructing the central data structure of the algorithm from scratch based on the input network. PMID:23874498
Anomaly Detection in Large Sets of High-Dimensional Symbol Sequences
NASA Technical Reports Server (NTRS)
Budalakoti, Suratna; Srivastava, Ashok N.; Akella, Ram; Turkov, Eugene
2006-01-01
This paper addresses the problem of detecting and describing anomalies in large sets of high-dimensional symbol sequences. The approach taken uses unsupervised clustering of sequences using the normalized longest common subsequence (LCS) as a similarity measure, followed by detailed analysis of outliers to detect anomalies. As the LCS measure is expensive to compute, the first part of the paper discusses existing algorithms, such as the Hunt-Szymanski algorithm, that have low time-complexity. We then discuss why these algorithms often do not work well in practice and present a new hybrid algorithm for computing the LCS that, in our tests, outperforms the Hunt-Szymanski algorithm by a factor of five. The second part of the paper presents new algorithms for outlier analysis that provide comprehensible indicators as to why a particular sequence was deemed to be an outlier. The algorithms provide a coherent description to an analyst of the anomalies in the sequence, compared to more normal sequences. The algorithms we present are general and domain-independent, so we discuss applications in related areas such as anomaly detection.
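A reference implementation of the normalized LCS similarity by plain dynamic programming; the paper's hybrid algorithm is faster, and dividing by the longer sequence length is one common normalization choice that may differ from the authors'.

```python
def normalized_lcs(a, b):
    """Similarity in [0, 1] based on the longest common subsequence,
    computed with O(len(a) * len(b)) dynamic programming."""
    m, n = len(a), len(b)
    if max(m, n) == 0:
        return 1.0
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n] / max(m, n)   # one common normalization choice
```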
Target-type probability combining algorithms for multisensor tracking
NASA Astrophysics Data System (ADS)
Wigren, Torbjorn
2001-08-01
Algorithms for the handling of target type information in an operational multi-sensor tracking system are presented. The paper discusses recursive target type estimation, computation of crosses from passive data (strobe track triangulation), and computation of the quality of the crosses for deghosting purposes. The focus is on Bayesian algorithms that operate in the discrete target-type probability space, and on the approximations introduced to reduce computational complexity. The centralized algorithms are able to fuse discrete data from a variety of sensors and information sources, including IFF equipment, ESMs, and IRSTs, as well as flight envelopes estimated from track data. All algorithms are asynchronous and can be tuned to handle clutter, erroneous associations, and missed and erroneous detections. A key to this ability is data forgetting, implemented by a procedure that propagates target-type probability states between measurement times. Other important properties of the algorithms are their ability to handle ambiguous data and scenarios. These aspects are illustrated in a simulation study, whose setup includes 46 air targets of 6 different types tracked by 5 airborne sensor platforms using ESMs and IRSTs as data sources.
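A sketch of one recursive update consistent with the description above: exponential forgetting of the discrete type distribution toward uniform between scans, followed by a Bayes step with the sensor's type likelihoods. The forgetting time constant and type count are assumptions, not values from the paper.

```python
import numpy as np

def update_type_probs(prior, likelihood, dt, tau=30.0, n_types=6):
    """One recursive update of discrete target-type probabilities.
    prior, likelihood: arrays of length n_types; dt: seconds since the
    last measurement; tau: assumed forgetting time constant."""
    lam = np.exp(-dt / tau)                          # forgetting factor
    uniform = np.full(n_types, 1.0 / n_types)
    predicted = lam * prior + (1.0 - lam) * uniform  # propagate between scans
    posterior = likelihood * predicted               # Bayes numerator
    return posterior / posterior.sum()               # normalize
```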
Xiaofeng Yang; Guanghao Sun; Ishibashi, Koichiro
2017-07-01
Non-contact measurement of the respiration rate (RR) and heart rate (HR) using a Doppler radar has attracted increasing attention in home healthcare monitoring, because it places an extremely low burden on patients and is unobtrusive and unconstrained. Most previous studies have performed frequency-domain analysis of radar signals to detect the respiration and heartbeat frequencies. However, these procedures required long time windows (approximately 30 s) to obtain a high-resolution spectrum. In this study, we propose a time-domain peak detection algorithm for fast acquisition of the RR and HR within one breathing cycle (approximately 5 s), including inhalation and exhalation. Signal pre-processing with an analog band-pass filter (BPF) that extracts the respiration and heartbeat signals was performed; thereafter, the HR and RR were calculated using a peak position detection method implemented in LabVIEW. To evaluate measurement accuracy, we measured the HR and RR of seven subjects in the laboratory. As references for HR and RR, the subjects wore contact sensors, i.e., an electrocardiograph (ECG) and a respiration band. The time-domain peak detection algorithm based on the Doppler radar exhibited significant correlation coefficients of 0.92 for HR and 0.99 for RR against the ECG and respiration band, respectively.
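A digital sketch of the pipeline (the paper uses an analog band-pass filter and LabVIEW): band-pass filtering followed by time-domain peak picking with SciPy. The band edges and minimum peak spacings are typical values for respiration and heartbeat, not the paper's specification.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def rate_from_radar(x, fs, band, min_interval_s):
    """Band-pass the radar baseband signal, then count peak-to-peak
    intervals in the time domain; returns a rate per minute."""
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, x)
    peaks, _ = find_peaks(filtered, distance=int(min_interval_s * fs))
    intervals = np.diff(peaks) / fs                 # seconds between peaks
    return 60.0 / intervals.mean()

# Typical usage (band edges are assumptions):
# rr = rate_from_radar(sig, fs=100, band=(0.1, 0.5), min_interval_s=1.5)
# hr = rate_from_radar(sig, fs=100, band=(0.8, 2.0), min_interval_s=0.4)
```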
On the sensitivity of TG-119 and IROC credentialing to TPS commissioning errors.
McVicker, Drew; Yin, Fang-Fang; Adamson, Justus D
2016-01-08
We investigate the sensitivity of IMRT commissioning using the TG-119 C-shape phantom and credentialing with the IROC head and neck phantom to treatment planning system commissioning errors. We introduced errors into various aspects of the commissioning process for a 6X photon energy modeled using the analytical anisotropic algorithm within a commercial treatment planning system. Errors were introduced into the various components of the dose calculation algorithm, including primary photons, secondary photons, electron contamination, and MLC parameters. For each error we evaluated the probability that it could be committed unknowingly during the dose algorithm commissioning stage, and the probability of it being identified during the verification stage. The clinical impact of each commissioning error was evaluated using representative IMRT plans, including low- and intermediate-risk prostate, head and neck, mesothelioma, and scalp; the sensitivity of the TG-119 and IROC phantoms was evaluated by comparing dosimetric changes in the dose planes where film measurements occur and changes in point doses where dosimeter measurements occur. No commissioning errors were found to have both a low probability of detection and high clinical severity. When errors do occur, the IROC credentialing and TG-119 commissioning criteria are generally effective at detecting them; however, for the IROC phantom, OAR point-dose measurements are the most sensitive despite being currently excluded from IROC analysis. Point-dose measurements with an absolute dose constraint were the most effective at detecting errors, while film analysis using a gamma comparison and the IROC film distance-to-agreement criteria were less effective at detecting the specific commissioning errors implemented here.
Robust Kalman filter design for predictive wind shear detection
NASA Technical Reports Server (NTRS)
Stratton, Alexander D.; Stengel, Robert F.
1991-01-01
Severe, low-altitude wind shear is a threat to aviation safety. Airborne sensors under development measure the radial component of wind along a line directly in front of an aircraft. In this paper, optimal estimation theory is used to define a detection algorithm to warn of hazardous wind shear from these sensors. To achieve robustness, a wind shear detection algorithm must distinguish threatening wind shear from less hazardous gustiness, despite variations in wind shear structure. This paper presents statistical analysis methods to refine wind shear detection algorithm robustness. Computational methods predict the ability to warn of severe wind shear and avoid false warning. Comparative capability of the detection algorithm as a function of its design parameters is determined, identifying designs that provide robust detection of severe wind shear.
Real-time ECG monitoring and arrhythmia detection using Android-based mobile devices.
Gradl, Stefan; Kugler, Patrick; Lohmuller, Clemens; Eskofier, Bjoern
2012-01-01
We developed an application for Android™-based mobile devices that allows real-time electrocardiogram (ECG) monitoring and automated arrhythmia detection by analyzing ECG parameters. ECG data provided by pre-recorded files or acquired live by accessing a Shimmer™ sensor node via Bluetooth™ can be processed and evaluated. The application is based on the Pan-Tompkins algorithm for QRS-detection and contains further algorithm blocks to detect abnormal heartbeats. The algorithm was validated using the MIT-BIH Arrhythmia and MIT-BIH Supraventricular Arrhythmia databases. More than 99% of all QRS complexes were detected correctly by the algorithm. Overall sensitivity for abnormal beat detection was 89.5% with a specificity of 80.6%. The application is available for download and may be used for real-time ECG-monitoring on mobile devices.
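A condensed sketch of the Pan-Tompkins stages with SciPy; the fixed-fraction threshold replaces the original adaptive thresholding scheme, so this is an approximation of the algorithm the app implements rather than a faithful port.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def pan_tompkins_qrs(ecg, fs):
    """Simplified Pan-Tompkins stages: band-pass, derivative, squaring,
    moving-window integration, then peak picking."""
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    x = filtfilt(b, a, ecg)                    # isolate QRS energy (5-15 Hz)
    x = np.diff(x)                             # emphasize steep slopes
    x = x ** 2                                 # rectify and amplify
    win = int(0.150 * fs)                      # 150 ms integration window
    x = np.convolve(x, np.ones(win) / win, mode="same")
    peaks, _ = find_peaks(x, height=0.3 * x.max(), distance=int(0.2 * fs))
    return peaks                               # approximate QRS sample indices
```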
Chatlapalli, S; Nazeran, H; Melarkod, V; Krishnam, R; Estrada, E; Pamula, Y; Cabrera, S
2004-01-01
The electrocardiogram (ECG) signal is used extensively as a low-cost diagnostic tool to provide information concerning the heart's state of health. Accurate determination of the QRS complex, in particular reliable detection of the R wave peak, is essential in computer-based ECG analysis. ECG data from Physionet's Sleep-Apnea database were used to develop, test, and validate a robust heart rate variability (HRV) signal derivation algorithm. The HRV signal was derived from pre-processed ECG signals by developing an enhanced Hilbert transform (EHT) algorithm with built-in missing-beat detection capability for reliable QRS detection. The performance of the EHT algorithm was then compared against that of a popular Hilbert transform-based (HT) QRS detection algorithm. Autoregressive (AR) modeling of the HRV power spectrum for both EHT- and HT-derived HRV signals was performed, and different parameters from their power spectra as well as approximate entropy were derived for comparison. Poincaré plots were then used as a visualization tool to highlight the detection of missing beats by the EHT method. After validation of the EHT algorithm on ECG data from Physionet, the algorithm was further tested and validated on a dataset obtained from children undergoing polysomnography for detection of sleep-disordered breathing (SDB). Sensitive measures from the accurate HRV signals were then derived for detecting and diagnosing sleep-disordered breathing in children. All signal processing algorithms were implemented in MATLAB. We present a description of the EHT algorithm and analyze pilot data for eight children undergoing nocturnal polysomnography. The pilot data demonstrated that the EHT method provides an accurate way of deriving the HRV signal and plays an important role in extracting reliable measures to distinguish between periods of normal and sleep-disordered breathing in children.
Algorithm for detection of QRS complexes based on a support vector machine
NASA Astrophysics Data System (ADS)
Van, G. V.; Podmasteryev, K. V.
2017-11-01
The efficiency of computer ECG analysis depends on the accurate detection of QRS complexes. This paper presents an algorithm for QRS complex detection based on a support vector machine (SVM). The proposed algorithm is evaluated on annotated standard databases such as the MIT-BIH Arrhythmia database, for which the QRS detector obtained a sensitivity Se = 98.32% and specificity Sp = 95.46%. This algorithm can be used as the basis for software to diagnose the electrical activity of the heart.
Aiding the Detection of QRS Complex in ECG Signals by Detecting S Peaks Independently.
Sabherwal, Pooja; Singh, Latika; Agrawal, Monika
2018-03-30
In this paper, a novel algorithm for the accurate detection of the QRS complex, combining independent detection of R and S peaks through a fusion algorithm, is proposed. R peak detection has been extensively studied and is commonly used to detect the QRS complex, whereas S peaks, which are also part of the QRS complex, can be detected independently to aid QRS detection. We suggest a method to first estimate S peaks from the raw ECG signal and then use them to aid the detection of the QRS complex. The amplitude of the S peak in the ECG signal is weak relative to the corresponding R peak, which is traditionally used for QRS detection; therefore, an appropriate digital filter is designed to enhance the S peaks. These enhanced S peaks are then detected by adaptive thresholding. The algorithm is validated on all the signals of the MIT-BIH arrhythmia database and the noise stress database taken from physionet.org, and it performs reasonably well even for signals highly corrupted by noise. Its performance is confirmed by a sensitivity and positive predictivity of 99.99% and a detection accuracy of 99.98% for QRS complex detection. The numbers of false positives and false negatives were reduced to 80 and 42, respectively, against the 98 and 84 of the best results reported so far.
Enhancement of Fast Face Detection Algorithm Based on a Cascade of Decision Trees
NASA Astrophysics Data System (ADS)
Khryashchev, V. V.; Lebedev, A. A.; Priorov, A. L.
2017-05-01
A face detection algorithm based on a cascade of ensembles of decision trees (CEDT) is presented. The new approach allows detecting faces in poses other than frontal through the use of multiple classifiers, each trained for a specific range of head rotation angles. The results showed high processing speed for CEDT on standard-size images. The algorithm increases the area under the ROC curve by 13% compared to a standard Viola-Jones face detection algorithm. The final implementation of the algorithm consists of five different cascades for frontal and non-frontal faces. The simulation results also show that the CEDT algorithm has low computational complexity in comparison with the standard Viola-Jones approach. This could prove important in the embedded system and mobile device industries because it can reduce the cost of hardware and make battery life longer.
Karayiannis, Nicolaos B; Mukherjee, Amit; Glover, John R; Ktonas, Periklis Y; Frost, James D; Hrachovy, Richard A; Mizrahi, Eli M
2006-04-01
This paper presents an approach to detect epileptic seizure segments in the neonatal electroencephalogram (EEG) by characterizing the spectral features of the EEG waveform using a rule-based algorithm cascaded with a neural network. A rule-based algorithm screens out short segments of pseudosinusoidal EEG patterns as epileptic based on features in the power spectrum. The output of the rule-based algorithm is used to train and compare the performance of conventional feedforward neural networks and quantum neural networks. The results indicate that the trained neural networks, cascaded with the rule-based algorithm, improved the performance of the rule-based algorithm acting by itself. The evaluation of the proposed cascaded scheme for the detection of pseudosinusoidal seizure segments reveals its potential as a building block of the automated seizure detection system under development.
Improved space object detection using short-exposure image data with daylight background.
Becker, David; Cain, Stephen
2018-05-10
Space object detection is of great importance in the highly dependent yet competitive and congested space domain. The detection algorithms employed play a crucial role in fulfilling the detection component in the space situational awareness mission to detect, track, characterize, and catalog unknown space objects. Many current space detection algorithms use a matched filter or a spatial correlator on long-exposure data to make a detection decision at a single pixel point of a spatial image based on the assumption that the data follow a Gaussian distribution. Long-exposure imaging is critical to detection performance in these algorithms; however, for imaging under daylight conditions, it becomes necessary to create a long-exposure image as the sum of many short-exposure images. This paper explores the potential for increasing detection capabilities for small and dim space objects in a stack of short-exposure images dominated by a bright background. The algorithm proposed in this paper improves the traditional stack and average method of forming a long-exposure image by selectively removing short-exposure frames of data that do not positively contribute to the overall signal-to-noise ratio of the averaged image. The performance of the algorithm is compared to a traditional matched filter detector using data generated in MATLAB as well as laboratory-collected data. The results are illustrated on a receiver operating characteristic curve to highlight the increased probability of detection associated with the proposed algorithm.
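The frame-selection idea can be sketched compactly: rank short-exposure frames by an SNR estimate and grow the stack only while the averaged image keeps improving. The per-frame ranking and the crude SNR estimate below are assumptions standing in for the paper's selection rule.

```python
# Illustrative selective frame stacking for daylight-dominated imagery.
import numpy as np

def snr(img):
    # Crude SNR estimate: brightest pixel above background, scaled by noise.
    bg, sigma = np.median(img), np.std(img)
    return (img.max() - bg) / (sigma + 1e-12)

def selective_stack(frames):
    """frames: array of shape (n_frames, H, W) of short exposures."""
    order = np.argsort([snr(f) for f in frames])[::-1]  # best frames first
    best, best_snr = None, -np.inf
    running = np.zeros_like(frames[0], dtype=float)
    for k, idx in enumerate(order, start=1):
        running += frames[idx]
        s = snr(running / k)
        if s > best_snr:          # keep growing the stack while SNR improves
            best, best_snr = running / k, s
        else:
            break                 # further frames no longer help
    return best
```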
Revolution in nuclear detection affairs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stern, Warren M.
The detection of nuclear or radioactive materials for homeland or national security purposes is inherently difficult. This is one reason detection efforts must be seen as just one part of an overall nuclear defense strategy which includes, inter alia, material security, detection, interdiction, consequence management and recovery. Nevertheless, one could argue that there has been a revolution in detection affairs in the past several decades as the innovative application of new technology has changed the character and conduct of detection operations. This revolution will likely be most effectively reinforced in the coming decades with the networking of detectors and innovative application of anomaly detection algorithms.
Van Hise, Christopher B; Greenslade, Jaimi H; Parsonage, William; Than, Martin; Young, Joanna; Cullen, Louise
2018-02-01
To externally validate a clinical decision rule incorporating heart fatty acid binding protein (h-FABP), high-sensitivity troponin (hs-cTn) and electrocardiogram (ECG) for the detection of acute myocardial infarction (AMI) on presentation to the Emergency Department. We also investigated whether this clinical decision rule improved identification of AMI over algorithms incorporating hs-cTn and ECG only. This study included data from 789 patients from the Brisbane ADAPT cohort and 441 patients from the Christchurch TIMI RCT cohort. The primary outcome was index AMI. Sensitivity, specificity, positive predictive value and negative predictive value were used to assess the diagnostic accuracy of the algorithms. In total, 1230 patients were recruited, including 112 (9.1%) with AMI. The algorithm including h-FABP and hs-cTnT had 100% sensitivity and 32.4% specificity. The algorithm utilising h-FABP and hs-cTnI had similar sensitivity (99.1%) and higher specificity (43.4%). The hs-cTnI and hs-cTnT algorithms without h-FABP both had a sensitivity of 98.2%, a result that was not significantly different from either algorithm incorporating h-FABP. Specificity was higher for the hs-cTnI algorithm (68.1%) compared to the hs-cTnT algorithm (33.0%). The specificity of the algorithm incorporating hs-cTnI alone was also significantly higher than both of the algorithms incorporating h-FABP (p<0.01). For patients presenting to the Emergency Department with chest pain, an algorithm incorporating h-FABP, hs-cTn and ECG has high accuracy and can rule out up to 40% of patients. An algorithm incorporating only hs-cTn and ECG has similar sensitivity and may rule out a higher proportion of patients. Each of the algorithms can be used to safely identify patients as low risk for AMI on presentation to the Emergency Department. Copyright © 2017 The Canadian Society of Clinical Chemists. All rights reserved.
Shadow Detection Based on Regions of Light Sources for Object Extraction in Nighttime Video
Lee, Gil-beom; Lee, Myeong-jin; Lee, Woo-Kyung; Park, Joo-heon; Kim, Tae-Hwan
2017-01-01
Intelligent video surveillance systems detect pre-configured surveillance events through background modeling, foreground and object extraction, object tracking, and event detection. Shadow regions inside video frames sometimes appear as foreground objects, interfere with ensuing processes, and finally degrade the event detection performance of the systems. Conventional studies have mostly used intensity, color, texture, and geometric information to perform shadow detection in daytime video, but these methods lack the capability of removing shadows in nighttime video. In this paper, a novel shadow detection algorithm for nighttime video is proposed; this algorithm partitions each foreground object based on the object’s vertical histogram and screens out shadow objects by validating their orientations heading toward regions of light sources. From the experimental results, it can be seen that the proposed algorithm shows more than 93.8% shadow removal and 89.9% object extraction rates for nighttime video sequences, and the algorithm outperforms conventional shadow removal algorithms designed for daytime videos. PMID:28327515
Obstacle Detection Algorithms for Rotorcraft Navigation
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia I.; Huang, Ying; Narasimhamurthy, Anand; Pande, Nitin; Ahumada, Albert (Technical Monitor)
2001-01-01
In this research we addressed the problem of obstacle detection for low-altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcraft. Since they are very thin, detecting them early enough for the pilot to take evasive action is difficult, as their images can be less than one or two pixels wide. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential to solve the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer-generated wire images. The performance of the algorithm was evaluated both at the pixel and the wire levels. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post-processing is performed to remove false alarms due to clutter.
Falls event detection using triaxial accelerometry and barometric pressure measurement.
Bianchi, Federico; Redmond, Stephen J; Narayanan, Michael R; Cerutti, Sergio; Celler, Branko G; Lovell, Nigel H
2009-01-01
A falls detection system, employing a Bluetooth-based wearable device, containing a triaxial accelerometer and a barometric pressure sensor, is described. The aim of this study is to evaluate the use of barometric pressure measurement, as a surrogate measure of altitude, to augment previously reported accelerometry-based falls detection algorithms. The accelerometry and barometric pressure signals obtained from the waist-mounted device are analyzed by a signal processing and classification algorithm to discriminate falls from activities of daily living. This falls detection algorithm has been compared to two existing algorithms which utilize accelerometry signals alone. A set of laboratory-based simulated falls, along with other tasks associated with activities of daily living (16 tests) were performed by 15 healthy volunteers (9 male and 6 female; age: 23.7 +/- 2.9 years; height: 1.74 +/- 0.11 m). The algorithm incorporating pressure information detected falls with the highest sensitivity (97.8%) and the highest specificity (96.7%).
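The role of the pressure channel can be illustrated with a toy decision rule: an acceleration impact is confirmed as a fall only if barometric pressure indicates a drop in altitude. The thresholds, window lengths, and the 12 Pa-per-metre conversion below are illustrative assumptions, not the study's tuned classifier.

```python
# Toy pressure-augmented fall detector (illustrative, not the study's method).
import numpy as np

def detect_fall(acc, pressure, fs=100, impact_g=2.5, min_alt_drop_m=0.4):
    """acc: (n, 3) accelerometer in g; pressure: (n,) barometric pressure in Pa."""
    mag = np.linalg.norm(acc, axis=1)
    for i in np.where(mag > impact_g)[0]:     # candidate impact samples
        if i < fs or i + 2 * fs > len(pressure):
            continue
        pre = pressure[i - fs:i].mean()       # 1 s before impact
        post = pressure[i:i + 2 * fs].mean()  # 2 s after impact
        # Near sea level, pressure rises ~12 Pa per metre of altitude lost.
        if (post - pre) / 12.0 > min_alt_drop_m:
            return i                          # sample index of detected fall
    return None
```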
Comparison of algorithms for automatic border detection of melanoma in dermoscopy images
NASA Astrophysics Data System (ADS)
Srinivasa Raghavan, Sowmya; Kaur, Ravneet; LeAnder, Robert
2016-09-01
Melanoma is one of the most rapidly accelerating cancers in the world [1]. Early diagnosis is critical to an effective cure. We propose a new algorithm for more accurately detecting melanoma borders in dermoscopy images. Proper border detection requires eliminating occlusions like hair and bubbles by processing the original image. The preprocessing step involves transforming the RGB image to the CIE L*u*v* color space, in order to decouple brightness from color information, then increasing contrast using contrast-limited adaptive histogram equalization (CLAHE), followed by artifact removal using a Gaussian filter. After preprocessing, the Chan-Vese technique segments the preprocessed images to create a lesion mask, which undergoes a morphological closing operation. Next, the largest central blob in the lesion is detected, after which the blob is dilated to generate an image output mask. Finally, the automatically-generated mask is compared to the manual mask by calculating the XOR error [3]. Our border detection algorithm was developed using training and test sets of 30 and 20 images, respectively. This detection method was compared to the SRM method [4] by calculating the average XOR error for each of the two algorithms. The average error for test images was 0.10 using the new algorithm and 0.99 using the SRM method, indicating that the new algorithm detects melanoma borders more accurately than the SRM algorithm.
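The described pipeline maps naturally onto scikit-image primitives; a hedged sketch follows. The CLAHE settings, Gaussian sigma, structuring-element sizes, and Chan-Vese iteration count are illustrative assumptions, not the authors' tuned values.

```python
# Sketch of the border-detection pipeline using scikit-image.
import numpy as np
from skimage import color, exposure, filters, segmentation, morphology, measure

def detect_border(rgb, manual_mask=None):
    luv = color.rgb2luv(rgb)                    # decouple brightness from color
    L = exposure.equalize_adapthist(luv[..., 0] / 100.0)  # CLAHE on lightness
    L = filters.gaussian(L, sigma=2)            # suppress hair/bubble artifacts
    mask = segmentation.chan_vese(L, max_num_iter=200)    # active-contour mask
    mask = morphology.closing(mask, morphology.disk(5))
    labels = measure.label(mask)                # keep the largest blob
    if labels.max() > 0:
        largest = np.argmax(np.bincount(labels.ravel())[1:]) + 1
        mask = labels == largest
    mask = morphology.dilation(mask, morphology.disk(3))
    if manual_mask is not None:                 # XOR error against expert mask
        return mask, np.logical_xor(mask, manual_mask).sum() / manual_mask.sum()
    return mask
```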
A Clinical Aid for Detecting Skin Cancer: The Triage Amalgamated Dermoscopic Algorithm (TADA).
Rogers, T; Marino, M L; Dusza, S W; Bajaj, S; Usatine, R P; Marchetti, M A; Marghoob, A A
Family physicians (FPs) frequently evaluate skin lesions but may not have the necessary training to accurately and confidently identify lesions that require skin biopsy or specialist referral. We evaluated the diagnostic performance of a new, simplified dermoscopy algorithm for skin cancer detection. In this cross-sectional observational study, attendees of a dermoscopy course evaluated 50 polarized dermoscopy images of skin lesions (27 malignant and 23 benign) using the Triage Amalgamated Dermoscopic Algorithm (TADA). The dermoscopic criteria of TADA include architectural disorder (ie, disorganized or asymmetric distribution of colors and/or structures), starburst pattern, blue-black or gray color, white structures, negative network, ulcer, and vessels. The study occurred after 1 day of basic dermoscopy training. Clinical information related to palpation (ie, firm, dimpling) was provided when relevant. Of 200 course attendees, 120 (60%) participated in the study. Participants included 64 (53.3%) dermatologists and 41 (34.2%) primary care physicians, 19 (46.3%) of whom were FPs. Fifty-two (43%) individuals had no previous dermoscopy training. Overall, the sensitivity and specificity of TADA for malignant skin lesions was 94.8% and 72.3%, respectively. Previous dermoscopy training and years of dermoscopy experience were not associated with diagnostic sensitivity (P = .13 and P = .05, respectively) or specificity (P = .36 and P = .21, respectively). Specialty type was not associated with sensitivity (P = .37) but dermatologists had a higher specificity than nondermatologists (79% v. 72%, P = .008). After basic instruction, TADA may be a useful dermoscopy algorithm for FPs who examine skin lesions as it has a high sensitivity for detecting skin cancer. © Copyright 2016 by the American Board of Family Medicine.
Wavelet-based automatic determination of the P- and S-wave arrivals
NASA Astrophysics Data System (ADS)
Bogiatzis, P.; Ishii, M.
2013-12-01
The detection of P- and S-wave arrivals is important for a variety of seismological applications, including earthquake detection and characterization, and seismic tomography problems such as imaging of hydrocarbon reservoirs. For many years, dedicated human analysts manually selected the arrival times of P and S waves. However, with the rapid expansion of seismic instrumentation, automatic techniques that can process a large number of seismic traces are becoming essential in tomographic applications, and for earthquake early-warning systems. In this work, we present a pair of algorithms for efficient picking of P and S onset times. The algorithms are based on the continuous wavelet transform of the seismic waveform, which allows examination of a signal in both time and frequency domains. Unlike the Fourier transform, the basis functions are localized in time and frequency; therefore, wavelet decomposition is suitable for analysis of non-stationary signals. For detecting the P-wave arrival, the wavelet coefficients are calculated using the vertical component of the seismogram, and the onset time of the wave is identified. In the case of the S-wave arrival, we take advantage of the polarization of the shear waves, and cross-examine the wavelet coefficients from the two horizontal components. In addition to the onset times, the automatic picking program provides estimates of uncertainty, which are important for subsequent applications. The algorithms are tested with synthetic data that are generated to include sudden changes in amplitude, frequency, and phase. The performance of the wavelet approach is further evaluated using real data by comparing the automatic picks with manual picks. Our results suggest that the proposed algorithms provide robust measurements that are comparable to manual picks for both P- and S-wave arrivals.
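The picking principle can be illustrated with a short sketch: build a characteristic function from CWT coefficient energy on the vertical component and pick the P onset where it first exceeds a noise-based threshold. The Morlet wavelet, scale range, 2 s noise window, and threshold rule are assumptions, not the authors' implementation, and the uncertainty estimation is omitted.

```python
# Sketch of CWT-based P-onset picking on the vertical component.
import numpy as np
import pywt

def pick_p_onset(z, fs=100.0, scales=np.arange(1, 32), k=5.0):
    """z: vertical-component seismogram; returns onset sample index or None."""
    coefs, _ = pywt.cwt(z, scales, "morl", sampling_period=1.0 / fs)
    cf = np.sum(np.abs(coefs), axis=0)     # characteristic function over time
    noise = cf[: int(2 * fs)]              # assume the first 2 s are pre-event
    thresh = noise.mean() + k * noise.std()
    above = np.where(cf > thresh)[0]
    return int(above[0]) if above.size else None
```

An S picker in the same spirit would cross-examine the coefficients of the two horizontal components, exploiting shear-wave polarization as described above.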
Immunity-Based Aircraft Fault Detection System
NASA Technical Reports Server (NTRS)
Dasgupta, D.; KrishnaKumar, K.; Wong, D.; Berry, M.
2004-01-01
In the study reported in this paper, we have developed and applied an Artificial Immune System (AIS) algorithm for aircraft fault detection, as an extension to a previous work on intelligent flight control (IFC). Though the prior studies had established the benefits of IFC, one area of weakness that needed to be strengthened was the control dead band induced by commanding a failed surface. Since the IFC approach uses fault accommodation with no detection, the dead band, although it reduces over time due to learning, is present and causes degradation in handling qualities. If the failure can be identified, this dead band can be further reduced to ensure rapid fault accommodation and better handling qualities. The paper describes the application of an immunity-based approach that can detect a broad spectrum of known and unforeseen failures. The approach incorporates the knowledge of the normal operational behavior of the aircraft from sensory data, and probabilistically generates a set of pattern detectors that can detect any abnormalities (including faults) in the behavior pattern indicating unsafe in-flight operation. We developed a tool called MILD (Multi-level Immune Learning Detection) based on a real-valued negative selection algorithm that can generate a small number of specialized detectors (as signatures of known failure conditions) and a larger set of generalized detectors for unknown (or possible) fault conditions. Once the fault is detected and identified, an adaptive control system would use this detection information to stabilize the aircraft by utilizing available resources (control surfaces). We experimented with data sets collected under normal and various simulated failure conditions using a piloted motion-base simulation facility. The reported results are from a collection of test cases that reflect the performance of the proposed immunity-based fault detection algorithm.
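The core of real-valued negative selection can be sketched generically: random detectors are retained only if they lie outside a "self" radius of all normal training patterns, and a new sample matching any detector is flagged as a fault. This is a generic illustration, not the MILD tool itself; the radii, detector count, and [0, 1] feature scaling are assumptions.

```python
# Generic real-valued negative selection sketch.
import numpy as np

rng = np.random.default_rng(0)

def train_detectors(self_data, n_detectors=500, self_radius=0.1):
    """self_data: (n, d) normal sensor patterns scaled to [0, 1]."""
    detectors = []
    while len(detectors) < n_detectors:
        cand = rng.random(self_data.shape[1])       # random candidate detector
        # Keep only candidates that do NOT match normal behavior.
        if np.min(np.linalg.norm(self_data - cand, axis=1)) > self_radius:
            detectors.append(cand)
    return np.array(detectors)

def is_fault(sample, detectors, match_radius=0.1):
    # A sample close to any detector falls outside the learned "self" set.
    return bool(np.min(np.linalg.norm(detectors - sample, axis=1)) < match_radius)
```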
NASA Astrophysics Data System (ADS)
Lieb, Florian; Stark, Hans-Georg; Thielemann, Christiane
2017-06-01
Objective. Spike detection from extracellular recordings is a crucial preprocessing step when analyzing neuronal activity. The decision whether a specific part of the signal is a spike or not is important for any subsequent processing steps, such as spike sorting or burst detection, in order to reduce the number of erroneously identified spikes. Many spike detection algorithms have already been suggested, all working reasonably well whenever the signal-to-noise ratio is large enough. When the noise level is high, however, these algorithms have a poor performance. Approach. In this paper we present two new spike detection algorithms. The first is based on a stationary wavelet energy operator and the second is based on the time-frequency representation of spikes. Both algorithms are more reliable than all of the most commonly used methods. Main results. The performance of the algorithms is confirmed by using simulated data, resembling original data recorded from cortical neurons with multielectrode arrays. In order to demonstrate that the performance of the algorithms is not restricted to only one specific set of data, we also verify the performance using a simulated publicly available data set. We show that both proposed algorithms have the best performance among all tested methods, regardless of the signal-to-noise ratio, in both data sets. Significance. This contribution will benefit electrophysiological investigations of human cells. In particular, spatial and temporal analysis of neural network communication is improved by using the proposed spike detection algorithms.
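The first of the two proposed detectors can be approximated in a few lines: take a stationary wavelet detail band and apply a Teager-style energy operator, then threshold against a robust noise estimate. The wavelet, decomposition level, threshold constant, and 1 ms merge window are assumptions, not the paper's exact operator.

```python
# Sketch of spike detection via stationary wavelet transform + energy operator.
import numpy as np
import pywt

def swt_energy_spikes(x, fs=24000, wavelet="sym5", level=3, k=4.0):
    n = len(x) - len(x) % (2 ** level)    # swt needs length divisible by 2^level
    d = pywt.swt(x[:n], wavelet, level=level)[0][1]   # coarsest detail band
    # Teager energy operator: psi[t] = d[t]^2 - d[t-1] * d[t+1]
    psi = d[1:-1] ** 2 - d[:-2] * d[2:]
    thr = k * np.median(np.abs(psi)) / 0.6745         # robust noise scaling
    idx = np.where(psi > thr)[0] + 1
    # Merge detections closer than 1 ms into single spike events.
    return [i for j, i in enumerate(idx) if j == 0 or i - idx[j - 1] > fs // 1000]
```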
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akcakaya, Murat; Nehorai, Arye; Sen, Satyabrata
Most existing radar algorithms are developed under the assumption that the environment (clutter) is stationary. However, in practice, the characteristics of the clutter can vary enormously depending on the radar-operational scenarios. If unaccounted for, these nonstationary variabilities may drastically hinder the radar performance. Therefore, to overcome such shortcomings, we develop a data-driven method for target detection in nonstationary environments. In this method, the radar dynamically detects changes in the environment and adapts to these changes by learning the new statistical characteristics of the environment and by intelligently updating its statistical detection algorithm. Specifically, we employ drift detection algorithms to detect changes in the environment, and incremental learning, particularly learning under concept drift algorithms, to learn the new statistical characteristics of the environment from the new radar data that become available in batches over a period of time. The newly learned environment characteristics are then integrated into the detection algorithm. Furthermore, we use Monte Carlo simulations to demonstrate that the developed method provides a significant improvement in the detection performance compared with detection techniques that are not aware of the environmental changes.
NASA Technical Reports Server (NTRS)
Russell, B. Don
1989-01-01
This research concentrated on the application of advanced signal processing, expert system, and digital technologies for the detection and control of low grade, incipient faults on spaceborne power systems. The researchers have considerable experience in the application of advanced digital technologies and the protection of terrestrial power systems. This experience was used in the current contracts to develop new approaches for protecting the electrical distribution system in spaceborne applications. The project was divided into three distinct areas: (1) investigate the applicability of fault detection algorithms developed for terrestrial power systems to the detection of faults in spaceborne systems; (2) investigate the digital hardware and architectures required to monitor and control spaceborne power systems with full capability to implement new detection and diagnostic algorithms; and (3) develop a real-time expert operating system for implementing diagnostic and protection algorithms. Significant progress has been made in each of the above areas. Several terrestrial fault detection algorithms were modified to better adapt to spaceborne power system environments. Several digital architectures were developed and evaluated in light of the fault detection algorithms.
Vehicle tracking using fuzzy-based vehicle detection window with adaptive parameters
NASA Astrophysics Data System (ADS)
Chitsobhuk, Orachat; Kasemsiri, Watjanapong; Glomglome, Sorayut; Lapamonpinyo, Pipatphon
2018-04-01
In this paper, a fuzzy-based vehicle tracking system is proposed. The proposed system consists of two main processes: vehicle detection and vehicle tracking. In the first process, the Gradient-based Adaptive Threshold Estimation (GATE) algorithm is adopted to provide a suitable threshold value for Sobel edge detection. The estimated threshold can adapt to the changes of diverse illumination conditions throughout the day. This leads to greater vehicle detection performance compared to a fixed user-defined threshold. In the second process, this paper proposes a novel vehicle tracking algorithm, namely Fuzzy-based Vehicle Analysis (FBA), in order to reduce the false estimation of vehicle tracking caused by uneven edges of large vehicles and by vehicles changing lanes. The proposed FBA algorithm employs the average edge density and the Horizontal Moving Edge Detection (HMED) algorithm to alleviate those problems by adopting fuzzy rule-based algorithms to rectify the vehicle tracking. The experimental results demonstrate that the proposed system provides a high vehicle detection accuracy of about 98.22% with a low false detection rate of about 3.92%.
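The GATE step can be illustrated with a simple stand-in: derive the Sobel threshold from each frame's own gradient statistics so that the edge map adapts as illumination changes. The mean-plus-k-sigma rule here is an assumption in place of the paper's estimator.

```python
# Illustrative adaptive-threshold Sobel edge detection for traffic frames.
import numpy as np
from scipy import ndimage

def adaptive_sobel_edges(gray_frame, k=2.0):
    g = gray_frame.astype(float)
    mag = np.hypot(ndimage.sobel(g, axis=1), ndimage.sobel(g, axis=0))
    thresh = mag.mean() + k * mag.std()   # adapts to current lighting
    return mag > thresh                   # binary edge map for detection
```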
Automatic arrival time detection for earthquakes based on Modified Laplacian of Gaussian filter
NASA Astrophysics Data System (ADS)
Saad, Omar M.; Shalaby, Ahmed; Samy, Lotfy; Sayed, Mohammed S.
2018-04-01
Precise identification of the onset time of an earthquake is imperative for correctly computing the earthquake's location and other parameters that are utilized for building seismic catalogues. P-wave arrivals of weak events or micro-earthquakes cannot be precisely determined due to background noise. In this paper, we propose a novel approach based on a Modified Laplacian of Gaussian (MLoG) filter to detect the onset time even in the presence of very weak signal-to-noise ratios (SNRs). The proposed algorithm utilizes a denoising-filter algorithm to smooth the background noise. In the proposed algorithm, we employ the MLoG mask to filter the seismic data. Afterward, we apply a dual-threshold comparator to detect the onset time of the event. The results show that the proposed algorithm can detect the onset time for micro-earthquakes accurately, at an SNR of -12 dB. The proposed algorithm achieves an onset-time picking accuracy of 93% with a standard deviation error of 0.10 s for 407 field seismic waveforms. We also compare the results with the short-term/long-term average (STA/LTA) algorithm and the Akaike Information Criterion (AIC), and the proposed algorithm outperforms them both.
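The two-stage idea, smoothing with a Laplacian-of-Gaussian-style mask and then applying a dual-threshold comparator, can be sketched as follows. SciPy's gaussian_laplace stands in for the MLoG mask, and the high/low threshold multipliers are illustrative assumptions.

```python
# Sketch of onset picking with a LoG characteristic function and dual thresholds.
import numpy as np
from scipy.ndimage import gaussian_laplace

def pick_onset(trace, sigma=5.0, hi=4.0, lo=1.5):
    cf = np.abs(gaussian_laplace(trace.astype(float), sigma))  # denoised energy
    noise = np.median(np.abs(cf)) / 0.6745     # robust noise level
    hi_idx = np.where(cf > hi * noise)[0]      # high threshold declares an event
    if hi_idx.size == 0:
        return None
    i = hi_idx[0]
    while i > 0 and cf[i] > lo * noise:        # backtrack to the low crossing
        i -= 1
    return i                                   # estimated onset sample
```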
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Devadiga, Sadashiva; Tang, Yuan-Liang
1994-01-01
This research was initiated as a part of the Advanced Sensor and Imaging System Technology (ASSIST) program at NASA Langley Research Center. The primary goal of this research is the development of image analysis algorithms for the detection of runways and other objects using an on-board camera. Initial effort was concentrated on images acquired using a passive millimeter wave (PMMW) sensor. The images obtained using PMMW sensors under poor visibility conditions due to atmospheric fog are characterized by very low spatial resolution but good image contrast compared to those images obtained using sensors operating in the visible spectrum. Algorithms developed for analyzing these images using a model of the runway and other objects are described in Part 1 of this report. Experimental verification of these algorithms was limited to a sequence of images simulated from a single frame of PMMW image. Subsequent development and evaluation of algorithms was done using video image sequences. These images have better spatial and temporal resolution compared to PMMW images. Algorithms for reliable recognition of runways and accurate estimation of spatial position of stationary objects on the ground have been developed and evaluated using several image sequences. These algorithms are described in Part 2 of this report. A list of all publications resulting from this work is also included.
Jimenez-Del-Toro, Oscar; Muller, Henning; Krenn, Markus; Gruenberg, Katharina; Taha, Abdel Aziz; Winterstein, Marianne; Eggel, Ivan; Foncubierta-Rodriguez, Antonio; Goksel, Orcun; Jakab, Andras; Kontokotsios, Georgios; Langs, Georg; Menze, Bjoern H; Salas Fernandez, Tomas; Schaer, Roger; Walleyo, Anna; Weber, Marc-Andre; Dicente Cid, Yashin; Gass, Tobias; Heinrich, Mattias; Jia, Fucang; Kahl, Fredrik; Kechichian, Razmig; Mai, Dominic; Spanier, Assaf B; Vincent, Graham; Wang, Chunliang; Wyeth, Daniel; Hanbury, Allan
2016-11-01
Variations in the shape and appearance of anatomical structures in medical images are often relevant radiological signs of disease. Automatic tools can help automate parts of this manual process. A cloud-based evaluation framework is presented in this paper including results of benchmarking current state-of-the-art medical imaging algorithms for anatomical structure segmentation and landmark detection: the VISCERAL Anatomy benchmarks. The algorithms are implemented in virtual machines in the cloud where participants can only access the training data and can be run privately by the benchmark administrators to objectively compare their performance in an unseen common test set. Overall, 120 computed tomography and magnetic resonance patient volumes were manually annotated to create a standard Gold Corpus containing a total of 1295 structures and 1760 landmarks. Ten participants contributed with automatic algorithms for the organ segmentation task, and three for the landmark localization task. Different algorithms obtained the best scores in the four available imaging modalities and for subsets of anatomical structures. The annotation framework, resulting data set, evaluation setup, results and performance analysis from the three VISCERAL Anatomy benchmarks are presented in this article. Both the VISCERAL data set and Silver Corpus generated with the fusion of the participant algorithms on a larger set of non-manually-annotated medical images are available to the research community.
A novel community detection method in bipartite networks
NASA Astrophysics Data System (ADS)
Zhou, Cangqi; Feng, Liang; Zhao, Qianchuan
2018-02-01
Community structure is a common and important feature in many complex networks, including bipartite networks, which are used as a standard model for many empirical networks comprised of two types of nodes. In this paper, we propose a two-stage method for detecting community structure in bipartite networks. First, we extend the widely-used Louvain algorithm to bipartite networks. The effectiveness and efficiency of the Louvain algorithm have been proved by many applications; however, a Louvain-like algorithm specially adapted to bipartite networks has been lacking. Based on bipartite modularity, a measure that extends unipartite modularity and quantifies the strength of partitions in bipartite networks, we fill the gap by developing the Bi-Louvain algorithm, which iteratively groups the nodes in each part in turn. In bipartite networks this algorithm often produces a balanced network structure with equal numbers of the two types of nodes. Second, for the balanced network yielded by the first algorithm, we use an agglomerative clustering method to further cluster the network. We demonstrate that the calculation of the modularity gain of each aggregation, and the operation of joining two communities, can be compactly expressed as matrix operations for all pairs of communities simultaneously. Finally, a complete hierarchical community structure is unfolded. We apply our method to two benchmark data sets and a large-scale data set from an e-commerce company, showing that it effectively identifies community structure in bipartite networks.
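The objective driving the Bi-Louvain stage is bipartite modularity; under the common Barber-style formulation (an assumption here, as the paper's exact definition is not reproduced), it can be computed directly from the biadjacency matrix, as in the sketch below. A full Bi-Louvain pass would then greedily move nodes between communities to maximize this score.

```python
# Barber-style bipartite modularity for a joint partition of both node types.
import numpy as np

def bipartite_modularity(A, row_comm, col_comm):
    """A: (n_rows, n_cols) biadjacency matrix; *_comm: integer label arrays."""
    m = A.sum()                            # total edge weight
    k = A.sum(axis=1)                      # row-node degrees
    d = A.sum(axis=0)                      # column-node degrees
    P = np.outer(k, d) / m                 # expected edges under the null model
    same = row_comm[:, None] == col_comm[None, :]   # co-membership indicator
    return ((A - P) * same).sum() / m

# Toy example: two clean bipartite communities score well above zero.
A = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1]])
print(bipartite_modularity(A, np.array([0, 0, 1]), np.array([0, 0, 1, 1])))  # ~0.44
```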
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nam, Hyeong Soo; Kim, Chang-Soo; Yoo, Hongki
Purpose: Intravascular optical coherence tomography (IV-OCT) is a high-resolution imaging method used to visualize the microstructure of arterial walls in vivo. IV-OCT enables the clinician to clearly observe and accurately measure stent apposition and neointimal coverage of coronary stents, which are associated with side effects such as in-stent thrombosis. In this study, the authors present an algorithm for quantifying stent apposition and neointimal coverage by automatically detecting lumen contours and stent struts in IV-OCT images. Methods: The algorithm utilizes OCT intensity images and their first and second gradient images along the axial direction to detect lumen contours and stent strut candidates. These stent strut candidates are classified into true and false stent struts based on their features, using an artificial neural network with one hidden layer and ten nodes. After segmentation, either the protrusion distance (PD) or neointimal thickness (NT) for each strut is measured automatically. In randomly selected image sets covering a large variety of clinical scenarios, the results of the algorithm were compared to those of manual segmentation by IV-OCT readers. Results: Stent strut detection showed a 96.5% positive predictive value and a 92.9% true positive rate. In addition, case-by-case validation also showed comparable accuracy for most cases. High correlation coefficients (R > 0.99) were observed for PD and NT between the algorithmic and the manual results, showing little bias (0.20 and 0.46 μm, respectively) and a narrow range of limits of agreement (36 and 54 μm, respectively). In addition, the algorithm worked well in various clinical scenarios and even in cases with a low level of stent malapposition and neointimal coverage. Conclusions: The presented automatic algorithm enables robust and fast detection of lumen contours and stent struts and provides quantitative measurements of PD and NT. In addition, the algorithm was validated using various clinical cases to demonstrate its reliability. Therefore, this technique can be effectively utilized for clinical trials on stent-related side effects, including in-stent thrombosis and in-stent restenosis.
An open-source framework for stress-testing non-invasive foetal ECG extraction algorithms.
Andreotti, Fernando; Behar, Joachim; Zaunseder, Sebastian; Oster, Julien; Clifford, Gari D
2016-05-01
Over the past decades, many studies have been published on the extraction of non-invasive foetal electrocardiogram (NI-FECG) from abdominal recordings. Most of these contributions claim to obtain excellent results in detecting foetal QRS (FQRS) complexes in terms of location. A small subset of authors have investigated the extraction of morphological features from the NI-FECG. However, due to the shortage of available public databases, the large variety of performance measures employed and the lack of open-source reference algorithms, most contributions cannot be meaningfully assessed. This article attempts to address these issues by presenting a standardised methodology for stress testing NI-FECG algorithms, including absolute data, as well as extraction and evaluation routines. To that end, a large database of realistic artificial signals was created, totaling 145.8 h of multichannel data and over one million FQRS complexes. An important characteristic of this dataset is the inclusion of several non-stationary events (e.g. foetal movements, uterine contractions and heart rate fluctuations) that are critical for evaluating extraction routines. To demonstrate our testing methodology, three classes of NI-FECG extraction algorithms were evaluated: blind source separation (BSS), template subtraction (TS) and adaptive methods (AM). Experiments were conducted to benchmark the performance of eight NI-FECG extraction algorithms on the artificial database focusing on: FQRS detection and morphological analysis (foetal QT and T/QRS ratio). The overall median FQRS detection accuracies (i.e. considering all non-stationary events) for the best performing methods in each group were 99.9% for BSS, 97.9% for AM and 96.0% for TS. Both FQRS detections and morphological parameters were shown to heavily depend on the extraction techniques and signal-to-noise ratio. In particular, it is shown that their evaluation in the source domain, obtained after using a BSS technique, should be avoided. Data, extraction algorithms and evaluation routines were released as part of the fecgsyn toolbox on Physionet under a GNU GPL open-source license. This contribution provides a standard framework for benchmarking and regulatory testing of NI-FECG extraction algorithms.
Rodríguez-Canosa, Gonzalo; Giner, Jaime del Cerro; Barrientos, Antonio
2014-01-01
The detection and tracking of mobile objects (DATMO) is progressively gaining importance for security and surveillance applications. This article proposes a set of new algorithms and procedures for detecting and tracking mobile objects by robots that work collaboratively as part of a multirobot system. These surveillance algorithms are conceived to work with data provided by long distance range sensors and are intended for highly reliable object detection in wide outdoor environments. Contrary to most common approaches, in which detection and tracking are done by an integrated procedure, the approach proposed here relies on a modular structure, in which detection and tracking are carried out independently, and the latter might accept input data from different detection algorithms. Two movement detection algorithms have been developed for the detection of dynamic objects by using both static and/or mobile robots. The solution to the overall problem is based on the use of a Kalman filter to predict the next state of each tracked object. Additionally, new tracking algorithms capable of combining dynamic object lists coming from either one or various sources complete the solution. The complementary performance of the separated modular structure for detection and identification is evaluated and, finally, a selection of test examples is discussed. PMID:24526305
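The per-object prediction step described above is a textbook constant-velocity Kalman filter; a compact sketch is given below, with position-only measurements and illustrative noise covariances. A tracker would run predict() every cycle and update() whenever a detection module supplies a matched observation.

```python
# Constant-velocity Kalman filter sketch for one tracked object.
import numpy as np

class TrackKF:
    def __init__(self, x0, y0, dt=0.1):
        self.x = np.array([x0, y0, 0.0, 0.0])            # state: x, y, vx, vy
        self.P = np.eye(4) * 10.0                        # state covariance
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                           [0, 0, 1, 0], [0, 0, 0, 1]])  # motion model
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])  # observe position only
        self.Q = np.eye(4) * 0.01                        # process noise
        self.R = np.eye(2) * 0.5                         # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                                # predicted position

    def update(self, z):
        y = z - self.H @ self.x                          # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```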
Zhu, Haitao; Nie, Binbin; Liu, Hua; Guo, Hua; Demachi, Kazuyuki; Sekino, Masaki; Shan, Baoci
2016-05-01
Phase map cross-correlation detection and quantification may produce highlighted signal at superparamagnetic iron oxide nanoparticles, and distinguish them from other hypointensities. The method may quantify susceptibility change by performing least squares analysis between a theoretically generated magnetic field template and an experimentally scanned phase image. Because characteristic phase recognition requires the removal of phase wrap and phase background, additional steps of phase unwrapping and filtering may increase the chance of computing error and enlarge the inconsistency among algorithms. To solve this problem, a phase gradient cross-correlation and quantification method was developed that recognizes the characteristic phase gradient pattern instead of the phase image, because the phase gradient operation inherently includes unwrapping and filtering functions. However, few studies have mentioned the detectable limit of currently used phase gradient calculation algorithms. The limit may lead to an underestimation of large magnetic susceptibility change caused by high-concentrated iron accumulation. In this study, mathematical derivation points out the value of the maximum detectable phase gradient calculated by the differential chain algorithm in both the spatial and Fourier domains. To break through the limit, a modified quantification method is proposed that uses unwrapped forward differentiation for phase gradient generation. The method enlarges the detectable range of phase gradient measurement and avoids the underestimation of magnetic susceptibility. Simulation and phantom experiments were used to quantitatively compare the different methods. For in vivo application, MRI scans were performed on nude mice implanted with iron-labeled human cancer cells. The results validate the limit of detectable phase gradient and the consequent susceptibility underestimation. The results also demonstrate the advantage of unwrapped forward differentiation compared with differential chain algorithms for susceptibility quantification at high-concentrated iron accumulation. Copyright © 2015 Elsevier Inc. All rights reserved.
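The contrast at issue can be sketched numerically: a differential-chain-style (wrapped) forward difference can never report more than pi per pixel, whereas differencing after a global 2D unwrapping step can. The use of scikit-image's quality-guided unwrapper below is an assumption standing in for the paper's unwrapping procedure.

```python
# Wrapped vs. unwrapped forward differences of a 2D phase map.
import numpy as np
from skimage.restoration import unwrap_phase

def gradient_wrapped(phase):
    # Each difference is wrapped back into (-pi, pi]: the detectable limit.
    dy = np.angle(np.exp(1j * np.diff(phase, axis=0)))
    dx = np.angle(np.exp(1j * np.diff(phase, axis=1)))
    return dy, dx

def gradient_unwrapped(phase):
    # Globally unwrap first, then forward-difference; steep true gradients
    # resolved by the 2D unwrapper are no longer capped at pi per pixel.
    u = unwrap_phase(phase)
    return np.diff(u, axis=0), np.diff(u, axis=1)
```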
NASA Astrophysics Data System (ADS)
Gendron, Marlin Lee
During Mine Warfare (MIW) operations, MIW analysts perform change detection by visually comparing historical sidescan sonar imagery (SSI) collected by a sidescan sonar with recently collected SSI in an attempt to identify objects (which might be explosive mines) placed at sea since the last time the area was surveyed. This dissertation presents a data structure and three algorithms, developed by the author, that are part of an automated change detection and classification (ACDC) system. MIW analysts at the Naval Oceanographic Office are currently using ACDC to reduce the amount of time needed to perform change detection. The introductory chapter gives background information on change detection and ACDC, and describes how SSI is produced from raw sonar data. Chapter 2 presents the author's Geospatial Bitmap (GB) data structure, which is capable of storing information geographically and is utilized by the three algorithms. This chapter shows that a GB data structure used in a polygon-smoothing algorithm ran between 1.3x and 48.4x faster than a sparse matrix data structure. Chapter 3 describes the GB clustering algorithm, which is the author's repeatable, order-independent method for clustering. Results from tests performed in this chapter show that the time to cluster a set of points is not affected by the distribution or the order of the points. In Chapter 4, the author presents his real-time computer-aided detection (CAD) algorithm that automatically detects mine-like objects on the seafloor in SSI. The author ran his GB-based CAD algorithm on real SSI data, and results of these tests indicate that his real-time CAD algorithm performs comparably to or better than other non-real-time CAD algorithms. The author presents his computer-aided search (CAS) algorithm in Chapter 5. CAS helps MIW analysts locate mine-like features that are geospatially close to previously detected features. A comparison between the CAS and a great circle distance algorithm shows that the CAS performs geospatial searching 1.75x faster on large data sets. Finally, the concluding chapter of this dissertation gives important details on how the completed ACDC system will function, and discusses the author's future research to develop additional algorithms and data structures for ACDC.
Kim, Mary S.; Tsutsui, Kenta; Stern, Michael D.; Lakatta, Edward G.; Maltsev, Victor A.
2017-01-01
Local Ca2+ Releases (LCRs) are crucial events involved in cardiac pacemaker cell function. However, specific algorithms for automatic LCR detection and analysis have not been developed in live, spontaneously beating pacemaker cells. In the present study we measured LCRs using a high-speed 2D camera in spontaneously contracting sinoatrial (SA) node cells isolated from rabbit and guinea pig and developed a new algorithm capable of detecting and analyzing the LCRs spatially in two dimensions, and in time. Our algorithm tracks points along the midline of the contracting cell. It uses these points as a coordinate system for an affine transform, producing a transformed image series in which the cell does not contract. Action potential-induced Ca2+ transients and LCRs were thereafter isolated from recording noise by applying a series of spatial filters. The LCR birth and death events were detected by a differential (frame-to-frame) sensitivity algorithm applied to each pixel (cell location). An LCR is detected when its signal changes sufficiently quickly within a sufficiently large area. The LCR is considered to have died when its amplitude decays substantially, or when it merges into the rising whole-cell Ca2+ transient. Ultimately, our algorithm provides major LCR parameters such as period, signal mass, duration, and propagation path area. As the LCRs propagate within live cells, the algorithm identifies splitting and merging behaviors, indicating the importance of locally propagating Ca2+-induced Ca2+ release for the fate of LCRs and for generating a powerful ensemble Ca2+ signal. Thus, our new computer algorithms eliminate motion artifacts and detect 2D local spatiotemporal events from recording noise and global signals. While the algorithms were developed to detect LCRs in sinoatrial nodal cells, they have the potential to be used in other applications in biophysics and cell physiology, for example, to detect Ca2+ wavelets (abortive waves), sparks and embers in muscle cells and Ca2+ puffs and syntillas in neurons. PMID:28683095
Lamberti, A; Vanlanduit, S; De Pauw, B; Berghmans, F
2014-03-24
Fiber Bragg Gratings (FBGs) can be used as sensors for strain, temperature and pressure measurements. For this purpose, the ability to determine the Bragg peak wavelength with adequate wavelength resolution and accuracy is essential. However, conventional peak detection techniques, such as the maximum detection algorithm, can yield inaccurate and imprecise results, especially when the Signal to Noise Ratio (SNR) and the wavelength resolution are poor. Other techniques, such as the cross-correlation demodulation algorithm, are more precise and accurate but require considerably higher computational effort. To overcome these problems, we developed a novel fast phase correlation (FPC) peak detection algorithm, which computes the wavelength shift in the reflected spectrum of an FBG sensor. This paper analyzes the performance of the FPC algorithm for different values of the SNR and wavelength resolution. Using simulations and experiments, we compared the FPC with the maximum detection and cross-correlation algorithms. The FPC method demonstrated a detection precision and accuracy comparable with those of cross-correlation demodulation and considerably higher than those obtained with the maximum detection technique. Additionally, FPC was shown to be about 50 times faster than cross-correlation. It is therefore a promising tool for future implementation in real-time systems or in embedded hardware intended for FBG sensor interrogation.
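The principle behind phase-correlation peak tracking can be shown in a few lines: whiten the cross-power spectrum of a reference and a measured FBG spectrum, locate the correlation peak, and refine it with a parabolic fit for sub-sample resolution. This illustrates the idea only; the authors' FPC algorithm and its optimizations are not reproduced.

```python
# Phase-correlation estimate of the spectral shift between two FBG spectra.
import numpy as np

def phase_correlation_shift(ref, meas):
    """Spectra sampled on the same wavelength grid; returns shift in samples."""
    cross = np.fft.fft(meas) * np.conj(np.fft.fft(ref))
    cross /= np.abs(cross) + 1e-12          # keep phase only (whitening)
    corr = np.real(np.fft.ifft(cross))
    n, k = len(corr), int(np.argmax(corr))
    # Three-point parabolic interpolation for sub-sample precision.
    y0, y1, y2 = corr[k - 1], corr[k], corr[(k + 1) % n]
    shift = k + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2 + 1e-12)
    return shift - n if shift > n / 2 else shift   # signed shift
```

Multiplying the returned sample shift by the wavelength grid spacing gives the Bragg wavelength shift.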
Wearable physiological sensors and real-time algorithms for detection of acute mountain sickness.
Muza, Stephen R
2018-03-01
This is a minireview of potential wearable physiological sensors and algorithms (processes and equations) for the detection of acute mountain sickness (AMS). Given the emerging status of this effort, the focus of the review is on the current clinical assessment of AMS, known risk factors (environmental, demographic, and physiological), and current understanding of AMS pathophysiology. Studies that have examined a range of physiological variables to develop AMS prediction and/or detection algorithms are reviewed to provide insight and potential technological roadmaps for future development of real-time physiological sensors and algorithms to detect AMS. Given the lack of signs and the nonspecific symptoms associated with AMS, development of wearable physiological sensors and embedded algorithms to predict in the near term or detect established AMS will be challenging. Prior work using SpO2, HR, or HRV has not provided the sensitivity and specificity for useful application to predict or detect AMS. Rather than using spot checks as most prior studies have, wearable systems that continuously measure SpO2 and HR are commercially available. Employing other statistical modeling approaches, such as general linear and logistic mixed models or time series analysis, to these continuously measured variables is the most promising approach for developing algorithms that are sensitive and specific for physiological prediction or detection of AMS.
Real time algorithms for sharp wave ripple detection.
Sethi, Ankit; Kemere, Caleb
2014-01-01
Neural activity during sharp wave ripples (SWR), short bursts of co-ordinated oscillatory activity in the CA1 region of the rodent hippocampus, is implicated in a variety of memory functions from consolidation to recall. Detection of these events in an algorithmic framework has thus far relied on simple thresholding techniques with heuristically derived parameters. This study is an investigation into testing and improving the current methods for detection of SWR events in neural recordings. We propose and profile methods to reduce latency in ripple detection. The proposed algorithms are tested on simulated ripple data. The findings show that simple real-time algorithms can improve upon existing power thresholding methods and can detect ripple activity with latencies in the range of 10-20 ms.
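As a baseline of the kind this study profiles and improves upon, a power-threshold ripple detector looks like the following; the band edges, smoothing window, and 3 SD threshold follow common practice and are assumptions here. Note that filtfilt is non-causal, so a genuinely real-time variant would substitute causal filtering (e.g., lfilter) at the cost of group delay.

```python
# Baseline power-thresholding SWR detector (offline sketch).
import numpy as np
from scipy.signal import butter, filtfilt

def detect_swr(lfp, fs=1500, band=(150, 250), n_sd=3.0, smooth_ms=8):
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    ripple = filtfilt(b, a, lfp)            # isolate the ripple band
    power = ripple ** 2
    win = max(1, int(smooth_ms * fs / 1000))
    power = np.convolve(power, np.ones(win) / win, mode="same")
    thresh = power.mean() + n_sd * power.std()
    above = power > thresh
    # Rising edges mark detection times; detection latency is the gap
    # between the true ripple onset and this crossing.
    return np.where(np.diff(above.astype(int)) == 1)[0]
```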
Glint-induced false alarm reduction in signature adaptive target detection
NASA Astrophysics Data System (ADS)
Crosby, Frank J.
2002-07-01
The signal adaptive target detection algorithm developed by Crosby and Riley uses target geometry to discern anomalies in local backgrounds. Detection is not restricted based on specific target signatures. The robustness of the algorithm is limited by an increased false alarm potential. The base algorithm is extended to eliminate one common source of false alarms in a littoral environment. This common source is glint reflected on the surface of water. The spectral and spatial transience of glint prevent straightforward characterization and complicate exclusion. However, the statistical basis of the detection algorithm and its inherent computations allow for glint discernment and the removal of its influence.
Elgendi, Mohamed; Norton, Ian; Brearley, Matt; Abbott, Derek; Schuurmans, Dale
2013-01-01
Photoplethysmogram (PPG) monitoring is not only essential for critically ill patients in hospitals or at home, but also for those undergoing exercise testing. However, processing PPG signals measured after exercise is challenging, especially if the environment is hot and humid. In this paper, we propose a novel algorithm that can detect systolic peaks under challenging conditions, as in the case of emergency responders in tropical conditions. Accurate systolic-peak detection is an important first step for the analysis of heart rate variability. Algorithms based on local maxima-minima, first-derivative, and slope sum are evaluated, and a new algorithm is introduced to improve the detection rate. With 40 healthy subjects, the new algorithm demonstrates the highest overall detection accuracy (99.84% sensitivity, 99.89% positive predictivity). Existing algorithms, such as Billauer's, Li's and Zong's, have comparable although lower accuracy. However, the proposed algorithm presents an advantage for real-time applications by avoiding human intervention in threshold determination. For best performance, we show that a combination of two event-related moving averages with an offset threshold has an advantage in detecting systolic peaks, even in heat-stressed PPG signals.
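The winning two-moving-averages scheme can be sketched compactly: a short average tracks candidate systolic peaks, a longer average plus an offset forms a dynamic threshold, and each sufficiently long block where the short average dominates yields one peak. Window lengths and the offset rule below are illustrative, following the general description above rather than the paper's exact constants.

```python
# Two-moving-averages systolic peak detector sketch for PPG.
import numpy as np
from scipy.signal import butter, filtfilt

def detect_systolic_peaks(ppg, fs=200):
    b, a = butter(2, [0.5 / (fs / 2), 8 / (fs / 2)], btype="band")
    x = filtfilt(b, a, ppg)
    y = np.clip(x, 0, None) ** 2                  # emphasize systolic upslopes
    w_peak, w_beat = int(0.111 * fs), int(0.667 * fs)
    ma_peak = np.convolve(y, np.ones(w_peak) / w_peak, mode="same")
    ma_beat = np.convolve(y, np.ones(w_beat) / w_beat, mode="same")
    blocks = ma_peak > ma_beat + 0.02 * y.mean()  # offset replaces manual tuning
    peaks, i = [], 0
    while i < len(blocks):
        if blocks[i]:
            j = i
            while j < len(blocks) and blocks[j]:
                j += 1
            if j - i >= w_peak:                   # reject too-short blocks
                peaks.append(i + int(np.argmax(y[i:j])))
            i = j
        else:
            i += 1
    return peaks
```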