Sample records for "algorithm successfully detected"

  1. Online Phase Detection Using Wearable Sensors for Walking with a Robotic Prosthesis

    PubMed Central

    Goršič, Maja; Kamnik, Roman; Ambrožič, Luka; Vitiello, Nicola; Lefeber, Dirk; Pasquini, Guido; Munih, Marko

    2014-01-01

    This paper presents a gait phase detection algorithm for providing feedback in walking with a robotic prosthesis. The algorithm utilizes the output signals of a wearable wireless sensory system incorporating sensorized shoe insoles and inertial measurement units attached to body segments. The principle of detecting transitions between gait phases is based on heuristic threshold rules, dividing a steady-state walking stride into four phases. For the evaluation of the algorithm, experiments with three amputees, walking with the robotic prosthesis and wearable sensors, were performed. Results show a high rate of successful detection for all four phases (the average success rate across all subjects >90%). A comparison of the proposed method to an off-line trained algorithm using hidden Markov models reveals similar performance, achieved without the need for training-dataset acquisition and prior model training. PMID:24521944
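
    The heuristic threshold scheme described above lends itself to a compact finite-state implementation. The sketch below is illustrative only: the four phase names, the signal set (insole pressures plus a shank gyroscope), and all threshold values are assumptions, not parameters from the paper.

    ```python
    # Hypothetical sketch of a threshold-rule gait phase detector.
    # Phase names, inputs, and thresholds are illustrative.

    HEEL_STRIKE, FLAT_FOOT, HEEL_OFF, SWING = range(4)

    def detect_phase(state, heel_pressure, toe_pressure, shank_gyro,
                     p_on=20.0, p_off=5.0, w_swing=1.0):
        """Advance a four-phase state machine from insole pressures (N)
        and shank angular velocity (rad/s)."""
        if state == SWING and heel_pressure > p_on:
            return HEEL_STRIKE              # initial contact
        if state == HEEL_STRIKE and toe_pressure > p_on:
            return FLAT_FOOT                # full weight acceptance
        if state == FLAT_FOOT and heel_pressure < p_off:
            return HEEL_OFF                 # heel rise
        if state == HEEL_OFF and toe_pressure < p_off and abs(shank_gyro) > w_swing:
            return SWING                    # toe-off into swing
        return state                        # no transition fired
    ```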

  2. Meal Detection in Patients With Type 1 Diabetes: A New Module for the Multivariable Adaptive Artificial Pancreas Control System.

    PubMed

    Turksoy, Kamuran; Samadi, Sediqeh; Feng, Jianyuan; Littlejohn, Elizabeth; Quinn, Laurie; Cinar, Ali

    2016-01-01

    A novel meal-detection algorithm is developed based on continuous glucose measurements. Bergman's minimal model is modified and used in an unscented Kalman filter for state estimation. The estimated rate of appearance of glucose is used for meal detection. Data from nine subjects are used to assess the performance of the algorithm. The results indicate that the proposed algorithm works successfully with high accuracy. The average change in glucose levels between the meals and the detection points is 16 (±9.42) mg/dl for 61 successfully detected meals and snacks. The algorithm is developed as a new module of an integrated multivariable adaptive artificial pancreas control system. Meal detection with the proposed method is used to administer insulin boluses and prevent most postprandial hyperglycemia without any manual meal announcements. A novel meal bolus calculation method is proposed and tested with the UVA/Padova simulator. The results indicate a significant reduction in hyperglycemia.
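
    The paper's detector thresholds a Kalman-filter estimate of the glucose rate of appearance. As a deliberately simplified stand-in, the sketch below replaces the unscented Kalman filter and the modified Bergman model with a finite-difference slope of the CGM trace; the threshold and persistence values are assumptions.

    ```python
    import numpy as np

    def detect_meals(glucose, dt=5.0, ra_thresh=1.5, hold=3):
        """Flag candidate meal onsets when the estimated rate of glucose
        appearance stays above ra_thresh (mg/dl/min) for `hold` samples.
        A finite-difference slope stands in for the paper's UKF estimate;
        dt is the CGM sampling interval in minutes."""
        slope = np.gradient(np.asarray(glucose, float), dt)  # mg/dl per min
        onsets, run = [], 0
        for i, flag in enumerate(slope > ra_thresh):
            run = run + 1 if flag else 0
            if run == hold:
                onsets.append(i - hold + 1)   # index where the rise began
        return onsets
    ```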

  3. The effect of orthology and coregulation on detecting regulatory motifs.

    PubMed

    Storms, Valerie; Claeys, Marleen; Sanchez, Aminael; De Moor, Bart; Verstuyf, Annemieke; Marchal, Kathleen

    2010-02-03

    Computational de novo discovery of transcription factor binding sites is still a challenging problem. The growing number of sequenced genomes allows integrating orthology evidence with coregulation information when searching for motifs. Moreover, the more advanced motif detection algorithms explicitly model the phylogenetic relatedness between the orthologous input sequences and thus should be well adapted towards using orthologous information. In this study, we evaluated the conditions under which complementing coregulation with orthologous information improves motif detection for the class of probabilistic motif detection algorithms with an explicit evolutionary model. We designed datasets (real and synthetic) covering different degrees of coregulation and orthologous information to test how well Phylogibbs and Phylogenetic sampler, as representatives of the motif detection algorithms with an evolutionary model, performed as compared to MEME, a more classical motif detection algorithm that treats orthologs independently. Under certain conditions, detecting motifs in the combined coregulation-orthology space is indeed more efficient than using each space separately, but this is not always the case. Moreover, the difference in success rate between the advanced algorithms and MEME is still marginal. The success rate of motif detection depends on the complex interplay between the added information and the specificities of the applied algorithms. Insights into this relation provide information useful to both developers and users. All benchmark datasets are available at http://homes.esat.kuleuven.be/~kmarchal/Supplementary_Storms_Valerie_PlosONE.

  4. The Effect of Orthology and Coregulation on Detecting Regulatory Motifs

    PubMed Central

    Storms, Valerie; Claeys, Marleen; Sanchez, Aminael; De Moor, Bart; Verstuyf, Annemieke; Marchal, Kathleen

    2010-01-01

    Background: Computational de novo discovery of transcription factor binding sites is still a challenging problem. The growing number of sequenced genomes allows integrating orthology evidence with coregulation information when searching for motifs. Moreover, the more advanced motif detection algorithms explicitly model the phylogenetic relatedness between the orthologous input sequences and thus should be well adapted towards using orthologous information. In this study, we evaluated the conditions under which complementing coregulation with orthologous information improves motif detection for the class of probabilistic motif detection algorithms with an explicit evolutionary model. Methodology: We designed datasets (real and synthetic) covering different degrees of coregulation and orthologous information to test how well Phylogibbs and Phylogenetic sampler, as representatives of the motif detection algorithms with an evolutionary model, performed as compared to MEME, a more classical motif detection algorithm that treats orthologs independently. Results and Conclusions: Under certain conditions, detecting motifs in the combined coregulation-orthology space is indeed more efficient than using each space separately, but this is not always the case. Moreover, the difference in success rate between the advanced algorithms and MEME is still marginal. The success rate of motif detection depends on the complex interplay between the added information and the specificities of the applied algorithms. Insights into this relation provide information useful to both developers and users. All benchmark datasets are available at http://homes.esat.kuleuven.be/~kmarchal/Supplementary_Storms_Valerie_PlosONE. PMID:20140085

  5. Analysis of modal behavior at frequency cross-over

    NASA Astrophysics Data System (ADS)

    Costa, Robert N., Jr.

    1994-11-01

    The existence of the mode crossing condition is detected and analyzed in the Active Control of Space Structures Model 4 (ACOSS4). The condition is studied for its contribution to the inability of previous algorithms to successfully optimize the structure and converge to a feasible solution. A new algorithm is developed to detect and correct for mode crossings. The existence of the mode crossing condition is verified in ACOSS4 and found not to have appreciably affected the solution. The structure is then successfully optimized using new analytic methods based on modal expansion. An unrelated error in the optimization algorithm previously used is verified and corrected, thereby equipping the optimization algorithm with a second analytic method for eigenvector differentiation based on Nelson's Method. The second structure is the Control of Flexible Structures (COFS). The COFS structure is successfully reproduced and an initial eigenanalysis completed.

  6. Performance analysis of robust road sign identification

    NASA Astrophysics Data System (ADS)

    Ali, Nursabillilah M.; Mustafah, Y. M.; Rashid, N. K. A. M.

    2013-12-01

    This study describes a performance analysis of a robust road sign identification system that incorporates two stages with different algorithms: HSV color filtering in the detection stage and PCA in the recognition stage. The proposed algorithms are able to detect the three standard sign colors, namely red, yellow, and blue. The hypothesis of the study is that road signs can be detected and identified even in the presence of occlusions and rotational changes. PCA is a feature extraction technique that reduces dimensionality; sign images can be readily recognized and identified by the PCA method, as it has been used in many application areas. The experimental results show that the HSV stage is robust in road sign detection, with minimum success rates of 88% and 77% for non-occluded and partially occluded images, respectively. Successful recognition rates using PCA are in the range of 94-98%, with all classes recognized successfully at occlusion levels between 5% and 10%.
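
    A minimal sketch of the detection stage's HSV color filtering, assuming OpenCV. The hue/saturation/value bounds are illustrative; red needs two ranges because its hue wraps around zero.

    ```python
    import cv2
    import numpy as np

    def red_sign_mask(bgr_image):
        """Binary mask of candidate red-sign pixels. Red hue wraps around
        zero in HSV, so two ranges are combined; bounds are illustrative."""
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        lower = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255))
        upper = cv2.inRange(hsv, (170, 100, 80), (180, 255, 255))
        return cv2.bitwise_or(lower, upper)

    frame = np.zeros((120, 160, 3), np.uint8)
    frame[40:80, 60:100] = (0, 0, 200)            # a red square in BGR
    print(int(red_sign_mask(frame).sum() / 255))  # counts the masked pixels
    ```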

  7. Vision-based algorithms for near-host object detection and multilane sensing

    NASA Astrophysics Data System (ADS)

    Kenue, Surender K.

    1995-01-01

    Vision-based sensing can be used for lane sensing, adaptive cruise control, collision warning, and driver performance monitoring functions of intelligent vehicles. Current computer vision algorithms are not robust for handling multiple vehicles in highway scenarios. Several new algorithms are proposed for multi-lane sensing, near-host object detection, vehicle cut-in situations, and specifying regions of interest for object tracking. These algorithms were tested successfully on more than 6000 images taken from real-highway scenes under different daytime lighting conditions.

  8. Clustering analysis of moving target signatures

    NASA Astrophysics Data System (ADS)

    Martone, Anthony; Ranney, Kenneth; Innocenti, Roberto

    2010-04-01

    Previously, we developed a moving target indication (MTI) processing approach to detect and track slow-moving targets inside buildings, which successfully detected moving targets (MTs) from data collected by a low-frequency, ultra-wideband radar. Our MTI algorithms include change detection, automatic target detection (ATD), clustering, and tracking. The MTI algorithms can be implemented in a real-time or near-real-time system; however, a person-in-the-loop is needed to select input parameters for the clustering algorithm. Specifically, the number of clusters to input into the cluster algorithm is unknown and requires manual selection. A critical need exists to automate all aspects of the MTI processing formulation. In this paper, we investigate two techniques that automatically determine the number of clusters: the adaptive knee-point (KP) algorithm and the recursive pixel finding (RPF) algorithm. The KP algorithm is based on a well-known heuristic approach for determining the number of clusters. The RPF algorithm is analogous to the image processing, pixel labeling procedure. Both algorithms are used to analyze the false alarm and detection rates of three operational scenarios of personnel walking inside wood and cinderblock buildings.
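
    The knee-point idea for automating the cluster count can be sketched generically: sweep k, record the clustering cost, and pick the k whose point lies farthest from the chord joining the cost curve's endpoints. This is the standard heuristic in the spirit of the KP algorithm, not the authors' exact formulation.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def knee_point_k(X, k_max=10):
        """Pick a cluster count at the 'knee' of the inertia curve: the k
        whose point lies farthest from the chord joining the endpoints."""
        ks = np.arange(1, k_max + 1)
        inertia = np.array([KMeans(n_clusters=k, n_init=10).fit(X).inertia_
                            for k in ks])
        p1 = np.array([ks[0], inertia[0]])
        p2 = np.array([ks[-1], inertia[-1]])
        chord = (p2 - p1) / np.linalg.norm(p2 - p1)
        pts = np.column_stack([ks, inertia]) - p1
        # perpendicular distance of each (k, cost) point from the chord
        dist = np.abs(pts[:, 0] * chord[1] - pts[:, 1] * chord[0])
        return int(ks[np.argmax(dist)])
    ```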

  9. Fetal heart rate deceleration detection using a discrete cosine transform implementation of singular spectrum analysis.

    PubMed

    Warrick, P A; Precup, D; Hamilton, E F; Kearney, R E

    2007-01-01

    To develop a singular-spectrum analysis (SSA) based change-point detection algorithm applicable to fetal heart rate (FHR) monitoring to improve the detection of deceleration events. We present a method for decomposing a signal into near-orthogonal components via the discrete cosine transform (DCT) and apply this in a novel online manner to change-point detection based on SSA. The SSA technique forms models of the underlying signal that can be compared over time; models that are sufficiently different indicate signal change points. To adapt the algorithm to deceleration detection where many successive similar change events can occur, we modify the standard SSA algorithm to hold the reference model constant under such conditions, an approach that we term "base-hold SSA". The algorithm is applied to a database of 15 FHR tracings that have been preprocessed to locate candidate decelerations and is compared to the markings of an expert obstetrician. Of the 528 true and 1285 false decelerations presented to the algorithm, the base-hold approach improved on standard SSA, reducing the number of missed decelerations from 64 to 49 (21.9%) while maintaining the same reduction in false-positives (278). The standard SSA assumption that changes are infrequent does not apply to FHR analysis where decelerations can occur successively and in close proximity; our base-hold SSA modification improves detection of these types of event series.
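
    A rough sketch of the base-hold idea, assuming SciPy: model each window by its leading DCT coefficients, flag a change when the current model departs from the reference, and refresh the reference only while no change is flagged. Window length, coefficient count, and threshold are assumptions.

    ```python
    import numpy as np
    from scipy.fft import dct

    def basehold_changepoints(x, win=64, n_coef=8, thresh=2.0):
        """Compare each window's leading DCT coefficients against a held
        reference model; refresh the reference only while no change is
        flagged ('base-hold'). Window size and threshold are illustrative."""
        x = np.asarray(x, float)
        ref = dct(x[:win], norm="ortho")[:n_coef]
        changes = []
        for start in range(win, len(x) - win + 1, win):
            cur = dct(x[start:start + win], norm="ortho")[:n_coef]
            if np.linalg.norm(cur - ref) > thresh * np.linalg.norm(ref):
                changes.append(start)        # change event: hold the reference
            else:
                ref = cur                    # quiescent: update the reference
        return changes
    ```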

  10. Detecting communities in large networks

    NASA Astrophysics Data System (ADS)

    Capocci, A.; Servedio, V. D. P.; Caldarelli, G.; Colaiori, F.

    2005-07-01

    We develop an algorithm to detect community structure in complex networks. The algorithm is based on spectral methods and takes into account weights and link orientation. Since the method efficiently detects clustered nodes in large networks even when these are not sharply partitioned, it turns out to be especially suitable for the analysis of social and information networks. We test the algorithm on a large-scale dataset from a psychological experiment on word association. In this case, it proves to be successful both in clustering words and in uncovering mental association patterns.
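
    A generic spectral sketch of this style of method: symmetrize the weighted adjacency matrix (folding in link orientation), take its leading eigenvectors, and cluster the nodes in that reduced space. The normalization and the k-means step are assumptions, not the authors' exact procedure.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def spectral_communities(W, n_comm=2):
        """Cluster nodes using the leading eigenvectors of the symmetrized
        weighted adjacency matrix. A generic spectral sketch, not the
        authors' exact normalization."""
        W = np.asarray(W, float)
        S = (W + W.T) / 2.0                        # fold in link orientation
        vals, vecs = np.linalg.eigh(S)
        top = vecs[:, np.argsort(vals)[-n_comm:]]  # leading eigenvectors
        return KMeans(n_clusters=n_comm, n_init=10).fit_predict(top)
    ```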

  11. Redundancy management of multiple KT-70 inertial measurement units applicable to the space shuttle

    NASA Technical Reports Server (NTRS)

    Cook, L. J.

    1975-01-01

    Results of an investigation of velocity failure detection and isolation (FDI) for three-IMU and two-IMU (inertial measurement unit) configurations are presented. The failure detection and isolation algorithm performed highly successfully, and most types of velocity errors were detected and isolated. The algorithm also included attitude FDI, but this was not evaluated because of lack of time and the low resolution of the gimbal angle synchro outputs. The shuttle KT-70 IMUs will have dual-speed resolvers and high-resolution gimbal angle readouts. These tests demonstrated that a single computer utilizing a serial data bus can successfully control a redundant 3-IMU system and perform FDI.
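
    For a three-IMU set, velocity FDI reduces to pairwise comparisons: a unit that disagrees with both of its peers while they agree with each other is isolated. The vector-norm test and tolerance below are illustrative assumptions, not the study's logic verbatim.

    ```python
    import numpy as np

    def detect_velocity_failure(v1, v2, v3, tol=0.5):
        """Pairwise-difference FDI for a 3-IMU set: isolate the unit that
        disagrees with both peers while they agree with each other.
        Tolerance is an illustrative value (m/s)."""
        d12 = np.linalg.norm(np.subtract(v1, v2))
        d13 = np.linalg.norm(np.subtract(v1, v3))
        d23 = np.linalg.norm(np.subtract(v2, v3))
        if d12 > tol and d13 > tol and d23 <= tol:
            return 1                     # IMU 1 isolated
        if d12 > tol and d23 > tol and d13 <= tol:
            return 2                     # IMU 2 isolated
        if d13 > tol and d23 > tol and d12 <= tol:
            return 3                     # IMU 3 isolated
        return None                      # no failure detected
    ```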

  12. Using laser altimetry-based segmentation to refine automated tree identification in managed forests of the Black Hills, South Dakota

    Treesearch

    Eric Rowell; Carl Selelstad; Lee Vierling; Lloyd Queen; Wayne Sheppard

    2006-01-01

    The success of a local maximum (LM) tree detection algorithm for detecting individual trees from lidar data depends on stand conditions that are often highly variable. A laser height variance and percent canopy cover (PCC) classification is used to segment the landscape by stand condition prior to stem detection. We test the performance of the LM algorithm using canopy...

  13. Developing operation algorithms for vision subsystems in autonomous mobile robots

    NASA Astrophysics Data System (ADS)

    Shikhman, M. V.; Shidlovskiy, S. V.

    2018-05-01

    The paper analyzes algorithms for selecting keypoints in an image for the subsequent automatic detection of people and obstacles. The algorithm is based on the histogram of oriented gradients and the support vector machine method. The combination of these methods allows successful detection of dynamic and static objects. The algorithm can be applied in various autonomous mobile robots.
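
    OpenCV bundles exactly this combination: a HOG descriptor scored by a pretrained linear SVM people detector. The sketch below uses that stock detector as a representative of the HOG-plus-SVM approach, not the authors' own trained model, and the random frame is a stand-in for a robot camera image.

    ```python
    import cv2
    import numpy as np

    # HOG features scored by OpenCV's pretrained linear SVM people detector.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    frame = (np.random.rand(480, 640) * 255).astype(np.uint8)  # stand-in frame
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h), score in zip(rects, weights):
        print(f"person candidate at ({x}, {y}), size {w}x{h}, score {float(score):.2f}")
    ```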

  14. Multi-Complementary Model for Long-Term Tracking

    PubMed Central

    Zhang, Deng; Zhang, Junchang; Xia, Chenyang

    2018-01-01

    In recent years, video target tracking algorithms have been widely used. However, many tracking algorithms do not achieve satisfactory performance, especially when dealing with problems such as object occlusions, background clutter, motion blur, low-illumination color images, and sudden illumination changes in real scenes. In this paper, we incorporate an object model based on contour information into a Staple tracker that combines the correlation filter model and color model to greatly improve tracking robustness. Since each model is responsible for tracking specific features, the three complementary models combine for more robust tracking. In addition, we propose an efficient object detection model with contour and color histogram features, which has good detection performance and better detection efficiency compared to traditional target detection algorithms. Finally, we optimize the traditional scale calculation, which greatly improves the tracking execution speed. We evaluate our tracker on the Object Tracking Benchmark 2013 (OTB-13) and Object Tracking Benchmark 2015 (OTB-15) datasets. On the OTB-13 benchmark, our algorithm improves on the classic LCT (Long-term Correlation Tracking) algorithm by 4.8%, 9.6%, and 10.9% on the success plots of OPE, TRE, and SRE, respectively. On the OTB-15 benchmark, compared with the LCT algorithm, our algorithm achieves improvements of 10.4%, 12.5%, and 16.1% on the success plots of OPE, TRE, and SRE, respectively. At the same time, it should be emphasized that, due to the high computational efficiency of the color model and the object detection model using efficient data structures, and the speed advantage of the correlation filters, our tracking algorithm still achieves good tracking speed. PMID:29425170

  15. A wavelet transform algorithm for peak detection and application to powder x-ray diffraction data.

    PubMed

    Gregoire, John M; Dale, Darren; van Dover, R Bruce

    2011-01-01

    Peak detection is ubiquitous in the analysis of spectral data. While many noise-filtering algorithms and peak identification algorithms have been developed, recent work [P. Du, W. Kibbe, and S. Lin, Bioinformatics 22, 2059 (2006); A. Wee, D. Grayden, Y. Zhu, K. Petkovic-Duran, and D. Smith, Electrophoresis 29, 4215 (2008)] has demonstrated that both of these tasks are efficiently performed through analysis of the wavelet transform of the data. In this paper, we present a wavelet-based peak detection algorithm with user-defined parameters that can be readily applied to any spectral data. Particular attention is given to the algorithm's resolution of overlapping peaks. The algorithm is implemented for the analysis of powder diffraction data, and successful detection of Bragg peaks is demonstrated for both low signal-to-noise data from theta-theta diffraction of nanoparticles and combinatorial x-ray diffraction data from a composition spread thin film. These datasets have different types of background signals, which are effectively removed by the wavelet-based method, and the results demonstrate that the algorithm provides a robust method for automated peak detection.
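
    SciPy ships a wavelet-based peak finder in the same spirit: find_peaks_cwt correlates the data with Ricker wavelets over a range of widths and keeps ridge lines that persist across scales. The synthetic diffraction pattern and the width range below are illustrative, not the authors' settings.

    ```python
    import numpy as np
    from scipy import signal

    # Synthetic powder pattern: two Gaussian "Bragg peaks" plus noise.
    two_theta = np.linspace(10, 80, 3500)
    pattern = (np.exp(-0.5 * ((two_theta - 28.4) / 0.08) ** 2)
               + 0.6 * np.exp(-0.5 * ((two_theta - 47.3) / 0.10) ** 2)
               + 0.05 * np.random.randn(two_theta.size))

    # Ridge lines persisting across wavelet scales are reported as peaks.
    peak_idx = signal.find_peaks_cwt(pattern, widths=np.arange(2, 20))
    print(two_theta[peak_idx])
    ```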

  16. Removal of impulse noise clusters from color images with local order statistics

    NASA Astrophysics Data System (ADS)

    Ruchay, Alexey; Kober, Vitaly

    2017-09-01

    This paper proposes a novel algorithm for restoring images corrupted with clusters of impulse noise. The noise clusters often occur when the probability of impulse noise is very high. The proposed noise removal algorithm consists of detection of bulky impulse noise in three color channels with local order statistics followed by removal of the detected clusters by means of vector median filtering. With the help of computer simulation we show that the proposed algorithm is able to effectively remove clustered impulse noise. The performance of the proposed algorithm is compared in terms of image restoration metrics with that of common successful algorithms.
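
    The two stages can be sketched in NumPy: a simple order-statistic test flags pixels far from their local per-channel medians, and flagged pixels are replaced by the vector median of their neighborhood (the pixel minimizing the summed distance to all others). The 3x3 window and threshold are assumptions.

    ```python
    import numpy as np

    def vector_median(patch):
        """Pixel of an (N, 3) color patch minimizing summed L1 distance
        to all other pixels in the patch: the vector median."""
        dists = np.abs(patch[:, None, :] - patch[None, :, :]).sum(axis=(1, 2))
        return patch[np.argmin(dists)]

    def remove_impulse_clusters(img, thresh=40):
        """Flag pixels far from their local per-channel medians (a crude
        order-statistic detector) and replace them with the 3x3 vector
        median. img is an (H, W, 3) uint8 array; thresh is illustrative."""
        out = img.copy()
        H, W, _ = img.shape
        for y in range(1, H - 1):
            for x in range(1, W - 1):
                patch = img[y-1:y+2, x-1:x+2].reshape(-1, 3).astype(int)
                med = np.median(patch, axis=0)
                if np.abs(img[y, x].astype(int) - med).max() > thresh:
                    out[y, x] = vector_median(patch)
        return out
    ```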

  17. Using medication list--problem list mismatches as markers of potential error.

    PubMed Central

    Carpenter, James D.; Gorman, Paul N.

    2002-01-01

    The goal of this project was to specify and develop an algorithm that will check for drug and problem list mismatches in an electronic medical record (EMR). The algorithm is based on the premise that a patient's problem list and medication list should agree, and a mismatch may indicate medication error. Successful development of this algorithm could mean detection of some errors, such as medication orders entered into a wrong patient record, or drug therapy omissions, that are not otherwise detected via automated means. Additionally, mismatches may identify opportunities to improve problem list integrity. To assess the concept's feasibility, this study compared medications listed in a pharmacy information system with findings in an online nursing adult admission assessment, serving as a proxy for the problem list. Where drug and problem list mismatches were discovered, examination of the patient record confirmed the mismatch, and identified any potential causes. Evaluation of the algorithm in diabetes treatment indicates that it successfully detects both potential medication error and opportunities to improve problem list completeness. This algorithm, once fully developed and deployed, could prove a valuable way to improve the patient problem list, and could decrease the risk of medication error. PMID:12463796
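
    The premise reduces to set logic over coded lists. The sketch below assumes a hand-built mapping from problems to expected drug classes; a real deployment would derive these links from a drug-indication knowledge base rather than a hard-coded table.

    ```python
    # Hypothetical problem -> expected-drug table; a real system would
    # derive these links from a drug-indication knowledge base.
    EXPECTED_DRUGS = {
        "diabetes mellitus": {"insulin", "metformin", "glipizide"},
        "hypertension": {"lisinopril", "amlodipine", "hydrochlorothiazide"},
    }

    def find_mismatches(problem_list, medication_list):
        """Return alerts for problems with no matching drug on the
        medication list; such mismatches may signal a wrong-record order,
        a therapy omission, or an incomplete problem list."""
        meds = {m.lower() for m in medication_list}
        alerts = []
        for problem in problem_list:
            expected = EXPECTED_DRUGS.get(problem.lower(), set())
            if expected and not (expected & meds):
                alerts.append(f"no drug on the list matches '{problem}'")
        return alerts

    print(find_mismatches(["Diabetes mellitus"], ["Lisinopril"]))
    ```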

  18. A new real-time tsunami detection algorithm

    NASA Astrophysics Data System (ADS)

    Chierici, F.; Embriaco, D.; Pignagnoli, L.

    2016-12-01

    Real-time tsunami detection algorithms play a key role in any Tsunami Early Warning System. We have developed a new algorithm for tsunami detection based on the real-time tide removal and real-time band-pass filtering of sea-bed pressure recordings. The algorithm greatly increases the tsunami detection probability, shortens the detection delay and enhances detection reliability, at low computational cost. The algorithm is designed to be used also in autonomous early warning systems with a set of input parameters and procedures which can be reconfigured in real time. We have also developed a methodology based on Monte Carlo simulations to test the tsunami detection algorithms. The algorithm performance is estimated by defining and evaluating statistical parameters, namely the detection probability, the detection delay, which are functions of the tsunami amplitude and wavelength, and the occurring rate of false alarms. Pressure data sets acquired by Bottom Pressure Recorders in different locations and environmental conditions have been used in order to consider real working scenarios in the test. We also present an application of the algorithm to the tsunami event which occurred at Haida Gwaii on October 28th, 2012 using data recorded by the Bullseye underwater node of Ocean Networks Canada. The algorithm successfully ran for test purpose in year-long missions onboard the GEOSTAR stand-alone multidisciplinary abyssal observatory, deployed in the Gulf of Cadiz during the EC project NEAREST and on NEMO-SN1 cabled observatory deployed in the Western Ionian Sea, operational node of the European research infrastructure EMSO.
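
    The filtering core of such a detector (detrend, then band-pass to tsunami periods) can be sketched with SciPy. The filter order, the 2-30 minute passband, and the crude mean removal standing in for real-time tide removal are all assumptions.

    ```python
    import numpy as np
    from scipy.signal import butter, sosfilt

    def tsunami_band(pressure, fs):
        """Detrend bottom-pressure data and band-pass it to tsunami
        periods (roughly 2-30 min here; cutoffs are illustrative). A
        causal IIR filter keeps the scheme usable sample-by-sample in
        real time; a threshold on the output then triggers detection."""
        x = np.asarray(pressure, float)
        x = x - x.mean()                              # crude tide/mean removal
        low, high = 1.0 / (30 * 60), 1.0 / (2 * 60)   # band edges in Hz
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        return sosfilt(sos, x)
    ```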

  19. A real time microcomputer implementation of sensor failure detection for turbofan engines

    NASA Technical Reports Server (NTRS)

    Delaat, John C.; Merrill, Walter C.

    1989-01-01

    An algorithm was developed which detects, isolates, and accommodates sensor failures using analytical redundancy. The performance of this algorithm was demonstrated on a full-scale F100 turbofan engine. The algorithm was implemented in real-time on a microprocessor-based controls computer which includes parallel processing and high order language programming. Parallel processing was used to achieve the required computational power for the real-time implementation. High order language programming was used in order to reduce the programming and maintenance costs of the algorithm implementation software. The sensor failure algorithm was combined with an existing multivariable control algorithm to give a complete control implementation with sensor analytical redundancy. The real-time microprocessor implementation of the algorithm which resulted in the successful completion of the algorithm engine demonstration, is described.

  20. Impulsive noise removal from color video with morphological filtering

    NASA Astrophysics Data System (ADS)

    Ruchay, Alexey; Kober, Vitaly

    2017-09-01

    This paper deals with impulse noise removal from color video. The proposed noise removal algorithm employs switching filtering for denoising of color video: detection of corrupted pixels by means of novel morphological filtering, followed by replacement of the detected pixels based on estimates from uncorrupted pixels in previous frames. With the help of computer simulation we show that the proposed algorithm is able to remove impulse noise from color video effectively. The performance of the proposed algorithm is compared in terms of image restoration metrics with that of common successful algorithms.

  1. Fast internal marker tracking algorithm for onboard MV and kV imaging systems

    PubMed Central

    Mao, W.; Wiersma, R. D.; Xing, L.

    2008-01-01

    Intrafraction organ motion can limit the advantage of highly conformal dose techniques such as intensity modulated radiation therapy (IMRT) due to target position uncertainty. To ensure high accuracy in beam targeting, real-time knowledge of the target location is highly desired throughout the beam delivery process. This knowledge can be gained through imaging of internally implanted radio-opaque markers with fluoroscopic or electronic portal imaging devices (EPID). In the case of MV-based images, marker detection can be problematic due to the significantly lower contrast between different materials in comparison to their kV-based counterparts. This work presents a fully automated algorithm capable of detecting implanted metallic markers in both kV and MV images with high consistency. Using prior CT information, the algorithm predefines the volumetric search space without manual region-of-interest (ROI) selection by the user. Depending on the template selected, both spherical and cylindrical markers can be detected. Multiple markers can be simultaneously tracked without indexing confusion. Phantom studies show detection success rates of 100% for both kV and MV image data. In addition, application of the algorithm to real patient image data results in successful detection of all implanted markers for MV images. Near real-time operational speeds of ~10 frames/sec for the detection of five markers in a 1024×768 image are accomplished using an ordinary PC workstation. PMID:18561670

  2. Automated detection of Pi 2 pulsations using wavelet analysis: 1. Method and an application for substorm monitoring

    USGS Publications Warehouse

    Nose, M.; Iyemori, T.; Takeda, M.; Kamei, T.; Milling, D.K.; Orr, D.; Singer, H.J.; Worthington, E.W.; Sumitomo, N.

    1998-01-01

    Wavelet analysis is suitable for investigating waves, such as Pi 2 pulsations, which are limited in both time and frequency. We have developed an algorithm to detect Pi 2 pulsations by wavelet analysis. We tested the algorithm and found that the results of Pi 2 detection are consistent with those obtained by visual inspection. The algorithm is applied in a project which aims at the nowcasting of substorm onsets. In this project we use real-time geomagnetic field data, with a sampling rate of 1 second, obtained at mid- and low-latitude stations (Mineyama in Japan, the York SAMNET station in the U.K., and Boulder in the U.S.). These stations are each separated by about 120° in longitude, so at least one station is on the nightside at all times. We plan to analyze the real-time data at each station using the Pi 2 detection algorithm, and to exchange the detection results among these stations via the Internet. Therefore we can obtain information about substorm onsets in real-time, even if we are on the dayside. We have constructed a system to detect Pi 2 pulsations automatically at Mineyama observatory. The detection results for the period of February to August 1996 showed that the rate of successful detection of Pi 2 pulsations was 83.4% for the nightside (18-06 MLT) and 26.5% for the dayside (06-18 MLT). The detection results near local midnight (20-02 MLT) give the rate of successful detection of 93.2%.

  3. Selection of experimental modal data sets for damage detection via model update

    NASA Technical Reports Server (NTRS)

    Doebling, S. W.; Hemez, F. M.; Barlow, M. S.; Peterson, L. D.; Farhat, C.

    1993-01-01

    When using a finite element model update algorithm for detecting damage in structures, it is important that the experimental modal data sets used in the update be selected in a coherent manner. In the case of a structure with extremely localized modal behavior, it is necessary to use both low and high frequency modes, but many of the modes in between may be excluded. In this paper, we examine two different mode selection strategies based on modal strain energy, and compare their success to the choice of an equal number of modes based merely on lowest frequency. Additionally, some parameters are introduced to enable a quantitative assessment of the success of our damage detection algorithm when using the various set selection criteria.

  4. Acoustic Longitudinal Field NIF Optic Feature Detection Map Using Time-Reversal & MUSIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehman, S K

    2006-02-09

    We developed an ultrasonic longitudinal field time-reversal and MUltiple SIgnal Classification (MUSIC) based detection algorithm for identifying and mapping flaws in fused silica NIF optics. The algorithm requires a fully multistatic data set, that is, one with multiple, independently operated, spatially diverse transducers, each transmitter of which, in succession, launches a pulse into the optic while the scattered signal is measured and recorded at every receiver. We have successfully localized engineered "defects" larger than 1 mm in an optic. We confirmed detection and localization of 3 mm and 5 mm features in experimental data, and of a 0.5 mm feature in simulated data with sufficiently high signal-to-noise ratio. We present the theory, experimental results, and simulated results.

  5. Automatic laser beam alignment using blob detection for an environment monitoring spectroscopy

    NASA Astrophysics Data System (ADS)

    Khidir, Jarjees; Chen, Youhua; Anderson, Gary

    2013-05-01

    This paper describes a fully automated system to align an infra-red laser beam with a small retro-reflector over a wide range of distances. The components were developed and tested specifically for an open-path spectrometer gas detection system. Using blob detection from the OpenCV library, an automatic alignment algorithm was designed to achieve fast and accurate target detection in a complex background environment. Test results are presented to show that the proposed algorithm has been successfully applied over various target distances and environmental conditions.
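
    OpenCV's SimpleBlobDetector is the natural entry point for this kind of target search. The parameter values below (area and circularity bounds) are illustrative, and the synthetic frame merely stands in for an infra-red camera image of the retro-reflector.

    ```python
    import cv2
    import numpy as np

    params = cv2.SimpleBlobDetector_Params()
    params.filterByArea = True
    params.minArea = 20                 # reject specks
    params.maxArea = 5000
    params.filterByCircularity = True
    params.minCircularity = 0.7         # reflector returns are roughly round
    detector = cv2.SimpleBlobDetector_create(params)

    # Synthetic frame: one round, dark blob on a bright background
    # (the detector looks for dark blobs by default).
    frame = np.full((240, 320), 255, np.uint8)
    cv2.circle(frame, (160, 120), 12, 0, -1)

    for kp in detector.detect(frame):
        print(f"blob at ({kp.pt[0]:.1f}, {kp.pt[1]:.1f}), size {kp.size:.1f}")
    ```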

  6. Detecting and visualizing weak signatures in hyperspectral data

    NASA Astrophysics Data System (ADS)

    MacPherson, Duncan James

    This thesis evaluates existing techniques for detecting weak spectral signatures from remotely sensed hyperspectral data. Algorithms are presented that successfully detect hard-to-find 'mystery' signatures in unknown cluttered backgrounds. The term 'mystery' is used to describe a scenario where the spectral target and background endmembers are unknown. Sub-Pixel analysis and background suppression are used to find deeply embedded signatures which can be less than 10% of the total signal strength. Existing 'mystery target' detection algorithms are derived and compared. Several techniques are shown to be superior both visually and quantitatively. Detection performance is evaluated using confidence metrics that are developed. A multiple algorithm approach is shown to improve detection confidence significantly. Although the research focuses on remote sensing applications, the algorithms presented can be applied to a wide variety of diverse fields such as medicine, law enforcement, manufacturing, earth science, food production, and astrophysics. The algorithms are shown to be general and can be applied to both the reflective and emissive parts of the electromagnetic spectrum. The application scope is a broad one and the final results open new opportunities for many specific applications including: land mine detection, pollution and hazardous waste detection, crop abundance calculations, volcanic activity monitoring, detecting diseases in food, automobile or airplane target recognition, cancer detection, mining operations, extracting galactic gas emissions, etc.

  7. Automated detection of a prostate Ni-Ti stent in electronic portal images.

    PubMed

    Carl, Jesper; Nielsen, Henning; Nielsen, Jane; Lund, Bente; Larsen, Erik Hoejkjaer

    2006-12-01

    Planning target volumes (PTV) in fractionated radiotherapy still have to be outlined with wide margins to the clinical target volume due to uncertainties arising from daily shift of the prostate position. A recently proposed new method of visualization of the prostate is based on insertion of a thermo-expandable Ni-Ti stent. The current study proposes a new detection algorithm for automated detection of the Ni-Ti stent in electronic portal images. The algorithm is based on the Ni-Ti stent having a cylindrical shape with a fixed diameter, which was used as the basis for an automated detection algorithm. The automated method uses enhancement of lines combined with a grayscale morphology operation that looks for enhanced pixels separated with a distance similar to the diameter of the stent. The images in this study are all from prostate cancer patients treated with radiotherapy in a previous study. Images of a stent inserted in a humanoid phantom demonstrated a localization accuracy of 0.4-0.7 mm which equals the pixel size in the image. The automated detection of the stent was compared to manual detection in 71 pairs of orthogonal images taken in nine patients. The algorithm was successful in 67 of 71 pairs of images. The method is fast, has a high success rate, good accuracy, and has a potential for unsupervised localization of the prostate before radiotherapy, which would enable automated repositioning before treatment and allow for the use of very tight PTV margins.

  8. A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features.

    PubMed

    Amudha, P; Karthik, S; Sivakumari, S

    2015-01-01

    Intrusion detection has become a main part of network security due to the huge number of attacks that affect computers. This is due to the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, in this paper a hybrid algorithm is proposed that integrates a Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to address the intrusion detection problem. The algorithms are combined to find better optimization results, and the classification accuracies are obtained by the 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and to test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the intrusion detection KDDCup'99 benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with that of other machine learning algorithms and found to be significantly different.

  9. A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features

    PubMed Central

    Amudha, P.; Karthik, S.; Sivakumari, S.

    2015-01-01

    Intrusion detection has become a main part of network security due to the huge number of attacks that affect computers. This is due to the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, in this paper a hybrid algorithm is proposed that integrates a Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to address the intrusion detection problem. The algorithms are combined to find better optimization results, and the classification accuracies are obtained by the 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and to test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the intrusion detection KDDCup'99 benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with that of other machine learning algorithms and found to be significantly different. PMID:26221625

  10. Clustering and Candidate Motif Detection in Exosomal miRNAs by Application of Machine Learning Algorithms.

    PubMed

    Gaur, Pallavi; Chaturvedi, Anoop

    2017-07-22

    Clustering patterns and motifs give immense information about any biological data. An application of machine learning algorithms for clustering and candidate motif detection in miRNAs derived from exosomes is depicted in this paper. Recent progress in the field of exosome research, and more particularly regarding exosomal miRNAs, has led to much bioinformatics-based research. The information on clustering patterns and candidate motifs in miRNAs of exosomal origin would help in analyzing existing, as well as newly discovered, miRNAs within exosomes. Along with obtaining clustering patterns and candidate motifs in exosomal miRNAs, this work also elaborates on the usefulness of machine learning algorithms that can be efficiently used and executed on various programming languages/platforms. Data were clustered and sequence candidate motifs were detected successfully. The results were compared and validated with available web tools such as 'BLASTN' and 'MEME suite'. The machine learning algorithms for the aforementioned objectives were applied successfully. This work demonstrated the utility of machine learning algorithms and language platforms for the tasks of clustering and candidate motif detection in exosomal miRNAs. With this information, deeper insight would be gained for analyses of newly discovered miRNAs in exosomes, which are considered to be circulating biomarkers. In addition, the execution of machine learning algorithms on various language platforms gives users more flexibility to try multiple iterations according to their requirements. This approach can be applied to other biological data-mining tasks as well.

  11. Detecting chaos in irregularly sampled time series.

    PubMed

    Kulp, C W

    2013-09-01

    Recently, Wiebe and Virgin [Chaos 22, 013136 (2012)] developed an algorithm which detects chaos by analyzing a time series' power spectrum, which is computed using the Discrete Fourier Transform (DFT). Their algorithm, like other time series characterization algorithms, requires that the time series be regularly sampled. Real-world data, however, are often irregularly sampled, thus making the detection of chaotic behavior difficult or impossible with those methods. In this paper, a characterization algorithm is presented that effectively detects chaos in irregularly sampled time series. The work presented here is a modification of Wiebe and Virgin's algorithm and uses the Lomb-Scargle Periodogram (LSP) to compute a series' power spectrum instead of the DFT. The DFT is not appropriate for irregularly sampled time series. However, the LSP is capable of computing the frequency content of irregularly sampled data. Furthermore, a new method of analyzing the power spectrum is developed, which can be useful for differentiating between chaotic and non-chaotic behavior. The new characterization algorithm is successfully applied to irregularly sampled data generated by a model as well as data consisting of observations of variable stars.
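
    SciPy exposes the Lomb-Scargle periodogram directly, so the substitution the paper makes for the DFT is short to sketch; note that scipy.signal.lombscargle expects angular frequencies. The irregular sampling pattern and test signal below are synthetic.

    ```python
    import numpy as np
    from scipy.signal import lombscargle

    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0.0, 100.0, 400))       # irregular sample times
    y = np.sin(2 * np.pi * 0.5 * t) + 0.3 * rng.standard_normal(t.size)

    freqs = np.linspace(0.01, 2.0, 1000)            # cycles per unit time
    power = lombscargle(t, y - y.mean(), 2 * np.pi * freqs, normalize=True)
    print(freqs[np.argmax(power)])                  # peaks near 0.5
    ```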

  12. Foliage penetration by using 4-D point cloud data

    NASA Astrophysics Data System (ADS)

    Méndez Rodríguez, Javier; Sánchez-Reyes, Pedro J.; Cruz-Rivera, Sol M.

    2012-06-01

    Real-time awareness and rapid target detection are critical for the success of military missions. New technologies capable of detecting targets concealed in forest areas are needed in order to track and identify possible threats. Currently, LAser Detection And Ranging (LADAR) systems are capable of detecting obscured targets; however, tracking capabilities are severely limited. Now, a new LADAR-derived technology is under development to generate 4-D datasets (3-D video in a point cloud format). As such, there is a new need for algorithms that are able to process data in real time. We propose an algorithm capable of removing vegetation and other objects that may obfuscate concealed targets in a real 3-D environment. The algorithm is based on wavelets and can be used as a pre-processing step in a target recognition algorithm. Applications of the algorithm in a real-time 3-D system could help make pilots aware of high-risk hidden targets such as tanks and weapons, among others. We will be using simulated 4-D point cloud data to demonstrate the capabilities of our algorithm.

  13. Towards a global flood detection system using social media

    NASA Astrophysics Data System (ADS)

    de Bruijn, Jens; de Moel, Hans; Jongman, Brenden; Aerts, Jeroen

    2017-04-01

    It is widely recognized that an early warning is critical in improving international disaster response. Analysis of social media in real-time can provide valuable information about an event or help to detect unexpected events. For successful and reliable detection systems that work globally, it is important that sufficient data is available and that the algorithm works both in data-rich and data-poor environments. In this study, both a new geotagging system and multi-level event detection system for flood hazards was developed using Twitter data. Geotagging algorithms that regard one tweet as a single document are well-studied. However, no algorithms exist that combine several sequential tweets mentioning keywords regarding a specific event type. Within the time frame of an event, multiple users use event related keywords that refer to the same place name. This notion allows us to treat several sequential tweets posted in the last 24 hours as one document. For all these tweets, we collect a series of spatial indicators given in the tweet metadata and extract additional topological indicators from the text. Using these indicators, we can reduce ambiguity and thus better estimate what locations are tweeted about. Using these localized tweets, Bayesian change-point analysis is used to find significant increases of tweets mentioning countries, provinces or towns. In data-poor environments detection of events on a country level is possible, while in other, data-rich, environments detection on a city level is achieved. Additionally, on a city-level we analyse the spatial dependence of mentioned places. If multiple places within a limited spatial extent are mentioned, detection confidence increases. We run the algorithm using 2 years of Twitter data with flood related keywords in 13 major languages and validate against a flood event database. We find that the geotagging algorithm yields significantly more data than previously developed algorithms and successfully deals with ambiguous place names. In addition, we show that our detection system can both quickly and reliably detect floods, even in countries where data is scarce, while achieving high detail in countries where more data is available.

  14. Color object detection using spatial-color joint probability functions.

    PubMed

    Luo, Jiebo; Crandall, David

    2006-06-01

    Object detection in unconstrained images is an important image understanding problem with many potential applications. There has been little success in creating a single algorithm that can detect arbitrary objects in unconstrained images; instead, algorithms typically must be customized for each specific object. Consequently, it typically requires a large number of exemplars (for rigid objects) or a large amount of human intuition (for nonrigid objects) to develop a robust algorithm. We present a robust algorithm designed to detect a class of compound color objects given a single model image. A compound color object is defined as having a set of multiple, particular colors arranged spatially in a particular way, including flags, logos, cartoon characters, people in uniforms, etc. Our approach is based on a particular type of spatial-color joint probability function called the color edge co-occurrence histogram. In addition, our algorithm employs perceptual color naming to handle color variation, and prescreening to limit the search scope (i.e., size and location) for the object. Experimental results demonstrated that the proposed algorithm is insensitive to object rotation, scaling, partial occlusion, and folding, outperforming a closely related algorithm based on color co-occurrence histograms by a decisive margin.

  15. Detection of Nitrogen Content in Rubber Leaves Using Near-Infrared (NIR) Spectroscopy with Correlation-Based Successive Projections Algorithm (SPA).

    PubMed

    Tang, Rongnian; Chen, Xupeng; Li, Chuang

    2018-05-01

    Near-infrared spectroscopy is an efficient, low-cost technology that has potential as an accurate method for detecting the nitrogen content of natural rubber leaves. The successive projections algorithm (SPA) is a widely used variable selection method for multivariate calibration, which uses projection operations to select a variable subset with minimum multi-collinearity. However, due to the fluctuation of correlation between variables, high collinearity may still exist in non-adjacent variables of the subset obtained by basic SPA. Based on analysis of the correlation matrix of the spectral data, this paper proposes a correlation-based SPA (CB-SPA) that applies the successive projections algorithm in regions with consistent correlation. The results show that CB-SPA can select variable subsets with more valuable variables and less multi-collinearity. Meanwhile, models established by the CB-SPA subset outperform basic SPA subsets in predicting nitrogen content in terms of both cross-validation and external prediction. Moreover, CB-SPA is more efficient: the time cost of its selection procedure is one-twelfth that of the basic SPA.
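
    Basic SPA, the baseline that CB-SPA refines, is short to sketch: greedily pick the column of the calibration matrix with the largest norm, project every remaining column onto its orthogonal complement, and repeat. This shows plain SPA only, not the correlation-based variant.

    ```python
    import numpy as np

    def spa(X, n_select):
        """Basic successive projections algorithm: repeatedly select the
        column with the largest norm and project all columns onto the
        orthogonal complement of the selection, minimizing collinearity
        among the chosen wavelengths. X is (samples, wavelengths)."""
        X = np.asarray(X, float).copy()
        selected = []
        for _ in range(n_select):
            norms = np.linalg.norm(X, axis=0)
            norms[selected] = -1.0               # never re-pick a column
            j = int(np.argmax(norms))
            selected.append(j)
            v = X[:, j:j+1]
            X = X - v @ (v.T @ X) / (v.T @ v)    # project out chosen column
        return selected
    ```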

  16. Object Detection Based on Template Matching through Use of Best-So-Far ABC

    PubMed Central

    2014-01-01

    Best-so-far ABC is a modified version of the artificial bee colony (ABC) algorithm used for optimization tasks. This algorithm is one of the swarm intelligence (SI) algorithms proposed in recent literature, in which the results demonstrated that the best-so-far ABC can produce higher quality solutions with faster convergence than either the ordinary ABC or the current state-of-the-art ABC-based algorithm. In this work, we aim to apply the best-so-far ABC-based approach for object detection based on template matching by using the difference between the RGB level histograms corresponding to the target object and the template object as the objective function. Results confirm that the proposed method was successful in both detecting objects and optimizing the time used to reach the solution. PMID:24812556

  17. Algorithms for Autonomous Plume Detection on Outer Planet Satellites

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Bunte, M. K.; Saripalli, S.; Greeley, R.

    2011-12-01

    We investigate techniques for automated detection of geophysical events (i.e., volcanic plumes) from spacecraft images. The algorithms presented here have not been previously applied to detection of transient events on outer planet satellites. We apply Scale Invariant Feature Transform (SIFT) to raw images of Io and Enceladus from the Voyager, Galileo, Cassini, and New Horizons missions. SIFT produces distinct interest points in every image; feature descriptors are reasonably invariant to changes in illumination, image noise, rotation, scaling, and small changes in viewpoint. We classified these descriptors as plumes using the k-nearest neighbor (KNN) algorithm. In KNN, an object is classified by its similarity to examples in a training set of images based on user defined thresholds. Using the complete database of Io images and a selection of Enceladus images where 1-3 plumes were manually detected in each image, we successfully detected 74% of plumes in Galileo and New Horizons images, 95% in Voyager images, and 93% in Cassini images. Preliminary tests yielded some false positive detections; further iterations will improve performance. In images where detections fail, plumes are less than 9 pixels in size or are lost in image glare. We compared the appearance of plumes and illuminated mountain slopes to determine the potential for feature classification. We successfully differentiated features. An advantage over other methods is the ability to detect plumes in non-limb views where they appear in the shadowed part of the surface; improvements will enable detection against the illuminated background surface where gradient changes would otherwise preclude detection. This detection method has potential applications to future outer planet missions for sustained plume monitoring campaigns and onboard automated prioritization of all spacecraft data. The complementary nature of this method is such that it could be used in conjunction with edge detection algorithms to increase effectiveness. We have demonstrated an ability to detect transient events above the planetary limb and on the surface and to distinguish feature classes in spacecraft images.
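
    A sketch of the SIFT-plus-KNN pipeline using OpenCV. The image and the labeled training descriptors here are random stand-ins; a real run would use descriptors extracted from manually labeled plume and non-plume images.

    ```python
    import cv2
    import numpy as np

    sift = cv2.SIFT_create()
    img = (np.random.rand(256, 256) * 255).astype(np.uint8)  # stand-in image
    kps, desc = sift.detectAndCompute(img, None)

    rng = np.random.default_rng(0)
    train_desc = rng.random((200, 128), dtype=np.float32)      # SIFT is 128-D
    train_labels = rng.integers(0, 2, 200).astype(np.float32)  # 1 = plume

    knn = cv2.ml.KNearest_create()
    knn.train(train_desc, cv2.ml.ROW_SAMPLE, train_labels)
    if desc is not None:
        _, results, _, _ = knn.findNearest(desc.astype(np.float32), k=5)
        plume_kps = [kp for kp, r in zip(kps, results.ravel()) if r == 1]
    ```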

  18. Detection of Gait Modes Using an Artificial Neural Network during Walking with a Powered Ankle-Foot Orthosis

    PubMed Central

    2016-01-01

    This paper presents an algorithm, for use with a Portable Powered Ankle-Foot Orthosis (i.e., PPAFO) that can automatically detect changes in gait modes (level ground, ascent and descent of stairs or ramps), thus allowing for appropriate ankle actuation control during swing phase. An artificial neural network (ANN) algorithm used input signals from an inertial measurement unit and foot switches, that is, vertical velocity and segment angle of the foot. Output from the ANN was filtered and adjusted to generate a final data set used to classify different gait modes. Five healthy male subjects walked with the PPAFO on the right leg for two test scenarios (walking over level ground and up and down stairs or a ramp; three trials per scenario). Success rate was quantified by the number of correctly classified steps with respect to the total number of steps. The results indicated that the proposed algorithm's success rate was high (99.3%, 100%, and 98.3% for level, ascent, and descent modes in the stairs scenario, respectively; 98.9%, 97.8%, and 100% in the ramp scenario). The proposed algorithm continuously detected each step's gait mode with faster timing and higher accuracy compared to a previous algorithm that used a decision tree based on maximizing the reliability of the mode recognition. PMID:28070188

  19. Identification of P/S-wave successions for application in microseismicity

    NASA Astrophysics Data System (ADS)

    Deflandre, J.-P.; Dubesset, M.

    1992-09-01

    Interpretation of P/S-wave successions is used in induced or passive microseismicity. It makes the location of microseismic events possible when the triangulation technique cannot be used. To improve the reliability of the method, we propose a technique that identifies the P/S-wave successions among recorded wave successions. Polarization software is used to verify the orthogonality between the P and S polarization axes. The polarization parameters are computed all along the 3-component acoustic signal. Then the algorithm detects time windows within which the signal polarization axis is perpendicular to the polarization axis of the wave in the reference time window (representative of the P wave). The technique is demonstrated for a synthetic event, and three application cases are presented. The first one corresponds to a calibration shot within which the arrivals of perpendicularly polarized waves are correctly detected in spite of their moderate amplitude. The second example presents a microseismic event recorded during gas withdrawal from an underground gas storage reservoir. The last example is chosen as a counter-example, concerning a microseismic event recorded during a hydraulic fracturing job. The detection algorithm reveals that, in this case, the wave succession does not correspond to a P/S one. This implies that such an event must not be located by the method based on the interpretation of a P/S-wave succession, since no such succession is confirmed.

  20. Feasibility Study of a Generalized Framework for Developing Computer-Aided Detection Systems-a New Paradigm.

    PubMed

    Nemoto, Mitsutaka; Hayashi, Naoto; Hanaoka, Shouhei; Nomura, Yukihiro; Miki, Soichiro; Yoshikawa, Takeharu

    2017-10-01

    We propose a generalized framework for developing computer-aided detection (CADe) systems whose characteristics depend only on those of the training dataset. The purpose of this study is to show the feasibility of the framework. Two different CADe systems were experimentally developed with a prototype of the framework, but with different training datasets. The CADe systems include four components: preprocessing, candidate area extraction, candidate detection, and candidate classification. Four pretrained algorithms with dedicated optimization/setting methods corresponding to the respective components were prepared in advance. The pretrained algorithms were sequentially trained in the order of processing of the components. In this study, two different datasets, brain MRA with cerebral aneurysms and chest CT with lung nodules, were collected to develop two different types of CADe systems in the framework. The performances of the developed CADe systems were evaluated by threefold cross-validation. The CADe systems for detecting cerebral aneurysms in brain MRAs and for detecting lung nodules in chest CTs were successfully developed using the respective datasets. The framework was shown to be feasible by the successful development of the two different types of CADe systems. The feasibility of this framework shows promise for a new paradigm in the development of CADe systems: development of CADe systems without any lesion-specific algorithm design.

  1. A Diagnostic Approach for Electro-Mechanical Actuators in Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Balaban, Edward; Saxena, Abhinav; Bansal, Prasun; Goebel, Kai Frank; Stoelting, Paul; Curran, Simon

    2009-01-01

    Electro-mechanical actuators (EMA) are finding increasing use in aerospace applications, especially with the trend towards all-electric aircraft and spacecraft designs. However, electro-mechanical actuators still lack the knowledge base accumulated for other fielded actuator types, particularly with regard to fault detection and characterization. This paper presents a thorough analysis of some of the critical failure modes documented for EMAs and describes experiments conducted on detecting and isolating a subset of them. The list of failures has been prepared through an extensive Failure Modes, Effects, and Criticality Analysis (FMECA) reference, a literature review, and accessible industry experience. Methods for data acquisition and validation of algorithms on EMA test stands are described. A variety of condition indicators were developed that enabled detection, identification, and isolation among the various fault modes. A diagnostic algorithm based on an artificial neural network is shown to operate successfully using these condition indicators; furthermore, the robustness of these diagnostic routines to sensor faults is demonstrated by showing their ability to distinguish sensor faults from component failures. The paper concludes with a roadmap leading from this effort towards developing successful prognostic algorithms for electromechanical actuators.

  2. Crowdsourcing seizure detection: algorithm development and validation on human implanted device recordings.

    PubMed

    Baldassano, Steven N; Brinkmann, Benjamin H; Ung, Hoameng; Blevins, Tyler; Conrad, Erin C; Leyde, Kent; Cook, Mark J; Khambhati, Ankit N; Wagenaar, Joost B; Worrell, Gregory A; Litt, Brian

    2017-06-01

    There exist significant clinical and basic research needs for accurate, automated seizure detection algorithms. These algorithms have translational potential in responsive neurostimulation devices and in automatic parsing of continuous intracranial electroencephalography data. An important barrier to developing accurate, validated algorithms for seizure detection is limited access to high-quality, expertly annotated seizure data from prolonged recordings. To overcome this, we hosted a kaggle.com competition to crowdsource the development of seizure detection algorithms using intracranial electroencephalography from canines and humans with epilepsy. The top three performing algorithms from the contest were then validated on out-of-sample patient data including standard clinical data and continuous ambulatory human data obtained over several years using the implantable NeuroVista seizure advisory system. Two hundred teams of data scientists from all over the world participated in the kaggle.com competition. The top performing teams submitted highly accurate algorithms with consistent performance in the out-of-sample validation study. The performance of these seizure detection algorithms, achieved using freely available code and data, sets a new reproducible benchmark for personalized seizure detection. We have also shared a 'plug and play' pipeline to allow other researchers to easily use these algorithms on their own datasets. The success of this competition demonstrates how sharing code and high-quality data results in the creation of powerful translational tools with significant potential to impact patient care.

  3. Active Semi-Supervised Community Detection Based on Must-Link and Cannot-Link Constraints

    PubMed Central

    Cheng, Jianjun; Leng, Mingwei; Li, Longjie; Zhou, Hanhai; Chen, Xiaoyun

    2014-01-01

    Community structure detection is of great importance because it can help in discovering the relationship between the function and the topology structure of a network. Many community detection algorithms have been proposed, but how to incorporate prior knowledge in the detection process remains a challenging problem. In this paper, we propose a semi-supervised community detection algorithm, which makes full use of must-link and cannot-link constraints to guide the process of community detection and thereby extracts high-quality community structures from networks. To acquire high-quality must-link and cannot-link constraints, we also propose a semi-supervised component generation algorithm based on active learning, which actively selects nodes with maximum utility for the proposed semi-supervised community detection algorithm step by step, and then generates the must-link and cannot-link constraints by accessing a noiseless oracle. Extensive experiments were carried out, and the experimental results show that the introduction of active learning into the problem of community detection is a success. Our proposed method can extract high-quality community structures from networks, and significantly outperforms other comparison methods. PMID:25329660

  4. Ensemble method: Community detection based on game theory

    NASA Astrophysics Data System (ADS)

    Zhang, Xia; Xia, Zhengyou; Xu, Shengwu; Wang, J. D.

    2014-08-01

    Timely and cost-effective analytics over social networks have emerged as a key ingredient for success in many businesses and government endeavors. Community detection is an active research area of relevance for analyzing online social networks. The problem of selecting a particular community detection algorithm is crucial if the aim is to unveil the community structure of a network. The choice of a given methodology could affect the outcome of the experiments because different algorithms have different advantages and depend on tuning specific parameters. In this paper, we propose a community division model based on the notion of game theory, which can effectively combine the advantages of previous algorithms to obtain a better community classification result. Experiments on standard datasets verify that our game-theory-based community detection model is valid and performs better.

  5. Comparisons of hybrid radiosity-diffusion model and diffusion equation for bioluminescence tomography in cavity cancer detection

    NASA Astrophysics Data System (ADS)

    Chen, Xueli; Yang, Defu; Qu, Xiaochao; Hu, Hao; Liang, Jimin; Gao, Xinbo; Tian, Jie

    2012-06-01

    Bioluminescence tomography (BLT) has been successfully applied to the detection and therapeutic evaluation of solid cancers. However, the existing BLT reconstruction algorithms are not accurate enough for cavity cancer detection because of neglecting the void problem. Motivated by the ability of the hybrid radiosity-diffusion model (HRDM) in describing the light propagation in cavity organs, an HRDM-based BLT reconstruction algorithm was provided for the specific problem of cavity cancer detection. HRDM has been applied to optical tomography but is limited to simple and regular geometries because of the complexity in coupling the boundary between the scattering and void region. In the provided algorithm, HRDM was first applied to three-dimensional complicated and irregular geometries and then employed as the forward light transport model to describe the bioluminescent light propagation in tissues. Combining HRDM with the sparse reconstruction strategy, the cavity cancer cells labeled with bioluminescent probes can be more accurately reconstructed. Compared with the diffusion equation based reconstruction algorithm, the essentiality and superiority of the HRDM-based algorithm were demonstrated with simulation, phantom and animal studies. An in vivo gastric cancer-bearing nude mouse experiment was conducted, whose results revealed the ability and feasibility of the HRDM-based algorithm in the biomedical application of gastric cancer detection.

  7. Nonstationary EO/IR Clutter Suppression and Dim Object Tracking

    NASA Astrophysics Data System (ADS)

    Tartakovsky, A.; Brown, A.; Brown, J.

    2010-09-01

    We develop and evaluate the performance of advanced algorithms which provide significantly improved capabilities for automated detection and tracking of ballistic and flying dim objects in the presence of highly structured intense clutter. Applications include ballistic missile early warning, midcourse tracking, trajectory prediction, and resident space object detection and tracking. The set of algorithms includes, in particular, adaptive spatiotemporal clutter estimation-suppression and nonlinear filtering-based multiple-object track-before-detect. These algorithms are suitable for integration into geostationary, highly elliptical, or low earth orbit scanning or staring sensor suites, and are based on data-driven processing that adapts to real-world clutter backgrounds, including celestial, earth limb, or terrestrial clutter. In many scenarios of interest, e.g., for highly elliptic and, especially, low earth orbits, the resulting clutter is highly nonstationary, providing a significant challenge for clutter suppression to or below sensor noise levels, which is essential for dim object detection and tracking. We demonstrate the success of the developed algorithms using semi-synthetic and real data. In particular, our algorithms are shown to be capable of detecting and tracking point objects with signal-to-clutter levels down to 1/1000 and signal-to-noise levels down to 1/4.

  8. Human Movement Recognition Based on the Stochastic Characterisation of Acceleration Data

    PubMed Central

    Munoz-Organero, Mario; Lotfi, Ahmad

    2016-01-01

    Human activity recognition algorithms based on information obtained from wearable sensors are successfully applied in detecting many basic activities. Identified activities with time-stationary features are characterised inside a predefined temporal window by using different machine learning algorithms on extracted features from the measured data. Better accuracy, precision and recall levels could be achieved by combining the information from different sensors. However, detecting short and sporadic human movements, gestures and actions is still a challenging task. In this paper, a novel algorithm to detect human basic movements from wearable measured data is proposed and evaluated. The proposed algorithm is designed to minimise computational requirements while achieving acceptable accuracy levels based on characterising some particular points in the temporal series obtained from a single sensor. The underlying idea is that this algorithm would be implemented in the sensor device in order to pre-process the sensed data stream before sending the information to a central point combining the information from different sensors to improve accuracy levels. Intra- and inter-person validation is used for two particular cases: single step detection and fall detection and classification using a single tri-axial accelerometer. Relevant results for the above cases and pertinent conclusions are also presented. PMID:27618063
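
    A minimal sketch of the single-sensor idea, assuming a tri-axial accelerometer trace and SciPy's peak finder; the prominence threshold and the 0.3 s refractory gap are illustrative values, not the paper's.

      import numpy as np
      from scipy.signal import find_peaks

      def count_steps(acc_xyz, fs, min_prominence=1.0):
          """Count steps as prominent peaks in the gravity-removed
          acceleration magnitude; acc_xyz is (N, 3) in m/s^2, fs in Hz."""
          mag = np.linalg.norm(acc_xyz, axis=1)
          mag -= mag.mean()                           # crude gravity removal
          peaks, _ = find_peaks(mag, prominence=min_prominence,
                                distance=max(1, int(0.3 * fs)))
          return len(peaks)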

  9. A Novel Artificial Intelligence System for Endotracheal Intubation.

    PubMed

    Carlson, Jestin N; Das, Samarjit; De la Torre, Fernando; Frisch, Adam; Guyette, Francis X; Hodgins, Jessica K; Yealy, Donald M

    2016-01-01

    Adequate visualization of the glottic opening is a key factor in successful endotracheal intubation (ETI); however, few objective tools exist to help guide providers' ETI attempts toward the glottic opening in real time. Machine learning/artificial intelligence has helped to automate the detection of other visual structures, but its utility with ETI is unknown. We sought to test the accuracy of various computer algorithms in identifying the glottic opening, creating a tool that could aid successful intubation. We collected a convenience sample of providers who each performed ETI 10 times on a mannequin using a video laryngoscope (C-MAC, Karl Storz Corp, Tuttlingen, Germany). We recorded each attempt and reviewed one-second time intervals for the presence or absence of the glottic opening. Four different machine learning/artificial intelligence algorithms analyzed each attempt and time point: k-nearest neighbor (KNN), support vector machine (SVM), decision trees, and neural networks (NN). We used half of the videos to train the algorithms and the second half to test the accuracy, sensitivity, and specificity of each algorithm. We enrolled seven providers: three Emergency Medicine attendings and four paramedic students. From the 70 total recorded laryngoscopic video attempts, we created 2,465 time intervals. The algorithms had the following sensitivity and specificity for detecting the glottic opening: KNN (70%, 90%), SVM (70%, 90%), decision trees (68%, 80%), and NN (72%, 78%). Initial efforts at computer algorithms using artificial intelligence are able to identify the glottic opening with over 80% accuracy. With further refinements, video laryngoscopy has the potential to provide real-time directional feedback to the provider to help guide successful ETI.
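
    The KNN variant with the half-and-half train/test split can be sketched as follows; the per-interval feature vectors are synthetic placeholders, since the image features used in the study are not specified in the abstract.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.metrics import recall_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(2465, 16))             # placeholder frame features
      y = rng.integers(0, 2, size=2465)           # 1 = glottic opening visible

      half = len(X) // 2                          # first half trains, second tests
      knn = KNeighborsClassifier(n_neighbors=5)   # k is an assumption
      knn.fit(X[:half], y[:half])
      pred = knn.predict(X[half:])
      sensitivity = recall_score(y[half:], pred)               # true-positive rate
      specificity = recall_score(y[half:], pred, pos_label=0)  # true-negative rate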

  10. Detecting community structure via the maximal sub-graphs and belonging degrees in complex networks

    NASA Astrophysics Data System (ADS)

    Cui, Yaozu; Wang, Xingyuan; Eustace, Justine

    2014-12-01

    Community structure is a common phenomenon in complex networks, and it has been shown that some communities in complex networks often overlap each other. So in this paper we propose a new algorithm to detect overlapping community structure in complex networks. To identify the overlapping community structure, our algorithm first extracts fully connected sub-graphs, which are maximal sub-graphs, from the original networks. Then two maximal sub-graphs having the key pair-vertices can be merged into a new larger sub-graph using some belonging degree functions. Furthermore, we extend the modularity function to evaluate the proposed algorithm. In addition, overlapping nodes between communities are found successfully. Finally, we compare the modularity and the computational complexity of the proposed algorithm with those of some other existing algorithms. The experimental results show that the proposed algorithm gives satisfactory results.

  11. Detection of protein complex from protein-protein interaction network using Markov clustering

    NASA Astrophysics Data System (ADS)

    Ochieng, P. J.; Kusuma, W. A.; Haryanto, T.

    2017-05-01

    Detection of complexes, or groups of functionally related proteins, is an important challenge when analysing biological networks. However, existing algorithms to identify protein complexes are insufficient when applied to dense networks of experimentally derived interaction data. Therefore, we introduce a graph clustering method based on the Markov clustering (MCL) algorithm to identify protein complexes within highly interconnected protein-protein interaction networks. A protein-protein interaction network was first constructed to develop a geometrical network, which was then partitioned using Markov clustering to detect protein complexes. The interest of the proposed method is illustrated by its application to human proteins associated with type II diabetes mellitus. Flow simulation of the MCL algorithm was initially performed, and topological properties of the resultant network were analysed for detection of the protein complexes. The results indicated that the proposed method successfully detected a total of 34 complexes, with 11 complexes consisting of overlapping modules and 20 of non-overlapping modules. The major complex consisted of 102 proteins and 521 interactions, with cluster modularity and density of 0.745 and 0.101, respectively. The comparative analysis revealed that MCL outperformed the AP, MCODE and SCPS algorithms, with a high clustering coefficient (0.751), network density, and modularity index (0.630). This demonstrated that MCL is the most reliable and efficient graph clustering algorithm for detection of protein complexes from PPI networks.
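
    The expansion/inflation loop at the heart of Markov clustering fits in a few lines of NumPy; the parameter values below are common defaults, not necessarily those used in the study.

      import numpy as np

      def markov_cluster(adj, expansion=2, inflation=2.0, iters=50):
          """Alternate expansion (flow spreading) and inflation (flow
          sharpening) on a column-stochastic matrix until attractors emerge."""
          m = adj + np.eye(len(adj))        # self-loops stabilize the flow
          m = m / m.sum(axis=0)             # make columns stochastic
          for _ in range(iters):
              m = np.linalg.matrix_power(m, expansion)
              m = m ** inflation
              m = m / m.sum(axis=0)
          # attractor rows with surviving mass define the clusters
          return [set(np.flatnonzero(row > 1e-6)) for row in m if row.max() > 1e-6]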

  12. Algorithms for the detection of chewing behavior in dietary monitoring applications

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Helal, Abdelsalam; Mendez-Vasquez, Andres

    2009-08-01

    The detection of food consumption is key to the implementation of successful behavior modification in support of dietary monitoring and therapy, for example, during the course of controlling obesity, diabetes, or cardiovascular disease. Since the vast majority of humans consume food via mastication (chewing), we have designed an algorithm that automatically detects chewing behaviors in surveillance video of a person eating. Our algorithm first detects the mouth region, then computes the spatiotemporal frequency spectrum of a small perioral region (including the mouth). Spectral data are analyzed to determine the presence of periodic motion that characterizes chewing. A classifier is then applied to discriminate different types of chewing behaviors. Our algorithm was tested on seven volunteers, whose behaviors included chewing with mouth open, chewing with mouth closed, talking, static face presentation (control case), and moving face presentation. Early test results show that the chewing behaviors induce a temporal frequency peak at 0.5 Hz to 2.5 Hz, which is readily detected using a distance-based classifier. Computational cost is analyzed for implementation on embedded processing nodes, for example, in a healthcare sensor network. Complexity analysis emphasizes the relationship between the work and space estimates of the algorithm, and its estimated error. It is shown that chewing detection is possible within a computationally efficient, accurate, and subject-independent framework.

  13. Fast sparse Raman spectral unmixing for chemical fingerprinting and quantification

    NASA Astrophysics Data System (ADS)

    Yaghoobi, Mehrdad; Wu, Di; Clewes, Rhea J.; Davies, Mike E.

    2016-10-01

    Raman spectroscopy is a well-established spectroscopic method for the detection of condensed phase chemicals. It is based on scattered light from exposure of a target material to a narrowband laser beam. The information generated enables presumptive identification from measuring correlation with library spectra. Whilst this approach is successful in identifying the chemical content of samples with one component, it is more difficult to apply to spectral mixtures. The capability of handling spectral mixtures is crucial for defence and security applications as hazardous materials may be present as mixtures due to the presence of degradation, interferents or precursors. A novel method for spectral unmixing is proposed here. Most modern decomposition techniques are based on the sparse decomposition of the mixture and the application of extra constraints to preserve the sum of concentrations. These methods have often been proposed for passive spectroscopy, where spectral baseline correction is not required. Most successful methods are computationally expensive, e.g. convex optimisation and Bayesian approaches. We present a novel low-complexity sparsity-based method to decompose the spectra using a reference library of spectra. It can be implemented on a hand-held spectrometer in near real-time. The algorithm is based on iteratively subtracting the contribution of selected spectra and updating the contribution of each spectrum. The core algorithm is called fast non-negative orthogonal matching pursuit, which has been proposed by the authors in the context of non-negative sparse representations. The iteration terminates when the maximum number of expected chemicals has been found or the residual spectrum has negligible energy, i.e. on the order of the noise level. A backtracking step removes the least contributing spectrum from the list of detected chemicals and reports it as an alternative component. This feature is particularly useful in the detection of chemicals with small contributions, which are normally not detected. The proposed algorithm is easily reconfigurable to include new library entries and optional preferential threat searches in the presence of predetermined threat indicators. Under Ministry of Defence funding, we have demonstrated the algorithm for fingerprinting and rough quantification of the concentration of chemical mixtures using a set of reference spectral mixtures. In our experiments, the algorithm successfully detected chemicals with concentrations below 10 percent. The running time of the algorithm is on the order of one second, using a single core of a desktop computer.
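
    The core greedy loop can be sketched as follows, assuming a library matrix with one reference spectrum per column; this is a generic non-negative matching-pursuit sketch under those assumptions, not the authors' optimized implementation.

      import numpy as np
      from scipy.optimize import nnls

      def nn_omp(library, spectrum, max_components=5, tol=1e-3):
          """Greedily select library spectra and refit non-negative
          concentrations until the residual reaches the noise level."""
          residual, selected = spectrum.copy(), []
          coef = np.zeros(0)
          for _ in range(max_components):
              corr = library.T @ residual   # contribution of each candidate
              best = int(np.argmax(corr))
              if corr[best] <= 0 or best in selected:
                  break                     # nothing left to add
              selected.append(best)
              coef, _ = nnls(library[:, selected], spectrum)  # concentrations >= 0
              residual = spectrum - library[:, selected] @ coef
              if np.linalg.norm(residual) < tol * np.linalg.norm(spectrum):
                  break
          return selected, coef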

  14. Computational intelligence-based optimization of maximally stable extremal region segmentation for object detection

    NASA Astrophysics Data System (ADS)

    Davis, Jeremy E.; Bednar, Amy E.; Goodin, Christopher T.; Durst, Phillip J.; Anderson, Derek T.; Bethel, Cindy L.

    2017-05-01

    Particle swarm optimization (PSO) and genetic algorithms (GAs) are two optimization techniques from the field of computational intelligence (CI) for search problems where a direct solution cannot easily be obtained. One such problem is finding an optimal set of parameters for the maximally stable extremal region (MSER) algorithm to detect areas of interest in imagery. Specifically, this paper describes the design of a GA and PSO for optimizing MSER parameters to detect stop signs in imagery produced via simulation for use in an autonomous vehicle navigation system. Several additions to the GA and PSO are required to successfully detect stop signs in simulated images. These additions are a primary focus of this paper and include: the identification of an appropriate fitness function, the creation of a variable mutation operator for the GA, an anytime algorithm modification to allow the GA to compute a solution quickly, the addition of an exponential velocity decay function to the PSO, the addition of an "execution best" omnipresent particle to the PSO, and the addition of an attractive force component to the PSO velocity update equation. Experimentation was performed with the GA using various combinations of selection, crossover, and mutation operators, and with the PSO using various combinations of neighborhood topologies, swarm sizes, cognitive influence scalars, and social influence scalars. The results of both the GA- and PSO-optimized parameter sets are presented. This paper details the benefits and drawbacks of each algorithm in terms of detection accuracy, execution speed, and the additions required to generate successful problem-specific parameter sets.
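
    A single PSO velocity/position update with an exponentially decaying inertia term, one of the modifications named above, might look like the following; the constants and the exact decay form are illustrative assumptions.

      import numpy as np

      def pso_step(pos, vel, pbest, gbest, t, w0=0.9, tau=50.0, c1=2.0, c2=2.0):
          """One PSO update with exponential velocity decay (illustrative)."""
          rng = np.random.default_rng()
          w = w0 * np.exp(-t / tau)         # inertia shrinks as iterations grow
          r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          return pos + vel, vel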

  15. Denoising of gravitational wave signals via dictionary learning algorithms

    NASA Astrophysics Data System (ADS)

    Torres-Forné, Alejandro; Marquina, Antonio; Font, José A.; Ibáñez, José M.

    2016-12-01

    Gravitational wave astronomy has become a reality after the historical detections accomplished during the first observing run of the two advanced LIGO detectors. In the following years, the number of detections is expected to increase significantly with the full commissioning of the advanced LIGO, advanced Virgo and KAGRA detectors. The development of sophisticated data analysis techniques to improve the opportunities of detection for low signal-to-noise-ratio events is hence a crucial effort. In this paper, we present one such technique, dictionary-learning algorithms, which have been extensively developed in the last few years and successfully applied mostly in the context of image processing. However, to the best of our knowledge, such algorithms have not yet been employed to denoise gravitational wave signals. By building dictionaries from numerical relativity templates of both binary black hole mergers and bursts of rotational core collapse, we show how machine-learning algorithms based on dictionaries can also be successfully applied for gravitational wave denoising. We use a subset of signals from both catalogs, embedded in nonwhite Gaussian noise, to assess our techniques with a large sample of tests and to find the best model parameters. The application of our method to the actual signal GW150914 shows promising results. Dictionary-learning algorithms could be a complementary addition to the gravitational wave data analysis toolkit. They may be used to extract signals from noise and to infer physical parameters if the data are in good enough agreement with the morphology of the dictionary atoms.

  16. Electroencephalogram Signal Classification for Automated Epileptic Seizure Detection Using Genetic Algorithm

    PubMed Central

    Nanthini, B. Suguna; Santhi, B.

    2017-01-01

    Background: Epilepsy arises when repeated seizures occur in the brain. The electroencephalogram (EEG) test provides valuable information about brain function and can be useful to detect brain disorders, especially epilepsy. In this study, an automated seizure detection model is introduced. Materials and Methods: The EEG signals are decomposed into sub-bands by discrete wavelet transform using the db2 (Daubechies) wavelet. Sixteen features, namely eight statistical features, four gray-level co-occurrence matrix features, and Renyi entropy estimates with four different orders, are extracted from the raw EEG and its sub-bands. A genetic algorithm (GA) is used to select the eight most relevant of the 16 features. The model has been trained and tested successfully using a support vector machine (SVM) classifier for EEG signals. The performance of the SVM classifier is evaluated on two different databases. Results: The study was conducted through two different analyses and achieved satisfactory performance for automated seizure detection using the relevant features as input to the SVM classifier. Conclusion: Relevant features selected by the GA give better accuracy for seizure detection. PMID:28781480

  17. Pulsation Detection from Noisy Ultrasound-Echo Moving Images of Newborn Baby Head Using Fourier Transform

    NASA Astrophysics Data System (ADS)

    Yamada, Masayoshi; Fukuzawa, Masayuki; Kitsunezuka, Yoshiki; Kishida, Jun; Nakamori, Nobuyuki; Kanamori, Hitoshi; Sakurai, Takashi; Kodama, Souichi

    1995-05-01

    In order to detect pulsation from a series of noisy ultrasound-echo moving images of a newborn baby's head for pediatric diagnosis, a digital image processing system capable of recording at video rate and processing the recorded series of images was constructed. The time-sequence variations of each pixel value in a series of moving images were analyzed, and an algorithm based on the Fourier transform was then developed for pulsation detection, noting that the pulsation associated with blood flow changes periodically with the heartbeat. Pulsation detection for pediatric diagnosis was successfully performed on a series of noisy ultrasound-echo moving images of a newborn baby's head by using the image processing system and the pulsation detection algorithm developed here.
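
    In the spirit of the described method, the per-pixel Fourier analysis can be sketched as below; the newborn heart-rate band of 1.5-3.5 Hz is an assumption.

      import numpy as np

      def pulsation_map(frames, fps, band=(1.5, 3.5)):
          """Flag pixels whose temporal spectrum peaks in the heart-rate
          band; frames is a (T, H, W) registered image series."""
          detrended = frames - frames.mean(axis=0)   # remove the static echo
          spec = np.abs(np.fft.rfft(detrended, axis=0))
          freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
          peak = freqs[spec.argmax(axis=0)]          # dominant frequency per pixel
          return (peak >= band[0]) & (peak <= band[1])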

  18. Medical chart validation of an algorithm for identifying multiple sclerosis relapse in healthcare claims.

    PubMed

    Chastek, Benjamin J; Oleen-Burkey, Merrikay; Lopez-Bresnahan, Maria V

    2010-01-01

    Relapse is a common measure of disease activity in relapsing-remitting multiple sclerosis (MS). The objective of this study was to test the content validity of an operational algorithm for detecting relapse in claims data. A claims-based relapse detection algorithm was tested by comparing its detection rate over a 1-year period with relapses identified based on medical chart review. According to the algorithm, MS patients in a US healthcare claims database who had either (1) a primary claim for MS during hospitalization or (2) a corticosteroid claim following a MS-related outpatient visit were designated as having a relapse. Patient charts were examined for explicit indication of relapse or care suggestive of relapse. Positive and negative predictive values were calculated. Medical charts were reviewed for 300 MS patients, half of whom had a relapse according to the algorithm. The claims-based criteria correctly classified 67.3% of patients with relapses (positive predictive value) and 70.0% of patients without relapses (negative predictive value; kappa 0.373; p < 0.001). Alternative algorithms did not improve on the predictive value of the operational algorithm. Limitations of the algorithm include lack of differentiation between relapsing-remitting MS and other types, and that it does not incorporate measures of function and disability. The claims-based algorithm appeared to successfully detect moderate-to-severe MS relapse. This validated definition can be applied to future claims-based MS studies.
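
    The operational rule is simple enough to state as a predicate; the claim field names below are hypothetical, since the claims schema is not given in the abstract.

      def is_relapse_marker(claim):
          """Claims-based relapse rule: an inpatient stay with a primary MS
          diagnosis, or a corticosteroid claim following an MS-related
          outpatient visit (hypothetical field names)."""
          inpatient_ms = (claim["setting"] == "inpatient"
                          and claim["primary_dx"] == "MS")
          steroid_after_visit = (claim["drug"] == "corticosteroid"
                                 and claim["follows_ms_outpatient_visit"])
          return inpatient_ms or steroid_after_visit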

  19. Using advanced computer vision algorithms on small mobile robots

    NASA Astrophysics Data System (ADS)

    Kogut, G.; Birchmore, F.; Biagtan Pacis, E.; Everett, H. R.

    2006-05-01

    The Technology Transfer project employs a spiral development process to enhance the functionality and autonomy of mobile robot systems in the Joint Robotics Program (JRP) Robotic Systems Pool by converging existing component technologies onto a transition platform for optimization. An example of this approach is the implementation of advanced computer vision algorithms on small mobile robots. We demonstrate the implementation and testing of the following two algorithms useful on mobile robots: 1) object classification using a boosted Cascade of classifiers trained with the Adaboost training algorithm, and 2) human presence detection from a moving platform. Object classification is performed with an Adaboost training system developed at the University of California, San Diego (UCSD) Computer Vision Lab. This classification algorithm has been used to successfully detect the license plates of automobiles in motion in real-time. While working towards a solution to increase the robustness of this system to perform generic object recognition, this paper demonstrates an extension to this application by detecting soda cans in a cluttered indoor environment. The human presence detection from a moving platform system uses a data fusion algorithm which combines results from a scanning laser and a thermal imager. The system is able to detect the presence of humans while both the humans and the robot are moving simultaneously. In both systems, the two aforementioned algorithms were implemented on embedded hardware and optimized for use in real-time. Test results are shown for a variety of environments.

  20. A Novel Segment-Based Approach for Improving Classification Performance of Transport Mode Detection.

    PubMed

    Guvensan, M Amac; Dusun, Burak; Can, Baris; Turkmen, H Irem

    2017-12-30

    Transportation planning and solutions have an enormous impact on city life. To minimize transport duration, urban planners should understand and elaborate the mobility of a city. Thus, researchers look toward monitoring people's daily activities, including transportation types and duration, by taking advantage of individuals' smartphones. This paper introduces a novel segment-based transport mode detection architecture in order to improve the results of traditional classification algorithms in the literature. The proposed post-processing algorithm, namely the Healing algorithm, aims to correct the misclassification results of machine learning-based solutions. Our real-life test results show that the Healing algorithm could achieve up to 40% improvement in the classification results. As a result, the implemented mobile application could predict eight classes including stationary, walking, car, bus, tram, train, metro and ferry with a success rate of 95% thanks to the proposed multi-tier architecture and Healing algorithm.
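
    As a sketch of segment-based post-processing, a majority vote over neighboring windows corrects isolated misclassifications; this illustrates the general idea behind such healing, not the published algorithm itself.

      from collections import Counter

      def heal(labels, window=5):
          """Replace each per-window transport-mode label with the majority
          label of its neighborhood (window size is an assumption)."""
          healed = []
          for i in range(len(labels)):
              lo = max(0, i - window // 2)
              hi = min(len(labels), i + window // 2 + 1)
              healed.append(Counter(labels[lo:hi]).most_common(1)[0][0])
          return healed

      # heal(["bus", "bus", "car", "bus", "bus"]) smooths the stray "car".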

  1. Autofocusing and Polar Body Detection in Automated Cell Manipulation.

    PubMed

    Wang, Zenan; Feng, Chen; Ang, Wei Tech; Tan, Steven Yih Min; Latt, Win Tun

    2017-05-01

    Autofocusing and feature detection are two essential processes for performing automated biological cell manipulation tasks. In this paper, we have introduced a technique capable of focusing on a holding pipette and a mammalian cell under a bright-field microscope automatically, and a technique that can detect and track the presence and orientation of the polar body of an oocyte that is rotated at the tip of a micropipette. Both algorithms were evaluated by using mouse oocytes. Experimental results show that both algorithms achieve very high success rates: 100% and 96%. As robust and accurate image processing methods, they can be widely applied to perform various automated biological cell manipulations.
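
    A common contrast-based focusing criterion, shown here as an illustrative stand-in since the paper's exact focus measure is not reproduced: sweep the focus axis and keep the position that maximizes sharpness.

      import cv2

      def focus_score(gray):
          """Variance of the Laplacian: higher means sharper."""
          return cv2.Laplacian(gray, cv2.CV_64F).var()

      # best_z = max(z_positions, key=lambda z: focus_score(grab_frame(z)))
      # (grab_frame and z_positions are hypothetical stage/camera handles)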

  2. Automatic and Robust Delineation of the Fiducial Points of the Seismocardiogram Signal for Non-invasive Estimation of Cardiac Time Intervals.

    PubMed

    Khosrow-Khavar, Farzad; Tavakolian, Kouhyar; Blaber, Andrew; Menon, Carlo

    2016-10-12

    The purpose of this research was to design a delineation algorithm that could detect specific fiducial points of the seismocardiogram (SCG) signal with or without using the electrocardiogram (ECG) R-wave as the reference point. The detected fiducial points were used to estimate cardiac time intervals. Due to the complexity and sensitivity of the SCG signal, the algorithm was designed to robustly discard low-quality cardiac cycles, which are the ones that contain unrecognizable fiducial points. The algorithm was trained on a dataset containing 48,318 manually annotated cardiac cycles. It was then applied to three test datasets: 65 young healthy individuals (dataset 1), 15 individuals above 44 years old (dataset 2), and 25 patients with previous heart conditions (dataset 3). The algorithm accomplished high prediction accuracy, with a root-mean-square error of less than 5 ms for all the test datasets. The algorithm's overall mean detection rate per individual recording (DRI) was 74, 68, and 42 percent for the three test datasets when concurrent ECG and SCG were used. For the standalone SCG case, the mean DRI was 32, 14, and 21 percent. When the proposed algorithm was applied to concurrent ECG and SCG signals, the desired fiducial points of the SCG signal were successfully estimated with a high detection rate. For the standalone case, however, the algorithm achieved high prediction accuracy and detection rate only for the young individuals dataset. The presented algorithm could be used for accurate and non-invasive estimation of cardiac time intervals.

  3. Quantitative three-dimensional transrectal ultrasound (TRUS) for prostate imaging

    NASA Astrophysics Data System (ADS)

    Pathak, Sayan D.; Aarnink, Rene G.; de la Rosette, Jean J.; Chalana, Vikram; Wijkstra, Hessel; Haynor, David R.; Debruyne, Frans M. J.; Kim, Yongmin

    1998-06-01

    With the number of men seeking medical care for prostate diseases rising steadily, the need for a fast and accurate prostate boundary detection and volume estimation tool is increasingly felt by clinicians. Currently, these measurements are made manually, which results in long examination times. A possible solution is to improve efficiency by automating the boundary detection and volume estimation process with minimal involvement from human experts. In this paper, we present an algorithm based on SNAKES to detect the boundaries. Our approach is to selectively enhance the contrast along the edges using an algorithm called sticks and to integrate it with a SNAKES model. This integrated algorithm requires an initial curve for each ultrasound image to initiate the boundary detection process. We have used different schemes to generate the curves with varying degrees of automation and evaluated their effects on the algorithm's performance. After the boundaries are identified, the prostate volume is calculated using planimetric volumetry. We have tested our algorithm on 6 different prostate volumes and compared the performance against the volumes manually measured by 3 experts. With the increase in user input, the algorithm's performance improved as expected. The results demonstrate that, given an initial contour reasonably close to the prostate boundaries, the algorithm successfully delineates the prostate boundaries in an image, and the resulting volume measurements are in close agreement with those made by the human experts.
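
    A minimal sketch of the boundary-detection step using scikit-image's snake implementation; the stick-based contrast enhancement is approximated here by Gaussian smoothing, and the snake parameters are illustrative.

      from skimage.filters import gaussian
      from skimage.segmentation import active_contour

      def delineate_prostate(image, init_curve):
          """Fit a snake starting from an initial curve near the boundary;
          init_curve is an (N, 2) array of points circling the prostate."""
          smoothed = gaussian(image, sigma=3)   # stand-in for stick enhancement
          return active_contour(smoothed, init_curve,
                                alpha=0.015, beta=10.0, gamma=0.001)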

  4. Multiple objects tracking in fluorescence microscopy.

    PubMed

    Kalaidzidis, Yannis

    2009-01-01

    Many processes in cell biology are connected to the movement of compact entities: intracellular vesicles and even single molecules. The tracking of individual objects is important for understanding cellular dynamics. Here we describe tracking algorithms which were developed in non-biological fields and have been successfully applied to object detection and tracking in biological applications. The characteristic features of the different algorithms are compared.

  5. Autopiquer - a Robust and Reliable Peak Detection Algorithm for Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Kilgour, David P. A.; Hughes, Sam; Kilgour, Samantha L.; Mackay, C. Logan; Palmblad, Magnus; Tran, Bao Quoc; Goo, Young Ah; Ernst, Robert K.; Clarke, David J.; Goodlett, David R.

    2017-02-01

    We present a simple algorithm for robust and unsupervised peak detection by determining a noise threshold in isotopically resolved mass spectrometry data. Solving this problem will greatly reduce the subjective and time-consuming manual picking of mass spectral peaks and so will prove beneficial in many research applications. The Autopiquer approach uses autocorrelation to test for the presence of (isotopic) structure in overlapping windows across the spectrum. Within each window, a noise threshold is optimized to remove the most unstructured data, whilst keeping as much of the (isotopic) structure as possible. This algorithm has been successfully demonstrated for both peak detection and spectral compression on data from many different classes of mass spectrometer and for different sample types, and this approach should also be extendible to other types of data that contain regularly spaced discrete peaks.

  7. The development of a line-scan imaging algorithm for the detection of fecal contamination on leafy greens

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Kim, Moon S.; Chuang, Yung-Kun; Lee, Hoyoung

    2013-05-01

    This paper reports the development of a multispectral algorithm, using the line-scan hyperspectral imaging system, to detect fecal contamination on leafy greens. Fresh bovine feces were applied to the surfaces of washed loose baby spinach leaves. A hyperspectral line-scan imaging system was used to acquire hyperspectral fluorescence images of the contaminated leaves. Hyperspectral image analysis resulted in the selection of the 666 nm and 688 nm wavebands for a multispectral algorithm to rapidly detect feces on leafy greens, by use of the ratio of fluorescence intensities measured at those two wavebands (666 nm over 688 nm). The algorithm successfully distinguished most of the lowly diluted fecal spots (0.05 g feces/ml water and 0.025 g feces/ml water) and some of the highly diluted spots (0.0125 g feces/ml water and 0.00625 g feces/ml water) from the clean spinach leaves. The results showed the potential of the multispectral algorithm with line-scan imaging system for application to automated food processing lines for food safety inspection of leafy green vegetables.
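
    The two-waveband rule reduces to a per-pixel ratio threshold; the threshold value below is an assumption.

      import numpy as np

      def feces_mask(f666, f688, threshold=1.05):
          """Flag pixels whose 666 nm / 688 nm fluorescence intensity ratio
          exceeds a threshold (threshold value is illustrative)."""
          ratio = f666 / np.maximum(f688, 1e-6)   # guard against division by zero
          return ratio > threshold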

  8. Analytical redundancy management mechanization and flight data analysis for the F-8 digital fly-by-wire aircraft flight control sensors

    NASA Technical Reports Server (NTRS)

    Deckert, J. C.

    1983-01-01

    The details are presented of an onboard digital computer algorithm designed to reliably detect and isolate the first failure in a duplex set of flight control sensors aboard the NASA F-8 digital fly-by-wire aircraft. The algorithm's successful flight test program is summarized, and specific examples are presented of algorithm behavior in response to software-induced signal faults, both with and without aircraft parameter modeling errors.

  9. Detection and Counting of Orchard Trees from VHR Images Using a Geometrical-Optical Model and Marked Template Matching

    NASA Astrophysics Data System (ADS)

    Maillard, Philippe; Gomes, Marília F.

    2016-06-01

    This article presents an original algorithm created to detect and count trees in orchards using very high resolution images. The algorithm is based on an adaptation of the "template matching" image processing approach, in which the template is based on a "geometrical-optical" model created from a series of parameters, such as illumination angles, maximum and ambient radiance, and tree size specifications. The algorithm is tested on four images from different regions of the world and different crop types. These images all have < 1 meter spatial resolution and were downloaded from the GoogleEarth application. Results show that the algorithm is very efficient at detecting and counting trees as long as their spectral and spatial characteristics are relatively constant. For walnut, mango and orange trees, the overall accuracy was clearly above 90%. However, the overall success rate for apple trees fell under 75%. It appears that the openness of the apple tree crown is most probably responsible for this poorer result. The algorithm is fully explained with a step-by-step description. At this stage, the algorithm still requires quite a bit of user interaction. The automatic determination of most of the required parameters is under development.
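
    The matching step itself can be sketched with OpenCV's normalized cross-correlation; the file names and the 0.7 score threshold are assumptions, and a real pipeline would add non-maximum suppression to merge clustered detections into single trees.

      import cv2
      import numpy as np

      scene = cv2.imread("orchard.png", cv2.IMREAD_GRAYSCALE)       # hypothetical files
      template = cv2.imread("crown_template.png", cv2.IMREAD_GRAYSCALE)
      score = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
      candidates = np.argwhere(score > 0.7)       # crown centers, pre-suppression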

  10. Evaluation of hybrids algorithms for mass detection in digitalized mammograms

    NASA Astrophysics Data System (ADS)

    Cordero, José; Garzón Reyes, Johnson

    2011-01-01

    Breast cancer remains a significant public health problem; early detection of lesions can increase the chances of successful medical treatment. Mammography is an image modality effective for early diagnosis of abnormalities, in which a medical image of the mammary gland is obtained with low-radiation X-rays. This allows detection of a tumor or circumscribed mass two to three years before it becomes clinically palpable, and it is the only method that has so far achieved a reduction in mortality from breast cancer. In this paper, three hybrid algorithms for circumscribed mass detection on digitalized mammograms are evaluated. The first stage corresponds to a review of the enhancement and segmentation techniques used in processing the mammographic images. Afterwards, shape filtering was applied to the resulting regions. The surviving regions were then processed by means of a Bayesian filter, where the characteristic vector for the classifier was constructed from a few measurements. Later, the implemented algorithms were evaluated by ROC curves, using 40 test images: 20 normal images and 20 images with circumscribed lesions. Finally, the advantages and disadvantages of every algorithm in the correct detection of a lesion are discussed.

  11. A deviation based assessment methodology for multiple machine health patterns classification and fault detection

    NASA Astrophysics Data System (ADS)

    Jia, Xiaodong; Jin, Chao; Buzza, Matt; Di, Yuan; Siegel, David; Lee, Jay

    2018-01-01

    Successful applications of Diffusion Map (DM) in machine failure detection and diagnosis have been reported in several recent studies. DM provides an efficient way to visualize high-dimensional, complex and nonlinear machine data, and thus reveals more knowledge about the machine under monitoring. In this paper, a DM-based methodology named DM-EVD is proposed for machine degradation assessment, abnormality detection and diagnosis in an online fashion. Several limitations and challenges of using DM for machine health monitoring have been analyzed and addressed. Based on the proposed DM-EVD, a deviation-based methodology is then proposed to include more dimension reduction methods. In this work, the incorporation of Laplacian Eigenmap and Principal Component Analysis (PCA) is explored; the latter algorithm is named PCA-Dev and is validated in the case study. To show the successful application of the proposed methodology, case studies from diverse fields are presented and investigated in this work. Improved results are reported by benchmarking with other machine learning algorithms.

  12. Real-time test of MOCS algorithm during Superflux 1980. [ocean color algorithm for remotely detecting suspended solids

    NASA Technical Reports Server (NTRS)

    Grew, G. W.

    1981-01-01

    A remote sensing experiment was conducted in which success depended upon the real-time use of an algorithm, generated from MOCS (multichannel ocean color sensor) data onboard the NASA P-3 aircraft, to direct the NOAA ship Kelez to oceanic stations where vitally needed sea truth could be collected. Remote data sets collected on two consecutive days of the mission were consistent with the sea truth for low concentrations of chlorophyll a. Two oceanic regions of special interest were located. The algorithm and the collected data are described.

  13. Uncovering the overlapping community structure of complex networks by maximal cliques

    NASA Astrophysics Data System (ADS)

    Li, Junqiu; Wang, Xingyuan; Cui, Yaozu

    2014-12-01

    In this paper, a unique algorithm is proposed to detect overlapping communities in unweighted and weighted networks with considerable accuracy. The concepts of maximal cliques, overlapping vertices, bridge vertices and isolated vertices are introduced. First, all the maximal cliques are extracted by an algorithm based on depth-first and breadth-first searching. Then two maximal cliques can be merged into a larger sub-graph by some given rules. In addition, the proposed algorithm successfully finds overlapping vertices and bridge vertices between communities. Experimental results using some real-world network data show that the performance of the proposed algorithm is satisfactory.

  14. Fusion of multiple quadratic penalty function support vector machines (QPFSVM) for automated sea mine detection and classification

    NASA Astrophysics Data System (ADS)

    Dobeck, Gerald J.; Cobb, J. Tory

    2002-08-01

    The high-resolution sonar is one of the principal sensors used by the Navy to detect and classify sea mines in minehunting operations. For such sonar systems, substantial effort has been devoted to the development of automated detection and classification (D/C) algorithms. These have been spurred by several factors including (1) aids for operators to reduce work overload, (2) more optimal use of all available data, and (3) the introduction of unmanned minehunting systems. The environments where sea mines are typically laid (harbor areas, shipping lanes, and the littorals) give rise to many false alarms caused by natural, biologic, and man-made clutter. The objective of the automated D/C algorithms is to eliminate most of these false alarms while still maintaining a very high probability of mine detection and classification (PdPc). In recent years, the benefits of fusing the outputs of multiple D/C algorithms have been studied. We refer to this as Algorithm Fusion. The results have been remarkable, including reliable robustness to new environments. The Quadratic Penalty Function Support Vector Machine (QPFSVM) algorithm to aid in the automated detection and classification of sea mines is introduced in this paper. The QPFSVM algorithm is easy to train, simple to implement, and robust to feature space dimension. Outputs of successive SVM algorithms are cascaded in stages (fused) to improve the Probability of Classification (Pc) and reduce the number of false alarms. Even though our experience has been gained in the area of sea mine detection and classification, the principles described herein are general and can be applied to fusion of any D/C problem (e.g., automated medical diagnosis or automatic target recognition for ballistic missile defense).

  15. Acoustic intrusion detection and positioning system

    NASA Astrophysics Data System (ADS)

    Berman, Ohad; Zalevsky, Zeev

    2002-08-01

    Acoustic sensors are becoming more and more applicable as a military battlefield technology. These sensors allow detection and direction estimation with a low false alarm rate and a high probability of detection. The recent technological progress in these fields of research, together with the evolution of sophisticated algorithms, allows the successful integration of these sensors into battlefield technologies. In this paper the performance of an acoustic sensor for the detection of aerial vehicles is investigated and analyzed.

  16. A Raman chemical imaging system for detection of contaminants in food

    NASA Astrophysics Data System (ADS)

    Chao, Kaunglin; Qin, Jianwei; Kim, Moon S.; Mo, Chang Yeon

    2011-06-01

    This study presented a preliminary investigation into the use of macro-scale Raman chemical imaging for the screening of dry milk powder for the presence of chemical contaminants. Melamine was mixed into dry milk at concentrations (w/w) of 0.2%, 0.5%, 1.0%, 2.0%, 5.0%, and 10.0%, and images of the mixtures were analyzed by a spectral information divergence algorithm. Ammonium sulfate, dicyandiamide, and urea were each separately mixed into dry milk at concentrations (w/w) of 0.5%, 1.0%, and 5.0%, and an algorithm based on self-modeling mixture analysis was applied to these sample images. The contaminants were successfully detected and the spatial distribution of the contaminants within the sample mixtures was visualized using these algorithms. Although further studies are necessary, macro-scale Raman chemical imaging shows promise for use in detecting contaminants in food ingredients and may also be useful for authentication of food ingredients.
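
    Spectral information divergence, the measure used for the melamine images, is the symmetrized relative entropy between two spectra treated as probability distributions; a small divergence between a pixel and the contaminant reference marks that pixel as contaminated. A minimal sketch:

      import numpy as np

      def sid(x, y, eps=1e-12):
          """Spectral information divergence between two spectra."""
          p = x / (x.sum() + eps)               # normalize to distributions
          q = y / (y.sum() + eps)
          return (np.sum(p * np.log((p + eps) / (q + eps)))
                  + np.sum(q * np.log((q + eps) / (p + eps))))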

  17. Contrast, size, and orientation-invariant target detection in infrared imagery

    NASA Astrophysics Data System (ADS)

    Zhou, Yi-Tong; Crawshaw, Richard D.

    1991-08-01

    Automatic target detection in IR imagery is a very difficult task due to variations in target brightness, shape, size, and orientation. In this paper, the authors present a contrast-, size-, and orientation-invariant algorithm based on Gabor functions for detecting targets from a single IR image frame. The algorithm consists of three steps. First, it locates potential targets by using low-resolution Gabor functions, which resist noise and background clutter effects; then, it removes false targets and eliminates redundant target points based on a similarity measure. These two steps mimic human vision processing but are different from Zeevi's Foveating Vision System. Finally, it uses both low- and high-resolution Gabor functions to verify target existence. This algorithm has been successfully tested on several IR images that contain multiple examples of military vehicles of different sizes and brightness in various background scenes and orientations.

  18. Data Mining: The Art of Automated Knowledge Extraction

    NASA Astrophysics Data System (ADS)

    Karimabadi, H.; Sipes, T.

    2012-12-01

    Data mining algorithms are used routinely in a wide variety of fields and they are gaining adoption in the sciences. The realities of real-world data analysis are that (a) data has flaws, and (b) the models and assumptions that we bring to the data are inevitably flawed, and/or biased and misspecified in some way. Data mining can improve data analysis by detecting anomalies in the data, checking for consistency of the user model assumptions, and deciphering complex patterns and relationships that would not be possible otherwise. The common form of data collected from in situ spacecraft measurements is multi-variate time series, which represents one of the most challenging problems in data mining. We have successfully developed algorithms to deal with such data and have extended the algorithms to handle streaming data. In this talk, we illustrate the utility of our algorithms through several examples including automated detection of reconnection exhausts in the solar wind and flux ropes in the magnetotail. We also show examples from successful applications of our technique to the analysis of 3D kinetic simulations. With an eye to the future, we provide an overview of our upcoming plans that include collaborative data mining, expert outsourcing data mining, computer vision for image analysis, among others. Finally, we discuss the integration of data mining algorithms with web-based services such as VxOs and other Heliophysics data centers and the resulting capabilities it would enable.

  19. Assessing and validating RST-FIRES on MSG-SEVIRI data by means of a Total Validation Approach (TVA).

    NASA Astrophysics Data System (ADS)

    Filizzola, Carolina; Corrado, Rosita; Marchese, Francesco; Mazzeo, Giuseppe; Paciello, Rossana; Pergola, Nicola; Tramutoli, Valerio

    2015-04-01

    Several fire detection methods have been developed through the years for detecting forest fires from space. These algorithms (which may be grouped into single-channel, multichannel and contextual algorithms) are generally based on the use of fixed thresholds that, being intrinsically exposed to false alarm proliferation, are often used in a conservative way. As a consequence, most satellite-based algorithms for fire detection show low sensitivity and are not suitable in operational contexts. In this work, the RST-FIRES algorithm, which is based on an original multi-temporal scheme of satellite data analysis (RST, Robust Satellite Techniques), is presented. The implementation of RST-FIRES on data provided by the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) onboard Meteosat Second Generation (MSG), which, offering the best revisit time (i.e., 15 minutes), can be successfully used for detecting fires at an early stage, is described here. Moreover, results of a Total Validation Approach (TVA), tested in both Northern and Southern Italy in collaboration with local and regional civil protection agencies, are also reported. In particular, TVA allowed us to assess RST-FIRES detections by means of ground checks and aerial surveys, demonstrating the good performance offered by RST-FIRES using MSG-SEVIRI data. Indeed, this algorithm was capable of detecting several fires that, owing to their features (e.g., small size, short duration), would not have appeared in the official reports, highlighting a significant improvement in sensitivity in comparison with other established satellite-based fire detection techniques while still preserving a high confidence level of detection.

  20. Explosive Detection in Aviation Applications Using CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martz, H E; Crawford, C R

    2011-02-15

    CT scanners are deployed world-wide to detect explosives in checked and carry-on baggage. Though very similar to single- and dual-energy multi-slice CT scanners used today in medical imaging, some recently developed explosives detection scanners employ multiple sources and detector arrays to eliminate mechanical rotation of a gantry, photon counting detectors for spectral imaging, and a limited number of views to reduce cost. For each bag scanned, the resulting reconstructed images are first processed by automated threat recognition algorithms to screen for explosives and other threats. Human operators review the images only when these automated algorithms report the presence of possible threats. The US Department of Homeland Security (DHS) has requirements for future scanners that include dealing with a larger number of threats, higher probability of detection, lower false alarm rates and lower operating costs. One tactic that DHS is pursuing to achieve these requirements is to augment the capabilities of the established security vendors with third-party algorithm developers. A third-party in this context refers to academics and companies other than the established vendors. DHS is particularly interested in exploring the model that has been used very successfully by the medical imaging industry, in which university researchers develop algorithms that are eventually deployed in commercial medical imaging equipment. The purpose of this paper is to discuss opportunities for third-parties to develop advanced reconstruction and threat detection algorithms.

  1. Comprehensive eye evaluation algorithm

    NASA Astrophysics Data System (ADS)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated in two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.

  2. Near-infrared face recognition utilizing OpenCV software

    NASA Astrophysics Data System (ADS)

    Sellami, Louiza; Ngo, Hau; Fowler, Chris J.; Kearney, Liam M.

    2014-06-01

    Commercially available hardware, freely available algorithms, and software developed by the authors are successfully combined to detect and recognize subjects in an environment without visible light. This project integrates three major components: an illumination device operating in the near-infrared (NIR) spectrum, a NIR-capable camera, and a software algorithm capable of performing image manipulation, facial detection, and recognition. Focusing the effort on the near-infrared spectrum allows the low-budget system to operate covertly while still allowing for accurate face recognition. In doing so, a valuable capability has been developed that offers potential benefits in future civilian and military security and surveillance operations.
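    The abstract gives no implementation detail beyond the use of OpenCV, so the following is a minimal sketch of one plausible detection stage: the stock Haar-cascade frontal-face detector that ships with OpenCV, applied to a single-channel NIR frame (NIR images are plain intensity images, so intensity-based cascades apply unchanged). The file path and detector parameters are illustrative.

```python
import cv2

# Stock frontal-face Haar cascade bundled with the opencv-python package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(nir_gray):
    """Return (x, y, w, h) boxes for faces in a grayscale NIR frame."""
    return cascade.detectMultiScale(
        nir_gray, scaleFactor=1.1, minNeighbors=5, minSize=(40, 40))

frame = cv2.imread("nir_frame.png", cv2.IMREAD_GRAYSCALE)  # illustrative path
for (x, y, w, h) in detect_faces(frame):
    cv2.rectangle(frame, (x, y), (x + w, y + h), 255, 2)  # mark detections
```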

  3. Rapid detection of talcum powder in tea using FT-IR spectroscopy coupled with chemometrics

    PubMed Central

    Li, Xiaoli; Zhang, Yuying; He, Yong

    2016-01-01

    This paper investigated the feasibility of Fourier transform infrared transmission (FT-IR) spectroscopy for detecting talcum powder illegally added to tea, using chemometric methods. First, 210 samples of tea powder with 13 dose levels of talcum powder were prepared for FT-IR spectrum acquisition. In order to highlight the slight variations in the FT-IR spectra, smoothing, normalization, and the standard normal variate (SNV) transformation were employed to preprocess the raw spectra. Among them, SNV preprocessing had the best performance, with a high correlation of prediction (RP = 0.948) and a low root mean square error of prediction (RMSEP = 0.108) for the partial least squares (PLS) model. Then 18 characteristic wavenumbers were selected based on a hybrid of backward interval partial least squares (biPLS) regression, the competitive adaptive reweighted sampling (CARS) algorithm, and the successive projections algorithm (SPA). These characteristic wavenumbers accounted for only 0.64% of the full set of wavenumbers. The 18 characteristic wavenumbers were then used to build linear and nonlinear determination models by PLS regression and extreme learning machine (ELM), respectively. The optimal model, with RP = 0.963 and RMSEP = 0.137, was achieved by the ELM algorithm. These results demonstrated that FT-IR spectroscopy with chemometrics can be used successfully to detect talcum powder in tea. PMID:27468701
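    SNV, the preprocessing step the paper found to work best, is a per-spectrum operation: each spectrum is centred on its own mean and scaled by its own standard deviation, suppressing scatter-related baseline and gain differences between samples. A minimal sketch, assuming spectra stored as rows of a NumPy array:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate transform.

    spectra: (n_samples, n_wavenumbers) array; each row (one spectrum)
    is centred and scaled individually."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std
```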

  4. A cloud detection algorithm using the downwelling infrared radiance measured by an infrared pyrometer of the ground-based microwave radiometer

    DOE PAGES

    Ahn, M. H.; Han, D.; Won, H. Y.; ...

    2015-02-03

    For better utilization of the ground-based microwave radiometer, it is important to detect the presence of cloud in the measured data. Here, we introduce a simple and fast cloud detection algorithm that exploits the optical characteristics of clouds in the infrared atmospheric window region. The new algorithm utilizes the brightness temperature (Tb) measured by an infrared radiometer installed on top of a microwave radiometer. The two-step algorithm consists of a spectral test followed by a temporal test. The measured Tb is first compared with a predicted clear-sky Tb obtained from an empirical formula as a function of surface air temperature and water vapor pressure. For the temporal test, the temporal variability of the measured Tb over one minute is compared with a dynamic threshold value representing the variability of clear-sky conditions. Data are designated cloud-free only when both the spectral and temporal tests indicate cloud-free conditions. Overall, most thick and uniform clouds are successfully detected by the spectral test, while broken and fast-varying clouds are detected by the temporal test. The algorithm is validated by comparison with collocated ceilometer data for six months, from January to June 2013. The overall proportion of correctness is about 88.3% and the probability of detection is 90.8%, which are comparable with or better than those of previous similar approaches. Two thirds of the discrepancies occur when the new algorithm detects clouds while the ceilometer does not, resulting in different values of the probability of detection for different cloud-base altitudes: 93.8, 90.3, and 82.8% for low, mid, and high clouds, respectively. Finally, due to the characteristics of the spectral range, the new algorithm is found to be insensitive to the presence of inversion layers.
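    The two-step logic described above reduces to two threshold comparisons per observation. The sketch below assumes the predicted clear-sky Tb has already been computed from the paper's empirical formula (a function of surface air temperature and water vapor pressure); the margin and variability threshold values are illustrative, whereas the paper's actual temporal threshold is dynamic.

```python
import numpy as np

def is_cloud_free(tb_measured, tb_clear_predicted, tb_one_minute,
                  spectral_margin=2.0, variability_threshold=0.5):
    """Two-step cloud screening sketch.

    Spectral test: clouds emit more than clear sky in the IR window, so
    a measured Tb well above the clear-sky prediction indicates cloud.
    Temporal test: high Tb variability within one minute indicates
    broken or fast-varying cloud. Cloud-free only if both tests pass."""
    spectral_cloudy = tb_measured > tb_clear_predicted + spectral_margin
    temporal_cloudy = np.std(tb_one_minute) > variability_threshold
    return not (spectral_cloudy or temporal_cloudy)
```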

  5. The Genetic-Algorithm-Based Normal Boundary Intersection (GANBI) Method; An Efficient Approach to Pareto Multiobjective Optimization for Engineering Design

    DTIC Science & Technology

    2006-05-15

    alarm performance in a cost-effective manner is the use of track-before-detect strategies, in which multiple sensor detections must occur within the...corresponding to the traditional sensor coverage problem. Also, in the track-before-detect context, reference is made to the field-level functions of...detection and false alarm as successful search and false search, respectively, because the track-before-detect process serves as a searching function

  6. Combinatorial Color Space Models for Skin Detection in Sub-continental Human Images

    NASA Astrophysics Data System (ADS)

    Khaled, Shah Mostafa; Saiful Islam, Md.; Rabbani, Md. Golam; Tabassum, Mirza Rehenuma; Gias, Alim Ul; Kamal, Md. Mostafa; Muctadir, Hossain Muhammad; Shakir, Asif Khan; Imran, Asif; Islam, Saiful

    Among the different color models, HSV, HLS, YIQ, YCbCr, YUV, etc. have been most popular for skin detection. Most research in the field of skin detection has been trained and tested on human images of African, Mongolian, and Anglo-Saxon ethnic origins; the skin colors of Indian sub-continentals have not been studied separately. Combinatorial algorithms, without affecting asymptotic complexity, can be developed from the skin detection concepts of these color models to boost detection performance. In this paper a comparative study of different combinatorial skin detection algorithms has been made. For training and testing, 200 images (skin and non-skin) containing pictures of sub-continental males and females were used to measure the performance of the combinatorial approaches, and a considerable improvement in success rate, with a true positive rate of 99.5% and a true negative rate of 93.3%, was observed.
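    Combinatorial skin detection of this kind boils down to computing a binary skin mask under each colour model and combining the masks with logical operators. A minimal OpenCV sketch combining HSV and YCrCb rules; the threshold bounds are illustrative placeholders and would need tuning for sub-continental skin tones as the paper does.

```python
import cv2
import numpy as np

def skin_mask_hsv(image_bgr):
    """Skin mask from fixed HSV bounds (illustrative values)."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, np.array([0, 40, 60], np.uint8),
                       np.array([25, 255, 255], np.uint8))

def skin_mask_ycrcb(image_bgr):
    """Skin mask from fixed YCrCb bounds (illustrative values)."""
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, np.array([0, 135, 85], np.uint8),
                       np.array([255, 180, 135], np.uint8))

def combined_skin_mask(image_bgr):
    """Combinatorial rule: AND of two colour-model masks favours true
    negatives; OR would instead favour true positives."""
    return cv2.bitwise_and(skin_mask_hsv(image_bgr),
                           skin_mask_ycrcb(image_bgr))
```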

  7. Comparison and optimization of radar-based hail detection algorithms in Slovenia

    NASA Astrophysics Data System (ADS)

    Stržinar, Gregor; Skok, Gregor

    2018-05-01

    Four commonly used radar-based hail detection algorithms are evaluated and optimized in Slovenia. The algorithms are verified against ground observations of hail at manned stations in the period between May and August, from 2002 to 2010. The algorithms are optimized by determining the optimal values of all possible algorithm parameters. A number of different contingency-table-based scores are evaluated, with a combination of the Critical Success Index and frequency bias proving to be the best choice for optimization. The best performance is given by the Waldvogel algorithm and the severe hail index, followed by vertically integrated liquid and maximum radar reflectivity. Using the optimal parameter values, a hail frequency climatology map for the whole of Slovenia is produced. The analysis shows that there is considerable variability in hail occurrence within the Republic of Slovenia. The hail frequency ranges from almost 0 to 1.7 hail days per year, with an average value of about 0.7 hail days per year.
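    The optimization criterion combines two standard contingency-table scores: for hits a, misses c, and false alarms b, the Critical Success Index is CSI = a / (a + b + c) and the frequency bias is (a + b) / (a + c). A short worked sketch:

```python
def contingency_scores(hits, misses, false_alarms):
    """Critical Success Index and frequency bias from a contingency
    table of algorithm detections vs. ground hail observations."""
    csi = hits / (hits + misses + false_alarms)
    bias = (hits + false_alarms) / (hits + misses)
    return csi, bias

# Example: 40 hits, 10 misses, 20 false alarms
csi, bias = contingency_scores(40, 10, 20)
print(csi, bias)  # ~0.571 and 1.2 (bias > 1 means over-forecasting)
```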

  8. Application of Novel Software Algorithms to Spectral-Domain Optical Coherence Tomography for Automated Detection of Diabetic Retinopathy.

    PubMed

    Adhi, Mehreen; Semy, Salim K; Stein, David W; Potter, Daniel M; Kuklinski, Walter S; Sleeper, Harry A; Duker, Jay S; Waheed, Nadia K

    2016-05-01

    To present novel software algorithms applied to spectral-domain optical coherence tomography (SD-OCT) for automated detection of diabetic retinopathy (DR). Thirty-one diabetic patients (44 eyes) and 18 healthy, nondiabetic controls (20 eyes) who underwent volumetric SD-OCT imaging and fundus photography were retrospectively identified. A retina specialist independently graded DR stage. Trained automated software generated a retinal thickness score signifying macular edema and a cluster score signifying microaneurysms and/or hard exudates for each volumetric SD-OCT. Of 44 diabetic eyes, 38 had DR and six did not. Leave-one-out cross-validation using a linear discriminant at a missed-detection/false-alarm ratio of 3.00 yielded a software sensitivity of 92% and a specificity of 69% for DR detection when compared to clinical assessment. Novel software algorithms applied to commercially available SD-OCT can successfully detect DR and may have potential as a viable screening tool for DR in the future. [Ophthalmic Surg Lasers Imaging Retina. 2016;47:410-417.]. Copyright 2016, SLACK Incorporated.

  9. White blood cell segmentation by circle detection using electromagnetism-like optimization.

    PubMed

    Cuevas, Erik; Oliva, Diego; Díaz, Margarita; Zaldivar, Daniel; Pérez-Cisneros, Marco; Pajares, Gonzalo

    2013-01-01

    Medical imaging is a relevant field of application of image processing algorithms. In particular, the analysis of white blood cell (WBC) images has engaged researchers from fields of medicine and computer vision alike. Since WBCs can be approximated by a quasicircular form, a circular detector algorithm may be successfully applied. This paper presents an algorithm for the automatic detection of white blood cells embedded into complicated and cluttered smear images that considers the complete process as a circle detection problem. The approach is based on a nature-inspired technique called the electromagnetism-like optimization (EMO) algorithm which is a heuristic method that follows electromagnetism principles for solving complex optimization problems. The proposed approach uses an objective function which measures the resemblance of a candidate circle to an actual WBC. Guided by the values of such objective function, the set of encoded candidate circles are evolved by using EMO, so that they can fit into the actual blood cells contained in the edge map of the image. Experimental results from blood cell images with a varying range of complexity are included to validate the efficiency of the proposed technique regarding detection, robustness, and stability.

  10. White Blood Cell Segmentation by Circle Detection Using Electromagnetism-Like Optimization

    PubMed Central

    Oliva, Diego; Díaz, Margarita; Zaldivar, Daniel; Pérez-Cisneros, Marco; Pajares, Gonzalo

    2013-01-01

    Medical imaging is a relevant field of application of image processing algorithms. In particular, the analysis of white blood cell (WBC) images has engaged researchers from fields of medicine and computer vision alike. Since WBCs can be approximated by a quasicircular form, a circular detector algorithm may be successfully applied. This paper presents an algorithm for the automatic detection of white blood cells embedded into complicated and cluttered smear images that considers the complete process as a circle detection problem. The approach is based on a nature-inspired technique called the electromagnetism-like optimization (EMO) algorithm which is a heuristic method that follows electromagnetism principles for solving complex optimization problems. The proposed approach uses an objective function which measures the resemblance of a candidate circle to an actual WBC. Guided by the values of such objective function, the set of encoded candidate circles are evolved by using EMO, so that they can fit into the actual blood cells contained in the edge map of the image. Experimental results from blood cell images with a varying range of complexity are included to validate the efficiency of the proposed technique regarding detection, robustness, and stability. PMID:23476713

  11. Automated attribution of remotely-sensed ecological disturbances using spatial and temporal characteristics of common disturbance classes.

    NASA Astrophysics Data System (ADS)

    Cooper, L. A.; Ballantyne, A.

    2017-12-01

    Forest disturbances are critical components of ecosystems. Knowledge of their prevalence and impacts is necessary to accurately describe forest health and ecosystem services through time. While several methods are currently available to identify and describe forest disturbances, especially those that occur in North America, the process remains inefficient and inaccessible in many parts of the world. Here, we introduce a preliminary approach to streamline and automate both the detection and the attribution of forest disturbances. We use the Breaks for Additive Season and Trend (BFAST) algorithm to detect disturbances, combined with supervised and unsupervised classification algorithms to attribute the detections to disturbance classes. Both spatial and temporal disturbance characteristics are derived and utilized with the goal of automating the disturbance attribution process. The resulting preliminary algorithm is applied to up-scaled (100 m) Landsat data for several different ecosystems in North America, with varying success. Our results indicate that supervised classification is more reliable than unsupervised classification, but that only limited training data are required for a region. Future work will improve the algorithm through refinement and validation at sites within North America before applying the approach globally.

  12. CMDR based differential evolution identifies the epistatic interaction in genome-wide association studies.

    PubMed

    Yang, Cheng-Hong; Chuang, Li-Yeh; Lin, Yu-Da

    2017-08-01

    Detecting epistatic interactions in genome-wide association studies (GWAS) is a computational challenge. The huge number of single-nucleotide polymorphism (SNP) combinations prevents some of the more powerful algorithms from being applied to detect potential epistasis in large-scale SNP datasets. We propose a new algorithm, termed DECMDR, which combines the differential evolution (DE) algorithm with a classification-based multifactor-dimensionality reduction (CMDR). DECMDR uses CMDR as a fitness measure to evaluate candidate solutions during the DE process when scanning for potential statistical epistasis in GWAS. The results indicate that DECMDR outperforms existing algorithms in terms of detection success rate in large simulations and on real data obtained from the Wellcome Trust Case Control Consortium. Regarding running time, DECMDR can efficiently apply CMDR to detect significant associations between cases and controls amongst all possible SNP combinations in GWAS. DECMDR is freely available at https://goo.gl/p9sLuJ . chuang@isu.edu.tw or e0955767257@yahoo.com.tw. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  13. T-wave end detection using neural networks and Support Vector Machines.

    PubMed

    Suárez-León, Alexander Alexeis; Varon, Carolina; Willems, Rik; Van Huffel, Sabine; Vázquez-Seisdedos, Carlos Román

    2018-05-01

    In this paper we propose a new approach for detecting the end of the T-wave in the electrocardiogram (ECG) using Neural Networks and Support Vector Machines. Both Multilayer Perceptron (MLP) neural networks and Fixed-Size Least-Squares Support Vector Machines (FS-LSSVM) were used as regression algorithms to determine the end of the T-wave. Different strategies for selecting the training set, such as random selection, k-means, robust clustering and maximum quadratic (Rényi) entropy, were evaluated. Individual parameters were tuned for each method during training and the results are given for the evaluation set. A comparison between the MLP and FS-LSSVM approaches was performed. Finally, a fair comparison of the FS-LSSVM method with other state-of-the-art algorithms for detecting the end of the T-wave was included. The experimental results show that FS-LSSVM approaches are more suitable as regression algorithms than MLP neural networks. Despite the small training sets used, the FS-LSSVM methods outperformed the state-of-the-art techniques. FS-LSSVM can be successfully used as a T-wave end detection algorithm in ECG even with small training set sizes. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Detection of illicit substances in fingerprints by infrared spectral imaging.

    PubMed

    Ng, Ping Hei Ronnie; Walker, Sarah; Tahtouh, Mark; Reedy, Brian

    2009-08-01

    FTIR and Raman spectral imaging can be used to simultaneously image a latent fingerprint and detect exogenous substances deposited within it. These substances might include drugs of abuse or traces of explosives or gunshot residue. In this work, spectral searching algorithms were tested for their efficacy in finding targeted substances deposited within fingerprints. "Reverse" library searching, where a large number of possibly poor-quality spectra from a spectral image are searched against a small number of high-quality reference spectra, poses problems for common search algorithms as they are usually implemented. Out of a range of algorithms which included conventional Euclidean distance searching, the spectral angle mapper (SAM) and correlation algorithms gave the best results when used with second-derivative image and reference spectra. All methods tested gave poorer performances with first derivative and undifferentiated spectra. In a search against a caffeine reference, the SAM and correlation methods were able to correctly rank a set of 40 confirmed but poor-quality caffeine spectra at the top of a dataset which also contained 4,096 spectra from an image of an uncontaminated latent fingerprint. These methods also successfully and individually detected aspirin, diazepam and caffeine that had been deposited together in another fingerprint, and they did not indicate any of these substances as a match in a search for another substance which was known not to be present. The SAM was used to successfully locate explosive components in fingerprints deposited on silicon windows. The potential of other spectral searching algorithms used in the field of remote sensing is considered, and the applicability of the methods tested in this work to other modes of spectral imaging is discussed.
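    The spectral angle mapper at the heart of this comparison treats each spectrum as a vector and scores the angle between it and the reference; smaller angles mean closer matches, and the score is insensitive to overall intensity. A minimal NumPy sketch (the paper applies it to second-derivative spectra, a preprocessing step omitted here):

```python
import numpy as np

def spectral_angle(spectrum, reference):
    """Angle (radians) between a pixel spectrum and a reference."""
    cos = np.dot(spectrum, reference) / (
        np.linalg.norm(spectrum) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def rank_by_match(image_spectra, reference):
    """Rank the rows of an (n_pixels, n_bands) array by spectral angle
    to the reference (e.g., caffeine), best matches first."""
    angles = np.array([spectral_angle(s, reference) for s in image_spectra])
    return np.argsort(angles)
```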

  15. Products recognition on shop-racks from local scale-invariant features

    NASA Astrophysics Data System (ADS)

    Zawistowski, Jacek; Kurzejamski, Grzegorz; Garbat, Piotr; Naruniec, Jacek

    2016-04-01

    This paper presents a system designed for multi-object detection and adjusted for the application of product search on market shelves. The system uses well-known binary keypoint detection algorithms for finding characteristic points in the image. One of the main ideas is object recognition based on the Implicit Shape Model method. The authors propose several improvements to the algorithm. In the original formulation, fiducial points are matched with a very simple function, which limits the number of object parts that can be successfully separated, whereas various methods of classification may be validated in order to achieve higher performance. Such an extension implies research on a training procedure able to deal with many object categories. The proposed solution opens new possibilities for many algorithms demanding fast and robust multi-object recognition.

  16. Integrating Oil Debris and Vibration Measurements for Intelligent Machine Health Monitoring. Degree awarded by Toledo Univ., May 2002

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.

    2003-01-01

    A diagnostic tool for detecting damage to gears was developed. Two different measurement technologies, oil debris analysis and vibration, were integrated into a health monitoring system for detecting surface fatigue pitting damage on gears. This integrated system showed improved detection and decision-making capabilities as compared to using individual measurement technologies. This diagnostic tool was developed and evaluated experimentally by collecting vibration and oil debris data from fatigue tests performed in the NASA Glenn Spur Gear Fatigue Rig. An oil debris sensor and two vibration algorithms were adapted as the diagnostic tools. An inductance-type oil debris sensor was selected for the oil analysis measurement technology. Gear damage data for this type of sensor were limited to data collected in the NASA Glenn test rigs. For this reason, the analysis included development of a parameter for detecting gear pitting damage using this type of sensor. The vibration data were used to calculate two previously available gear vibration diagnostic algorithms. The two vibration algorithms were selected based on their maturity and published success in detecting damage to gears. Oil debris and vibration features were then developed using fuzzy logic analysis techniques and input into a multi-sensor data fusion process. Results show that combining the vibration and oil debris measurement technologies improves the detection of pitting damage on spur gears. As a result of this research, this new diagnostic tool has significantly improved the detection of gear damage in the NASA Glenn Spur Gear Fatigue Rigs. This research also resulted in several other findings that will improve the development of future health monitoring systems. Oil debris analysis was found to be more reliable than vibration analysis for detecting pitting fatigue failure of gears and is capable of indicating damage progression. Also, some vibration algorithms are as sensitive to operational effects as they are to damage. Another finding was that clear threshold limits must be established for diagnostic tools. Based on additional experimental data obtained from the NASA Glenn Spiral Bevel Gear Fatigue Rig, the methodology developed in this study can be successfully implemented on other geared systems.

  17. Artificial Neural Network applied to lightning flashes

    NASA Astrophysics Data System (ADS)

    Gin, R. B.; Guedes, D.; Bianchi, R.

    2013-05-01

    The development of video cameras has enabled scientists to study the behavior of lightning discharges with more precision. The main goal of this project is to create a system able to detect images of lightning discharges stored in videos and classify them using an Artificial Neural Network (ANN), implemented in the C language with OpenCV libraries. The developed system can be split into two modules: a detection module and a classification module. The detection module uses OpenCV's computer vision libraries and image processing techniques to detect significant differences between frames in a sequence, indicating that something, still not classified, occurred. Whenever there is a significant difference between two consecutive frames, two main algorithms are used to analyze the frame image: a brightness algorithm and a shape algorithm. These algorithms capture both the shape and the brightness of the event, removing irrelevant events like birds, as well as detecting the exact position of relevant events, allowing the system to track them over time. The classification module uses a neural network to classify the relevant events as horizontal or vertical lightning, saves the event's images, and calculates its number of discharges. The neural network was implemented using the backpropagation algorithm and was trained with 42 training images containing 57 lightning events (one image can have more than one lightning flash). The ANN was tested with one to five hidden layers, with up to 50 neurons each. The best configuration achieved a success rate of 95%, with one layer containing 20 neurons (33 test images with 42 events were used in this phase). This configuration was implemented in the developed system to analyze 20 video files containing 63 lightning discharges previously detected manually. Results showed that all the lightning discharges were detected, many irrelevant events were discarded, and the number of discharges per event was correctly computed. The neural network used in this project achieved a success rate of 90%. The videos used in this experiment were acquired by seven video cameras installed in São Bernardo do Campo, Brazil, that continuously recorded lightning events during the summer. The cameras were arranged in a 360° loop, recording all data at a time resolution of 33 ms. During this period, several convective storms were recorded.
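    The detection module's first stage is plain frame differencing. A minimal OpenCV sketch of that stage, assuming consecutive grayscale frames; the pixel and area thresholds are illustrative, and the shape/brightness analysis of the actual system is not reproduced here.

```python
import cv2

def significant_change(prev_gray, curr_gray, pixel_thresh=30,
                       area_thresh=50):
    """Flag a frame pair (33 ms apart) as containing a candidate event
    when enough pixels differ strongly between consecutive frames."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, pixel_thresh, 255, cv2.THRESH_BINARY)
    return cv2.countNonZero(mask) > area_thresh, mask
```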

  18. Real-time distributed fiber optic sensor for security systems: Performance, event classification and nuisance mitigation

    NASA Astrophysics Data System (ADS)

    Mahmoud, Seedahmed S.; Visagathilagar, Yuvaraja; Katsifolis, Jim

    2012-09-01

    The success of any perimeter intrusion detection system depends on three important performance parameters: the probability of detection (POD), the nuisance alarm rate (NAR), and the false alarm rate (FAR). The most fundamental parameter, POD, is normally related to a number of factors such as the event of interest, the sensitivity of the sensor, the installation quality of the system, and the reliability of the sensing equipment. The suppression of nuisance alarms without degrading sensitivity in fiber optic intrusion detection systems is key to maintaining acceptable performance. Signal processing algorithms that maintain the POD and eliminate nuisance alarms are crucial for achieving this. In this paper, a robust event classification system using supervised neural networks together with a level crossings (LCs) based feature extraction algorithm is presented for the detection and recognition of intrusion and non-intrusion events in a fence-based fiber-optic intrusion detection system. A level crossings algorithm is also used with a dynamic threshold to suppress torrential rain-induced nuisance alarms in a fence system. Results show that rain-induced nuisance alarms can be suppressed for rainfall rates in excess of 100 mm/hr with the simultaneous detection of intrusion events. The use of a level crossing based detection and novel classification algorithm is also presented for a buried pipeline fiber optic intrusion detection system for the suppression of nuisance events and discrimination of intrusion events. The sensor employed for both types of systems is a distributed bidirectional fiber-optic Mach-Zehnder (MZ) interferometer.
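    A level-crossings feature vector simply counts, for each of a set of amplitude levels, how many times the signal crosses that level within a window; intrusion, rain, and other events produce distinct count profiles. A minimal sketch of that feature extraction (window length and level grid are illustrative):

```python
import numpy as np

def level_crossings(signal, levels):
    """Count sign changes of (signal - level) for each level; the
    resulting vector is the feature fed to the event classifier."""
    counts = []
    for level in levels:
        above = signal > level
        counts.append(int(np.count_nonzero(above[1:] != above[:-1])))
    return np.array(counts)

# Usage: features for one window of interferometer output
# feats = level_crossings(window, levels=np.linspace(-1.0, 1.0, 21))
```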

  19. Memetic algorithms for de novo motif-finding in biomedical sequences.

    PubMed

    Bi, Chengpeng

    2012-09-01

    The objectives of this study are to design and implement a new memetic algorithm for de novo motif discovery, which is then applied to detect important signals hidden in various biomedical molecular sequences. In this paper, memetic algorithms are developed and tested on de novo motif-finding problems. Several strategies are employed in the algorithm design, not only to efficiently explore the multiple-sequence local alignment space, but also to effectively uncover the molecular signals. As a result, there are a number of key features in the implementation of the memetic motif-finding algorithm (MaMotif), including a chromosome replacement operator, a chromosome alteration-aware local search operator, a truncated local search strategy, and a stochastic operation of local search imposed on individual learning. To test the new algorithm, we compare MaMotif with a few other similar algorithms using simulated and experimental data including genomic DNA, primary microRNA sequences (the let-7 family), and transmembrane protein sequences. The new memetic motif-finding algorithm is successfully implemented in C++ and exhaustively tested with various simulated and real biological sequences. The simulations show that MaMotif is the most time-efficient of the algorithms compared: it runs 2 times faster than the expectation maximization (EM) method and 16 times faster than the genetic algorithm-based EM hybrid. In both simulated and experimental testing, results show that the new algorithm compares favorably with or is superior to the other algorithms. Notably, MaMotif is able to successfully discover transcription factors' binding sites in chromatin immunoprecipitation followed by massively parallel sequencing (ChIP-Seq) data, correctly uncover RNA splicing signals in gene expression, precisely find the highly conserved helix motif in transmembrane protein sequences, and correctly detect the palindromic segments in primary microRNA sequences. The memetic motif-finding algorithm is effectively designed and implemented, and its applications demonstrate that it is not only time-efficient but also exhibits excellent performance compared with other popular algorithms. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. Knowledge-based tracking algorithm

    NASA Astrophysics Data System (ADS)

    Corbeil, Allan F.; Hawkins, Linda J.; Gilgallon, Paul F.

    1990-10-01

    This paper describes the Knowledge-Based Tracking (KBT) algorithm, for which a real-time flight test demonstration was recently conducted at Rome Air Development Center (RADC). In KBT processing, the radar signal in each resolution cell is thresholded at a lower than normal setting to detect low-RCS targets. This lower threshold produces a larger than normal false alarm rate. Therefore, additional signal processing, including spectral filtering, CFAR and knowledge-based acceptance testing, is performed to eliminate some of the false alarms. TSC's knowledge-based Track-Before-Detect (TBD) algorithm is then applied to the data from each azimuth sector to detect target tracks. In this algorithm, tentative track templates are formed for each threshold crossing and knowledge-based association rules are applied to the range, Doppler, and azimuth measurements from successive scans. Lastly, an M-association out of N-scan rule is used to declare a detection. This scan-to-scan integration enhances the probability of target detection while maintaining an acceptably low output false alarm rate. For a real-time demonstration of the KBT algorithm, the L-band radar in the Surveillance Laboratory (SL) at RADC was used to illuminate a small Cessna 310 test aircraft. The received radar signal was digitized and processed by an ST-100 Array Processor and VAX computer network in the lab. The ST-100 performed all of the radar signal processing functions, including Moving Target Indicator (MTI) pulse cancelling, FFT Doppler filtering, and CFAR detection. The VAX computers performed the remaining range-Doppler clustering, beamsplitting and TBD processing functions. The KBT algorithm provided a 9.5 dB improvement relative to single-scan performance with a nominal real-time delay of less than one second between illumination and display.

  1. Advancements in the Development of an Operational Lightning Jump Algorithm for GOES-R GLM

    NASA Technical Reports Server (NTRS)

    Shultz, Chris; Petersen, Walter; Carey, Lawrence

    2011-01-01

    Rapid increases in total lightning have been shown to precede the manifestation of severe weather at the surface. These rapid increases have been termed lightning jumps and are the current focus of algorithm development for the GOES-R Geostationary Lightning Mapper (GLM). Recent lightning jump algorithm work has focused on evaluating the algorithms in three additional regions of the country, as well as markedly increasing the number of thunderstorms in order to evaluate each algorithm's performance on a larger population of storms. Lightning characteristics of just over 600 thunderstorms have been studied over the past four years. The 2σ lightning jump algorithm continues to show the most promise for an operational lightning jump algorithm, with a probability of detection of 82%, a false alarm rate of 35%, a critical success index of 57%, and a Heidke Skill Score of 0.73 on the entire population of thunderstorms. The average lead time for the 2σ algorithm on all severe weather is 21.15 minutes, with a standard deviation of +/- 14.68 minutes. Looking at tornadoes alone, the average lead time is 18.71 minutes, with a standard deviation of +/- 14.88 minutes. Moreover, removing the 2σ lightning jumps that occur after a jump has already been detected and before severe weather is observed at the ground, the 2σ lightning jump algorithm's false alarm rate drops from 35% to 21%. Cold season, low-topped, and tropical environments cause problems for the 2σ lightning jump algorithm, due to their relative dearth of lightning as compared to a supercellular or summertime airmass thunderstorm environment.
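    The sigma-based jump test reduces to comparing the latest rate of change of the total flash rate (DFRDT) against the standard deviation of its recent history. A minimal sketch under that reading of the algorithm; the window length and data layout are illustrative assumptions.

```python
import numpy as np

def two_sigma_jump(flash_rates, window=5):
    """Return True if the newest DFRDT value exceeds 2 standard
    deviations of the preceding `window` DFRDT values.

    flash_rates: 1-D array of per-interval total flash rates, most
    recent last; needs at least window + 2 entries."""
    dfrdt = np.diff(flash_rates)
    history = dfrdt[-(window + 1):-1]
    return dfrdt[-1] > 2.0 * np.std(history)
```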

  2. DCL System Using Deep Learning Approaches for Land-Based or Ship-Based Real-Time Recognition and Localization of Marine Mammals

    DTIC Science & Technology

    2013-09-30

    method has been successfully implemented to automatically detect and recognize pulse trains from minke whales (songs) and sperm whales (Physeter...workshops, conferences and data challenges 2. Enhancements of the ASR algorithm for frequency-modulated sounds: Right Whale Study 3...Enhancements of the ASR algorithm for pulse trains: Minke Whale Study 4. Mining Big Data Sound Archives using High Performance Computing software and hardware

  3. Automatic detection of typical dust devils from Mars landscape images

    NASA Astrophysics Data System (ADS)

    Ogohara, Kazunori; Watanabe, Takeru; Okumura, Susumu; Hatanaka, Yuji

    2018-02-01

    This paper presents an improved algorithm for automatic detection of Martian dust devils that successfully extracts tiny bright dust devils and obscured large dust devils from two subtracted landscape images. Such dust devils are frequently observed using visible cameras onboard landers or rovers. Nevertheless, previous research on automated detection has focused not on these common types of dust devils, but on dust devils that appear in images as irregularly bright and large. In this study, we detect these common dust devils automatically using two kinds of parameter sets for thresholding when binarizing the subtracted images. We automatically extract dust devils from 266 images taken by the Spirit rover to evaluate our algorithm. Taking dust devils detected by visual inspection as ground truth, the precision, recall, and F-measure values are 0.77, 0.86, and 0.81, respectively.
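    The detection step pairs image subtraction with two different binarization parameter sets, one tuned to faint, large dust devils and one to small, bright ones. A minimal OpenCV sketch of that idea, with illustrative threshold values; the two candidate masks would be screened separately downstream.

```python
import cv2

def dust_devil_candidates(image_a, image_b):
    """Subtract two landscape images taken minutes apart and binarize
    the difference twice: a low threshold keeps obscured large dust
    devils, a high threshold isolates tiny bright ones."""
    diff = cv2.absdiff(image_a, image_b)
    _, faint_mask = cv2.threshold(diff, 8, 255, cv2.THRESH_BINARY)
    _, bright_mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
    return faint_mask, bright_mask
```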

  4. Delamination Detection Using Guided Wave Phased Arrays

    NASA Technical Reports Server (NTRS)

    Tian, Zhenhua; Yu, Lingyu; Leckey, Cara

    2016-01-01

    This paper presents a method for detecting multiple delaminations in composite laminates using non-contact phased arrays. The phased arrays are implemented with a non-contact scanning laser Doppler vibrometer (SLDV). The array imaging algorithm is performed in the frequency domain where both the guided wave dispersion effect and direction dependent wave properties are considered. By using the non-contact SLDV array with a frequency domain imaging algorithm, an intensity image of the composite plate can be generated for delamination detection. For the proof of concept, a laboratory test is performed using a non-contact phased array to detect two delaminations (created through quasi-static impact test) at different locations in a composite plate. Using the non-contact phased array and frequency domain imaging, the two impact-induced delaminations are successfully detected. This study shows that the non-contact phased array method is a potentially effective method for rapid delamination inspection in large composite structures.

  5. Algorithms Based on CWT and Classifiers to Control Cardiac Alterations and Stress Using an ECG and a SCR

    PubMed Central

    Villarejo, María Viqueira; Zapirain, Begoña García; Zorrilla, Amaia Méndez

    2013-01-01

    This paper presents the results of using a commercial pulsimeter as an electrocardiograph (ECG) for wireless detection of cardiac alterations and stress levels for home control. For these purposes, signal processing techniques (the Continuous Wavelet Transform (CWT) and J48) have been used, respectively. The designed algorithm analyses the ECG signal and is able to detect the heart rate (99.42%), arrhythmia (93.48%) and extrasystoles (99.29%). The detection of the stress level is complemented with the Skin Conductance Response (SCR), whose success rate is 94.02%. Heart rate variability does not add value to stress detection in this case. With this pulsimeter, it is possible to prevent and detect anomalies in a non-intrusive way as part of a telemedicine system. It is also possible to use it during physical activity, since the CWT minimizes motion artifacts. PMID:23666135

  6. Algorithms based on CWT and classifiers to control cardiac alterations and stress using an ECG and a SCR.

    PubMed

    Villarejo, María Viqueira; Zapirain, Begoña García; Zorrilla, Amaia Méndez

    2013-05-10

    This paper presents the results of using a commercial pulsimeter as an electrocardiograph (ECG) for wireless detection of cardiac alterations and stress levels for home control. For these purposes, signal processing techniques (the Continuous Wavelet Transform (CWT) and J48) have been used, respectively. The designed algorithm analyses the ECG signal and is able to detect the heart rate (99.42%), arrhythmia (93.48%) and extrasystoles (99.29%). The detection of the stress level is complemented with the Skin Conductance Response (SCR), whose success rate is 94.02%. Heart rate variability does not add value to stress detection in this case. With this pulsimeter, it is possible to prevent and detect anomalies in a non-intrusive way as part of a telemedicine system. It is also possible to use it during physical activity, since the CWT minimizes motion artifacts.

  7. Sonar target enhancement by shrinkage of incoherent wavelet coefficients.

    PubMed

    Hunter, Alan J; van Vossen, Robbert

    2014-01-01

    Background reverberation can obscure useful features of the target echo response in broadband low-frequency sonar images, adversely affecting detection and classification performance. This paper describes a resolution and phase-preserving means of separating the target response from the background reverberation noise using a coherence-based wavelet shrinkage method proposed recently for de-noising magnetic resonance images. The algorithm weights the image wavelet coefficients in proportion to their coherence between different looks under the assumption that the target response is more coherent than the background. The algorithm is demonstrated successfully on experimental synthetic aperture sonar data from a broadband low-frequency sonar developed for buried object detection.

  8. Real-time detection of small and dim moving objects in IR video sequences using a robust background estimator and a noise-adaptive double thresholding

    NASA Astrophysics Data System (ADS)

    Zingoni, Andrea; Diani, Marco; Corsini, Giovanni

    2016-10-01

    We developed an algorithm for automatically detecting small and poorly contrasted (dim) moving objects in real time, within video sequences acquired through a steady infrared camera. The algorithm is suitable for different situations since it is independent of the background characteristics and of changes in illumination. Unlike other solutions, small objects of any size (down to a single pixel), either hotter or colder than the background, can be successfully detected. The algorithm is based on accurately estimating the background at the pixel level and then rejecting it. A novel approach permits the background estimation to be robust to changes in scene illumination and to noise, and not to be biased by the transit of moving objects. Care was taken to avoid computationally costly procedures, in order to ensure real-time performance even using low-cost hardware. The algorithm was tested on a dataset of 12 video sequences acquired in different conditions, providing promising results in terms of detection rate and false alarm rate, independently of background and object characteristics. In addition, the detection map was produced frame by frame in real time, using cheap commercial hardware. The algorithm is particularly suitable for applications in the fields of video surveillance and computer vision. Its reliability and speed permit it to be used also in critical situations, like search and rescue, defence, and disaster monitoring.
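    The pixel-level background estimate described above can be sketched as a recursive (exponential) mean and variance per pixel, with detection declared when a pixel departs from the estimated background by more than a noise-scaled margin in either direction. The update rule and constants below are illustrative, not the authors' exact estimator.

```python
import numpy as np

def update_background(bg_mean, bg_var, frame, alpha=0.02):
    """Exponential moving mean/variance per pixel; a small alpha keeps
    the estimate robust to noise and slow illumination changes."""
    diff = frame - bg_mean
    bg_mean = bg_mean + alpha * diff
    bg_var = (1.0 - alpha) * (bg_var + alpha * diff ** 2)
    return bg_mean, bg_var

def detect_objects(frame, bg_mean, bg_var, k=4.0):
    """Noise-adaptive double-sided test: flag pixels hotter or colder
    than the background by more than k local noise sigmas."""
    return np.abs(frame - bg_mean) > k * np.sqrt(bg_var)
```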

  9. Successive Projections Algorithm-Multivariable Linear Regression Classifier for the Detection of Contaminants on Chicken Carcasses in Hyperspectral Images

    NASA Astrophysics Data System (ADS)

    Wu, W.; Chen, G. Y.; Kang, R.; Xia, J. C.; Huang, Y. P.; Chen, K. J.

    2017-07-01

    During slaughtering and further processing, chicken carcasses are inevitably contaminated by microbial pathogens. Due to food safety concerns, many countries implement a zero-tolerance policy that forbids the placement of visibly contaminated carcasses in ice-water chiller tanks during processing. Manual detection of contaminants is labor-intensive and imprecise. Here, a successive projections algorithm (SPA)-multivariable linear regression (MLR) classifier based on an optimal performance threshold was developed for automatic detection of contaminants on chicken carcasses. Hyperspectral images were obtained using a hyperspectral imaging system. A regression model for the classifier was established by MLR based on twelve characteristic wavelengths (505, 537, 561, 562, 564, 575, 604, 627, 656, 665, 670, and 689 nm) selected by SPA, and the optimal threshold T = 1 was obtained from receiver operating characteristic (ROC) analysis. The SPA-MLR classifier provided the best detection results when compared with the SPA-partial least squares (PLS) regression classifier and the SPA-least squares support vector machine (LS-SVM) classifier. The true positive rate (TPR) of 100% and the false positive rate (FPR) of 0.392% indicate that the SPA-MLR classifier can utilize spatial and spectral information to effectively detect contaminants on chicken carcasses.

  10. Using a Portfolio of Algorithms for Planning and Scheduling

    NASA Technical Reports Server (NTRS)

    Sherwood, Robert; Knight, Russell; Rabideau, Gregg; Chien, Steve; Tran, Daniel; Engelhardt, Barbara

    2003-01-01

    The Automated Scheduling and Planning Environment (ASPEN) software system, aspects of which have been reported in several previous NASA Tech Briefs articles, includes a subsystem that utilizes a portfolio of heuristic algorithms that work synergistically to solve problems. The nature of the synergy of the specific algorithms is that their likelihoods of success are negatively correlated: that is, when a combination of them is used to solve a problem, the probability that at least one of them will succeed is greater than the sum of probabilities of success of the individual algorithms operating independently of each other. In ASPEN, the portfolio of algorithms is used in a planning process of the iterative repair type, in which conflicts are detected and addressed one at a time until either no conflicts exist or a user-defined time limit has been exceeded. At each choice point (e.g., selection of conflict; selection of method of resolution of conflict; or choice of move, addition, or deletion) ASPEN makes a stochastic choice of a combination of algorithms from the portfolio. This approach makes it possible for the search to escape from looping and from solutions that are locally but not globally optimum.
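    The iterative-repair loop with stochastic portfolio selection can be sketched as follows; the plan/conflict API (`find_conflicts`, repair heuristics that mutate the plan) is hypothetical, standing in for ASPEN's internals.

```python
import random

def repair(plan, portfolio, max_iters=1000):
    """Iterative repair: while conflicts remain, stochastically choose
    a conflict and a repair heuristic from the portfolio and apply it
    (a move, addition, or deletion), until conflict-free or timed out."""
    for _ in range(max_iters):
        conflicts = plan.find_conflicts()   # hypothetical API
        if not conflicts:
            return plan
        conflict = random.choice(conflicts)
        heuristic = random.choice(portfolio)
        heuristic(plan, conflict)           # mutates the plan in place
    return plan  # best effort within the iteration budget
```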

  11. Dependency of human target detection performance on clutter and quality of supporting image analysis algorithms in a video surveillance task

    NASA Astrophysics Data System (ADS)

    Huber, Samuel; Dunau, Patrick; Wellig, Peter; Stein, Karin

    2017-10-01

    Background: In target detection, success rates depend strongly on human observer performance. Two prior studies tested the contributions of target detection algorithms and prior training sessions. The aim of this Swiss-German cooperation study was to evaluate the dependency of human observer performance on the quality of supporting image analysis algorithms. Methods: The participants were presented with 15 different video sequences. Their task was to detect all targets in the shortest possible time. Each video sequence showed a heavily cluttered simulated public area from a different viewing angle. In each video sequence, the number of avatars in the area was set to 100, 150, or 200 subjects. The proportion of targets was kept at 10%. The number of marked targets varied from 0, 5, 10, 20, up to 40 marked subjects while keeping the positive predictive value of the detection algorithm at 20%. During the task, workload level was assessed by applying an acoustic secondary task. Detection rates and detection times for the targets were analyzed using inferential statistics. Results: The study found Target Detection Time to increase and Target Detection Rate to decrease with increasing numbers of avatars. The same is true for the Secondary Task Reaction Time, while there was no effect on Secondary Task Hit Rate. Furthermore, we found a trend towards a u-shaped correlation between the number of markings and the Secondary Task Reaction Time, indicating increased workload. Conclusion: The trial results may indicate useful criteria for the design of training and support of observers in observational tasks.

  12. Automated feature extraction in color retinal images by a model based approach.

    PubMed

    Li, Huiqi; Chutatape, Opas

    2004-02-01

    Color retinal photography is an important tool for detecting the evidence of various eye diseases. Novel methods to extract the main features in color retinal images have been developed in this paper. Principal component analysis is employed to locate the optic disk; a modified active shape model is proposed for the shape detection of the optic disk; a fundus coordinate system is established to provide a better description of the features in the retinal images; and an approach to detect exudates by combined region growing and edge detection is proposed. The success rates of disk localization, disk boundary detection, and fovea localization are 99%, 94%, and 100%, respectively. The sensitivity and specificity of exudate detection are 100% and 71%, correspondingly. The success of the proposed algorithms can be attributed to the utilization of model-based methods. The detection and analysis could be applied to automatic mass screening and diagnosis of retinal diseases.

  13. Automatic near-real-time detection of CMEs in Mauna Loa K-Cor coronagraph images

    NASA Astrophysics Data System (ADS)

    Thompson, William T.; St. Cyr, Orville Chris; Burkepile, Joan; Posner, Arik

    2017-08-01

    A simple algorithm has been developed to detect the onset of coronal mass ejections (CMEs), together with an estimate of their speed, in near-real time using images of the linearly polarized white-light solar corona taken by the K-Cor telescope at the Mauna Loa Solar Observatory (MLSO). The algorithm used is a variation on the Solar Eruptive Event Detection System (SEEDS) developed at George Mason University. The algorithm was tested against K-Cor data taken between 29 April 2014 and 20 February 2017, on days that the MLSO website marked as containing CMEs. This resulted in testing of 139 days' worth of data containing 171 CMEs. The detection rate varied from close to 80% in 2014-2015, when solar activity was high, down to as low as 20-30% in 2017, when activity was low. The difference in effectiveness with solar cycle is attributed to the difference in the relative prevalence of strong CMEs between active and quiet periods. There were also twelve false detections during this time period, leading to an average false detection rate of 8.6% on any given day. However, half of the false detections were clustered into two short periods of a few days each, when special conditions prevailed that increased the false detection rate. The K-Cor data were also compared with major Solar Energetic Particle (SEP) storms during this time period. There were three SEP events detected either at Earth or at one of the two STEREO spacecraft while K-Cor was observing during the relevant time period. The K-Cor CME detection algorithm successfully generated alerts for two of these events, with lead times of 1-3 hours before the SEP onset at 1 AU. The third event was not detected by the automatic algorithm because of the unusually broad width of the CME in position angle.

  14. The combined use of the RST-FIRES algorithm and geostationary satellite data to timely detect fires

    NASA Astrophysics Data System (ADS)

    Filizzola, Carolina; Corrado, Rosita; Marchese, Francesco; Mazzeo, Giuseppe; Paciello, Rossana; Pergola, Nicola; Tramutoli, Valerio

    2017-04-01

    Timely detection of fires may enable a rapid response before they become uncontrolled and wipe out entire forests. Remote sensing, especially based on geostationary satellite data, can be successfully used to this end. Unlike sensors onboard polar-orbiting platforms, instruments on geostationary satellites guarantee a very high temporal resolution (from 30 down to 2.5 minutes), which may be usefully employed to carry out continuous monitoring over large areas as well as to detect fires at their early stages. Together with adequate satellite data, an appropriate fire detection algorithm should be used. Over the last years, many fire detection algorithms have simply been adapted from polar to geostationary sensors and, consequently, the very high temporal resolution of geostationary sensors is not exploited at all in the tests for fire identification. In addition, even when specifically designed for geostationary satellite sensors, fire detection algorithms are frequently based on fixed-threshold tests which are generally set up in the most conservative way to avoid false alarm proliferation. The result is low algorithm sensitivity, which generally means that only large and/or extremely intense events are detected. This work describes the Robust Satellite Techniques for FIRES detection and monitoring (RST-FIRES), a multi-temporal change-detection technique designed to overcome the above-mentioned issues. Its performance in terms of reliability and sensitivity was verified using data acquired by the Spinning Enhanced Visible and Infrared Imager (SEVIRI) sensor onboard the Meteosat Second Generation (MSG) geostationary platform. More than 20,000 SEVIRI images, collected during a four-year collaboration with the Regional Civil Protection Departments and Local Authorities of two Italian regions, were used. About 950 near-real-time ground and aerial checks of the RST-FIRES detections were performed. This study also demonstrates the added value of the RST-FIRES technique in detecting starting/small fires, with a sensitivity 3 to 70 times higher than any other similar SEVIRI-based product.

  15. Detection and clustering of features in aerial images by neuron network-based algorithm

    NASA Astrophysics Data System (ADS)

    Vozenilek, Vit

    2015-12-01

    The paper presents an algorithm for the detection and clustering of features in aerial photographs based on artificial neural networks. The presented approach is not focused on the detection of specific topographic features, but on the combination of general feature analysis and its use for clustering and backward projection of clusters onto the aerial image. The basis of the algorithm is the calculation of the total error of the network and the adjustment of the network weights to minimize that error. A classic bipolar sigmoid was used as the activation function of the neurons, and the basic backpropagation method was used for learning. To verify that a set of features is able to represent the image content from the user's perspective, a web application was built (ASP.NET on the Microsoft .NET platform). The main achievements include the finding that man-made objects in aerial images can be successfully identified by the detection of shapes and anomalies. It was also found that an appropriate combination of comprehensive features describing the colors and selected shapes of individual areas can be useful for image analysis.

  16. Algorithm Summary and Evaluation: Automatic Implementation of Ringdown Analysis for Electromechanical Mode Identification from Phasor Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.

    2010-02-28

    Small signal stability problems are one of the major threats to grid stability and reliability. Prony analysis has been successfully applied to ringdown data to monitor the electromechanical modes of a power system using phasor measurement unit (PMU) data. To facilitate an on-line application of mode estimation, this paper develops a recursive algorithm for implementing Prony analysis and proposes an oscillation detection method to detect ringdown data in real time. By automatically detecting ringdown data, the proposed method helps guarantee that Prony analysis is applied properly and promptly to the ringdown data, so that mode estimation can be performed reliably and in a timely manner. The proposed method is tested using Monte Carlo simulations based on a 17-machine model and is shown to be able to properly identify the oscillation data for on-line application of Prony analysis. In addition, the proposed method is applied to field measurement data from WECC to show the performance of the proposed algorithm.

  17. Improvements in Night-Time Low Cloud Detection and MODIS-Style Cloud Optical Properties from MSG SEVIRI

    NASA Technical Reports Server (NTRS)

    Wind, Galina (Gala); Platnick, Steven; Riedi, Jerome

    2011-01-01

    The MODIS cloud optical properties algorithm (MOD06/MYD06 for Terra and Aqua MODIS, respectively) slated for production in Data Collection 6 has been adapted to execute using the available channels on MSG SEVIRI. Available MODIS-style retrievals include IR window-derived cloud top properties, using the new Collection 6 cloud top properties algorithm, cloud optical thickness from VIS/NIR bands, cloud effective radius from 1.6 and 3.7 μm, and cloud ice/water path. We also provide a pixel-level uncertainty estimate for successful retrievals. It was found that at nighttime the SEVIRI cloud mask tends to report unnaturally low cloud fractions for marine stratocumulus clouds. A correction algorithm that improves the detection of such clouds has been developed. We will discuss the improvements to nighttime low cloud detection for SEVIRI and show examples and comparisons with MODIS and CALIPSO. We will also show examples of MODIS-style pixel-level (Level-2) cloud retrievals for SEVIRI with comparisons to MODIS.

  18. Development of a Guide-Dog Robot: Leading and Recognizing a Visually-Handicapped Person using a LRF

    NASA Astrophysics Data System (ADS)

    Saegusa, Shozo; Yasuda, Yuya; Uratani, Yoshitaka; Tanaka, Eiichirou; Makino, Toshiaki; Chang, Jen-Yuan (James)

    A conceptual Guide-Dog Robot prototype to lead and to recognize a visually-handicapped person is developed and discussed in this paper. Key design features of the robot include a movable platform, a human-machine interface, and the capability of avoiding obstacles. A novel algorithm enabling the robot to recognize its follower's locomotion, as well as to detect the center of the corridor, is proposed and implemented in the robot's human-machine interface. It is demonstrated that, using the proposed leading and detecting algorithm along with a rapid-scanning laser range finder (LRF) sensor, the robot is able to successfully and effectively lead a human walking in a corridor without running into obstacles such as trash boxes or adjacent walking persons. The position and trajectory of the robot leading a human maneuvering in a common corridor environment were measured by an independent LRF observer. The measured data suggest that the proposed algorithms effectively enable the robot to detect the center of the corridor and the position of its follower correctly.
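
    As a rough illustration of the corridor-center idea, the lateral offset can be estimated from a planar LRF scan by comparing the distances to the left and right walls. This sketch is an assumption about one plausible formulation, not the authors' algorithm; the averaging window is arbitrary.

    ```python
    import numpy as np

    def corridor_center_offset(angles_rad, ranges_m):
        """Estimate the robot's signed lateral offset from the corridor center.

        Compares median LRF ranges in narrow windows around +90 deg (left
        wall) and -90 deg (right wall); a positive result means the robot is
        closer to the right wall and should steer left. The window width is
        an arbitrary assumption, and the scan is assumed to cover +/-90 deg.
        """
        angles_rad = np.asarray(angles_rad)
        ranges_m = np.asarray(ranges_m)
        half_win = np.deg2rad(5.0)
        left = np.median(ranges_m[np.abs(angles_rad - np.pi / 2) < half_win])
        right = np.median(ranges_m[np.abs(angles_rad + np.pi / 2) < half_win])
        return (left - right) / 2.0
    ```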

  19. Automatic detection and classification of damage zone(s) for incorporating in digital image correlation technique

    NASA Astrophysics Data System (ADS)

    Bhattacharjee, Sudipta; Deb, Debasis

    2016-07-01

    Digital image correlation (DIC) is a technique developed for monitoring the surface deformation/displacement of an object under loading conditions. This method is further refined to make it capable of handling discontinuities on the surface of the sample. A damage zone refers to a surface area that fractures and opens in the course of loading. In this study, an algorithm is presented to automatically detect multiple damage zones in the deformed image. The algorithm identifies the pixels located inside these zones and eliminates them from the FEM-DIC processes. The proposed algorithm is successfully applied to several damaged samples to estimate the displacement fields of an object under loading conditions. This study shows that the resulting displacement fields represent the damage conditions reasonably well, compared to the regular FEM-DIC technique that does not account for damage zones.

  20. Learning Cue Phrase Patterns from Radiology Reports Using a Genetic Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, Robert M; Beckerman, Barbara G; Potok, Thomas E

    2009-01-01

    Various computer-assisted technologies have been developed to assist radiologists in detecting cancer; however, the algorithms still lack high degrees of sensitivity and specificity, and must undergo machine learning against a training set with known pathologies in order to be further refined toward higher validity of truth. This work describes an approach to learning cue phrase patterns in radiology reports that utilizes a genetic algorithm (GA) as the learning method. The approach described here successfully learned cue phrase patterns for two distinct classes of radiology reports. These patterns can then be used as a basis for automatically categorizing, clustering, or retrieving relevant data for the user.

  1. Using Machine Learning for Advanced Anomaly Detection and Classification

    NASA Astrophysics Data System (ADS)

    Lane, B.; Poole, M.; Camp, M.; Murray-Krezan, J.

    2016-09-01

    Machine Learning (ML) techniques have successfully been used in a wide variety of applications to automatically detect, and potentially classify, changes in an activity or series of activities by utilizing large amounts of data, sometimes even seemingly unrelated data. The amount of data being collected, processed, and stored in the Space Situational Awareness (SSA) domain has grown at an exponential rate and is now well suited for ML. This paper describes the development of advanced algorithms to deliver significant improvements in the characterization of deep space objects and in indication and warning (I&W), using a global network of telescopes that collect photometric data on a multitude of space-based objects. The Phase II Air Force Research Laboratory (AFRL) Small Business Innovative Research (SBIR) project Autonomous Characterization Algorithms for Change Detection and Characterization (ACDC), contracted to ExoAnalytic Solutions Inc., provides the ability to detect and identify photometric signature changes due to potential space object changes (e.g. stability, tumble rate, aspect ratio), and to correlate observed changes with potential behavioral changes using a variety of techniques, including supervised learning. Furthermore, these algorithms run in real time on data being collected and processed by the ExoAnalytic Space Operations Center (EspOC), providing timely alerts and warnings while dynamically creating collection requirements for the EspOC for the algorithms that generate higher-fidelity I&W. This paper will discuss the recently implemented ACDC algorithms, including the general design approach and results to date. The use of supervised algorithms, such as Support Vector Machines, Neural Networks, and k-Nearest Neighbors, and of unsupervised algorithms, for example k-means, Principal Component Analysis, and Hierarchical Clustering, and the implementation of these algorithms, is explored. Results of applying these algorithms to EspOC data, both in an off-line "pattern of life" analysis and on-line in real time (i.e., as data are collected), will be presented. Finally, future work in applying ML for SSA will be discussed.
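
    As an illustration of the supervised branch, a minimal scikit-learn pipeline for classifying photometric-signature feature vectors might look as follows. The feature matrix, labels, and kernel choice are entirely hypothetical stand-ins, not ACDC's actual models or data.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Hypothetical stand-ins: rows are photometric-signature feature vectors
    # (e.g., brightness statistics, periodicity measures); labels mark two
    # behavior classes. Synthetic data only, for illustration.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 6))
    y = rng.integers(0, 2, size=200)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X[:150], y[:150])                 # train on the first 150 samples
    print("held-out accuracy:", clf.score(X[150:], y[150:]))
    ```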

  2. Retention time alignment of LC/MS data by a divide-and-conquer algorithm.

    PubMed

    Zhang, Zhongqi

    2012-04-01

    Liquid chromatography-mass spectrometry (LC/MS) has become the method of choice for characterizing complex mixtures. These analyses often involve quantitative comparison of components in multiple samples. To achieve automated sample comparison, the components of interest must be detected and identified, and their retention times aligned and peak areas calculated. This article describes a simple pairwise iterative retention time alignment algorithm, based on the divide-and-conquer approach, for alignment of ion features detected in LC/MS experiments. In this iterative algorithm, ion features in the sample run are first aligned with features in the reference run by applying a single constant shift of retention time. The sample chromatogram is then divided into two shorter chromatograms, which are aligned to the reference chromatogram the same way. Each shorter chromatogram is further divided into even shorter chromatograms. This process continues until each chromatogram is sufficiently narrow so that ion features within it have a similar retention time shift. In six pairwise LC/MS alignment examples containing a total of 6507 confirmed true corresponding feature pairs with retention time shifts up to five peak widths, the algorithm successfully aligned these features with an error rate of 0.2%. The alignment algorithm is demonstrated to be fast, robust, fully automatic, and superior to other algorithms. After alignment and gap-filling of detected ion features, their abundances can be tabulated for direct comparison between samples.
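
    A minimal sketch of the divide-and-conquer recursion is given below. The matching score (median nearest-feature distance over a coarse grid of candidate shifts), the grid limits, and the stopping span are illustrative assumptions standing in for the paper's own alignment score.

    ```python
    import numpy as np

    def align(sample_rts, ref_rts, t_lo, t_hi, min_span=1.0):
        """Recursively align sample feature retention times to a reference.

        At each level, the single constant shift that best matches features in
        [t_lo, t_hi) is applied; the window is then halved and each half is
        aligned the same way. Units are minutes (assumption).
        """
        sample_rts = np.asarray(sample_rts, dtype=float).copy()
        ref_rts = np.asarray(ref_rts, dtype=float)
        in_win = (sample_rts >= t_lo) & (sample_rts < t_hi)
        if not np.any(in_win) or (t_hi - t_lo) <= min_span:
            return sample_rts
        # Score each candidate shift by the median nearest-reference distance.
        shifts = np.linspace(-0.5, 0.5, 41)
        cost = [np.median(np.abs(ref_rts[:, None]
                                 - (sample_rts[in_win] + s)).min(axis=0))
                for s in shifts]
        sample_rts[in_win] += shifts[int(np.argmin(cost))]
        mid = 0.5 * (t_lo + t_hi)
        sample_rts = align(sample_rts, ref_rts, t_lo, mid, min_span)
        return align(sample_rts, ref_rts, mid, t_hi, min_span)
    ```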

  3. A Cough-Based Algorithm for Automatic Diagnosis of Pertussis.

    PubMed

    Pramono, Renard Xaviero Adhi; Imtiaz, Syed Anas; Rodriguez-Villegas, Esther

    2016-01-01

    Pertussis is a contagious respiratory disease which mainly affects young children and can be fatal if left untreated. The World Health Organization estimates 16 million pertussis cases annually worldwide, resulting in over 200,000 deaths. It is prevalent mainly in developing countries, where it is difficult to diagnose due to the lack of healthcare facilities and medical professionals. Hence, a low-cost, quick and easily accessible solution is needed to provide pertussis diagnosis in such areas to contain an outbreak. In this paper we present an algorithm for automated diagnosis of pertussis using audio signals, by analyzing cough and whoop sounds. The algorithm consists of three main blocks that perform automatic cough detection, cough classification and whooping sound detection. Each of these extracts relevant features from the audio signal and subsequently classifies them using a logistic regression model. The output from these blocks is collated to provide a pertussis likelihood diagnosis. The performance of the proposed algorithm is evaluated using audio recordings from 38 patients. The algorithm diagnosed pertussis successfully from all audio recordings without any false diagnoses. It can also automatically detect individual cough sounds with 92% accuracy and a PPV of 97%. The low complexity of the proposed algorithm, coupled with its high accuracy, demonstrates that it can be readily deployed using smartphones and can be extremely useful for quick identification or early screening of pertussis and for controlling infection outbreaks.

  4. A Cough-Based Algorithm for Automatic Diagnosis of Pertussis

    PubMed Central

    Pramono, Renard Xaviero Adhi; Imtiaz, Syed Anas; Rodriguez-Villegas, Esther

    2016-01-01

    Pertussis is a contagious respiratory disease which mainly affects young children and can be fatal if left untreated. The World Health Organization estimates 16 million pertussis cases annually worldwide, resulting in over 200,000 deaths. It is prevalent mainly in developing countries, where it is difficult to diagnose due to the lack of healthcare facilities and medical professionals. Hence, a low-cost, quick and easily accessible solution is needed to provide pertussis diagnosis in such areas to contain an outbreak. In this paper we present an algorithm for automated diagnosis of pertussis using audio signals, by analyzing cough and whoop sounds. The algorithm consists of three main blocks that perform automatic cough detection, cough classification and whooping sound detection. Each of these extracts relevant features from the audio signal and subsequently classifies them using a logistic regression model. The output from these blocks is collated to provide a pertussis likelihood diagnosis. The performance of the proposed algorithm is evaluated using audio recordings from 38 patients. The algorithm diagnosed pertussis successfully from all audio recordings without any false diagnoses. It can also automatically detect individual cough sounds with 92% accuracy and a PPV of 97%. The low complexity of the proposed algorithm, coupled with its high accuracy, demonstrates that it can be readily deployed using smartphones and can be extremely useful for quick identification or early screening of pertussis and for controlling infection outbreaks. PMID:27583523

  5. Algorithm-based arterial blood sampling recognition increasing safety in point-of-care diagnostics.

    PubMed

    Peter, Jörg; Klingert, Wilfried; Klingert, Kathrin; Thiel, Karolin; Wulff, Daniel; Königsrainer, Alfred; Rosenstiel, Wolfgang; Schenk, Martin

    2017-08-04

    The aim was to detect blood withdrawal in patients with arterial blood pressure monitoring, in order to increase patient safety and provide better sample dating. Blood pressure information obtained from a patient monitor was fed as a real-time data stream to an experimental medical framework. This framework was connected to an analytical application which observes changes in systolic, diastolic and mean pressure to determine anomalies in the continuous data stream. Detection was based on an increased mean blood pressure caused by the closing of the withdrawal three-way tap, together with an absence of systolic and diastolic measurements during this manipulation. For evaluation of the proposed algorithm, measured data from animal studies in healthy pigs were used. Using this novel approach for processing real-time measurement data from arterial pressure monitoring, the exact time of blood withdrawal could be successfully detected both retrospectively and in real time. The algorithm was able to detect 422 of 434 (97%) blood withdrawals for blood gas analysis in the retrospective analysis of 7 study trials. Additionally, 64 sampling events for other procedures, such as laboratory and activated clotting time analyses, were detected. The proposed algorithm achieved a sensitivity of 0.97, a precision of 0.96 and an F1 score of 0.97. Arterial blood pressure monitoring data can thus be used to accurately identify individual blood samplings, in order to reduce sample mix-ups and thereby increase patient safety.
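
    The detection rule described above lends itself to a very small sketch: flag samples where the pulse pressure collapses (no systolic/diastolic excursions) while the mean pressure rises above baseline. The thresholds below are illustrative assumptions, not the study's calibrated values.

    ```python
    import numpy as np

    def detect_withdrawal(mean_bp, sys_bp, dia_bp, baseline_mean,
                          pulse_min=5.0, rise_min=10.0):
        """Flag samples that look like a blood-withdrawal episode.

        Heuristic stand-in for the study's detector: closing the three-way tap
        raises the measured mean pressure while the pulse pressure (systolic
        minus diastolic) collapses. Thresholds are in mmHg (assumptions).
        """
        pulse_pressure = np.asarray(sys_bp) - np.asarray(dia_bp)
        flat = pulse_pressure < pulse_min                  # pulsation absent
        elevated = np.asarray(mean_bp) > baseline_mean + rise_min
        return flat & elevated                             # boolean mask per sample
    ```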

  6. Automatic Near-Real-Time Detection of CMEs in Mauna Loa K-Cor Coronagraph Images

    NASA Astrophysics Data System (ADS)

    Thompson, W. T.; St. Cyr, O. C.; Burkepile, J. T.; Posner, A.

    2017-10-01

    A simple algorithm has been developed to detect the onset of coronal mass ejections (CMEs), together with speed estimates, in near-real time using linearly polarized white-light solar coronal images from the Mauna Loa Solar Observatory K-Cor telescope. Ground observations in the low corona can warn of CMEs well before they appear in space coronagraphs. The algorithm used is a variation on the Solar Eruptive Event Detection System developed at George Mason University. It was tested against K-Cor data taken between 29 April 2014 and 20 February 2017, on days identified as containing CMEs. This resulted in testing of 139 days' worth of data containing 171 CMEs. The detection rate varied from close to 80% when solar activity was high down to as low as 20-30% when activity was low. The difference in effectiveness with solar cycle is attributed to the relative prevalence of strong CMEs between active and quiet periods. There were also 12 false detections, leading to an average false detection rate of 8.6%. The K-Cor data were also compared with major solar energetic particle (SEP) storms during this time period. There were three SEP events detected either at Earth or at one of the two STEREO spacecraft when K-Cor was observing during the relevant time period. The algorithm successfully generated alerts for two of these events, with lead times of 1-3 h before the SEP onset at 1 AU. The third event was not detected by the automatic algorithm because of the unusually broad width in position angle.

  7. Design and testing of artifact-suppressed adaptive histogram equalization: a contrast-enhancement technique for display of digital chest radiographs.

    PubMed

    Rehm, K; Seeley, G W; Dallas, W J; Ovitt, T W; Seeger, J F

    1990-01-01

    One of the goals of our research in the field of digital radiography has been to develop contrast-enhancement algorithms for eventual use in the display of chest images on video devices with the aim of preserving the diagnostic information presently available with film, some of which would normally be lost because of the smaller dynamic range of video monitors. The ASAHE algorithm discussed in this article has been tested by investigating observer performance in a difficult detection task involving phantoms and simulated lung nodules, using film as the output medium. The results of the experiment showed that the algorithm is successful in providing contrast-enhanced, natural-looking chest images while maintaining diagnostic information. The algorithm did not effect an increase in nodule detectability, but this was not unexpected because film is a medium capable of displaying a wide range of gray levels. It is sufficient at this stage to show that there is no degradation in observer performance. Future tests will evaluate the performance of the ASAHE algorithm in preparing chest images for video display.

  8. Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael

    2009-01-01

    Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
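
    The matrix-multiplication case can be illustrated with the classic checksum construction that ABFT methods build on: extend the operands with checksum rows/columns and verify consistency after the multiply. This is a generic sketch of the technique, not JPL's flight code.

    ```python
    import numpy as np

    def abft_matmul(A, B, tol=1e-8):
        """Matrix multiply with a Huang-Abraham-style ABFT checksum test.

        A gains a column-checksum row and B a row-checksum column; after the
        multiply, a bit flip in the result surfaces as a checksum mismatch.
        `tol` absorbs floating-point roundoff (an illustrative setting).
        """
        Ac = np.vstack([A, A.sum(axis=0)])                   # column checksums
        Br = np.hstack([B, B.sum(axis=1, keepdims=True)])    # row checksums
        C = Ac @ Br
        data = C[:-1, :-1]
        row_ok = np.allclose(C[-1, :-1], data.sum(axis=0), atol=tol)
        col_ok = np.allclose(C[:-1, -1], data.sum(axis=1), atol=tol)
        if not (row_ok and col_ok):
            raise RuntimeError("checksum mismatch: possible induced error")
        return data
    ```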

  9. Fast Human Detection for Intelligent Monitoring Using Surveillance Visible Sensors

    PubMed Central

    Ko, Byoung Chul; Jeong, Mira; Nam, JaeYeal

    2014-01-01

    Human detection using visible surveillance sensors is an important and challenging work for intruder detection and safety management. The biggest barrier of real-time human detection is the computational time required for dense image scaling and scanning windows extracted from an entire image. This paper proposes fast human detection by selecting optimal levels of image scale using each level's adaptive region-of-interest (ROI). To estimate the image-scaling level, we generate a Hough windows map (HWM) and select a few optimal image scales based on the strength of the HWM and the divide-and-conquer algorithm. Furthermore, adaptive ROIs are arranged per image scale to provide a different search area. We employ a cascade random forests classifier to separate candidate windows into human and nonhuman classes. The proposed algorithm has been successfully applied to real-world surveillance video sequences, and its detection accuracy and computational speed show a better performance than those of other related methods. PMID:25393782

  10. Change Detection Algorithms for Surveillance in Visual IoT: A Comparative Study

    NASA Astrophysics Data System (ADS)

    Akram, Beenish Ayesha; Zafar, Amna; Akbar, Ali Hammad; Wajid, Bilal; Chaudhry, Shafique Ahmad

    2018-01-01

    The VIoT (Visual Internet of Things) connects the virtual information world with real-world objects using sensors and pervasive computing. For video surveillance in VIoT, ChD (Change Detection) is a critical component. ChD algorithms identify regions of change in multiple images of the same scene recorded at different time intervals. This paper presents a performance comparison of histogram-thresholding and classification ChD algorithms, using quantitative measures, for video surveillance in VIoT based on salient features of the datasets. The thresholding algorithms (Otsu, Kapur, and Rosin) and the classification methods (k-means and EM (Expectation Maximization)) were simulated in MATLAB using diverse datasets. For performance evaluation, the quantitative measures used include OSR (Overall Success Rate), YC (Yule's Coefficient) and JC (Jaccard's Coefficient), as well as execution time and memory consumption. Experimental results showed that Kapur's algorithm performed better for both indoor and outdoor environments with illumination changes, shadowing, and medium- to fast-moving objects; however, its performance degraded for small objects producing only minor changes. The Otsu algorithm showed better results for indoor environments with slow to medium changes and nomadic object mobility. k-means showed good results in indoor environments with small objects producing slow change, no shadowing, and scarce illumination changes.
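
    For reference, the thresholding branch can be illustrated with Otsu's method applied to an absolute frame-difference image, as sketched below in Python (the paper's simulations were in MATLAB); the 256-bin histogram is an assumption about the input's dynamic range.

    ```python
    import numpy as np

    def otsu_threshold(diff):
        """Otsu's threshold for an absolute frame-difference image.

        Returns the gray level maximizing the between-class variance
        sigma_b^2(k) = (mu_T*omega(k) - mu(k))^2 / (omega(k)*(1 - omega(k))).
        """
        hist, edges = np.histogram(np.ravel(diff), bins=256)
        p = hist.astype(float) / hist.sum()
        omega = np.cumsum(p)                    # class-0 probability
        mu = np.cumsum(p * np.arange(256))      # cumulative bin-index mean
        with np.errstate(divide="ignore", invalid="ignore"):
            sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1.0 - omega))
        k = int(np.nanargmax(sigma_b))
        return edges[k + 1]

    # Usage: mark pixels whose absolute inter-frame difference exceeds the
    # automatically chosen threshold as "changed".
    # d = np.abs(frame_t.astype(float) - frame_0.astype(float))
    # change_mask = d > otsu_threshold(d)
    ```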

  11. A multi-data source surveillance system to detect a bioterrorism attack during the G8 Summit in Scotland.

    PubMed

    Meyer, N; McMenamin, J; Robertson, C; Donaghy, M; Allardice, G; Cooper, D

    2008-07-01

    In 18 weeks, Health Protection Scotland (HPS) deployed a syndromic surveillance system for early detection of natural or intentional disease outbreaks during the G8 Summit 2005 at Gleneagles, Scotland. The system integrated clinical and non-clinical datasets. Clinical datasets included Accident & Emergency (A&E) syndromes and General Practice (GP) codes grouped into syndromes. Non-clinical data included telephone calls to a nurse helpline, laboratory test orders, and hotel staff absenteeism. A cumulative sum-based detection algorithm and a log-linear regression model identified signals in the data. The system also had a fax-based track for real-time identification of unusual presentations. Ninety-five signals were triggered by the detection algorithms, and four forms were faxed to HPS. Thirteen signals were investigated. The system successfully complemented a traditional surveillance system by identifying a small cluster of gastroenteritis among the police force and triggered interventions to prevent further cases.
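
    A one-sided cumulative sum (CUSUM) detector of the kind mentioned above can be sketched in a few lines. The allowance k and decision limit h below are conventional textbook defaults, not the values HPS used.

    ```python
    import numpy as np

    def cusum_alarms(counts, baseline, k=0.5, h=4.0):
        """One-sided CUSUM detector for daily syndromic counts.

        S_t = max(0, S_{t-1} + (x_t - mu - k*sigma)); an alarm fires when
        S_t exceeds h*sigma, where mu and sigma come from a baseline period.
        """
        mu = float(np.mean(baseline))
        sigma = float(np.std(baseline)) or 1.0
        s, alarms = 0.0, []
        for t, x in enumerate(counts):
            s = max(0.0, s + (x - mu - k * sigma))
            if s > h * sigma:
                alarms.append(t)                # index of the alarming day
        return alarms
    ```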

  12. Extracting cardiac shapes and motion of the chick embryo heart outflow tract from four-dimensional optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Yin, Xin; Liu, Aiping; Thornburg, Kent L.; Wang, Ruikang K.; Rugonyi, Sandra

    2012-09-01

    Recent advances in optical coherence tomography (OCT), and the development of image reconstruction algorithms, have enabled four-dimensional (4-D) imaging (three-dimensional imaging over time) of the embryonic heart. To further analyze and quantify the dynamics of cardiac beating, segmentation procedures that can extract the shape of the heart and its motion are needed. Most previous studies analyzed cardiac image sequences using manually extracted shapes and measurements. However, this is time-consuming and subject to inter-operator variability. Automated or semi-automated analyses of 4-D cardiac OCT images, although very desirable, are also extremely challenging. This work proposes a robust algorithm to semi-automatically detect and track cardiac tissue layers from 4-D OCT images of early (tubular) embryonic hearts. Our algorithm uses a two-dimensional (2-D) deformable double-line model (DLM) to detect target cardiac tissues. The detection algorithm uses a maximum-likelihood estimator and was successfully applied to 4-D in vivo OCT images of the heart outflow tract of day-3 chicken embryos. The extracted shapes captured the dynamics of the chick embryonic heart outflow tract wall, enabling further analysis of cardiac motion.

  13. Estimating the coordinates of pillars and posts in the parking lots for intelligent parking assist system

    NASA Astrophysics Data System (ADS)

    Choi, Jae Hyung; Kuk, Jung Gap; Kim, Young Il; Cho, Nam Ik

    2012-01-01

    This paper proposes an algorithm for the detection of pillars or posts in video captured by a single camera mounted on the front side of the rear-view mirror of a car. The main purpose of this algorithm is to complement a weakness of current ultrasonic parking assist systems, which do not find the exact position of pillars well and do not recognize narrow posts. The proposed algorithm consists of three steps: straight-line detection, line tracking, and estimation of the 3D position of pillars. In the first step, strong lines are found by the Hough transform. The second step combines detection and tracking, and the third calculates the 3D position of each line by analyzing the trajectory of its relative positions together with the camera parameters. Experiments on synthetic and real images show that the proposed method successfully locates and tracks the position of pillars, which helps the ultrasonic system to correctly locate the edges of pillars. It is believed that the proposed algorithm can also be employed as a basic element of a vision-based autonomous driving system.
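
    The first step can be illustrated with OpenCV's standard Hough transform, keeping near-vertical lines as candidate pillar edges. The Canny and accumulator thresholds, and the verticality tolerance, are illustrative assumptions rather than the paper's tuned values.

    ```python
    import cv2
    import numpy as np

    def detect_strong_lines(frame_gray):
        """Step 1: find strong straight lines with the Hough transform.

        Canny edges feed cv2.HoughLines; near-vertical lines are kept as
        candidate pillar edges.
        """
        edges = cv2.Canny(frame_gray, 50, 150)
        lines = cv2.HoughLines(edges, 1, np.pi / 180.0, 150)
        if lines is None:
            return []
        tol = np.deg2rad(15.0)
        # theta is the angle of the line's normal; values near 0 or pi
        # correspond to near-vertical lines.
        return [(rho, theta) for rho, theta in lines[:, 0]
                if theta < tol or theta > np.pi - tol]
    ```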

  14. Secure Localization in the Presence of Colluders in WSNs

    PubMed Central

    Barbeau, Michel; Corriveau, Jean-Pierre; Garcia-Alfaro, Joaquin; Yao, Meng

    2017-01-01

    We address the challenge of correctly estimating the position of wireless sensor network (WSN) nodes in the presence of malicious adversaries. We consider adversarial situations during the execution of node localization under three classes of colluding adversaries. We describe a decentralized algorithm that aims at determining the position of nodes in the presence of such colluders. Colluders are assumed to either forge or manipulate the information they exchange with the other nodes of the WSN. This algorithm allows location-unknown nodes to successfully detect adversaries within their communication range. Numeric simulation is reported to validate the approach. Results show the validity of the proposal, both in terms of localization and adversary detection. PMID:28817077

  15. ModuleMiner - improved computational detection of cis-regulatory modules: are there different modes of gene regulation in embryonic development and adult tissues?

    PubMed Central

    Van Loo, Peter; Aerts, Stein; Thienpont, Bernard; De Moor, Bart; Moreau, Yves; Marynen, Peter

    2008-01-01

    We present ModuleMiner, a novel algorithm for computationally detecting cis-regulatory modules (CRMs) in a set of co-expressed genes. ModuleMiner outperforms other methods for CRM detection on benchmark data, and successfully detects CRMs in tissue-specific microarray clusters and in embryonic development gene sets. Interestingly, CRM predictions for differentiated tissues exhibit strong enrichment close to the transcription start site, whereas CRM predictions for embryonic development gene sets are depleted in this region. PMID:18394174

  16. Advanced Algorithms and High-Performance Testbed for Large-Scale Site Characterization and Subsurface Target Detecting Using Airborne Ground Penetrating SAR

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Collier, James B.; Citak, Ari

    1997-01-01

    A team comprising the US Army Corps of Engineers (Omaha District and Engineering and Support Center, Huntsville), the Jet Propulsion Laboratory (JPL), Stanford Research Institute (SRI), and Montgomery Watson is currently in the process of planning and conducting the largest-ever survey at the Former Buckley Field (60,000 acres) in Colorado, using SRI's airborne, ground-penetrating, Synthetic Aperture Radar (SAR). The purpose of this survey is the detection of surface and subsurface Unexploded Ordnance (UXO) and, in a broader sense, site characterization for the identification of contaminated as well as clear areas. In preparation for such a large-scale survey, JPL has been developing advanced algorithms and a high-performance testbed for processing the massive amount of SAR data expected from this site. Two key requirements of this project are the accuracy (in terms of UXO detection) and speed of SAR data processing. The first key feature of this testbed is a large degree of automation and a minimal need for human perception in the processing, in order to achieve an acceptable processing rate of several hundred acres per day. For accurate UXO detection, novel algorithms have been developed and implemented. These algorithms analyze dual-polarized (HH and VV) SAR data. They are based on the correlation of HH and VV SAR data and involve a rather large set of parameters for accurate detection of UXO. For each specific site, this set of parameters can be optimized by using ground truth data (i.e., known surface and subsurface UXOs). In this paper, we discuss these algorithms and their successful application to the detection of surface and subsurface anti-tank mines using a data set from Yuma Proving Ground, AZ, acquired by the SRI SAR.

  17. Waveform Similarity Analysis: A Simple Template Comparing Approach for Detecting and Quantifying Noisy Evoked Compound Action Potentials.

    PubMed

    Potas, Jason Robert; de Castro, Newton Gonçalves; Maddess, Ted; de Souza, Marcio Nogueira

    2015-01-01

    Experimental electrophysiological assessment of evoked responses from regenerating nerves is challenging due to the typically complex responses, with events dispersed over various latencies, and poor signal-to-noise ratio. Our objective was to automate the detection of compound action potential events and derive their latencies and magnitudes using a simple cross-correlation template comparison approach. For this, we developed an algorithm called Waveform Similarity Analysis. To test the algorithm, challenging signals were generated in vivo by stimulating sural and sciatic nerves, whilst recording evoked potentials at the sciatic nerve and tibialis anterior muscle, respectively, in animals recovering from sciatic nerve transection. Our template for the algorithm was generated from responses evoked on the intact side. We also simulated noisy signals and examined the output of the Waveform Similarity Analysis algorithm with imperfect templates. Signals were detected and quantified using Waveform Similarity Analysis, which was compared to event detection, latency and magnitude measurements of the same signals performed by a trained observer, a process we called Trained Eye Analysis. The Waveform Similarity Analysis algorithm could successfully detect and quantify simple or complex responses from nerve and muscle compound action potentials of intact or regenerated nerves. Even with an incorrectly specified template, Waveform Similarity Analysis outperformed Trained Eye Analysis in predicting signal amplitude, although it produced consistent latency errors for the simulated signals examined. Compared to the trained eye, Waveform Similarity Analysis is automatic, objective, does not rely on the observer to identify and/or measure peaks, and can detect small clustered events even when the signal-to-noise ratio is poor. Waveform Similarity Analysis provides a simple, reliable and convenient approach to quantifying the latencies and magnitudes of complex waveforms and therefore serves as a useful tool for studying evoked compound action potentials in neural regeneration studies.
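
    The underlying template-comparison idea can be sketched with a normalized sliding cross-correlation: local correlation peaks above a threshold give event latencies, and the projection onto the template gives a magnitude proxy. This is a minimal stand-in, not the published algorithm's exact scoring; the threshold is an assumption.

    ```python
    import numpy as np

    def detect_events(signal, template, threshold=0.8):
        """Detect evoked events by sliding normalized cross-correlation."""
        t = np.asarray(template, dtype=float)
        t = t - t.mean()
        t /= (np.linalg.norm(t) + 1e-12)                 # unit-norm template
        n = len(t)
        sig = np.asarray(signal, dtype=float)
        scores, scales = [], []
        for i in range(len(sig) - n + 1):
            w = sig[i:i + n] - sig[i:i + n].mean()
            scales.append(float(np.dot(w, t)))           # magnitude proxy
            scores.append(scales[-1] / (np.linalg.norm(w) + 1e-12))
        scores = np.asarray(scores)
        # Local correlation maxima above the threshold are detections.
        peaks = [i for i in range(1, len(scores) - 1)
                 if scores[i] > threshold
                 and scores[i] >= scores[i - 1] and scores[i] >= scores[i + 1]]
        return [(i, scales[i]) for i in peaks]           # (latency, magnitude)
    ```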

  18. Detection of convective initiation using Meteosat SEVIRI: implementation in and verification with the tracking and nowcasting algorithm Cb-TRAM

    NASA Astrophysics Data System (ADS)

    Merk, D.; Zinner, T.

    2013-02-01

    In this paper a new detection scheme for convective initiation (CI) under day and night conditions is presented. The new algorithm combines the strengths of two existing methods for detecting convective initiation with geostationary satellite data and uses the channels of the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG). For the new algorithm, five infrared criteria from the Satellite Convection Analysis and Tracking algorithm (SATCAST) and one high-resolution visible channel (HRV) criterion from Cb-TRAM were adapted. This set of criteria aims to identify the typical development of quickly developing convective cells at an early stage. The different criteria include time trends of the 10.8 IR channel and IR channel differences, as well as their time trends. To provide the trend fields, an optical-flow-based method is used: the pyramidal matching algorithm, which is part of Cb-TRAM. The new detection scheme is implemented in Cb-TRAM and is verified for seven days which comprise different weather situations in Central Europe. Contrasted with the original early-stage detection scheme of Cb-TRAM, skill scores are provided. From the comparison against detections of later thunderstorm stages, which are also provided by Cb-TRAM, a decrease in false prior warnings (false alarm ratio) from 91 to 81% is presented, an increase of the critical success index from 7.4 to 12.7%, and a decrease of the BIAS from 320 to 146% for normal scan mode. Similar trends are found for rapid scan mode. Most obvious is the decline of false alarms found for synoptic conditions with upper cold air masses triggering convection.

  19. Detection of convective initiation using Meteosat SEVIRI: implementation in and verification with the tracking and nowcasting algorithm Cb-TRAM

    NASA Astrophysics Data System (ADS)

    Merk, D.; Zinner, T.

    2013-08-01

    In this paper a new detection scheme for convective initiation (CI) under day and night conditions is presented. The new algorithm combines the strengths of two existing methods for detecting CI with geostationary satellite data. It uses the channels of the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG). For the new algorithm five infrared (IR) criteria from the Satellite Convection Analysis and Tracking algorithm (SATCAST) and one high-resolution visible channel (HRV) criterion from Cb-TRAM were adapted. This set of criteria aims to identify the typical development of quickly developing convective cells in an early stage. The different criteria include time trends of the 10.8 IR channel, and IR channel differences, as well as their time trends. To provide the trend fields an optical-flow-based method is used: the pyramidal matching algorithm, which is part of Cb-TRAM. The new detection scheme is implemented in Cb-TRAM, and is verified for seven days which comprise different weather situations in central Europe. Contrasted with the original early-stage detection scheme of Cb-TRAM, skill scores are provided. From the comparison against detections of later thunderstorm stages, which are also provided by Cb-TRAM, a decrease in false prior warnings (false alarm ratio) from 91 to 81% is presented, an increase of the critical success index from 7.4 to 12.7%, and a decrease of the BIAS from 320 to 146% for normal scan mode. Similar trends are found for rapid scan mode. Most obvious is the decline of false alarms found for the synoptic class "cold air" masses.

  20. Waveform Similarity Analysis: A Simple Template Comparing Approach for Detecting and Quantifying Noisy Evoked Compound Action Potentials

    PubMed Central

    Potas, Jason Robert; de Castro, Newton Gonçalves; Maddess, Ted; de Souza, Marcio Nogueira

    2015-01-01

    Experimental electrophysiological assessment of evoked responses from regenerating nerves is challenging due to the typically complex responses, with events dispersed over various latencies, and poor signal-to-noise ratio. Our objective was to automate the detection of compound action potential events and derive their latencies and magnitudes using a simple cross-correlation template comparison approach. For this, we developed an algorithm called Waveform Similarity Analysis. To test the algorithm, challenging signals were generated in vivo by stimulating sural and sciatic nerves, whilst recording evoked potentials at the sciatic nerve and tibialis anterior muscle, respectively, in animals recovering from sciatic nerve transection. Our template for the algorithm was generated from responses evoked on the intact side. We also simulated noisy signals and examined the output of the Waveform Similarity Analysis algorithm with imperfect templates. Signals were detected and quantified using Waveform Similarity Analysis, which was compared to event detection, latency and magnitude measurements of the same signals performed by a trained observer, a process we called Trained Eye Analysis. The Waveform Similarity Analysis algorithm could successfully detect and quantify simple or complex responses from nerve and muscle compound action potentials of intact or regenerated nerves. Even with an incorrectly specified template, Waveform Similarity Analysis outperformed Trained Eye Analysis in predicting signal amplitude, although it produced consistent latency errors for the simulated signals examined. Compared to the trained eye, Waveform Similarity Analysis is automatic, objective, does not rely on the observer to identify and/or measure peaks, and can detect small clustered events even when the signal-to-noise ratio is poor. Waveform Similarity Analysis provides a simple, reliable and convenient approach to quantifying the latencies and magnitudes of complex waveforms and therefore serves as a useful tool for studying evoked compound action potentials in neural regeneration studies. PMID:26325291

  1. A particle filter for multi-target tracking in track before detect context

    NASA Astrophysics Data System (ADS)

    Amrouche, Naima; Khenchaf, Ali; Berkani, Daoud

    2016-10-01

    The track-before-detect (TBD) approach can be used to track a single target in a highly noisy radar scene, because it makes use of unthresholded observations and, when implemented as a particle filter (PF), incorporates a binary target existence variable into its target state estimation process. This paper proposes a recursive PF-TBD approach to detect multiple targets at low signal-to-noise ratios (SNRs). The algorithm's successful performance is demonstrated using a simulated two-target example.

  2. Researches on Position Detection for Vacuum Switch Electrode

    NASA Astrophysics Data System (ADS)

    Dong, Huajun; Guo, Yingjie; Li, Jie; Kong, Yihan

    2018-03-01

    The form and evolution of the vacuum arc strongly influence vacuum switch performance, and the dynamic separation of the electrodes is the chief factor affecting the form of the arc. Consequently, detecting the electrode positions in arc images, so that the separation can be calculated, is of great significance. However, the gray-level distribution of vacuum arc images is uneven: the burning arc is bright while the electrodes are dark, and the form of the arc changes sharply, which hampers precise detection of the electrode position. In this paper, an algorithm for detecting the electrode position from vacuum arc images is proposed. Digital image processing is applied to the arc images: the upper and lower edges are detected separately, and a straight line is fitted to each edge-detection result; the fitted lines give the electrode positions, so that accurate position detection of the electrodes is realized. The experimental results show that the algorithm successfully detects the upper and lower edges of the arc and obtains the electrode position through calculation.
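
    One plausible realization of the edge-detection-plus-line-fitting step is sketched below: binarize each column of the arc image, take the first and last bright rows as the upper and lower edges, and fit a straight line to each. The binarization threshold is an illustrative assumption, not the paper's method.

    ```python
    import numpy as np

    def electrode_edges(arc_image):
        """Locate upper/lower electrode edges in a vacuum-arc image.

        For each pixel column containing bright (arc) pixels, the first and
        last bright rows approximate the upper and lower electrode edges; a
        straight line is then fitted to each edge.
        """
        img = np.asarray(arc_image, dtype=float)
        bright = img > img.mean() + img.std()   # bright arc vs dark background
        cols, upper, lower = [], [], []
        for c in range(bright.shape[1]):
            rows = np.flatnonzero(bright[:, c])
            if rows.size:
                cols.append(c)
                upper.append(rows[0])
                lower.append(rows[-1])
        up_fit = np.polyfit(cols, upper, 1)     # y = m*x + b, upper edge
        low_fit = np.polyfit(cols, lower, 1)    # y = m*x + b, lower edge
        x_mid = bright.shape[1] / 2.0
        separation = np.polyval(low_fit, x_mid) - np.polyval(up_fit, x_mid)
        return up_fit, low_fit, separation
    ```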

  3. Visual Sensor Based Abnormal Event Detection with Moving Shadow Removal in Home Healthcare Applications

    PubMed Central

    Lee, Young-Sook; Chung, Wan-Young

    2012-01-01

    Vision-based abnormal event detection for home healthcare systems can be greatly improved using visual sensor-based techniques able to detect, track and recognize objects in the scene. However, in moving object detection and tracking processes, moving cast shadows can be misclassified as parts of objects or as moving objects. Shadow removal is therefore an essential step in developing video surveillance systems. The primary goal is to design novel computer vision techniques that can extract objects more accurately and discriminate between abnormal and normal activities. To improve the accuracy of object detection and tracking, our proposed shadow removal algorithm is employed. Abnormal event detection based on a visual sensor, using shape feature variation and 3-D trajectory, is presented to overcome low fall detection rates. The experimental results showed that the success rate of detecting abnormal events was 97% with a false positive rate of 2%. Our proposed algorithm can distinguish diverse fall activities, such as forward falls, backward falls, and sideways falls, from normal activities. PMID:22368486

  4. Stereo-vision-based terrain mapping for off-road autonomous navigation

    NASA Astrophysics Data System (ADS)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-05-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas, traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.

  5. Stereo Vision Based Terrain Mapping for Off-Road Autonomous Navigation

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-01-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas, traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.

  6. Automated Processing of 2-D Gel Electrophoretograms of Genomic DNA for Hunting Pathogenic DNA Molecular Changes.

    PubMed

    Takahashi; Nakazawa; Watanabe; Konagaya

    1999-01-01

    We have developed automated processing algorithms for 2-dimensional (2-D) electrophoretograms of genomic DNA based on the RLGS (Restriction Landmark Genomic Scanning) method, which scans restriction enzyme recognition sites as landmarks and maps them onto a 2-D electrophoresis gel. Our processing algorithms enable automated spot recognition in RLGS electrophoretograms and automated comparison of a large number of such images. In the final stage of the automated processing, a master spot pattern, onto which all the spots in the RLGS images are mapped at once, can be obtained. Spot pattern variations that appear specific to pathogenic DNA molecular changes can then be easily detected by simply looking over the master spot pattern. When we applied our algorithms to the analysis of 33 RLGS images derived from human colon tissues, we successfully detected several colon-tumor-specific spot pattern changes.

  7. Reducing the number of templates for aligned-spin compact binary coalescence gravitational wave searches using metric-agnostic template nudging

    NASA Astrophysics Data System (ADS)

    Indik, Nathaniel; Fehrmann, Henning; Harke, Franz; Krishnan, Badri; Nielsen, Alex B.

    2018-06-01

    Efficient multidimensional template placement is crucial in computationally intensive matched-filtering searches for gravitational waves (GWs). Here, we implement the neighboring cell algorithm (NCA) to improve the detection volume of an existing compact binary coalescence (CBC) template bank. This algorithm has already been successfully applied for a binary millisecond pulsar search in data from the Fermi satellite. It repositions templates from overdense regions to underdense regions and reduces the number of templates that would have been required by a stochastic method to achieve the same detection volume. Our method is readily generalizable to other CBC parameter spaces. Here we apply this method to the aligned-single-spin neutron star-black hole binary coalescence inspiral-merger-ringdown gravitational wave parameter space. We show that the template nudging algorithm can attain the equivalent effectualness of the stochastic method with 12% fewer templates.

  8. Geostationary Lightning Mapper: Lessons Learned from Post Launch Test

    NASA Astrophysics Data System (ADS)

    Edgington, S.; Tillier, C. E.; Demroff, H.; VanBezooijen, R.; Christian, H. J., Jr.; Bitzer, P. M.

    2017-12-01

    Pre-launch calibration and algorithm design for the GOES Geostationary Lightning Mapper (GLM) resulted in a successful and trouble-free on-orbit activation and post-launch test sequence. Within minutes of opening the GLM aperture door on January 4th, 2017, lightning was detected across the entire field of view. During the six-month post-launch test period, numerous processing parameters on board the instrument and in the ground processing algorithms were fine-tuned. Demonstrated on-orbit performance exceeded pre-launch predictions. We provide an overview of the ground calibration sequence, on-orbit tuning of the instrument, and tuning of the ground processing algorithms (event filtering and navigation). We also touch on new insights obtained from analysis of a large and growing archive of raw GLM data, containing 3e8 flash detections derived from over 1e10 full-disk images of the Earth.

  9. Automatic interpretation of oblique ionograms

    NASA Astrophysics Data System (ADS)

    Ippolito, Alessandro; Scotto, Carlo; Francis, Matthew; Settimi, Alessandro; Cesaroni, Claudio

    2015-03-01

    We present an algorithm for the identification of trace characteristics of oblique ionograms allowing determination of the Maximum Usable Frequency (MUF) for communication between the transmitter and receiver. The algorithm automatically detects and rejects poor quality ionograms. We performed an exploratory test of the algorithm using data from a campaign of oblique soundings between Rome, Italy (41.90 N, 12.48 E) and Chania, Greece (35.51 N, 24.01 E) and also between Kalkarindji, Australia (17.43 S, 130.81 E) and Culgoora, Australia (30.30 S, 149.55 E). The success of these tests demonstrates the applicability of the method to ionograms recorded by different ionosondes in various helio and geophysical conditions.

  10. Automatic atrial capture device control in real-life practice: A multicenter experience.

    PubMed

    Giammaria, Massimo; Quirino, Gianluca; Alberio, Mariangela; Parravicini, Umberto; Cipolla, Eliana; Rossetti, Guido; Ruocco, Antonio; Senatore, Gaetano; Rametta, Francesco; Pistelli, Paolo

    2017-04-01

    Device-based fully automatic pacing capture detection is useful in clinical practice and important in the era of remote care management. The main objective of this study was to verify the effectiveness of the new ACAP Confirm® algorithm in managing atrial capture in the medium term, in comparison with early post-implantation testing. Data were collected from 318 patients (66% male; mean age, 73±10 years); 237 of these patients underwent device implantation and 81 underwent device replacement, in 31 Italian hospitals. Atrial threshold measurements were taken manually and automatically at different pulse widths before discharge and during follow-up (7±2 months) examination. Considering all performed tests, the algorithm worked as expected in 73% of cases. In patients who had undergone implantation, the success rate was 65% pre-discharge and 88% during follow-up examination (p < 0.001). We did not detect any difference in the performance of the algorithm as a result of the type of atrial lead used. The success rate was 70% during pre-discharge testing in patients undergoing device replacement. Considering all examination types, manual and automatic measurements yielded threshold values of 1.07±0.47 V and 1.03±0.47 V at 0.2-ms pulse duration (p = 0.37); 0.66±0.37 V and 0.67±0.36 V at 0.4 ms (p = 0.42); and 0.5±0.28 V and 0.5±0.29 V at 1 ms (p = 0.32). The results show that the algorithm works before discharge, and that its reliability increases over the medium term. The algorithm also proved accurate in detecting the atrial threshold automatically. The possibility of activating it does not seem to be influenced by the lead type used, but rather by the time from implantation.

  11. Long-term detection of Parkinsonian tremor activity from subthalamic nucleus local field potentials.

    PubMed

    Houston, Brady; Blumenfeld, Zack; Quinn, Emma; Bronte-Stewart, Helen; Chizeck, Howard

    2015-01-01

    Current deep brain stimulation paradigms deliver continuous stimulation to deep brain structures to ameliorate the symptoms of Parkinson's disease. This continuous stimulation has undesirable side effects and decreases the lifespan of the unit's battery, necessitating earlier replacement. A closed-loop deep brain stimulator that uses brain signals to determine when to deliver stimulation based on the occurrence of symptoms could potentially address these drawbacks of current technology. Attempts to detect Parkinsonian tremor using brain signals recorded during the implantation procedure have been successful. However, the ability of these methods to accurately detect tremor over extended periods of time is unknown. Here we use local field potentials recorded during a deep brain stimulation clinical follow-up visit 1 month after initial programming to build a tremor detection algorithm and use this algorithm to detect tremor in subsequent visits up to 8 months later. Using this method, we detected the occurrence of tremor with accuracies between 68-93%. These results demonstrate the potential of tremor detection methods for efficacious closed-loop deep brain stimulation over extended periods of time.

  12. A bioinspired collision detection algorithm for VLSI implementation

    NASA Astrophysics Data System (ADS)

    Cuadri, J.; Linan, G.; Stafford, R.; Keil, M. S.; Roca, E.

    2005-06-01

    In this paper a bioinspired algorithm for collision detection is proposed, based on previous models of the locust (Locusta migratoria) visual system reported by F.C. Rind and her group at the University of Newcastle-upon-Tyne. The algorithm is suitable for VLSI implementation in standard CMOS technologies as a system-on-chip for automotive applications. The working principle of the algorithm is to process a video stream that represents the current scenario and to fire an alarm whenever an object approaches on a collision course. Moreover, it establishes a scale of warning states, from no danger to collision alarm, depending on the activity detected in the current scenario. In the worst case, the minimum time before collision at which the model fires the collision alarm is 40 msec (1 frame before, at 25 frames per second). Since the average time needed to successfully fire an airbag system is 2 msec, even in the worst case this algorithm would be very helpful in arming the airbag system more efficiently, or even in taking some kind of collision avoidance countermeasure. Furthermore, two additional modules have been included: a "Topological Feature Estimator" and an "Attention Focusing Algorithm". The former takes into account the shape of the approaching object to decide whether it is a person, a road line or a car; this helps to take more adequate countermeasures and to filter false alarms. The latter concentrates the processing power on the most active zones of the input frame, thus saving memory and processing time.

  13. Connecting a cognitive architecture to robotic perception

    NASA Astrophysics Data System (ADS)

    Kurup, Unmesh; Lebiere, Christian; Stentz, Anthony; Hebert, Martial

    2012-06-01

    We present an integrated architecture in which perception and cognition interact and provide information to each other, leading to improved performance in real-world situations. Our system integrates the Felzenszwalb et al. object-detection algorithm with the ACT-R cognitive architecture. The targeted task is to predict and classify pedestrian behavior in a checkpoint scenario, specifically to discriminate between normal and checkpoint-avoiding behavior. The Felzenszwalb algorithm is a learning-based algorithm for detecting and localizing objects in images. ACT-R is a cognitive architecture that has been successfully used to model human cognition with a high degree of fidelity, on tasks ranging from basic decision-making to the control of complex systems such as driving or air traffic control. The Felzenszwalb algorithm detects pedestrians in the image and provides ACT-R with a set of features based primarily on their locations. ACT-R uses its pattern-matching capabilities, specifically its partial-matching and blending mechanisms, to track objects across multiple images and classify their behavior based on the sequence of observed features. ACT-R also provides feedback to the Felzenszwalb algorithm in the form of expected object locations, which allows the algorithm to eliminate false positives and improve its overall performance. This capability is an instance of the benefits pursued in developing a richer interaction between bottom-up perceptual processes and top-down goal-directed cognition. We trained the system on individual behaviors (only one person in the scene) and evaluated its performance across single and multiple behavior sets.

  14. Online damage detection using recursive principal component analysis and recursive condition indicators

    NASA Astrophysics Data System (ADS)

    Krishnan, M.; Bhowmik, B.; Tiwari, A. K.; Hazra, B.

    2017-08-01

    In this paper, a novel baseline-free approach for continuous online damage detection of multi-degree-of-freedom vibrating structures, using recursive principal component analysis (RPCA) in conjunction with online damage indicators, is proposed. In this method, acceleration data are used to obtain recursive proper orthogonal modes online via the rank-one perturbation method, and these are subsequently utilized to detect changes in the dynamic behavior of the vibrating system from its pristine state to contiguous linear or nonlinear states that indicate damage. The RPCA algorithm iterates the eigenvector and eigenvalue estimates for the sample covariance matrix and the new data point at each successive time instant, using the rank-one perturbation method. An online condition indicator (CI) based on the L2 norm of the error between the actual response and the response projected using recursive eigenvector matrix updates over successive iterations is proposed. This eliminates the need for offline post-processing and facilitates online damage detection, especially when applied to streaming data. The proposed CI, named the recursive residual error, is also adopted for simultaneous spatio-temporal damage detection. Numerical simulations performed on a five-degree-of-freedom nonlinear system under white noise and El Centro excitations, with different levels of nonlinearity simulating damage scenarios, demonstrate the robustness of the proposed algorithm. Successful results obtained from practical case studies, involving experiments performed on a cantilever beam subjected to earthquake excitation (for full-sensor and underdetermined cases) and data from recorded responses of the UCLA Factor building (full data and a subset), demonstrate the efficacy of the proposed methodology as an ideal candidate for real-time, reference-free structural health monitoring.
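
    The flavor of the recursive update and the residual-based condition indicator can be conveyed with a small sketch. For brevity this version re-diagonalizes the recursively updated covariance instead of perturbing the eigenpairs directly, and the forgetting factor and mode count are illustrative assumptions.

    ```python
    import numpy as np

    def rpca_step(cov, x, forget=0.98, n_modes=3):
        """One recursive-PCA step with a residual-based condition indicator.

        The sample covariance gets a rank-one update for the new acceleration
        snapshot x; the dominant proper orthogonal modes are re-extracted and
        the recursive residual error is the L2 norm of the part of x they do
        not capture. (The paper updates the eigenpairs directly by rank-one
        perturbation; a full eigendecomposition is used here for brevity.)
        """
        x = np.asarray(x, dtype=float)
        cov = forget * cov + (1.0 - forget) * np.outer(x, x)
        w, V = np.linalg.eigh(cov)
        V = V[:, np.argsort(w)[::-1][:n_modes]]   # dominant modes
        resid = x - V @ (V.T @ x)                 # part of x outside the modes
        return cov, V, float(np.linalg.norm(resid))
    ```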

  15. Development of a novel constellation based landmark detection algorithm

    NASA Astrophysics Data System (ADS)

    Ghayoor, Ali; Vaidya, Jatin G.; Johnson, Hans J.

    2013-03-01

    Anatomical landmarks such as the anterior commissure (AC) and posterior commissure (PC) are commonly used by researchers for co-registration of images. In this paper, we present a novel, automated approach for landmark detection that combines morphometric constraining and statistical shape models to provide accurate estimation of landmark points. The method is made robust to large rotations in initial head orientation by extracting additional information about the eye centers using a radial Hough transform and by exploiting the centroid of head mass (CM) using a novel estimation approach. To evaluate the effectiveness of this method, the algorithm is trained on a set of 20 images with manually selected landmarks, and a test dataset is used to compare the automatically detected against the manually detected landmark locations of the AC, PC, midbrain-pons junction (MPJ), and fourth ventricle notch (VN4). The results show that the proposed method is accurate, as the average error between the automatically and manually labeled landmark points is less than 1 mm. The algorithm is also highly robust, as it ran successfully on a large dataset that included different kinds of images with various orientations, spacings, and origins.

  16. Predicting the difficulty of pure, strict, epistatic models: metrics for simulated model selection.

    PubMed

    Urbanowicz, Ryan J; Kiralis, Jeff; Fisher, Jonathan M; Moore, Jason H

    2012-09-26

    Algorithms designed to detect complex genetic disease associations are initially evaluated using simulated datasets. Typical evaluations vary constraints that influence the correct detection of underlying models (i.e. number of loci, heritability, and minor allele frequency). Such studies neglect to account for model architecture (i.e. the unique specification and arrangement of penetrance values comprising the genetic model), which alone can influence the detectability of a model. In order to design a simulation study which efficiently takes architecture into account, a reliable metric is needed for model selection. We evaluate three metrics as predictors of relative model detection difficulty derived from previous works: (1) Penetrance table variance (PTV), (2) customized odds ratio (COR), and (3) our own Ease of Detection Measure (EDM), calculated from the penetrance values and respective genotype frequencies of each simulated genetic model. We evaluate the reliability of these metrics across three very different data search algorithms, each with the capacity to detect epistatic interactions. We find that a model's EDM and COR are each stronger predictors of model detection success than heritability. This study formally identifies and evaluates metrics which quantify model detection difficulty. We utilize these metrics to intelligently select models from a population of potential architectures. This allows for an improved simulation study design which accounts for differences in detection difficulty attributed to model architecture. We implement the calculation and utilization of EDM and COR into GAMETES, an algorithm which rapidly and precisely generates pure, strict, n-locus epistatic models.
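
    Of the three metrics, PTV is the simplest to state: it is the variance of the penetrance values in the table. The sketch below computes PTV directly and adds a genotype-frequency-weighted variance as a loose, illustrative stand-in for a frequency-aware measure; it is not the published EDM or COR formula.

      import numpy as np

      def penetrance_table_variance(P):
          # PTV: plain variance of the penetrance values.
          return np.var(np.asarray(P, dtype=float))

      def weighted_penetrance_variance(P, F):
          # Frequency-weighted variance (illustrative only, not the EDM).
          # P: penetrance values; F: matching genotype frequencies.
          P = np.asarray(P, dtype=float).ravel()
          F = np.asarray(F, dtype=float).ravel()
          F = F / F.sum()
          mu = np.sum(F * P)
          return np.sum(F * (P - mu) ** 2)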

  17. Novel algorithm for simultaneous component detection and pseudo-molecular ion characterization in liquid chromatography-mass spectrometry.

    PubMed

    Zhang, Yufeng; Wang, Xiaoan; Wo, Siukwan; Ho, Hingman; Han, Quanbin; Fan, Xiaohui; Zuo, Zhong

    2015-01-01

    Resolving components and determining their pseudo-molecular ions (PMIs) are crucial steps in identifying complex herbal mixtures by liquid chromatography-mass spectrometry. To tackle such labor-intensive steps, we present here a novel algorithm for the simultaneous detection of components and their PMIs. Our method consists of three steps: (1) obtaining a simplified dataset containing only mono-isotopic masses by removing background noise and isotopic cluster ions, based on the isotopic distribution model derived from all the natural compounds reported in the Dictionary of Natural Products; (2) stepwise resolving and removing all features of the most abundant component from the current simplified dataset and calculating the PMI of each component according to an adduct-ion model, in which all non-fragment ions in a mass spectrum are considered as the PMI plus one or several neutral species; (3) visual classification of detected components by principal component analysis (PCA) to exclude possible non-natural compounds (such as pharmaceutical excipients). This algorithm has been successfully applied to a standard mixture and three herbal extracts/preparations. The results indicated that our algorithm could detect components' features as a whole and report their PMIs with an accuracy of more than 98%. Furthermore, components originating from excipients/contaminants could easily be separated from natural components in the bi-plots of the PCA.

  18. Track-Before-Detect Algorithm for Faint Moving Objects based on Random Sampling and Consensus

    NASA Astrophysics Data System (ADS)

    Dao, P.; Rast, R.; Schlaegel, W.; Schmidt, V.; Dentamaro, A.

    2014-09-01

    There are many algorithms developed for tracking and detecting faint moving objects in congested backgrounds. One obvious application is detection of targets in images where each pixel corresponds to the received power in a particular location. In our application, a visible imager operated in stare mode observes geostationary objects as fixed, stars as moving, and non-geostationary objects as drifting in the field of view. We would like to achieve high-sensitivity detection of the drifters. The ability to improve SNR with track-before-detect (TBD) processing, where target information is collected and collated before the detection decision is made, allows respectable performance against dim moving objects. Generally, a TBD algorithm consists of a pre-processing stage that highlights potential targets and a temporal filtering stage. However, the algorithms that have been successfully demonstrated, e.g. Viterbi-based and Bayesian-based, demand formidable processing power and memory. We propose an algorithm that exploits the quasi-constant velocity of objects, the predictability of the stellar clutter, and the intrinsically low false alarm rate of detecting signature candidates in 3-D. The algorithm is based on an iterative method called "RANdom SAmple Consensus" (RANSAC) and can run in real time on a typical PC. The technique is tailored for searching for objects with small telescopes in stare mode. Our RANSAC-MT (Moving Target) algorithm estimates the parameters of a mathematical model (e.g., linear motion) from a set of observed data which contains a significant number of outliers, while identifying inliers. In the pre-processing phase, candidate blobs are selected based on morphology and an intensity threshold that would normally generate an unacceptable level of false alarms. The RANSAC sampling then rejects candidates that conform to the predictable motion of the stars. Data collected with a 17-inch telescope by AFRL/RH and with a COTS lens/EM-CCD sensor by the AFRL/RD Satellite Assessment Center are used to assess the performance of the algorithm. In the second application, a visible imager operated in sidereal mode observes geostationary objects as moving, stars as fixed except for field rotation, and non-geostationary objects as drifting. RANSAC-MT is used to detect the drifter. In this set of data, the drifting space object was detected at a distance of 13800 km. The AFRL/RH set of data, collected in stare mode, contained the signatures of two geostationary satellites. The signature of a moving object was simulated and added to the sequence of frames to determine the sensitivity in magnitude. The performance compares well with the more intensive TBD algorithms reported in the literature.
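
    At its core the approach is an ordinary RANSAC fit of a constant-velocity track through candidate blob detections, as sketched below for (frame time, centroid) pairs. The iteration count, inlier tolerance, and data layout are generic placeholders rather than the tuned RANSAC-MT settings, and rejection of star-consistent candidates is assumed to have happened beforehand.

      import numpy as np

      def ransac_linear_track(t, xy, n_iter=500, tol=2.0, seed=None):
          # t: (N,) frame times; xy: (N, 2) candidate blob centroids (pixels).
          rng = np.random.default_rng(seed)
          best_inliers, best_model = np.zeros(len(t), dtype=bool), None
          for _ in range(n_iter):
              i, j = rng.choice(len(t), size=2, replace=False)
              if t[i] == t[j]:
                  continue
              v = (xy[j] - xy[i]) / (t[j] - t[i])   # constant-velocity hypothesis
              p0 = xy[i] - v * t[i]
              resid = np.linalg.norm(xy - (p0 + np.outer(t, v)), axis=1)
              inliers = resid < tol
              if inliers.sum() > best_inliers.sum():
                  best_inliers, best_model = inliers, (p0, v)
          return best_inliers, best_model   # inlier mask and (position, velocity)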

  19. A novel accelerometry-based algorithm for the detection of step durations over short episodes of gait in healthy elderly.

    PubMed

    Micó-Amigo, M Encarna; Kingma, Idsart; Ainsworth, Erik; Walgaard, Stefan; Niessen, Martijn; van Lummel, Rob C; van Dieën, Jaap H

    2016-04-19

    The assessment of short episodes of gait is clinically relevant and easily implemented, especially given limited space and time requirements. Body-fixed sensors (BFS) are small, lightweight, easy-to-wear sensors which allow the assessment of gait at relatively low cost and with low interference. Thus, the assessment with BFS of short episodes of gait, extracted from daily-life physical activity or measured in a standardised and supervised setting, may add value in the study of gait quality in the elderly. The aim of this study was to evaluate the accuracy of a novel algorithm, based on acceleration signals recorded at different body locations (lower back and heels), for the detection of step durations over short episodes of gait in healthy elderly subjects. Twenty healthy elderly subjects (73.7 ± 7.9 years old) twice walked a distance of 5 m, wearing a BFS on the lower back and on the outside of each heel. In addition, an optoelectronic three-dimensional (3D) motion tracking system was used to detect step durations. A novel algorithm is presented for the detection of step durations from low-back and heel acceleration signals separately. The accuracy of the algorithm was assessed by comparing absolute differences in step duration between the three methods: step detection from the optoelectronic 3D motion tracking system, step detection from the application of the novel algorithm to low-back accelerations, and step detection from the application of the novel algorithm to heel accelerations. The proposed algorithm successfully detected all the steps, without false positives and without false negatives. Absolute average differences in step duration within trials and across subjects were calculated for each comparison: between low-back accelerations and the optoelectronic system they were on average 22.4 ± 7.6 ms (4.0 ± 1.3% of average step duration); between heel accelerations and the optoelectronic system, 20.7 ± 11.8 ms (3.7 ± 1.9%); and between low-back and heel accelerations, 27.8 ± 15.1 ms (4.9 ± 2.5%). This study showed that the presented novel algorithm detects step durations over short episodes of gait in healthy elderly subjects with acceptable accuracy from low-back and heel accelerations, which provides opportunities to extract a range of gait parameters from short episodes of gait.
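
    The timing step itself can be reduced to a standard peak picker on the acceleration norm, as in the sketch below. The published algorithm uses more elaborate, location-specific rules for the lower back and heels; the prominence and minimum step time here are illustrative placeholders.

      import numpy as np
      from scipy.signal import find_peaks

      def step_durations(acc, fs, min_step_s=0.4, prominence=1.0):
          # acc: (N, 3) accelerations in m/s^2; fs: sampling rate in Hz.
          mag = np.linalg.norm(acc, axis=1)          # orientation-free magnitude
          peaks, _ = find_peaks(mag, distance=int(min_step_s * fs),
                                prominence=prominence)
          return np.diff(peaks) / fs                 # step durations in seconds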

  20. Fault Detection of Roller-Bearings Using Signal Processing and Optimization Algorithms

    PubMed Central

    Kwak, Dae-Ho; Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan

    2014-01-01

    This study presents a fault detection method for roller bearings based on signal processing and optimization techniques. After the occurrence of scratch-type defects on the inner race of bearings, variations in kurtosis values are investigated in terms of two different data processing techniques: minimum entropy deconvolution (MED) and the Teager-Kaiser Energy Operator (TKEO). MED and the TKEO are employed to qualitatively enhance the discrimination of defect-induced repeating peaks in bearing vibration data with measurement noise. Considering the execution sequence of MED and the TKEO, the study found that the kurtosis sensitivity towards a defect on bearings could be greatly improved. Also, the vibration signal from both healthy and damaged bearings is decomposed into multiple intrinsic mode functions (IMFs) through empirical mode decomposition (EMD). The weight vectors of the IMFs become design variables for a genetic algorithm (GA). The weights of each IMF can be optimized through the genetic algorithm to enhance the sensitivity of kurtosis on damaged bearing signals. Experimental results show that the EMD-GA approach successfully improved the resolution of detectability between a roller bearing with a defect and an intact system. PMID:24368701
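
    The TKEO is a three-sample operator, psi[n] = x[n]^2 - x[n-1]*x[n+1], so the kurtosis-based screening is easy to sketch. The final lines are an illustrative usage, not the study's full MED/TKEO pipeline.

      import numpy as np
      from scipy.stats import kurtosis

      def teager_kaiser(x):
          # Discrete Teager-Kaiser Energy Operator.
          x = np.asarray(x, dtype=float)
          return x[1:-1] ** 2 - x[:-2] * x[2:]

      # Defect-induced repeating impacts raise the kurtosis of the TKEO output:
      # k = kurtosis(teager_kaiser(vibration_signal), fisher=False)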

  1. Development of a Fault Monitoring Technique for Wind Turbines Using a Hidden Markov Model.

    PubMed

    Shin, Sung-Hwan; Kim, SangRyul; Seo, Yun-Ho

    2018-06-02

    Regular inspection for the maintenance of wind turbines is difficult because of their remote locations. For this reason, condition monitoring systems (CMSs) are typically installed to monitor their health condition. The purpose of this study is to propose a fault detection algorithm for the mechanical parts of a wind turbine. To this end, long-term vibration data were collected over two years by a CMS installed on a 3 MW wind turbine. The vibration distribution at a specific rotating speed of the main shaft is approximated by a Weibull distribution, and its cumulative distribution function is utilized to determine the threshold levels that indicate impending failure of mechanical parts. A hidden Markov model (HMM) is employed to build the statistical fault detection algorithm in the time domain, and a method for extracting the HMM input sequence, based on the threshold levels and the correlation between the signals, is also introduced. Finally, it was demonstrated that the proposed HMM algorithm achieved a greater than 95% detection success rate on the long-term signals.
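
    The thresholding step can be sketched by fitting a Weibull distribution to healthy-state vibration amplitudes and reading alarm levels off its cumulative distribution function. The percentiles below are placeholders, not the study's actual threshold definitions.

      from scipy.stats import weibull_min

      def weibull_thresholds(amplitudes, percentiles=(0.95, 0.99)):
          # Fit a two-parameter Weibull (location fixed at zero) to the
          # healthy-state amplitude distribution.
          c, loc, scale = weibull_min.fit(amplitudes, floc=0.0)
          # Threshold levels are the amplitudes at the requested CDF percentiles.
          return [weibull_min.ppf(p, c, loc=loc, scale=scale) for p in percentiles]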

  2. Towards the run and walk activity classification through step detection--an android application.

    PubMed

    Oner, Melis; Pulcifer-Stump, Jeffry A; Seeling, Patrick; Kaya, Tolga

    2012-01-01

    Falling is one of the most common accidents with potentially irreversible consequences, especially for special groups such as the elderly or disabled. One approach to this issue is early detection of the falling event. Towards the goal of early fall detection, we have worked on distinguishing and monitoring basic human activities such as walking and running. Since we plan to implement the system mostly for seniors and the disabled, simplicity of use is very important. We have successfully implemented an algorithm that does not require the acceleration sensor (the smartphone itself in our application) to be fixed in a specific position, whereas most previous research requires the sensor to be fixed in a certain orientation. The algorithm reviews data from the accelerometer to determine whether a user has taken a step and keeps track of the total number of steps. In testing, the algorithm was more accurate than a commercial pedometer when comparing outputs to the actual number of steps taken by the user.

  3. Discrimination of Biomass Burning Smoke and Clouds in MAIAC Algorithm

    NASA Technical Reports Server (NTRS)

    Lyapustin, A.; Korkin, S.; Wang, Y.; Quayle, B.; Laszlo, I.

    2012-01-01

    The multi-angle implementation of atmospheric correction (MAIAC) algorithm makes aerosol retrievals from MODIS data at 1 km resolution, providing information about fine-scale aerosol variability. This information is required in different applications such as urban air quality analysis and aerosol source identification. The quality of high-resolution aerosol data is directly linked to the quality of the cloud mask, in particular the detection of small (sub-pixel) and low clouds. This work continues research in this direction, describing a technique to detect small clouds and introducing a smoke test to discriminate biomass burning smoke from clouds. The smoke test relies on a relative increase of aerosol absorption at the MODIS wavelength of 0.412 micrometers as compared to 0.47-0.67 micrometers, due to multiple scattering and enhanced absorption by organic carbon released during combustion. This general principle has been successfully used in the OMI detection of absorbing aerosols based on UV measurements. This paper provides the algorithm details and illustrates its performance on two examples of wildfires in the US Pacific Northwest and in Georgia/Florida in 2007.

  4. Computer aided detection of tumor and edema in brain FLAIR magnetic resonance image using ANN

    NASA Astrophysics Data System (ADS)

    Pradhan, Nandita; Sinha, A. K.

    2008-03-01

    This paper presents an efficient region-based segmentation technique for detecting pathological tissues (tumor and edema) of the brain using fluid-attenuated inversion recovery (FLAIR) magnetic resonance (MR) images. This work segments FLAIR brain images into normal and pathological tissues based on statistical features and wavelet transform coefficients using the k-means algorithm. The image is divided into small blocks of 4×4 pixels. The k-means algorithm is used to cluster the image based on the feature vectors of the blocks, forming different classes that represent different regions of the whole image. With the knowledge of the feature vectors of the different segmented regions, a supervised technique is used to train an artificial neural network using the fuzzy back-propagation algorithm (FBPA). Segmentation for detecting healthy tissues and tumors has been reported by several researchers using conventional MRI sequences such as T1-, T2-, and PD-weighted sequences. This work successfully presents segmentation of healthy and pathological tissues (both tumors and edema) using FLAIR images. Finally, pseudo-coloring of the segmented and classified regions is performed for better human visualization.

  5. The impact of privacy protection filters on gender recognition

    NASA Astrophysics Data System (ADS)

    Ruchaud, Natacha; Antipov, Grigory; Korshunov, Pavel; Dugelay, Jean-Luc; Ebrahimi, Touradj; Berrani, Sid-Ahmed

    2015-09-01

    Deep learning-based algorithms have become increasingly efficient in recognition and detection tasks, especially when they are trained on large-scale datasets. Such recent success has led to speculation that deep learning methods are comparable to, or even outperform, the human visual system in its ability to detect and recognize objects and their features. In this paper, we focus on the specific task of gender recognition in images that have been processed by privacy protection filters (e.g., blurring, masking, and pixelization) applied at different strengths. Assuming a privacy protection scenario, we compare the performance of state-of-the-art deep learning algorithms with a subjective evaluation obtained via crowdsourcing to understand how privacy protection filters affect both machine and human vision.

  6. [Non-destructive detection research for hollow heart of potato based on semi-transmission hyperspectral imaging and SVM].

    PubMed

    Huang, Tao; Li, Xiao-yu; Xu, Meng-ling; Jin, Rui; Ku, Jing; Xu, Sen-miao; Wu, Zhen-zhong

    2015-01-01

    The quality of potatoes is directly related to their edible value and industrial value. Hollow heart of potato, a physiological disease occurring inside the tuber, is difficult to detect. This paper puts forward a non-destructive detection method using semi-transmission hyperspectral imaging with a support vector machine (SVM) to detect hollow heart of potato. Compared to reflection and transmission hyperspectral images, a semi-transmission hyperspectral image is clearer and contains the internal quality information of agricultural products. In this study, 224 potato samples (149 normal and 75 hollow) were selected as the research object, and a semi-transmission hyperspectral image acquisition system was constructed to acquire hyperspectral images (390-1040 nm) of the potato samples; the average spectra of the regions of interest were then extracted for spectral characteristics analysis. Normalization was used to preprocess the original spectra, and a prediction model was developed based on SVM using all wavebands; the recognition rate on the test set was only 87.5%. To simplify the model, the competitive adaptive reweighted sampling algorithm (CARS) and the successive projection algorithm (SPA) were utilized to select important variables from all 520 spectral variables, and 8 variables were selected (454, 601, 639, 664, 748, 827, 874 and 936 nm). A recognition rate of 94.64% on the test set was obtained by using the 8 variables to develop the SVM model. Parameter optimization algorithms, including the artificial fish swarm algorithm (AFSA), the genetic algorithm (GA), and the grid search algorithm, were used to optimize the SVM model parameters: penalty parameter c and kernel parameter g. After comparative analysis, AFSA, a new bionic optimization algorithm based on the foraging behavior of fish swarms, was shown to give the optimal model parameters (c=10.6591, g=0.3497), and a recognition accuracy of 100% was obtained for the AFSA-SVM model. The results indicate that combining semi-transmission hyperspectral imaging technology with CARS-SPA and AFSA-SVM can accurately detect hollow heart of potato, and also provide technical support for rapid non-destructive detection of hollow heart of potato.
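
    AFSA has no standard library implementation, but the grid-search baseline that the study compares against is easy to sketch with scikit-learn; the candidate values for c and g below are illustrative, not the paper's search ranges.

      from sklearn.model_selection import GridSearchCV
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      def fit_svm(X, y):
          # X: (n_samples, 8) reflectance at the selected bands;
          # y: 0 = normal, 1 = hollow heart. Grid-search the RBF-SVM parameters.
          grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": [0.01, 0.1, 1, 10]}
          model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
          return GridSearchCV(model, grid, cv=5).fit(X, y)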

  7. Mass detection with digitized screening mammograms by using Gabor features

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Agyepong, Kwabena

    2007-03-01

    Breast cancer is the leading cancer among American women. The current lifetime risk of developing breast cancer is 13.4% (one in seven). Mammography is the most effective technology presently available for breast cancer screening. With digital mammograms, computer-aided detection (CAD) has proven to be a useful tool for radiologists. In this paper, we focus on mass detection, a common category of breast cancer findings relative to calcifications and architectural distortion. We propose a new mass detection algorithm utilizing Gabor filters, termed "Gabor Mass Detection" (GMD). There are three steps in the GMD algorithm: (1) preprocessing, (2) generating alarms, and (3) classification (reducing false alarms). Down-sampling, quantization, denoising, and enhancement are done in the preprocessing step. Then a total of 30 Gabor filtered images (6 bands by 5 orientations) are produced. Alarm segments are generated by thresholding four Gabor images of full orientations (Stage-I classification) with image-dependent thresholds computed via histogram analysis. Next, a set of edge histogram descriptors (EHD) is extracted from 24 Gabor images (6 by 4) for use in Stage-II classification. After clustering the EHD features with the fuzzy C-means clustering method, a k-nearest-neighbor classifier is used to reduce the number of false alarms. We analyzed 431 digitized mammograms (159 normal images vs. 272 cancerous images, from the DDSM project, University of South Florida) with the proposed GMD algorithm, and ten-fold cross-validation was used for testing the GMD algorithm on the available data. The GMD performance is as follows: sensitivity (true positive rate) = 0.88 at 1.25 false positives per image (FPI), and area under the ROC curve = 0.83. The overall performance of the GMD algorithm is satisfactory, and the accuracy of locating masses (highlighting the boundaries of suspicious areas) is relatively high. Furthermore, the GMD algorithm can successfully detect early-stage malignant masses (with small Assessment values and low Subtlety). In addition, Gabor filtered images are used in both stages of classification, which greatly simplifies the GMD algorithm.
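
    A 6-band-by-5-orientation Gabor bank of the kind described can be assembled in a few lines, for example with scikit-image; the frequency range below is a placeholder rather than the paper's tuned settings.

      import numpy as np
      from skimage.filters import gabor

      def gabor_bank(image, n_bands=6, n_orients=5):
          # Returns one magnitude response per (frequency, orientation) pair.
          freqs = np.geomspace(0.05, 0.4, n_bands)           # cycles per pixel
          thetas = np.linspace(0, np.pi, n_orients, endpoint=False)
          responses = []
          for f in freqs:
              for th in thetas:
                  real, imag = gabor(image, frequency=f, theta=th)
                  responses.append(np.hypot(real, imag))     # filter magnitude
          return responses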

  8. Detection of deception: Event-related potential markers of attention and cognitive control during intentional false responses.

    PubMed

    Gibbons, Henning; Schnuerch, Robert; Wittinghofer, Christina; Armbrecht, Anne-Simone; Stahl, Jutta

    2018-06-01

    Successful deception requires the coordination of multiple mental processes, such as attention, conflict monitoring, and the regulation of emotion. We employed a simple classification task and assessed ERPs to further investigate the attentional and cognitive control components of (instructed) deception. In Experiment 1, 20 participants repeatedly categorized visually presented names of five animals and five plants. Prior to the experiment, however, each participant covertly selected one animal and one plant for deliberate misclassification. For these critical items, we observed significantly increased response times (RTs), error rates, and amplitudes of three ERP components: anterior P3a, indicating the processing of task relevance; medial-frontal negativity, reflecting conflict monitoring; and posterior P3b, indicating sustained visual attention. In a blind identification of the individual critical words based on a priori defined criteria, an algorithm combining two behavioral and two ERP measures showed a sensitivity of 0.73 and a specificity of 0.95, thus performing far above chance (0.2/0.2). Experiment 2 used five clothing and five furniture names and successfully replicated the findings of Experiment 1 in 25 new participants. For detection of the critical words, the algorithm from Experiment 1 was reused with only slight adjustments of the ERP time windows. This resulted in very high detection performance (sensitivity 0.88, specificity 0.94) and significantly outperformed an algorithm based on RT alone. Thus, at least under controlled laboratory conditions, highly accurate detection of instructed lies via the attentional and cognitive control components is feasible, and benefits strongly from combined behavioral and ERP measures.

  9. Anatomy-Based Algorithms for Detecting Oral Cancer Using Reflectance and Fluorescence Spectroscopy

    PubMed Central

    McGee, Sasha; Mardirossian, Vartan; Elackattu, Alphi; Mirkovic, Jelena; Pistey, Robert; Gallagher, George; Kabani, Sadru; Yu, Chung-Chieh; Wang, Zimmern; Badizadegan, Kamran; Grillone, Gregory; Feld, Michael S.

    2010-01-01

    Objectives We used reflectance and fluorescence spectroscopy to noninvasively and quantitatively distinguish benign from dysplastic/malignant oral lesions. We designed diagnostic algorithms to account for differences in the spectral properties among anatomic sites (gingiva, buccal mucosa, etc). Methods In vivo reflectance and fluorescence spectra were collected from 71 patients with oral lesions. The tissue was then biopsied and the specimen evaluated by histopathology. Quantitative parameters related to tissue morphology and biochemistry were extracted from the spectra. Diagnostic algorithms specific for combinations of sites with similar spectral properties were developed. Results Discrimination of benign from dysplastic/malignant lesions was most successful when algorithms were designed for individual sites (area under the receiver operator characteristic curve [ROC-AUC], 0.75 for the lateral surface of the tongue) and was least accurate when all sites were combined (ROC-AUC, 0.60). The combination of sites with similar spectral properties (floor of mouth and lateral surface of the tongue) yielded an ROC-AUC of 0.71. Conclusions Accurate spectroscopic detection of oral disease must account for spectral variations among anatomic sites. Anatomy-based algorithms for single sites or combinations of sites demonstrated good diagnostic performance in distinguishing benign lesions from dysplastic/malignant lesions and consistently performed better than algorithms developed for all sites combined. PMID:19999369

  10. Structural Damage Detection Using Changes in Natural Frequencies: Theory and Applications

    NASA Astrophysics Data System (ADS)

    He, K.; Zhu, W. D.

    2011-07-01

    A vibration-based method that uses changes in the natural frequencies of a structure to detect damage has advantages over conventional nondestructive tests in detecting various types of damage, including loosening of bolted joints, using a minimum of measurement data. Two major challenges associated with applying the vibration-based damage detection method to engineering structures are addressed: accurate modeling of structures and the development of a robust inverse algorithm to detect damage, defined as the forward and inverse problems, respectively. To resolve the forward problem, new physics-based finite element modeling techniques are developed for fillets in thin-walled beams and for bolted joints, so that complex structures can be accurately modeled with a reasonable model size. To resolve the inverse problem, a logistic function transformation is introduced to convert the constrained optimization problem to an unconstrained one, and a robust iterative algorithm using a trust-region method, called the Levenberg-Marquardt method, is developed to accurately detect the locations and extent of damage. The new methodology can ensure global convergence of the iterative algorithm in solving under-determined system equations and can deal with damage detection problems with relatively large modeling error and measurement noise. The vibration-based damage detection method is applied to various structures including lightning masts, a space frame structure and one of its components, and a pipeline. The exact locations and extent of damage can be detected in numerical simulations, where there is no modeling error or measurement noise. The locations and extent of damage can also be successfully detected in experimental damage detection.
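
    The inverse-problem machinery can be sketched compactly: a logistic map turns bounded damage parameters (e.g., stiffness ratios) into unconstrained variables, which are then fitted to measured natural frequencies with a Levenberg-Marquardt solver. In the sketch below, model_freqs stands for a user-supplied finite element model wrapper and the (0, 1) bounds are illustrative; this is not the paper's trust-region implementation.

      import numpy as np
      from scipy.optimize import least_squares

      def to_constrained(u, lo=0.0, hi=1.0):
          # Logistic map from unconstrained u to parameters in (lo, hi).
          return lo + (hi - lo) / (1.0 + np.exp(-u))

      def detect_damage(u0, measured_freqs, model_freqs):
          # model_freqs(theta) -> predicted natural frequencies for damage
          # parameters theta (hypothetical FE-model wrapper).
          def residual(u):
              return model_freqs(to_constrained(u)) - measured_freqs
          return least_squares(residual, u0, method="lm")  # Levenberg-Marquardt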

  11. Classification of change detection and change blindness from near-infrared spectroscopy signals

    NASA Astrophysics Data System (ADS)

    Tanaka, Hirokazu; Katura, Takusige

    2011-08-01

    Using a machine-learning classification algorithm applied to near-infrared spectroscopy (NIRS) signals, we classify success (change detection) or failure (change blindness) in detecting visual changes during a change-detection task. Five subjects perform a change-detection task, and their brain activities are continuously monitored. A support-vector-machine algorithm is applied to classify the change-detection and change-blindness trials, and a correct classification probability of 70-90% is obtained for four subjects. Two types of temporal shapes in the classification probabilities are found: one exhibiting a maximum value after the task is completed (postdictive type), and another exhibiting a maximum value during the task (predictive type). For the postdictive type, the classification probability begins to increase immediately after task completion and reaches its maximum on roughly the time scale of the neuronal hemodynamic response, reflecting a subjective report of change detection. For the predictive type, the classification probability increases at task initiation and is maximal while subjects are performing the task, predicting the task performance in detecting a change. We conclude that decoding change detection and change blindness from NIRS signals is possible, and we discuss some future applications toward brain-machine interfaces.

  12. Edge detection and localization with edge pattern analysis and inflection characterization

    NASA Astrophysics Data System (ADS)

    Jiang, Bo

    2012-05-01

    In general, edges are considered to be abrupt changes or discontinuities in two-dimensional image signal intensity distributions. The accuracy of front-end edge detection methods in image processing impacts the eventual success of higher-level pattern analysis downstream. To generalize edge detectors designed from a simple ideal step-function model to the real distortions found in natural images, this research on one-dimensional edge pattern analysis proposes an edge detection algorithm built on three basic edge patterns: ramp, impulse, and step. After mathematical analysis, general rules for edge representation based on the classification of edge types into these three categories (ramp, impulse, and step: RIS) are developed to reduce detection and localization errors, especially the "double edge" effect, an important drawback of derivative methods. When applying one-dimensional edge patterns in two-dimensional image processing, however, a new issue naturally arises: the edge detector should correctly mark inflections or junctions of edges. Research on human visual perception of objects and on information theory has pointed out that a pattern lexicon of "inflection micro-patterns" carries more information than a straight line. Research on scene perception likewise suggests that contours carrying more information are a more important factor in determining the success of scene categorization. Inflections and junctions are therefore extremely useful features, whose accurate description and reconstruction are significant in solving correspondence problems in computer vision. Hence, aside from the adoption of edge pattern analysis, inflection and junction characterization is also utilized to extend the traditional derivative edge detection algorithm. Experiments were conducted to test these propositions about edge detection and localization accuracy improvements. The results support the idea that these improvements are effective in enhancing the accuracy of edge detection and localization.

  13. Towards a hemodynamic BCI using transcranial Doppler without user-specific training data

    NASA Astrophysics Data System (ADS)

    Aleem, Idris; Chau, Tom

    2013-02-01

    Transcranial Doppler (TCD) was recently introduced as a new brain-computer interface (BCI) modality for detecting task-induced hemispheric lateralization. To date, single-trial discrimination between a lateralized mental activity and a rest state has been demonstrated with long (45 s) activation time periods. However, the possibility of detecting successive activations in a user-independent framework (i.e. without training data from the user) remains an open question. Objective. The objective of this research was to assess TCD-based detection of lateralized mental activity with a user-independent classifier. In so doing, we also investigated the accuracy of detecting successive lateralizations. Approach. TCD data from 18 participants were collected during verbal fluency, mental rotation tasks and baseline counting tasks. Linear discriminant analysis and a set of four time-domain features were used to classify successive left and right brain activations. Main results. In a user-independent framework, accuracies up to 74.6 ± 12.6% were achieved using training data from a single participant, and lateralization task durations of 18 s. Significance. Subject-independent, algorithmic classification of TCD signals corresponding to successive brain lateralization may be a feasible paradigm for TCD-BCI design.

  14. Breast cancer detection using time reversal

    NASA Astrophysics Data System (ADS)

    Sheikh Sajjadieh, Mohammad Hossein

    Breast cancer is the second leading cause of cancer death after lung cancer among women. Mammography and magnetic resonance imaging (MRI) have certain limitations in detecting breast cancer, especially during its early stage of development. A number of studies have shown that microwave breast cancer detection has the potential to become a successful clinical complement to conventional X-ray mammography. Microwave breast imaging is performed by illuminating the breast tissues with an electromagnetic waveform and recording its reflections (backscatters) emanating from variations in the normal breast tissues and tumour cells, if present, using an antenna array. These backscatters, referred to as the overall (tumour and clutter) response, are processed to estimate the tumour response, which is applied as input to array imaging algorithms used to estimate the location of the tumour. Due to changes in the breast profile over time, the commonly utilized background subtraction procedures used to estimate the target (tumour) response in array processing are impractical for breast cancer detection. The thesis proposes a new tumour estimation algorithm based on a combination of the data adaptive filter with the envelope detection filter (DAF/EDF), which collectively do not require a training step. After establishing the superiority of the DAF/EDF-based approach, the thesis shows that time reversal (TR) array imaging algorithms outperform their conventional counterparts in detecting and localizing tumour cells in breast tissues at SNRs ranging from 15 to 30 dB.

  15. Combining contour detection algorithms for the automatic extraction of the preparation line from a dental 3D measurement

    NASA Astrophysics Data System (ADS)

    Ahlers, Volker; Weigl, Paul; Schachtzabel, Hartmut

    2005-04-01

    Due to the increasing demand for high-quality ceramic crowns and bridges, the CAD/CAM-based production of dental restorations has been a subject of intensive research during the last fifteen years. A prerequisite for the efficient processing of the 3D measurement of prepared teeth with a minimal amount of user interaction is the automatic determination of the preparation line, which defines the sealing margin between the restoration and the prepared tooth. Current dental CAD/CAM systems mostly require the interactive definition of the preparation line by the user, at least by means of giving a number of start points. Previous approaches to the automatic extraction of the preparation line rely on single contour detection algorithms. In contrast, we use a combination of different contour detection algorithms to find several independent potential preparation lines from a height profile of the measured data. The different algorithms (gradient-based, contour-based, and region-based) show their strengths and weaknesses in different clinical situations. A classifier consisting of three stages (range check, decision tree, support vector machine), which is trained by human experts with real-world data, finally decides which is the correct preparation line. In a test with 101 clinical preparations, a success rate of 92.0% has been achieved. Thus the combination of different contour detection algorithms yields a reliable method for the automatic extraction of the preparation line, which enables the setup of a turn-key dental CAD/CAM process chain with a minimal amount of interactive screen work.

  16. Photoplethysmograph signal reconstruction based on a novel hybrid motion artifact detection-reduction approach. Part I: Motion and noise artifact detection.

    PubMed

    Chong, Jo Woon; Dao, Duy K; Salehizadeh, S M A; McManus, David D; Darling, Chad E; Chon, Ki H; Mendelson, Yitzhak

    2014-11-01

    Motion and noise artifacts (MNA) are a serious obstacle to utilizing photoplethysmogram (PPG) signals for real-time monitoring of vital signs. We present an MNA detection method which provides a clean vs. corrupted decision on each successive PPG segment. For motion artifact detection, we compute four time-domain parameters: (1) standard deviation of peak-to-peak intervals, (2) standard deviation of peak-to-peak amplitudes, (3) standard deviation of systolic and diastolic interval ratios, and (4) mean standard deviation of pulse shape. We adopted a support vector machine (SVM) which takes these parameters from clean and corrupted PPG signals and builds a decision boundary to classify them. We apply several distinct features of the PPG data to enhance classification performance. The algorithm was verified on PPG data segments recorded in simulation, laboratory-controlled, and walking/stair-climbing experiments, respectively, and we compared several well-established MNA detection methods to the proposed algorithm. All compared detection algorithms were evaluated in terms of motion artifact detection accuracy, heart rate (HR) error, and oxygen saturation (SpO2) error. For laboratory-controlled finger and forehead PPG recordings and daily-activity movement data, the proposed algorithm gives accuracies of 94.4, 93.4, and 93.7%, respectively. HR and SpO2 errors were significantly lower when the artifacts identified by SVM-MNA were removed from the original signal (2.3 bpm and 2.7%) than without removal (17.3 bpm and 5.4%). The accuracy of our method was significantly higher, and its errors significantly lower, than those of all other detection methods. Another advantage of the method is its ability to provide highly accurate onset and offset detection times of MNAs. This capability is important for an automated approach to signal reconstruction of only those data points that need to be reconstructed, which is the subject of the companion paper to this article. Finally, the MNA detection algorithm is real-time realizable, as the computation on a 7-s PPG data segment took only 7 ms in Matlab code.
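
    The four time-domain parameters can be approximated with any peak picker applied per PPG segment, as in the sketch below; the beat segmentation and the interval-ratio and pulse-shape definitions are crude stand-ins for the published ones.

      import numpy as np
      from scipy.signal import find_peaks

      def mna_features(ppg, fs):
          # One feature vector per PPG segment; returns None if too few beats.
          peaks, _ = find_peaks(ppg, distance=int(0.3 * fs))
          if len(peaks) < 3:
              return None
          ppi = np.diff(peaks) / fs                  # (1) peak-to-peak intervals
          amps, ratios, shape_stds = [], [], []
          for a, b in zip(peaks[:-1], peaks[1:]):
              trough = a + np.argmin(ppg[a:b])
              amps.append(ppg[b] - ppg[trough])      # (2) pulse amplitude
              ratios.append((trough - a) / max(b - trough, 1))  # (3) interval ratio
              shape_stds.append(np.std(ppg[a:b]))    # (4) per-beat shape spread
          return [np.std(ppi), np.std(amps), np.std(ratios), np.mean(shape_stds)]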

  17. Vision-based method for detecting driver drowsiness and distraction in driver monitoring system

    NASA Astrophysics Data System (ADS)

    Jo, Jaeik; Lee, Sung Joo; Jung, Ho Gi; Park, Kang Ryoung; Kim, Jaihie

    2011-12-01

    Most driver-monitoring systems have attempted to detect either driver drowsiness or distraction, although both factors should be considered for accident prevention. Therefore, we propose a new driver-monitoring method considering both factors. We make the following contributions. First, if the driver is looking ahead, drowsiness detection is performed; otherwise, distraction detection is performed. Thus, the computational cost and eye-detection error can be reduced. Second, we propose a new eye-detection algorithm that combines adaptive boosting, adaptive template matching, and blob detection with eye validation, thereby reducing the eye-detection error and processing time significantly, which is hardly achievable using a single method. Third, to enhance eye-detection accuracy, eye validation is applied after initial eye detection, using a support vector machine based on appearance features obtained by principal component analysis (PCA) and linear discriminant analysis (LDA). Fourth, we propose a novel eye state-detection algorithm that combines appearance features obtained using PCA and LDA, with statistical features such as the sparseness and kurtosis of the histogram from the horizontal edge image of the eye. Experimental results showed that the detection accuracies of the eye region and eye states were 99 and 97%, respectively. Both driver drowsiness and distraction were detected with a success rate of 98%.

  18. Measurement of pattern roughness and local size variation using CD-SEM: current status

    NASA Astrophysics Data System (ADS)

    Fukuda, Hiroshi; Kawasaki, Takahiro; Kawada, Hiroki; Sakai, Kei; Kato, Takashi; Yamaguchi, Satoru; Ikota, Masami; Momonoi, Yoshinori

    2018-03-01

    Measurement of line edge roughness (LER) is discussed from four aspects: edge detection, PSD prediction, sampling strategy, and noise mitigation, and general guidelines and practical solutions for LER measurement today are introduced. Advanced edge detection algorithms such as the wave-matching method are shown to be effective for robustly detecting edges from low-SNR images, while a conventional algorithm with weak filtering is still effective in suppressing SEM noise and aliasing. Advanced PSD prediction methods such as the multi-taper method are effective in suppressing sampling noise within a line edge, while a sufficient number of lines is still required for suppressing line-to-line variation. Two types of SEM noise mitigation methods, the "apparent noise floor" subtraction method and LER-noise decomposition using regression analysis, are verified to successfully mitigate SEM noise in PSD curves. These results are extended to LCDU measurement to clarify the impact of SEM noise and sampling noise on LCDU.

  19. Real-Time Detection of Rupture Development: Earthquake Early Warning Using P Waves From Growing Ruptures

    NASA Astrophysics Data System (ADS)

    Kodera, Yuki

    2018-01-01

    Large earthquakes with long rupture durations emit P wave energy throughout the rupture period. Incorporating late-onset P waves into earthquake early warning (EEW) algorithms could contribute to robust predictions of strong ground motion. Here I describe a technique to detect in real time P waves from growing ruptures to improve the timeliness of an EEW algorithm based on seismic wavefield estimation. The proposed P wave detector, which employs a simple polarization analysis, successfully detected P waves from strong motion generation areas of the 2011 Mw 9.0 Tohoku-oki earthquake rupture. An analysis using 23 large (M ≥ 7) events from Japan confirmed that seismic intensity predictions based on the P wave detector significantly increased lead times without appreciably decreasing the prediction accuracy. P waves from growing ruptures, being one of the fastest carriers of information on ongoing rupture development, have the potential to improve the performance of EEW systems.
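
    A simple polarization analysis of the kind mentioned can be sketched as a sliding-window eigendecomposition of the three-component covariance matrix: high rectilinearity combined with near-vertical polarization of the principal axis is a common P-wave signature. The window length and attribute definitions below are generic textbook choices, not the paper's detector.

      import numpy as np

      def polarization_attributes(z, n, e, win=100):
          # z, n, e: equal-length vertical/north/east samples; win: window length.
          X = np.stack([z, n, e])
          rect, dip = [], []
          for i in range(X.shape[1] - win):
              C = np.cov(X[:, i:i + win])
              w, V = np.linalg.eigh(C)            # eigenvalues in ascending order
              rect.append(1.0 - (w[0] + w[1]) / (2.0 * w[2] + 1e-12))
              dip.append(np.degrees(np.arcsin(abs(V[0, -1]))))  # principal-axis dip
          return np.array(rect), np.array(dip)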

  20. Community structure from spectral properties in complex networks

    NASA Astrophysics Data System (ADS)

    Servedio, V. D. P.; Colaiori, F.; Capocci, A.; Caldarelli, G.

    2005-06-01

    We analyze the spectral properties of complex networks, focusing on their relation to community structure, and develop an algorithm based on correlations among components of different eigenvectors. The algorithm applies to general weighted networks and, in a suitably modified version, to directed networks. Our method allows us to correctly detect communities in sharply partitioned graphs, and it is also useful for the analysis of more complex networks without a well-defined cluster structure, such as social and information networks. As an example, we test the algorithm on a large-scale dataset from a psychological experiment on free word association, where it proves successful both in clustering words and in uncovering mental association patterns.

  1. A fast positioning algorithm for the asymmetric dual Mach-Zehnder interferometric infrared fiber vibration sensor

    NASA Astrophysics Data System (ADS)

    Jiang, Junfeng; An, Jianchang; Liu, Kun; Ma, Chunyu; Li, Zhichen; Liu, Tiegen

    2017-09-01

    We propose a fast positioning algorithm for the asymmetric dual Mach-Zehnder interferometric infrared fiber vibration sensor. Using an approximate derivation method and an envelope detection method, we successfully eliminate the asymmetry of the interference outputs and improve the processing speed. A positioning measurement experiment was carried out to verify the effectiveness of the proposed algorithm. At a sensing length of 85 km, the experimental results show that the mean positioning error is 18.9 m and the mean processing time is 116 ms. The processing speed is improved by a factor of 5 compared to the traditional time-frequency analysis-based positioning method.

  2. Functional grouping of similar genes using eigenanalysis on minimum spanning tree based neighborhood graph.

    PubMed

    Jothi, R; Mohanty, Sraban Kumar; Ojha, Aparajita

    2016-04-01

    Gene expression data clustering is an important biological process in DNA microarray analysis. Although there have been many clustering algorithms for gene expression analysis, finding a suitable and effective clustering algorithm is always a challenging problem due to the heterogeneous nature of gene profiles. Minimum Spanning Tree (MST) based clustering algorithms have been successfully employed to detect clusters of varying shapes and sizes. This paper proposes a novel clustering algorithm using eigenanalysis on a Minimum Spanning Tree based neighborhood graph (E-MST). As the MST of a set of points reflects the similarity of the points with their neighborhood, the proposed algorithm employs a similarity graph obtained from k′ rounds of MST (the k′-MST neighborhood graph). By studying the spectral properties of the similarity matrix obtained from the k′-MST graph, the proposed algorithm achieves improved clustering results. We demonstrate the efficacy of the proposed algorithm on 12 gene expression datasets. Experimental results show that the proposed algorithm performs better than standard clustering algorithms.
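
    The classical MST clustering baseline that E-MST builds on is short to state: compute the MST of the pairwise-distance graph and delete its heaviest edges, leaving the connected components as clusters. The sketch below shows only that baseline; the paper's spectral analysis of the k′-MST neighborhood graph is not reproduced here.

      import numpy as np
      from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
      from scipy.spatial.distance import pdist, squareform

      def mst_clusters(X, n_clusters):
          # X: (n_samples, n_features) expression profiles.
          if n_clusters <= 1:
              return np.zeros(len(X), dtype=int)
          T = minimum_spanning_tree(squareform(pdist(X))).toarray()
          weights = np.sort(T[T > 0])
          cut = weights[-(n_clusters - 1)]   # weight of the (k-1)-th heaviest edge
          T[T >= cut] = 0.0                  # remove the heaviest edges (ties too)
          _, labels = connected_components(T, directed=False)
          return labels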

  3. Human location estimation using thermopile array sensor

    NASA Astrophysics Data System (ADS)

    Parnin, S.; Rahman, M. M.

    2017-11-01

    Utilizing a thermopile sensor at an early stage of human detection is challenging, as many things other than humans produce heat, such as electrical appliances and animals. Therefore, an algorithm for early presence detection has been developed through the study of human body temperature behaviour with respect to room temperature. The non-contact detected temperature change of a human varies according to body part: in an indoor room, the upper parts of the human body change by up to 3°C, whereas the lower parts range from 0.58°C to 1.71°C. The average change in human temperature is used as a conditional set-point value in the program algorithm to detect human presence. The current position of the human and the corresponding angle are obtained when a human is present at certain pixels of the thermopile's sensor array. Human position was estimated successfully when the developed sensory system was tested with the actuator of a stand fan.

  4. Electromagnetic Induction Spectroscopy for the Detection of Subsurface Targets

    DTIC Science & Technology

    2012-12-01

    …[Figure: ROC curves of the proposed method and that of Fails et al.; for the kNN ROC curve, k = 7.] …et al. [6] and Ramachandran et al. [7] both demonstrated success in detecting mines using the k-nearest-neighbor (kNN) algorithm based on the EMI… error is also included in the feature vector. The kNN labels an unknown target based on the closest targets in a training set. Collins et al. [2] and…

  5. Experiments and Analysis of Close-Shot Identification of On-Branch Citrus Fruit with RealSense

    PubMed Central

    Liu, Jizhan; Yuan, Yan; Zhou, Yao; Zhu, Xinxin

    2018-01-01

    Fruit recognition based on depth information has been a hot topic due to its advantages. However, present equipment and methods cannot meet the requirements of rapid and reliable close-shot recognition and location of fruits for robot harvesting. To solve this problem, we propose a recognition algorithm for citrus fruit based on RealSense. The method effectively utilizes depth point-cloud data in a close-shot range of 160 mm and the different geometric features of fruit and leaf to recognize fruits via an intersection curve cut by a depth-sphere. Experiments on close-shot recognition of six varieties of fruit under different conditions were carried out. Detection rates under slight occlusion and adhesion were 80–100%. However, severe occlusion and adhesion still have a great influence on the overall success rate of on-branch fruit recognition, which was 63.8%. The size of the fruit has a more noticeable impact on the detection success rate. Moreover, owing to close-shot near-infrared detection, there was no obvious difference in recognition between bright and dark conditions. The advantages of close-shot limited target detection with RealSense, fast foreground and background removal, and the simplicity of the algorithm with high precision may contribute to real-time vision-servo operation of harvesting robots. PMID:29751594

  6. Creation of an Accurate Algorithm to Detect Snellen Best Documented Visual Acuity from Ophthalmology Electronic Health Record Notes.

    PubMed

    Mbagwu, Michael; French, Dustin D; Gill, Manjot; Mitchell, Christopher; Jackson, Kathryn; Kho, Abel; Bryar, Paul J

    2016-05-04

    Visual acuity is the primary measure used in ophthalmology to determine how well a patient can see. Visual acuity for a single eye may be recorded in multiple ways for a single patient visit (e.g., Snellen vs. Jäger units vs. font print size), and may be recorded for either distance or near vision. Capturing the best documented visual acuity (BDVA) of each eye in an individual patient visit is an important step for making electronic ophthalmology clinical notes useful in research. Currently, there is limited methodology for capturing BDVA in an efficient and accurate manner from electronic health record (EHR) notes. We developed an algorithm to detect BDVA for right and left eyes from defined fields within electronic ophthalmology clinical notes. We designed an algorithm to detect the BDVA from defined fields within 295,218 ophthalmology clinical notes with visual acuity data present. A total of 5668 unique responses were identified, and an algorithm was developed to map all of the unique responses to a structured list of Snellen visual acuities. Visual acuity was captured from a total of 295,218 ophthalmology clinical notes during the study dates. The algorithm identified all visual acuities in the defined visual acuity section for each eye and returned a single BDVA for each eye. A clinician chart review of 100 random patient notes showed 99% accuracy in detecting BDVA from these records, with 1% observed error. Our algorithm successfully captures best documented Snellen distance visual acuity from ophthalmology clinical notes and transforms a variety of inputs into a structured Snellen equivalent list. Our work, to the best of our knowledge, represents the first attempt at capturing visual acuity accurately from large numbers of electronic ophthalmology notes. Use of this algorithm can benefit research groups interested in assessing visual acuity for patient-centered outcomes. All code used for this study is currently available and will be made available online at https://phekb.org.
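
    The mapping step can be pictured as normalizing free-text acuity strings onto a structured Snellen list and keeping the best (smallest-denominator) value per eye. The sketch below is a toy version: the lookup table, the count-fingers/hand-motion equivalents, and the regex are illustrative assumptions, not the published 5668-entry mapping.

      import re

      # Tiny illustrative lookup; the real algorithm maps thousands of observed
      # variants (Snellen, Jaeger, print size, CF/HM, ...) onto a Snellen list.
      NON_NUMERIC = {"CF": 2000, "HM": 4000}  # assumed Snellen-denominator stand-ins

      def best_documented_va(responses):
          # Return the best Snellen value among all recorded acuities for one eye.
          best = None
          for r in responses:
              m = re.search(r"20/(\d+)", r)
              denom = int(m.group(1)) if m else NON_NUMERIC.get(r.strip().upper())
              if denom is not None and (best is None or denom < best):
                  best = denom
          return None if best is None else f"20/{best}"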

  7. Automatic detection of referral patients due to retinal pathologies through data mining.

    PubMed

    Quellec, Gwenolé; Lamard, Mathieu; Erginay, Ali; Chabouis, Agnès; Massin, Pascale; Cochener, Béatrice; Cazuguel, Guy

    2016-04-01

    With the increased prevalence of retinal pathologies, automating the detection of these pathologies is becoming more and more relevant. In the past few years, many algorithms have been developed for the automated detection of a specific pathology, typically diabetic retinopathy, using eye fundus photography. No matter how good these algorithms are, we believe many clinicians would not use automatic detection tools focusing on a single pathology and ignoring any other pathology present in the patient's retinas. To solve this issue, an algorithm for characterizing the appearance of abnormal retinas, as well as the appearance of the normal ones, is presented. This algorithm does not focus on individual images: it considers examination records consisting of multiple photographs of each retina, together with contextual information about the patient. Specifically, it relies on data mining in order to learn diagnosis rules from characterizations of fundus examination records. The main novelty is that the content of examination records (images and context) is characterized at multiple levels of spatial and lexical granularity: 1) spatial flexibility is ensured by an adaptive decomposition of composite retinal images into a cascade of regions, 2) lexical granularity is ensured by an adaptive decomposition of the feature space into a cascade of visual words. This multigranular representation allows for great flexibility in automatically characterizing normality and abnormality: it is possible to generate diagnosis rules whose precision and generalization ability can be traded off depending on data availability. A variation on usual data mining algorithms, originally designed to mine static data, is proposed so that contextual and visual data at adaptive granularity levels can be mined. This framework was evaluated in e-ophtha, a dataset of 25,702 examination records from the OPHDIAT screening network, as well as in the publicly-available Messidor dataset. It was successfully applied to the detection of patients that should be referred to an ophthalmologist and also to the specific detection of several pathologies.

  9. Diffusion tensor driven contour closing for cell microinjection targeting.

    PubMed

    Becattini, Gabriele; Mattos, Leonardo S; Caldwell, Darwin G

    2010-01-01

    This article introduces a novel approach to robust automatic detection of unstained living cells in bright-field (BF) microscope images with the goal of producing a target list for an automated microinjection system. The overall image analysis process is described and includes: preprocessing, ridge enhancement, image segmentation, shape analysis and injection point definition. The developed algorithm implements a new version of anisotropic contour completion (ACC) based on the partial differential equation (PDE) for heat diffusion, which improves the cell segmentation process by elongating edges only along their tangent direction. The developed ACC algorithm is equivalent to dilating the binary edge image with a continuous elliptic structuring element that takes into account the local orientation of the contours, preventing extension in the normal direction. Experiments carried out on real images of 10-50 µm CHO-K1 adherent cells show remarkable reliability, with up to 85% success in cell detection and injection point definition.
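    The authors' ACC is PDE-based; as a loose discrete stand-in, the sketch below extends each detected edge pixel a few pixels along its local tangent (perpendicular to the image gradient), which captures the idea of growing contours along, but not across, their direction. The Canny thresholds, elongation length, and helper name are assumptions.

```python
# Discrete stand-in for anisotropic contour completion: extend each edge
# pixel a few pixels along the local tangent (perpendicular to the image
# gradient), approximating dilation with an orientation-adapted element.
# This illustrates the idea only; it is not the authors' PDE-based scheme.
import cv2
import numpy as np

def elongate_edges(gray: np.ndarray, length: int = 3) -> np.ndarray:
    """gray: uint8 bright-field image; returns edge map grown along tangents."""
    edges = cv2.Canny(gray, 50, 150)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    out = edges.copy()
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        theta = np.arctan2(gy[y, x], gx[y, x]) + np.pi / 2  # tangent direction
        dx, dy = np.cos(theta), np.sin(theta)
        p0 = (int(x - length * dx), int(y - length * dy))
        p1 = (int(x + length * dx), int(y + length * dy))
        cv2.line(out, p0, p1, 255, 1)  # grow contour along its tangent only
    return out
```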

  10. Automated segmentation of knee and ankle regions of rats from CT images to quantify bone mineral density for monitoring treatments of rheumatoid arthritis

    NASA Astrophysics Data System (ADS)

    Cruz, Francisco; Sevilla, Raquel; Zhu, Joe; Vanko, Amy; Lee, Jung Hoon; Dogdas, Belma; Zhang, Weisheng

    2014-03-01

    Bone mineral density (BMD) obtained from a CT image is an imaging biomarker used pre-clinically for characterizing the rheumatoid arthritis (RA) phenotype. We use this biomarker in animal studies for evaluating disease progression and for testing various compounds. In the current setting, BMD measurements are obtained manually by selecting the regions of interest from three-dimensional (3-D) CT images of rat legs, which results in a laborious and low-throughput process. Combining image processing techniques, such as intensity thresholding and skeletonization, with mathematical techniques in curve fitting and curvature calculations, we developed an algorithm for quick, consistent, and automatic detection of joints in large CT data sets. The implemented algorithm has reduced analysis time for a study with 200 CT images from 10 days to 3 days and has improved the robustness of the detected regions of interest compared with manual segmentation. This algorithm has been used successfully in over 40 studies.
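    A toy 2-D version of the joint-finding pipeline might look like the following: threshold bone, skeletonize, fit a smooth centerline, and take the point of maximum curvature as the joint candidate. The Hounsfield threshold, polynomial degree, and the assumption that the leg runs top-to-bottom are all illustrative.

```python
# Toy version of the joint-finding idea on a 2-D projection: threshold bone
# voxels, skeletonize, fit a smooth curve to the centerline, and take the
# maximum-curvature point as the joint candidate.
import numpy as np
from skimage.morphology import skeletonize

def joint_from_projection(slice2d: np.ndarray, bone_hu: float = 300.0):
    skel = skeletonize(slice2d > bone_hu)           # centerline of the bone
    ys, xs = np.nonzero(skel)
    order = np.argsort(ys)                          # assume leg runs top-down
    y, x = ys[order].astype(float), xs[order].astype(float)
    coef = np.polyfit(y, x, deg=4)                  # smooth centerline model
    dx = np.polyval(np.polyder(coef, 1), y)
    ddx = np.polyval(np.polyder(coef, 2), y)
    curvature = np.abs(ddx) / (1.0 + dx**2) ** 1.5  # plane-curve curvature
    k = int(np.argmax(curvature))                   # sharpest bend = joint
    return x[k], y[k]
```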

  11. Discovering Structural Regularity in 3D Geometry

    PubMed Central

    Pauly, Mark; Mitra, Niloy J.; Wallner, Johannes; Pottmann, Helmut; Guibas, Leonidas J.

    2010-01-01

    We introduce a computational framework for discovering regular or repeated geometric structures in 3D shapes. We describe and classify possible regular structures and present an effective algorithm for detecting such repeated geometric patterns in point- or mesh-based models. Our method assumes no prior knowledge of the geometry or spatial location of the individual elements that define the pattern. Structure discovery is made possible by a careful analysis of pairwise similarity transformations that reveals prominent lattice structures in a suitable model of transformation space. We introduce an optimization method for detecting such uniform grids specifically designed to deal with outliers and missing elements. This yields a robust algorithm that successfully discovers complex regular structures amidst clutter, noise, and missing geometry. The accuracy of the extracted generating transformations is further improved using a novel simultaneous registration method in the spatial domain. We demonstrate the effectiveness of our algorithm on a variety of examples and show applications to compression, model repair, and geometry synthesis. PMID:21170292

  12. Topographic prominence discriminator for the detection of short-latency spikes of retinal ganglion cells

    NASA Astrophysics Data System (ADS)

    Choi, Myoung-Hwan; Ahn, Jungryul; Park, Dae Jin; Lee, Sang Min; Kim, Kwangsoo; Cho, Dong-il Dan; Senok, Solomon S.; Koo, Kyo-in; Goo, Yong Sook

    2017-02-01

    Objective. Direct stimulation of retinal ganglion cells in degenerate retinas by implanting epi-retinal prostheses is a recognized strategy for restoration of visual perception in patients with retinitis pigmentosa or age-related macular degeneration. Elucidating the best stimulus-response paradigms in the laboratory using multielectrode arrays (MEA) is complicated by the fact that the short-latency spikes (within 10 ms) elicited by direct retinal ganglion cell (RGC) stimulation are obscured by the stimulus artifact generated by the electrical stimulator. Approach. We developed an artifact subtraction algorithm based on topographic prominence discrimination, wherein the duration of prominences within the stimulus artifact is used as a strategy for identifying the artifact for subtraction and clarifying the obfuscated spikes, which are then quantified using standard thresholding. Main results. We found that the prominence discrimination based filters perform creditably in simulation conditions by successfully isolating randomly inserted spikes in the presence of simple and even complex residual artifacts. We also show that the algorithm successfully isolated short-latency spikes in an MEA-based recording from degenerate mouse retinas, where the amplitude and frequency characteristics of the stimulus artifact vary according to the distance of the recording electrode from the stimulating electrode. By ROC analysis of false positive and false negative first spike detection rates in a dataset of one hundred and eight RGCs from four retinal patches, we found that the performance of our algorithm is comparable to that of a generally used artifact subtraction filter algorithm based on a strategy of local polynomial approximation (SALPA). Significance. We conclude that the application of topographic prominence discrimination is a valid and useful method for subtraction of stimulation artifacts with variable amplitudes and shapes. We propose that our algorithm may be used stand-alone or as a supplement to other artifact subtraction algorithms such as SALPA.
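    A minimal sketch of the prominence idea, using off-the-shelf peak analysis rather than the authors' implementation: long-duration, high-prominence peaks are treated as artifact and blanked, after which spikes are found by standard thresholding. All parameter values are placeholders.

```python
# Sketch of prominence-based separation of stimulus artifact from spikes:
# peaks whose topographic prominence persists over a long duration (width)
# are treated as artifact and blanked before standard threshold detection.
# Width and threshold values are placeholders, not the paper's parameters.
import numpy as np
from scipy.signal import find_peaks

def detect_short_latency_spikes(trace, fs, artifact_width_ms=2.0, spike_thr=4.0):
    width = int(artifact_width_ms * fs / 1000.0)
    # wide, high-prominence peaks -> artifact candidates
    art, props = find_peaks(np.abs(trace), prominence=5 * np.std(trace),
                            width=width)
    clean = trace.astype(float).copy()
    for left, right in zip(props["left_bases"], props["right_bases"]):
        clean[left:right + 1] = 0.0               # blank the artifact span
    # standard thresholding on the artifact-subtracted trace
    spikes, _ = find_peaks(np.abs(clean), height=spike_thr * np.std(clean))
    return spikes
```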

  13. Pile-up correction algorithm based on successive integration for high count rate medical imaging and radiation spectroscopy

    NASA Astrophysics Data System (ADS)

    Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar

    2018-07-01

    In high count rate radiation spectroscopy and imaging, detector output pulses tend to pile up due to the high interaction rate of particles with the detector. Pile-up effects can lead to a severe distortion of the energy and timing information. Pile-up events are conventionally prevented or rejected by both analog and digital electronics. However, for decreasing the exposure times in medical imaging applications, it is important to retain the pulses and extract their true information by pile-up correction methods. The single-event reconstruction method is a relatively new model-based approach for recovering the pulses one by one using a fitting procedure, for which a fast fitting algorithm is a prerequisite. This article proposes a fast non-iterative algorithm based on successive integration that fits the bi-exponential model to experimental data. After optimizing the method, the energy spectra, energy resolution and peak-to-peak count ratios are calculated for different counting rates using the proposed algorithm as well as the rejection method for comparison. The obtained results demonstrate the effectiveness of the proposed method as a pile-up processing scheme for spectroscopic and medical radiation detection applications.
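    The paper's fit is non-iterative; purely as a point of reference for the model being fitted, here is the same bi-exponential pulse fitted with an ordinary iterative least-squares routine (scipy's curve_fit), which the successive-integration method is designed to outperform in speed. The signal and initial guesses are synthetic.

```python
# Iterative reference fit of the bi-exponential detector pulse model. The
# paper replaces this kind of iterative fit with a non-iterative scheme
# based on successive integration; that scheme is not reproduced here.
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, A, t0, tau_rise, tau_decay):
    """Detector pulse: zero before t0, two-exponential shape after it."""
    s = np.clip(t - t0, 0.0, None)
    return A * (np.exp(-s / tau_decay) - np.exp(-s / tau_rise)) * (t >= t0)

t = np.linspace(0, 10, 500)                       # arbitrary time units
y = biexp(t, 1.0, 1.0, 0.2, 2.0) + 0.01 * np.random.randn(t.size)
p0 = [y.max(), t[np.argmax(y)] - 0.5, 0.1, 1.0]   # crude initial guess
popt, _ = curve_fit(biexp, t, y, p0=p0)
print(popt)  # recovered (A, t0, tau_rise, tau_decay)
```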

  14. Context-specific selection of algorithms for recursive feature tracking in endoscopic image using a new methodology.

    PubMed

    Selka, F; Nicolau, S; Agnus, V; Bessaid, A; Marescaux, J; Soler, L

    2015-03-01

    In minimally invasive surgery, the tracking of deformable tissue is a critical component for image-guided applications. Deformation of the tissue can be recovered by tracking features using tissue surface information (texture, color, etc.). Recent work in this field has shown success in acquiring tissue motion. However, the performance evaluation of detection and tracking algorithms on such images is still difficult and not standardized, mainly due to the lack of ground truth for real data. Moreover, in order to avoid supplementary techniques to remove outliers, no quantitative work has been undertaken to evaluate the benefit of a pre-processing step based on image filtering, which can improve feature tracking robustness. In this paper, we propose a methodology to validate detection and feature tracking algorithms, using a forward-backward tracking scheme that provides artificial ground truth data. We describe a clear and complete methodology to evaluate and compare different detection and tracking algorithms. In addition, we extend our framework to propose a strategy to identify the best combinations from a set of detector, tracker and pre-processing algorithms, according to the live intra-operative data. Experiments have been performed on in vivo datasets and show that pre-processing can have a strong influence on tracking performance and that our strategy to find the best combinations is relevant for a reasonable computation cost. Copyright © 2014 Elsevier Ltd. All rights reserved.
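    The forward-backward trick can be sketched with standard optical flow: track features from frame 1 to frame 2 and back, and treat the round-trip distance as a per-feature error measure. This is a generic rendering with assumed parameter values, not the authors' exact protocol.

```python
# Forward-backward check in the spirit of the paper's artificial ground
# truth: track features frame1 -> frame2, then back again; points that do
# not return to their origin (large FB error) are flagged as unreliable.
import cv2
import numpy as np

def fb_error(frame1_gray, frame2_gray, max_err=1.0):
    """Inputs: two uint8 grayscale frames. Returns reliable tracks."""
    pts = cv2.goodFeaturesToTrack(frame1_gray, maxCorners=300,
                                  qualityLevel=0.01, minDistance=7)
    fwd, st1, _ = cv2.calcOpticalFlowPyrLK(frame1_gray, frame2_gray, pts, None)
    back, st2, _ = cv2.calcOpticalFlowPyrLK(frame2_gray, frame1_gray, fwd, None)
    err = np.linalg.norm(pts - back, axis=2).ravel()  # round-trip distance
    ok = (st1.ravel() == 1) & (st2.ravel() == 1) & (err < max_err)
    return pts[ok], fwd[ok], err[ok]
```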

  15. True ion pick (TIPick): a denoising and peak picking algorithm to extract ion signals from liquid chromatography/mass spectrometry data.

    PubMed

    Ho, Tsung-Jung; Kuo, Ching-Hua; Wang, San-Yuan; Chen, Guan-Yuan; Tseng, Yufeng J

    2013-02-01

    Liquid Chromatography-Time of Flight Mass Spectrometry has become an important technique for toxicological screening and metabolomics. We describe TIPick, a novel algorithm that accurately and sensitively detects target compounds in biological samples. TIPick comprises two main steps: background subtraction and peak picking. By subtracting a blank chromatogram, TIPick eliminates chemical signals of blank injections and reduces false positive results. TIPick detects peaks by calculating the S(CC(INI)) values of extracted ion chromatograms (EICs) without considering peak shapes, and it is able to detect tailing and fronting peaks. TIPick also uses duplicate injections to enhance the signals of the peaks and thus improve the peak detection power. Commonly seen split peaks, caused by either saturation of the mass spectrometer detector or a mathematical background subtraction algorithm, can be resolved by adjusting the mass error tolerance of the EICs and by comparing the EICs before and after background subtraction. The performance of TIPick was tested in a data set containing 297 standard mixtures; the recall, precision and F-score were 0.99, 0.97 and 0.98, respectively. TIPick was successfully used to construct and analyze the NTU MetaCore metabolomics chemical standards library, and it was applied for toxicological screening and metabolomics studies. Copyright © 2013 John Wiley & Sons, Ltd.
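    A highly simplified rendering of the background-subtraction-plus-peak-picking outline (TIPick's own S(CC(INI)) scoring is not reproduced here); the SNR factor and noise estimate are assumptions.

```python
# Two-step sketch mirroring TIPick's outline: subtract a blank-injection
# chromatogram from the sample EIC, then pick peaks on the residual.
import numpy as np
from scipy.signal import find_peaks

def pick_ions(sample_eic, blank_eic, snr=5.0):
    residual = np.clip(sample_eic - blank_eic, 0.0, None)  # background subtraction
    # robust noise level via median absolute deviation
    noise = np.median(np.abs(residual - np.median(residual))) + 1e-12
    peaks, props = find_peaks(residual, height=snr * noise)
    return peaks, props["peak_heights"]
```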

  16. A 16-Channel Nonparametric Spike Detection ASIC Based on EC-PC Decomposition.

    PubMed

    Wu, Tong; Xu, Jian; Lian, Yong; Khalili, Azam; Rastegarnia, Amir; Guan, Cuntai; Yang, Zhi

    2016-02-01

    In extracellular neural recording experiments, detecting neural spikes is an important step for reliable information decoding. A successful implementation in integrated circuits can achieve substantial data volume reduction, potentially enabling a wireless operation and closed-loop system. In this paper, we report a 16-channel neural spike detection chip based on a customized spike detection method named the exponential component-polynomial component (EC-PC) algorithm. This algorithm features a reliable prediction of spikes by applying a probability threshold. The chip takes raw data as input and outputs three data streams simultaneously: field potentials, band-pass filtered neural data, and spiking probability maps. The algorithm parameters are configured on-chip automatically based on input data, which avoids manual parameter tuning. The chip has been tested with both in vivo experiments for functional verification and bench-top experiments for quantitative performance assessment. The system has a total power consumption of 1.36 mW and occupies an area of 6.71 mm² for 16 channels. When tested on synthesized datasets with spikes and noise segments extracted from in vivo preparations and scaled according to required precisions, the chip outperforms other detectors. A credit-card-sized prototype board is developed to provide power and data management through a USB port.

  17. Comparison of four machine learning algorithms for their applicability in satellite-based optical rainfall retrievals

    NASA Astrophysics Data System (ADS)

    Meyer, Hanna; Kühnlein, Meike; Appelhans, Tim; Nauss, Thomas

    2016-03-01

    Machine learning (ML) algorithms have been successfully demonstrated to be valuable tools in satellite-based rainfall retrievals, showing the practicability of using ML algorithms when faced with high-dimensional and complex data. Moreover, recent developments in parallel computing with ML present new possibilities for training and prediction speed and therefore make their usage in real-time systems feasible. This study compares four ML algorithms - random forests (RF), neural networks (NNET), averaged neural networks (AVNNET) and support vector machines (SVM) - for rainfall area detection and rainfall rate assignment using MSG SEVIRI data over Germany. Satellite-based proxies for cloud top height, cloud top temperature, cloud phase and cloud water path serve as predictor variables. The results indicate an overestimation of rainfall area delineation regardless of the ML algorithm (averaged bias = 1.8) but a high probability of detection ranging from 81% (SVM) to 85% (NNET). On a 24-hour basis, the performance of the rainfall rate assignment yielded R2 values between 0.39 (SVM) and 0.44 (AVNNET). Though the differences in the algorithms' performance were rather small, NNET and AVNNET were identified as the most suitable algorithms. On average, they demonstrated the best performance in rainfall area delineation as well as in rainfall rate assignment. NNET's computational speed is an additional advantage for work with large datasets such as those in remote-sensing-based rainfall retrievals. However, since no single algorithm performed considerably better than the others, we conclude that further research into providing suitable predictors for rainfall is of greater necessity than optimization through the choice of the ML algorithm.
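    For orientation, a schematic of such a comparison with scikit-learn (AVNNET would correspond to averaging several NNET fits and is omitted); the data here are random placeholders standing in for the SEVIRI-derived cloud-property predictors.

```python
# Schematic comparison of three of the four model families on a synthetic
# rain / no-rain task; features stand in for cloud top height/temperature,
# cloud phase, and cloud water path.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))           # placeholder cloud-property predictors
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000)) > 0

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "NNET": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
    "SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {score:.3f}")
```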

  18. OGUPSA sensor scheduling architecture and algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Zhixiong; Hintz, Kenneth J.

    1996-06-01

    This paper introduces a new architecture for a sensor measurement scheduler as well as a dynamic sensor scheduling algorithm called the on-line, greedy, urgency-driven, preemptive scheduling algorithm (OGUPSA). OGUPSA incorporates a preemptive mechanism which uses three policies: (1) most-urgent-first (MUF), (2) earliest-completed-first (ECF), and (3) least-versatile-first (LVF). The three policies are applied successively to dynamically allocate, schedule, and distribute a set of arriving tasks among a set of sensors. OGUPSA can also detect the failure of a task to meet a deadline, as well as generate an optimal schedule, in the sense of minimum makespan, for a group of tasks with the same priorities. A side benefit is OGUPSA's ability to improve dynamic load balance among all sensors while remaining a polynomial-time algorithm. Results of a simulation are presented for a simple sensor system.
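    The three tie-breaking policies can be expressed as a single sort key, as in the toy rendering below; the task fields are hypothetical and the sketch ignores preemption and sensor assignment, which the full algorithm also handles.

```python
# Toy rendering of OGUPSA's tie-breaking order: among schedulable tasks,
# pick the most urgent first; break ties by earliest completion, then by
# assigning tasks that fewer sensors can serve (least versatile) first.
def ogupsa_order(tasks):
    # tasks: list of dicts with 'urgency' (higher = more urgent),
    # 'duration', and 'capable_sensors' (set of sensor ids)
    return sorted(
        tasks,
        key=lambda t: (-t["urgency"],                 # MUF: most urgent first
                       t["duration"],                 # ECF: earliest completed first
                       len(t["capable_sensors"])),    # LVF: least versatile first
    )

tasks = [
    {"id": "a", "urgency": 2, "duration": 5, "capable_sensors": {1, 2}},
    {"id": "b", "urgency": 2, "duration": 5, "capable_sensors": {1}},
    {"id": "c", "urgency": 3, "duration": 9, "capable_sensors": {2, 3}},
]
print([t["id"] for t in ogupsa_order(tasks)])  # -> ['c', 'b', 'a']
```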

  19. Tsunami detection by high-frequency radar in British Columbia: performance assessment of the time-correlation algorithm for synthetic and real events

    NASA Astrophysics Data System (ADS)

    Guérin, Charles-Antoine; Grilli, Stéphan T.; Moran, Patrick; Grilli, Annette R.; Insua, Tania L.

    2018-05-01

    The authors recently proposed a new method for detecting tsunamis using high-frequency (HF) radar observations, referred to as the "time-correlation algorithm" (TCA; Grilli et al. Pure Appl Geophys 173(12):3895-3934, 2016a, 174(1): 3003-3028, 2017). Unlike standard algorithms that detect surface current patterns, the TCA is based on analyzing space-time correlations of radar signal time series in pairs of radar cells, which does not require inverting radial surface currents. This was done by calculating a contrast function, which quantifies the change in pattern of the mean correlation between pairs of neighboring cells upon tsunami arrival, with respect to a reference correlation computed in the recent past. In earlier work, the TCA was successfully validated based on realistic numerical simulations of both the radar signal and tsunami wave trains. Here, this algorithm is adapted to apply to actual data from an HF radar installed in Tofino, BC, for three test cases: (1) a simulated far-field tsunami generated in the Semidi Subduction Zone in the Aleutian Arc; (2) a simulated near-field tsunami from a submarine mass failure on the continental slope off of Tofino; and (3) an event believed to be a meteotsunami, which occurred on October 14th, 2016, off of the Pacific West Coast and was measured by the radar. In the first two cases, the synthetic tsunami signal is superimposed onto the radar signal by way of a current memory term; in the third case, the tsunami signature is present within the radar data. In light of these test cases, we develop a detection methodology based on the TCA, using a correlation contrast function, and show that in all three cases the algorithm is able to trigger a timely early warning.
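    In miniature, the time-correlation idea reduces to comparing a current-window correlation between two neighboring cells with a reference correlation from the recent past; the window lengths and the contrast definition below are simplified placeholders, not the published formulation.

```python
# Minimal form of the time-correlation idea: correlate the signal time
# series of two neighboring radar cells over a sliding window and compare
# against a reference correlation computed further in the past.
import numpy as np

def correlation_contrast(cell_a, cell_b, win, ref_offset):
    """cell_a, cell_b: 1-D signal time series of two neighboring cells."""
    def corr(x, y):
        x = x - x.mean(); y = y - y.mean()
        return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))
    current = corr(cell_a[-win:], cell_b[-win:])
    past = corr(cell_a[-win - ref_offset:-ref_offset],
                cell_b[-win - ref_offset:-ref_offset])
    return abs(current - past)   # large contrast -> candidate tsunami arrival
```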

  20. Reliable and Efficient Parallel Processing Algorithms and Architectures for Modern Signal Processing. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Liu, Kuojuey Ray

    1990-01-01

    Least-squares (LS) estimations and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms on parallel processing architectures such as systolic arrays, with efficient fault-tolerant schemes, are the major concerns of this dissertation. There are four major results in this dissertation. First, we propose the systolic block Householder transformation with application to the recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. The fault diagnosis, order degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and another based on a rectangular array, are presented for the multi-phase operations with fault-tolerant considerations. Eigenvectors and singular vectors can be easily obtained by using the multi-phase operations. Performance issues are also considered.

  1. An Algorithm for Neuropathic Pain Management in Older People.

    PubMed

    Pickering, Gisèle; Marcoux, Margaux; Chapiro, Sylvie; David, Laurence; Rat, Patrice; Michel, Micheline; Bertrand, Isabelle; Voute, Marion; Wary, Bernard

    2016-08-01

    Neuropathic pain frequently affects older people, who generally also have several comorbidities. Elderly patients are often poly-medicated, which increases the risk of drug-drug interactions. These patients, especially those with cognitive problems, may also have restricted communication skills, making pain evaluation difficult and pain treatment challenging. Clinicians and other healthcare providers need a decisional algorithm to optimize the recognition and management of neuropathic pain. We present a decisional algorithm developed by a multidisciplinary group of experts, which focuses on pain assessment and therapeutic options for the management of neuropathic pain, particularly in the elderly. The algorithm involves four main steps: (1) detection, (2) evaluation, (3) treatment, and (4) re-evaluation. The detection of neuropathic pain is an essential step in ensuring successful management. The extent of the impact of the neuropathic pain is then assessed, generally with self-report scales, except in patients with communication difficulties who can be assessed using behavioral scales. The management of neuropathic pain frequently requires combination treatments, and recommended treatments should be prescribed with caution in these elderly patients, taking into consideration their comorbidities and potential drug-drug interactions and adverse events. This algorithm can be used in the management of neuropathic pain in the elderly to ensure timely and adequate treatment by a multidisciplinary team.

  2. The detection of flaws in austenitic welds using the decomposition of the time-reversal operator

    NASA Astrophysics Data System (ADS)

    Cunningham, Laura J.; Mulholland, Anthony J.; Tant, Katherine M. M.; Gachagan, Anthony; Harvey, Gerry; Bird, Colin

    2016-04-01

    The non-destructive testing of austenitic welds using ultrasound plays an important role in the assessment of the structural integrity of safety critical structures. The internal microstructure of these welds is highly scattering and can lead to the obscuration of defects when investigated by traditional imaging algorithms. This paper proposes an alternative objective method for the detection of flaws embedded in austenitic welds based on the singular value decomposition of the time-frequency domain response matrices. The distribution of the singular values is examined in the cases where a flaw exists and where there is no flaw present. A lower threshold on the singular values, specific to austenitic welds, is derived which, when exceeded, indicates the presence of a flaw. The detection criterion is successfully implemented on both synthetic and experimental data. The datasets arising from welds containing a flaw are further interrogated using the decomposition of the time-reversal operator (DORT) method and the total focusing method (TFM), and it is shown that images constructed via the DORT algorithm typically exhibit a higher signal-to-noise ratio than those constructed by the TFM algorithm.
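    The detection criterion itself is compact; a minimal sketch, assuming the time-frequency response matrix for one frequency is already assembled and a weld-specific threshold has been derived as in the paper:

```python
# Core of the detection criterion in miniature: take the singular values
# of the array response matrix at one frequency and flag a flaw when the
# leading singular value exceeds a weld-specific noise threshold. The
# threshold value is a placeholder for the one derived in the paper.
import numpy as np

def flaw_present(response_matrix: np.ndarray, threshold: float) -> bool:
    """response_matrix: complex inter-element transfer matrix K(f)."""
    singular_values = np.linalg.svd(response_matrix, compute_uv=False)
    return bool(singular_values[0] > threshold)
```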

  3. An extended Kalman filter for mouse tracking.

    PubMed

    Choi, Hongjun; Kim, Mingi; Lee, Onseok

    2018-05-19

    Animal tracking is an important tool for observing behavior, which is useful in various research areas. Animal specimens can be tracked using dynamic models and observation models that require several types of data. Tracking mice presents several difficulties due to their physical characteristics, their unpredictable movement, and cluttered environments. Therefore, we propose a reliable method that uses a detection stage and a tracking stage to successfully track mice. The detection stage detects the surface area of the mouse skin, and the tracking stage implements an extended Kalman filter to estimate the state variables of a nonlinear model. The changes in the overall shape of the mouse are tracked using an oval-shaped tracking model to estimate the parameters of the ellipse. An experiment is conducted to demonstrate the performance of the proposed tracking algorithm using six video images showing various types of movement, and the ground truth values for synthetic images are compared to the values generated by the tracking algorithm. A conventional manual tracking method is also applied for comparison across eight experimenters. Furthermore, the effectiveness of the proposed tracking method is demonstrated by applying the tracking algorithm to actual mouse images.

  4. The detection of flaws in austenitic welds using the decomposition of the time-reversal operator

    PubMed Central

    Cunningham, Laura J.; Mulholland, Anthony J.; Gachagan, Anthony; Harvey, Gerry; Bird, Colin

    2016-01-01

    The non-destructive testing of austenitic welds using ultrasound plays an important role in the assessment of the structural integrity of safety critical structures. The internal microstructure of these welds is highly scattering and can lead to the obscuration of defects when investigated by traditional imaging algorithms. This paper proposes an alternative objective method for the detection of flaws embedded in austenitic welds based on the singular value decomposition of the time-frequency domain response matrices. The distribution of the singular values is examined in the cases where a flaw exists and where there is no flaw present. A lower threshold on the singular values, specific to austenitic welds, is derived which, when exceeded, indicates the presence of a flaw. The detection criterion is successfully implemented on both synthetic and experimental data. The datasets arising from welds containing a flaw are further interrogated using the decomposition of the time-reversal operator (DORT) method and the total focusing method (TFM), and it is shown that images constructed via the DORT algorithm typically exhibit a higher signal-to-noise ratio than those constructed by the TFM algorithm. PMID:27274683

  5. A novel cloning template designing method by using an artificial bee colony algorithm for edge detection of CNN based imaging sensors.

    PubMed

    Parmaksızoğlu, Selami; Alçı, Mustafa

    2011-01-01

    Cellular Neural Networks (CNNs) have been widely used recently in applications such as edge detection, noise reduction and object detection, which are among the main computer imaging processes. They can also be realized as hardware based imaging sensors. The fact that hardware CNN models produce robust and effective results has attracted the attention of researchers using these structures within image sensors. Realization of desired CNN behavior such as edge detection can be achieved by correctly setting a cloning template without changing the structure of the CNN. To achieve different behaviors effectively, designing a cloning template is one of the most important research topics in this field. In this study, the edge detection process, which is used as a preliminary step for segmentation, identification and coding applications, is conducted using CNN structures. In order to design the cloning template of goal-oriented CNN architecture, an Artificial Bee Colony (ABC) algorithm, inspired by the foraging behavior of honeybees, is used, and the performance of ABC for this application is examined with multiple runs. The CNN template generated by the ABC algorithm is tested by using artificial and real test images. The results are subjectively and quantitatively compared with well-known classical edge detection methods, and other CNN based edge detector cloning templates available in the imaging literature. The results show that the proposed method is more successful than other methods.

  6. A Novel Cloning Template Designing Method by Using an Artificial Bee Colony Algorithm for Edge Detection of CNN Based Imaging Sensors

    PubMed Central

    Parmaksızoğlu, Selami; Alçı, Mustafa

    2011-01-01

    Cellular Neural Networks (CNNs) have been widely used recently in applications such as edge detection, noise reduction and object detection, which are among the main computer imaging processes. They can also be realized as hardware based imaging sensors. The fact that hardware CNN models produce robust and effective results has attracted the attention of researchers using these structures within image sensors. Realization of desired CNN behavior such as edge detection can be achieved by correctly setting a cloning template without changing the structure of the CNN. To achieve different behaviors effectively, designing a cloning template is one of the most important research topics in this field. In this study, the edge detection process, which is used as a preliminary step for segmentation, identification and coding applications, is conducted using CNN structures. In order to design the cloning template of goal-oriented CNN architecture, an Artificial Bee Colony (ABC) algorithm, inspired by the foraging behavior of honeybees, is used, and the performance of ABC for this application is examined with multiple runs. The CNN template generated by the ABC algorithm is tested by using artificial and real test images. The results are subjectively and quantitatively compared with well-known classical edge detection methods, and other CNN based edge detector cloning templates available in the imaging literature. The results show that the proposed method is more successful than other methods. PMID:22163903

  7. Real-time 3D change detection of IEDs

    NASA Astrophysics Data System (ADS)

    Wathen, Mitch; Link, Norah; Iles, Peter; Jinkerson, John; Mrstik, Paul; Kusevic, Kresimir; Kovats, David

    2012-06-01

    Road-side bombs are a real and continuing threat to soldiers in theater. CAE USA recently developed a prototype Volume-based Intelligence Surveillance Reconnaissance (VISR) sensor platform for IED detection. This vehicle-mounted, prototype sensor system uses a high data rate LiDAR (1.33 million range measurements per second) to generate a 3D mapping of roadways. The mapped data is used as a reference to generate real-time change detection on future trips on the same roadways. The prototype VISR system is briefly described. The focus of this paper is the methodology used to process the 3D LiDAR data, in real time, to detect small changes on and near the roadway ahead of a vehicle traveling at moderate speeds, with sufficient warning to stop the vehicle at a safe distance from the threat. The system relies on accurate navigation equipment to geo-reference the reference run and the change-detection run. Since it was recognized early in the project that detection of small changes could not be achieved with accurate navigation solutions alone, a scene alignment algorithm was developed to register the reference run with the change-detection run prior to applying the change detection algorithm. Good success was achieved in simultaneous real-time processing of scene alignment plus change detection.

  8. Detection of melanomas by digital imaging of spectrally resolved UV light-induced autofluorescence of human skin

    NASA Astrophysics Data System (ADS)

    Chwirot, B. W.; Chwirot, S.; Jedrzejczyk, W.; Redzinski, J.; Raczynska, A. M.; Telega, K.

    2001-07-01

    We studied the spectral and spatial distributions of the intensity of ultraviolet light-excited fluorescence of human skin. Our studies, performed in situ in 162 patients with malignant and non-malignant skin lesions, resulted in a new method of detecting melanomas in situ using digital imaging of the spectrally resolved fluorescence. With our diagnostic algorithm we could successfully detect 88.5% of the melanoma cases in the group of patients examined with the fluorescence method. A patent application for the method has been submitted to the Patent Office in Warsaw.

  9. A semi-learning algorithm for noise rejection: an fNIRS study on ADHD children

    NASA Astrophysics Data System (ADS)

    Sutoko, Stephanie; Funane, Tsukasa; Katura, Takusige; Sato, Hiroki; Kiguchi, Masashi; Maki, Atsushi; Monden, Yukifumi; Nagashima, Masako; Yamagata, Takanori; Dan, Ippeita

    2017-02-01

    In pediatrics studies, the quality of functional near infrared spectroscopy (fNIRS) signals is often reduced by motion artifacts. These artifacts likely mislead brain functionality analysis, causing false discoveries. While noise correction methods and their performance have been investigated, these methods require several parameter assumptions that apparently result in noise overfitting. In contrast, the rejection of noisy signals serves as a preferable method because it maintains the originality of the signal waveform. Here, we describe a semi-learning algorithm to detect and eliminate noisy signals. The algorithm dynamically adjusts noise detection according to the predetermined noise criteria, which are spikes, unusual activation values (averaged amplitude signals within the brain activation period), and high activation variances (among trials). Criteria were organized sequentially in the algorithm, and signals were assessed in order against each criterion. By initially setting an acceptable rejection rate, particular criteria causing excessive data rejections are neglected, whereas others with tolerable rejections practically eliminate noises. fNIRS data measured during an attention response paradigm (oddball task) in children with attention deficit/hyperactivity disorder (ADHD) were utilized to evaluate and optimize the algorithm's performance. This algorithm successfully substituted for the visual noise identification performed in previous studies and consistently found significantly lower activation of the right prefrontal and parietal cortices in ADHD patients than in typically developing children. Thus, we conclude that the semi-learning algorithm confers more objective and standardized judgment for noise rejection and presents a promising alternative to visual noise rejection.
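    The acceptable-rejection-rate logic might be skeletonized as follows; the criterion functions and the cap value are examples, not the study's settings.

```python
# Skeleton of the sequential rejection logic: apply each noise criterion
# in order, but skip any criterion that would push the total rejection
# rate past the preset cap.
import numpy as np

def reject_noisy_channels(signals, criteria, max_reject_rate=0.3):
    """signals: (channels, samples) array; criteria: list of f(signal) -> bool."""
    n = len(signals)
    rejected = np.zeros(n, dtype=bool)
    for criterion in criteria:
        flags = np.array([criterion(s) for s in signals]) & ~rejected
        if (rejected.sum() + flags.sum()) / n <= max_reject_rate:
            rejected |= flags          # tolerable: apply this criterion
        # else: criterion causes excessive rejection and is neglected
    return rejected

# example criteria (thresholds are placeholders)
spike = lambda s: np.max(np.abs(np.diff(s))) > 5 * np.std(s)
variance = lambda s: np.var(s) > 2.0
# usage: reject_noisy_channels(data, [spike, variance])
```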

  10. Application of the Trend Filtering Algorithm for Photometric Time Series Data

    NASA Astrophysics Data System (ADS)

    Gopalan, Giri; Plavchan, Peter; van Eyken, Julian; Ciardi, David; von Braun, Kaspar; Kane, Stephen R.

    2016-08-01

    Detecting transient light curves (e.g., transiting planets) requires high-precision data, and thus it is important to effectively filter systematic trends affecting ground-based wide-field surveys. We apply an implementation of the Trend Filtering Algorithm (TFA) to the 2MASS calibration catalog and selected Palomar Transient Factory (PTF) photometric time series data. TFA is successful at reducing the overall dispersion of light curves; however, it may over-filter intrinsic variables and increase “instantaneous” dispersion when a template set is not judiciously chosen. In an attempt to rectify these issues we modify the original TFA from the literature by including measurement uncertainties in its computation, including ancillary data correlated with noise, and algorithmically selecting a template set using clustering algorithms as suggested by various authors. This approach may be particularly useful for appropriately accounting for variable photometric precision surveys and/or combined data sets. In summary, our contributions are to provide a MATLAB software implementation of TFA and a number of modifications tested on synthetics and real data, summarize the performance of TFA and various modifications on real ground-based data sets (2MASS and PTF), and assess the efficacy of TFA and modifications using synthetic light curve tests consisting of transiting and sinusoidal variables. While the transiting variables test indicates that these modifications confer no advantage to transit detection, the sinusoidal variables test indicates potential improvements in detection accuracy.

  11. Automatic parameter selection for feature-based multi-sensor image registration

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan

    2006-05-01

    Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.

  12. The Least Mean Squares Adaptive FIR Filter for Narrow-Band RFI Suppression in Radio Detection of Cosmic Rays

    NASA Astrophysics Data System (ADS)

    Szadkowski, Zbigniew; Głas, Dariusz

    2017-06-01

    Radio emission from extensive air showers (EASs), initiated by ultrahigh-energy cosmic rays, was theoretically suggested over 50 years ago. However, due to technical limitations, successful collection of sufficient statistics can take several years. Nowadays, this detection technique is used in many experiments studying EASs. One of them is the Auger Engineering Radio Array (AERA), located within the Pierre Auger Observatory. AERA focuses on the radio emission generated by the electromagnetic part of the shower, mainly in geomagnetic and charge excess processes. The frequency band observed by AERA radio stations is 30-80 MHz. Thus, the frequency range is contaminated by human-made, narrow-band radio frequency interferences (RFIs). Suppression of these contaminations is very important to lower the rate of spurious triggers. There are two kinds of digital filters used in AERA radio stations to suppress these contaminations: the fast Fourier transform median filter and four narrow-band IIR-notch filters. Both filters have worked successfully in the field for many years. An adaptive filter based on a least mean squares (LMS) algorithm is a relatively simple finite impulse response (FIR) filter, which can be an alternative to the currently used filters. Simulations in MATLAB are very promising and show that the LMS filter can be very efficient in suppressing RFI while only slightly distorting radio signals. The LMS algorithm was implemented in a Cyclone V field programmable gate array for testing the stability, RFI suppression efficiency, and adaptation time to new conditions. First results show that the FIR filter based on the LMS algorithm can be successfully implemented and used in real AERA radio stations.
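    A textbook LMS adaptive FIR filter of the kind described, arranged here as an adaptive line enhancer: a delayed copy of the input predicts the (quasi-periodic) narrow-band RFI, and the prediction error, in which impulsive radio pulses survive, is the output. Tap count, step size, and delay are illustrative, not AERA's deployed values.

```python
# LMS adaptive FIR filter for narrow-band interference suppression. A
# delayed reference predicts the periodic RFI component; broadband,
# impulsive shower pulses remain in the prediction error.
import numpy as np

def lms_suppress(signal, n_taps=32, mu=1e-3, delay=16):
    w = np.zeros(n_taps)
    out = np.zeros(len(signal), dtype=float)
    for n in range(n_taps + delay, len(signal)):
        x = signal[n - delay - n_taps:n - delay][::-1]  # delayed reference taps
        rfi_estimate = w @ x                 # predict the narrow-band RFI part
        e = signal[n] - rfi_estimate         # error = broadband residue
        w += mu * e * x                      # LMS weight update
        out[n] = e                           # impulsive radio pulses survive
    return out
```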

  13. A fuzzy measure approach to motion frame analysis for scene detection. M.S. Thesis - Houston Univ.

    NASA Technical Reports Server (NTRS)

    Leigh, Albert B.; Pal, Sankar K.

    1992-01-01

    This paper addresses a solution to the problem of scene estimation of motion video data in the fuzzy set theoretic framework. Using fuzzy image feature extractors, a new algorithm is developed to compute the change of information in each of two successive frames to classify scenes. This classification process of raw input visual data can be used to establish structure for correlation. The algorithm attempts to fulfill the need for nonlinear, frame-accurate access to video data for applications such as video editing and visual document archival/retrieval systems in multimedia environments.

  14. A Third Approach to Gene Prediction Suggests Thousands of Additional Human Transcribed Regions

    PubMed Central

    Glusman, Gustavo; Qin, Shizhen; El-Gewely, M. Raafat; Siegel, Andrew F; Roach, Jared C; Hood, Leroy; Smit, Arian F. A

    2006-01-01

    The identification and characterization of the complete ensemble of genes is a main goal of deciphering the digital information stored in the human genome. Many algorithms for computational gene prediction have been described, ultimately derived from two basic concepts: (1) modeling gene structure and (2) recognizing sequence similarity. Successful hybrid methods combining these two concepts have also been developed. We present a third orthogonal approach to gene prediction, based on detecting the genomic signatures of transcription, accumulated over evolutionary time. We discuss four algorithms based on this third concept: Greens and CHOWDER, which quantify mutational strand biases caused by transcription-coupled DNA repair, and ROAST and PASTA, which are based on strand-specific selection against polyadenylation signals. We combined these algorithms into an integrated method called FEAST, which we used to predict the location and orientation of thousands of putative transcription units not overlapping known genes. Many of the newly predicted transcriptional units do not appear to code for proteins. The new algorithms are particularly apt at detecting genes with long introns and lacking sequence conservation. They therefore complement existing gene prediction methods and will help identify functional transcripts within many apparent “genomic deserts.” PMID:16543943

  15. Optimization of Second Fault Detection Thresholds to Maximize Mission POS

    NASA Technical Reports Server (NTRS)

    Anzalone, Evan

    2018-01-01

    In order to support manned spaceflight safety requirements, the Space Launch System (SLS) has defined program-level requirements for key systems to ensure successful operation under single fault conditions. To accommodate this with regard to Navigation, the SLS utilizes an internally redundant Inertial Navigation System (INS) with built-in capability to detect, isolate, and recover from first failure conditions and still maintain adherence to performance requirements. The unit utilizes multiple hardware- and software-level techniques to enable detection, isolation, and recovery from these events in terms of its built-in Fault Detection, Isolation, and Recovery (FDIR) algorithms. Successful operation is defined in terms of sufficient navigation accuracy at insertion while operating under worst case single sensor outages (gyroscope and accelerometer faults at launch). In addition to first fault detection and recovery, the SLS program has also levied requirements relating to the capability of the INS to detect a second fault, tracking any unacceptable uncertainty in knowledge of the vehicle's state. This detection functionality is required in order to feed abort analysis and ensure crew safety. Increases in navigation state error and sensor faults can drive the vehicle outside of its operational as-designed environments and outside of its performance envelope, causing loss of mission or, worse, loss of crew. The criteria for operation under second faults allow for a larger set of achievable missions in terms of potential fault conditions, due to the INS operating at the edge of its capability. As this performance is defined and controlled at the vehicle level, it allows for the use of system level margins to increase probability of mission success on the operational edges of the design space. Due to the implications of the vehicle response to abort conditions (such as a potentially failed INS), it is important to consider a wide range of failure scenarios in terms of both magnitude and time. As such, the Navigation team is taking advantage of the INS's capability to schedule and change fault detection thresholds in flight. These values are optimized along a nominal trajectory in order to maximize probability of mission success and to reduce the probability of false positives (defined as when the INS would report a second fault condition resulting in loss of mission, but the vehicle would still meet insertion requirements within system-level margins). This paper will describe an optimization approach using Genetic Algorithms to tune the threshold parameters to maximize vehicle resilience to second fault events as a function of potential fault magnitude and time of fault over an ascent mission profile. The analysis approach and performance assessment of the results will be presented to demonstrate the applicability of this process to second fault detection to maximize mission probability of success.

  16. SU-G-JeP4-03: Anomaly Detection of Respiratory Motion by Use of Singular Spectrum Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotoku, J; Kumagai, S; Nakabayashi, S

    Purpose: The implementation and realization of automatic anomaly detection of respiratory motion is a very important technique to prevent accidental damage during radiation therapy. Here, we propose an automatic anomaly detection method using singular value decomposition analysis. Methods: The anomaly detection procedure consists of four parts: (1) measurement of normal respiratory motion data of a patient; (2) calculation of a trajectory matrix representing normal time-series features; (3) real-time monitoring and calculation of a trajectory matrix of real-time data; (4) calculation of an anomaly score from the similarity of the two feature matrices. Patient motion was observed by a marker-less tracking system using a depth camera. Results: Two types of motion, e.g., coughing and a sudden stop of breathing, were successfully detected in our real-time application. Conclusion: Automatic anomaly detection of respiratory motion using singular spectrum analysis was successful for coughing and a sudden stop of breathing, and the algorithm shows promise for clinical use. This work was supported by JSPS KAKENHI Grant Number 15K08703.
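    A condensed form of the SSA-style score: embed the respiration trace in a trajectory (Hankel) matrix, learn the normal subspace from a reference segment by SVD, and score a test segment by the energy falling outside that subspace. Window and rank values are placeholders.

```python
# Singular-spectrum-style anomaly score: compare a test segment against
# the subspace spanned by the leading singular vectors of the reference
# trajectory matrix.
import numpy as np

def hankel(x, L):
    """Trajectory matrix: L-sample windows stacked as columns."""
    return np.column_stack([x[i:i + L] for i in range(len(x) - L + 1)])

def ssa_anomaly_score(reference, test, L=30, r=3):
    U, _, _ = np.linalg.svd(hankel(reference, L), full_matrices=False)
    basis = U[:, :r]                           # normal-motion subspace
    T = hankel(test, L)
    residual = T - basis @ (basis.T @ T)       # component outside the subspace
    return np.linalg.norm(residual) / np.linalg.norm(T)
```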

  17. A ZigBee-Based Location-Aware Fall Detection System for Improving Elderly Telecare

    PubMed Central

    Huang, Chih-Ning; Chan, Chia-Tai

    2014-01-01

    Falls are the primary cause of accidents among the elderly and frequently cause fatal and non-fatal injuries associated with a large amount of medical costs. Fall detection using wearable wireless sensor nodes has the potential of improving elderly telecare. This investigation proposes a ZigBee-based location-aware fall detection system for elderly telecare that provides an unobstructed communication between the elderly and caregivers when falls happen. The system is based on ZigBee-based sensor networks, and the sensor node consists of a motherboard with a tri-axial accelerometer and a ZigBee module. A wireless sensor node worn on the waist continuously detects fall events and starts an indoor positioning engine as soon as a fall happens. In the fall detection scheme, this study proposes a three-phase threshold-based fall detection algorithm to detect critical and normal falls. The fall alarm can be canceled by pressing and holding the emergency fall button only when a normal fall is detected. On the other hand, there are three phases in the indoor positioning engine: path loss survey phase, Received Signal Strength Indicator (RSSI) collection phase and location calculation phase. Finally, the location of the faller will be calculated by a k-nearest neighbor algorithm with weighted RSSI. The experimental results demonstrate that the fall detection algorithm achieves 95.63% sensitivity, 73.5% specificity, 88.62% accuracy and 88.6% precision. Furthermore, the average error distance for indoor positioning is 1.15 ± 0.54 m. The proposed system successfully delivers critical information to remote telecare providers who can then immediately help a fallen person. PMID:24743841
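    Two fragments matching the system outline, with example thresholds and k: a magnitude-threshold fall test on tri-axial accelerometer data, and RSSI-weighted k-nearest-neighbor positioning against a fingerprint survey. Neither reproduces the paper's three-phase algorithm or its tuned parameters.

```python
# (1) Threshold-style fall test: a free-fall dip in acceleration magnitude
# followed shortly by an impact peak. (2) Weighted k-NN indoor positioning
# from survey-phase RSSI fingerprints. All values are example settings.
import numpy as np

def fall_detected(acc, free_fall_g=0.4, impact_g=2.5):
    """acc: (samples, 3) accelerometer data in g."""
    mag = np.linalg.norm(acc, axis=1)
    dips = np.nonzero(mag < free_fall_g)[0]
    return any(mag[i:i + 50].max() > impact_g for i in dips)

def knn_position(rssi, fingerprints, positions, k=3):
    """fingerprints: (n_points, n_anchors) survey RSSI; positions: (n_points, 2)."""
    d = np.linalg.norm(fingerprints - rssi, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)                  # closer fingerprints weigh more
    return (w[:, None] * positions[idx]).sum(axis=0) / w.sum()
```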

  18. Spatial and Temporal Varying Thresholds for Cloud Detection in Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary; Haines, Stephanie

    2007-01-01

    A new cloud detection technique has been developed and applied to both geostationary and polar orbiting satellite imagery having channels in the thermal infrared and short wave infrared spectral regions. The bispectral composite threshold (BCT) technique uses only the 11 micron and 3.9 micron channels, and composite imagery generated from these channels, in a four-step cloud detection procedure to produce a binary cloud mask at single pixel resolution. A unique aspect of this algorithm is the use of 20-day composites of the 11 micron and the 11 - 3.9 micron channel difference imagery to represent spatially and temporally varying clear-sky thresholds for the bispectral cloud tests. The BCT cloud detection algorithm has been applied to GOES and MODIS data over the continental United States over the last three years with good success. The resulting products have been validated against "truth" datasets (generated by the manual determination of the sky conditions from available satellite imagery) for various seasons from the 2003-2005 period. The day and night algorithm has been shown to determine the correct sky conditions 80-90% of the time (on average) over land and ocean areas. Only a small variation in algorithm performance occurs between day and night, between land and ocean, and between seasons. The algorithm performs least well during the winter season, with only 80% of the sky conditions determined correctly. The algorithm was found to under-determine clouds at night and during times of low sun angle (in geostationary satellite data) and tends to over-determine the presence of clouds during the day, particularly in the summertime. Since the spectral tests use only the short- and long-wave channels common to most multispectral scanners, the application of the BCT technique to a variety of satellite sensors including SEVIRI should be straightforward and produce similar performance results.

  19. Real time coarse orientation detection in MR scans using multi-planar deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Bhatia, Parmeet S.; Reda, Fitsum; Harder, Martin; Zhan, Yiqiang; Zhou, Xiang Sean

    2017-02-01

    Automatically detecting anatomy orientation is an important task in medical image analysis. Specifically, the ability to automatically detect coarse orientation of structures is useful to minimize the effort of fine/accurate orientation detection algorithms, to initialize non-rigid deformable registration algorithms or to align models to target structures in model-based segmentation algorithms. In this work, we present a deep convolution neural network (DCNN)-based method for fast and robust detection of the coarse structure orientation, i.e., the hemisphere where the principal axis of a structure lies. That is, our algorithm predicts whether the principal orientation of a structure is in the northern hemisphere or southern hemisphere, which we will refer to as UP and DOWN, respectively, in the remainder of this manuscript. The only assumption of our method is that the entire structure is located within the scan's field-of-view (FOV). To efficiently solve the problem in 3D space, we formulated it as a multi-planar 2D deep learning problem. In the training stage, a large number of coronal-sagittal slice pairs are constructed as 2-channel images to train a DCNN to classify whether a scan is UP or DOWN. During testing, we randomly sample a small number of coronal-sagittal 2-channel images and pass them through our trained network. Finally, coarse structure orientation is determined using majority voting. We tested our method on 114 Elbow MR Scans. Experimental results suggest that only five 2-channel images are sufficient to achieve a high success rate of 97.39%. Our method is also extremely fast and takes approximately 50 milliseconds per 3D MR scan. Our method is insensitive to the location of the structure in the FOV.

  20. An automated cross-correlation based event detection technique and its application to surface passive data set

    USGS Publications Warehouse

    Forghani-Arani, Farnoush; Behura, Jyoti; Haines, Seth S.; Batzle, Mike

    2013-01-01

    In studies on heavy oil, shale reservoirs, tight gas and enhanced geothermal systems, the use of surface passive seismic data to monitor induced microseismicity due to fluid flow in the subsurface is becoming more common. However, in most studies passive seismic records contain days and months of data, and manually analysing the data can be expensive and inaccurate. Moreover, in the presence of noise, detecting the arrival of weak microseismic events becomes challenging. Hence, the use of an automated, accurate and computationally fast technique for event detection in passive seismic data is essential. The conventional automatic event identification algorithm computes a running-window energy ratio of the short-term average to the long-term average of the passive seismic data for each trace. We show that for the common case of a low signal-to-noise ratio in surface passive records, the conventional method is not sufficiently effective at event identification. Here, we extend the conventional algorithm by introducing a technique that is based on the cross-correlation of the energy ratios computed by the conventional method. With our technique we can measure the similarities amongst the computed energy ratios at different traces. Our approach is successful at improving the detectability of events with a low signal-to-noise ratio that are not detectable with the conventional algorithm. Also, our algorithm has the advantage of identifying whether an event is common to all stations (a regional event) or to a limited number of stations (a local event). We provide examples of applying our technique to synthetic data and a field surface passive data set recorded at a geothermal site.
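    In compact form, the two stages are a classical STA/LTA energy ratio per trace followed by a comparison of the ratio curves across traces; the sketch below uses a simple stack-correlation stand-in for the authors' pairwise cross-correlation, with illustrative window lengths.

```python
# Stage 1: classical STA/LTA energy ratio per trace. Stage 2: correlate
# each trace's ratio curve against the stack so that only events coherent
# across many stations stand out (a simplified stand-in for the pairwise
# cross-correlation in the paper).
import numpy as np

def sta_lta(trace, nsta=50, nlta=1000):
    energy = trace.astype(float) ** 2
    sta = np.convolve(energy, np.ones(nsta) / nsta, mode="same")
    lta = np.convolve(energy, np.ones(nlta) / nlta, mode="same")
    return sta / (lta + 1e-12)

def coherent_detection(traces, thr=2.0):
    ratios = np.array([sta_lta(t) for t in traces])
    pilot = ratios.mean(axis=0)                   # stack of energy ratios
    # coherent (regional) events correlate at all stations; local noise
    # correlates at only a few
    scores = [np.corrcoef(r, pilot)[0, 1] for r in ratios]
    return pilot.max() > thr and np.median(scores) > 0.5
```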

  1. Landmark-Based Drift Compensation Algorithm for Inertial Pedestrian Navigation

    PubMed Central

    Munoz Diaz, Estefania; Caamano, Maria; Fuentes Sánchez, Francisco Javier

    2017-01-01

    The navigation of pedestrians based on inertial sensors, i.e., accelerometers and gyroscopes, has experienced great growth over recent years. However, the noise of medium- and low-cost sensors causes a high error in the orientation estimation, particularly in the yaw angle. This error, called drift, is due to the bias of the z-axis gyroscope and other slowly changing errors, such as temperature variations. We propose a seamless landmark-based drift compensation algorithm that only uses inertial measurements. The proposed algorithm adds great value to the state of the art, because the vast majority of drift elimination algorithms apply corrections to the estimated position, but not to the yaw angle estimation. Instead, the presented algorithm computes the drift value and uses it to prevent yaw errors and therefore position errors. In order to achieve this goal, a detector of landmarks, i.e., corners and stairs, and an association algorithm have been developed. The results of the experiments show that it is possible to reliably detect corners and stairs using only inertial measurements, eliminating the need for the user to take any action, e.g., pressing a button. Associations between re-visited landmarks are successfully made taking into account the uncertainty of the position. After that, the drift is computed out of all associations and used during a post-processing stage to obtain a low-drift yaw angle estimation, which leads to successfully drift-compensated trajectories. The proposed algorithm has been tested with quasi-error-free turn rate measurements introducing known biases and with medium-cost gyroscopes in 3D indoor and outdoor scenarios. PMID:28671622

  2. Localization of short-range acoustic and seismic wideband sources: Algorithms and experiments

    NASA Astrophysics Data System (ADS)

    Stafsudd, J. Z.; Asgari, S.; Hudson, R.; Yao, K.; Taciroglu, E.

    2008-04-01

    We consider the determination of the location (source localization) of a disturbance source which emits acoustic and/or seismic signals. We devise an enhanced approximate maximum-likelihood (AML) algorithm to process data collected at acoustic sensors (microphones) belonging to an array of non-collocated but otherwise identical sensors. The approximate maximum-likelihood algorithm exploits the time-delay-of-arrival of acoustic signals at different sensors, and yields the source location. For processing the seismic signals, we investigate two distinct algorithms, both of which process data collected at a single measurement station comprising a triaxial accelerometer, to determine direction-of-arrival. The direction-of-arrivals determined at each sensor station are then combined using a weighted least-squares approach for source localization. The first of the direction-of-arrival estimation algorithms is based on the spectral decomposition of the covariance matrix, while the second is based on surface wave analysis. Both of the seismic source localization algorithms have their roots in seismology; and covariance matrix analysis had been successfully employed in applications where the source and the sensors (array) are typically separated by planetary distances (i.e., hundreds to thousands of kilometers). Here, we focus on very short distances (e.g., less than one hundred meters) instead, with an outlook to applications in multi-modal surveillance, including target detection, tracking, and zone intrusion. We demonstrate the utility of the aforementioned algorithms through a series of open-field tests wherein we successfully localize wideband acoustic and/or seismic sources. We also investigate a basic strategy for fusion of results yielded by acoustic and seismic arrays.

  3. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    NASA Astrophysics Data System (ADS)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications, including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis. The proposed tool, denoted the PSO-Snake model, has already been successfully tested in previous work for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm to calculating the sidereal rotational angular velocity of the solar corona. To validate the results, we compare them with published results obtained manually by an expert.

  4. Comparison of human and algorithmic target detection in passive infrared imagery

    NASA Astrophysics Data System (ADS)

    Weber, Bruce A.; Hutchinson, Meredith

    2003-09-01

    We have designed an experiment that compares the performance of human observers with that of a scale-insensitive target detection algorithm that uses pixel-level information to detect ground targets in passive infrared imagery. The test database contains targets near clutter whose detectability ranges from easy to very difficult. Results indicate that human observers detect more "easy-to-detect" targets, and with far fewer false alarms, than the algorithm. For "difficult-to-detect" targets, human and algorithm detection rates are both considerably degraded, and algorithm false alarms are excessive. Analysis of detections as a function of observer confidence shows that the algorithm's confidence attribution does not correspond to human attribution and does not adequately correlate with correct detections. The best target detection score for any human observer was 84%, compared to 55% for the algorithm at the same false alarm rate. At 81%, the maximum detection score for the algorithm, the same human observer had 6 false alarms per frame compared to 29 for the algorithm. Detector ROC curves and observer-confidence analysis benchmark the algorithm and provide insights into its deficiencies and possible paths to improvement.

  5. Community detection in networks with unequal groups.

    PubMed

    Zhang, Pan; Moore, Cristopher; Newman, M E J

    2016-01-01

    Recently, a phase transition has been discovered in the network community detection problem below which no algorithm can tell which nodes belong to which communities with success any better than a random guess. This result has, however, so far been limited to the case where the communities have the same size or the same average degree. Here we consider the case where the sizes or average degrees differ. This asymmetry allows us to assign nodes to communities with better-than-random success by examining their local neighborhoods. Using the cavity method, we show that this removes the detectability transition completely for networks with four groups or fewer, while for more than four groups the transition persists up to a critical amount of asymmetry but not beyond. The critical point in the latter case coincides with the point at which local information percolates, causing a global transition from a less-accurate solution to a more-accurate one.

  6. TU-H-206-01: An Automated Approach for Identifying Geometric Distortions in Gamma Cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mann, S; Nelson, J; Samei, E

    2016-06-15

    Purpose: To develop a clinically-deployable, automated process for detecting artifacts in routine nuclear medicine (NM) quality assurance (QA) bar phantom images. Methods: An artifact detection algorithm was created to analyze bar phantom images as part of an ongoing QA program. A low noise, high resolution reference image was acquired from an x-ray of the bar phantom with a Philips Digital Diagnost system utilizing image stitching. NM bar images, acquired for 5 million counts over a 512×512 matrix, were registered to the template image by maximizing mutual information (MI). The MI index was used as an initial test for artifacts; low values indicate an overall presence of distortions regardless of their spatial location. Images with low MI scores were further analyzed for bar linearity, periodicity, alignment, and compression to locate differences with respect to the template. Findings from each test were spatially correlated and locations failing multiple tests were flagged as potential artifacts requiring additional visual analysis. The algorithm was initially deployed for GE Discovery 670 and Infinia Hawkeye gamma cameras. Results: The algorithm successfully identified clinically relevant artifacts from both systems previously unnoticed by technologists performing the QA. Average MI indices for artifact-free images are 0.55. Images with MI indices < 0.50 have shown 100% sensitivity and specificity for artifact detection when compared with a thorough visual analysis. Correlation of geometric tests confirms the ability to spatially locate the most likely image regions containing an artifact regardless of initial phantom orientation. Conclusion: The algorithm shows the potential to detect gamma camera artifacts that may be missed by routine technologist inspections. Detection and subsequent correction of artifacts ensures maximum image quality and may help to identify failing hardware before it impacts clinical workflow. Going forward, the algorithm is being deployed to monitor data from all gamma cameras within our health system.
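
    A minimal sketch of the mutual-information index used as the initial artifact test might look as follows; the bin count is an assumption, and only the 0.50 routing threshold is taken from the reported results.

    ```python
    import numpy as np

    def mutual_information(image, template, bins=64):
        """MI between a registered NM bar image and the x-ray reference.

        Computed from the joint grey-level histogram; low values flag an
        overall presence of distortions regardless of spatial location.
        """
        hist, _, _ = np.histogram2d(image.ravel(), template.ravel(), bins=bins)
        pxy = hist / hist.sum()
        px = pxy.sum(axis=1, keepdims=True)   # marginal of the NM image
        py = pxy.sum(axis=0, keepdims=True)   # marginal of the reference
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px * py)[nz])))

    # Per the record, registered images scoring below ~0.50 would be routed
    # to the linearity/periodicity/alignment tests for spatial localization.
    ```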

  7. Stereo Image Dense Matching by Integrating SIFT and SGM Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Song, Y.; Lu, J.

    2018-05-01

    Semi-global matching (SGM) performs dynamic programming by treating the different path directions equally. It does not consider the impact of different path directions on cost aggregation, and as the disparity search range expands, the accuracy and efficiency of the algorithm decrease drastically. This paper presents a dense matching algorithm that integrates SIFT and SGM. It takes the successful matching pairs found by SIFT as control points to direct the path in dynamic programming, truncating error propagation. Besides, matching accuracy can be improved by using the gradient direction of the detected feature points to modify the weights of the paths in different directions. The experimental results based on the Middlebury stereo data sets and CE-3 lunar data sets demonstrate that the proposed algorithm can effectively cut off error propagation, reduce the disparity search range and improve matching accuracy.

  8. Online boosting for vehicle detection.

    PubMed

    Chang, Wen-Chung; Cho, Chih-Wei

    2010-06-01

    This paper presents a real-time vision-based vehicle detection system employing an online boosting algorithm. It is an online AdaBoost approach for a cascade of strong classifiers instead of a single strong classifier. Most existing cascades of classifiers must be trained offline and cannot effectively be updated when online tuning is required. The idea is to develop a cascade of strong classifiers for vehicle detection that is capable of being online trained in response to changing traffic environments. To make the online algorithm tractable, the proposed system must efficiently tune parameters based on incoming images and up-to-date performance of each weak classifier. The proposed online boosting method can improve system adaptability and accuracy to deal with novel types of vehicles and unfamiliar environments, whereas existing offline methods rely much more on extensive training processes to reach comparable results and cannot further be updated online. Our approach has been successfully validated in real traffic environments by performing experiments with an onboard charge-coupled-device camera in a roadway vehicle.

  9. Landmine detection using two-tapped joint orthogonal matching pursuits

    NASA Astrophysics Data System (ADS)

    Goldberg, Sean; Glenn, Taylor; Wilson, Joseph N.; Gader, Paul D.

    2012-06-01

    Joint Orthogonal Matching Pursuits (JOMP) is used here in the context of landmine detection using data obtained from an electromagnetic induction (EMI) sensor. The response from an object containing metal can be decomposed into a discrete spectrum of relaxation frequencies (DSRF), from which we construct a dictionary. A greedy iterative algorithm is proposed for computing successive residuals of a signal by subtracting away the highest-matching dictionary element at each step. The final confidence for a particular signal is a combination of the reciprocal of this residual and the mean of the complex component. A two-tap approach comparing signals on opposite sides of the geometric location of the sensor is examined and found to produce better classification. It is found that using only a single pursuit does a comparable job, reducing complexity and allowing for real-time implementation in automated target recognition systems. JOMP is particularly highlighted in comparison with a previous EMI detection algorithm known as String Match.
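
    The greedy residual loop at the heart of the pursuit can be sketched as follows, assuming a unit-norm DSRF dictionary; the combination with the mean of the complex component and the two-tap comparison are omitted, and all names are illustrative.

    ```python
    import numpy as np

    def jomp_residual(signal, dictionary, n_pursuits=1):
        """Residual norm after greedy pursuit against a DSRF dictionary.

        dictionary: (n_atoms, n_samples) unit-norm relaxation-frequency
        responses.  The detector's confidence combines the reciprocal of
        this residual with the mean of the complex component (not shown).
        """
        dictionary = np.asarray(dictionary, dtype=complex)
        residual = np.asarray(signal, dtype=complex).copy()
        for _ in range(n_pursuits):
            scores = dictionary.conj() @ residual        # match against each atom
            best = int(np.argmax(np.abs(scores)))
            residual = residual - scores[best] * dictionary[best]  # subtract projection
        return float(np.linalg.norm(residual))
    ```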

  10. Probabilistic Model for Untargeted Peak Detection in LC-MS Using Bayesian Statistics.

    PubMed

    Woldegebriel, Michael; Vivó-Truyols, Gabriel

    2015-07-21

    We introduce a novel Bayesian probabilistic peak detection algorithm for liquid chromatography-mass spectrometry (LC-MS). The final probabilistic result allows the user to make a final decision about which points in a chromatogram are affected by a chromatographic peak and which ones are affected only by noise. The use of probabilities contrasts with the traditional approach, in which a binary answer is given based on a threshold. By contrast, with the Bayesian peak detection presented here, the probability values can be further propagated into other preprocessing steps, which will increase (or decrease) the importance of chromatographic regions in the final results. The present work is based on the use of the statistical theory of component overlap from Davis and Giddings (Davis, J. M.; Giddings, J. Anal. Chem. 1983, 55, 418-424) as the prior probability in the Bayesian formulation. The algorithm was tested on LC-MS Orbitrap data and was able to successfully distinguish chemical noise from actual peaks without any data preprocessing.

  11. Improved segmentation of occluded and adjoining vehicles in traffic surveillance videos

    NASA Astrophysics Data System (ADS)

    Juneja, Medha; Grover, Priyanka

    2013-12-01

    Occlusion in image processing refers to the concealment of any part of an object, or the whole object, from the view of an observer. Real-time videos captured by static cameras on roads often contain overlapping vehicles and, hence, occlusion. Occlusion in traffic surveillance videos usually occurs when an object being tracked is hidden by another object, which makes it difficult for object detection algorithms to distinguish all the vehicles efficiently. Morphological operations also tend to join vehicles in close proximity, resulting in the formation of a single bounding box around more than one vehicle. Such problems lead to errors in further video processing, such as counting the vehicles in a video. The proposed system brings forward an efficient moving-object detection and tracking approach to reduce such errors. The paper uses a successive frame subtraction technique for the detection of moving objects. Further, this paper implements the watershed algorithm to segment the overlapped and adjoining vehicles. The segmentation results have been improved by the use of noise-removal and morphological operations.
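
    One common way to realize the segmentation step described above is a marker-controlled watershed seeded from peaks of the distance transform. This scikit-image sketch assumes a binary foreground mask produced by successive frame subtraction and morphology; the minimum peak distance is an illustrative tuning parameter, not the paper's.

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def split_touching_vehicles(mask, min_distance=15):
        """Separate merged vehicle blobs with a marker-controlled watershed.

        Peaks of the distance transform seed one marker per vehicle, so
        adjoining blobs are cut along their shared valley.
        """
        mask = mask.astype(bool)
        distance = ndi.distance_transform_edt(mask)
        peaks = peak_local_max(distance, min_distance=min_distance, labels=mask)
        markers = np.zeros(mask.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        return watershed(-distance, markers, mask=mask)   # one label per vehicle
    ```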

  12. Tracking Virus Particles in Fluorescence Microscopy Images Using Multi-Scale Detection and Multi-Frame Association.

    PubMed

    Jaiswal, Astha; Godinez, William J; Eils, Roland; Lehmann, Maik Jorg; Rohr, Karl

    2015-11-01

    Automatic fluorescent particle tracking is an essential task to study the dynamics of a large number of biological structures at a sub-cellular level. We have developed a probabilistic particle tracking approach based on multi-scale detection and two-step multi-frame association. The multi-scale detection scheme allows coping with particles in close proximity. For finding associations, we have developed a two-step multi-frame algorithm, which is based on a temporally semiglobal formulation as well as spatially local and global optimization. In the first step, reliable associations are determined for each particle individually in local neighborhoods. In the second step, the global spatial information over multiple frames is exploited jointly to determine optimal associations. The multi-scale detection scheme and the multi-frame association finding algorithm have been combined with a probabilistic tracking approach based on the Kalman filter. We have successfully applied our probabilistic tracking approach to synthetic as well as real microscopy image sequences of virus particles and quantified the performance. We found that the proposed approach outperforms previous approaches.

  13. L-split marker for augmented reality in aircraft assembly

    NASA Astrophysics Data System (ADS)

    Han, Pengfei; Zhao, Gang

    2016-04-01

    In order to improve the performance of the conventional square markers widely used by marker-based augmented reality systems in aircraft assembly environments, an L-split marker is proposed. Every marker consists of four separate L-shaped parts, each of which contains partial information about the marker. Geometric features of the L-shape, which are more discriminative than the symmetrical square shape adopted by conventional markers, are used to detect the proposed markers effectively in camera images. The marker is split into four separate parts to improve robustness to occlusion and curvature to some extent. The registration process can be successfully completed as long as three parts are detected (up to about 80% of the area may be occluded). Moreover, when the marker is attached to nonplanar surfaces, its curvature status can be roughly analyzed from every part's normal direction, which can be obtained since the six corners of each part have been explicitly determined in the preceding detection step. Based on the marker design, new detection and recognition algorithms are proposed and detailed. The experimental results show that the marker and the algorithms are effective.

  14. Wavelet subspace decomposition of thermal infrared images for defect detection in artworks

    NASA Astrophysics Data System (ADS)

    Ahmad, M. Z.; Khan, A. A.; Mezghani, S.; Perrin, E.; Mouhoubi, K.; Bodnar, J. L.; Vrabie, V.

    2016-07-01

    The health of ancient artworks must be routinely monitored for their adequate preservation. Faults in these artworks may develop over time and must be identified as precisely as possible. The classical acoustic testing techniques, being invasive, risk causing permanent damage during periodic inspections. Infrared thermography offers a promising solution for mapping faults in artworks. It involves heating the artwork and recording its thermal response with an infrared camera. A novel strategy based on the pseudo-random binary excitation principle is used in this work to suppress the risks associated with prolonged heating. The objective of this work is to develop an automatic scheme for detecting faults in the captured images. An efficient scheme based on wavelet subspace decomposition is developed, which favors the identification of the otherwise invisible, weaker faults. Two major problems addressed in this work are the selection of the optimal wavelet basis and the subspace level selection. A novel criterion based on regional mutual information is proposed for the latter. A new contrast-enhancement metric is developed to demonstrate the quantitative efficiency of the algorithm, which is successfully tested on both a laboratory sample and real artworks.

  15. NEXUS: tracing the cosmic web connection

    NASA Astrophysics Data System (ADS)

    Cautun, Marius; van de Weygaert, Rien; Jones, Bernard J. T.

    2013-02-01

    We introduce the NEXUS algorithm for the identification of cosmic web environments: clusters, filaments, walls and voids. This is a multiscale and automatic morphological analysis tool that identifies all the cosmic structures in a scale-free way, without preference for a certain size or shape. We develop the NEXUS method to incorporate the density, tidal field, velocity divergence and velocity shear as tracers of the cosmic web. We also present the NEXUS+ procedure which, taking advantage of a novel filtering of the density in logarithmic space, is very successful at identifying the filament and wall environments in a robust and natural way. To assess the algorithms we apply them to an N-body simulation. We find that all methods correctly identify the most prominent filaments and walls, while there are differences in the detection of the more tenuous structures. In general, the structures traced by the density and tidal fields are clumpier and more rugged than those present in the velocity divergence and velocity shear fields. We find that the NEXUS+ method captures the filamentary and wall networks much better and succeeds in detecting even the fainter structures. We also confirm the efficiency of our methods by examining the dark matter particle and halo distributions.

  16. A post-processing algorithm for time domain pitch trackers

    NASA Astrophysics Data System (ADS)

    Specker, P.

    1983-01-01

    This paper describes a powerful post-processing algorithm for time-domain pitch trackers. On two successive passes, the post-processing algorithm eliminates errors produced during a first pass by a time-domain pitch tracker. During the second pass, incorrect pitch values are detected as outliers by computing the distribution of values over a sliding 80 msec window. During the third pass (based on artificial intelligence techniques), remaining pitch pulses are used as anchor points to reconstruct the pitch train from the original waveform. The algorithm produced a decrease in the error rate from 21% obtained with the original time domain pitch tracker to 2% for isolated words and sentences produced in an office environment by 3 male and 3 female talkers. In a noisy computer room errors decreased from 52% to 2.9% for the same stimuli produced by 2 male talkers. The algorithm is efficient, accurate, and resistant to noise. The fundamental frequency micro-structure is tracked sufficiently well to be used in extracting phonetic features in a feature-based recognition system.
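
    The second-pass outlier test lends itself to a short sketch: flag pitch values that deviate from the robust statistics of a sliding window (roughly 80 ms of frames). The window length, deviation threshold, and use of the median absolute deviation are our assumptions; the paper's exact distribution test is not specified here.

    ```python
    import numpy as np

    def reject_pitch_outliers(f0, win=9, n_dev=2.0):
        """Zero out pitch values that are outliers of their local window.

        A third pass would rebuild the zeroed frames from the waveform,
        using the surviving pitch pulses as anchor points.
        """
        f0 = np.asarray(f0, dtype=float)
        cleaned = f0.copy()
        half = win // 2
        for i in range(len(f0)):
            seg = f0[max(0, i - half): i + half + 1]
            seg = seg[seg > 0]                        # ignore unvoiced frames
            if seg.size < 3:
                continue
            med = np.median(seg)
            mad = np.median(np.abs(seg - med)) + 1e-9
            if f0[i] > 0 and abs(f0[i] - med) > n_dev * 1.4826 * mad:
                cleaned[i] = 0.0                      # flag as a tracker error
        return cleaned
    ```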

  17. Ground settlement monitoring from temporarily persistent scatterers between two SAR acquisitions

    USGS Publications Warehouse

    Lei, Z.; Xiaoli, D.; Guangcai, F.; Zhong, L.

    2009-01-01

    We present an improved differential interferometric synthetic aperture radar (DInSAR) analysis method that measures motions of scatterers whose phases are stable between two SAR acquisitions. Such scatterers are referred to as temporarily persistent scatterers (TPS) for simplicity. Unlike the persistent scatterer InSAR (PS-InSAR) method, which relies on a time series of interferograms, the new algorithm needs only one interferogram. TPS are identified based on pixel offsets between two SAR images and are specially coregistered based on their estimated offsets instead of a global polynomial for the whole image. Phase unwrapping is carried out using an algorithm for sparse data points. The method is successfully applied to measure settlement in the Hong Kong Airport area. Buildings surrounded by vegetation were successfully selected as TPS, and the tiny deformation signal over the area was detected. © 2009 IEEE.

  18. Cryo-balloon catheter localization in fluoroscopic images

    NASA Astrophysics Data System (ADS)

    Kurzendorfer, Tanja; Brost, Alexander; Jakob, Carolin; Mewes, Philip W.; Bourier, Felix; Koch, Martin; Kurzidim, Klaus; Hornegger, Joachim; Strobel, Norbert

    2013-03-01

    Minimally invasive catheter ablation has become the preferred treatment option for atrial fibrillation. Although the standard ablation procedure involves ablation points set by radio-frequency catheters, cryo-balloon catheters have been reported to be even more advantageous in certain cases. As electro-anatomical mapping systems do not support cryo-balloon ablation procedures, X-ray guidance is needed. However, current methods to provide support for cryo-balloon catheters in fluoroscopically guided ablation procedures rely heavily on manual user interaction. To improve this, we propose a first method for automatic cryo-balloon catheter localization in fluoroscopic images based on a blob detection algorithm. Our method is evaluated on 24 clinical images from 17 patients. The method successfully detected the cryo-balloon in 22 out of 24 images, yielding a success rate of 91.6%. The successful localizations achieved an accuracy of 1.00 mm +/- 0.44 mm. Even though our method currently fails in 8.4% of the available images, it still offers a significant improvement over manual methods. Furthermore, detecting a landmark point along the cryo-balloon catheter can be a very important step for additional post-processing operations.

  19. Wave detection in acceleration plethysmogram.

    PubMed

    Ahn, Jae Mok

    2015-04-01

    The acceleration plethysmogram (APG), obtained from the second derivative of the photoplethysmogram (PPG), is used to predict risk factors for atherosclerosis with age. This technique is promising for early screening of atherosclerotic pathologies. However, extraction of the wave indices of APG signals measured at the fingertip is challenging. In this paper, the development of a microcontroller-based wave detection algorithm, including a preamplifier, that can detect the a, b, c, and d wave indices is proposed. The 4th-order derivative of the PPG under real measurements of an APG waveform was introduced to clearly separate the components of the waveform and to improve the rate of successful wave detection. A preamplifier with a Sallen-Key low-pass filter and a wave detection algorithm with programmable gain control, mathematical differentials, and a digital IIR notch filter were designed. The frequency response of the digital IIR filter was evaluated, and a pulse train consisting of a specific area in which the wave indices existed was generated. The programmable gain control maintained a constant APG amplitude at the output for varying PPG amplitudes. For 164 subjects, the mean value and standard deviation of the a wave index, corresponding to the magnitude of the APG signal, were 1,106.45 and ±47.75, respectively. We conclude that the proposed algorithm and preamplifier, designed to extract the wave indices of an APG in real time, are useful for evaluating vascular aging in the cardiovascular system in a simple healthcare device.
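
    In outline, deriving the APG and locating its a waves can be sketched as below; the digital low-pass stands in for the analog Sallen-Key front end, and the cutoff frequency and inter-beat spacing are illustrative values, not the paper's.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    def apg_a_waves(ppg, fs):
        """Derive the APG (second derivative of the PPG) and find its a waves."""
        b, a = butter(4, 15.0 / (fs / 2.0), btype="low")   # tame derivative noise
        smooth = filtfilt(b, a, np.asarray(ppg, dtype=float))
        apg = np.gradient(np.gradient(smooth)) * fs * fs   # d^2/dt^2 in units/s^2
        a_idx, _ = find_peaks(apg, distance=int(0.4 * fs)) # roughly one per beat
        return apg, a_idx
    ```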

  20. A new statistical PCA-ICA algorithm for location of R-peaks in ECG.

    PubMed

    Chawla, M P S; Verma, H K; Kumar, Vinod

    2008-09-16

    The success of ICA in separating independent components from a mixture depends on the properties of the electrocardiogram (ECG) recordings. This paper discusses some of the conditions of independent component analysis (ICA) that could affect the reliability of the separation and evaluates issues related to the properties of the signals and the number of sources. Principal component analysis (PCA) scatter plots are plotted to indicate the diagnostic features, in the presence and absence of baseline wander, in interpreting the ECG signals. In this analysis, a newly developed statistical algorithm by the authors, based on combined PCA-ICA for two correlated channels of 12-channel ECG data, is proposed. The ICA technique has been successfully implemented in identifying and removing noise and artifacts from ECG signals. Cleaned ECG signals are obtained using statistical measures such as kurtosis and variance of variance after ICA processing. This paper also deals with the detection of QRS complexes in electrocardiograms using the combined PCA-ICA algorithm. The efficacy of the combined PCA-ICA algorithm lies in the fact that the location of the R-peaks is bounded from above and below by the location of the cross-over points; hence, none of the peaks are ignored or missed.

  1. Infrared small target detection with kernel Fukunaga Koontz transform

    NASA Astrophysics Data System (ADS)

    Liu, Rui-ming; Liu, Er-qi; Yang, Jie; Zhang, Tian-hao; Wang, Fang-lin

    2007-09-01

    The Fukunaga-Koontz transform (FKT) was proposed many years ago. It can be used to solve two-pattern classification problems successfully. However, few researchers have explicitly extended FKT to a kernel FKT (KFKT). In this paper, we first complete this task. Then a method based on KFKT is developed to detect infrared small targets. KFKT is a supervised learning algorithm, so how the training sets are constructed is very important. For automatic target detection, synthetic target images and real background images are used to train KFKT. Because KFKT can represent the higher-order statistical properties of images, we expect better detection performance from KFKT than from FKT. Well-devised experiments verify that KFKT outperforms FKT in detecting infrared small targets.

  2. A stereo-vision hazard-detection algorithm to increase planetary lander autonomy

    NASA Astrophysics Data System (ADS)

    Woicke, Svenja; Mooij, Erwin

    2016-05-01

    For future landings on any celestial body, increasing lander autonomy and decreasing risk are primary objectives. Both can be achieved by including hazard detection and avoidance in the guidance, navigation, and control loop. One of the main challenges in hazard detection and avoidance is the reconstruction of accurate elevation models, as well as slope and roughness maps. Multiple methods for acquiring the inputs for hazard maps are available. The main distinction can be made between active and passive methods. Passive methods (cameras) have budgetary advantages over active sensors (radar, light detection and ranging), but it must be proven that they deliver sufficiently good maps. Therefore, this paper discusses hazard detection using stereo vision. To facilitate a successful landing, no more than 1% missed detections (hazards that are not identified) are allowed. Based on a sensitivity analysis, it was found that a stereo set-up with a baseline of ≤ 2 m is feasible at altitudes of ≤ 200 m while keeping false positives below 1%. It was thus shown that stereo-based hazard detection is an effective means to decrease landing risk and increase lander autonomy. In conclusion, the proposed algorithm is a promising candidate for future landers.

  3. Artificial intelligence and signal processing for infrastructure assessment

    NASA Astrophysics Data System (ADS)

    Assaleh, Khaled; Shanableh, Tamer; Yehia, Sherif

    2015-04-01

    The Ground Penetrating Radar (GPR) is being recognized as an effective nondestructive evaluation technique for improving the inspection process. However, data interpretation and the complexity of the results impose some limitations on the practicality of this technique, mainly because a trained, experienced person is needed to interpret the images obtained by the GPR system. In this paper, an algorithm to classify and assess the condition of infrastructure utilizing image processing and pattern recognition techniques is discussed. Features extracted from a dataset of images of defective and healthy slabs are used to train a computer-vision-based system, while another dataset is used to evaluate the proposed algorithm. Initial results show that the proposed algorithm is able to detect the existence of defects with about a 77% success rate.

  4. Astrophysical data mining with GPU. A case study: Genetic classification of globular clusters

    NASA Astrophysics Data System (ADS)

    Cavuoti, S.; Garofalo, M.; Brescia, M.; Paolillo, M.; Pescapè, A.; Longo, G.; Ventre, G.

    2014-01-01

    We present a multi-purpose genetic algorithm, designed and implemented with GPGPU/CUDA parallel computing technology. The model was derived from our CPU serial implementation, named GAME (Genetic Algorithm Model Experiment). It was successfully tested and validated on the detection of candidate Globular Clusters in deep, wide-field, single-band HST images. The GPU version of GAME will be made available to the community by integrating it into the web application DAMEWARE (DAta Mining Web Application REsource, http://dame.dsf.unina.it/beta_info.html), a public data mining service specialized in massive astrophysical data. Since genetic algorithms are inherently parallel, the GPGPU computing paradigm leads to a speedup of a factor of 200× in the training phase with respect to the CPU-based version.

  5. Unmanned Aerial Vehicles for Alien Plant Species Detection and Monitoring

    NASA Astrophysics Data System (ADS)

    Dvořák, P.; Müllerová, J.; Bartaloš, T.; Brůna, J.

    2015-08-01

    Invasive species spread rapidly and their eradication is difficult. New methods enabling fast and efficient monitoring are urgently needed for their successful control. Remote sensing can improve early detection of invading plants and make their management more efficient and less expensive. In an ongoing project in the Czech Republic, we aim at developing innovative methods of mapping invasive plant species (semi-automatic detection algorithms) using purposely designed unmanned aircraft (UAV). We examine possibilities for the detection of two tree and two herb invasive species. Our aim is to establish a fast, repeatable and efficient computer-assisted method of timely monitoring, reducing the costs of extensive field campaigns. To find the best detection algorithm we test various classification approaches (object-based, pixel-based and hybrid). Thanks to its flexibility and low cost, the UAV enables assessing the effect of phenological stage and spatial resolution, and it is most suitable for monitoring the efficiency of eradication efforts. However, several challenges exist in UAV application, such as geometric and radiometric distortions, the high volume of data to be processed and legal constraints on UAV flight missions over urban areas (which are often highly invaded). The newly proposed UAV approach shall serve invasive species researchers, management practitioners and policy makers.

  6. Improving HVAC operational efficiency in small-and medium-size commercial buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Woohyun; Katipamula, Srinivas; Lutes, Robert

    Small- and medium-size (<100,000 sf) commercial buildings (SMBs) represent over 95% of the U.S. commercial building stock and consume over 60% of total site energy consumption. Many of these buildings use rudimentary controls that are mostly manual, with limited scheduling capability, no monitoring, or failure management. Therefore, many of these buildings are operated inefficiently and consume excess energy. SMBs typically use packaged rooftop units (RTUs) that are controlled by an individual thermostat. There is increased urgency to improve the operating efficiency of existing commercial building stock in the United States for many reasons, chief among them being to mitigate the climate change impacts. Studies have shown that managing set points and schedules of the RTUs will result in up to 20% energy and cost savings. Another problem associated with RTUs is short cycling, when an RTU goes through ON and OFF cycles too frequently. Excessive cycling can lead to excessive wear and to premature failure of the compressor or its components. Also, short cycling can result in a significantly decreased average efficiency (up to 10%), even if there are no physical failures in the equipment. Ensuring correct use of the zone set points and eliminating frequent cycling of RTUs, thereby leading to persistent building operations, can significantly increase the operational efficiency of SMBs. A growing trend is to use low-cost control infrastructure that can enable scalable and cost-effective intelligent building operations. The work reported in this paper describes two algorithms, for detecting the zone set point temperature and the RTU cycling rate, that can be deployed on the low-cost infrastructure. These algorithms require only the zone temperature data for detection. The algorithms have been tested and validated using field data from a number of RTUs from six buildings in different climate locations. Overall, the algorithms were successful in detecting the set points and ON/OFF cycles accurately using the peak detection technique. The paper describes the two algorithms, results from testing the algorithms using field data, and how the algorithms can be used to improve SMB efficiency, and presents related conclusions.
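
    A minimal version of the set-point detection idea, using only zone-temperature data and peak detection as the report describes, could look like this; the prominence threshold and the median-of-peaks estimator are our assumptions.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def estimate_cooling_setpoint(zone_temp, min_cycles=5, prominence=0.2):
        """Infer a cooling set point and cycle count from zone temperature.

        Under thermostat control the zone temperature saw-tooths around the
        set point, so the local maxima mark compressor starts: their median
        approximates the upper dead band, and the peak count per unit time
        gives the RTU cycling rate.
        """
        zone_temp = np.asarray(zone_temp, dtype=float)
        peaks, _ = find_peaks(zone_temp, prominence=prominence)
        if len(peaks) < min_cycles:
            return None, 0                       # not enough cycling to estimate
        return float(np.median(zone_temp[peaks])), len(peaks)
    ```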

  7. A segmentation/clustering model for the analysis of array CGH data.

    PubMed

    Picard, F; Robin, S; Lebarbier, E; Daudin, J-J

    2007-09-01

    Microarray-CGH (comparative genomic hybridization) experiments are used to detect and map chromosomal imbalances. A CGH profile can be viewed as a succession of segments that represent homogeneous regions in the genome whose representative sequences share the same relative copy number on average. Segmentation methods constitute a natural framework for the analysis, but they do not provide a biological status for the detected segments. We propose a new model for this segmentation/clustering problem, combining a segmentation model with a mixture model. We present a new hybrid algorithm called dynamic programming-expectation maximization (DP-EM) to estimate the parameters of the model by maximum likelihood. This algorithm combines DP and the EM algorithm. We also propose a model selection heuristic to select the number of clusters and the number of segments. An example of our procedure is presented, based on publicly available data sets. We compare our method to segmentation methods and to hidden Markov models, and we show that the new segmentation/clustering model is a promising alternative that can be applied in the more general context of signal processing.

  8. Detection of nicotine content impact in tobacco manufacturing using computational intelligence.

    PubMed

    Begic Fazlic, Lejla; Avdagic, Zikrija

    2011-01-01

    A study is presented for the detection of the impact of nicotine in different cigarette types, using recorded data and computational intelligence techniques. Recorded puffs are processed using the continuous wavelet transform and used to extract time-frequency features for normal and abnormal puff conditions. The wavelet energy distributions are used as inputs to classifiers based on Adaptive Neuro-Fuzzy Inference Systems (ANFIS) and Genetic Algorithms (GAs). The number and parameters of the membership functions used in ANFIS, along with the features from the wavelet energy distribution, are selected using GAs, maximizing diagnostic success. The GA with ANFIS (GANFIS) is trained with a subset of data with known nicotine conditions. The trained GANFIS is tested using the other set of data (testing data). A classical method based on high-performance liquid chromatography is also introduced to solve this problem. The results and the performances of these two approaches are compared, and a combination of the two algorithms is suggested to improve the efficiency of the solution procedure. Computational results show that this combined algorithm is promising.

  9. Real-time multi-channel stimulus artifact suppression by local curve fitting.

    PubMed

    Wagenaar, Daniel A; Potter, Steve M

    2002-10-30

    We describe an algorithm for suppression of stimulation artifacts in extracellular micro-electrode array (MEA) recordings. A model of the artifact based on locally fitted cubic polynomials is subtracted from the recording, yielding a flat baseline amenable to spike detection by voltage thresholding. The algorithm, SALPA, reduces the period after stimulation during which action potentials cannot be detected by an order of magnitude, to less than 2 ms. Our implementation is fast enough to process 60-channel data sampled at 25 kHz in real-time on an inexpensive desktop PC. It performs well on a wide range of artifact shapes without re-tuning any parameters, because it accounts for amplifier saturation explicitly and uses a statistic to verify successful artifact suppression immediately after the amplifiers become operational. We demonstrate the algorithm's effectiveness on recordings from dense monolayer cultures of cortical neurons obtained from rat embryos. SALPA opens up a previously inaccessible window for studying transient neural oscillations and precisely timed dynamics in short-latency responses to electric stimulation. Copyright 2002 Elsevier Science B.V.
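
    The core of the artifact model, locally fitted cubics subtracted from the raw trace, can be sketched as follows. This omits SALPA's saturation handling and its statistical verification of the fit; the window length is illustrative, and the real implementation is far faster than this per-sample polyfit.

    ```python
    import numpy as np

    def subtract_local_cubic(trace, win=75):
        """Subtract a locally fitted cubic artifact model (SALPA-like sketch).

        Fits a cubic to each sliding window and subtracts the fitted value
        at the window centre, flattening the slow stimulus artifact while
        leaving fast spikes for threshold detection.
        """
        trace = np.asarray(trace, dtype=float)
        half = win // 2
        x = np.arange(-half, half + 1)
        flat = trace.copy()
        for i in range(half, len(trace) - half):
            coeffs = np.polyfit(x, trace[i - half: i + half + 1], deg=3)
            flat[i] = trace[i] - np.polyval(coeffs, 0.0)   # value at window centre
        return flat
    ```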

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Woohyun; Katipamula, Srinivas; Lutes, Robert G.

    Small- and medium-sized (<100,000 sf) commercial buildings (SMBs) represent over 95% of the U.S. commercial building stock and consume over 60% of total site energy consumption. Many of these buildings use rudimentary controls that are mostly manual, with limited scheduling capability and no monitoring or failure management. Therefore, many of these buildings are operated inefficiently and consume excess energy. SMBs typically utilize packaged rooftop units (RTUs) that are controlled by an individual thermostat. There is increased urgency to improve the operating efficiency of the existing commercial building stock in the U.S. for many reasons, chief among them to mitigate the climate change impacts. Studies have shown that managing set points and schedules of the RTUs will result in up to 20% energy and cost savings. Another problem associated with RTUs is short-cycling, where an RTU goes through ON and OFF cycles too frequently. Excessive cycling can lead to excessive wear and to premature failure of the compressor or its components. Short cycling can also result in a significantly decreased average efficiency (up to 10%), even if there are no physical failures in the equipment. Also, SMBs use time-of-day scheduling to start the RTUs before the building is occupied and shut them off when it is unoccupied. Ensuring correct use of the zone set points and eliminating frequent cycling of RTUs, thereby leading to persistent building operations, can significantly increase the operational efficiency of SMBs. A growing trend is to use low-cost control infrastructure that can enable scalable and cost-effective intelligent building operations. The work reported in this report describes three algorithms, for detecting the zone set point temperature, the RTU cycling rate and the occupancy schedule, that can be deployed on the low-cost infrastructure. These algorithms require only the zone temperature data for detection. The algorithms have been tested and validated using field data from a number of RTUs from six buildings in different climate locations. Overall, the algorithms were successful in detecting the set points and ON/OFF cycles accurately using the peak detection technique, and the occupancy schedule using the symbolic aggregate approximation technique. The report describes the three algorithms, results from testing the algorithms using field data, and how the algorithms can be used to improve SMB efficiency, and presents related conclusions.

  11. Optimizing coherent anti-Stokes Raman scattering by genetic algorithm controlled pulse shaping

    NASA Astrophysics Data System (ADS)

    Yang, Wenlong; Sokolov, Alexei

    2010-10-01

    Hybrid coherent anti-Stokes Raman scattering (CARS) has been successfully applied to fast, chemically sensitive detection. With the development of femtosecond pulse-shaping techniques, it is of great interest to find the optimum pulse shapes for CARS. The optimum pulse shapes should minimize the non-resonant four-wave-mixing (NRFWM) background and maximize the CARS signal. A genetic algorithm (GA) is developed to perform a heuristic search for optimized pulse shapes, which give the best signal-to-background ratio. The GA is shown to be able to rediscover the hybrid CARS scheme and to find optimized pulse shapes for customized applications by itself.

  12. Raman spectroscopy and imaging to detect contaminants for food safety applications

    NASA Astrophysics Data System (ADS)

    Chao, Kuanglin; Qin, Jianwei; Kim, Moon S.; Peng, Yankun; Chan, Diane; Cheng, Yu-Che

    2013-05-01

    This study presents the use of Raman chemical imaging for the screening of dry milk powder for the presence of chemical contaminants and Raman spectroscopy for quantitative assessment of chemical contaminants in liquid milk. For image-based screening, melamine was mixed into dry milk at concentrations (w/w) between 0.2% and 10.0%, and images of the mixtures were analyzed by a spectral information divergence algorithm. Ammonium sulfate, dicyandiamide, and urea were each separately mixed into dry milk at concentrations (w/w) between 0.5% and 5.0%, and an algorithm based on self-modeling mixture analysis was applied to these sample images. The contaminants were successfully detected and the spatial distribution of the contaminants within the sample mixtures was visualized using these algorithms. Liquid milk mixtures were prepared with melamine at concentrations between 0.04% and 0.30%, with ammonium sulfate and with urea at concentrations between 0.1% and 10.0%, and with dicyandiamide at concentrations between 0.1% and 4.0%. Analysis of the Raman spectra from the liquid mixtures showed linear relationships between the Raman intensities and the chemical concentrations. Although further studies are necessary, Raman chemical imaging and spectroscopy show promise for use in detecting and evaluating contaminants in food ingredients.

  13. Human emotion detector based on genetic algorithm using lip features

    NASA Astrophysics Data System (ADS)

    Brown, Terrence; Fetanat, Gholamreza; Homaifar, Abdollah; Tsou, Brian; Mendoza-Schrock, Olga

    2010-04-01

    We predicted human emotion using a Genetic Algorithm (GA) based lip feature extractor on facial images to classify all seven universal emotions: fear, happiness, dislike, surprise, anger, sadness and neutrality. First, we isolated the mouth from the input images using methods such as Region of Interest (ROI) acquisition, grayscaling, histogram equalization, filtering, and edge detection. Next, the GA determined the optimal or near-optimal ellipse parameters that circumvent and separate the mouth into upper and lower lips. The two ellipses then went through fitness calculation, followed by training using a database of Japanese women's faces expressing all seven emotions. Finally, our proposed algorithm was tested using a published database consisting of emotions from several persons. The final results are presented in confusion matrices. Our results showed an accuracy that varies from 20% to 60% for each of the seven emotions. The errors were mainly due to inaccuracies in the classification and to the different expressions in the given emotion database. Detailed analysis of these errors pointed to the limitation of detecting emotion based on lip features alone. While similar work [1] in the literature detected emotion in only one person, we have successfully extended our GA-based solution to include several subjects.

  14. Detection theory applied to high intensity focused ultrasound (HIFU) treatment evaluation

    NASA Astrophysics Data System (ADS)

    Sanghvi, Narendra; Wunderlich, Adam; Seip, Ralf; Tavakkoli, Jahangir; Dines, Kris; Baily, Michael; Crum, Lawrence

    2003-04-01

    The aim of this work is to develop a HIFU treatment evaluation algorithm based on 1-D pulse/echo (P/E) ultrasound data taken during HIFU exposures. The algorithm is applicable to large treatment volumes resulting from several overlapping elementary exposures. Treatments consisted of multiple HIFU exposures with an on-time of 3 seconds each, spaced 3 mm apart, and an off-time of 6 seconds in between HIFU exposures. The HIFU was paused for approximately 70 milliseconds every 0.5 seconds, while P/E data was acquired along the beam axis, using a confocal imaging transducer. Data was collected from multiple in vitro and in vivo tissue treatments, including shams. The cumulative energy change in the P/E data was found for every HIFU exposure, as a function of depth. Subsequently, a likelihood ratio test with a fixed false alarm rate was used to derive a positive or negative lesion creation decision for that position. For false alarm rates less than 5%, positive treatment outcomes were consistently detected for better than 90% of the HIFU exposures. In addition, the algorithm outcome correlated to the applied HIFU intensity level. Lesion formation was therefore successfully detected as a function of dosage. [Work supported by NIH SBIR Grant 2 R 44 CA 83244-02.]

  15. Hail detection algorithm for the Global Precipitation Measuring mission core satellite sensors

    NASA Astrophysics Data System (ADS)

    Mroz, Kamil; Battaglia, Alessandro; Lang, Timothy J.; Tanelli, Simone; Cecil, Daniel J.; Tridon, Frederic

    2017-04-01

    By exploiting an abundant number of extreme storms observed simultaneously by the Global Precipitation Measurement (GPM) mission core satellite's suite of sensors and by the ground-based S-band Next-Generation Radar (NEXRAD) network over the continental US, proxies for the identification of hail are developed based on the GPM core satellite observables. The full capabilities of the GPM observatory are tested by analyzing more than twenty observables and adopting a hydrometeor classification based on ground-based polarimetric measurements as truth. The proxies have been tested using the Critical Success Index (CSI) as a verification measure. The hail detection algorithm based on the mean Ku reflectivity in the mixed-phase layer performs best out of all the considered proxies (CSI of 45%). Outside the Dual-frequency Precipitation Radar (DPR) swath, the Polarization Corrected Temperature at 18.7 GHz shows the greatest potential for hail detection among all GMI channels (CSI of 26% at a threshold value of 261 K). When dual-variable proxies are considered, the combination of the mixed-phase reflectivity values at both Ku- and Ka-bands outperforms all the other proxies, with a CSI of 49%. The best-performing radar-radiometer algorithm is based on the mixed-phase reflectivity at Ku-band and on the brightness temperature (TB) at 10.7 GHz (CSI of 46%). When only radiometric data are available, the algorithm based on the TBs at 36.6 and 166 GHz is the most efficient, with a CSI of 27.5%.
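
    The verification measure is simple enough to state in code; the sketch below scores any boolean proxy against the NEXRAD-derived truth, with the PCT threshold shown as an example drawn from the abstract.

    ```python
    import numpy as np

    def critical_success_index(detected, truth):
        """CSI = hits / (hits + misses + false alarms) for boolean arrays."""
        detected = np.asarray(detected, dtype=bool)
        truth = np.asarray(truth, dtype=bool)
        hits = int(np.sum(detected & truth))
        misses = int(np.sum(~detected & truth))
        false_alarms = int(np.sum(detected & ~truth))
        denom = hits + misses + false_alarms
        return hits / denom if denom else 0.0

    # e.g. score the radiometer-only proxy at the quoted threshold:
    # csi = critical_success_index(pct_18ghz <= 261.0, hail_truth)
    ```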

  16. A Robust Zero-Watermarking Algorithm for Audio

    NASA Astrophysics Data System (ADS)

    Chen, Ning; Zhu, Jie

    2007-12-01

    In traditional watermarking algorithms, the insertion of a watermark into the host signal inevitably introduces some perceptible quality degradation. Another problem is the inherent conflict between imperceptibility and robustness. The zero-watermarking technique can solve these problems successfully: instead of embedding a watermark, it extracts some essential characteristics from the host signal and uses them for watermark detection. However, most of the available zero-watermarking schemes are designed for still images, and their robustness is not satisfactory. In this paper, an efficient and robust zero-watermarking technique for audio signals is presented. The multiresolution characteristic of the discrete wavelet transform (DWT), the energy compaction characteristic of the discrete cosine transform (DCT), and the Gaussian-noise suppression property of higher-order cumulants are combined to extract essential features from the host audio signal, which are then used for watermark recovery. Simulation results demonstrate the effectiveness of our scheme in terms of inaudibility, detection reliability, and robustness.

  17. Echogenicity based approach to detect, segment and track the common carotid artery in 2D ultrasound images.

    PubMed

    Narayan, Nikhil S; Marziliano, Pina

    2015-08-01

    Automatic detection and segmentation of the common carotid artery in transverse ultrasound (US) images of the thyroid gland play a vital role in the success of US-guided intervention procedures. We propose in this paper a novel method to accurately detect, segment and track the carotid in 2D and 2D+t US images of the thyroid gland using concepts based on tissue echogenicity and ultrasound image formation. We first segment the hypoechoic anatomical regions of interest using local phase and energy in the input image. We then make use of a Hessian-based blob-like analysis to detect the carotid within the segmented hypoechoic regions. The carotid artery is segmented by a least-squares ellipse fit to the edge points around the detected carotid candidate. Experiments performed on a multivendor dataset of 41 images show that the proposed algorithm can segment the carotid artery with high sensitivity (99.6 ± 0.2%) and specificity (92.9 ± 0.1%). Further experiments on a public database containing 971 images of the carotid artery showed that the proposed algorithm can achieve a detection accuracy of 95.2%, a 2% increase in performance over the state-of-the-art method.

  18. Noise-robust speech triage.

    PubMed

    Bartos, Anthony L; Cipr, Tomas; Nelson, Douglas J; Schwarz, Petr; Banowetz, John; Jerabek, Ladislav

    2018-04-01

    A method is presented in which conventional speech algorithms are applied, with no modifications, to improve their performance in extremely noisy environments. It has been demonstrated that, for eigen-channel algorithms, pre-training multiple speaker identification (SID) models at a lattice of signal-to-noise-ratio (SNR) levels and then performing SID using the appropriate SNR dependent model was successful in mitigating noise at all SNR levels. In those tests, it was found that SID performance was optimized when the SNR of the testing and training data were close or identical. In this current effort multiple i-vector algorithms were used, greatly improving both processing throughput and equal error rate classification accuracy. Using identical approaches in the same noisy environment, performance of SID, language identification, gender identification, and diarization were significantly improved. A critical factor in this improvement is speech activity detection (SAD) that performs reliably in extremely noisy environments, where the speech itself is barely audible. To optimize SAD operation at all SNR levels, two algorithms were employed. The first maximized detection probability at low levels (-10 dB ≤ SNR < +10 dB) using just the voiced speech envelope, and the second exploited features extracted from the original speech to improve overall accuracy at higher quality levels (SNR ≥ +10 dB).

  19. Adaptive and accelerated tracking-learning-detection

    NASA Astrophysics Data System (ADS)

    Guo, Pengyu; Li, Xin; Ding, Shaowen; Tian, Zunhua; Zhang, Xiaohu

    2013-08-01

    An improved online long-term visual tracking algorithm, named adaptive and accelerated TLD (AA-TLD) and based on the novel Tracking-Learning-Detection (TLD) framework, is introduced in this paper. The improvement focuses on two aspects. One is adaptation, which frees the algorithm from pre-defined scanning grids by generating the scale space online. The other is efficiency, which uses not only algorithm-level acceleration (scale prediction that employs an auto-regression and moving average (ARMA) model to learn the object motion and lessen the detector's search range, and a fixed number of positive and negative samples that ensures a constant retrieval time) but also CPU and GPU parallel technology to achieve hardware acceleration. In addition, to obtain a better effect, some of TLD's details are redesigned: results are integrated using a weight that includes both the normalized correlation coefficient and the scale size, and distance metric thresholds are adjusted online. A contrastive experiment on success rate, center location error and execution time is carried out to show the performance and efficiency upgrade over state-of-the-art TLD with partial TLD datasets and Shenzhou IX return capsule image sequences. The algorithm can be used in the field of video surveillance to meet the need for real-time video tracking.

  20. Brightness-preserving fuzzy contrast enhancement scheme for the detection and classification of diabetic retinopathy disease.

    PubMed

    Datta, Niladri Sekhar; Dutta, Himadri Sekhar; Majumder, Koushik

    2016-01-01

    The contrast enhancement of retinal images plays a vital role in the detection of microaneurysms (MAs), which are an early sign of diabetic retinopathy disease. A retinal image contrast enhancement method is presented here to improve the MA detection technique. The success rate on low-contrast, noisy retinal image analysis shows the importance of the proposed method. Overall, 587 retinal input images were tested for performance analysis. The average sensitivity and specificity were obtained as 95.94% and 99.21%, respectively. The area under the curve was found to be 0.932 in the receiver operating characteristic analysis. Classification of diabetic retinopathy disease is also performed here. The experimental results show that the overall MA detection method performs better than the current state-of-the-art MA detection algorithms.

  1. Detection of pneumonia using free-text radiology reports in the BioSense system.

    PubMed

    Asatryan, Armenak; Benoit, Stephen; Ma, Haobo; English, Roseanne; Elkin, Peter; Tokars, Jerome

    2011-01-01

    Near real-time disease detection using electronic data sources is a public health priority. Detecting pneumonia is particularly important because it is the manifesting disease of several bioterrorism agents as well as a complication of influenza, including avian and novel H1N1 strains. Text radiology reports are available earlier than physician diagnoses and so could be integral to the rapid detection of pneumonia. We performed a pilot study to determine which keywords present in text radiology reports are most highly associated with a pneumonia diagnosis. Electronic radiology text reports from 11 hospitals, from February 1, 2006 through December 31, 2007, were used. We created a computerized algorithm that searched for selected keywords ("airspace disease", "consolidation", "density", "infiltrate", "opacity", and "pneumonia"), differentiated between clinical history and radiographic findings, and accounted for negations and double negations; this algorithm was tested on a sample of 350 radiology reports. We used the algorithm to study 189,246 chest radiographs, searching for the keywords and determining their association with a final International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnosis of pneumonia. The main outcome measures were the performance of the search algorithm in finding keywords and the association of the keywords with a pneumonia diagnosis. In the sample of 350 radiographs, the search algorithm was highly successful in identifying the selected keywords (sensitivity 98.5%, specificity 100%). Analysis of the 189,246 radiographs showed that the keyword "pneumonia" was the strongest predictor of an ICD-9-CM diagnosis of pneumonia (adjusted odds ratio 11.8) while "density" was the weakest (adjusted odds ratio 1.5). In general, the most highly associated keyword present in the report, regardless of whether a less highly associated keyword was also present, was the best predictor of a diagnosis of pneumonia. Empirical methods may assist in finding radiology report keywords that are most highly predictive of a pneumonia diagnosis. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
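
    A stripped-down version of such a keyword search, with single-negation handling only, is sketched below; the negation cue list, clause splitting, and function name are our assumptions, and the BioSense algorithm's double-negation logic and history/findings separation are not reproduced.

    ```python
    import re

    KEYWORDS = ["airspace disease", "consolidation", "density",
                "infiltrate", "opacity", "pneumonia"]
    NEGATION = re.compile(r"\b(no|not|without|negative for|free of|absent)\b", re.I)

    def positive_keywords(findings_text):
        """Return keywords asserted (not negated) in a findings section."""
        found = set()
        # scope negation to the clause the keyword appears in
        for clause in re.split(r"[.;:]", findings_text.lower()):
            for kw in KEYWORDS:
                if kw in clause and not NEGATION.search(clause.split(kw)[0]):
                    found.add(kw)
        return found

    # e.g. positive_keywords("No focal infiltrate. Patchy opacity in the right base.")
    # -> {'opacity'}
    ```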

  2. Range resolution improvement in passive bistatic radars using nested FM channels and least squares approach

    NASA Astrophysics Data System (ADS)

    Arslan, Musa T.; Tofighi, Mohammad; Sevimli, Rasim A.; Çetin, Ahmet E.

    2015-05-01

    One of the main disadvantages of using commercial broadcasts in a Passive Bistatic Radar (PBR) system is the limited range resolution. Using multiple broadcast channels to improve radar performance has been offered as a solution to this problem. However, detection performance then suffers from the side-lobes that the matched filter creates when multiple channels are used. In this article, we introduce a deconvolution algorithm to suppress the side-lobes. The two-dimensional matched-filter output of a PBR is analyzed as a deconvolution problem. The deconvolution algorithm is based on making successive projections onto the hyperplanes representing the time delay of a target. The resulting iterative deconvolution algorithm is globally convergent because all constraint sets are closed and convex. Simulation results for an FM-based PBR system are presented.
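
    In one dimension, deconvolution by successive projections onto the hyperplanes of the measurement system y = Hx is a Kaczmarz-type iteration; the sketch below, on made-up data, shows the projection step (the paper's two-dimensional, radar-specific formulation is more elaborate).

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy range profile: two point targets, blurred by a broad ambiguity function.
        x_true = np.zeros(60); x_true[20] = 1.0; x_true[24] = 0.6
        h = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)        # smearing kernel
        H = np.array([[h[j - i + 5] if 0 <= j - i + 5 < 11 else 0.0
                       for j in range(60)] for i in range(60)])  # convolution matrix
        y = H @ x_true + 0.01 * rng.standard_normal(60)

        # Kaczmarz iteration: cyclically project the estimate onto each hyperplane
        # {x : H[i] @ x = y[i]}. Hyperplanes are closed convex sets, which is what
        # makes the successive-projection scheme globally convergent.
        x = np.zeros(60)
        for sweep in range(50):
            for i in range(60):
                r = H[i]
                x += (y[i] - r @ x) / (r @ r) * r

        print(np.round(x[18:27], 2))  # side-lobes suppressed around the two targets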

  3. GPU based cloud system for high-performance arrhythmia detection with parallel k-NN algorithm.

    PubMed

    Tae Joon Jun; Hyun Ji Park; Hyuk Yoo; Young-Hak Kim; Daeyoung Kim

    2016-08-01

    In this paper, we propose a GPU-based cloud system for high-performance arrhythmia detection. The Pan-Tompkins algorithm is used for QRS detection, and the beat classification algorithm is optimized with K-Nearest Neighbors (K-NN). To support high-performance beat classification on the system, we parallelized the classification algorithm with CUDA so that it executes on virtualized GPU devices in the cloud. The MIT-BIH Arrhythmia database is used for validation of the algorithm. The system achieved a detection rate of about 93.5%, which is comparable to previous studies, while executing 2.5 times faster than a CPU-only implementation.
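
    The beat-classification core is an ordinary k-NN over pre-extracted feature vectors, as in the NumPy sketch below; the paper offloads essentially this distance computation to virtualized GPUs with CUDA. The features, labels, and k here are illustrative.

        import numpy as np

        def knn_classify(train_x, train_y, test_x, k=3):
            """Classify each test beat by majority vote of its k nearest training beats."""
            # Pairwise squared Euclidean distances, shape (n_test, n_train); this
            # fully vectorized step is what the paper parallelizes with CUDA.
            d2 = ((test_x[:, None, :] - train_x[None, :, :]) ** 2).sum(axis=2)
            nearest = np.argsort(d2, axis=1)[:, :k]      # indices of k nearest beats
            votes = train_y[nearest]                     # their labels
            # Majority vote per test sample.
            return np.array([np.bincount(v).argmax() for v in votes])

        rng = np.random.default_rng(1)
        train_x = rng.normal(size=(200, 8))              # 8 illustrative beat features
        train_y = (train_x[:, 0] > 0).astype(int)        # toy labels: normal vs. ectopic
        test_x = rng.normal(size=(5, 8))
        print(knn_classify(train_x, train_y, test_x, k=5))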

  4. A Novel Arc Fault Detector for Early Detection of Electrical Fires

    PubMed Central

    Yang, Kai; Zhang, Rencheng; Yang, Jianhong; Liu, Canhua; Chen, Shouhong; Zhang, Fujiang

    2016-01-01

    Arc faults can produce very high temperatures and can easily ignite combustible materials; thus, they represent one of the most important causes of electrical fires. The application of arc fault detection, as an emerging early fire detection technology, is required by the National Electrical Code to reduce the occurrence of electrical fires. However, the concealment, randomness and diversity of arc faults make them difficult to detect. To improve the accuracy of arc fault detection, a novel arc fault detector (AFD) is developed in this study. First, an experimental arc fault platform is built to study electrical fires. A high-frequency transducer and a current transducer are used to measure typical load signals of arc faults and normal states. After the common features of these signals are studied, high-frequency energy and current variations are extracted as an input feature vector for use by an arc fault detection algorithm. Then, the detection algorithm, based on a weighted least squares support vector machine, is designed and successfully implemented on a microprocessor. Finally, an AFD is developed. The test results show that the AFD can detect arc faults in a timely manner and interrupt the circuit power supply before an electrical fire can occur. The AFD is not influenced by cross talk or transient processes, and its detection accuracy is very high. Hence, the AFD can be installed in low-voltage circuits to monitor circuit states in real time and facilitate the early detection of electrical fires. PMID:27070618
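
    A least squares SVM reduces training to a single linear solve, which is what makes it practical on a microprocessor; the sketch below shows an unweighted LS-SVM with an RBF kernel on toy two-feature data (high-frequency energy, current variation). The hyperparameters and data are assumptions, and the paper's weighted variant additionally reweights the ridge term per sample.

        import numpy as np

        def rbf_kernel(A, B, sigma=1.0):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
            return np.exp(-d2 / (2 * sigma ** 2))

        def lssvm_train(X, y, gamma=10.0, sigma=1.0):
            """LS-SVM: training is a single linear solve instead of a QP."""
            n = len(y)
            K = rbf_kernel(X, X, sigma)
            A = np.zeros((n + 1, n + 1))
            A[0, 1:] = 1.0
            A[1:, 0] = 1.0
            A[1:, 1:] = K + np.eye(n) / gamma   # ridge term from the squared loss
            rhs = np.concatenate(([0.0], y))
            sol = np.linalg.solve(A, rhs)
            return sol[0], sol[1:]              # bias b, dual weights alpha

        def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
            return np.sign(rbf_kernel(X_new, X_train, sigma) @ alpha + b)

        # Toy data: [high-frequency energy, current variation]; +1 arc, -1 normal.
        rng = np.random.default_rng(2)
        X = np.vstack([rng.normal(2.0, 0.5, (40, 2)), rng.normal(0.0, 0.5, (40, 2))])
        y = np.array([1.0] * 40 + [-1.0] * 40)
        b, alpha = lssvm_train(X, y)
        print(lssvm_predict(X, b, alpha, np.array([[1.8, 2.2], [0.1, -0.2]])))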

  5. Soft-Fault Detection Technologies Developed for Electrical Power Systems

    NASA Technical Reports Server (NTRS)

    Button, Robert M.

    2004-01-01

    The NASA Glenn Research Center, partner universities, and defense contractors are working to develop intelligent power management and distribution (PMAD) technologies for future spacecraft and launch vehicles. The goals are to provide higher performance (efficiency, transient response, and stability), higher fault tolerance, and higher reliability through the application of digital control and communication technologies. It is also expected that these technologies will eventually reduce the design, development, manufacturing, and integration costs for large, electrical power systems for space vehicles. The main focus of this research has been to incorporate digital control, communications, and intelligent algorithms into power electronic devices such as direct-current to direct-current (dc-dc) converters and protective switchgear. These technologies, in turn, will enable revolutionary changes in the way electrical power systems are designed, developed, configured, and integrated in aerospace vehicles and satellites. Initial successes in integrating modern, digital controllers have proven that transient response performance can be improved using advanced nonlinear control algorithms. One technology being developed includes the detection of "soft faults," those not typically covered by current systems in use today. Soft faults include arcing faults, corona discharge faults, and undetected leakage currents. Using digital control and advanced signal analysis algorithms, we have shown that it is possible to reliably detect arcing faults in high-voltage dc power distribution systems. Another research effort has shown that low-level leakage faults and cable degradation can be detected by analyzing power system parameters over time. This additional fault detection capability will result in higher reliability for long-lived power systems such as reusable launch vehicles and space exploration missions.

  6. Benchmarking Commercial Conformer Ensemble Generators.

    PubMed

    Friedrich, Nils-Ole; de Bruyn Kops, Christina; Flachsenberg, Florian; Sommer, Kai; Rarey, Matthias; Kirchmair, Johannes

    2017-11-27

    We assess and compare the performance of eight commercial conformer ensemble generators (ConfGen, ConfGenX, cxcalc, iCon, MOE LowModeMD, MOE Stochastic, MOE Conformation Import, and OMEGA) and one leading free algorithm, the distance geometry algorithm implemented in RDKit. The comparative study is based on a new version of the Platinum Diverse Dataset, a high-quality benchmarking dataset of 2859 protein-bound ligand conformations extracted from the PDB. Differences in the performance of commercial algorithms are much smaller than those observed for free algorithms in our previous study (J. Chem. Inf. Model. 2017, 57, 529-539). For commercial algorithms, the median minimum root-mean-square deviations measured between protein-bound ligand conformations and ensembles of a maximum of 250 conformers are between 0.46 and 0.61 Å. Commercial conformer ensemble generators are characterized by their high robustness, with at least 99% of all input molecules successfully processed and few or even no substantial geometrical errors detectable in their output conformations. The RDKit distance geometry algorithm (with minimization enabled) appears to be a good free alternative since its performance is comparable to that of the midranked commercial algorithms. Based on a statistical analysis, we elaborate on which algorithms to use and how to parametrize them for best performance in different application scenarios.
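
    As a usage illustration of the free alternative discussed above, the RDKit distance-geometry generator (with minimization enabled) can be driven as follows; the molecule, random seed, and RMSD bookkeeping are arbitrary choices, not the benchmark's exact protocol.

        from rdkit import Chem
        from rdkit.Chem import AllChem, rdMolAlign

        # Build a 3-D-ready molecule (aspirin as an arbitrary example) with hydrogens.
        mol = Chem.AddHs(Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O"))

        # Distance-geometry embedding (ETKDG), capped at 250 conformers as in the study.
        params = AllChem.ETKDGv3()
        params.randomSeed = 42
        cids = list(AllChem.EmbedMultipleConfs(mol, numConfs=250, params=params))

        # "Minimization enabled": force-field relaxation of every embedded conformer.
        AllChem.MMFFOptimizeMoleculeConfs(mol)

        # Ensemble diversity: best (symmetry-aware) RMSD of each conformer to the first.
        rms = [rdMolAlign.GetBestRMS(mol, mol, prbId=cid, refId=cids[0])
               for cid in cids[1:]]
        print(len(cids), "conformers; RMSD to first: %.2f-%.2f" % (min(rms), max(rms)))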

  7. Target Transformation Constrained Sparse Unmixing (ttcsu) Algorithm for Retrieving Hydrous Minerals on Mars: Application to Southwest Melas Chasma

    NASA Astrophysics Data System (ADS)

    Lin, H.; Zhang, X.; Wu, X.; Tarnas, J. D.; Mustard, J. F.

    2018-04-01

    Quantitative analysis of hydrated minerals from hyperspectral remote sensing data is fundamental for understanding Martian geologic processes. Because of the difficulty of selecting endmembers from hyperspectral images, sparse unmixing has been proposed for CRISM data on Mars. However, it becomes challenging when the endmember library grows dramatically. Here, we propose a new methodology termed Target Transformation Constrained Sparse Unmixing (TTCSU) to accurately detect hydrous minerals on Mars. A new version of the target transformation technique proposed in our recent work was used to obtain potential detections from CRISM data. Sparse unmixing constrained with these detections as prior information was applied to CRISM single-scattering albedo images, which were calculated using a Hapke radiative transfer model. This methodology increases the success rate of the automatic endmember selection of sparse unmixing and yields more accurate abundances. Well-analyzed CRISM images of southwest Melas Chasma were used to validate our methodology in this study. The sulfate jarosite was detected in southwest Melas Chasma; its distribution is consistent with previous work, and its abundance is comparable. More validation will be done in our future work.
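
    The central idea, sparse unmixing restricted to endmembers flagged by the target transformation, can be sketched per pixel as a nonnegative least-squares problem; the library and pixel below are synthetic, and the Hapke single-scattering-albedo conversion is omitted.

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(3)

        # Synthetic spectral library: 50 bands x 30 candidate endmembers.
        library = np.abs(rng.normal(size=(50, 30)))

        # Suppose the target-transformation step flagged these endmembers as present;
        # the indices are illustrative prior information, not a real CRISM detection.
        detected = [4, 11, 27]

        # Synthetic mixed pixel: 60% / 30% / 10% of the detected endmembers plus noise.
        pixel = library[:, detected] @ np.array([0.6, 0.3, 0.1])
        pixel += 0.01 * rng.standard_normal(50)

        # Constrained sparse unmixing: solve only over the prior-detected columns,
        # with abundances kept nonnegative by NNLS.
        abund, residual = nnls(library[:, detected], pixel)
        print(dict(zip(detected, np.round(abund, 3))), "residual=%.4f" % residual)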

  8. A real-time method for autonomous passive acoustic detection-classification of humpback whales.

    PubMed

    Abbot, Ted A; Premus, Vincent E; Abbot, Philip A

    2010-05-01

    This paper describes a method for real-time, autonomous, joint detection-classification of humpback whale vocalizations. The approach adapts the spectrogram correlation method used by Mellinger and Clark [J. Acoust. Soc. Am. 107, 3518-3529 (2000)] for bowhead whale endnote detection to the humpback whale problem. The objective is the implementation of a system to determine the presence or absence of humpback whales with passive acoustic methods and to perform this classification with low false alarm rate in real time. Multiple correlation kernels are used due to the diversity of humpback song. The approach also takes advantage of the fact that humpbacks tend to vocalize repeatedly for extended periods of time, and identification is declared only when multiple song units are detected within a fixed time interval. Humpback whale vocalizations from Alaska, Hawaii, and Stellwagen Bank were used to train the algorithm. It was then tested on independent data obtained off Kaena Point, Hawaii in February and March of 2009. Results show that the algorithm successfully classified humpback whales autonomously in real time, with a measured probability of correct classification in excess of 74% and a measured probability of false alarm below 1%.
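
    The detection logic can be sketched as spectrogram correlation plus a repetition requirement; the synthetic upsweep, the single hand-built kernel, and the thresholds below are placeholders for the trained humpback kernels and tuned detector settings.

        import numpy as np
        from scipy import signal

        fs = 4000
        t = np.arange(0, 30, 1 / fs)
        # Synthetic "song": a 1 s upswept unit repeated every 5 s, in noise.
        audio = 0.1 * np.random.default_rng(4).standard_normal(t.size)
        for start in (2, 7, 12, 17, 22):
            seg = (t >= start) & (t < start + 1)
            audio[seg] += signal.chirp(t[seg] - start, f0=300, t1=1.0, f1=500)

        f, tt, S = signal.spectrogram(audio, fs=fs, nperseg=256, noverlap=128)

        # Correlation kernel: an idealized upsweep in the spectrogram plane; the
        # real detector uses multiple trained kernels for diverse song units.
        kernel = np.zeros((10, 20))
        for j in range(20):
            kernel[int(j * 9 / 19), j] = 1.0
        score = signal.correlate2d(S, kernel, mode="same").max(axis=0)

        # Declare presence only when several units occur within a fixed interval,
        # exploiting the tendency of humpbacks to vocalize repeatedly.
        detections = tt[score > 5 * np.median(score)]   # placeholder threshold
        present = detections.size >= 3 and np.ptp(detections) > 10.0
        print("humpback-like song present:", bool(present))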

  9. Fault detection and isolation of high temperature proton exchange membrane fuel cell stack under the influence of degradation

    NASA Astrophysics Data System (ADS)

    Jeppesen, Christian; Araya, Samuel Simon; Sahlin, Simon Lennart; Thomas, Sobi; Andreasen, Søren Juhl; Kær, Søren Knudsen

    2017-08-01

    This study proposes a data-driven, impedance-based methodology for fault detection and isolation of low and high cathode stoichiometry, high CO concentration in the anode gas, high methanol vapour concentration in the anode gas, and low anode stoichiometry for high temperature PEM fuel cells. The fault detection and isolation algorithm is based on an artificial neural network classifier, which uses three extracted features as input. Two of the proposed features are based on angles in the impedance spectrum; they are therefore relative to specific points and are shown to be independent of degradation, contrary to other available feature extraction methods in the literature. The experimental data are based on a 35-day experiment in which 2010 unique electrochemical impedance spectroscopy measurements were recorded. Testing the algorithm showed good detectability of the faults, except for the high methanol vapour concentration fault, which was found to be difficult to distinguish from normal operational data. Faults related to CO pollution and to anode and cathode stoichiometry are detected with a 100% success rate. The overall accuracy on the test data is 94.6%.

  10. Diagnostic use of computational retrotransposon detection: Successful definition of pathogenetic mechanism in a ciliopathy phenotype.

    PubMed

    Takenouchi, Toshiki; Kuchikata, Tomu; Yoshihashi, Hiroshi; Fujiwara, Mineko; Uehara, Tomoko; Miyama, Sahoko; Yamada, Shiro; Kosaki, Kenjiro

    2017-05-01

    Among more than 5,000 human monogenic disorders with known causative genes, transposable element insertion of a Long Interspersed Nuclear Element 1 (LINE1, L1) is known as the mechanistic basis in only 13 genetic conditions. Meckel-Gruber syndrome is a rare ciliopathy characterized by occipital encephalocele and cystic kidney disease. Here, we document a boy with occipital encephalocele, post-axial polydactyly, and multicystic renal disease. A medical exome analysis detected a heterozygous frameshift mutation, c.4582_4583delCG p.(Arg1528Serfs*17) in CC2D2A in the maternally derived allele. The further use of a dedicated bioinformatics algorithm for detecting retrotransposon insertions led to the detection of an L1 insertion affecting exon 7 in the paternally derived allele. The complete sequencing and sequence homology analysis of the inserted L1 element showed that the L1 element was classified as L1HS (L1 human specific) and that the element had intact open reading frames in the two L1-encoded proteins. This observation ranks Meckel-Gruber syndrome as only the 14th disorder to be caused by an L1 insertion among more than 5,000 known human genetic disorders. Although a transposable element detection algorithm is not included in the current best-practice next-generation sequencing analysis, the present observation illustrates the utility of such an algorithm, which would require modest computational time and resources. Whether the seemingly infrequent recognition of L1 insertion in the pathogenesis of human genetic diseases might simply reflect a lack of appropriate detection methods remains to be seen. © 2017 Wiley Periodicals, Inc.

  11. Stochastic model search with binary outcomes for genome-wide association studies.

    PubMed

    Russu, Alberto; Malovini, Alberto; Puca, Annibale A; Bellazzi, Riccardo

    2012-06-01

    The spread of case-control genome-wide association studies (GWASs) has stimulated the development of new variable selection methods and predictive models. We introduce a novel Bayesian model search algorithm, Binary Outcome Stochastic Search (BOSS), which addresses the model selection problem when the number of predictors far exceeds the number of binary responses. Our method is based on a latent variable model that links the observed outcomes to the underlying genetic variables. A Markov Chain Monte Carlo approach is used for model search and to evaluate the posterior probability of each predictor. BOSS is compared with three established methods (stepwise regression, logistic lasso, and elastic net) in a simulated benchmark. Two real case studies are also investigated: a GWAS on the genetic bases of longevity, and the type 2 diabetes study from the Wellcome Trust Case Control Consortium. Simulations show that BOSS achieves higher precisions than the reference methods while preserving good recall rates. In both experimental studies, BOSS successfully detects genetic polymorphisms previously reported to be associated with the analyzed phenotypes. BOSS outperforms the other methods in terms of F-measure on simulated data. In the two real studies, BOSS successfully detects biologically relevant features, some of which are missed by univariate analysis and the three reference techniques. The proposed algorithm is an advance in the methodology for model selection with a large number of features. Our simulated and experimental results showed that BOSS proves effective in detecting relevant markers while providing a parsimonious model.

  12. Detection of gait characteristics for scene registration in video surveillance system.

    PubMed

    Havasi, László; Szlávik, Zoltán; Szirányi, Tamás

    2007-02-01

    This paper presents a robust walk-detection algorithm, based on our symmetry approach which can be used to extract gait characteristics from video-image sequences. To obtain a useful descriptor of a walking person, we temporally track the symmetries of a person's legs. Our method is suitable for use in indoor or outdoor surveillance scenes. Determining the leading leg of the walking subject is important, and the presented method can identify this from two successive walk steps (one walk cycle). We tested the accuracy of the presented walk-detection method in a possible application: Image registration methods are presented which are applicable to multicamera systems viewing human subjects in motion.

  13. Automated Detection of Optic Disc in Fundus Images

    NASA Astrophysics Data System (ADS)

    Burman, R.; Almazroa, A.; Raahemifar, K.; Lakshminarayanan, V.

    Optic disc (OD) localization is an important preprocessing step in the automated detection of glaucoma in fundus images. An interval type-II fuzzy entropy based thresholding scheme, combined with Differential Evolution (DE), is applied to determine the location of the OD in right- or left-eye retinal fundus images. The algorithm, when applied to 460 fundus images from the MESSIDOR dataset, shows a success rate of 99.07% for 217 normal images and 95.47% for 243 pathological images. The mean computational time is 1.709 s for normal images and 1.753 s for pathological images. These results are important for the automated detection of glaucoma and for telemedicine purposes.
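
    The search strategy, maximizing a fuzzy-entropy criterion with Differential Evolution, can be sketched as below; for brevity an ordinary (type-I) fuzzy entropy on a synthetic image stands in for the paper's interval type-II formulation, and the membership function and bounds are assumptions.

        import numpy as np
        from scipy.optimize import differential_evolution

        rng = np.random.default_rng(5)
        # Synthetic grey-level image: a bright optic-disc-like region on a darker fundus.
        img = np.clip(rng.normal(90, 20, (128, 128)), 0, 255)
        img[40:70, 50:80] = np.clip(rng.normal(190, 15, (30, 30)), 0, 255)
        hist = np.bincount(img.astype(int).ravel(), minlength=256) / img.size

        def neg_fuzzy_entropy(params):
            """Negative fuzzy entropy of a two-class partition with a sigmoid
            membership centered at t with spread w (a type-I stand-in)."""
            t, w = params
            g = np.arange(256)
            mu = 1.0 / (1.0 + np.exp(-(g - t) / max(w, 1e-3)))  # membership in "disc"
            p_obj = (hist * mu).sum()
            p_bkg = (hist * (1 - mu)).sum()
            eps = 1e-12
            H = -(p_obj * np.log(p_obj + eps) + p_bkg * np.log(p_bkg + eps))
            return -H   # DE minimizes, so negate the entropy

        result = differential_evolution(neg_fuzzy_entropy,
                                        bounds=[(1, 254), (1, 50)], seed=0)
        print("selected threshold: %.1f" % result.x[0])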

  14. Point coordinates extraction from localized hyperbolic reflections in GPR data

    NASA Astrophysics Data System (ADS)

    Ristić, Aleksandar; Bugarinović, Željko; Vrtunski, Milan; Govedarica, Miro

    2017-09-01

    In this paper, we propose an automated detection algorithm for localizing the apexes and prong points of the hyperbolic reflections that arise in GPR scanning. The objects of interest are cylindrical underground utilities, which have a distinctive hyperbolic reflection form in the radargram. The algorithm applies a trained neural network to the radargram in the form of a raster image, extracting segments of interest that contain hyperbolic reflections; this significantly reduces the amount of data for further analysis. The extracted segments define the zones in which apexes are localized. This is followed by the extraction of points on the prongs of the hyperbolic reflections, which continues until a stopping criterion is satisfied, regardless of the borders of the segment of interest. In the final step, false hyperbolic reflections caused by constructive interference are classified and removed. The algorithm is implemented in the MATLAB environment. The proposed algorithm has several advantages. It successfully recognizes true hyperbolic reflections in radargram images and extracts their coordinates, with a very low rate of false detections and without prior knowledge of the number of hyperbolic reflections or buried utilities. It can be applied to radargrams containing single and multiple hyperbolic reflections, including intersected, distorted, and incomplete (asymmetric) ones, all in the presence of a higher level of noise. A special feature of the algorithm is the procedure developed for analyzing and removing false hyperbolic reflections generated by constructive interference from reflectors associated with the utilities. The algorithm was tested on a number of synthetic radargrams and radargrams acquired in field surveys. To illustrate its performance, we present the characteristics of the algorithm on five representative radargrams obtained in real conditions. These examples cover different acquisition scenarios, varying the number of buried objects, their disposition, their size, and the level of noise. The most complex example was also tested as a synthetic radargram generated by gprMax. Processing time is 0.1 to 0.3 s for examples with one or two hyperbolic reflections and 2.2 to 4.9 s for the most complex examples. In general, the experimental results show that the proposed algorithm exhibits promising performance both in utility detection and in processing speed.

  15. Human Age Recognition by Electrocardiogram Signal Based on Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Dasgupta, Hirak

    2016-12-01

    The objective of this work is to build a neural network function approximation model that estimates human age from the electrocardiogram (ECG) signal. The input vector of the neural network comprises the Katz fractal dimension of the ECG signal, the frequencies in the QRS complex, sex (male or female, represented by a numeric constant), and the average distance between successive R-R peaks of the ECG signal. The QRS complex is detected by a short-time Fourier transform algorithm. Successive R peaks are detected by first cutting the signal into periods using an autocorrelation method and then finding the absolute maximum in each period. The neural network used in this problem consists of two layers, with sigmoid neurons in the input layer and a linear neuron in the output layer. The results show mean errors of -0.49, 1.03, and 0.79 years, and standard deviations of the errors of 1.81, 1.77, and 2.70 years, during training, cross-validation, and testing with unknown data sets, respectively.
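
    One of the input features, the Katz fractal dimension, is straightforward to compute from the waveform; a sketch over a synthetic ECG-like trace follows (the unit x-spacing convention and the test signal are illustrative).

        import numpy as np

        def katz_fd(y):
            """Katz fractal dimension: D = log10(n) / (log10(n) + log10(d / L)),
            where L is the total curve length, d the maximum distance from the
            first sample, and n the number of steps."""
            y = np.asarray(y, dtype=float)
            n = y.size - 1
            L = np.hypot(1.0, np.diff(y)).sum()          # curve length, unit x-spacing
            d = np.hypot(np.arange(1, y.size), y[1:] - y[0]).max()
            return np.log10(n) / (np.log10(n) + np.log10(d / L))

        # Synthetic ECG-like trace: narrow periodic spikes plus noise.
        t = np.linspace(0, 4, 2000)
        ecg = (np.sin(2 * np.pi * 1.2 * t) ** 63
               + 0.05 * np.random.default_rng(6).standard_normal(t.size))
        print("Katz FD: %.3f" % katz_fd(ecg))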

  16. Ground Validation Assessments of GPM Core Observatory Science Requirements

    NASA Astrophysics Data System (ADS)

    Petersen, Walt; Huffman, George; Kidd, Chris; Skofronick-Jackson, Gail

    2017-04-01

    NASA Global Precipitation Measurement (GPM) Mission science requirements define specific measurement error standards for retrieved precipitation parameters such as rain rate, raindrop size distribution, and falling snow detection on instantaneous temporal scales and spatial resolutions ranging from effective instrument fields of view [FOV] to grid scales of 50 km x 50 km. Quantitative evaluation of these requirements intrinsically relies on GPM precipitation retrieval algorithm performance in myriad precipitation regimes (and hence, assumptions related to physics) and on the quality of the ground-validation (GV) data being used to assess the satellite products. We will review GPM GV products, their quality, and their application to assessing GPM science requirements, interleaving measurement and precipitation physical considerations applicable to the approaches used. Core GV data products used to assess GPM satellite products include 1) two-minute and 30-minute rain gauge bias-adjusted radar rain rate products and precipitation types (rain/snow) adapted/modified from the NOAA/OU multi-radar multi-sensor (MRMS) product over the continental U.S.; 2) polarimetric radar estimates of rain rate over the ocean collected using the K-Pol radar at Kwajalein Atoll in the Marshall Islands and the Middleton Island WSR-88D radar located in the Gulf of Alaska; and 3) multi-regime, field campaign and site-specific disdrometer-measured rain/snow size distribution (DSD), phase, and fallspeed information used to derive polarimetric radar-based DSD retrievals and snow water equivalent rates (SWER) for comparison to coincident GPM-estimated DSD and precipitation rates/types, respectively. Within the limits of GV-product uncertainty we demonstrate that the GPM Core satellite meets its basic mission science requirements for a variety of precipitation regimes. For the liquid phase, we find that GPM radar-based products are particularly successful in meeting bias and random error requirements associated with retrievals of rain rate and the required +/- 0.5 millimeter error bounds for mass-weighted mean drop diameter. Version-04 (V4) GMI GPROF radiometer-based rain rate products exhibit reasonable agreement with GV, but do not completely meet mission science requirements over the continental U.S. for lighter rain rates (e.g., ~1 mm/hr) due to excessive random error (~75%). Importantly, substantial corrections were made to the V4 GPROF algorithm, and preliminary analysis of Version-05 (V5) rain products indicates more robust performance relative to GV. For the frozen phase and a modest GPM requirement to "demonstrate detection of snowfall", DPR products do successfully identify snowfall within the sensitivity and beam sampling limits of the DPR instrument (~12 dBZ lower limit; lowest clutter-free bins). Similarly, the GPROF algorithm successfully "detects" falling snow and delineates it from liquid precipitation. However, the GV approach to computing falling-snow "detection" statistics is intrinsically tied to GPROF Bayesian algorithm-based thresholds of precipitation "detection" and model analysis temperature, and is not sufficiently tied to SWER. Hence we will also discuss ongoing work to establish the lower threshold SWER for "detection" using combined GV radar, gauge, and disdrometer-based case studies.

  17. Computer-assisted diagnosis of melanoma.

    PubMed

    Fuller, Collin; Cellura, A Paul; Hibler, Brian P; Burris, Katy

    2016-03-01

    The computer-assisted diagnosis of melanoma is an exciting area of research where imaging techniques are combined with diagnostic algorithms in an attempt to improve detection and outcomes for patients with skin lesions suspicious for malignancy. Once an image has been acquired, it undergoes a processing pathway which includes preprocessing, enhancement, segmentation, feature extraction, feature selection, change detection, and ultimately classification. Practicality for everyday clinical use remains a vital question. A successful model must obtain results that are on par or outperform experienced dermatologists, keep costs at a minimum, be user-friendly, and be time efficient with high sensitivity and specificity. ©2015 Frontline Medical Communications.

  18. Privacy-Preserving Electrocardiogram Monitoring for Intelligent Arrhythmia Detection.

    PubMed

    Son, Junggab; Park, Juyoung; Oh, Heekuck; Bhuiyan, Md Zakirul Alam; Hur, Junbeom; Kang, Kyungtae

    2017-06-12

    Long-term electrocardiogram (ECG) monitoring, as a representative application of cyber-physical systems, facilitates the early detection of arrhythmia. A considerable number of previous studies have explored monitoring techniques and the automated analysis of sensing data. However, ensuring patient privacy or confidentiality has not been a primary concern in ECG monitoring. First, we propose an intelligent heart monitoring system, which involves a patient-worn ECG sensor (e.g., a smartphone) and a remote monitoring station, as well as a decision support server that interconnects these components. The decision support server analyzes the heart activity, using the Pan-Tompkins algorithm to detect heartbeats and a decision tree to classify them. Our system protects sensing data and user privacy, which is an essential attribute of dependability, by adopting signal scrambling and anonymous identity schemes. We also employ a public key cryptosystem to enable secure communication between the entities. Simulations using data from the MIT-BIH arrhythmia database demonstrate that our system achieves a 95.74% success rate in heartbeat detection and an accuracy of almost 96.63% in heartbeat classification, while successfully preserving privacy and securing communications among the involved entities.

  19. Privacy-Preserving Electrocardiogram Monitoring for Intelligent Arrhythmia Detection †

    PubMed Central

    Son, Junggab; Park, Juyoung; Oh, Heekuck; Bhuiyan, Md Zakirul Alam; Hur, Junbeom; Kang, Kyungtae

    2017-01-01

    Long-term electrocardiogram (ECG) monitoring, as a representative application of cyber-physical systems, facilitates the early detection of arrhythmia. A considerable number of previous studies have explored monitoring techniques and the automated analysis of sensing data. However, ensuring patient privacy or confidentiality has not been a primary concern in ECG monitoring. First, we propose an intelligent heart monitoring system, which involves a patient-worn ECG sensor (e.g., a smartphone) and a remote monitoring station, as well as a decision support server that interconnects these components. The decision support server analyzes the heart activity, using the Pan–Tompkins algorithm to detect heartbeats and a decision tree to classify them. Our system protects sensing data and user privacy, which is an essential attribute of dependability, by adopting signal scrambling and anonymous identity schemes. We also employ a public key cryptosystem to enable secure communication between the entities. Simulations using data from the MIT-BIH arrhythmia database demonstrate that our system achieves a 95.74% success rate in heartbeat detection and an accuracy of almost 96.63% in heartbeat classification, while successfully preserving privacy and securing communications among the involved entities. PMID:28604628
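
    The heartbeat-detection stage named in both records above, the Pan-Tompkins algorithm, is a fixed pipeline of band-pass filtering, differentiation, squaring, and moving-window integration; the sketch below runs it on a synthetic trace, with a simplified fixed threshold in place of the original adaptive dual thresholds.

        import numpy as np
        from scipy import signal

        fs = 360  # Hz, the MIT-BIH sampling rate
        t = np.arange(0, 10, 1 / fs)
        rng = np.random.default_rng(7)
        # Synthetic ECG stand-in: sharp periodic "R waves" in noise (~75 bpm).
        ecg = 0.05 * rng.standard_normal(t.size)
        for beat in np.arange(0.5, 10, 0.8):
            ecg += np.exp(-0.5 * ((t - beat) / 0.01) ** 2)

        # 1) Band-pass filter (~5-15 Hz) to emphasize the QRS complex.
        b, a = signal.butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
        x = signal.filtfilt(b, a, ecg)
        # 2) Differentiate, 3) square, 4) moving-window integration (~150 ms).
        x = np.diff(x, prepend=x[0]) ** 2
        win = int(0.15 * fs)
        x = np.convolve(x, np.ones(win) / win, mode="same")
        # 5) Peak picking with a refractory period; the fixed threshold is a
        #    simplification of Pan-Tompkins' adaptive dual thresholds.
        peaks, _ = signal.find_peaks(x, height=0.3 * x.max(), distance=int(0.25 * fs))
        print("detected beats:", len(peaks), "at t =", np.round(t[peaks], 2))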

  20. A Method of Sky Ripple Residual Nonuniformity Reduction for a Cooled Infrared Imager and Hardware Implementation.

    PubMed

    Li, Yiyang; Jin, Weiqi; Li, Shuo; Zhang, Xu; Zhu, Jin

    2017-05-08

    Cooled infrared detector arrays always suffer from undesired ripple residual nonuniformity (RNU) in sky scene observations. Ripple residual nonuniformity seriously affects imaging quality, especially for small target detection. It is difficult to eliminate with calibration-based techniques and with current scene-based nonuniformity correction algorithms. In this paper, we present a modified temporal high-pass nonuniformity correction algorithm that uses fuzzy scene classification. The fuzzy scene classification is designed to control the correction threshold so that the algorithm can remove ripple RNU without degrading scene details. We test the algorithm on a real infrared sequence, comparing it to several well-established methods. The results show that the algorithm has clear advantages over the tested methods in terms of detail preservation and convergence speed for ripple RNU correction. Furthermore, we present our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA), which has two advantages: (1) low resource consumption; and (2) small hardware delay (less than 10 image rows). It has been successfully applied in an actual system.
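
    The backbone of such scene-based correction, a temporal high-pass filter whose per-pixel update is gated by a scene-dependent rule, can be sketched as follows; the recursive low-pass form, the crude motion gate, and all constants are simplified assumptions standing in for the paper's fuzzy scene classification.

        import numpy as np

        def temporal_highpass_nuc(frames, alpha=0.01, gate_thresh=0.05):
            """Scene-based NUC sketch: per-pixel offsets are the slowly varying
            (low-pass) component of the video; subtracting them removes residual
            fixed-pattern noise. A crude gate freezes the update on pixels with
            strong temporal change (edges/targets) to limit ghosting; the paper
            replaces this gate with fuzzy scene classification."""
            offset = np.zeros_like(frames[0])
            corrected = []
            prev = frames[0]
            for f in frames:
                moving = np.abs(f - prev) > gate_thresh       # scene-detail mask
                upd = np.where(moving, 0.0, alpha)            # freeze update there
                offset = (1 - upd) * offset + upd * f         # recursive low-pass
                corrected.append(f - offset + offset.mean())  # keep mean brightness
                prev = f
            return np.array(corrected)

        rng = np.random.default_rng(8)
        fpn = 0.1 * rng.standard_normal((64, 64))             # ripple-like fixed pattern
        frames = np.array([0.5 + fpn + 0.02 * rng.standard_normal((64, 64))
                           for _ in range(200)])
        out = temporal_highpass_nuc(frames)
        print("frame std: %.4f -> %.4f" % (frames[-1].std(), out[-1].std()))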

  1. Development and Application of a Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Fulton, Christopher E.; Maul, William A.; Sowers, T. Shane

    2007-01-01

    This paper describes the development and initial demonstration of a Portable Health Algorithms Test (PHALT) System that is being developed by researchers at the NASA Glenn Research Center (GRC). The PHALT System was conceived as a means of evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT System allows systems health management algorithms to be developed in a graphical programming environment; to be tested and refined using system simulation or test data playback; and finally, to be evaluated in a real-time hardware-in-the-loop mode with a live test article. In this paper, PHALT System development is described through the presentation of a functional architecture, followed by the selection and integration of hardware and software. Also described is an initial real-time hardware-in-the-loop demonstration that used sensor data qualification algorithms to diagnose and isolate simulated sensor failures in a prototype Power Distribution Unit test-bed. Success of the initial demonstration is highlighted by the correct detection of all sensor failures and the absence of any real-time constraint violations.

  2. Retinal vessel segmentation on SLO image

    PubMed Central

    Xu, Juan; Ishikawa, Hiroshi; Wollstein, Gadi; Schuman, Joel S.

    2010-01-01

    A scanning laser ophthalmoscopy (SLO) image, taken from optical coherence tomography (OCT), usually has lower global/local contrast and more noise than a traditional retinal photograph, which makes vessel segmentation challenging. A hybrid algorithm is proposed to solve these problems efficiently by fusing several designed methods, taking advantage of each method while reducing measurement error. The algorithm consists of several steps: image preprocessing, thresholding probe, and weighted fusing. Four different methods are first designed to transform the SLO image into feature response images using different combinations of matched filters, contrast enhancement, and mathematical morphology operators. A thresholding probe algorithm is then applied to those response images to obtain four vessel maps. A weighted majority opinion is used to fuse these vessel maps and generate the final vessel map. The experimental results showed that the proposed hybrid algorithm could successfully segment the blood vessels in SLO images, detecting the major and small vessels while suppressing noise. The algorithm shows substantial potential for various clinical applications, and its use can also be extended to medical image registration based on blood vessel location. PMID:19163149

  3. Automatic Whistler Detector and Analyzer system: Implementation of the analyzer algorithm

    NASA Astrophysics Data System (ADS)

    Lichtenberger, JáNos; Ferencz, Csaba; Hamar, Daniel; Steinbach, Peter; Rodger, Craig J.; Clilverd, Mark A.; Collier, Andrew B.

    2010-12-01

    The full potential of whistlers for monitoring plasmaspheric electron density variations has not yet been realized. The primary reason is the vast human effort required for the analysis of whistler traces. Recently, the first part of a complete whistler analysis procedure was successfully automated, i.e., the automatic detection of whistler traces in the raw broadband VLF signal was achieved. This study describes a new algorithm developed to determine plasmaspheric electron density measurements from whistler traces, based on a Virtual (Whistler) Trace Transformation using a 2-D fast Fourier transform. This algorithm can be automated and can thus form the final step of a complete Automatic Whistler Detector and Analyzer (AWDA) system. In this second AWDA paper, the practical implementation of the Automatic Whistler Analyzer (AWA) algorithm is discussed and a feasible solution is presented. The implementation is able to track the variations of the plasmasphere in quasi real time on a PC cluster with 100 CPU cores. The electron densities obtained by the AWA method can be used in investigations such as plasmasphere dynamics, ionosphere-plasmasphere coupling, or space weather models.

  4. Statistical methods for convergence detection of multi-objective evolutionary algorithms.

    PubMed

    Trautmann, H; Wagner, T; Naujoks, B; Preuss, M; Mehnen, J

    2009-01-01

    In this paper, two approaches for estimating the generation in which a multi-objective evolutionary algorithm (MOEA) shows statistically significant signs of convergence are introduced. A set-based perspective is taken where convergence is measured by performance indicators. The proposed techniques fulfill the requirements of proper statistical assessment on the one hand and efficient optimisation for real-world problems on the other hand. The first approach accounts for the stochastic nature of the MOEA by repeating the optimisation runs for increasing generation numbers and analysing the performance indicators using statistical tools. This technique results in a very robust offline procedure. Moreover, an online convergence detection method is introduced as well. This method automatically stops the MOEA when either the variance of the performance indicators falls below a specified threshold or a stagnation of their overall trend is detected. Both methods are analysed and compared for two MOEAs and on different classes of benchmark functions. It is shown that the methods successfully operate on all stated problems, needing fewer function evaluations while preserving good approximation quality.

  5. Artificial Intelligence Methods Applied to Parameter Detection of Atrial Fibrillation

    NASA Astrophysics Data System (ADS)

    Arotaritei, D.; Rotariu, C.

    2015-09-01

    In this paper we present a novel method to detect atrial fibrillation (AF) based on statistical descriptors and a hybrid neuro-fuzzy and crisp system. The inference of the system produces if-then-else rules, which are extracted to construct a binary decision system: normal or atrial fibrillation. We use TPR (Turning Point Ratio), SE (Shannon Entropy), and RMSSD (Root Mean Square of Successive Differences), along with a new descriptor, the Teager-Kaiser energy, in order to improve the detection accuracy. The descriptors are calculated over a sliding window, which produces a very large number of vectors (a massive dataset) used by the classifier. The window length is a crisp descriptor, while the remaining descriptors are interval-valued. The parameters of the hybrid system are adapted using a genetic algorithm (GA) with a single-objective fitness target: the highest values of sensitivity and specificity. The rules are extracted and form part of the decision system. The proposed method was tested using the Physionet MIT-BIH Atrial Fibrillation Database, and the experimental results revealed good accuracy of AF detection in terms of sensitivity and specificity (above 90%).
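
    Three of the statistical descriptors named above are short computations over the R-R interval series; a sketch with illustrative data follows (the sliding-window machinery and the interval-valued extension are omitted).

        import numpy as np

        def rmssd(rr):
            """Root mean square of successive differences of R-R intervals."""
            return np.sqrt(np.mean(np.diff(rr) ** 2))

        def turning_point_ratio(rr):
            """Fraction of samples that are local extrema ('turning points')."""
            a, b, c = rr[:-2], rr[1:-1], rr[2:]
            turning = ((b > a) & (b > c)) | ((b < a) & (b < c))
            return turning.mean()

        def shannon_entropy(rr, bins=16):
            """Shannon entropy of the histogram of R-R intervals."""
            counts, _ = np.histogram(rr, bins=bins)
            p = counts[counts > 0] / counts.sum()
            return -(p * np.log2(p)).sum()

        rng = np.random.default_rng(9)
        rr_normal = 0.8 + 0.02 * rng.standard_normal(300)   # regular rhythm
        rr_af = 0.8 + 0.15 * rng.standard_normal(300)       # irregular, AF-like
        for name, rr in [("normal", rr_normal), ("AF-like", rr_af)]:
            print(name, "RMSSD=%.3f TPR=%.2f H=%.2f"
                  % (rmssd(rr), turning_point_ratio(rr), shannon_entropy(rr)))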

  6. A graph-based system for network-vulnerability analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, L.P.; Phillips, C.

    1998-06-01

    This paper presents a graph-based approach to network vulnerability analysis. The method is flexible, allowing analysis of attacks from both outside and inside the network. It can analyze risks to a specific network asset, or examine the universe of possible consequences following a successful attack. The graph-based tool can identify the set of attack paths that have a high probability of success (or a low effort cost) for the attacker. The system could be used to test the effectiveness of making configuration changes, implementing an intrusion detection system, etc. The analysis system requires as input a database of common attacks, broken into atomic steps, specific network configuration and topology information, and an attacker profile. The attack information is matched with the network configuration information and an attacker profile to create a superset attack graph. Nodes identify a stage of attack, for example the class of machines the attacker has accessed and the user privilege level he or she has compromised. The arcs in the attack graph represent attacks or stages of attacks. By assigning probabilities of success on the arcs or costs representing level-of-effort for the attacker, various graph algorithms such as shortest-path algorithms can identify the attack paths with the highest probability of success.
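
    Finding the most probable attack path reduces to a shortest-path computation once each arc's success probability p is mapped to the additive cost -log p; the sketch below uses networkx on a made-up attack graph, and the states and probabilities are purely illustrative.

        import math
        import networkx as nx

        # Toy attack graph: nodes are attacker states (privilege on a machine class),
        # arcs are atomic attack steps with assumed success probabilities.
        G = nx.DiGraph()
        steps = [("outside", "dmz_user", 0.7),
                 ("outside", "dmz_user2", 0.3),
                 ("dmz_user", "internal_user", 0.5),
                 ("dmz_user2", "internal_user", 0.9),
                 ("internal_user", "db_root", 0.4)]
        for u, v, p in steps:
            # Multiplying probabilities along a path equals adding -log(p) costs,
            # so a shortest-path search finds the most probable attack path.
            G.add_edge(u, v, weight=-math.log(p), p=p)

        path = nx.shortest_path(G, "outside", "db_root", weight="weight")
        prob = math.prod(G[u][v]["p"] for u, v in zip(path, path[1:]))
        print(path, "success probability %.3f" % prob)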

  7. Harmonic regression based multi-temporal cloud filtering algorithm for Landsat 8

    NASA Astrophysics Data System (ADS)

    Joshi, P.

    2015-12-01

    The Landsat data archive, though rich, has missing dates and periods owing to weather irregularities and inconsistent coverage. The satellite images are further subject to cloud cover effects, resulting in erroneous analysis and observations of ground features. In earlier studies, a change detection algorithm using statistical control charts on harmonic residuals of multi-temporal Landsat 5 data was shown to detect a few prominent remnant clouds [Brooks, Evan B., et al., 2014]. In this work we build on this harmonic regression approach to detect and filter clouds using a multi-temporal series of Landsat 8 images. First, we compute the harmonic coefficients by fitting models to annual training data. The time series of residuals is then subjected to Shewhart X-bar control charts, which signal the deviations of cloud points from the fitted multi-temporal Fourier curve. For a process with standard deviation σ, we found second- and third-order harmonic regression with an X-bar chart control limit Lσ in the range 0.5σ < Lσ < σ to be most efficient in detecting clouds. By implementing second-order harmonic regression with successive X-bar chart control limits of L and 0.5L on the NDVI, NDSI, and haze optimized transformation (HOT), and utilizing the seasonal physical properties of these parameters, we have designed a novel multi-temporal algorithm for filtering clouds from Landsat 8 images. The method is applied to Virginia and Alabama, in Landsat 8 UTM zones 17 and 16, respectively. Our algorithm efficiently filters all types of cloud cover with an overall accuracy greater than 90%. Because it operates multi-temporally and can recreate the multi-temporal image database using only the coefficients of the Fourier regression, our algorithm is largely storage- and time-efficient. The results show good potential for this multi-temporal approach to cloud detection as a timely and targeted solution for the Landsat 8 research community, catering to the need for innovative processing solutions in the early stage of the satellite's mission.
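
    A per-pixel version of the approach, fitting a second-order harmonic (Fourier) regression to a year of observations and flagging residuals that fall outside an X-bar control limit, can be sketched with synthetic NDVI data as follows; the control limit L = 0.75σ is an assumption inside the 0.5σ-σ band reported above.

        import numpy as np

        rng = np.random.default_rng(10)
        doy = np.arange(8, 366, 16)                      # 16-day series (day of year)
        w = 2 * np.pi * doy / 365.25

        # Synthetic NDVI with an annual cycle; three dates are cloud-contaminated.
        ndvi = (0.5 + 0.2 * np.sin(w) + 0.05 * np.cos(2 * w)
                + 0.02 * rng.standard_normal(doy.size))
        ndvi[[3, 10, 17]] -= 0.35                        # clouds depress NDVI sharply

        # Second-order harmonic regression: NDVI ~ 1, sin(wt), cos(wt), sin(2wt), cos(2wt).
        X = np.column_stack([np.ones_like(w), np.sin(w), np.cos(w),
                             np.sin(2 * w), np.cos(2 * w)])
        coef, *_ = np.linalg.lstsq(X, ndvi, rcond=None)
        resid = ndvi - X @ coef

        # Shewhart X-bar style control limit; clouds give strongly negative residuals.
        sigma = resid.std()
        flagged = np.where(resid < -0.75 * sigma)[0]
        print("flagged observations:", flagged)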

  8. Monitoring Fires from Space: a case study in transitioning from research to applications

    NASA Astrophysics Data System (ADS)

    Justice, C. O.; Giglio, L.; Vadrevu, K. P.; Csiszar, I. A.; Schroeder, W.; Davies, D.

    2012-12-01

    This paper discusses the heritage and relationships between science and applications in the context of global satellite-based fire monitoring. The development of algorithms for satellite-based fire detection has been supported primarily by NASA for the polar orbiters with a global focus, and initially by NOAA and more recently by EUMETSAT for the geostationary satellites, with a regional focus. As the feasibility and importance of space-based fire monitoring were recognized, satellite missions were designed to include fire detection capabilities, and as a result the algorithms and the accuracy of the detections have improved. Due to the role of fire in the Earth System and its relevance to society, at each step in the development of the sensing capability the research has made a transition into fire-related applications, to such an extent that there is now broad use of these data worldwide. The polar-orbiting satellite fire detection capability originated with the AVHRR sensor in the early 1980s, but was transformed with the launch of the EOS MODIS instruments, whose sensor characteristics were specified with fire detection in mind. NASA placed considerable emphasis on the accuracy assessment of fire detection and on the development of fire characterization and burned area products from MODIS. Collaboration between the MODIS Fire Team and the USFS RSAC, initiated in the context of the Montana wildfires of 2001, prompted the development of a Rapid Response System for fire data and eventually led to operational use of MODIS data by the USFS for strategic fire monitoring. Building on this success, the Fire Information for Resource Management System (FIRMS) project was funded by NASA Applications to further develop products and services for the fire information community. FIRMS was developed as a web-based geospatial tool offering a range of data services, including SMS text messaging, and is now widely used. This system, developed in the research domain, has now been successfully moved to an operational home at the UN FAO as the Global Fire Information Management System (GFIMS). With a view to operational data continuity, the Suomi-NPP/JPSS VIIRS system was also designed with a fire detection capability and is providing promising results for fire monitoring, both from the standard operational production system and from experimental product enhancements. International coordination on fire observations and outreach has been successfully developed under the GOFC-GOLD program.

  9. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

    NASA Astrophysics Data System (ADS)

    Bektešević, Dino; Vinković, Dejan

    2017-11-01

    Computer vision algorithms are powerful tools in astronomical image analysis, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards the detection of stars and galaxies, completely ignoring the detection of linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big-data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced by a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability to astronomical surveys are also discussed.
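
    The final stage, a Hough transform on the cleaned and enhanced image, can be illustrated with scikit-image; the synthetic image below stands in for a survey frame from which stars and galaxies have already been removed.

        import numpy as np
        from skimage.transform import hough_line, hough_line_peaks
        from skimage.draw import line

        # Synthetic "cleaned" survey image: stars/galaxies already removed, one
        # linear feature (e.g., a satellite or meteor trail) remains.
        img = np.zeros((200, 200), dtype=bool)
        rr, cc = line(20, 10, 180, 170)
        img[rr, cc] = True
        img |= np.random.default_rng(11).random((200, 200)) > 0.999  # residual specks

        # Hough transform and peak extraction.
        h, angles, dists = hough_line(img)
        _, best_angles, best_dists = hough_line_peaks(h, angles, dists, num_peaks=1)
        print("trail at angle %.1f deg, distance %.1f px"
              % (np.degrees(best_angles[0]), best_dists[0]))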

  10. An efficient parallel termination detection algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, A. H.; Crivelli, S.; Jessup, E. R.

    2004-05-27

    Information local to any one processor is insufficient to monitor the overall progress of most distributed computations. Typically, a second distributed computation for detecting termination of the main computation is necessary. In order to be a useful computational tool, the termination detection routine must operate concurrently with the main computation, adding minimal overhead, and it must promptly and correctly detect termination when it occurs. In this paper, we present a new algorithm for detecting the termination of a parallel computation on distributed-memory MIMD computers that satisfies all of those criteria. A variety of termination detection algorithms have been devised. Of these, the algorithm presented by Sinha, Kale, and Ramkumar (henceforth, the SKR algorithm) is unique in its ability to adapt to the load conditions of the system on which it runs, thereby minimizing the impact of termination detection on performance. Because their algorithm also detects termination quickly, we consider it to be the most efficient practical algorithm presently available. The termination detection algorithm presented here was developed for use in the PMESC programming library for distributed-memory MIMD computers. Like the SKR algorithm, our algorithm adapts to system loads and imposes little overhead. Also like the SKR algorithm, ours is tree-based, and it does not depend on any assumptions about the physical interconnection topology of the processors or the specifics of the distributed computation. In addition, our algorithm is easier to implement and requires only half as many tree traversals as does the SKR algorithm. This paper is organized as follows. In section 2, we define our computational model. In section 3, we review the SKR algorithm. We introduce our new algorithm in section 4, and prove its correctness in section 5. We discuss its efficiency and present experimental results in section 6.

  11. Cloud Detection of Optical Satellite Images Using Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Lee, Kuan-Yi; Lin, Chao-Hung

    2016-06-01

    Cloud cover is generally present in optical remote-sensing images, which limits the usage of the acquired images and increases the difficulty of data analysis tasks such as image compositing, correction of atmospheric effects, calculation of vegetation indices, land cover classification, and land cover change detection. In previous studies, thresholding has been a common and useful method for cloud detection. However, a selected threshold is usually suitable only for certain cases or local study areas, and it may fail in other cases; in other words, thresholding-based methods are data-sensitive. Besides, there are many exceptions to control, and the environment changes dynamically, so using the same threshold value on various data is not effective. In this study, a threshold-free method based on the Support Vector Machine (SVM) is proposed, which avoids the abovementioned problems. The main idea of this study is to adopt a statistical model to detect clouds instead of a subjective thresholding-based method. The features used in a classifier are the key to a successful classification. The Automatic Cloud Cover Assessment (ACCA) algorithm, which is based on the physical characteristics of clouds, is used to distinguish clouds from other objects; similarly, the Fmask algorithm (Zhu et al., 2012) uses many thresholds and criteria to screen clouds, cloud shadows, and snow. Therefore, feature extraction is based on the ACCA algorithm and Fmask. Spatial and temporal information is also important for satellite images; consequently, the co-occurrence matrix and the temporal variance with uniformity of the major principal axis are used in the proposed method. We aim to classify images into three groups: cloud, non-cloud, and others. In the experiments, images acquired by the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) containing landscapes of agriculture, snow areas, and islands are tested. The experimental results demonstrate that the detection accuracy of the proposed method is better than that of related methods.

  12. Individual Rocks Segmentation in Terrestrial Laser Scanning Point Cloud Using Iterative Dbscan Algorithm

    NASA Astrophysics Data System (ADS)

    Walicka, A.; Jóźków, G.; Borkowski, A.

    2018-05-01

    Fluvial transport is an important aspect of hydrological and geomorphological studies. Knowledge of the movement parameters of different-size fractions is essential in many applications, such as the exploration of watercourse changes, the calculation of river bed parameters, or the investigation of the frequency and nature of weather events. Traditional techniques used for fluvial transport investigations do not provide any information about the long-term horizontal movement of the rocks. This information can be gained by means of terrestrial laser scanning (TLS). However, this is a complex task consisting of several stages of data processing. In this study, a methodology for segmenting individual rocks from a TLS point cloud is proposed as the first step of a semi-automatic algorithm for detecting the movement of individual rocks. The proposed algorithm is executed in two steps. First, the point cloud is classified into rocks and background using only geometric information. Second, the DBSCAN algorithm is executed iteratively on the points classified as rocks until only one stone is detected in each segment. The number of rocks in each segment is determined using principal component analysis (PCA) and a simple derivative method for peak detection. As a result, segments corresponding to individual rocks are formed. Numerical tests were executed on two test samples, and the results of the semi-automatic segmentation were compared to those of manual segmentation. The proposed methodology successfully segmented 76% and 72% of the rocks in test sample 1 and test sample 2, respectively.
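
    The iterative DBSCAN step can be sketched with scikit-learn: cluster the rock-classified points, then re-run DBSCAN with a tighter eps on any segment that appears to hold more than one rock. The simple spatial-extent test below is a stand-in for the paper's PCA and peak-detection criterion, and all parameters are illustrative.

        import numpy as np
        from sklearn.cluster import DBSCAN

        def segment_rocks(points, eps=0.10, min_samples=10, max_extent=0.6, depth=0):
            """Recursively apply DBSCAN, shrinking eps where a segment looks like
            several merged rocks. The 'merged' test here is a crude spatial-extent
            threshold standing in for the PCA + peak-detection criterion."""
            labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
            segments = []
            for lab in set(labels) - {-1}:                 # -1 is DBSCAN noise
                seg = points[labels == lab]
                extent = np.ptp(seg, axis=0).max()
                if extent > max_extent and depth < 5:      # likely >1 rock: iterate
                    segments += segment_rocks(seg, eps * 0.7, min_samples,
                                              max_extent, depth + 1)
                else:
                    segments.append(seg)
            return segments

        # Two synthetic rocks lying close together (x, y, z in meters).
        rng = np.random.default_rng(12)
        rock1 = rng.normal([0.0, 0.0, 0.1], 0.08, (300, 3))
        rock2 = rng.normal([0.5, 0.0, 0.1], 0.08, (300, 3))
        segments = segment_rocks(np.vstack([rock1, rock2]))
        print(len(segments), "rocks; sizes:", [len(s) for s in segments])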

  13. Ripple FPN reduced algorithm based on temporal high-pass filter and hardware implementation

    NASA Astrophysics Data System (ADS)

    Li, Yiyang; Li, Shuo; Zhang, Zhipeng; Jin, Weiqi; Wu, Lei; Jin, Minglei

    2016-11-01

    Cooled infrared detector arrays always suffer from undesired ripple fixed-pattern noise (FPN) when observing sky scenes. Ripple fixed-pattern noise seriously affects the imaging quality of a thermal imager, especially for small target detection and tracking, and it is hard to eliminate with calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified spatial low-pass and temporal high-pass nonuniformity correction algorithm using an adaptive time-domain threshold (THP&GM). The threshold is designed to significantly reduce ghosting artifacts. We test the algorithm on real infrared sequences in comparison to several previously published methods. The algorithm not only effectively corrects common FPN such as stripes, but also has clear advantages over current methods in terms of detail preservation and convergence speed, especially for ripple FPN correction. Furthermore, we present our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA). The hardware implementation of the algorithm on the FPGA has two advantages: (1) low resource consumption, and (2) small hardware delay (less than 20 lines). The hardware has been successfully applied in an actual system.

  14. How Small Can Impact Craters Be Detected at Large Scale by Automated Algorithms?

    NASA Astrophysics Data System (ADS)

    Bandeira, L.; Machado, M.; Pina, P.; Marques, J. S.

    2013-12-01

    The last decade has seen widespread publication of crater detection algorithms (CDA) with increasing detection performance. The adaptive nature of some of the algorithms [1] has permitted their use in the construction or update of global catalogues for Mars and the Moon. Nevertheless, the smallest craters detected in these situations by CDA have diameters of 10 pixels (about 2 km in MOC-WA images) [2], or down to 16 pixels, or 200 m, in HRSC imagery [3]. The availability of Martian images with metric (HRSC and CTX) and centimetric (HiRISE) resolutions makes it possible to unveil craters not perceived before, so automated approaches seem a natural way of detecting the myriad of these structures. In this study we present our efforts, based on our previous algorithms [2-3] and new training strategies, to push the automated detection of craters to a dimensional threshold as close as possible to the detail that can be perceived in the images, something that has not yet been addressed in a systematic way. The approach is based on the selection of candidate regions of the images (portions that contain crescent highlight and shadow shapes indicating the possible presence of a crater) using mathematical morphology operators (connected operators of different sizes), followed by the extraction of texture features (Haar-like) and classification by AdaBoost into crater and non-crater. This is a supervised approach, meaning that a training phase, in which manually labelled samples are provided, is necessary so the classifier can learn what crater and non-crater structures are. The algorithm is intensively tested on Martian HiRISE images from different locations on the planet, in order to cover the largest range of surface types from the geological point of view (different ages and crater densities) and from the imaging or textural perspective (different degrees of smoothness/roughness). The quality of the detections clearly depends on the size of the craters to be detected: the lower this limit, the higher the false detection rates. A detailed evaluation with results broken down by crater dimension and image or surface type shows that automated detection in large HiRISE imagery datasets at 25 cm/pixel resolution can be performed successfully (high correct-detection and low false-positive rates) down to a crater dimension of about 8-10 m, or 32-40 pixels. [1] Martins L, Pina P, Marques JS, Silveira M, 2009, Crater detection by a boosting approach. IEEE Geoscience and Remote Sensing Letters 6: 127-131. [2] Salamuniccar G, Loncaric S, Pina P, Bandeira L, Saraiva J, 2011, MA130301GT catalogue of Martian impact craters and advanced evaluation of crater detection algorithms using diverse topography and image datasets. Planetary and Space Science 59: 111-131. [3] Bandeira L, Ding W, Stepinski T, 2012, Detection of sub-kilometer craters in high resolution planetary images using shape and texture features. Advances in Space Research 49: 64-74.

  15. Visualization of Pulsar Search Data

    NASA Astrophysics Data System (ADS)

    Foster, R. S.; Wolszczan, A.

    1993-05-01

    The search for periodic signals from rotating neutron stars, or pulsars, has been a computationally taxing problem for astronomers for more than twenty-five years. Over this time interval, increases in computational capability have allowed ever more sensitive searches covering a larger parameter space. The volume of input data and the general presence of radio frequency interference typically produce numerous spurious signals. Visualization of the search output and enhanced real-time processing of significant candidate events allow the pulsar searcher to optimally process the data and search for new radio pulsars. The pulsar search algorithm and visualization system presented in this paper currently run on serial RISC-based workstations, a traditional vector-based supercomputer, and a massively parallel computer. The serial software algorithm and its modifications for massively parallel computing are described. Four successive searches for millisecond-period radio pulsars using the Arecibo telescope at 430 MHz have resulted in the successful detection of new long-period and millisecond-period radio pulsars.

  16. Radar Detection of Marine Mammals

    DTIC Science & Technology

    2011-09-30

    ...BFT-BPT algorithm for use with our radar data. This track-before-detect algorithm had been effective in enhancing small but persistent signatures in... will be possible with the detect-before-track algorithm. We next evaluated the track-before-detect algorithm, the BFT-BPT, on the CEDAR data...

  17. A system for automatic aorta sections measurements on chest CT

    NASA Astrophysics Data System (ADS)

    Pfeffer, Yitzchak; Mayer, Arnaldo; Zholkover, Adi; Konen, Eli

    2016-03-01

    A new method is proposed for caliber measurement of the ascending aorta (AA) and descending aorta (DA). A key component of the method is the automatic detection of the carina, as an anatomical landmark around which an axial volume of interest (VOI) can be defined to observe the aortic caliber. For each slice in the VOI, a linear profile line connecting the AA with the DA is found by pattern matching on the underlying intensity profile. Next, the aortic center position is found using a Hough transform on the best linear segment candidate. Finally, region growing around the center provides an accurate segmentation and caliber measurement. We evaluated the algorithm on 113 sequential chest CT scans, with slice thicknesses of 0.75-3.75 mm, 90 of them with contrast agent injected. The algorithm success rate was computed as the percentage of scans in which the center of the AA was found. Automated measurements of AA caliber were compared with independent measurements by two experienced chest radiologists, comparing the absolute difference between the two radiologists with the absolute difference between the algorithm and each radiologist. Measurement stability was demonstrated by computing the standard deviation (STD) of the absolute difference between the radiologists, and between the algorithm and the radiologists. Results: success rates of 93% and 74% were achieved for contrast-injected and non-contrast cases, respectively. These results indicate that the algorithm can be robust to the large variability in image quality encountered in a real-world clinical setting. The average absolute difference between the algorithm and the radiologists was 1.85 mm, lower than the average absolute difference between the radiologists, which was 2.1 mm. The STD of the absolute difference between the algorithm and the radiologists was 1.5 mm vs. 1.6 mm between the two radiologists. These results demonstrate the clinical relevance of the algorithm's measurements.

  18. Q-Learning-Based Adjustable Fixed-Phase Quantum Grover Search Algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Ying; Shi, Wensha; Wang, Yijun; Hu, Jiankun

    2017-02-01

    We demonstrate that the rotation phase can be suitably chosen to increase the efficiency of the phase-based quantum search algorithm, leading to a dynamic balance between iterations and success probabilities of the fixed-phase quantum Grover search algorithm with Q-learning for a given number of solutions. In this search algorithm, the proposed Q-learning algorithm, which is in essence a model-free reinforcement learning strategy, is used to perform a matching between the fraction of marked items λ and the rotation phase α. After establishing the policy function α = π(λ), we complete the fixed-phase Grover algorithm, where the phase parameter is selected via the learned policy. Simulation results show that the Q-learning-based Grover search algorithm (QLGA) requires fewer iterations and achieves higher success probabilities. Compared with conventional Grover algorithms, it avoids local optima, thereby enabling success probabilities to approach one.
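
    The record gives no implementation detail; purely as an illustration of the two ingredients it names, the sketch below combines an exact 2D simulation of fixed-phase Grover iterations (oracle and diffusion share one phase α; α = π recovers standard Grover) with tabular Q-learning over a discretized (λ, α) grid. The discretization, learning rate, exploration rate, and reward definition are all assumptions, not the authors' QLGA:

      import numpy as np

      def grover_success(lam, alpha, iters):
          # Exact simulation in the 2D invariant subspace spanned by the
          # normalized marked and unmarked superpositions.
          theta = np.arcsin(np.sqrt(lam))
          psi = np.array([np.sin(theta), np.cos(theta)], dtype=complex)
          oracle = np.diag([np.exp(1j * alpha), 1.0])
          diffusion = np.eye(2) - (1 - np.exp(1j * alpha)) * np.outer(psi, psi.conj())
          state = psi.copy()
          for _ in range(iters):
              state = diffusion @ (oracle @ state)
          return abs(state[0]) ** 2  # success = probability of a marked item

      # Bandit-style tabular Q-learning (one phase choice per episode, so no
      # discounting); the greedy action per state plays the role of alpha = pi(lambda).
      fractions = np.linspace(0.01, 0.25, 15)   # states: marked fraction lambda
      phases = np.linspace(0.1, np.pi, 30)      # actions: rotation phase alpha
      Q = np.zeros((len(fractions), len(phases)))
      rng = np.random.default_rng(0)
      for episode in range(20000):
          s = rng.integers(len(fractions))
          a = rng.integers(len(phases)) if rng.random() < 0.1 else int(Q[s].argmax())
          # Iteration count chosen with the standard (alpha = pi) rule, as a
          # simple stand-in reward setting.
          k = max(1, int(round(np.pi / (4 * np.arcsin(np.sqrt(fractions[s]))))))
          r = grover_success(fractions[s], phases[a], k)  # reward = success prob.
          Q[s, a] += 0.1 * (r - Q[s, a])
      policy = {lam: phases[int(Q[i].argmax())] for i, lam in enumerate(fractions)}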

  19. Improved Resolution and Reduced Clutter in Ultra-Wideband Microwave Imaging Using Cross-Correlated Back Projection: Experimental and Numerical Results

    PubMed Central

    Jacobsen, S.; Birkelund, Y.

    2010-01-01

    Microwave breast cancer detection is based on the dielectric contrast between healthy and malignant tissue. This radar-based imaging method involves illumination of the breast with an ultra-wideband pulse. Detection of tumors within the breast is achieved by a selected focusing technique. Image formation algorithms are tailored to enhance tumor responses and reduce early-time and late-time clutter associated with skin reflections and the heterogeneity of breast tissue. In this contribution, we evaluate the performance of the so-called cross-correlated back projection imaging scheme by using a scanning system in phantom experiments. Supplementary numerical modeling based on commercial software is also presented. The phantom is synthetically scanned with a broadband elliptical antenna in a monostatic configuration. The respective signals are pre-processed by a data-adaptive RLS algorithm in order to remove artifacts caused by antenna reverberations and signal clutter. Successful detection of a 7 mm diameter cylindrical tumor immersed in a low-permittivity medium was achieved in all cases. Selecting the widely used delay-and-sum (DAS) beamforming algorithm as a benchmark, we show that correlation-based imaging methods improve the signal-to-clutter ratio by at least 10 dB and improve spatial resolution through a reduction of the imaged peak full-width at half maximum (FWHM) of about 40–50%. PMID:21331362

  20. Improved resolution and reduced clutter in ultra-wideband microwave imaging using cross-correlated back projection: experimental and numerical results.

    PubMed

    Jacobsen, S; Birkelund, Y

    2010-01-01

    Microwave breast cancer detection is based on the dielectric contrast between healthy and malignant tissue. This radar-based imaging method involves illumination of the breast with an ultra-wideband pulse. Detection of tumors within the breast is achieved by a selected focusing technique. Image formation algorithms are tailored to enhance tumor responses and reduce early-time and late-time clutter associated with skin reflections and the heterogeneity of breast tissue. In this contribution, we evaluate the performance of the so-called cross-correlated back projection imaging scheme by using a scanning system in phantom experiments. Supplementary numerical modeling based on commercial software is also presented. The phantom is synthetically scanned with a broadband elliptical antenna in a monostatic configuration. The respective signals are pre-processed by a data-adaptive RLS algorithm in order to remove artifacts caused by antenna reverberations and signal clutter. Successful detection of a 7 mm diameter cylindrical tumor immersed in a low-permittivity medium was achieved in all cases. Selecting the widely used delay-and-sum (DAS) beamforming algorithm as a benchmark, we show that correlation-based imaging methods improve the signal-to-clutter ratio by at least 10 dB and improve spatial resolution through a reduction of the imaged peak full-width at half maximum (FWHM) of about 40-50%.
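
    Records 19 and 20 are the same study indexed by PubMed Central and PubMed. For context on the DAS benchmark they compare against, a minimal monostatic delay-and-sum sketch follows (the geometry handling and the assumed propagation speed in breast tissue are illustrative, not the authors' setup):

      import numpy as np

      def das_image(signals, t, antennas, pixels, c=2e8):
          # Delay-and-sum: for each candidate pixel, sample every antenna's
          # (monostatic) signal at the round-trip delay, sum, and square.
          # `signals` is (n_antennas, n_samples); `t` holds sample times.
          image = np.zeros(len(pixels))
          for i, p in enumerate(pixels):
              total = 0.0
              for a, s in zip(antennas, signals):
                  tau = 2 * np.linalg.norm(p - a) / c   # round-trip delay
                  total += np.interp(tau, t, s)         # sample at that delay
              image[i] = total ** 2                     # coherent energy
          return image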

  1. Performances of the New Real Time Tsunami Detection Algorithm applied to tide gauges data

    NASA Astrophysics Data System (ADS)

    Chierici, F.; Embriaco, D.; Morucci, S.

    2017-12-01

    Real-time tsunami detection algorithms play a key role in any Tsunami Early Warning System. We have developed a new tsunami detection algorithm (TDA) based on real-time tide removal and real-time band-pass filtering of seabed pressure time series acquired by Bottom Pressure Recorders. The TDA greatly increases the tsunami detection probability, shortens the detection delay and enhances detection reliability with respect to the most widely used tsunami detection algorithm, while containing the computational cost. The algorithm is designed to be used also in autonomous early warning systems, with a set of input parameters and procedures that can be reconfigured in real time. We have also developed a methodology based on Monte Carlo simulations to test tsunami detection algorithms. The algorithm performance is estimated by defining and evaluating statistical parameters, namely the detection probability and the detection delay, which are functions of the tsunami amplitude and wavelength, and the rate of false alarms. In this work we present the performance of the TDA applied to tide gauge data. We have adapted the new tsunami detection algorithm and the Monte Carlo test methodology to tide gauges. Sea level data acquired by coastal tide gauges in different locations and environmental conditions have been used in order to consider realistic working scenarios in the test. We also present an application of the algorithm to the tsunami generated by the Tohoku earthquake of March 11, 2011, using data recorded by several tide gauges scattered over the Pacific area.
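
    The abstract describes the TDA only as tide removal plus band-pass filtering of pressure or sea-level records; a toy sketch of that idea follows. The cutoff frequencies and threshold are illustrative assumptions, and the zero-phase filtfilt used here is an offline stand-in for the paper's real-time (causal) filters:

      import numpy as np
      from scipy.signal import butter, filtfilt

      def tsunami_detect(sea_level, fs, low=1/3600, high=1/120, thresh=0.02):
          # Band-pass keeps the tsunami band: tidal components (below `low`)
          # and high-frequency wind waves (above `high`) are rejected.
          b, a = butter(4, [low, high], btype='bandpass', fs=fs)
          filtered = filtfilt(b, a, sea_level)
          # Flag samples whose filtered amplitude exceeds the threshold
          # (meters); a real system would also gate on persistence.
          return np.abs(filtered) > thresh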

  2. A Novel Zero Velocity Interval Detection Algorithm for Self-Contained Pedestrian Navigation System with Inertial Sensors

    PubMed Central

    Tian, Xiaochun; Chen, Jiabin; Han, Yongqiang; Shang, Jianyu; Li, Nan

    2016-01-01

    Zero velocity update (ZUPT) plays an important role in pedestrian navigation algorithms, with the premise that the zero velocity interval (ZVI) is detected accurately and effectively. A novel adaptive ZVI detection algorithm based on a smoothed pseudo Wigner–Ville distribution to remove multiple frequencies intelligently (SPWVD-RMFI) is proposed in this paper. The algorithm adopts the SPWVD-RMFI method to extract the pedestrian gait frequency and to calculate the optimal ZVI detection threshold in real time by establishing functional relationships between the thresholds and the gait frequency; the thresholds thus adjust adaptively with gait frequency, which improves the ZVI detection precision. A ZVI detection experiment shows that, compared with the traditional fixed-threshold ZVI detection method, the adaptive algorithm effectively reduces the false and missed detection rates of ZVI, indicating high detection precision and good robustness. Furthermore, pedestrian trajectory positioning experiments at different walking speeds were carried out to evaluate the influence of the algorithm on positioning precision. The results show that trajectories calculated from ZVIs detected by the adaptive algorithm achieve better performance. PMID:27669266
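
    For reference, the fixed-threshold baseline that the adaptive method improves on can be sketched as below; the thresholds and window length are illustrative, and the paper's contribution is precisely that its thresholds adapt to the gait frequency extracted via SPWVD rather than staying fixed:

      import numpy as np

      def detect_zvi(acc, gyro, acc_thresh=0.3, gyro_thresh=0.5, win=15):
          # Foot-mounted IMU stance detection: a sample is "zero velocity"
          # when the acceleration magnitude stays near gravity and the
          # angular rate stays small over a whole sliding window.
          acc_dev = np.abs(np.linalg.norm(acc, axis=1) - 9.81)
          gyro_mag = np.linalg.norm(gyro, axis=1)
          still = (acc_dev < acc_thresh) & (gyro_mag < gyro_thresh)
          # Require the entire window to be still, rejecting transient dips.
          counts = np.convolve(still.astype(float), np.ones(win), mode='same')
          return counts >= win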

  3. TimesVector: a vectorized clustering approach to the analysis of time series transcriptome data from multiple phenotypes.

    PubMed

    Jung, Inuk; Jo, Kyuri; Kang, Hyejin; Ahn, Hongryul; Yu, Youngjae; Kim, Sun

    2017-12-01

    Identifying biologically meaningful gene expression patterns from time series gene expression data is important for understanding the underlying biological mechanisms. To identify significantly perturbed gene sets between different phenotypes, analysis of time series transcriptome data requires consideration of both the time and sample dimensions. The analysis of such time series data thus seeks gene sets that exhibit similar or different expression patterns between two or more sample conditions, constituting three-dimensional data, i.e. gene-time-condition. The computational complexity of analyzing such data is very high, compared with the already NP-hard two-dimensional biclustering problem. Because of this challenge, traditional time series clustering algorithms are designed to capture co-expressed genes with similar expression patterns across sample conditions. We present a triclustering algorithm, TimesVector, specifically designed for clustering three-dimensional time series data to capture distinctively similar or different gene expression patterns between two or more sample conditions. TimesVector identifies clusters with distinctive expression patterns in three steps: (i) dimension reduction and clustering of time-condition concatenated vectors, (ii) post-processing clusters to detect similar and distinct expression patterns and (iii) rescuing genes from unclassified clusters. Using four sets of time series gene expression data, generated by both microarray and high-throughput sequencing platforms, we demonstrated that TimesVector successfully detected biologically meaningful clusters of high quality. TimesVector improved the clustering quality compared with existing triclustering tools, and only TimesVector successfully detected clusters with differential expression patterns across conditions. The TimesVector software is available at http://biohealth.snu.ac.kr/software/TimesVector/. sunkim.bioinfo@snu.ac.kr. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
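
    Step (i) as described, concatenating each gene's time x condition matrix into one vector before reduction and clustering, could be sketched as follows (the reduction method, component count, and cluster count are illustrative assumptions, not TimesVector's defaults):

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.decomposition import TruncatedSVD

      def cluster_time_condition(data, n_clusters=20, n_components=10):
          # `data` has shape (genes, conditions, timepoints): flatten each
          # gene's condition/time profile into one concatenated vector,
          # reduce its dimension, then cluster.
          genes = data.reshape(data.shape[0], -1)
          reduced = TruncatedSVD(n_components).fit_transform(genes)
          return KMeans(n_clusters, n_init=10).fit_predict(reduced)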

  4. Estimating the chance of success in IVF treatment using a ranking algorithm.

    PubMed

    Güvenir, H Altay; Misirli, Gizem; Dilbaz, Serdar; Ozdegirmenci, Ozlem; Demir, Berfu; Dilbaz, Berna

    2015-09-01

    In medicine, estimating the chance of success of a treatment is important in deciding whether to begin it. This paper focuses on the domain of in vitro fertilization (IVF), where estimating the outcome of a treatment is crucial in the decision to proceed, for both the clinicians and the infertile couple. IVF is a stressful and costly process for couples who want to have a baby, and if the initial evaluation indicates a low chance of pregnancy, the couple may decide not to start treatment. The aim of this study is twofold: first, to develop a technique that can be used to estimate the chance of success for a couple, and second, to determine the attributes, and their particular values, that affect the outcome of IVF treatment. We propose a new technique, called success estimation using a ranking algorithm (SERA), for estimating the success of a treatment using a ranking-based algorithm; the particular ranking algorithm used here is RIMARC. The performance of the new algorithm is compared with two well-known algorithms that assign class probabilities to query instances, the Naïve Bayes classifier and Random Forest. The comparison is done in terms of area under the ROC curve, accuracy and execution time, using tenfold stratified cross-validation. The results indicate that the proposed SERA algorithm has the potential to be used successfully to estimate the probability of success in medical treatment.

  5. Redundant and fault-tolerant algorithms for real-time measurement and control systems for weapon equipment.

    PubMed

    Li, Dan; Hu, Xiaoguang

    2017-03-01

    Because of the high availability requirements of weapon equipment, an in-depth study has been conducted on the real-time fault tolerance of the widely applied CompactPCI (CPCI) bus measurement and control system. A redundancy design method that uses heartbeat detection to connect the primary and alternate devices has been developed. To address the low successful-execution rate and the relatively large waste of time slices in the primary version of the task software, an improved real-time fault-tolerant scheduling algorithm is proposed, based on the Basic Checking available time Elimination idle time (BCE) algorithm and applying a single-neuron self-adaptive proportion-sum-differential (PSD) controller. The experimental validation results indicate that this system has excellent redundancy and fault tolerance, and the newly developed method can effectively improve system availability. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
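
    The heartbeat-based primary/alternate failover pattern the abstract mentions is generic; a minimal sketch follows (the probe and switch callables, polling period, and miss count are illustrative assumptions, not the paper's design):

      import time

      def monitor(primary_alive, switch_to_alternate, period=0.1, misses=3):
          # The alternate device polls the primary's heartbeat and takes
          # over only after several consecutive losses, so a single missed
          # beat does not trigger a spurious failover.
          lost = 0
          while True:
              if primary_alive():            # heartbeat probe (callable)
                  lost = 0
              else:
                  lost += 1
                  if lost >= misses:
                      switch_to_alternate()  # hand control to the standby
                      return
              time.sleep(period)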

  6. Methods for computational disease surveillance in infection prevention and control: Statistical process control versus Twitter's anomaly and breakout detection algorithms.

    PubMed

    Wiemken, Timothy L; Furmanek, Stephen P; Mattingly, William A; Wright, Marc-Oliver; Persaud, Annuradha K; Guinn, Brian E; Carrico, Ruth M; Arnold, Forest W; Ramirez, Julio A

    2018-02-01

    Although not all health care-associated infections (HAIs) are preventable, reducing HAIs through targeted intervention is key to a successful infection prevention program. To identify areas in need of targeted intervention, robust statistical methods must be used when analyzing surveillance data. The objective of this study was to compare and contrast statistical process control (SPC) charts with Twitter's anomaly and breakout detection algorithms. SPC and anomaly/breakout detection (ABD) charts were created for vancomycin-resistant Enterococcus, Acinetobacter baumannii, catheter-associated urinary tract infection, and central line-associated bloodstream infection data. Both SPC and ABD charts detected similar data points as anomalous/out of control on most charts. The vancomycin-resistant Enterococcus ABD chart detected an extra anomalous point that appeared to be higher than the same time period in prior years. Using a small subset of the central line-associated bloodstream infection data, the ABD chart was able to detect anomalies where the SPC chart was not. SPC charts and ABD charts both performed well, although ABD charts appeared to work better in the context of seasonal variation and autocorrelation. Because they account for common statistical issues in HAI data, ABD charts may be useful for practitioners for analysis of HAI surveillance data. Copyright © 2018 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
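
    As background on the SPC side of the comparison, a Shewhart-style out-of-control check is only a few lines; this textbook 3-sigma sketch is illustrative, while the study itself used proper SPC charts alongside Twitter's AnomalyDetection and BreakoutDetection methods:

      import numpy as np

      def spc_out_of_control(rates, sigma=3.0):
          # Flag surveillance periods whose infection rate falls outside
          # mean +/- sigma standard deviations; unlike the ABD methods,
          # this ignores seasonality and autocorrelation.
          rates = np.asarray(rates, dtype=float)
          center = rates.mean()
          limit = sigma * rates.std(ddof=1)
          return np.abs(rates - center) > limit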

  7. Pulse retrieval algorithm for interferometric frequency-resolved optical gating based on differential evolution.

    PubMed

    Hyyti, Janne; Escoto, Esmerando; Steinmeyer, Günter

    2017-10-01

    A novel algorithm for the ultrashort laser pulse characterization method of interferometric frequency-resolved optical gating (iFROG) is presented. Based on an evolutionary method, namely differential evolution, the algorithm can exploit all available information of an iFROG measurement to retrieve the complex electric field of a pulse. The retrieval is subjected to a series of numerical tests to prove the robustness of the algorithm against experimental artifacts and noise. These tests show that the integrated error-correction mechanisms of the iFROG method can successfully remove the effects of timing errors and spectrally varying detection efficiency. Moreover, the accuracy and noise resilience of the new algorithm are shown to outperform retrieval based on the generalized projections algorithm, which is widely used as the standard method in FROG retrieval. The differential evolution algorithm is further validated with experimental data, measured with unamplified three-cycle pulses from a mode-locked Ti:sapphire laser. When group delay dispersion is additionally introduced into the beam path, the retrieval results show excellent agreement with independent measurements from a commercial pulse measurement device based on spectral phase interferometry for direct electric-field retrieval. Further experimental tests with strongly attenuated pulses indicate the resilience of differential-evolution-based retrieval against massive measurement noise.
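
    The general shape of differential-evolution retrieval, parameterize the pulse, then let DE minimize the mismatch between measured and modeled traces, can be sketched as below. The forward model `compute_trace`, the parameter bounds, and the solver settings are assumptions for illustration, not the paper's implementation:

      import numpy as np
      from scipy.optimize import differential_evolution

      def retrieve_pulse(measured_trace, compute_trace, n_params):
          # DE searches the pulse parameter space (e.g. spectral phase
          # coefficients) for the candidate whose simulated iFROG trace
          # best matches the measurement in the RMS sense.
          def error(params):
              model = compute_trace(params)   # assumed forward model
              return np.sqrt(np.mean((measured_trace - model) ** 2))
          bounds = [(-10, 10)] * n_params     # illustrative search range
          result = differential_evolution(error, bounds, maxiter=500, tol=1e-8)
          return result.x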

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elmagarmid, A.K.

    The availability of distributed databases is directly affected by the timely detection and resolution of deadlocks. Consequently, mechanisms are needed to make deadlock detection algorithms resilient to failures. Presented first is a centralized algorithm that allows transactions to have multiple requests outstanding. Next, a new distributed deadlock detection algorithm (DDDA) is presented, using a global detector (GD) to detect global deadlocks and local detectors (LDs) to detect local deadlocks. This algorithm essentially identifies transaction-resource interactions that may cause global (multisite) deadlocks. Third, a deadlock detection algorithm utilizing a transaction-wait-for (TWF) graph is presented. It is a fully disjoint algorithm that allows multiple outstanding requests. The proposed algorithm can achieve improved overall performance by using multiple disjoint controllers coupled with the two-phase property while maintaining the simplicity of centralized schemes. Fourth, an algorithm that combines deadlock detection and avoidance is given. This algorithm uses concurrent transaction controllers and resource coordinators to achieve maximum distribution. The language of CSP is used to describe this algorithm. Finally, two efficient deadlock resolution protocols are given, along with some guidelines to be used in choosing a transaction for abortion.

  9. Development of compact slip detection sensor using dielectric elastomer

    NASA Astrophysics Data System (ADS)

    Choi, Jae-young; Hwang, Do-Yeon; Kim, Baek-chul; Moon, Hyungpil; Choi, Hyouk Ryeol; Koo, Ja Choon

    2015-04-01

    In this paper, we develop a resistive tactile sensor that can detect slip on the surface of the sensor structure. The presented sensor device has fingerprint-like structures that play a role similar to that of the human fingerprint. The newly developed resistive slip sensor uses acrylonitrile butadiene rubber (NBR) as a dielectric substrate and graphene as an electrode material. Slip can be measured because deformation of the sensor structure changes the resistance by forming a new conductive route. To manufacture the sensor, we developed a new imprint process that can produce sensors with micrometer-scale structures. To verify the effectiveness of the proposed slip detection, an experiment using a prototype of the resistive slip sensor was conducted together with a slip detection algorithm, and slip was successfully detected. We also discuss the slip detection properties.

  10. Anomaly Detection in Test Equipment via Sliding Mode Observers

    NASA Technical Reports Server (NTRS)

    Solano, Wanda M.; Drakunov, Sergey V.

    2012-01-01

    Nonlinear observers were originally developed based on the ideas of variable structure control, and for the purpose of detecting disturbances in complex systems. In this anomaly detection application, these observers were designed for estimating the distributed state of fluid flow in a pipe described by a class of advection equations. The observer algorithm uses collected data in a piping system to estimate the distributed system state (pressure and velocity along a pipe containing liquid gas propellant flow) using only boundary measurements. These estimates are then used to further estimate and localize possible anomalies such as leaks or foreign objects, and instrumentation metering problems such as incorrect flow meter orifice plate size. The observer algorithm has the following parts: a mathematical model of the fluid flow, observer control algorithm, and an anomaly identification algorithm. The main functional operation of the algorithm is in creating the sliding mode in the observer system implemented as software. Once the sliding mode starts in the system, the equivalent value of the discontinuous function in sliding mode can be obtained by filtering out the high-frequency chattering component. In control theory, "observers" are dynamic algorithms for the online estimation of the current state of a dynamic system by measurements of an output of the system. Classical linear observers can provide optimal estimates of a system state in case of uncertainty modeled by white noise. For nonlinear cases, the theory of nonlinear observers has been developed and its success is mainly due to the sliding mode approach. Using the mathematical theory of variable structure systems with sliding modes, the observer algorithm is designed in such a way that it steers the output of the model to the output of the system obtained via a variety of sensors, in spite of possible mismatches between the assumed model and actual system. The unique properties of sliding mode control allow not only control of the model internal states to the states of the real-life system, but also identification of the disturbance or anomaly that may occur.
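
    The core mechanism described, a discontinuous injection that creates a sliding mode, then low-pass filtering that injection to recover its equivalent value as a disturbance estimate, can be shown on a scalar toy system. The gains and the first-order model below are illustrative; the NASA application estimates a distributed (PDE) flow state, which this sketch does not capture:

      import numpy as np

      def sliding_mode_observer(y_meas, dt, a=1.0, L=5.0, tau=0.05):
          # Observer for x' = -a*x + d(t) with y = x: the injection
          # L*sign(y - x_hat) drives x_hat onto the sliding surface, and
          # low-pass filtering the injection removes the high-frequency
          # chattering, leaving an estimate of the disturbance d(t).
          n = len(y_meas)
          x_hat = np.zeros(n)
          d_hat = np.zeros(n)     # filtered injection = disturbance estimate
          for k in range(n - 1):
              inj = L * np.sign(y_meas[k] - x_hat[k])
              x_hat[k + 1] = x_hat[k] + dt * (-a * x_hat[k] + inj)
              d_hat[k + 1] = d_hat[k] + dt / tau * (inj - d_hat[k])
          return x_hat, d_hat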

  11. Comparison of public peak detection algorithms for MALDI mass spectrometry data analysis.

    PubMed

    Yang, Chao; He, Zengyou; Yu, Weichuan

    2009-01-06

    In mass spectrometry (MS) based proteomic data analysis, peak detection is an essential step for subsequent analysis. Recently, there has been significant progress in the development of various peak detection algorithms. However, neither a comprehensive survey nor an experimental comparison of these algorithms is yet available. The main objective of this paper is to provide such a survey and to compare the performance of single-spectrum-based peak detection methods. In general, a peak detection procedure can be decomposed into three consecutive parts: smoothing, baseline correction and peak finding. We first categorize existing peak detection algorithms according to the techniques used in the different phases; such a categorization reveals the differences and similarities among existing peak detection algorithms. Then, we choose five typical peak detection algorithms to conduct a comprehensive experimental study using both simulated data and real MALDI MS data. The comparison shows that the continuous wavelet-based algorithm provides the best average performance.
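
    The continuous wavelet-based approach the comparison favors is available off the shelf; a minimal sketch follows, where the width range (in sample units) is an illustrative assumption for a MALDI spectrum, not a value from the paper:

      import numpy as np
      from scipy.signal import find_peaks_cwt

      def detect_peaks(intensities):
          # CWT-based peak picking: peaks must persist across the given
          # range of wavelet scales, which implicitly handles smoothing
          # and baseline drift.
          return find_peaks_cwt(intensities, widths=np.arange(5, 60))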

  12. Sample-Based Motion Planning in High-Dimensional and Differentially-Constrained Systems

    DTIC Science & Technology

    2010-02-01

    [DTIC snippet; the extracted fragment includes figure-list residue naming "Reachable Set", "LittleDog Robot", and "Dog bounding up stairs"] ...a motion planning algorithm implemented on LittleDog, a quadruped robot. The motion planning algorithm successfully planned bounding trajectories over extremely...

  13. Automated detection of retinal layers from OCT spectral domain images of healthy eyes

    NASA Astrophysics Data System (ADS)

    Giovinco, Gaspare; Savastano, Maria Cristina; Ventre, Salvatore; Tamburrino, Antonello

    2015-06-01

    Optical coherence tomography (OCT) has become one of the most relevant diagnostic tools for retinal diseases. Besides being a non-invasive technique, one distinguished feature is its unique capability of providing (in vivo) cross-sectional views of the retina. Specifically, OCT images show the retinal layers. From the clinical point of view, the identification of the retinal layers opens new perspectives to study the correlation between morphological and functional aspects of the retinal tissue. The main contribution of this paper is a new method/algorithm for the automated segmentation of cross-sectional images of the retina of healthy eyes, obtained by means of spectral domain optical coherence tomography (SD-OCT). Specifically, the proposed segmentation algorithm provides the automated detection of different retinal layers. Tests on experimental SD-OCT scans performed by three different instruments/manufacturers have been successfully carried out and compared to a manual segmentation made by an independent ophthalmologist, showing the generality and the effectiveness of the proposed method.

  14. Automated detection of retinal layers from OCT spectral-domain images of healthy eyes

    NASA Astrophysics Data System (ADS)

    Giovinco, Gaspare; Savastano, Maria Cristina; Ventre, Salvatore; Tamburrino, Antonello

    2015-12-01

    Optical coherence tomography (OCT) has become one of the most relevant diagnostic tools for retinal diseases. Besides being a non-invasive technique, one distinguished feature is its unique capability of providing (in vivo) cross-sectional view of the retina. Specifically, OCT images show the retinal layers. From the clinical point of view, the identification of the retinal layers opens new perspectives to study the correlation between morphological and functional aspects of the retinal tissue. The main contribution of this paper is a new method/algorithm for the automated segmentation of cross-sectional images of the retina of healthy eyes, obtained by means of spectral-domain optical coherence tomography (SD-OCT). Specifically, the proposed segmentation algorithm provides the automated detection of different retinal layers. Tests on experimental SD-OCT scans performed by three different instruments/manufacturers have been successfully carried out and compared to a manual segmentation made by an independent ophthalmologist, showing the generality and the effectiveness of the proposed method.

  15. Superpixel guided active contour segmentation of retinal layers in OCT volumes

    NASA Astrophysics Data System (ADS)

    Bai, Fangliang; Gibson, Stuart J.; Marques, Manuel J.; Podoleanu, Adrian

    2018-03-01

    Retinal OCT image segmentation is a precursor to subsequent medical diagnosis by a clinician or machine learning algorithm. In the last decade, many algorithms have been proposed to detect retinal layer boundaries and simplify the image representation. Inspired by the recent success of superpixel methods for pre-processing natural images, we present a novel framework for segmentation of retinal layers in OCT volume data. In our framework, the region of interest (e.g. the fovea) is located using an adaptive-curve method. The cell layer boundaries are then robustly detected, first using 1D superpixels applied to A-scans, and then by fitting active contours in B-scan images. Thereafter the 3D cell layer surfaces are efficiently segmented from the volume data. The framework was tested on healthy-eye data and we show that it is capable of segmenting up to 12 layers. The experimental results demonstrate the effectiveness of the proposed method and indicate its robustness to low image resolution and intrinsic speckle noise.

  16. Hyperspectral Imaging and SPA-LDA Quantitative Analysis for Detection of Colon Cancer Tissue

    NASA Astrophysics Data System (ADS)

    Yuan, X.; Zhang, D.; Wang, Ch.; Dai, B.; Zhao, M.; Li, B.

    2018-05-01

    Hyperspectral imaging (HSI) has been demonstrated to provide a rapid, precise, and noninvasive method for cancer detection. However, because HSI produces large volumes of data, quantitative analysis is often necessary to distill information useful for distinguishing cancerous from normal tissue. To demonstrate that HSI with our proposed algorithm can make this distinction, we built a Vis-NIR HSI setup and acquired many spectral images of colon tissues, and then used a successive projection algorithm (SPA) to analyze the hyperspectral image data of the tissues. This was used to build an identification model based on linear discriminant analysis (LDA) using the relative reflectance values at the effective wavelengths. Other tissues were used as a prediction set to verify the reliability of the identification model. The results suggest that Vis-NIR hyperspectral images, together with the spectroscopic classification method, provide a new approach for reliable and safe diagnosis of colon cancer and could lead to advances in cancer diagnosis generally.
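
    The classification stage (LDA on the SPA-selected wavelengths) is straightforward to sketch; here the SPA band-selection step itself is omitted and `selected_bands` stands in for its output, so everything below is illustrative rather than the authors' pipeline:

      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def train_spa_lda(X, y, selected_bands):
          # X: rows are pixel spectra (relative reflectance), columns are
          # wavelengths; y: tissue labels (cancerous / normal). Only the
          # effective wavelengths chosen by SPA are used.
          lda = LinearDiscriminantAnalysis()
          lda.fit(X[:, selected_bands], y)
          return lda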

  17. A comparative study of sensor fault diagnosis methods based on observer for ECAS system

    NASA Astrophysics Data System (ADS)

    Xu, Xing; Wang, Wei; Zou, Nannan; Chen, Long; Cui, Xiaoli

    2017-03-01

    The performance and practicality of an electronically controlled air suspension (ECAS) system are highly dependent on the state information supplied by various sensors, but sensor faults occur frequently. Based on a nonlinear 3-DOF quarter-vehicle model, different methods of fault detection and isolation (FDI) are used to diagnose sensor faults in the ECAS system. The considered approaches include an extended Kalman filter (EKF) with a concise algorithm, a strong tracking filter (STF) with robust tracking ability, and a cubature Kalman filter (CKF) with high numerical precision. We use these three filters to design state observers for the ECAS system under typical sensor faults and noise. Results show that all three approaches can successfully detect and isolate faults despite the presence of environmental noise, although the FDI time delay and fault sensitivity differ between algorithms; compared with the EKF and STF, the CKF performs best at FDI of sensor faults for the ECAS system.
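
    Observer-based FDI of this kind typically reduces to a residual test on the filter innovation, whichever filter (EKF, STF, or CKF) produces it. A generic sketch follows; the chi-square gate value is an illustrative assumption, not from the paper:

      import numpy as np

      def residual_fault_check(z_meas, z_pred, S, chi2_gate=9.0):
          # Flag a sensor fault when the normalized innovation squared
          # exceeds a chi-square gate; `z_pred` and the innovation
          # covariance `S` come from the running filter.
          r = z_meas - z_pred
          nis = float(r @ np.linalg.solve(S, r))
          return nis > chi2_gate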

  18. Seismic waveform classification using deep learning

    NASA Astrophysics Data System (ADS)

    Kong, Q.; Allen, R. M.

    2017-12-01

    MyShake is a global smartphone seismic network that harnesses the power of crowdsourcing. It has an artificial neural network (ANN) algorithm running on the phone to distinguish earthquake motion from human activities recorded by the on-board accelerometer. Once the ANN detects earthquake-like motion, it sends a 5-min chunk of acceleration data back to the server for further analysis. The time-series data collected contain both earthquake data and human-activity data that the ANN confused. In this presentation, we will show the convolutional neural network (CNN) we built, under the umbrella of supervised learning, to identify the earthquake waveforms. The recorded motion waveforms can easily be treated as images, and by taking advantage of the power of CNNs at processing images, we achieved a very high success rate in selecting the earthquake waveforms. Since there are many more non-earthquake waveforms than earthquake waveforms, we also built an anomaly detection algorithm using the CNN. Both of these methods can easily be extended to other waveform classification problems.
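
    A minimal 1D CNN of the kind described, treating a fixed-length three-axis acceleration record as a 3-channel input, might look like the sketch below; the layer sizes, kernel widths, and record length are illustrative assumptions, not MyShake's architecture:

      import torch
      import torch.nn as nn

      class WaveformCNN(nn.Module):
          def __init__(self, n_samples=2500):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv1d(3, 16, kernel_size=9, padding=4), nn.ReLU(),
                  nn.MaxPool1d(4),
                  nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
                  nn.MaxPool1d(4),
              )
              # Two output logits: earthquake vs. human activity.
              self.classifier = nn.Linear(32 * (n_samples // 16), 2)

          def forward(self, x):              # x: (batch, 3, n_samples)
              h = self.features(x)
              return self.classifier(h.flatten(1))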

  19. Multimodal Neurodiagnostic Tool for Exploration Missions

    NASA Technical Reports Server (NTRS)

    Lee, Yong Jin

    2015-01-01

    Linea Research Corporation has developed a neurodiagnostic tool that detects behavioral stress markers for astronauts on long-duration space missions. Lightweight and compact, the device is unobtrusive and requires minimal time and effort for the crew to use. The system provides a real-time functional imaging of cortical activity during normal activities. In Phase I of the project, Linea Research successfully monitored cortical activity using multiparameter sensor modules. Using electroencephalography (EEG) and functional near-infrared spectroscopy signals, the company obtained photoplethysmography and electrooculography signals to compute the heart rate and frequency of eye movement. The company also demonstrated the functionality of an algorithm that automatically classifies the varying degrees of cognitive loading based on physiological parameters. In Phase II, Linea Research developed the flight-capable neurodiagnostic device. Worn unobtrusively on the head, the device detects and classifies neurophysiological markers associated with decrements in behavior state and cognition. An automated algorithm identifies key decrements and provides meaningful and actionable feedback to the crew and ground-based medical staff.

  20. Detection and Correction of Step Discontinuities in Kepler Flux Time Series

    NASA Technical Reports Server (NTRS)

    Kolodziejczak, J. J.; Morris, R. L.

    2011-01-01

    PDC 8.0 includes an implementation of a new algorithm to detect and correct step discontinuities appearing in roughly one of every 20 stellar light curves during a given quarter. The majority of such discontinuities are believed to result from high-energy particles (either cosmic or solar in origin) striking the photometer and causing permanent local changes (typically -0.5%) in quantum efficiency, though a partial exponential recovery is often observed [1]. Since these features, dubbed sudden pixel sensitivity dropouts (SPSDs), are uncorrelated across targets they cannot be properly accounted for by the current detrending algorithm. PDC detrending is based on the assumption that features in flux time series are due either to intrinsic stellar phenomena or to systematic errors and that systematics will exhibit measurable correlations across targets. SPSD events violate these assumptions and their successful removal not only rectifies the flux values of affected targets, but demonstrably improves the overall performance of PDC detrending [1].

  1. Artificial Intelligence in Medical Practice: The Question to the Answer?

    PubMed

    Miller, D Douglas; Brown, Eric W

    2018-02-01

    Computer science advances and ultra-fast computing speeds find artificial intelligence (AI) broadly benefitting modern society: forecasting weather, recognizing faces, detecting fraud, and deciphering genomics. AI's future role in medical practice remains an unanswered question. Machines (computers) learn to detect patterns not decipherable using biostatistics by processing massive datasets (big data) through layered mathematical models (algorithms). Correcting algorithm mistakes (training) adds to AI predictive model confidence. AI is being successfully applied for image analysis in radiology, pathology, and dermatology, with diagnostic speed exceeding, and accuracy paralleling, medical experts. While diagnostic confidence never reaches 100%, combining machines plus physicians reliably enhances system performance. Cognitive programs are impacting medical practice by applying natural language processing to read the rapidly expanding scientific literature and collate years of diverse electronic medical records. In this and other ways, AI may optimize the care trajectory of chronic disease patients, suggest precision therapies for complex illnesses, reduce medical errors, and improve subject enrollment into clinical trials. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. Collision Detection for Underwater ROV Manipulator Systems

    PubMed Central

    Rossi, Matija; Dooly, Gerard; Toal, Daniel

    2018-01-01

    Work-class ROVs equipped with robotic manipulators are extensively used for subsea intervention operations. Manipulators are teleoperated by human pilots relying on visual feedback from the worksite. Operating in a remote environment, with limited pilot perception and poor visibility, manipulator collisions which may cause significant damage are likely to happen. This paper presents a real-time collision detection algorithm for marine robotic manipulation. The proposed collision detection mechanism is developed, integrated into a commercial ROV manipulator control system, and successfully evaluated in simulations and experimental setup using a real industry standard underwater manipulator. The presented collision sensing solution has a potential to be a useful pilot assisting tool that can reduce the task load, operational time, and costs of subsea inspection, repair, and maintenance operations. PMID:29642396

  3. Collision Detection for Underwater ROV Manipulator Systems.

    PubMed

    Sivčev, Satja; Rossi, Matija; Coleman, Joseph; Omerdić, Edin; Dooly, Gerard; Toal, Daniel

    2018-04-06

    Work-class ROVs equipped with robotic manipulators are extensively used for subsea intervention operations. Manipulators are teleoperated by human pilots relying on visual feedback from the worksite. Operating in a remote environment, with limited pilot perception and poor visibility, manipulator collisions which may cause significant damage are likely to happen. This paper presents a real-time collision detection algorithm for marine robotic manipulation. The proposed collision detection mechanism is developed, integrated into a commercial ROV manipulator control system, and successfully evaluated in simulations and experimental setup using a real industry standard underwater manipulator. The presented collision sensing solution has a potential to be a useful pilot assisting tool that can reduce the task load, operational time, and costs of subsea inspection, repair, and maintenance operations.

  4. On-loom, real-time, noncontact detection of fabric defects by ultrasonic imaging.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chien, H. T.

    1998-09-08

    A noncontact, on-loom ultrasonic inspection technique was developed for real-time 100% defect inspection of fabrics. A prototype was built and tested successfully on loom. The system is compact, rugged, low cost, requires minimal maintenance, is not sensitive to fabric color and vibration, and can easily be adapted to current loom configurations. Moreover, it can detect defects in both the pick and warp directions. The system is capable of determining the size, location, and orientation of each defect. To further improve the system, air-coupled transducers with higher efficiency and sensitivity need to be developed. Advanced detection algorithms also need to be developed for better classification and categorization of defects in real-time.

  5. Recognition of three dimensional obstacles by an edge detection scheme. [for Mars roving vehicle using laser range finder

    NASA Technical Reports Server (NTRS)

    Reed, M. A.

    1974-01-01

    The need for an obstacle detection system on the Mars roving vehicle was assumed, and a practical scheme was investigated and simulated. The principal sensing device on this vehicle was taken to be a laser range finder. Both existing and original algorithms, ending with thresholding operations, were used to obtain the outlines of obstacles from the raw data of the laser scan. A theoretical analysis was carried out to show how a proper threshold value may be chosen. Computer simulations considered various mid-range boulders, for which the scheme was quite successful. The extension to other types of obstacles, such as craters, was considered. The special problems of bottom edge detection and scanning procedure are discussed.

  6. Efficient Geometric Probabilities of Multi-transiting Systems, Circumbinary Planets, and Exoplanet Mutual Events

    NASA Astrophysics Data System (ADS)

    Brakensiek, Joshua; Ragozzine, D.

    2012-10-01

    The transit method for discovering extra-solar planets relies on detecting regular diminutions of light from stars due to the shadows of planets passing in between the star and the observer. NASA's Kepler Mission has successfully discovered thousands of exoplanet candidates using this technique, including hundreds of stars with multiple transiting planets. In order to estimate the frequency of these valuable systems, our research concerns the efficient calculation of geometric probabilities for detecting multiple transiting extrasolar planets around the same parent star. In order to improve on previous studies that used numerical methods (e.g., Ragozzine & Holman 2010, Tremaine & Dong 2011), we have constructed an efficient, analytical algorithm which, given a collection of conjectured exoplanets orbiting a star, computes the probability that any particular group of exoplanets are transiting. The algorithm applies theorems of elementary differential geometry to compute the areas bounded by circular curves on the surface of a sphere (see Ragozzine & Holman 2010). The implemented algorithm is more accurate and orders of magnitude faster than previous algorithms, based on comparison with Monte Carlo simulations. Expanding this work, we have also developed semi-analytical methods for determining the frequency of exoplanet mutual events, i.e., the geometric probability two planets will transit each other (Planet-Planet Occultation) and the probability that this transit occurs simultaneously as they transit their star (Overlapping Double Transits; see Ragozzine & Holman 2010). The latter algorithm can also be applied to calculating the probability of observing transiting circumbinary planets (Doyle et al. 2011, Welsh et al. 2012). All of these algorithms have been coded in C and will be made publicly available. We will present and advertise these codes and illustrate their value for studying exoplanetary systems.
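
    For context, the single-planet geometric transit probability that such calculations build on is a textbook result (stated here for an orbit of semimajor axis a and eccentricity e; this formula is standard background, not taken from the abstract):

      P_{\mathrm{transit}} \;=\; \frac{R_\star + R_p}{a\,(1 - e^2)}

    The multi-transiting probabilities the authors compute reduce to products of such single-planet terms only in the limit where the planets' transit geometries are independent; their spherical-geometry algorithm exists precisely to handle the correlated case.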

  7. Automated detection of open magnetic field regions in EUV images

    NASA Astrophysics Data System (ADS)

    Krista, Larisza Diana; Reinard, Alysha

    2016-05-01

    Open magnetic regions on the Sun are either long-lived (coronal holes) or transient (dimmings) in nature, but both appear as dark regions in EUV images, so their detection can be done in a similar way. As coronal holes are often large and long-lived in comparison to dimmings, their detection is more straightforward. The Coronal Hole Automated Recognition and Monitoring (CHARM) algorithm detects coronal holes using EUV images and a magnetogram. The EUV images are used to identify dark regions, and the magnetogram allows us to determine whether a dark region is unipolar, a characteristic of coronal holes. There is no temporal sensitivity in this process, since coronal hole lifetimes span days to months. Dimming regions, however, emerge and disappear within hours; hence, the time and location of a dimming emergence need to be known to identify dimmings successfully and distinguish them from regular coronal holes. Currently, the Coronal Dimming Tracker (CoDiT) algorithm is semi-automated: it requires the dimming emergence time and location as input. With those inputs we can identify the dimming and track it through its lifetime. CoDiT has also been developed to allow the tracking of dimmings that split or merge, a typical feature of dimmings. The advantage of these particular algorithms is their ability to adapt to detecting different types of open field regions. For coronal hole detection, each full-disk solar image is processed individually to determine a threshold for the image; hence, we are not limited to a single pre-determined threshold. For dimming regions we also allow individual thresholds for each dimming, as they can differ substantially. This flexibility is necessary for a subjective analysis of the studied regions. These algorithms were developed with the goal of allowing us to better understand the processes that give rise to eruptive and non-eruptive open field regions. We aim to study how these regions evolve over time and what environmental factors influence their growth and decay over short and long time periods (days to solar cycles).

  8. Auto detection and segmentation of physical activities during a Timed-Up-and-Go (TUG) task in healthy older adults using multiple inertial sensors.

    PubMed

    Nguyen, Hung P; Ayachi, Fouaz; Lavigne-Pelletier, Catherine; Blamoutier, Margaux; Rahimi, Fariborz; Boissy, Patrick; Jog, Mandar; Duval, Christian

    2015-04-11

    Recently, much attention has been given to the use of inertial sensors for remote monitoring of individuals with limited mobility. However, the focus has been mostly on the detection of symptoms, not specific activities. The objective of the present study was to develop an automated recognition and segmentation algorithm based on inertial sensor data to identify common gross motor patterns during activities of daily living. A modified Timed-Up-and-Go (TUG) task was used, since it comprises four common daily living activities, Standing, Walking, Turning, and Sitting, all performed in a continuous fashion, resulting in six different segments during the task. Sixteen healthy older adults performed two trials of a 5 and 10 meter TUG task. They were outfitted with 17 inertial motion sensors covering each body segment. Data from the 10 meter TUG were used to identify pertinent sensors on the trunk, head, hip, knee, and thigh that provided suitable data for detecting and segmenting activities associated with the TUG. Raw data from the sensors were detrended to remove sensor drift, normalized, and band-pass filtered with optimal frequencies to reveal kinematic peaks that corresponded to different activities. Segmentation was accomplished by identifying the time stamps of the first minimum or maximum to the right and the left of these peaks. Segmentation time stamps were compared to results from two examiners visually segmenting the activities of the TUG. We were able to detect these activities with 100% sensitivity and specificity (n = 192) during the 10 meter TUG. The rate of success was subsequently confirmed in the 5 meter TUG (n = 192) without altering the parameters of the algorithm. When applying the segmentation algorithms to the 10 meter TUG, we were able to parse 100% of the transition points (n = 224) between different segments, with results as reliable as, and less variable than, visual segmentation performed by two independent examiners. The present study lays the foundation for the development of a comprehensive algorithm to detect and segment naturalistic activities using inertial sensors, in the hope of automatically evaluating motor performance within the detected tasks.
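
    The detrend / normalize / band-pass / peak-find pipeline described above can be sketched for one sensor channel as follows; the filter band, peak prominences, and boundary rule are illustrative assumptions, not the study's tuned values:

      import numpy as np
      from scipy.signal import butter, filtfilt, find_peaks

      def segment_activity(signal, fs, band=(0.1, 3.0)):
          # Remove linear drift, normalize, and band-pass filter so that
          # kinematic peaks corresponding to activities stand out.
          n = np.arange(len(signal))
          x = signal - np.polyval(np.polyfit(n, signal, 1), n)
          x = (x - x.mean()) / x.std()
          b, a = butter(2, band, btype='bandpass', fs=fs)
          x = filtfilt(b, a, x)
          # Segment boundaries: nearest local minima to the left and right
          # of each kinematic peak, as in the abstract's description.
          peaks, _ = find_peaks(x, prominence=1.0)
          valleys, _ = find_peaks(-x, prominence=0.5)
          bounds = [(valleys[valleys < p].max(initial=0),
                     valleys[valleys > p].min(initial=len(x) - 1))
                    for p in peaks]
          return peaks, bounds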

  9. Improved target detection algorithm using Fukunaga-Koontz transform and distance classifier correlation filter

    NASA Astrophysics Data System (ADS)

    Bal, A.; Alam, M. S.; Aslan, M. S.

    2006-05-01

    Often sensor ego-motion or fast target movement causes the target to go temporarily out of the field of view, leading to the reappearing-target detection problem in target tracking applications. Since the target leaves the current frame and reenters at a later frame, the reentry location and the variations in rotation, scale, and other 3D orientations of the target are not known, complicating detection. A detection algorithm has been developed using the Fukunaga-Koontz transform (FKT) and a distance classifier correlation filter (DCCF). The detection algorithm uses target and background information, extracted from training samples, to detect possible candidate target images. The detected candidate target images are then introduced into the second algorithm, the DCCF, called the clutter rejection module, to determine the target coordinates; once the target is detected, the tracking algorithm is initiated. The performance of the proposed FKT-DCCF based target detection algorithm has been tested using real-world forward looking infrared (FLIR) video sequences.

  10. Adaboost multi-view face detection based on YCgCr skin color model

    NASA Astrophysics Data System (ADS)

    Lan, Qi; Xu, Zhiyong

    2016-09-01

    The traditional AdaBoost face detection algorithm uses Haar-like features to train face classifiers, whose detection error rate is low in face regions; under complex backgrounds, however, the classifiers easily produce false detections in background regions whose gray-level distribution resembles a face, so the error detection rate of the traditional AdaBoost algorithm is high. As one of the most important features of a face, skin color in the YCgCr color space has good clustering properties, and non-face areas can be quickly excluded using a skin color model. Therefore, combining the advantages of the AdaBoost algorithm and skin color detection, this paper proposes an AdaBoost face detection method based on the YCgCr skin color model. Experiments show that, compared with the traditional algorithm, the proposed method significantly improves detection accuracy and reduces false detections.
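
    One plausible way to combine the two stages is to gate Haar-cascade detections with a skin mask, as sketched below. The Cg coefficients follow the BT.601 analogy (Cg proportional to G minus luma, scaled to a half range, offset by 128), the skin bounds are illustrative, and the cascade file path is an assumption; none of this is the paper's trained model:

      import cv2
      import numpy as np

      def skin_gated_faces(bgr, cascade_path='haarcascade_frontalface_default.xml'):
          b, g, r = cv2.split(bgr.astype(np.float32))
          cg = 128 - 0.362 * r + 0.5 * g - 0.138 * b   # green-difference channel
          cr = 128 + 0.5 * r - 0.419 * g - 0.081 * b   # BT.601 red-difference
          skin = (cg > 95) & (cg < 130) & (cr > 130) & (cr < 170)
          gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
          faces = cv2.CascadeClassifier(cascade_path).detectMultiScale(gray)
          keep = []
          for (x0, y0, w, h) in faces:
              # Reject detections whose window contains too few skin pixels.
              if skin[y0:y0 + h, x0:x0 + w].mean() > 0.4:
                  keep.append((x0, y0, w, h))
          return keep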

  11. Automated contour detection in X-ray left ventricular angiograms using multiview active appearance models and dynamic programming.

    PubMed

    Oost, Elco; Koning, Gerhard; Sonka, Milan; Oemrawsingh, Pranobe V; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2006-09-01

    This paper describes a new approach to the automated segmentation of X-ray left ventricular (LV) angiograms, based on active appearance models (AAMs) and dynamic programming. A coupling of shape and texture information between the end-diastolic (ED) and end-systolic (ES) frame was achieved by constructing a multiview AAM. Over-constraining of the model was compensated for by employing dynamic programming, integrating both intensity and motion features in the cost function. Two applications are compared: a semi-automatic method with manual model initialization, and a fully automatic algorithm. The first proved to be highly robust and accurate, demonstrating high clinical relevance. Based on experiments involving 70 patient data sets, the algorithm's success rate was 100% for ED and 99% for ES, with average unsigned border positioning errors of 0.68 mm for ED and 1.45 mm for ES. Calculated volumes were accurate and unbiased. The fully automatic algorithm, with intrinsically less user interaction was less robust, but showed a high potential, mostly due to a controlled gradient descent in updating the model parameters. The success rate of the fully automatic method was 91% for ED and 83% for ES, with average unsigned border positioning errors of 0.79 mm for ED and 1.55 mm for ES.

  12. A false-alarm aware methodology to develop robust and efficient multi-scale infrared small target detection algorithm

    NASA Astrophysics Data System (ADS)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2018-03-01

    False alarm rate and detection rate are still two contradictory metrics for infrared small target detection in an infrared search and track system (IRST), despite the development of new detection algorithms. In certain circumstances, not detecting true targets is more tolerable than detecting false items as true targets. Hence, considering background clutter and detector noise as the sources of the false alarm in an IRST system, in this paper, a false alarm aware methodology is presented to reduce false alarm rate while the detection rate remains undegraded. To this end, advantages and disadvantages of each detection algorithm are investigated and the sources of the false alarms are determined. Two target detection algorithms having independent false alarm sources are chosen in a way that the disadvantages of the one algorithm can be compensated by the advantages of the other one. In this work, multi-scale average absolute gray difference (AAGD) and Laplacian of point spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed through just four different scales, the desired algorithm has good capability for real-time implementation. Simulation results in term of signal to clutter ratio and background suppression factor on real and simulated images prove the effectiveness and the performance of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, our proposed methodology is expandable to any pair of detection algorithms which have different false alarm sources.

  13. Automated Recognition of 3D Features in GPIR Images

    NASA Technical Reports Server (NTRS)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature- extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/ pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a directed-graph data structure. Relative to past approaches, this multiaxis approach offers the advantages of more reliable detections, better discrimination of objects, and provision of redundant information, which can be helpful in filling gaps in feature recognition by one of the component algorithms. The image-processing class also includes postprocessing algorithms that enhance identified features to prepare them for further scrutiny by human analysts (see figure). Enhancement of images as a postprocessing step is a significant departure from traditional practice, in which enhancement of images is a preprocessing step.

  14. Detection of electrophysiology catheters in noisy fluoroscopy images.

    PubMed

    Franken, Erik; Rongen, Peter; van Almsick, Markus; ter Haar Romeny, Bart

    2006-01-01

    Cardiac catheter ablation is a minimally invasive medical procedure to treat patients with heart rhythm disorders. It is useful to know the positions of the catheters and electrodes during the intervention, e.g., for the automation of cardiac mapping. Our goal is therefore to develop a robust image analysis method that can detect the catheters in X-ray fluoroscopy images. Our method uses steerable tensor voting in combination with a catheter-specific multi-step extraction algorithm. Evaluation on clinical fluoroscopy images shows that extraction of the catheter tip in particular is successful and that the use of tensor voting accounts for a large increase in performance.

  15. Prostate implant reconstruction from C-arm images with motion-compensated tomosynthesis

    PubMed Central

    Dehghan, Ehsan; Moradi, Mehdi; Wen, Xu; French, Danny; Lobo, Julio; Morris, W. James; Salcudean, Septimiu E.; Fichtinger, Gabor

    2011-01-01

    Purpose: Accurate localization of prostate implants from several C-arm images is necessary for ultrasound-fluoroscopy fusion and intraoperative dosimetry. The authors propose a computational motion compensation method for tomosynthesis-based reconstruction that enables 3D localization of prostate implants from C-arm images despite C-arm oscillation and sagging. Methods: Five C-arm images are captured by rotating the C-arm around its primary axis, while measuring its rotation angle using a protractor or the C-arm joint encoder. The C-arm images are processed to obtain binary seed-only images from which a volume of interest is reconstructed. The motion compensation algorithm iteratively compensates for 2D translational motion of the C-arm by maximizing the number of voxels that project on a seed projection in all of the images. This obviates the need for full C-arm pose tracking, traditionally implemented using radio-opaque fiducials or external trackers. The proposed reconstruction method is tested in simulations, in a phantom study, and on ten patient data sets. Results: In a phantom implanted with 136 dummy seeds, the seed detection rate was 100% with a localization error of 0.86 ± 0.44 mm (mean ± STD) compared to CT. For patient data sets, a detection rate of 99.5% was achieved in approximately 1 min per patient. The reconstruction results for patient data sets were compared against an available matching-based reconstruction method and showed a relative localization difference of 0.5 ± 0.4 mm. Conclusions: The motion compensation method can successfully compensate for large C-arm motion without using radio-opaque fiducials or external trackers. Given its efficacy, its successful reconstruction rate, and its low computational burden, the algorithm is feasible for clinical use. PMID:21992346
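
    The core of the compensation step can be pictured as a small search over 2D shifts of each image's seed mask, keeping the shift under which the most reconstructed voxels land on detected seeds. The following is a simplified sketch under that reading, not the authors' algorithm; the grid search, names, and search range are assumptions.

      import numpy as np

      def best_shift(seed_mask, projected_points, search=5):
          """seed_mask: 2D binary seed image; projected_points: (N, 2) integer-like
          (row, col) projections of reconstructed voxels onto this image."""
          best, best_count = (0, 0), -1
          rows, cols = np.round(projected_points).astype(int).T
          for dy in range(-search, search + 1):
              for dx in range(-search, search + 1):
                  r, c = rows + dy, cols + dx
                  ok = (r >= 0) & (r < seed_mask.shape[0]) & \
                       (c >= 0) & (c < seed_mask.shape[1])
                  count = seed_mask[r[ok], c[ok]].sum()  # voxels hitting a seed
                  if count > best_count:
                      best, best_count = (dy, dx), count
          return best  # apply per image, then iterate until shifts stabilize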

  16. Vibration Based Sun Gear Damage Detection

    NASA Technical Reports Server (NTRS)

    Hood, Adrian; LaBerge, Kelsen; Lewicki, David; Pines, Darryll

    2013-01-01

    Seeded fault experiments were conducted on the planetary stage of an OH-58C helicopter transmission. Two vibration-based methods are discussed that isolate the dynamics of the sun gear from those of the planet gears, bearings, input spiral bevel stage, and other components in and around the gearbox. Three damaged sun gears, two spalled and one cracked, are the focus of the current work. A non-sequential vibration separation algorithm was developed and the resulting signals analyzed. The second method uses only the time synchronously averaged data but takes advantage of the signal/source mapping required for vibration separation. Both algorithms were successful in identifying the spall damage. Sun gear damage was confirmed by the presence of sun mesh groups. The sun tooth crack condition was inconclusive.

  17. Detection and classification of Breast Cancer in Wavelet Sub-bands of Fractal Segmented Cancerous Zones.

    PubMed

    Shirazinodeh, Alireza; Noubari, Hossein Ahmadi; Rabbani, Hossein; Dehnavi, Alireza Mehri

    2015-01-01

    Recent studies applying wavelet transforms and fractal modeling to mammograms for the detection of cancerous tissue indicate that microcalcifications and masses can be utilized for the study of the morphology and diagnosis of cancerous cases. It has been shown that fractal modeling, applied to a given image, can clearly discern cancerous zones from noncancerous areas. For fractal modeling, the original image is first segmented into appropriate fractal boxes, followed by identifying the fractal dimension of each windowed section using a computationally efficient two-dimensional box-counting algorithm. Furthermore, using appropriate wavelet sub-bands and image reconstruction based on modified wavelet coefficients, it is possible to arrive at enhanced features for the detection of cancerous zones. In this paper, we attempt to benefit from the advantages of both fractals and wavelets by introducing a new algorithm, named F1W2. The original image is first segmented into appropriate fractal boxes and the fractal dimension of each windowed section is extracted; then, by applying a maximum-level threshold on the fractal dimension matrix, the best-segmented boxes are selected. In the next step, the candidate cancerous zones are decomposed using a standard orthogonal wavelet transform with the db2 wavelet at three resolution levels, and after nullifying the wavelet coefficients of the image at the first scale and the low-frequency band of the third scale, the modified reconstructed image is used for the detection of breast cancer regions by applying an appropriate threshold. Our simulations indicate a detection accuracy of 90.9% for masses and 88.99% for microcalcifications using the F1W2 method. For classification of detected microcalcifications into benign and malignant cases, eight features are identified and utilized in a radial basis function neural network. Our simulation results indicate a classification accuracy of 92% using the F1W2 method.
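
    A standard 2D box-counting estimate of fractal dimension, of the kind applied to each windowed section, fits in a short function. This is a generic textbook version, not the authors' exact implementation; it assumes a square binary input whose side is a power of two.

      import numpy as np

      def box_counting_dimension(binary_img):
          """Estimate the fractal dimension of a square binary image (side = 2^k)."""
          n = binary_img.shape[0]
          sizes, counts = [], []
          s = n
          while s >= 2:
              # Count boxes of side s containing at least one foreground pixel.
              view = binary_img.reshape(n // s, s, n // s, s)
              occupied = view.any(axis=(1, 3)).sum()
              sizes.append(s)
              counts.append(max(occupied, 1))
              s //= 2
          # Slope of log(count) versus log(1/size) estimates the dimension.
          coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
          return coeffs[0]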

  18. Hybrid genetic algorithm with an adaptive penalty function for fitting multimodal experimental data: application to exchange-coupled non-Kramers binuclear iron active sites.

    PubMed

    Beaser, Eric; Schwartz, Jennifer K; Bell, Caleb B; Solomon, Edward I

    2011-09-26

    A Genetic Algorithm (GA) is a stochastic optimization technique based on the mechanisms of biological evolution. These algorithms have been successfully applied in many fields to solve a variety of complex nonlinear problems. While they have been used with some success in chemical problems such as fitting spectroscopic and kinetic data, many have avoided their use due to the unconstrained nature of the fitting process. In engineering, this problem is now being addressed through incorporation of adaptive penalty functions, but their transfer to other fields has been slow. This study updates the Nanakorrn Adaptive Penalty function theory, expanding its validity beyond maximization problems to minimization as well. The expanded theory, using a hybrid genetic algorithm with an adaptive penalty function, was applied to analyze variable temperature variable field magnetic circular dichroism (VTVH MCD) spectroscopic data collected on exchange coupled Fe(II)Fe(II) enzyme active sites. The data obtained are described by a complex nonlinear multimodal solution space with at least 6 to 13 interdependent variables and are costly to search efficiently. The use of the hybrid GA is shown to improve the probability of detecting the global optimum. It also provides large gains in computational and user efficiency. This method allows a full search of a multimodal solution space, greatly improving the quality and confidence in the final solution obtained, and can be applied to other complex systems such as fitting of other spectroscopic or kinetics data.
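
    The adaptive-penalty idea can be sketched in a minimal GA loop: the penalty weight is raised while the current best individual violates constraints and lowered once it is feasible. This is an illustration of the concept only; the study's Nanakorrn-based update rule, operators, and hybrid local search are more elaborate, and all names and constants below are assumptions.

      import numpy as np
      rng = np.random.default_rng(0)

      def penalized_cost(x, w, cost, violation):
          return cost(x) + w * violation(x)  # violation(x) >= 0, zero if feasible

      def ga_minimize(cost, violation, dim, pop=40, gens=200):
          P = rng.uniform(-5, 5, (pop, dim))
          w, best = 1.0, P[0]
          for _ in range(gens):
              f = np.array([penalized_cost(x, w, cost, violation) for x in P])
              order = np.argsort(f)
              best = P[order[0]]
              elite = P[order[: pop // 2]]
              # Blend crossover of two random elite parents plus Gaussian mutation.
              pa = elite[rng.integers(0, len(elite), pop)]
              pb = elite[rng.integers(0, len(elite), pop)]
              P = 0.5 * (pa + pb) + rng.normal(0, 0.1, (pop, dim))
              # Adapt the penalty weight to the feasibility of the current best.
              w = w * 1.5 if violation(best) > 0 else max(w / 1.5, 1e-3)
          return best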

  19. Fast localization of optic disc and fovea in retinal images for eye disease screening

    NASA Astrophysics Data System (ADS)

    Yu, H.; Barriga, S.; Agurto, C.; Echegaray, S.; Pattichis, M.; Zamora, G.; Bauman, W.; Soliz, P.

    2011-03-01

    Optic disc (OD) and fovea locations are two important anatomical landmarks in automated analysis of retinal disease in color fundus photographs. This paper presents a new, fast, fully automatic optic disc and fovea localization algorithm developed for diabetic retinopathy (DR) screening. The methodology comprises two steps. First, the OD location is identified using template matching and a directional matched filter. To reduce false positives due to bright areas of pathology, we exploit vessel characteristics inside the optic disc. The location of the fovea is estimated as the point of lowest matched filter response within a search area determined by the optic disc location. Second, optic disc segmentation is performed. Based on the detected optic disc location, a fast hybrid level-set algorithm, which combines region information and edge gradient to drive the curve evolution, is used to segment the optic disc boundary. Extensive evaluation was performed on 1200 images (Messidor) composed of 540 images of healthy retinas, 431 images with DR but no risk of macular edema (ME), and 229 images with DR and risk of ME. The OD localization methodology obtained a 98.3% success rate, while fovea localization achieved a 95% success rate. The average mean absolute distance (MAD) between the OD segmentation algorithm and the "gold standard" is 10.5% of the estimated OD radius. Qualitatively, 97% of the images achieved Excellent to Fair performance for OD segmentation. The segmentation algorithm performs well even on blurred images.

  20. New algorithm for detecting smaller retinal blood vessels in fundus images

    NASA Astrophysics Data System (ADS)

    LeAnder, Robert; Bidari, Praveen I.; Mohammed, Tauseef A.; Das, Moumita; Umbaugh, Scott E.

    2010-03-01

    About 4.1 million Americans suffer from diabetic retinopathy. To help automatically diagnose various stages of the disease, a new blood-vessel-segmentation algorithm based on spatial high-pass filtering was developed to automatically segment blood vessels, including the smaller ones, with low noise. Methods: Image database: Forty 584 x 565-pixel images were collected from the DRIVE image database. Preprocessing: Green-band extraction was used to obtain better contrast, which facilitated better visualization of retinal blood vessels. A spatial high-pass filter of mask size 11 was applied. A histogram stretch was performed to enhance contrast. A median filter was applied to mitigate noise. At this point, the gray-scale image was converted to a binary image using a binary thresholding operation. Then, a NOT operation was performed by gray-level value inversion between 0 and 255. Postprocessing: The resulting image was AND-ed with its corresponding ring mask to remove the outer-ring (lens-edge) artifact. At this point, the above algorithm steps had extracted most of the major and minor vessels, with some intersections and bifurcations missing. Vessel segments were reintegrated using the Hough transform. Results: After applying the Hough transform, both the average peak SNR and the RMS error improved by 10%. Pratt's Figure of Merit (PFM) decreased by 6%. Those averages were better than those of [1] by 10-30%. Conclusions: The new algorithm successfully preserved the details of smaller blood vessels and should prove successful as a segmentation step for automatically identifying diseases that affect retinal blood vessels.
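
    The preprocessing chain above maps directly onto standard SciPy primitives, as in the sketch below. Masking and the Hough reintegration step are omitted; the mask size 11 follows the text, while the other parameter values are guesses rather than the authors' settings.

      import numpy as np
      from scipy.ndimage import median_filter, uniform_filter

      def segment_vessels(rgb, thresh=128):
          green = rgb[..., 1].astype(float)             # green-band extraction
          highpass = green - uniform_filter(green, 11)  # spatial high-pass, mask 11
          # Histogram stretch to the full 0-255 range.
          stretched = 255 * (highpass - highpass.min()) / (np.ptp(highpass) + 1e-9)
          smoothed = median_filter(stretched, size=3)   # mitigate noise
          binary = smoothed > thresh                    # binary thresholding
          return ~binary                                # NOT: gray-level inversion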

  1. Stochastic model search with binary outcomes for genome-wide association studies

    PubMed Central

    Malovini, Alberto; Puca, Annibale A; Bellazzi, Riccardo

    2012-01-01

    Objective The spread of case–control genome-wide association studies (GWASs) has stimulated the development of new variable selection methods and predictive models. We introduce a novel Bayesian model search algorithm, Binary Outcome Stochastic Search (BOSS), which addresses the model selection problem when the number of predictors far exceeds the number of binary responses. Materials and methods Our method is based on a latent variable model that links the observed outcomes to the underlying genetic variables. A Markov Chain Monte Carlo approach is used for model search and to evaluate the posterior probability of each predictor. Results BOSS is compared with three established methods (stepwise regression, logistic lasso, and elastic net) in a simulated benchmark. Two real case studies are also investigated: a GWAS on the genetic bases of longevity, and the type 2 diabetes study from the Wellcome Trust Case Control Consortium. Simulations show that BOSS achieves higher precisions than the reference methods while preserving good recall rates. In both experimental studies, BOSS successfully detects genetic polymorphisms previously reported to be associated with the analyzed phenotypes. Discussion BOSS outperforms the other methods in terms of F-measure on simulated data. In the two real studies, BOSS successfully detects biologically relevant features, some of which are missed by univariate analysis and the three reference techniques. Conclusion The proposed algorithm is an advance in the methodology for model selection with a large number of features. Our simulated and experimental results showed that BOSS proves effective in detecting relevant markers while providing a parsimonious model. PMID:22534080

  2. Tsunami Detection by High Frequency Radar Beyond the Continental Shelf: II. Extension of Time Correlation Algorithm and Validation on Realistic Case Studies

    NASA Astrophysics Data System (ADS)

    Grilli, Stéphan T.; Guérin, Charles-Antoine; Shelby, Michael; Grilli, Annette R.; Moran, Patrick; Grosdidier, Samuel; Insua, Tania L.

    2017-08-01

    In past work, tsunami detection algorithms (TDAs) have been proposed and successfully applied to offline tsunami detection, based on analyzing tsunami currents inverted from high-frequency (HF) radar Doppler spectra. With this method, however, the detection of small and short-lived tsunami currents in the most distant radar ranges is challenging due to conflicting requirements on the Doppler spectra integration time and resolution. To circumvent this issue, in Part I of this work, we proposed an alternative TDA, referred to as the time correlation (TC) TDA, that does not require inverting currents but instead detects changes in patterns of correlations of radar signal time series measured in pairs of cells located along the main directions of tsunami propagation (predicted by geometric optics theory); such correlations are maximized when one signal is time-shifted by the pre-computed long wave propagation time. We initially validated the TC-TDA with numerical simulations of idealized tsunamis in a simplified geometry. Here, we further develop, extend, and apply the TC algorithm to more realistic tsunami case studies. These are performed in the area west of Vancouver Island, BC, where Ocean Networks Canada recently deployed an HF radar (in Tofino, BC) to detect tsunamis from far- and near-field sources, up to a 110 km range. Two case studies are considered, both simulated using long wave models: (1) a far-field seismic tsunami and (2) a near-field landslide tsunami. Pending the availability of radar data, a radar signal simulator is parameterized for the Tofino HF radar characteristics, in particular its signal-to-noise ratio with range, and combined with the simulated tsunami currents to produce realistic time series of backscattered radar signal from a dense grid of cells. Numerical experiments show that the arrival of a tsunami causes a clear change in radar signal correlation patterns, even at the most distant ranges beyond the continental shelf, thus making early tsunami detection possible with the TC-TDA. Based on these results, we discuss how the new algorithm could be combined with the standard methods proposed earlier, based on Doppler analysis, to develop a new HF-radar tsunami detection system that could increase warning time. This will be the object of future work, which will be based on actual, rather than simulated, radar data.
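
    The central quantity of the TC approach is a lagged correlation between two cells along the propagation direction, with the lag set to the pre-computed long-wave travel time. A minimal sketch of that quantity follows, with illustrative names and an arbitrary detection threshold; the full TC-TDA tracks changes in the whole pattern of such correlations.

      import numpy as np

      def lagged_correlation(sig_far, sig_near, travel_time_s, dt_s):
          lag = max(1, int(round(travel_time_s / dt_s)))
          a, b = sig_far[:-lag], sig_near[lag:]   # shift by the propagation time
          a = (a - a.mean()) / (a.std() + 1e-12)
          b = (b - b.mean()) / (b.std() + 1e-12)
          return float(np.mean(a * b))            # normalized correlation

      def tc_flag(sig_far, sig_near, travel_time_s, dt_s, threshold=0.5):
          # A jump of the lagged correlation above the threshold flags an arrival.
          return lagged_correlation(sig_far, sig_near, travel_time_s, dt_s) > threshold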

  3. Real-time image-processing algorithm for markerless tumour tracking using X-ray fluoroscopic imaging.

    PubMed

    Mori, S

    2014-05-01

    To ensure accuracy in respiratory-gated treatment, X-ray fluoroscopic imaging is used to detect tumour position in real time. Detection accuracy is strongly dependent on image quality, particularly on positional differences between the patient and the treatment couch. We developed a new algorithm to improve the quality of images obtained in X-ray fluoroscopic imaging and report preliminary results. Two oblique X-ray fluoroscopic images were acquired using a dynamic flat panel detector (DFPD) for two patients with lung cancer. A weighting factor was applied to each column of the DFPD image, because most anatomical structures, as well as the treatment couch and port cover edge, are aligned in the superior-inferior direction when the patient lies on the treatment couch. The weighting factors for the respective columns were varied until the standard deviation of the pixel values within the image region was minimized. Once the weighting factors were calculated, the quality of the DFPD image was improved by applying the factors to multiframe images. Applying the image-processing algorithm substantially improved image quality and increased image contrast; the treatment couch and irradiation port edge, which are unrelated to the patient's position, were removed. The average image-processing time was 1.1 ms, showing that this fast image processing can be applied to real-time tumour-tracking systems. These findings indicate that this image-processing algorithm improves image quality in patients with lung cancer and successfully removes objects not related to the patient. Our image-processing algorithm might be useful in improving gated-treatment accuracy.
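
    The per-column weighting can be pictured as flattening column-wise intensity structure (couch edge, port cover) that is constant down each column. The sketch below replaces the paper's iterative standard-deviation minimization with a closed-form stand-in that simply equalizes column means; the names and this simplification are assumptions.

      import numpy as np

      def column_weights(frame):
          """One multiplicative weight per column, equalizing column means."""
          col_means = frame.mean(axis=0)
          return frame.mean() / (col_means + 1e-9)

      def apply_weights(frames, w):
          # Weights computed once are reused across the multiframe sequence.
          return [f * w[np.newaxis, :] for f in frames]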

  4. Modified automatic R-peak detection algorithm for patients with epilepsy using a portable electrocardiogram recorder.

    PubMed

    Jeppesen, J; Beniczky, S; Fuglsang Frederiksen, A; Sidenius, P; Johansen, P

    2017-07-01

    Earlier studies have shown that short-term heart rate variability (HRV) analysis of the ECG is promising for the detection of epileptic seizures. A precise and accurate automatic R-peak detection algorithm is a necessity for real-time, continuous measurement of HRV in a portable ECG device. We used the portable CE-marked ePatch® heart monitor to record the ECG of 14 patients who were enrolled in the video-EEG long-term monitoring unit for clinical workup of epilepsy. Recordings of the first 7 patients were used as the training set for the R-peak detection algorithm, and the recordings of the last 7 patients (467.6 recording hours) were used to test its performance. We aimed to modify an existing QRS-detection algorithm into a more precise R-peak detection algorithm, to avoid the jitter that Q- and S-peaks can create in the tachogram and the errors this causes in short-term HRV analysis. The proposed R-peak detection algorithm showed a high sensitivity (Se = 99.979%) and positive predictive value (P+ = 99.976%), comparable with a previously published QRS-detection algorithm for the ePatch® ECG device on the same dataset. The novel R-peak detection algorithm, designed to avoid jitter, has very high sensitivity and positive predictive value and is thus a suitable tool for robust, fast, real-time HRV analysis in patients with epilepsy, creating the possibility of real-time seizure detection for these patients.
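
    The jitter problem and its remedy are easy to illustrate: given coarse fiducials from any QRS detector, snap each to the true R-peak (the local extremum) inside a small window, so the tachogram is built from R-peaks rather than Q- or S-deflections. This generic refinement sketch is an illustration, not the paper's algorithm; the window length is an assumption.

      import numpy as np

      def refine_r_peaks(ecg, qrs_idx, fs, half_win_s=0.05):
          """Snap coarse QRS locations to the local maximum within +/- half_win_s."""
          w = int(half_win_s * fs)
          peaks = []
          for i in qrs_idx:
              lo, hi = max(0, i - w), min(len(ecg), i + w + 1)
              peaks.append(lo + int(np.argmax(ecg[lo:hi])))  # R is the window max
          return np.array(peaks)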

  5. Robust online tracking via adaptive samples selection with saliency detection

    NASA Astrophysics Data System (ADS)

    Yan, Jia; Chen, Xi; Zhu, QiuPing

    2013-12-01

    Online tracking has been shown to be successful in tracking previously unknown objects. However, two factors lead to the drift problem in online tracking: how to select correctly labeled samples even when the target locations are inaccurate, and how to handle confusors that have features similar to the target's. In this article, we propose a robust online tracking algorithm with adaptive sample selection based on saliency detection to overcome the drift problem. To address the degradation of the classifiers by misaligned samples, we introduce saliency detection into the tracking problem: saliency maps and the strong classifiers are combined to extract the most reliable positive samples. Our approach employs a simple yet effective saliency detection algorithm based on image spectral residual analysis. Furthermore, instead of using random patches as negative samples, we propose a principled selection criterion in which both saliency confidence and similarity are considered, with the benefit that confusors in the surrounding background are incorporated into the classifier update process before drift occurs. The tracking task is formulated as binary classification within an online boosting framework. Experimental results on several challenging video sequences demonstrate the accuracy and stability of our tracker.
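
    The spectral residual saliency map referenced above (in the style of Hou and Zhang) is compact enough to sketch in NumPy. The filter sizes and smoothing sigma below are typical values rather than the authors' settings.

      import numpy as np
      from scipy.ndimage import uniform_filter, gaussian_filter

      def spectral_residual_saliency(gray):
          F = np.fft.fft2(gray)
          log_amp = np.log(np.abs(F) + 1e-9)
          phase = np.angle(F)
          residual = log_amp - uniform_filter(log_amp, size=3)  # spectral residual
          sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
          return gaussian_filter(sal, sigma=2.5)  # smooth for a cleaner map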

  6. A near-infrared fluorescence-based surgical navigation system imaging software for sentinel lymph node detection

    NASA Astrophysics Data System (ADS)

    Ye, Jinzuo; Chi, Chongwei; Zhang, Shuang; Ma, Xibo; Tian, Jie

    2014-02-01

    In vivo detection of the sentinel lymph node (SLN) is vital in breast cancer surgery. This paper presents new imaging software for a near-infrared fluorescence-based surgical navigation system (SNS), developed by our research group for SLN detection surgery. The software is built on the fluorescence-based surgical navigation hardware system (SNHS) developed in our lab and is designed specifically for intraoperative imaging and postoperative data analysis. It consists of the following modules: the control module, the image grabbing module, the real-time display module, the data saving module, and the image processing module. Several algorithms were designed to achieve the required performance, for example an image registration algorithm based on correlation matching. Key features of the software include: setting the control parameters of the SNS; acquiring, displaying, and storing the intraoperative imaging data automatically in real time; and analyzing and processing the saved image data. The software has been used to successfully detect the SLNs in 21 breast cancer patients. In the near future, we plan to improve the software's performance so that it can be used extensively for clinical purposes.

  7. Global Optimality of the Successive Maxbet Algorithm.

    ERIC Educational Resources Information Center

    Hanafi, Mohamed; ten Berge, Jos M. F.

    2003-01-01

    It is known that the Maxbet algorithm, which is an alternative to the method of generalized canonical correlation analysis and Procrustes analysis, may converge to local maxima. Discusses an eigenvalue criterion that is sufficient, but not necessary, for global optimality of the successive Maxbet algorithm. (SLD)

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Guanghua, E-mail: yan@ufl.edu; Li, Jonathan; Huang, Yin

    Purpose: To propose a simple model to explain the origin of ghost markers in marker-based optical tracking systems (OTS) and to develop retrospective strategies to detect and eliminate ghost markers. Methods: In marker-based OTS, ghost markers are virtual markers created due to the cross-talk between the two camera sensors, which can lead to system execution failure or inaccuracy in patient tracking. As a result, the users have to limit the number of markers and avoid certain marker configurations to reduce the chances of ghost markers. In this work, the authors propose retrospective strategies to detect and eliminate ghost markers. The two camera sensors were treated as mathematical points in space. The authors identified the coplanar within limit (CWL) condition as the necessary condition for ghost marker occurrence. A simple ghost marker detection method was proposed based on the model. Ghost marker elimination was achieved through pattern matching: a ghost marker-free reference set was matched with the optical marker set observed by the OTS; unmatched optical markers were eliminated as either ghost markers or misplaced markers. The pattern matching problem was formulated as a constraint satisfaction problem (using pairwise distances as constraints) and solved with an iterative backtracking algorithm. Wildcard markers were introduced to address missing or misplaced markers. An experiment was designed to measure the sensor positions and the limit for the CWL condition. The ghost marker detection and elimination algorithms were verified with samples collected from a five-marker jig and a nine-marker anthropomorphic phantom, rotated with the treatment couch from −60° to +60°. The accuracy of the pattern matching algorithm was further validated with marker patterns from 40 patients who underwent stereotactic body radiotherapy (SBRT). For this purpose, a synthetic optical marker pattern was created for each patient by introducing ghost markers, marker position uncertainties, and marker displacement. Results: The sensor positions and the limit for the CWL condition were measured with excellent reproducibility (standard deviation ≤ 0.39 mm). The ghost marker detection algorithm had perfect detection accuracy for both the jig (1544 samples) and the anthropomorphic phantom (2045 samples). Pattern matching was successful for all samples from both phantoms as well as the 40 patient marker patterns. Conclusions: The authors proposed a simple model to explain the origin of ghost markers and identified the CWL condition as the necessary condition for ghost marker occurrence. The retrospective ghost marker detection and elimination algorithms guarantee complete ghost marker elimination while providing the users with maximum flexibility in selecting the number of markers and their configuration to meet their clinical needs.
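
    The pairwise-distance constraint satisfaction can be sketched as a small backtracking search: assign reference markers to observed markers one at a time, pruning any assignment that breaks a pairwise distance within tolerance; observed markers left unmatched are rejected as ghost or misplaced markers. The wildcard-marker handling is omitted here, and all names and the tolerance are illustrative.

      import math

      def match(reference, observed, tol=2.0):
          """Return, for each reference marker, the index of its observed match,
          or None if no distance-consistent assignment exists."""
          def extend(assigned):
              i = len(assigned)
              if i == len(reference):
                  return assigned
              for j in range(len(observed)):
                  if j in assigned:
                      continue
                  # Pairwise-distance constraints against markers already matched.
                  if all(abs(math.dist(reference[i], reference[k]) -
                             math.dist(observed[j], observed[assigned[k]])) <= tol
                         for k in range(i)):
                      result = extend(assigned + [j])
                      if result is not None:
                          return result
              return None
          return extend([])

      # Observed indices absent from the returned assignment are candidates for
      # elimination as ghost or misplaced markers.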

  9. Anomaly Detection and Life Pattern Estimation for the Elderly Based on Categorization of Accumulated Data

    NASA Astrophysics Data System (ADS)

    Mori, Taketoshi; Ishino, Takahito; Noguchi, Hiroshi; Shimosaka, Masamichi; Sato, Tomomasa

    2011-06-01

    We propose a life pattern estimation method and an anomaly detection method for elderly people living alone. In our observation system, pyroelectric sensors are deployed in the house and the person's activities are measured continuously in order to grasp the person's life pattern. The data are transferred successively to the operation center and displayed to the nurses there, who then decide whether the data indicate an anomaly. In the system, people whose life features resemble each other are categorized into the same group; anomalies that occurred in the past are shared within the group and utilized in the anomaly detection algorithm. This algorithm is based on an "anomaly score" computed from the activeness of the person, which is approximately proportional to the frequency of sensor responses per minute. The "anomaly score" is calculated from the difference between the present activeness and its long-term average: the score is positive if the present activeness is higher than the past average, and negative if it is lower. If the score exceeds a certain threshold, an anomaly event is flagged. Moreover, we developed an activity estimation algorithm that estimates the residents' basic activities, such as rising and going out. The estimates are shown to the nurses together with the residents' "anomaly scores", so that the nurses can understand the residents' health conditions by combining these two sources of information.
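
    The score itself, as described, is just the present activeness minus its long-term average, flagged against a threshold. A minimal sketch, with the window length and threshold as assumptions:

      import numpy as np

      def anomaly_score(daily_activeness, long_term_days=30):
          """daily_activeness: 1D array, most recent value last."""
          baseline = np.mean(daily_activeness[-long_term_days:])
          return daily_activeness[-1] - baseline  # positive: more active than usual

      def is_anomaly(daily_activeness, threshold=3.0):
          # Large deviations in either direction flag an anomaly event.
          return abs(anomaly_score(daily_activeness)) > threshold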

  10. Detecting trace components in liquid chromatography/mass spectrometry data sets with two-dimensional wavelets

    NASA Astrophysics Data System (ADS)

    Compton, Duane C.; Snapp, Robert R.

    2007-09-01

    TWiGS (two-dimensional wavelet transform with generalized cross validation and soft thresholding) is a novel algorithm for denoising liquid chromatography-mass spectrometry (LC-MS) data for use in "shot-gun" proteomics. Proteomics, the study of all proteins in an organism, is an emerging field that has already proven successful for drug and disease discovery in humans. A number of constraints limit the effectiveness of LC-MS for shot-gun proteomics: the chemical signals are typically weak, and the data sets are computationally large. Most algorithms also suffer from researcher-driven bias, making the results irreproducible and unusable by other laboratories. We thus introduce a new algorithm, TWiGS, that removes electrical (additive white) and chemical noise from LC-MS data sets. TWiGS is developed as a true two-dimensional algorithm, which operates in the time-frequency domain and minimizes the amount of researcher bias. It is based on the traditional discrete wavelet transform (DWT), which allows for fast and reproducible analysis. The separable two-dimensional DWT decomposition is paired with generalized cross validation and soft thresholding. The choice among the Haar, Coiflet-6, and Daubechies-4 wavelets, and the number of decomposition levels, are determined from observed experimental results. Using a synthetic LC-MS data model, TWiGS accurately retains key characteristics of the peaks in both the time and m/z domains, and can detect peaks from noise of the same intensity. TWiGS is applied to angiotensin I and II samples run on an LC-ESI-TOF-MS (liquid chromatography-electrospray ionization time-of-flight mass spectrometer) to demonstrate its utility for the detection of low-lying peaks obscured by noise.
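
    A separable 2D DWT denoise with soft thresholding, in the spirit of TWiGS, can be sketched with PyWavelets. For brevity the generalized cross validation step that TWiGS uses to pick the threshold is replaced here by a fixed universal threshold, so this is an assumption-laden skeleton rather than the published algorithm.

      import numpy as np
      import pywt

      def dwt2_denoise(img, wavelet="db4", level=3):
          coeffs = pywt.wavedec2(img, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # noise estimate
          t = sigma * np.sqrt(2 * np.log(img.size))            # universal threshold
          out = [coeffs[0]] + [
              tuple(pywt.threshold(c, t, mode="soft") for c in detail)
              for detail in coeffs[1:]
          ]
          return pywt.waverec2(out, wavelet)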

  11. Low-complexity R-peak detection in ECG signals: a preliminary step towards ambulatory fetal monitoring.

    PubMed

    Rooijakkers, Michiel; Rabotti, Chiara; Bennebroek, Martijn; van Meerbergen, Jef; Mischi, Massimo

    2011-01-01

    Non-invasive fetal health monitoring during pregnancy has become increasingly important. Recent advances in signal processing technology have enabled fetal monitoring during pregnancy using abdominal ECG recordings. Ubiquitous ambulatory monitoring for continuous fetal health measurement is, however, still unfeasible due to the computational complexity of noise-robust solutions. In this paper, an R-peak detection algorithm for ambulatory use is proposed, as part of a fetal ECG detection algorithm. The proposed algorithm is optimized to reduce computational complexity while increasing R-peak detection quality compared to existing R-peak detection schemes. Validation of the algorithm is performed on two manually annotated datasets, the MIT/BIH Arrhythmia database and an in-house abdominal database. Both R-peak detection quality and computational complexity are compared to state-of-the-art algorithms described in the literature. With detection error rates of 0.22% and 0.12% on the MIT/BIH Arrhythmia and in-house databases, respectively, the quality of the proposed algorithm is comparable to the best state-of-the-art algorithms, at a reduced computational complexity.

  12. A Method of Sky Ripple Residual Nonuniformity Reduction for a Cooled Infrared Imager and Hardware Implementation

    PubMed Central

    Li, Yiyang; Jin, Weiqi; Li, Shuo; Zhang, Xu; Zhu, Jin

    2017-01-01

    Cooled infrared detector arrays always suffer from undesired ripple residual nonuniformity (RNU) in sky scene observations. Ripple RNU seriously degrades imaging quality, especially for small target detection, and is difficult to eliminate using calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified temporal high-pass nonuniformity correction algorithm using fuzzy scene classification. The fuzzy scene classification is designed to control the correction threshold so that the algorithm can remove ripple RNU without degrading scene details. We test the algorithm on a real infrared sequence, comparing it to several well-established methods. The results show that the algorithm has clear advantages over the tested methods in terms of detail preservation and convergence speed for ripple RNU correction. Furthermore, we demonstrate our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA), which has two advantages: (1) low resource consumption; and (2) small hardware delay (less than 10 image rows). It has been successfully applied in an actual system. PMID:28481320
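
    The core of a temporal high-pass correction is a per-pixel recursive low-pass filter whose output (the slowly varying fixed pattern) is subtracted from each frame. In the sketch below, the paper's fuzzy scene classifier that gates the update is reduced to a simple boolean flag; the class name and the learning rate are assumptions.

      import numpy as np

      class TemporalHighPassNUC:
          def __init__(self, shape, alpha=0.01):
              self.lowpass = np.zeros(shape)  # per-pixel fixed-pattern estimate
              self.alpha = alpha

          def correct(self, frame, scene_is_static):
              if scene_is_static:  # only learn the pattern when the scene is calm,
                  # which is what the fuzzy classification gates in the paper
                  self.lowpass += self.alpha * (frame - self.lowpass)
              return frame - self.lowpass + self.lowpass.mean()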

  13. Biological engineering applications of feedforward neural networks designed and parameterized by genetic algorithms.

    PubMed

    Ferentinos, Konstantinos P

    2005-09-01

    Two neural network (NN) applications in the field of biological engineering are developed, designed, and parameterized by an evolutionary method based on genetic algorithms: a fault detection NN model and a predictive modeling NN system. An indirect or 'weak specification' representation was used for encoding NN topologies and training parameters into the genes of the genetic algorithm (GA). This approach requires some a priori knowledge of the network topology demands of specific application cases, so that the infinite search space of the problem is limited to a reasonable degree. Both one-hidden-layer and two-hidden-layer network architectures were explored by the GA. In addition to the network architecture, each gene of the GA encoded the type of activation function in the hidden and output nodes of the NN and the type of minimization algorithm used by the backpropagation algorithm for training the NN. Both models achieved satisfactory performance, and the GA proved to be a powerful tool that can successfully replace the problematic trial-and-error approach usually used for these tasks.

  14. Parallel optimization of signal detection in active magnetospheric signal injection experiments

    NASA Astrophysics Data System (ADS)

    Gowanlock, Michael; Li, Justin D.; Rude, Cody M.; Pankratius, Victor

    2018-05-01

    Signal detection and extraction requires substantial manual parameter tuning at different stages in the processing pipeline. Time-series data depends on domain-specific signal properties, necessitating unique parameter selection for a given problem. The large potential search space makes this parameter selection process time-consuming and subject to variability. We introduce a technique to search and prune such parameter search spaces in parallel and select parameters for time series filters using breadth- and depth-first search strategies to increase the likelihood of detecting signals of interest in the field of magnetospheric physics. We focus on studying geomagnetic activity in the extremely and very low frequency ranges (ELF/VLF) using ELF/VLF transmissions from Siple Station, Antarctica, received at Québec, Canada. Our technique successfully detects amplified transmissions and achieves substantial speedup performance gains as compared to an exhaustive parameter search. We present examples where our algorithmic approach reduces the search from hundreds of seconds down to less than 1 s, with a ranked signal detection in the top 99th percentile, thus making it valuable for real-time monitoring. We also present empirical performance models quantifying the trade-off between the quality of signal recovered and the algorithm response time required for signal extraction. In the future, improved signal extraction in scenarios like the Siple experiment will enable better real-time diagnostics of conditions of the Earth's magnetosphere for monitoring space weather activity.

  15. Ship heading and velocity analysis by wake detection in SAR images

    NASA Astrophysics Data System (ADS)

    Graziano, Maria Daniela; D'Errico, Marco; Rufino, Giancarlo

    2016-11-01

    With the aim of ship-route estimation, a wake detection method is developed and applied to COSMO/SkyMed and TerraSAR-X Stripmap SAR images over the Gulf of Naples, Italy. To mitigate the intrinsic limitations of threshold logic, the algorithm identifies wake features according to hydrodynamic theory. A post-detection validation phase classifies the features as real wake structures by means of merit indexes defined in the intensity domain. After wake reconstruction, ship heading is evaluated from the turbulent wake direction, and ship velocity is estimated by both the azimuth shift technique and the Kelvin pattern wavelength. The method is tested on 34 ship wakes identified by visual inspection in both HH and VV images at different incidence angles. For all wakes, no missed detections are reported and at least the turbulent wake and one narrow-V wake are correctly identified, with ship heading successfully estimated. The azimuth shift method is also applied to estimate velocity for the 10 ships whose routes have sufficient angular separation from the satellite ground track. In one case, ship velocity is successfully estimated with both methods, which agree within 14%.

  16. PCA method for automated detection of mispronounced words

    NASA Astrophysics Data System (ADS)

    Ge, Zhenhao; Sharma, Sudhendu R.; Smith, Mark J. T.

    2011-06-01

    This paper presents a method for detecting mispronunciations with the aim of improving Computer Assisted Language Learning (CALL) tools used by foreign language learners. The algorithm is based on Principal Component Analysis (PCA). It is hierarchical, with each successive step refining the estimate to classify the test word as either mispronounced or correct. Preprocessing before detection, such as normalization and time-scale modification, is implemented to guarantee uniformity of the feature vectors input to the detection system. The performance of various features, including spectrograms and Mel-Frequency Cepstral Coefficients (MFCCs), is compared and evaluated. Best results were obtained using MFCCs, achieving up to 99% accuracy in word verification and 93% in native/non-native classification. Compared with Hidden Markov Models (HMMs), which are used pervasively in recognition applications, this approach is computationally efficient and effective when training data is limited.
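
    One common way to realize a PCA-based verification step is to model correctly pronounced words by their principal subspace and flag test words with large reconstruction error. The sketch below follows that reading and is not the paper's exact pipeline; MFCC extraction, the hierarchy of refinement steps, and all names and thresholds are assumptions.

      import numpy as np
      from sklearn.decomposition import PCA

      def fit_reference(features_correct, n_components=10):
          """features_correct: (n_samples, n_dims) feature vectors of correct words."""
          pca = PCA(n_components=n_components)
          pca.fit(features_correct)
          return pca

      def is_mispronounced(pca, feature, threshold):
          # Project onto the principal subspace and back; a large residual means
          # the test word lies far from the space of correct pronunciations.
          recon = pca.inverse_transform(pca.transform(feature[None, :]))[0]
          return np.linalg.norm(feature - recon) > threshold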

  17. NASA airborne radar wind shear detection algorithm and the detection of wet microbursts in the vicinity of Orlando, Florida

    NASA Technical Reports Server (NTRS)

    Britt, Charles L.; Bracalente, Emedio M.

    1992-01-01

    The algorithms used in the NASA experimental wind shear radar system for detection, characterization, and determination of windshear hazard are discussed. The performance of the algorithms in the detection of wet microbursts near Orlando is presented. Various suggested algorithms that are currently being evaluated using the flight test results from Denver and Orlando are reviewed.

  18. Identifying relevant hyperspectral bands using Boruta: a temporal analysis of water hyacinth biocontrol

    NASA Astrophysics Data System (ADS)

    Agjee, Na'eem Hoosen; Ismail, Riyad; Mutanga, Onisimo

    2016-10-01

    Water hyacinth plants (Eichhornia crassipes) are threatening freshwater ecosystems throughout Africa. Neochetina spp. weevils are seen as an effective biocontrol solution to the proliferation of this invasive alien plant. We aimed to determine whether multitemporal hyperspectral data could be used to detect the efficacy of the biocontrol agent. The random forest (RF) algorithm was used to classify variable infestation levels over 6 weeks using: (1) all the hyperspectral bands, (2) bands selected by the recursive feature elimination (RFE) algorithm, and (3) bands selected by the Boruta algorithm. Results showed that the RF model using all the bands produced low classification errors (12.50% to 32.29%) for all 6 weeks. However, the RF model using Boruta-selected bands produced lower classification errors (8.33% to 15.62%) than the RF model using all the bands or the bands selected by the RFE algorithm (11.25% to 21.25%) for all 6 weeks, highlighting the utility of Boruta as an all-relevant band selection algorithm. The all-relevant bands selected by Boruta included: 352, 754, 770, 771, 775, 781, 782, 783, 786, and 789 nm. It was concluded that RF coupled with the Boruta band-selection algorithm can be used to undertake multitemporal monitoring of variable infestation levels on water hyacinth plants.
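
    For readers who want to reproduce this kind of band selection, the community BorutaPy package wraps the Boruta procedure around any scikit-learn estimator. The settings below are illustrative, not the study's; X and y are hypothetical placeholders for the per-pixel spectra and infestation labels.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from boruta import BorutaPy

      rf = RandomForestClassifier(n_jobs=-1, max_depth=5)
      selector = BorutaPy(rf, n_estimators="auto", random_state=1)
      # selector.fit(X, y)   # X: (n_pixels, n_bands) array, y: infestation labels
      # relevant_bands = np.where(selector.support_)[0]  # all-relevant band indices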

  19. Segmentation of neuronal structures using SARSA (λ)-based boundary amendment with reinforced gradient-descent curve shape fitting.

    PubMed

    Zhu, Fei; Liu, Quan; Fu, Yuchen; Shen, Bairong

    2014-01-01

    The segmentation of structures in electron microscopy (EM) images is very important for neurobiological research. Low-resolution neuronal EM images are noisy and generally offer few features for segmentation; conventional approaches to identifying neuron structure in EM images are therefore unsuccessful. We therefore present a multi-scale fused structure boundary detection algorithm. In the algorithm, we first generate an EM image Gaussian pyramid; at each level of the pyramid, we use the Laplacian of Gaussian (LoG) function to obtain structure boundaries; and we finally assemble the detected boundaries using a fusion algorithm to obtain a combined neuron structure image. Since the obtained neuron structures usually have gaps, we put forward a reinforcement learning-based boundary amendment method to connect the gaps in the detected boundaries. We use a SARSA (λ)-based curve traveling and amendment approach derived from reinforcement learning to repair the incomplete curves. In this algorithm, a moving point starts from one end of an incomplete curve and walks through the image, with decisions supervised by the approximated curve model, with the aim of minimizing the connection cost until the gap is closed. Our approach provided stable and efficient structure segmentation. Test results on 30 EM images from ISBI 2012 indicated that both of our approaches, i.e., with and without boundary amendment, performed better than six conventional boundary detection approaches. In particular, after amendment, the Rand error and warping error, the most important performance measurements in structure segmentation, were reduced to very low values. Comparison with the benchmark method of ISBI 2012 and recently developed methods also indicates that our method performs better for the accurate identification of substructures in EM images and is therefore useful for identifying imaging features related to brain diseases.

  20. Segmentation of Neuronal Structures Using SARSA (λ)-Based Boundary Amendment with Reinforced Gradient-Descent Curve Shape Fitting

    PubMed Central

    Zhu, Fei; Liu, Quan; Fu, Yuchen; Shen, Bairong

    2014-01-01

    The segmentation of structures in electron microscopy (EM) images is very important for neurobiological research. Low-resolution neuronal EM images are noisy and generally offer few features for segmentation; conventional approaches to identifying neuron structure in EM images are therefore unsuccessful. We therefore present a multi-scale fused structure boundary detection algorithm. In the algorithm, we first generate an EM image Gaussian pyramid; at each level of the pyramid, we use the Laplacian of Gaussian (LoG) function to obtain structure boundaries; and we finally assemble the detected boundaries using a fusion algorithm to obtain a combined neuron structure image. Since the obtained neuron structures usually have gaps, we put forward a reinforcement learning-based boundary amendment method to connect the gaps in the detected boundaries. We use a SARSA (λ)-based curve traveling and amendment approach derived from reinforcement learning to repair the incomplete curves. In this algorithm, a moving point starts from one end of an incomplete curve and walks through the image, with decisions supervised by the approximated curve model, with the aim of minimizing the connection cost until the gap is closed. Our approach provided stable and efficient structure segmentation. Test results on 30 EM images from ISBI 2012 indicated that both of our approaches, i.e., with and without boundary amendment, performed better than six conventional boundary detection approaches. In particular, after amendment, the Rand error and warping error, the most important performance measurements in structure segmentation, were reduced to very low values. Comparison with the benchmark method of ISBI 2012 and recently developed methods also indicates that our method performs better for the accurate identification of substructures in EM images and is therefore useful for identifying imaging features related to brain diseases. PMID:24625699

  1. Communication of ALS Patients by Detecting Event-Related Potential

    NASA Astrophysics Data System (ADS)

    Kanou, Naoyuki; Sakuma, Kenji; Nakashima, Kenji

    Amyotrophic Lateral Sclerosis (ALS) patients are unable to successfully communicate their desires, although their mental capacity is the same as that of non-affected persons. The authors therefore focus on the Event-Related Potential (ERP), which shows the strongest response to target visual and auditory stimuli. P300, one component of the ERP, is a positive potential that is elicited when the subject focuses attention on stimuli that appear infrequently. In this paper, the authors focused on the P200 and N200 components, in addition to P300, for their great improvement in the rate of correct judgment in the target word-specific experiment; hence the authors propose an algorithm that specifies target words by detecting these three components. Ten healthy subjects and an ALS patient underwent an experiment in which a target word out of five was specified by this algorithm. The rates of correct judgment in nine of the ten healthy subjects were more than 90.0%, with a highest rate of 99.7%; the highest rate for the ALS patient was 100.0%. Through these results, the authors found that ALS patients may be able to communicate their desires to those around them through detection of the ERP components (P200, N200, and P300).

  2. Experimental determination of pore shapes using phase retrieval from q-space NMR diffraction

    NASA Astrophysics Data System (ADS)

    Demberg, Kerstin; Laun, Frederik Bernd; Bertleff, Marco; Bachert, Peter; Kuder, Tristan Anselm

    2018-05-01

    This paper presents an approach to solving the phase problem in nuclear magnetic resonance (NMR) diffusion pore imaging, a method that allows imaging the shape of arbitrary closed pores filled with an NMR-detectable medium for investigation of the microstructure of biological tissue and porous materials. Classical q-space imaging composed of two short diffusion-encoding gradient pulses yields, analogously to diffraction experiments, the modulus squared of the Fourier transform of the pore image which entails an inversion problem: An unambiguous reconstruction of the pore image requires both magnitude and phase. Here the phase information is recovered from the Fourier modulus by applying a phase retrieval algorithm. This allows omitting experimentally challenging phase measurements using specialized temporal gradient profiles. A combination of the hybrid input-output algorithm and the error reduction algorithm was used with dynamically adapting support (shrinkwrap extension). No a priori knowledge on the pore shape was fed to the algorithm except for a finite pore extent. The phase retrieval approach proved successful for simulated data with and without noise and was validated in phantom experiments with well-defined pores using hyperpolarized xenon gas.
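
    The hybrid input-output loop at the heart of such reconstructions (Fienup-style) fits in a few lines. The sketch below uses a fixed support and omits the shrinkwrap update and the alternation with error reduction, so it is a simplified skeleton rather than the published procedure; `modulus` stands for the measured Fourier magnitude.

      import numpy as np

      def hio_reconstruct(modulus, support, n_iter=500, beta=0.9):
          g = np.random.rand(*modulus.shape) * support  # random start in support
          for _ in range(n_iter):
              G = np.fft.fft2(g)
              Gp = modulus * np.exp(1j * np.angle(G))   # impose measured modulus
              gp = np.real(np.fft.ifft2(Gp))
              # HIO update: accept gp inside the support, push it down elsewhere.
              g = np.where(support, gp, g - beta * gp)
          return g * support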

  3. Experimental determination of pore shapes using phase retrieval from q-space NMR diffraction.

    PubMed

    Demberg, Kerstin; Laun, Frederik Bernd; Bertleff, Marco; Bachert, Peter; Kuder, Tristan Anselm

    2018-05-01

    This paper presents an approach to solving the phase problem in nuclear magnetic resonance (NMR) diffusion pore imaging, a method that allows imaging the shape of arbitrary closed pores filled with an NMR-detectable medium for investigation of the microstructure of biological tissue and porous materials. Classical q-space imaging composed of two short diffusion-encoding gradient pulses yields, analogously to diffraction experiments, the modulus squared of the Fourier transform of the pore image which entails an inversion problem: An unambiguous reconstruction of the pore image requires both magnitude and phase. Here the phase information is recovered from the Fourier modulus by applying a phase retrieval algorithm. This allows omitting experimentally challenging phase measurements using specialized temporal gradient profiles. A combination of the hybrid input-output algorithm and the error reduction algorithm was used with dynamically adapting support (shrinkwrap extension). No a priori knowledge on the pore shape was fed to the algorithm except for a finite pore extent. The phase retrieval approach proved successful for simulated data with and without noise and was validated in phantom experiments with well-defined pores using hyperpolarized xenon gas.

  4. Autoregressive-moving-average hidden Markov model for vision-based fall prediction-An application for walker robot.

    PubMed

    Taghvaei, Sajjad; Jahanandish, Mohammad Hasan; Kosuge, Kazuhiro

    2017-01-01

    Population aging requires providing the elderly with safe and dependable assistive technologies for daily life activities. Improving fall detection algorithms can play a major role in achieving this goal. This article proposes a real-time fall prediction algorithm based on visual data of a user of a walking assistive system, acquired from a depth sensor. In the absence of a coupled dynamic model of the human and the assistive walker, a hybrid "system identification-machine learning" approach is used. An autoregressive-moving-average (ARMA) model is fitted on the time-series walking data to forecast the upcoming states, and a hidden Markov model (HMM) based classifier is built on top of the ARMA model to predict falling in the upcoming time frames. The performance of the algorithm is evaluated through experiments with four subjects, including an experienced physiotherapist, using a walker robot in five different falling scenarios: fall forward, fall down, fall back, fall left, and fall right. The algorithm successfully predicts falls at a rate of 84.72%.
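
    The forecasting half of such a pipeline can be sketched with statsmodels: an ARMA model (ARIMA with d = 0) fitted to a univariate walking-state series, producing the upcoming states that a downstream classifier would consume. The order and names are illustrative; the paper's HMM classifier is omitted.

      from statsmodels.tsa.arima.model import ARIMA

      def forecast_states(series, steps=5, order=(4, 0, 2)):
          model = ARIMA(series, order=order)   # d = 0 gives a plain ARMA model
          fitted = model.fit()
          return fitted.forecast(steps=steps)  # predicted upcoming states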

  5. Online Adaboost-Based Parameterized Methods for Dynamic Distributed Network Intrusion Detection.

    PubMed

    Hu, Weiming; Gao, Jun; Wang, Yanguo; Wu, Ou; Maybank, Stephen

    2014-01-01

    Current network intrusion detection systems lack adaptability to frequently changing network environments. Furthermore, intrusion detection in the new distributed architectures is now a major requirement. In this paper, we propose two online Adaboost-based intrusion detection algorithms. In the first, a traditional online Adaboost process is used with decision stumps as weak classifiers. In the second, an improved online Adaboost process is proposed, with online Gaussian mixture models (GMMs) as weak classifiers. We further propose a distributed intrusion detection framework in which a local parameterized detection model is constructed in each node using the online Adaboost algorithm. A global detection model is constructed in each node by combining the local parametric models using a small number of samples in the node; this combination is achieved with an algorithm based on particle swarm optimization (PSO) and support vector machines (SVMs). The global model in each node is used to detect intrusions. Experimental results show that the improved online Adaboost process with GMMs obtains a higher detection rate and a lower false alarm rate than the traditional online Adaboost process using decision stumps, and both algorithms outperform existing intrusion detection algorithms. It is also shown that our PSO- and SVM-based algorithm effectively combines the local detection models into the global model in each node; the global model in a node can handle the intrusion types found in other nodes without sharing the samples of those intrusion types.

  6. Machine Learning Methods for Attack Detection in the Smart Grid.

    PubMed

    Ozay, Mete; Esnaola, Inaki; Yarman Vural, Fatos Tunay; Kulkarni, Sanjeev R; Poor, H Vincent

    2016-08-01

    Attack detection problems in the smart grid are posed as statistical learning problems for different attack scenarios in which the measurements are observed in batch or online settings. In this approach, machine learning algorithms are used to classify measurements as being either secure or attacked. An attack detection framework is provided to exploit any available prior knowledge about the system and surmount constraints arising from the sparse structure of the problem in the proposed approach. Well-known batch and online learning algorithms (supervised and semisupervised) are employed with decision- and feature-level fusion to model the attack detection problem. The relationships between statistical and geometric properties of attack vectors employed in the attack scenarios and learning algorithms are analyzed to detect unobservable attacks using statistical learning methods. The proposed algorithms are examined on various IEEE test systems. Experimental analyses show that machine learning algorithms can detect attacks with performances higher than attack detection algorithms that employ state vector estimation methods in the proposed attack detection framework.

  7. Low-complexity R-peak detection for ambulatory fetal monitoring.

    PubMed

    Rooijakkers, Michael J; Rabotti, Chiara; Oei, S Guid; Mischi, Massimo

    2012-07-01

    Non-invasive fetal health monitoring during pregnancy is becoming increasingly important because of the increasing number of high-risk pregnancies. Despite recent advances in signal-processing technology, which have enabled fetal monitoring during pregnancy using abdominal electrocardiogram (ECG) recordings, ubiquitous fetal health monitoring is still unfeasible due to the computational complexity of noise-robust solutions. In this paper, an ECG R-peak detection algorithm for ambulatory R-peak detection is proposed, as part of a fetal ECG detection algorithm. The proposed algorithm is optimized to reduce computational complexity, without reducing the R-peak detection performance compared to the existing R-peak detection schemes. Validation of the algorithm is performed on three manually annotated datasets. With a detection error rate of 0.23%, 1.32% and 9.42% on the MIT/BIH Arrhythmia and in-house maternal and fetal databases, respectively, the detection rate of the proposed algorithm is comparable to the best state-of-the-art algorithms, at a reduced computational complexity.

  8. A joint swarm intelligence algorithm for multi-user detection in MIMO-OFDM system

    NASA Astrophysics Data System (ADS)

    Hu, Fengye; Du, Dakun; Zhang, Peng; Wang, Zhijun

    2014-11-01

    In multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) systems, the traditional multi-user detection (MUD) algorithms usually used to suppress multiple-access interference struggle to balance detection performance against algorithmic complexity. To solve this problem, this paper proposes a joint swarm intelligence algorithm, called Ant Colony and Particle Swarm Optimisation (AC-PSO), that integrates the particle swarm optimisation (PSO) and ant colony optimisation (ACO) algorithms. Simulation results show that, with low computational complexity, MUD for the MIMO-OFDM system based on the AC-PSO algorithm achieves detection performance comparable to the maximum likelihood algorithm. Thus, the proposed AC-PSO algorithm provides a satisfactory trade-off between computational complexity and detection performance.

  9. Use of the preconditioned conjugate gradient algorithm as a generic solver for mixed-model equations in animal breeding applications.

    PubMed

    Tsuruta, S; Misztal, I; Strandén, I

    2001-05-01

    Utility of the preconditioned conjugate gradient algorithm with a diagonal preconditioner for solving mixed-model equations in animal breeding applications was evaluated with 16 test problems. The problems included single- and multiple-trait analyses, with data on beef, dairy, and swine ranging from small examples to national data sets. Multiple-trait models considered low and high genetic correlations. Convergence was based on relative differences between left- and right-hand sides. The ordering of equations was fixed effects followed by random effects, with no special ordering within random effects. The preconditioned conjugate gradient program implemented with double precision converged for all models. However, when implemented in single precision, the preconditioned conjugate gradient algorithm did not converge for seven large models. The preconditioned conjugate gradient and successive overrelaxation algorithms were subsequently compared for 13 of the test problems. The preconditioned conjugate gradient algorithm was easy to implement with the iteration on data for general models. However, successive overrelaxation requires specific programming for each set of models. On average, the preconditioned conjugate gradient algorithm converged in three times fewer rounds of iteration than successive overrelaxation. With straightforward implementations, programs using the preconditioned conjugate gradient algorithm may be two or more times faster than those using successive overrelaxation. However, programs using the preconditioned conjugate gradient algorithm would use more memory than would comparable implementations using successive overrelaxation. Extensive optimization of either algorithm can influence rankings. The preconditioned conjugate gradient implemented with iteration on data, a diagonal preconditioner, and in double precision may be the algorithm of choice for solving mixed-model equations when sufficient memory is available and ease of implementation is essential.
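
    A compact sketch of conjugate gradient with a diagonal (Jacobi) preconditioner on a synthetic symmetric positive-definite system standing in for mixed-model equations; the convergence check loosely follows the relative-difference criterion described above:

    ```python
    import numpy as np

    def pcg(A, b, tol=1e-10, max_iter=1000):
        """Conjugate gradient with a diagonal (Jacobi) preconditioner."""
        M_inv = 1.0 / np.diag(A)          # diagonal preconditioner
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv * r
        p = z.copy()
        rz = r @ z
        for k in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) / np.linalg.norm(b) < tol:
                return x, k + 1
            z = M_inv * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x, max_iter

    # Symmetric positive-definite toy system (stand-in for mixed-model equations)
    rng = np.random.default_rng(2)
    Q = rng.normal(size=(200, 200))
    A = Q @ Q.T + 200 * np.eye(200)
    b = rng.normal(size=200)
    x, iters = pcg(A, b)
    print(iters, np.linalg.norm(A @ x - b))
    ```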

  10. A hardware-algorithm co-design approach to optimize seizure detection algorithms for implantable applications.

    PubMed

    Raghunathan, Shriram; Gupta, Sumeet K; Markandeya, Himanshu S; Roy, Kaushik; Irazoqui, Pedro P

    2010-10-30

    Implantable neural prostheses that deliver focal electrical stimulation upon demand are rapidly emerging as an alternate therapy for roughly a third of the epileptic patient population that is medically refractory. Seizure detection algorithms enable feedback mechanisms to provide focally and temporally specific intervention. Real-time feasibility and computational complexity often limit most reported detection algorithms to implementations using computers for bedside monitoring or external devices communicating with the implanted electrodes. A comparison of algorithms based on detection efficacy does not present a complete picture of the feasibility of the algorithm with limited computational power, as is the case with most battery-powered applications. We present a two-dimensional design optimization approach that takes into account both detection efficacy and hardware cost in evaluating algorithms for their feasibility in an implantable application. Detection features are first compared for their ability to detect electrographic seizures from micro-electrode data recorded from kainate-treated rats. Circuit models are then used to estimate the dynamic and leakage power consumption of the compared features. A score is assigned based on detection efficacy and the hardware cost for each of the features, then plotted on a two-dimensional design space. An optimal combination of compared features is used to construct an algorithm that provides maximal detection efficacy per unit hardware cost. The methods presented in this paper would facilitate the development of a common platform to benchmark seizure detection algorithms for comparison and feasibility analysis in the next generation of implantable neuroprosthetic devices to treat epilepsy. Copyright © 2010 Elsevier B.V. All rights reserved.
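
    A toy illustration of the efficacy-per-hardware-cost ranking idea; the feature names and numbers below are invented, not values from the study:

    ```python
    # Hypothetical numbers: rank candidate seizure-detection features by
    # detection efficacy per unit hardware (power) cost, mirroring the
    # two-dimensional design-space idea described above.
    features = {
        # name: (detection efficacy score, estimated power cost in microwatts)
        "line_length":    (0.82, 4.0),
        "signal_energy":  (0.78, 2.5),
        "spectral_power": (0.90, 12.0),
        "zero_crossings": (0.65, 1.5),
    }
    ranked = sorted(features.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
    for name, (eff, cost) in ranked:
        print(f"{name:15s} efficacy/cost = {eff / cost:.3f}")
    ```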

  11. An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform

    DTIC Science & Technology

    2018-01-01

    ARL-TR-8270, US Army Research Laboratory, January 2018. Author: Kwok F Tom, Sensors and Electron Devices Directorate. Reporting period: 1 October 2016 to 30 September 2017.

  12. An Optimized Hidden Node Detection Paradigm for Improving the Coverage and Network Efficiency in Wireless Multimedia Sensor Networks

    PubMed Central

    Alanazi, Adwan; Elleithy, Khaled

    2016-01-01

    Successful transmission of online multimedia streams in wireless multimedia sensor networks (WMSNs) is a big challenge due to their limited bandwidth and power resources. The existing WSN protocols are not completely appropriate for multimedia communication. The effectiveness of WMSNs varies, and it depends on the correct location of the sensor nodes in the field. Thus, maximizing the multimedia coverage is the most important issue in the delivery of multimedia contents. The nodes in WMSNs are either static or mobile. Thus, the node connections change continuously due to the mobility in wireless multimedia communication, causing additional energy consumption and synchronization loss between neighboring nodes. In this paper, we introduce an Optimized Hidden Node Detection (OHND) paradigm. The OHND consists of three phases: hidden node detection, message exchange, and location detection. These three phases aim to maximize the multimedia node coverage and improve energy efficiency, hidden node detection capacity, and packet delivery ratio. OHND helps multimedia sensor nodes to compute the directional coverage. Furthermore, OHND is used to maintain a continuous node-neighbor discovery process in order to handle the mobility of the nodes. We implement our proposed algorithms by using a network simulator (NS2). The simulation results demonstrate that nodes are capable of maintaining direct coverage and detecting hidden nodes in order to maximize coverage and handle multimedia node mobility. To evaluate the performance of our proposed algorithms, we compared our results with other known approaches. PMID:27618048

  13. An Optimized Hidden Node Detection Paradigm for Improving the Coverage and Network Efficiency in Wireless Multimedia Sensor Networks.

    PubMed

    Alanazi, Adwan; Elleithy, Khaled

    2016-09-07

    Successful transmission of online multimedia streams in wireless multimedia sensor networks (WMSNs) is a big challenge due to their limited bandwidth and power resources. The existing WSN protocols are not completely appropriate for multimedia communication. The effectiveness of WMSNs varies, and it depends on the correct location of the sensor nodes in the field. Thus, maximizing the multimedia coverage is the most important issue in the delivery of multimedia contents. The nodes in WMSNs are either static or mobile. Thus, the node connections change continuously due to the mobility in wireless multimedia communication, causing additional energy consumption and synchronization loss between neighboring nodes. In this paper, we introduce an Optimized Hidden Node Detection (OHND) paradigm. The OHND consists of three phases: hidden node detection, message exchange, and location detection. These three phases aim to maximize the multimedia node coverage and improve energy efficiency, hidden node detection capacity, and packet delivery ratio. OHND helps multimedia sensor nodes to compute the directional coverage. Furthermore, OHND is used to maintain a continuous node-neighbor discovery process in order to handle the mobility of the nodes. We implement our proposed algorithms by using a network simulator (NS2). The simulation results demonstrate that nodes are capable of maintaining direct coverage and detecting hidden nodes in order to maximize coverage and handle multimedia node mobility. To evaluate the performance of our proposed algorithms, we compared our results with other known approaches.

  14. Robust multiperson detection and tracking for mobile service and social robots.

    PubMed

    Li, Liyuan; Yan, Shuicheng; Yu, Xinguo; Tan, Yeow Kee; Li, Haizhou

    2012-10-01

    This paper proposes an efficient system which integrates multiple vision models for robust multiperson detection and tracking for mobile service and social robots in public environments. The core technique is a novel maximum likelihood (ML)-based algorithm which combines the multimodel detections in mean-shift tracking. First, a likelihood probability which integrates detections and similarity to local appearance is defined. Then, an expectation-maximization (EM)-like mean-shift algorithm is derived under the ML framework. In each iteration, the E-step estimates the associations to the detections, and the M-step locates the new position according to the ML criterion. To be robust to the complex crowded scenarios for multiperson tracking, an improved sequential strategy to perform the mean-shift tracking is proposed. Under this strategy, human objects are tracked sequentially according to their priority order. To balance the efficiency and robustness for real-time performance, at each stage, the first two objects from the list of the priority order are tested, and the one with the higher score is selected. The proposed method has been successfully implemented on real-world service and social robots. The vision system integrates stereo-based and histograms-of-oriented-gradients-based human detections, occlusion reasoning, and sequential mean-shift tracking. Various examples to show the advantages and robustness of the proposed system for multiperson tracking from mobile robots are presented. Quantitative evaluations on the performance of multiperson tracking are also performed. Experimental results indicate that significant improvements have been achieved by using the proposed method.

  15. Object detection approach using generative sparse, hierarchical networks with top-down and lateral connections for combining texture/color detection and shape/contour detection

    DOEpatents

    Paiton, Dylan M.; Kenyon, Garrett T.; Brumby, Steven P.; Schultz, Peter F.; George, John S.

    2015-07-28

    An approach to detecting objects in an image dataset may combine texture/color detection, shape/contour detection, and/or motion detection using sparse, generative, hierarchical models with lateral and top-down connections. A first independent representation of objects in an image dataset may be produced using a color/texture detection algorithm. A second independent representation of objects in the image dataset may be produced using a shape/contour detection algorithm. A third independent representation of objects in the image dataset may be produced using a motion detection algorithm. The first, second, and third independent representations may then be combined into a single coherent output using a combinatorial algorithm.
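
    A minimal sketch of the final fusion step; the pixelwise weighted-sum-and-threshold rule is an assumption, since the patent specifies only "a combinatorial algorithm":

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    texture_map = rng.random((64, 64))   # stand-ins for the three independent
    contour_map = rng.random((64, 64))   # representations described above
    motion_map = rng.random((64, 64))

    # Combine the three maps into one coherent output (illustrative rule)
    combined = 0.4 * texture_map + 0.4 * contour_map + 0.2 * motion_map
    objects = combined > 0.6             # single coherent binary output
    print("detected pixels:", int(objects.sum()))
    ```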

  16. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
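
    A rough software analogue of the comparison idea: run a deterministic, compute-heavy kernel (which also stresses the processor) and compare its output against a known-good reference; the kernel below is illustrative only:

    ```python
    import hashlib

    def stress_kernel(n=200_000):
        """Deterministic arithmetic loop; loads the CPU and yields a digest
        in which any hardware error during the run becomes observable."""
        acc = 0
        for i in range(1, n):
            acc = (acc * 6364136223846793005 + i) % (1 << 64)
        return hashlib.sha256(str(acc).encode()).hexdigest()

    reference = stress_kernel()          # golden output from a known-good run
    if stress_kernel() != reference:     # re-run and compare at the end
        print("hardware error detected")
    else:
        print("outputs match: no error observed")
    ```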

  17. Spectrum sensing algorithm based on autocorrelation energy in cognitive radio networks

    NASA Astrophysics Data System (ADS)

    Ren, Shengwei; Zhang, Li; Zhang, Shibing

    2016-10-01

    Cognitive radio networks have wide applications in the smart home, personal communications and other wireless communication. Spectrum sensing is the main challenge in cognitive radios. This paper proposes a new spectrum sensing algorithm based on the autocorrelation energy of the received signal. By taking the autocorrelation energy of the received signal as the test statistic for spectrum sensing, the effect of channel noise on detection performance is reduced. Simulation results show that the algorithm is effective and performs well at low signal-to-noise ratios. Compared with the maximum generalized eigenvalue detection (MGED) algorithm, the function of covariance matrix based detection (FMD) algorithm and the autocorrelation-based detection (AD) algorithm, the proposed algorithm has a 2-11 dB advantage.
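
    A sketch of an autocorrelation-energy test statistic on synthetic data; the paper's exact statistic and threshold calibration may differ:

    ```python
    import numpy as np

    def autocorr_energy_statistic(x, max_lag=8):
        """Test statistic: energy of the autocorrelation at small nonzero
        lags, normalized by the signal energy (illustrative form)."""
        x = x - x.mean()
        r = np.array([np.sum(x[:-k] * x[k:]) for k in range(1, max_lag + 1)])
        return np.sum(r ** 2) / (np.sum(x ** 2) ** 2 + 1e-12)

    rng = np.random.default_rng(4)
    n = 4096
    noise = rng.normal(size=n)
    t = np.arange(n)
    signal = np.sin(2 * np.pi * 0.01 * t) + rng.normal(size=n)  # low-SNR case

    print("noise only  :", autocorr_energy_statistic(noise))
    print("signal+noise:", autocorr_energy_statistic(signal))   # larger value
    ```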

  18. Lining seam elimination algorithm and surface crack detection in concrete tunnel lining

    NASA Astrophysics Data System (ADS)

    Qu, Zhong; Bai, Ling; An, Shi-Quan; Ju, Fang-Rong; Liu, Ling

    2016-11-01

    Due to the particularity of the surface of concrete tunnel lining and the diversity of detection environments, such as uneven illumination, smudges, localized rock falls, water leakage, and the inherent seams of the lining structure, existing crack detection algorithms cannot detect real cracks accurately. This paper proposes an algorithm that combines lining seam elimination with an improved percolation detection algorithm based on grid cell analysis for surface crack detection in concrete tunnel lining. First, the characteristics of pixels within overlapping grid cells are checked to remove background noise and generate the percolation seed map (PSM). Second, cracks are detected from the PSM by an accelerated percolation algorithm, so that fracture unit areas can be scanned and connected. Finally, the real surface cracks in the concrete tunnel lining are obtained by removing the lining seam and performing percolation denoising. Experimental results show that the proposed algorithm can accurately, quickly, and effectively detect real surface cracks. Furthermore, it fills a gap in existing concrete tunnel lining surface crack detection by removing the lining seam.

  19. Computer-aided diagnosis workstation and network system for chest diagnosis based on multislice CT images

    NASA Astrophysics Data System (ADS)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Moriyama, Noriyuki; Ohmatsu, Hironobu; Masuda, Hideo; Machida, Suguru

    2008-03-01

    Mass screening based on multi-helical CT images requires a considerable number of images to be read. It is this time-consuming step that makes the use of helical CT for mass screening impractical at present. To overcome this problem, we have provided diagnostic assistance to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification, and a vertebral body analysis algorithm for quantitative evaluation of osteoporosis likelihood, all using the helical CT scanner for lung cancer mass screening. Functions to observe suspicious shadows in detail are provided in a computer-aided diagnosis workstation along with these screening algorithms. We have also developed a telemedicine network using a Web medical image conference system with improved security of image transmission, a biometric fingerprint authentication system, and a biometric face authentication system. Biometric face authentication used at the telemedicine site makes file encryption and login verification effective, so patients' private information is protected. Based on these diagnostic assistance methods, we have developed a new computer-aided workstation and a new telemedicine network that can display suspected lesions three-dimensionally in a short time. The results of this study indicate that our filmless radiological information system, using the computer-aided diagnosis workstation and our telemedicine network system, can increase diagnostic speed and diagnostic accuracy while improving the security of medical information.

  20. Using Distance Sensors to Perform Collision Avoidance Maneuvres on Uav Applications

    NASA Astrophysics Data System (ADS)

    Raimundo, A.; Peres, D.; Santos, N.; Sebastião, P.; Souto, N.

    2017-08-01

    Unmanned Aerial Vehicles (UAVs) and their applications are growing for both civilian and military purposes. The operability of a UAV has proved that some tasks and operations can be done easily and at a good cost-efficiency ratio. Nowadays, a UAV can perform autonomous missions, which is very useful for certain UAV applications, such as meteorology, surveillance systems, agriculture, environment mapping, and search and rescue operations. One of the biggest problems a UAV faces is the possibility of collision with other objects in the flight area. To avoid this, an algorithm was developed and implemented to prevent UAV collisions with other objects. The "Sense and Avoid" algorithm was developed as a system for UAVs to avoid objects on a collision course. This algorithm uses a Light Detection and Ranging (LiDAR) sensor to detect objects facing the UAV in mid-flight. The light sensor is connected to on-board hardware, the Pixhawk flight controller, which interfaces with additional hardware: a Raspberry Pi. Communications between the Ground Control Station and the UAV are made via Wi-Fi or third- or fourth-generation cellular networks (3G/4G). Tests were made to evaluate the overall performance of the "Sense and Avoid" algorithm in two different environments: a simulated 3D environment and a real outdoor environment. Both modes worked successfully in the simulated 3D environment, and the "Brake" mode also worked in the real outdoor environment, proving the concept.

  1. A community detection algorithm based on structural similarity

    NASA Astrophysics Data System (ADS)

    Guo, Xuchao; Hao, Xia; Liu, Yaqiong; Zhang, Li; Wang, Lu

    2017-09-01

    In order to further improve the efficiency and accuracy of community detection, a new algorithm named SSTCA (community detection based on structural similarity with a threshold) is proposed. In this algorithm, structural similarities are taken as the weights of edges, and a threshold k is used to remove edges whose weights are less than the threshold, which improves computational efficiency. Tests were done on Zachary's network, the Dolphins social network and the Football dataset, and the proposed algorithm was compared with the GN and SSNCA algorithms. The results show that the new algorithm is superior to the other algorithms in accuracy for dense networks, and its operating efficiency is markedly improved.
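
    A sketch of the edge-weighting and pruning steps using a SCAN-style cosine structural similarity on Zachary's network (via networkx); the similarity definition and the threshold value are assumptions, and SSTCA's exact formulation may differ:

    ```python
    import networkx as nx

    def structural_similarity(G, u, v):
        """Cosine structural similarity of two adjacent nodes (SCAN-style)."""
        Nu = set(G[u]) | {u}
        Nv = set(G[v]) | {v}
        return len(Nu & Nv) / (len(Nu) * len(Nv)) ** 0.5

    G = nx.karate_club_graph()           # Zachary's network, as in the paper
    k = 0.45                             # illustrative threshold

    for u, v in G.edges():               # similarities become edge weights
        G[u][v]["weight"] = structural_similarity(G, u, v)

    weak = [(u, v) for u, v, w in G.edges(data="weight") if w < k]
    G.remove_edges_from(weak)            # prune edges below the threshold
    print(f"removed {len(weak)} weak edges;",
          nx.number_connected_components(G), "components remain")
    ```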

  2. Nuclear IHC enumeration: A digital phantom to evaluate the performance of automated algorithms in digital pathology.

    PubMed

    Niazi, Muhammad Khalid Khan; Abas, Fazly Salleh; Senaras, Caglar; Pennell, Michael; Sahiner, Berkman; Chen, Weijie; Opfer, John; Hasserjian, Robert; Louissaint, Abner; Shana'ah, Arwa; Lozanski, Gerard; Gurcan, Metin N

    2018-01-01

    Automatic and accurate detection of positive and negative nuclei from images of immunostained tissue biopsies is critical to the success of digital pathology. The evaluation of most nuclei detection algorithms relies on manually generated ground truth prepared by pathologists, which is unfortunately time-consuming and suffers from inter-pathologist variability. In this work, we developed a digital immunohistochemistry (IHC) phantom that can be used for evaluating computer algorithms for enumeration of IHC-positive cells. Our phantom development consists of two main steps: 1) extraction of individual nuclei as well as nuclei clumps of both positive and negative nuclei from real whole-slide images (WSIs), and 2) systematic placement of the extracted nuclei and clumps on an image canvas. The resulting images are visually similar to the original tissue images. We created a set of 42 images with different concentrations of positive and negative nuclei. These images were evaluated by four board-certified pathologists in the task of estimating the ratio of positive to total number of nuclei. The resulting concordance correlation coefficients (CCC) between the pathologists and the true ratio range from 0.86 to 0.95 (point estimates). The same ratio was also computed by an automated computer algorithm, which yielded a CCC value of 0.99. Reading the phantom data with known ground truth, the human readers show substantial variability and lower average performance than the computer algorithm in terms of CCC. This shows the limitation of using a human reader panel to establish a reference standard for the evaluation of computer algorithms, thereby highlighting the usefulness of the phantom developed in this work. Using our phantom images, we further developed a function that can approximate the true ratio from the areas of the positive and negative nuclei, hence avoiding the need to detect individual nuclei. The predicted ratios of 10 held-out images using the function (trained on 32 images) are within ±2.68% of the true ratio. Moreover, we also report the evaluation of a computerized image analysis method on the synthetic tissue dataset.

  3. Finding Cardinality Heavy-Hitters in Massive Traffic Data and Its Application to Anomaly Detection

    NASA Astrophysics Data System (ADS)

    Ishibashi, Keisuke; Mori, Tatsuya; Kawahara, Ryoichi; Hirokawa, Yutaka; Kobayashi, Atsushi; Yamamoto, Kimihiro; Sakamoto, Hitoaki; Asano, Shoichiro

    We propose an algorithm for finding heavy hitters in terms of cardinality (the number of distinct items in a set) in massive traffic data using a small amount of memory. Examples of such cardinality heavy-hitters are hosts that send large numbers of flows, or hosts that communicate with large numbers of other hosts. Finding these hosts is crucial to the provision of good communication quality because they significantly affect the communications of other hosts via either malicious activities such as worm scans, spam distribution, or botnet control, or normal activities such as being a member of a flash crowd or performing peer-to-peer (P2P) communication. To precisely determine the cardinality of a host, we need tables of previously seen items for each host (e.g., flow tables for every host), and this may be infeasible for a high-speed environment with a massive amount of traffic. In this paper, we use a cardinality estimation algorithm that does not require these tables but needs only a small amount of information, called the cardinality summary. This is made possible by relaxing the goal from exact counting to estimation of cardinality. In addition, we propose an algorithm that does not need to maintain the cardinality summary for each host, but only for partitioned addresses of a host. As a result, the required number of tables can be significantly decreased. We evaluated our algorithm using actual backbone traffic data to find the heavy-hitters in the number of flows and estimate the number of these flows. We found that while the accuracy degraded when estimating for hosts with few flows, the algorithm could accurately find the top-100 hosts in terms of the number of flows using limited memory. In addition, we found that the number of tables required to achieve a pre-defined accuracy increased logarithmically with respect to the total number of hosts, which indicates that our method is applicable to large traffic data for a very large number of hosts. We also introduce an application of our algorithm to anomaly detection. With actual traffic data, our method could successfully detect a sudden network scan.
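
    A toy Flajolet-Martin-flavored distinct counter illustrating the memory-saving principle of a per-host cardinality summary; it is not the paper's estimator:

    ```python
    import hashlib
    from collections import defaultdict

    def leading_zeros(item, bits=32):
        """Leading zero bits of a 32-bit hash of the item."""
        v = int.from_bytes(hashlib.md5(item.encode()).digest()[:4], "big")
        return bits - v.bit_length()     # zeros before the first 1 bit

    summaries = defaultdict(int)         # host -> one-integer cardinality summary
    flows = [("10.0.0.1", f"peer{i}") for i in range(5000)] + \
            [("10.0.0.2", f"peer{i}") for i in range(10)]

    for host, peer in flows:             # no per-host flow table is kept
        summaries[host] = max(summaries[host], leading_zeros(f"{host}:{peer}"))

    for host, z in summaries.items():    # 2^z roughly estimates distinct peers
        print(host, "estimated cardinality ~", 2 ** z)
    ```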

  4. Detection of dominant flow and abnormal events in surveillance video

    NASA Astrophysics Data System (ADS)

    Kwak, Sooyeong; Byun, Hyeran

    2011-02-01

    We propose an algorithm for abnormal event detection in surveillance video. The proposed algorithm is based on a semi-unsupervised learning method and takes a feature-based approach, so it does not need to detect each moving object individually. The proposed algorithm identifies dominant flow without individual object tracking using a latent Dirichlet allocation model in crowded environments. It can also automatically detect and localize an abnormally moving object in real-life video. Performance tests were run on several real-life databases, and their results show that the proposed algorithm can efficiently detect abnormally moving objects in real time. The proposed algorithm can be applied to any situation in which abnormal directions or abnormal speeds are detected, regardless of direction.

  5. Ambulatory REACT: real-time seizure detection with a DSP microprocessor.

    PubMed

    McEvoy, Robert P; Faul, Stephen; Marnane, William P

    2010-01-01

    REACT (Real-Time EEG Analysis for event deteCTion) is a Support Vector Machine based technology which, in recent years, has been successfully applied to the problem of automated seizure detection in both adults and neonates. This paper describes the implementation of REACT on a commercial DSP microprocessor; the Analog Devices Blackfin®. The primary aim of this work is to develop a prototype system for use in ambulatory or in-ward automated EEG analysis. Furthermore, the complexity of the various stages of the REACT algorithm on the Blackfin processor is analysed; in particular the EEG feature extraction stages. This hardware profile is used to select a reduced, platform-aware feature set, in order to evaluate the seizure classification accuracy of a lower-complexity, lower-power REACT system.

  6. Identifying High-Risk Patients without Labeled Training Data: Anomaly Detection Methodologies to Predict Adverse Outcomes

    PubMed Central

    Syed, Zeeshan; Saeed, Mohammed; Rubinfeld, Ilan

    2010-01-01

    For many clinical conditions, only a small number of patients experience adverse outcomes. Developing risk stratification algorithms for these conditions typically requires collecting large volumes of data to capture enough positive and negative examples for training. This process is slow, expensive, and may not be appropriate for new phenomena. In this paper, we explore different anomaly detection approaches to identify high-risk patients as cases that lie in sparse regions of the feature space. We study three broad categories of anomaly detection methods: classification-based, nearest neighbor-based, and clustering-based techniques. When evaluated on data from the National Surgical Quality Improvement Program (NSQIP), these methods were able to successfully identify patients at an elevated risk of mortality and rare morbidities following inpatient surgical procedures. PMID:21347083
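
    A sketch of the nearest-neighbor flavor of this idea using scikit-learn's Local Outlier Factor, with synthetic features standing in for NSQIP variables:

    ```python
    import numpy as np
    from sklearn.neighbors import LocalOutlierFactor

    rng = np.random.default_rng(5)
    typical = rng.normal(0, 1, (1000, 8))       # bulk of patients
    unusual = rng.normal(4, 1, (10, 8))         # rare, high-risk profiles
    X = np.vstack([typical, unusual])

    lof = LocalOutlierFactor(n_neighbors=20)
    labels = lof.fit_predict(X)                 # -1 marks sparse-region points
    print("flagged as anomalous:", int((labels == -1).sum()), "patients")
    ```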

  7. Detecting fluorescence hot-spots using mosaic maps generated from multimodal endoscope imaging

    NASA Astrophysics Data System (ADS)

    Yang, Chenying; Soper, Timothy D.; Seibel, Eric J.

    2013-03-01

    Fluorescence labeled biomarkers can be detected during endoscopy to guide early cancer biopsies, such as high-grade dysplasia in Barrett's Esophagus. To enhance intraoperative visualization of the fluorescence hot-spots, a mosaicking technique was developed to create full anatomical maps of the lower esophagus and associated fluorescent hot-spots. The resultant mosaic map contains overlaid reflectance and fluorescence images. It can be used to assist biopsy and document findings. The mosaicking algorithm uses reflectance images to calculate image registration between successive frames, and apply this registration to simultaneously acquired fluorescence images. During this mosaicking process, the fluorescence signal is enhanced through multi-frame averaging. Preliminary results showed that the technique promises to enhance the detectability of the hot-spots due to enhanced fluorescence signal.

  8. Quantum machine learning for quantum anomaly detection

    NASA Astrophysics Data System (ADS)

    Liu, Nana; Rebentrost, Patrick

    2018-04-01

    Anomaly detection is used for identifying data that deviate from "normal" data patterns. Its usage on classical data finds diverse applications in many important areas such as finance, fraud detection, medical diagnoses, data cleaning, and surveillance. With the advent of quantum technologies, anomaly detection of quantum data, in the form of quantum states, may become an important component of quantum applications. Machine-learning algorithms are playing pivotal roles in anomaly detection using classical data. Two widely used algorithms are the kernel principal component analysis and the one-class support vector machine. We find corresponding quantum algorithms to detect anomalies in quantum states. We show that these two quantum algorithms can be performed using resources that are logarithmic in the dimensionality of quantum states. For pure quantum states, these resources can also be logarithmic in the number of quantum states used for training the machine-learning algorithm. This makes these algorithms potentially applicable to big quantum data applications.

  9. A Formally Verified Conflict Detection Algorithm for Polynomial Trajectories

    NASA Technical Reports Server (NTRS)

    Narkawicz, Anthony; Munoz, Cesar

    2015-01-01

    In air traffic management, conflict detection algorithms are used to determine whether or not aircraft are predicted to lose horizontal and vertical separation minima within a time interval assuming a trajectory model. In the case of linear trajectories, conflict detection algorithms have been proposed that are both sound, i.e., they detect all conflicts, and complete, i.e., they do not present false alarms. In general, for arbitrary nonlinear trajectory models, it is possible to define detection algorithms that are either sound or complete, but not both. This paper considers the case of nonlinear aircraft trajectory models based on polynomial functions. In particular, it proposes a conflict detection algorithm that precisely determines whether, given a lookahead time, two aircraft flying polynomial trajectories are in conflict. That is, it has been formally verified that, assuming that the aircraft trajectories are modeled as polynomial functions, the proposed algorithm is both sound and complete.
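
    A numerical sketch of the underlying check for polynomial trajectories: two aircraft are in horizontal conflict iff the squared relative distance falls below D² somewhere in [0, T]. The verified algorithm itself is symbolic and formally proven; this floating-point version only illustrates the idea:

    ```python
    import numpy as np

    def horizontal_conflict(px, py, D, T):
        """px, py: coefficient arrays (lowest degree first) of the relative
        x and y positions as polynomials in time. Conflict iff
        px(t)^2 + py(t)^2 < D^2 for some t in [0, T]."""
        px = np.polynomial.Polynomial(px)
        py = np.polynomial.Polynomial(py)
        d2 = px * px + py * py - D * D
        if d2(0.0) < 0 or d2(T) < 0:           # already inside at an endpoint
            return True
        roots = d2.roots()                      # boundary crossings of d^2 = D^2
        real = roots[np.isclose(roots.imag, 0)].real
        return bool(np.any((real > 0) & (real < T)))

    # Head-on encounter: relative x-position 10 - 2t NM, constant 3 NM offset
    print(horizontal_conflict([10.0, -2.0], [3.0], D=5.0, T=10.0))  # True
    ```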

  10. Health management system for rocket engines

    NASA Technical Reports Server (NTRS)

    Nemeth, Edward

    1990-01-01

    The functional framework of a failure detection algorithm for the Space Shuttle Main Engine (SSME) is developed. The basic algorithm is based only on existing SSME measurements. Supplemental measurements, expected to enhance failure detection effectiveness, are identified. To support the algorithm development, a figure of merit is defined to estimate the likelihood of SSME criticality 1 failure modes and the failure modes are ranked in order of likelihood of occurrence. Nine classes of failure detection strategies are evaluated and promising features are extracted as the basis for the failure detection algorithm. The failure detection algorithm provides early warning capabilities for a wide variety of SSME failure modes. Preliminary algorithm evaluation, using data from three SSME failures representing three different failure types, demonstrated indications of imminent catastrophic failure well in advance of redline cutoff in all three cases.

  11. Improved Real-Time Scan Matching Using Corner Features

    NASA Astrophysics Data System (ADS)

    Mohamed, H. A.; Moussa, A. M.; Elhabiby, M. M.; El-Sheimy, N.; Sesay, Abu B.

    2016-06-01

    The automation of unmanned vehicle operation has gained a lot of research attention in the last few years because of its numerous applications. Vehicle localization is more challenging in indoor environments, where absolute positioning measurements (e.g. GPS) are typically unavailable. Laser range finders are among the most widely used sensors that help unmanned vehicles localize themselves in indoor environments. Typically, automatic real-time matching of successive scans is performed, either explicitly or implicitly, by any localization approach that utilizes laser range finders. Many established approaches, such as Iterative Closest Point (ICP), Iterative Matching Range Point (IMRP), Iterative Dual Correspondence (IDC), and Polar Scan Matching (PSM), handle the scan matching problem in an iterative fashion, which significantly increases time consumption. Furthermore, convergence of the solution is not guaranteed, especially in cases of sharp maneuvers or fast movement. This paper proposes an automated real-time scan matching algorithm in which the matching process is initialized using detected corners. This initialization step aims to increase the convergence probability and to limit the number of iterations needed to reach convergence. The corner detection is preceded by line extraction from the laser scans. To evaluate the probability of line availability in indoor environments, various datasets, offered by different research groups, have been tested; the mean number of extracted lines per scan for these datasets ranges from 4.10 to 8.86 lines of more than 7 points. The set of all intersections between extracted lines is detected as corners, regardless of the physical intersection of the line segments in the scan. To account for the uncertainties of the detected corners, the covariance of the corners is estimated from the variances of the extracted lines. The detected corners are used to estimate the transformation parameters between successive scans using least squares. These estimated transformation parameters are used to calculate an adjusted initialization for the scan matching process. The presented method can be employed on its own to match successive scans, and it can also be used to aid other established iterative methods, achieving more effective and faster convergence. The performance and time consumption of the proposed approach are compared with the ICP algorithm without initialization in different scenarios, such as static periods, fast straight movement, and sharp maneuvers.
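
    A sketch of the corner-based initialization: a least-squares (Procrustes/Kabsch) rigid transform estimated from matched corners; correspondence finding and covariance weighting are omitted:

    ```python
    import numpy as np

    def rigid_transform_2d(src, dst):
        """Least-squares 2D rotation + translation mapping corner set `src`
        onto `dst` (rows are matched corners)."""
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        H = (src - cs).T @ (dst - cd)              # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                   # guard against reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cd - R @ cs
        return R, t

    corners = np.array([[0.0, 0.0], [2.0, 0.5], [1.0, 3.0], [4.0, 2.0]])
    theta = np.deg2rad(12)
    R_true = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
    moved = corners @ R_true.T + np.array([0.3, -0.1])
    R, t = rigid_transform_2d(corners, moved)
    print(np.rad2deg(np.arctan2(R[1, 0], R[0, 0])), t)  # ~12 deg, [0.3, -0.1]
    ```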

  12. DALMATIAN: An Algorithm for Automatic Cell Detection and Counting in 3D.

    PubMed

    Shuvaev, Sergey A; Lazutkin, Alexander A; Kedrov, Alexander V; Anokhin, Konstantin V; Enikolopov, Grigori N; Koulakov, Alexei A

    2017-01-01

    Current 3D imaging methods, including optical projection tomography, light-sheet microscopy, block-face imaging, and serial two-photon tomography, enable visualization of large samples of biological tissue. Large volumes of data obtained at high resolution require the development of automatic image processing techniques, such as algorithms for automatic cell detection or, more generally, point-like object detection. Current approaches to automated cell detection suffer from difficulties in detecting particular cell types, cell populations of different brightness, non-uniformly stained cells, and overlapping cells. In this study, we present a set of algorithms for robust automatic cell detection in 3D. Our algorithms are suitable for, but not limited to, whole brain regions and individual brain sections. We used a watershed procedure to split regional maxima representing overlapping cells. We developed a bootstrap Gaussian fit procedure to evaluate the statistical significance of detected cells. We compared the cell detection quality of our algorithm and other software using 42 samples representing 6 staining and imaging techniques. The results provided by our algorithm matched manual expert quantification with signal-to-noise-dependent confidence, including samples with cells of different brightness, non-uniformly stained cells, and overlapping cells, for whole brain regions and individual tissue sections. Our algorithm provided the best cell detection quality among the tested free and commercial software.
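
    A minimal 2D sketch of the watershed step used to split touching cells, using scikit-image; the full pipeline is 3D and adds the bootstrap Gaussian fit:

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    # Two overlapping disks standing in for touching nuclei
    yy, xx = np.mgrid[0:100, 0:100]
    mask = ((xx - 40) ** 2 + (yy - 50) ** 2 < 15 ** 2) | \
           ((xx - 62) ** 2 + (yy - 50) ** 2 < 15 ** 2)

    distance = ndi.distance_transform_edt(mask)            # height map
    coords = peak_local_max(distance, min_distance=10, labels=mask)
    markers = np.zeros_like(distance, dtype=int)           # one marker per cell
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

    labels = watershed(-distance, markers, mask=mask)      # split the clump
    print("cells found:", labels.max())                    # expect 2
    ```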

  13. Multi-object Detection and Discrimination Algorithms

    DTIC Science & Technology

    2015-03-26

    This document contains an overview of research and work performed and published at the University of Florida from October 1, 2009 to October 31, 2013 pertaining to proposal 57306CS: Multi-object Detection and Discrimination Algorithms. One stage of the algorithm uses a procedure similar to a depth-first search and is O(CN).

  14. Real time damage detection using recursive principal components and time varying auto-regressive modeling

    NASA Astrophysics Data System (ADS)

    Krishnan, M.; Bhowmik, B.; Hazra, B.; Pakrashi, V.

    2018-02-01

    In this paper, a novel baseline-free approach for continuous online damage detection of multi-degree-of-freedom vibrating structures using Recursive Principal Component Analysis (RPCA) in conjunction with Time Varying Auto-Regressive (TVAR) modeling is proposed. In this method, the acceleration data is used to obtain recursive proper orthogonal components online using the rank-one perturbation method, followed by TVAR modeling of the first transformed response, to detect the change in the dynamic behavior of the vibrating system from its pristine state to contiguous linear/non-linear states that indicate damage. Most of the works available in the literature deal with algorithms that require windowing of the gathered data owing to their data-driven nature, which renders them ineffective for online implementation. Mathematically consistent recursive techniques set in a rigorous theoretical framework for structural damage detection are missing, which motivates the development of the present framework, which is amenable to online implementation and can be utilized alongside a suite of experimental and numerical investigations. The RPCA algorithm iterates the eigenvector and eigenvalue estimates for the sample covariance matrix as each new data point arrives, using the rank-one perturbation method. TVAR modeling on the principal component explaining the maximum variance is utilized, and damage is identified by tracking the TVAR coefficients. This eliminates the need for offline post-processing and facilitates online damage detection, especially when applied to streaming data, without requiring any baseline data. Numerical simulations performed on a 5-dof nonlinear system under white noise excitation and El Centro (1940 Imperial Valley earthquake) excitation, for different damage scenarios, demonstrate the robustness of the proposed algorithm. The method is further validated on results obtained from case studies involving experiments performed on a cantilever beam subjected to earthquake excitation, a two-storey bench-scale model with a TMD, and recorded responses of the UCLA Factor building, demonstrating the efficacy of the proposed methodology as an ideal candidate for real-time, reference-free structural health monitoring.

  15. Video Shot Boundary Detection Using QR-Decomposition and Gaussian Transition Detection

    NASA Astrophysics Data System (ADS)

    Amiri, Ali; Fathy, Mahmood

    2010-12-01

    This article explores the problem of video shot boundary detection and examines a novel shot boundary detection algorithm by using QR-decomposition and modeling of gradual transitions by Gaussian functions. Specifically, the authors attend to the challenges of detecting gradual shots and extracting appropriate spatiotemporal features that affect the ability of algorithms to efficiently detect shot boundaries. The algorithm utilizes the properties of QR-decomposition and extracts a block-wise probability function that illustrates the probability of video frames to be in shot transitions. The probability function has abrupt changes in hard cut transitions, and semi-Gaussian behavior in gradual transitions. The algorithm detects these transitions by analyzing the probability function. Finally, we will report the results of the experiments using large-scale test sets provided by the TRECVID 2006, which has assessments for hard cut and gradual shot boundary detection. These results confirm the high performance of the proposed algorithm.

  16. Automatic identification of epileptic seizures from EEG signals using linear programming boosting.

    PubMed

    Hassan, Ahnaf Rashik; Subasi, Abdulhamit

    2016-11-01

    Computerized epileptic seizure detection is essential for expediting epilepsy diagnosis and research and for assisting medical professionals. Moreover, the implementation of an epilepsy monitoring device that has low power and is portable requires a reliable and successful seizure detection scheme. In this work, the problem of automated epileptic seizure detection using single-channel EEG signals has been addressed. At first, segments of EEG signals are decomposed using a newly proposed signal processing scheme, namely complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN). Six spectral moments are extracted from the CEEMDAN mode functions, and train and test matrices are formed afterward. These matrices are fed into the classifier to identify epileptic seizures from EEG signal segments. In this work, we implement an ensemble-learning-based machine learning algorithm, namely linear programming boosting (LPBoost), to perform classification. The efficacy of spectral features in the CEEMDAN domain is validated by graphical and statistical analyses. The performance of CEEMDAN is compared to those of its predecessors to further inspect its suitability. The effectiveness and appropriateness of LPBoost are demonstrated in comparison with commonly used classification models. Resubstitution and 10-fold cross-validation error analyses confirm the superior performance of the proposed scheme. The algorithmic performance of our epilepsy seizure identification scheme is also evaluated against state-of-the-art works in the literature. Experimental outcomes show that the proposed seizure detection scheme performs better than existing works in terms of accuracy, sensitivity, specificity, and Cohen's kappa coefficient. It can be anticipated that, owing to its use of only one channel of EEG signal, the proposed method will be suitable for device implementation, eliminate the onus on clinicians of manually analyzing a large bulk of data, and expedite epilepsy diagnosis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. A Novel Application for the Detection of an Irregular Pulse using an iPhone 4S in Patients with Atrial Fibrillation

    PubMed Central

    McManus, David D.; Lee, Jinseok; Maitas, Oscar; Esa, Nada; Pidikiti, Rahul; Carlucci, Alex; Harrington, Josephine; Mick, Eric; Chon, Ki H.

    2012-01-01

    Background Atrial fibrillation (AF) is common and associated with adverse health outcomes. Timely detection of AF can be challenging using traditional diagnostic tools. Smartphone use is increasing and may provide an inexpensive and user-friendly means to diagnose AF. Objective To test the hypothesis that a smartphone-based application could detect an irregular pulse from AF. Methods 76 adults with persistent AF were consented for participation in our study. We obtained pulsatile time series recordings before and after cardioversion using an iPhone 4S camera. A novel smartphone application conducted real-time pulse analysis using 2 statistical methods [Root Mean Square of Successive RR Differences (RMSSD/mean); Shannon Entropy (ShE)]. We examined the sensitivity, specificity, and predictive accuracy of both algorithms using the 12-lead electrocardiogram as the gold standard. Results RMSSD/mean and ShE were higher in participants in AF compared with sinus rhythm. The 2 methods were inversely related to AF in regression models adjusting for key factors including heart rate and blood pressure (β coefficients per SD-increment in RMSSD/mean and ShE were −0.20 and −0.35; p<0.001). An algorithm combining the 2 statistical methods demonstrated excellent sensitivity (0.962), specificity (0.975), and accuracy (0.968) for beat-to-beat discrimination of an irregular pulse during AF from sinus rhythm. Conclusions In a prospectively recruited cohort of 76 participants undergoing cardioversion for AF, we found that a novel algorithm analyzing signals recorded using an iPhone 4S accurately distinguished pulse recordings during AF from sinus rhythm. Data are needed to explore the performance and acceptability of smartphone-based applications for AF detection. PMID:23220686
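
    A small sketch of the two statistics on synthetic RR-interval series; the binning for the entropy estimate is an assumption:

    ```python
    import numpy as np

    def rmssd_over_mean(rr):
        """Root mean square of successive RR differences, normalized by the
        mean RR interval (RMSSD/mean)."""
        diffs = np.diff(rr)
        return np.sqrt(np.mean(diffs ** 2)) / np.mean(rr)

    def shannon_entropy(rr, bins=16):
        """Shannon entropy of the RR-interval histogram (illustrative binning)."""
        counts, _ = np.histogram(rr, bins=bins)
        p = counts[counts > 0] / counts.sum()
        return -np.sum(p * np.log(p))

    rng = np.random.default_rng(6)
    sinus = rng.normal(0.80, 0.02, 200)        # regular ~75 bpm intervals
    af = rng.uniform(0.40, 1.10, 200)          # irregular AF-like intervals

    for name, rr in [("sinus", sinus), ("AF", af)]:
        print(f"{name}: RMSSD/mean = {rmssd_over_mean(rr):.3f}, "
              f"ShE = {shannon_entropy(rr):.3f}")
    ```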

  18. A novel application for the detection of an irregular pulse using an iPhone 4S in patients with atrial fibrillation.

    PubMed

    McManus, David D; Lee, Jinseok; Maitas, Oscar; Esa, Nada; Pidikiti, Rahul; Carlucci, Alex; Harrington, Josephine; Mick, Eric; Chon, Ki H

    2013-03-01

    Atrial fibrillation (AF) is common and associated with adverse health outcomes. Timely detection of AF can be challenging using traditional diagnostic tools. Smartphone use is increasing and may provide an inexpensive and user-friendly means to diagnose AF. To test the hypothesis that a smartphone-based application could detect an irregular pulse from AF, seventy-six adults with persistent AF were consented for participation in our study. We obtained pulsatile time series recordings before and after cardioversion using an iPhone 4S camera. A novel smartphone application conducted real-time pulse analysis using 2 statistical methods: root mean square of successive RR differences (RMSSD/mean) and Shannon entropy (ShE). We examined the sensitivity, specificity, and predictive accuracy of both algorithms using the 12-lead electrocardiogram as the gold standard. RMSSD/mean and ShE were higher in participants in AF than in those with sinus rhythm. The 2 methods were inversely related to AF in regression models adjusting for key factors including heart rate and blood pressure (beta coefficients per SD increment in RMSSD/mean and ShE were -0.20 and -0.35; P < .001). An algorithm combining the 2 statistical methods demonstrated excellent sensitivity (0.962), specificity (0.975), and accuracy (0.968) for beat-to-beat discrimination of an irregular pulse during AF from sinus rhythm. In a prospectively recruited cohort of 76 participants undergoing cardioversion for AF, we found that a novel algorithm analyzing signals recorded using an iPhone 4S accurately distinguished pulse recordings during AF from sinus rhythm. Data are needed to explore the performance and acceptability of smartphone-based applications for AF detection. Copyright © 2013 Heart Rhythm Society. All rights reserved.

  19. Fast and accurate image recognition algorithms for fresh produce food safety sensing

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Kim, Moon S.; Chao, Kuanglin; Kang, Sukwon; Lefcourt, Alan M.

    2011-06-01

    This research developed and evaluated the multispectral algorithms derived from hyperspectral line-scan fluorescence imaging under violet LED excitation for detection of fecal contamination on Golden Delicious apples. The algorithms utilized the fluorescence intensities at four wavebands, 680 nm, 684 nm, 720 nm, and 780 nm, for computation of simple functions for effective detection of contamination spots created on the apple surfaces using four concentrations of aqueous fecal dilutions. The algorithms detected more than 99% of the fecal spots. The effective detection of feces showed that a simple multispectral fluorescence imaging algorithm based on violet LED excitation may be appropriate to detect fecal contamination on fast-speed apple processing lines.

  20. Object detection approach using generative sparse, hierarchical networks with top-down and lateral connections for combining texture/color detection and shape/contour detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paiton, Dylan M.; Kenyon, Garrett T.; Brumby, Steven P.

    An approach to detecting objects in an image dataset may combine texture/color detection, shape/contour detection, and/or motion detection using sparse, generative, hierarchical models with lateral and top-down connections. A first independent representation of objects in an image dataset may be produced using a color/texture detection algorithm. A second independent representation of objects in the image dataset may be produced using a shape/contour detection algorithm. A third independent representation of objects in the image dataset may be produced using a motion detection algorithm. The first, second, and third independent representations may then be combined into a single coherent output using a combinatorial algorithm.

  1. Gas leak detection in infrared video with background modeling

    NASA Astrophysics Data System (ADS)

    Zeng, Xiaoxia; Huang, Likun

    2018-03-01

    Background modeling plays an important role in the task of gas detection based on infrared video. The VIBE algorithm has been a widely used background modeling algorithm in recent years. However, the processing speed of the VIBE algorithm sometimes cannot meet the requirements of real-time detection applications. Therefore, based on the traditional VIBE algorithm, we propose a fast foreground model and, in subsequent processing steps, optimize the results by combining a connected-domain algorithm with the nine-spaces algorithm. Experiments show the effectiveness of the proposed method.
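
    An illustrative pipeline only: OpenCV's MOG2 subtractor stands in for VIBE (which is not in core OpenCV), followed by connected-component filtering that roughly mirrors the post-processing described above; the video path and area threshold are assumptions:

    ```python
    import cv2

    cap = cv2.VideoCapture("ir_gas_video.avi")   # hypothetical infrared clip
    backsub = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = backsub.apply(frame)                # foreground (candidate plume)
        fg = cv2.medianBlur(fg, 5)               # suppress speckle noise
        n, labels, stats, _ = cv2.connectedComponentsWithStats(fg)
        for i in range(1, n):                    # keep sufficiently large blobs
            if stats[i, cv2.CC_STAT_AREA] > 50:
                x, y, w, h = stats[i, :4]
                cv2.rectangle(frame, (x, y), (x + w, y + h), 255, 1)
        cv2.imshow("gas candidates", frame)
        if cv2.waitKey(1) == 27:                 # Esc to quit
            break
    cap.release()
    ```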

  2. Network intrusion detection by the coevolutionary immune algorithm of artificial immune systems with clonal selection

    NASA Astrophysics Data System (ADS)

    Salamatova, T.; Zhukov, V.

    2017-02-01

    The paper presents the application of the artificial immune systems apparatus as a heuristic method of network intrusion detection, for the algorithmic provision of intrusion detection systems. A coevolutionary immune algorithm of artificial immune systems with clonal selection was elaborated. Testing on different datasets yielded empirical evaluations of the algorithm's effectiveness. To gauge its efficiency, the algorithm was compared with analogous methods. The fundamental rules underlying the solutions generated by this algorithm are described in the article.

  3. Benchmarking real-time RGBD odometry for light-duty UAVs

    NASA Astrophysics Data System (ADS)

    Willis, Andrew R.; Sahawneh, Laith R.; Brink, Kevin M.

    2016-06-01

    This article describes the theoretical and implementation challenges associated with generating 3D odometry estimates (delta-pose) from RGBD sensor data in real-time to facilitate navigation in cluttered indoor environments. The underlying odometry algorithm applies to general 6DoF motion; however, the computational platforms, trajectories, and scene content are motivated by their intended use on indoor, light-duty UAVs. Discussion outlines the overall software pipeline for sensor processing and details how algorithm choices for the underlying feature detection and correspondence computation impact the real-time performance and accuracy of the estimated odometry and associated covariance. This article also explores the consistency of odometry covariance estimates and the correlation between successive odometry estimates. The analysis is intended to provide users information needed to better leverage RGBD odometry within the constraints of their systems.

  4. Method to analyze remotely sensed spectral data

    DOEpatents

    Stork, Christopher L [Albuquerque, NM; Van Benthem, Mark H [Middletown, DE

    2009-02-17

    A fast and rigorous multivariate curve resolution (MCR) algorithm is applied to remotely sensed spectral data. The algorithm is applicable in the solar-reflective spectral region, comprising the visible to the shortwave infrared (ranging from approximately 0.4 to 2.5 µm) and midwave infrared, and in the thermal emission spectral region, comprising the thermal infrared (ranging from approximately 8 to 15 µm). For example, employing minimal a priori knowledge, notably non-negativity constraints on the extracted endmember profiles and a constant abundance constraint for the atmospheric upwelling component, MCR can be used to successfully compensate thermal infrared hyperspectral images for atmospheric upwelling and, thereby, transmittance effects. Further, MCR can accurately estimate the relative spectral absorption coefficients and thermal contrast distribution of a gas plume component near the minimum detectable quantity.
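
    A bare-bones MCR sketch by alternating least squares with non-negativity enforced by clipping, on synthetic mixed spectra; the patented method adds further constraints, such as the constant upwelling abundance:

    ```python
    import numpy as np

    def mcr_als(D, k, iters=200, seed=0):
        """Factor D (pixels x bands) as C @ S.T with C, S >= 0 by
        alternating least squares with non-negativity clipping."""
        rng = np.random.default_rng(seed)
        S = rng.random((D.shape[1], k))              # endmember spectra
        for _ in range(iters):
            C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0, None)   # abundances
            S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0, None) # spectra
        return C, S

    # Synthetic 2-endmember scene: mixed spectra plus noise
    rng = np.random.default_rng(7)
    true_S = np.abs(rng.normal(size=(50, 2)))        # 50 bands, 2 components
    true_C = rng.random((300, 2))                    # 300 pixels
    D = true_C @ true_S.T + 0.01 * rng.normal(size=(300, 50))
    C, S = mcr_als(D, k=2)
    print("reconstruction error:",
          np.linalg.norm(D - C @ S.T) / np.linalg.norm(D))
    ```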

  5. Change detection using landsat time series: A review of frequencies, preprocessing, algorithms, and applications

    NASA Astrophysics Data System (ADS)

    Zhu, Zhe

    2017-08-01

    The free and open access to all archived Landsat images in 2008 completely changed the way Landsat data are used. Many novel change detection algorithms based on Landsat time series have been developed. We present a comprehensive review of four important aspects of change detection studies based on Landsat time series: frequencies, preprocessing, algorithms, and applications. We observed the trend that the more recent the study, the higher the frequency of the Landsat time series used. We reviewed a series of image preprocessing steps, including atmospheric correction, cloud and cloud shadow detection, and composite/fusion/metrics techniques. We divided all change detection algorithms into six categories: thresholding, differencing, segmentation, trajectory classification, statistical boundary, and regression. Within each category, six major characteristics of the different algorithms were analyzed, such as frequency, change index, univariate/multivariate, online/offline, abrupt/gradual change, and sub-pixel/pixel/spatial. Moreover, some of the widely used change detection algorithms are also discussed. Finally, we reviewed different change detection applications, dividing them into two categories: change target detection and change agent detection.

  6. Vehicle Localization by LIDAR Point Correlation Improved by Change Detection

    NASA Astrophysics Data System (ADS)

    Schlichting, A.; Brenner, C.

    2016-06-01

    LiDAR sensors are proven sensors for accurate vehicle localization. Instead of detecting and matching features in the LiDAR data, we want to use the entire information provided by the scanners. As dynamic objects, like cars, pedestrians or even construction sites, could lead to wrong localization results, we use a change detection algorithm to detect these objects in the reference data. If an object occurs in a certain number of measurements at the same position, we mark it, and every point belonging to it, as static. In the next step, we merge the data of the single measurement epochs into one reference dataset, in which we only use static points. Further, we also use a classification algorithm to detect trees. For the online localization of the vehicle, we use simulated data of a vertically aligned automotive LiDAR sensor. As we only want to use static objects in this case as well, we use a random forest classifier to detect dynamic scan points online. Since the automotive data is derived from the LiDAR Mobile Mapping System, we are able to use the labelled objects from the reference data generation step to create the training data and, further, to detect dynamic objects online. Localization can then be done by a point-to-image correlation method using only static objects. We achieved a localization standard deviation of about 5 cm (position) and 0.06° (heading), and were able to successfully localize the vehicle in about 93% of the cases along a trajectory of 13 km in Hannover, Germany.

  7. From Data to Knowledge — Faster: GOES Early Fire Detection System to Inform Operational Wildfire Response and Management

    NASA Astrophysics Data System (ADS)

    Koltunov, A.; Quayle, B.; Prins, E. M.; Ambrosia, V. G.; Ustin, S.

    2014-12-01

    Fire managers at various levels require near-real-time, low-cost, systematic, and reliable early detection capabilities with minimal latency to effectively respond to wildfire ignitions and minimize the risk of catastrophic development. The GOES satellite images collected for vast territories at high temporal frequencies provide a consistent and reliable source for operational active fire mapping, realized by the WF-ABBA algorithm. However, their potential to provide early warning or rapid confirmation of initial fire ignition reports from conventional sources remains underutilized, partly because operational wildfire detection has been successfully optimized for users and applications for which timeliness of initial detection is a low priority, in contrast to the needs of first responders. We present our progress in developing the GOES Early Fire Detection (GOES-EFD) system, a collaborative effort led by the University of California-Davis and the USDA Forest Service. GOES-EFD specifically focuses on first-detection timeliness for wildfire incidents. It is automatically trained for a monitored scene and capitalizes on multiyear cross-disciplinary algorithm research. Initial retrospective tests in the Western US demonstrate significantly earlier detection of new ignitions than existing operational capabilities, with prospects for further improvement. The GOES-EFD-β prototype will initially be deployed for the Western US region to process imagery from GOES-NOP and the more rapid, four-times-higher spatial resolution imagery from GOES-R, the upcoming next generation of GOES satellites. These and other enhanced capabilities of GOES-R are expected to significantly improve the timeliness of fire ignition information from GOES-EFD.

  8. Identification of Matra Region and Overlapping Characters for OCR of Printed Bengali Scripts

    NASA Astrophysics Data System (ADS)

    Goswami, Subhra Sundar

    One of the important reasons for a poor recognition rate in optical character recognition (OCR) systems is error in character segmentation. In the case of Bangla scripts, these errors occur for several reasons, including incorrect detection of the matra (headline), over-segmentation, and under-segmentation. We have proposed a robust method for detecting the headline region. The existence of overlapping characters (in under-segmented parts) in scanned printed documents is a major problem in designing an effective character segmentation procedure for OCR systems. In this paper, a predictive algorithm is developed for effectively identifying overlapping characters and then selecting the cut-borders for segmentation. Our method can be successfully used to achieve high recognition results.
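
    The paper's predictive segmentation algorithm is not reproduced here, but the standard projection-profile idea behind headline detection can be sketched as follows (a simplified illustration; the function name and the fraction parameter are assumptions):

        import numpy as np

        def matra_band(binary_word, frac=0.6):
            """Find the headline (matra) rows of a binarized Bangla word image.

            binary_word: 2-D array with 1 = ink. The matra shows up as the
            peak of the horizontal projection profile in the upper half.
            """
            row_ink = binary_word.sum(axis=1)            # ink count per row
            upper = row_ink[: binary_word.shape[0] // 2] # matra lies in the upper half
            peak = upper.max()
            rows = np.where(upper >= frac * peak)[0]
            return int(rows.min()), int(rows.max())      # top/bottom rows of the matra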

  9. Sparse signal representation and its applications in ultrasonic NDE.

    PubMed

    Zhang, Guang-Ming; Zhang, Cheng-Zhong; Harvey, David M

    2012-03-01

    Many sparse signal representation (SSR) algorithms have been developed in the past decade. The advantages of SSR, such as compact representations and super resolution, lead to state-of-the-art performance of SSR for processing ultrasonic non-destructive evaluation (NDE) signals. Choosing a suitable SSR algorithm and designing an appropriate overcomplete dictionary is key to success. After a brief review of sparse signal representation methods and the design of overcomplete dictionaries, this paper addresses the recent accomplishments of SSR for processing ultrasonic NDE signals. The advantages and limitations of SSR algorithms and the various overcomplete dictionaries widely used in ultrasonic NDE applications are explored in depth. Their performance improvement compared to conventional signal processing methods in many applications, such as ultrasonic flaw detection and noise suppression, echo separation and echo estimation, and ultrasonic imaging, is investigated. The challenging issues met in practical ultrasonic NDE applications, for example the design of a good dictionary, are discussed. Representative experimental results are presented for demonstration. Copyright © 2011 Elsevier B.V. All rights reserved.
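
    As a concrete example of the sparse coding step common to the SSR methods reviewed here, Orthogonal Matching Pursuit greedily builds a sparse representation of a signal over an overcomplete dictionary (a generic sketch, not tied to any specific method in the paper):

        import numpy as np

        def omp(D, y, n_nonzero):
            """Sparse-code y ~ D @ x over dictionary D with unit-norm columns."""
            residual, support = y.astype(float).copy(), []
            x = np.zeros(D.shape[1])
            for _ in range(n_nonzero):
                # Pick the atom most correlated with the current residual
                support.append(int(np.argmax(np.abs(D.T @ residual))))
                # Re-fit all selected coefficients by least squares
                coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
                residual = y - D[:, support] @ coef
            x[support] = coef
            return x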

  10. TH-E-BRE-07: Development of Dose Calculation Error Predictors for a Widely Implemented Clinical Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egan, A; Laub, W

    2014-06-15

    Purpose: Several shortcomings of the current implementation of the analytic anisotropic algorithm (AAA) may lead to dose calculation errors in highly modulated treatments delivered to highly heterogeneous geometries. Here we introduce a set of dosimetric error predictors that can be applied to a clinical treatment plan and patient geometry in order to identify high-risk plans. Once a problematic plan is identified, the treatment can be recalculated with a more accurate algorithm in order to better assess its viability. Methods: Here we focus on three distinct sources of dosimetric error in the AAA algorithm. First, due to a combination of discrepancies in small-field beam modeling as well as volume averaging effects, dose calculated through small MLC apertures can be underestimated, while that behind small MLC blocks can be overestimated. Second, due to the rectilinear scaling of the Monte Carlo generated pencil beam kernel, energy is not properly transported through heterogeneities near, but not impeding, the central axis of the beamlet. And third, AAA overestimates dose in regions of very low density (< 0.2 g/cm³). We have developed an algorithm to detect the location and magnitude of each scenario within the patient geometry, namely the field-size index (FSI), the heterogeneous scatter index (HSI), and the low-density index (LDI), respectively. Results: Error indices successfully identify deviations between AAA and Monte Carlo dose distributions in simple phantom geometries. Algorithms are currently implemented in the MATLAB computing environment and are able to run on a typical RapidArc head and neck geometry in less than an hour. Conclusion: Because these error indices successfully identify each type of error in contrived cases, with sufficient benchmarking, this method can be developed into a clinical tool that may help estimate AAA dose calculation errors and indicate when it might be advisable to use Monte Carlo calculations.

  11. Registration of central paths and colonic polyps between supine and prone scans in computed tomography colonography: Pilot study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Ping; Napel, Sandy; Acar, Burak

    2004-10-01

    Computed tomography colonography (CTC) is a minimally invasive method that allows the evaluation of the colon wall from CT sections of the abdomen/pelvis. The primary goal of CTC is to detect colonic polyps, precursors to colorectal cancer. Because imperfect cleansing and distension can cause portions of the colon wall to be collapsed, covered with water, and/or covered with retained stool, patients are scanned in both prone and supine positions. We believe that both reading efficiency and computer aided detection (CAD) of CTC images can be improved by accurate registration of data from the supine and prone positions. We developed a two-stage approach that first registers the colonic central paths using a heuristic and automated algorithm and then matches polyps or polyp candidates (CAD hits) by a statistical approach. We evaluated the registration algorithm on 24 patient cases. After path registration, the mean misalignment distance between prone and supine identical anatomic landmarks was reduced from 47.08 to 12.66 mm, a 73% improvement. The polyp registration algorithm was specifically evaluated using eight patient cases for which radiologists identified polyps separately for both supine and prone data sets, and then manually registered corresponding pairs. The algorithm correctly matched 78% of these pairs without user input. The algorithm was also applied to the 30 highest-scoring CAD hits in the prone and supine scans and showed a success rate of 50% in automatically registering corresponding polyp pairs. Finally, we computed the average number of CAD hits that need to be manually compared in order to find the correct matches among the top 30 CAD hits. With polyp registration, the average number of comparisons was 1.78 per polyp, as opposed to 4.28 comparisons without polyp registration.

  12. Research on infrared dim-point target detection and tracking under sea-sky-line complex background

    NASA Astrophysics Data System (ADS)

    Dong, Yu-xing; Li, Yan; Zhang, Hai-bo

    2011-08-01

    Target detection and tracking in infrared imagery is an important part of modern military defense systems. Detecting and recognizing infrared dim point targets under complex backgrounds is a challenging research topic of considerable strategic value. The main objects detected by a carrier-borne infrared vigilance system are sea-skimming aircraft and missiles. Because of the vigilance system's wide field of view, the target usually lies within sea clutter, which makes detection and recognition considerably more difficult. Traditional point target detection algorithms, such as adaptive background prediction, work well when the background has a dispersion-decreasing structure; but when the background contains large gray-level gradients, such as the sea-sky line or sea waves, these local areas produce a high false-alarm rate and the results are unsatisfactory. Because a dim point target has no obvious geometric or texture features, in our opinion the detection of dim point targets in an image is, from a mathematical perspective, a problem of singular function analysis, and from an image-processing perspective, the key problem is judging isolated singularities in the image. In essence, dim point target detection is the separation of target and background according to their different singularity characteristics. Images from infrared sensors are usually accompanied by various kinds of noise, caused by the complicated background or by the sensor itself, which can affect target detection and tracking. The purpose of preprocessing is therefore to reduce the effects of noise, raise the SNR of the image, and increase the contrast between target and background. According to the characteristics of low sea-skimming infrared small targets, a median filter is used to eliminate noise and improve the signal-to-noise ratio; a multi-point, multi-layer vertical Sobel algorithm is then used to detect the sea-sky line, so that sea and sky can be segmented in the image. Finally, a centroid tracking method is used to capture and trace the target. This method has been successfully used to track targets under complex sea-sky backgrounds.
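
    The sea-sky-line step can be illustrated with a simplified single-pass version of the filtering-plus-vertical-Sobel idea (the paper's multi-point, multi-layer variant is not specified in detail, so this sketch is an approximation):

        import numpy as np
        from scipy.ndimage import median_filter, sobel

        def sea_sky_line_row(ir_frame):
            """Estimate the image row of the sea-sky line in an IR frame."""
            smooth = median_filter(ir_frame.astype(float), size=3)  # suppress point noise
            edge = np.abs(sobel(smooth, axis=0))                    # vertical gradient
            return int(np.argmax(edge.sum(axis=1)))                 # strongest horizontal edge

        def centroid(patch):
            """Intensity-weighted centroid used for centroid tracking."""
            ys, xs = np.indices(patch.shape)
            total = patch.sum()
            return (ys * patch).sum() / total, (xs * patch).sum() / total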

  13. Automatic detection and severity measurement of eczema using image processing.

    PubMed

    Alam, Md Nafiul; Munia, Tamanna Tabassum Khan; Tavakolian, Kouhyar; Vasefi, Fartash; MacKinnon, Nick; Fazel-Rezai, Reza

    2016-08-01

    Chronic skin diseases like eczema may lead to severe health and financial consequences for patients if not detected and controlled early. Early measurement of disease severity, combined with a recommendation for skin protection and use of appropriate medication, can prevent the disease from worsening. Current diagnosis can be costly and time-consuming. In this paper, an automatic eczema detection and severity measurement model is presented using modern image processing and computer vision algorithms. The system can successfully detect regions of eczema and classify the identified region as mild or severe based on image color and texture features. The model then automatically measures the skin parameters used in the most common assessment tool, the Eczema Area and Severity Index (EASI), by computing the eczema affected area score, eczema intensity score, and body region score, allowing both patients and physicians to accurately assess the affected skin.

  14. Residual Error Based Anomaly Detection Using Auto-Encoder in SMD Machine Sound.

    PubMed

    Oh, Dong Yul; Yun, Il Dong

    2018-04-24

    Detecting an anomaly or an abnormal situation in the presence of noise is highly useful in environments where a machine must be constantly verified and monitored. As deep learning algorithms have developed, studies have increasingly focused on this problem. However, there are too many variables to define anomalies exhaustively, and human annotation of a large collection of abnormal data labeled at the class level is very labor-intensive. In this paper, we propose to detect abnormal operation sounds or outliers in a very complex machine while reducing the data annotation cost. The architecture of the proposed model is based on an auto-encoder, and it uses the residual error, which reflects reconstruction quality, to identify anomalies. We assess our model using Surface-Mounted Device (SMD) machine sound, which is very complex, as experimental data, and state-of-the-art performance is successfully achieved for anomaly detection.
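
    A minimal sketch of the residual-error idea, assuming fixed-length float tensors of sound features as input (the network shape, optimizer settings, and function names here are placeholders, not the authors' architecture):

        import torch
        import torch.nn as nn

        class AE(nn.Module):
            """Small dense auto-encoder over fixed-length sound-feature vectors."""
            def __init__(self, dim, code=16):
                super().__init__()
                self.enc = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, code))
                self.dec = nn.Sequential(nn.Linear(code, 64), nn.ReLU(), nn.Linear(64, dim))
            def forward(self, x):
                return self.dec(self.enc(x))

        def residual_scores(normal, test, epochs=100):
            """Train on normal frames only; score test frames by reconstruction error."""
            model = AE(normal.shape[1])
            opt = torch.optim.Adam(model.parameters(), lr=1e-3)
            for _ in range(epochs):
                opt.zero_grad()
                nn.functional.mse_loss(model(normal), normal).backward()
                opt.step()
            with torch.no_grad():
                return ((model(test) - test) ** 2).mean(dim=1)  # high = anomalous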

  15. Closed Loop Active Flow Separation Detection and Control in a Multistage Compressor

    NASA Technical Reports Server (NTRS)

    Bright, Michelle M.; Culley, Dennis E.; Braunscheidel, Edward P.; Welch, Gerard E.

    2005-01-01

    Active closed loop flow control was successfully demonstrated on a full annulus of stator vanes in a low speed axial compressor. Two independent methods of detecting separated flow conditions on the vane suction surface were developed. The first technique detects changes in static pressure along the vane suction surface, while the second method monitors variation in the potential field of the downstream rotor. Both methods may feasibly be used in future engines employing embedded flow control technology. In response to the detection of separated conditions, injection along the suction surface of each vane was used. Injected mass flow on the suction surface of stator vanes is known to reduce separation and the resulting limitation on static pressure rise due to lowered diffusion in the vane passage. A control algorithm was developed which provided a proportional response of the injected mass flow to the degree of separation, thereby minimizing the performance penalty on the compressor system.

  16. Early Results from the Global Precipitation Measurement (GPM) Mission in Japan

    NASA Astrophysics Data System (ADS)

    Kachi, Misako; Kubota, Takuji; Masaki, Takeshi; Kaneko, Yuki; Kanemaru, Kaya; Oki, Riko; Iguchi, Toshio; Nakamura, Kenji; Takayabu, Yukari N.

    2015-04-01

    The Global Precipitation Measurement (GPM) mission is an international collaboration to achieve highly accurate and highly frequent global precipitation observations. The GPM mission consists of the GPM Core Observatory, jointly developed by the U.S. and Japan, and Constellation Satellites that carry microwave radiometers and are provided by the GPM partner agencies. The Dual-frequency Precipitation Radar (DPR) was developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT) and installed on the GPM Core Observatory. The GPM Core Observatory uses a non-sun-synchronous orbit to carry on the diurnal-cycle rainfall observations of the Tropical Rainfall Measuring Mission (TRMM) satellite and was successfully launched at 3:37 a.m. on February 28, 2014 (JST). The Constellation Satellites, including JAXA's Global Change Observation Mission - Water (GCOM-W1), or "SHIZUKU," are launched by each partner agency around 2014 and contribute to expanded observation coverage and increased observation frequency. JAXA develops the DPR Level 1 algorithm, and the NASA-JAXA Joint Algorithm Team develops the DPR Level 2 and DPR-GMI combined Level 2 algorithms. JAXA also develops the Global Rainfall Map (GPM-GSMaP) algorithm, the latest version of the Global Satellite Mapping of Precipitation (GSMaP), as a national product that distributes an hourly, 0.1-degree horizontal resolution rainfall map. Major improvements in the GPM-GSMaP algorithm are: 1) improvements in the microwave imager algorithm based on the AMSR2 precipitation standard algorithm, including a new land algorithm and a new coast detection scheme; 2) development of an orographic rainfall correction method for warm rainfall in coastal areas (Taniguchi et al., 2012); 3) updates of databases, including rainfall detection over land and the land surface emission database; 4) development of a microwave sounder algorithm over land (Kida et al., 2012); and 5) development of a gauge-calibrated GSMaP algorithm (Ushio et al., 2013). In addition to these algorithm improvements, the number of passive microwave imagers and/or sounders used in GPM-GSMaP was increased compared to the previous version. After early calibration and validation of the products and confirmation that all products achieved the release criteria, all GPM standard products and the GPM-GSMaP product have been released to the public since September 2014. The GPM products can be downloaded via the internet through the JAXA G-Portal (https://www.gportal.jaxa.jp).

  17. Revision of an automated microseismic location algorithm for DAS - 3C geophone hybrid array

    NASA Astrophysics Data System (ADS)

    Mizuno, T.; LeCalvez, J.; Raymer, D.

    2017-12-01

    Application of distributed acoustic sensing (DAS) has been studied in several areas of seismology. One of these areas is microseismic reservoir monitoring (e.g., Molteni et al., 2017, First Break). Considering the present limitations of DAS, which include a relatively low signal-to-noise ratio (SNR) and no 3C polarization measurements, a DAS - 3C geophone hybrid array is a practical option when using a single monitoring well. Considering the large volume of data from distributed sensing, microseismic event detection and location using a source-scanning type algorithm is a reasonable choice, especially for real-time monitoring. The algorithm must handle both strain rate along the borehole axis for DAS and particle velocity for 3C geophones. Only a small number of high-SNR events will be detected throughout a large aperture encompassing the hybrid array; therefore, the aperture is to be optimized dynamically to eliminate noisy channels for the majority of events. For such a hybrid array, coalescence microseismic mapping (CMM) (Drew et al., 2005, SPE) was revised. CMM forms a likelihood function of event location and origin time. At each receiver, a time function of event arrival likelihood is inferred using an SNR function, and it is migrated in time and space to determine the hypocenter and origin time likelihood. This algorithm was revised to dynamically optimize such a hybrid array by identifying receivers where a microseismic signal is possibly detected and using only those receivers to compute the likelihood function. Currently, peak SNR is used to select receivers. To prevent false results due to small aperture, a minimum aperture threshold is employed. The algorithm refines the location likelihood using 3C geophone polarization. We tested this algorithm using a ray-based synthetic dataset. The method of Leaney (2014, PhD thesis, UBC) is used to compute particle velocity at the receivers. Strain rate along the borehole axis is computed from particle velocity as DAS microseismic synthetic data. The likelihood function formed by both DAS and geophones behaves as expected, with the aperture dynamically selected depending on the SNR of the event. We conclude that this algorithm can be successfully applied to such hybrid arrays to monitor microseismic activity. A study using a recently acquired dataset is planned.

  18. Comparison of human observer and algorithmic target detection in nonurban forward-looking infrared imagery

    NASA Astrophysics Data System (ADS)

    Weber, Bruce A.

    2005-07-01

    We have performed an experiment that compares the performance of human observers with that of a robust algorithm for the detection of targets in difficult, nonurban forward-looking infrared imagery. Our purpose was to benchmark the comparison and document performance differences for future algorithm improvement. The scale-insensitive detection algorithm, used as a benchmark by the Night Vision Electronic Sensors Directorate for algorithm evaluation, employed a combination of contrastlike features to locate targets. Detection receiver operating characteristic curves and observer-confidence analyses were used to compare human and algorithmic responses and to gain insight into differences. The test database contained ground targets, in natural clutter, whose detectability, as judged by human observers, ranged from easy to very difficult. In general, as compared with human observers, the algorithm detected most of the same targets, but correlated confidence with correct detections poorly and produced many more false alarms at any useful level of performance. Though characterizing human performance was not the intent of this study, results suggest that previous observational experience was not a strong predictor of human performance, and that combining individual human observations by majority vote significantly reduced false-alarm rates.

  19. SA-SOM algorithm for detecting communities in complex networks

    NASA Astrophysics Data System (ADS)

    Chen, Luogeng; Wang, Yanran; Huang, Xiaoming; Hu, Mengyu; Hu, Fang

    2017-10-01

    Community detection is currently a hot topic. Building on the self-organizing map (SOM) algorithm and introducing the idea of self-adaptation (SA), whereby the number of communities can be identified automatically, this paper proposes SA-SOM, a novel algorithm for detecting communities in complex networks. Several representative real-world networks and a set of computer-generated networks from the LFR benchmark are used to verify the accuracy and efficiency of the algorithm. The experimental findings demonstrate that the algorithm can identify communities automatically, accurately, and efficiently. Furthermore, it also achieves higher values of modularity, NMI, and density than the SOM algorithm does.

  20. Recent Advancements in Lightning Jump Algorithm Work

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2010-01-01

    In the past year, the primary objectives were to show the usefulness of total lightning as compared to traditional cloud-to-ground (CG) networks, to test the lightning jump algorithm configurations in other regions of the country, to increase the number of thunderstorms within our thunderstorm database, and to pinpoint environments that could prove difficult for any lightning jump configuration. A total of 561 thunderstorms have been examined in the past year (409 non-severe, 152 severe) from four regions of the country (North Alabama, Washington D.C., High Plains of CO/KS, and Oklahoma). Results continue to indicate that the 2σ lightning jump algorithm configuration holds the most promise among prospective operational lightning jump algorithms, with a probability of detection (POD) of 81%, a false alarm rate (FAR) of 45%, a critical success index (CSI) of 49%, and a Heidke Skill Score (HSS) of 0.66. The second best performing configuration was the Threshold 4 algorithm, which had a POD of 72%, FAR of 51%, CSI of 41%, and HSS of 0.58. Because a more complex algorithm configuration shows the most promise, accurate thunderstorm cell tracking must be undertaken to track lightning trends on an individual thunderstorm basis over time. While these numbers for the 2σ configuration are impressive, the algorithm does have weaknesses. Specifically, low-topped and tropical cyclone thunderstorm environments present problems for the 2σ lightning jump algorithm because of the impact of suppressed vertical depth on overall flash counts (i.e., a relative dearth of lightning). For example, in a sample of 120 thunderstorms from northern Alabama that contained 72 events missed by the 2σ algorithm, 36% of the misses were associated with these two environments (17 storms).
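
    A simplified reading of the sigma-based jump test can be sketched as follows (the bin length, history size, and the 2-sigma rule follow the published configuration only loosely; treat this as illustrative):

        import numpy as np

        def lightning_jumps(flash_counts, dt_min=2.0, history=5):
            """Flag jump times in a storm's total flash-rate series.

            flash_counts: flashes per dt_min-minute bin for one tracked cell.
            A jump is declared when the rate of change (DFRDT) exceeds the
            mean + 2 sigma of the DFRDT over the preceding history bins.
            """
            dfrdt = np.diff(flash_counts) / dt_min
            jumps = []
            for i in range(history, len(dfrdt)):
                hist = dfrdt[i - history:i]
                if dfrdt[i] > hist.mean() + 2.0 * hist.std():
                    jumps.append(i)
            return jumps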

  1. A novel adaptive, real-time algorithm to detect gait events from wearable sensors.

    PubMed

    Chia Bejarano, Noelia; Ambrosini, Emilia; Pedrocchi, Alessandra; Ferrigno, Giancarlo; Monticone, Marco; Ferrante, Simona

    2015-05-01

    A real-time, adaptive algorithm based on two inertial and magnetic sensors placed on the shanks was developed for gait-event detection. For each leg, the algorithm detected the Initial Contact (IC) as the minimum of the flexion/extension angle, and the End Contact (EC) and the Mid-Swing (MS) as the minimum and maximum of the angular velocity, respectively. The algorithm consisted of calibration, real-time detection, and step-by-step update. Data collected from 22 healthy subjects (21 to 85 years) walking at three self-selected speeds were used to validate the algorithm against the GaitRite system. Comparable levels of accuracy and significantly lower detection delays were achieved with respect to other published methods. The algorithm's robustness was tested on ten healthy subjects performing sudden speed changes and on ten stroke subjects (43 to 89 years). For healthy subjects, F1-scores of 1 and mean detection delays lower than 14 ms were obtained. For stroke subjects, F1-scores of 0.998 and 0.944 were obtained for IC and EC, respectively, with mean detection delays always below 31 ms. The algorithm accurately detected gait events in real time from a heterogeneous dataset of gait patterns and paves the way for the design of closed-loop controllers for customized gait training and/or assistive devices.
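
    The stated event rules translate almost directly into a peak-picking sketch (the sampling rate and the minimum stride spacing below are assumptions, not the paper's adaptive thresholds):

        import numpy as np
        from scipy.signal import find_peaks

        def gait_events(angle, gyro, fs=100.0, min_stride_s=0.5):
            """IC = minima of the flexion/extension angle; EC and MS = minima
            and maxima of the sagittal angular velocity, per the abstract."""
            gap = int(min_stride_s * fs)  # enforce spacing between strides
            ic, _ = find_peaks(-np.asarray(angle), distance=gap)
            ec, _ = find_peaks(-np.asarray(gyro), distance=gap)
            ms, _ = find_peaks(np.asarray(gyro), distance=gap)
            return ic / fs, ec / fs, ms / fs  # event times in seconds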

  2. Surveying alignment-free features for Ortholog detection in related yeast proteomes by using supervised big data classifiers.

    PubMed

    Galpert, Deborah; Fernández, Alberto; Herrera, Francisco; Antunes, Agostinho; Molina-Ruiz, Reinaldo; Agüero-Chapin, Guillermin

    2018-05-03

    The development of new ortholog detection algorithms and the improvement of existing ones are of major importance in functional genomics. We have previously introduced a successful supervised pairwise ortholog classification approach implemented in a big data platform that considered several pairwise protein features and the low ortholog pair ratios found between two annotated proteomes (Galpert, D. et al., BioMed Research International, 2015). The supervised models were built and tested using a Saccharomycete yeast benchmark dataset proposed by Salichos and Rokas (2011). Although several pairwise protein features were combined in a supervised big data approach, they were all, to some extent, alignment-based, and the proposed algorithms were evaluated on a single test set. Here, we aim to evaluate the impact of alignment-free features on the performance of supervised models implemented in the Spark big data platform for pairwise ortholog detection in several related yeast proteomes. The Spark Random Forests and Decision Trees with oversampling and undersampling techniques, built with only alignment-based similarity measures or combined with several alignment-free pairwise protein features, showed the highest classification performance for ortholog detection in three yeast proteome pairs. Although these supervised approaches outperformed traditional methods, there were no significant differences between the exclusive use of alignment-based similarity measures and their combination with alignment-free features, even within the twilight zone of the studied proteomes. Only when alignment-based and alignment-free features were combined in Spark Decision Trees with imbalance management could a higher success rate (98.71%) within the twilight zone be achieved, for a yeast proteome pair that underwent a whole genome duplication. The feature selection study showed that alignment-based features were top-ranked for the best classifiers, while the runners-up were alignment-free features related to amino acid composition. The incorporation of alignment-free features in supervised big data models did not significantly improve ortholog detection in yeast proteomes relative to the classification quality achieved with alignment-based similarity measures alone. However, the similarity of their classification performance to that of traditional ortholog detection methods encourages the evaluation of other alignment-free protein pair descriptors in future research.

  3. Multisensor-based real-time quality monitoring by means of feature extraction, selection and modeling for Al alloy in arc welding

    NASA Astrophysics Data System (ADS)

    Zhang, Zhifen; Chen, Huabin; Xu, Yanling; Zhong, Jiyong; Lv, Na; Chen, Shanben

    2015-08-01

    Multisensor data fusion-based online welding quality monitoring has gained increasing attention in intelligent welding processes. This paper focuses on the automatic detection of typical welding defects for Al alloy in gas tungsten arc welding (GTAW) by analyzing the arc spectrum, sound, and voltage signals. Based on the developed algorithms in the time and frequency domains, 41 feature parameters were successively extracted from these signals to characterize the welding process and seam quality. Then, the proposed feature selection approach, a hybrid Fisher-based filter and wrapper, was successfully utilized to evaluate the sensitivity of each feature and reduce the feature dimensionality. Finally, an optimal feature subset with 19 features was selected that obtained the highest accuracy, 94.72%, using the established classification model. This study provides a guideline for feature extraction, selection, and dynamic modeling based on heterogeneous multisensor data to achieve a reliable online defect detection system in arc welding.

  4. Neural-net-based image matching

    NASA Astrophysics Data System (ADS)

    Jerebko, Anna K.; Barabanov, Nikita E.; Luciv, Vadim R.; Allinson, Nigel M.

    2000-04-01

    The paper describes a neural-based method for matching spatially distorted image sets. The matching of partially overlapping images is important in many applications: integrating information from images formed in different spectral ranges, detecting changes in a scene, and identifying objects of differing orientations and sizes. Our approach consists of extracting contour features from both images, describing the contour curves as sets of line segments, comparing these sets, determining the corresponding curves and their common reference points, and calculating the image-to-image coordinate transformation parameters on the basis of the most successful variant of the derived curve relationships. The main steps are performed by custom neural networks. The algorithms described in this paper have been successfully tested on a large set of images of the same terrain taken in different spectral ranges, at different seasons, and rotated by various angles. In general, this experimental verification indicates that the proposed method for image fusion allows the robust detection of similar objects in noisy, distorted scenes where traditional approaches often fail.

  5. AdaBoost-based algorithm for network intrusion detection.

    PubMed

    Hu, Weiming; Hu, Wei; Maybank, Steve

    2008-04-01

    Network intrusion detection aims at distinguishing the attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack fashions, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers. The decision rules are provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data.
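
    The weak-learner setup described here is straightforward to reproduce with off-the-shelf tooling; the sketch below uses scikit-learn stumps on stand-in numeric features (the paper's categorical decision rules and adaptable initial weights are not replicated):

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 10))                # stand-in connection features
        y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # stand-in attack label

        # Depth-1 trees = decision stumps, combined into a strong classifier
        clf = AdaBoostClassifier(
            estimator=DecisionTreeClassifier(max_depth=1),  # "base_estimator" pre scikit-learn 1.2
            n_estimators=100,
        )
        clf.fit(X, y)
        print("training accuracy:", clf.score(X, y))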

  6. Corner detection and sorting method based on improved Harris algorithm in camera calibration

    NASA Astrophysics Data System (ADS)

    Xiao, Ying; Wang, Yonghong; Dan, Xizuo; Huang, Anqi; Hu, Yue; Yang, Lianxiang

    2016-11-01

    In the traditional Harris corner detection algorithm, the threshold used to eliminate false corners is selected manually. In order to detect corners automatically, an improved algorithm which combines the Harris detector with the circular boundary theory of corners is proposed in this paper. After accurate corner coordinates are detected using the Harris and Forstner algorithms, false corners within the chessboard pattern of the calibration plate can be eliminated automatically using circular boundary theory. Moreover, a corner sorting method based on an improved calibration plate is proposed to eliminate false background corners and sort the remaining corners in order. Experimental results show that the proposed algorithms can eliminate all false corners and sort the remaining corners correctly and automatically.

  7. QRS Detection Algorithm for Telehealth Electrocardiogram Recordings.

    PubMed

    Khamis, Heba; Weiss, Robert; Xie, Yang; Chang, Chan-Wei; Lovell, Nigel H; Redmond, Stephen J

    2016-07-01

    QRS detection algorithms are needed to analyze electrocardiogram (ECG) recordings generated in telehealth environments. However, the numerous published QRS detectors focus on clean clinical data. Here, a "UNSW" QRS detection algorithm is described that is suitable for clinical ECG and also poorer quality telehealth ECG. The UNSW algorithm generates a feature signal containing information about ECG amplitude and derivative, which is filtered according to its frequency content, and an adaptive threshold is applied. The algorithm was tested on clinical and telehealth ECG and its QRS detection performance is compared to the Pan-Tompkins (PT) and Gutiérrez-Rivas (GR) algorithms. For the MIT-BIH Arrhythmia database (virtually artifact free, clinical ECG), the overall sensitivity (Se) and positive predictivity (+P) of the UNSW algorithm were >99%, which was comparable to PT and GR. When applied to the MIT-BIH noise stress test database (clinical ECG with added calibrated noise) after artifact masking, all three algorithms had overall Se >99%, and the UNSW algorithm had higher +P (98%, p < 0.05) than PT and GR. For 250 telehealth ECG records (unsupervised recordings; dry metal electrodes), the UNSW algorithm had 98% Se and 95% +P, which was superior to PT (+P: p < 0.001) and GR (Se and +P: p < 0.001). This is the first study to describe a QRS detection algorithm for telehealth data and evaluate it on clinical and telehealth ECG with superior results to published algorithms. The UNSW algorithm could be used to manage increasing telehealth ECG analysis workloads.

  8. Comparative analysis of peak-detection techniques for comprehensive two-dimensional chromatography.

    PubMed

    Latha, Indu; Reichenbach, Stephen E; Tao, Qingping

    2011-09-23

    Comprehensive two-dimensional gas chromatography (GC×GC) is a powerful technology for separating complex samples. The typical goal of GC×GC peak detection is to aggregate data points of analyte peaks based on their retention times and intensities. Two techniques commonly used for two-dimensional peak detection are the two-step algorithm and the watershed algorithm. A recent study [4] compared the performance of the two-step and watershed algorithms for GC×GC data with retention-time shifts in the second-column separations. In that analysis, the peak retention-time shifts were corrected while applying the two-step algorithm but the watershed algorithm was applied without shift correction. The results indicated that the watershed algorithm has a higher probability of erroneously splitting a single two-dimensional peak than the two-step approach. This paper reconsiders the analysis by comparing peak-detection performance for resolved peaks after correcting retention-time shifts for both the two-step and watershed algorithms. Simulations with wide-ranging conditions indicate that when shift correction is employed with both algorithms, the watershed algorithm detects resolved peaks with greater accuracy than the two-step method. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. Bio-ALIRT biosurveillance detection algorithm evaluation.

    PubMed

    Siegrist, David; Pavlin, J

    2004-09-24

    Early detection of disease outbreaks by a medical biosurveillance system relies on two major components: 1) the contribution of early and reliable data sources and 2) the sensitivity, specificity, and timeliness of biosurveillance detection algorithms. This paper describes an effort to assess leading detection algorithms by arranging a common challenge problem and providing a common data set. The objectives of this study were to determine whether automated detection algorithms can reliably and quickly identify the onset of natural disease outbreaks that are surrogates for possible terrorist pathogen releases, and do so at acceptable false-alert rates (e.g., once every 2-6 weeks). Historic de-identified data were obtained from five metropolitan areas over 23 months; these data included International Classification of Diseases, Ninth Revision (ICD-9) codes related to respiratory and gastrointestinal illness syndromes. An outbreak detection group identified and labeled two natural disease outbreaks in these data and provided them to analysts for training of detection algorithms. All outbreaks in the remaining test data were identified but not revealed to the detection groups until after their analyses. The algorithms established a probability of outbreak for each day's counts. The probability of outbreak was assessed as an "actual" alert for different false-alert rates. The best algorithms were able to detect all of the outbreaks at false-alert rates of one every 2-6 weeks. They were often able to raise an alert on the same day that human investigators had identified as the true start of the outbreak. Because minimal data exist for an actual biologic attack, determining how quickly an algorithm might detect such an attack is difficult. However, application of these algorithms in combination with other data-analysis methods to historic outbreak data indicates that biosurveillance techniques for analyzing syndrome counts can rapidly detect seasonal respiratory and gastrointestinal illness outbreaks. Further research is needed to assess the value of electronic data sources for predictive detection. In addition, simulations need to be developed and implemented to better characterize the size and type of biologic attack that can be detected by current methods by challenging them under different projected operational conditions.

  10. A lightweight QRS detector for single lead ECG signals using a max-min difference algorithm.

    PubMed

    Pandit, Diptangshu; Zhang, Li; Liu, Chengyu; Chattopadhyay, Samiran; Aslam, Nauman; Lim, Chee Peng

    2017-06-01

    Detection of the R-peak pertaining to the QRS complex of an ECG signal plays an important role in the diagnosis of a patient's heart condition. To accurately identify the QRS locations from the acquired raw ECG signals, we need to handle a number of challenges, which include noise, baseline wander, varying peak amplitudes, and signal abnormality. This research aims to address these challenges by developing an efficient lightweight algorithm for QRS (i.e., R-peak) detection from raw ECG signals. A lightweight real-time sliding window-based Max-Min Difference (MMD) algorithm for QRS detection from Lead II ECG signals is proposed. Targeting the best trade-off between computational efficiency and detection accuracy, the proposed algorithm consists of five key steps for QRS detection, namely, baseline correction, MMD curve generation, dynamic threshold computation, R-peak detection, and error correction. Five annotated databases from Physionet are used for evaluating the proposed algorithm in R-peak detection. Integrated with a feature extraction technique and a neural network classifier, the proposed QRS detection algorithm has also been extended to undertake normal and abnormal heartbeat detection from ECG signals. The proposed algorithm exhibits a high degree of robustness in QRS detection and achieves an average sensitivity of 99.62% and an average positive predictivity of 99.67%. Its performance compares favorably with those from the existing state-of-the-art models reported in the literature. With regard to normal and abnormal heartbeat detection, the proposed QRS detection algorithm in combination with the feature extraction technique and neural network classifier achieves an overall accuracy rate of 93.44% based on an empirical evaluation using the MIT-BIH Arrhythmia data set with 10-fold cross validation. In comparison with other related studies, the proposed algorithm offers a lightweight adaptive alternative for R-peak detection with good computational efficiency. The empirical results indicate that it not only yields a high accuracy rate in QRS detection, but also exhibits efficient computational complexity at the order of O(n), where n is the length of an ECG signal. Copyright © 2017 Elsevier B.V. All rights reserved.
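
    The core MMD step is simple enough to sketch end to end (the window lengths and the threshold rule below are simplifications of the paper's five-step pipeline, not its exact parameters):

        import numpy as np

        def mmd_r_peaks(ecg, fs=360):
            """Sliding-window max-min difference R-peak detector (simplified)."""
            w = int(0.1 * fs)  # ~100 ms window spanning a QRS complex
            mmd = np.array([ecg[i:i + w].max() - ecg[i:i + w].min()
                            for i in range(len(ecg) - w)])
            thr = 0.5 * np.percentile(mmd, 99)  # crude adaptive threshold
            peaks, i = [], 0
            while i < len(mmd):
                if mmd[i] > thr:
                    j = i
                    while j < len(mmd) and mmd[j] > thr:
                        j += 1
                    peaks.append(i + int(np.argmax(ecg[i:j + w])))  # R-peak in region
                    i = j + int(0.2 * fs)  # 200 ms refractory period
                else:
                    i += 1
            return np.array(peaks)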

  11. Detection of Coronal Mass Ejections Using Multiple Features and Space-Time Continuity

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Yin, Jian-qin; Lin, Jia-ben; Feng, Zhi-quan; Zhou, Jin

    2017-07-01

    Coronal Mass Ejections (CMEs) release tremendous amounts of energy into the solar system, which has an impact on satellites, power facilities, and wireless transmission. To effectively detect a CME in Large Angle Spectrometric Coronagraph (LASCO) C2 images, we propose a novel algorithm to locate suspected CME regions, using the Extreme Learning Machine (ELM) method and taking into account grayscale and texture features. Furthermore, space-time continuity is used in the detection algorithm to exclude false CME regions. The algorithm includes three steps: i) define the feature vector, which contains textural and grayscale features of a running difference image; ii) design the detection algorithm based on the ELM method according to the feature vector; iii) improve the detection accuracy rate by using a decision rule based on space-time continuity. Experimental results show the efficiency and superiority of the proposed algorithm in the detection of CMEs compared with other traditional methods. In addition, our algorithm is insensitive to most noise.

  12. STREAMFINDER - I. A new algorithm for detecting stellar streams

    NASA Astrophysics Data System (ADS)

    Malhan, Khyati; Ibata, Rodrigo A.

    2018-07-01

    We have designed a powerful new algorithm to detect stellar streams in an automated and systematic way. The algorithm, which we call the STREAMFINDER, is well suited for finding dynamically cold and thin stream structures that may lie along any simple or complex orbits in Galactic stellar surveys containing any combination of positional and kinematic information. In the present contribution, we introduce the algorithm, lay out the ideas behind it, explain the methodology adopted to detect streams, and detail its workings by running it on a suite of simulations of mock Galactic survey data of similar quality to that expected from the European Space Agency/Gaia mission. We show that our algorithm is able to detect even ultra-faint stream features lying well below previous detection limits. Tests show that our algorithm will be able to detect distant halo stream structures >10° long containing as few as ˜15 members (ΣG ˜ 33.6 mag arcsec-2) in the Gaia data set.

  13. Distributed learning automata-based algorithm for community detection in complex networks

    NASA Astrophysics Data System (ADS)

    Khomami, Mohammad Mehdi Daliri; Rezvanian, Alireza; Meybodi, Mohammad Reza

    2016-03-01

    Community structure is an important and universal topological property of many complex networks, such as social and information networks. The detection of communities in a network is a significant technique for understanding the structure and function of networks. In this paper, we propose an algorithm based on distributed learning automata for community detection (DLACD) in complex networks. In the proposed algorithm, each vertex of the network is equipped with a learning automaton. Through cooperation among the network of learning automata and updating of each automaton's action probabilities, the algorithm iteratively tries to identify high-density local communities. The performance of the proposed algorithm is investigated through a number of simulations on popular synthetic and real networks. Experimental results, in comparison with popular community detection algorithms such as Walktrap, Danon greedy optimization, fuzzy community detection, multi-resolution community detection, and label propagation, demonstrate the superiority of DLACD in terms of modularity, NMI, performance, min-max-cut, and coverage.

  14. An Improved Harmonic Current Detection Method Based on Parallel Active Power Filter

    NASA Astrophysics Data System (ADS)

    Zeng, Zhiwu; Xie, Yunxiang; Wang, Yingpin; Guan, Yuanpeng; Li, Lanfang; Zhang, Xiaoyu

    2017-05-01

    Harmonic detection technology plays an important role in applications of the active power filter. The accuracy and real-time performance of harmonic detection are preconditions for ensuring the compensation performance of an Active Power Filter (APF). This paper proposes an improved instantaneous reactive power harmonic current detection algorithm. The algorithm uses an improved ip-iq method combined with a moving-average filter. The proposed ip-iq algorithm removes the αβ and dq coordinate transformations, decreasing the computational cost, simplifying the extraction of the fundamental components of the load currents, and improving the detection speed. The traditional low-pass filter is replaced by the moving-average filter, detecting the harmonic currents more precisely and quickly. Compared with the traditional algorithm, the THD (Total Harmonic Distortion) of the grid currents is reduced from 4.41% to 3.89% in simulations and from 8.50% to 4.37% in experiments after the improvement. The results show that the proposed algorithm is more accurate and efficient.
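
    The substitution of a moving-average filter for the low-pass filter can be sketched for a single phase as follows (the frequencies and the synchronization signals are idealized assumptions; a real APF would obtain them from a PLL):

        import numpy as np

        def harmonic_reference(i_load, fs=10_000, f0=50.0):
            """Split a distorted load current into fundamental and harmonics."""
            t = np.arange(len(i_load)) / fs
            s, c = np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t)
            n = int(fs / f0)  # samples per fundamental cycle
            kernel = np.ones(n) / n
            # Demodulate, then isolate the DC terms with a one-cycle moving average
            ip_dc = np.convolve(i_load * s, kernel, mode="same")
            iq_dc = np.convolve(i_load * c, kernel, mode="same")
            i_fund = 2.0 * (ip_dc * s + iq_dc * c)  # reconstructed fundamental
            return i_fund, i_load - i_fund          # harmonics = compensation target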

  15. Identifying and Tracking Pedestrians Based on Sensor Fusion and Motion Stability Predictions

    PubMed Central

    Musleh, Basam; García, Fernando; Otamendi, Javier; Armingol, José Mª; de la Escalera, Arturo

    2010-01-01

    The lack of trustworthy sensors makes development of Advanced Driver Assistance System (ADAS) applications a tough task. It is necessary to develop intelligent systems by combining reliable sensors and real-time algorithms to send the proper, accurate messages to the drivers. In this article, an application to detect and predict the movement of pedestrians in order to prevent an imminent collision has been developed and tested under real conditions. The proposed application, first, accurately measures the position of obstacles using a two-sensor hybrid fusion approach: a stereo camera vision system and a laser scanner. Second, it correctly identifies pedestrians using intelligent algorithms based on polylines and pattern recognition related to leg positions (laser subsystem) and dense disparity maps and u-v disparity (vision subsystem). Third, it uses statistical validation gates and confidence regions to track the pedestrian within the detection zones of the sensors and predict their position in the upcoming frames. The intelligent sensor application has been experimentally tested with success while tracking pedestrians that cross and move in zigzag fashion in front of a vehicle. PMID:22163639

  16. A machine learning approach for automated wide-range frequency tagging analysis in embedded neuromonitoring systems.

    PubMed

    Montagna, Fabio; Buiatti, Marco; Benatti, Simone; Rossi, Davide; Farella, Elisabetta; Benini, Luca

    2017-10-01

    EEG is a standard non-invasive technique used in neural disease diagnostics and the neurosciences. Frequency tagging is an increasingly popular experimental paradigm that efficiently tests brain function by measuring EEG responses to periodic stimulation. Recently, frequency-tagging paradigms have proven successful with low stimulation frequencies (0.5-6 Hz), but the EEG signal is intrinsically noisy in this frequency range, requiring heavy signal processing and significant human intervention for response estimation. This limits the possibility of processing the EEG on resource-constrained systems and of designing smart EEG-based devices for automated diagnostics. We propose an algorithm for artifact removal and automated detection of frequency-tagging responses over a wide range of stimulation frequencies, which we test on a visual stimulation protocol. The algorithm is rooted in machine learning-based pattern recognition techniques and is tailored to a new-generation parallel ultra-low-power processing platform (PULP), reaching more than 90% accuracy in frequency detection even for very low stimulation frequencies (<1 Hz) with a power budget of 56 mW. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. ExScalibur: A High-Performance Cloud-Enabled Suite for Whole Exome Germline and Somatic Mutation Identification.

    PubMed

    Bao, Riyue; Hernandez, Kyle; Huang, Lei; Kang, Wenjun; Bartom, Elizabeth; Onel, Kenan; Volchenboum, Samuel; Andrade, Jorge

    2015-01-01

    Whole exome sequencing has facilitated the discovery of causal genetic variants associated with human diseases at deep coverage and low cost. In particular, the detection of somatic mutations from tumor/normal pairs has provided insights into the cancer genome. Although there is an abundance of publicly-available software for the detection of germline and somatic variants, concordance is generally limited among variant callers and alignment algorithms. Successful integration of variants detected by multiple methods requires in-depth knowledge of the software, access to high-performance computing resources, and advanced programming techniques. We present ExScalibur, a set of fully automated, highly scalable and modulated pipelines for whole exome data analysis. The suite integrates multiple alignment and variant calling algorithms for the accurate detection of germline and somatic mutations with close to 99% sensitivity and specificity. ExScalibur implements streamlined execution of analytical modules, real-time monitoring of pipeline progress, robust handling of errors and intuitive documentation that allows for increased reproducibility and sharing of results and workflows. It runs on local computers, high-performance computing clusters and cloud environments. In addition, we provide a data analysis report utility to facilitate visualization of the results that offers interactive exploration of quality control files, read alignment and variant calls, assisting downstream customization of potential disease-causing mutations. ExScalibur is open-source and is also available as a public image on Amazon cloud.

  19. A Space Object Detection Algorithm using Fourier Domain Likelihood Ratio Test

    NASA Astrophysics Data System (ADS)

    Becker, D.; Cain, S.

    Space object detection is of great importance in the highly dependent yet competitive and congested space domain. Detection algorithms employed play a crucial role in fulfilling the detection component in the situational awareness mission to detect, track, characterize and catalog unknown space objects. Many current space detection algorithms use a matched filter or a spatial correlator to make a detection decision at a single pixel point of a spatial image based on the assumption that the data follows a Gaussian distribution. This paper explores the potential for detection performance advantages when operating in the Fourier domain of long exposure images of small and/or dim space objects from ground based telescopes. A binary hypothesis test is developed based on the joint probability distribution function of the image under the hypothesis that an object is present and under the hypothesis that the image only contains background noise. The detection algorithm tests each pixel point of the Fourier transformed images to make the determination if an object is present based on the criteria threshold found in the likelihood ratio test. Using simulated data, the performance of the Fourier domain detection algorithm is compared to the current algorithm used in space situational awareness applications to evaluate its value.

  20. Effect of Non-rigid Registration Algorithms on Deformation Based Morphometry: A Comparative Study with Control and Williams Syndrome Subjects

    PubMed Central

    Han, Zhaoying; Thornton-Wells, Tricia A.; Dykens, Elisabeth M.; Gore, John C.; Dawant, Benoit M.

    2014-01-01

    Deformation Based Morphometry (DBM) is a widely used method for characterizing anatomical differences across groups. DBM is based on the analysis of the deformation fields generated by non-rigid registration algorithms, which warp the individual volumes to a DBM atlas. Although several studies have compared non-rigid registration algorithms for segmentation tasks, few studies have compared the effect of the registration algorithms on group differences that may be uncovered through DBM. In this study, we compared group atlas creation and DBM results obtained with five well-established non-rigid registration algorithms using thirteen subjects with Williams Syndrome (WS) and thirteen Normal Control (NC) subjects. The five non-rigid registration algorithms include: (1) The Adaptive Bases Algorithm (ABA); (2) The Image Registration Toolkit (IRTK); (3) The FSL Nonlinear Image Registration Tool (FSL); (4) The Automatic Registration Tool (ART); and (5) the normalization algorithm available in SPM8. Results indicate that the choice of algorithm has little effect on the creation of group atlases. However, regions of differences between groups detected with DBM vary from algorithm to algorithm both qualitatively and quantitatively. The unique nature of the data set used in this study also permits comparison of visible anatomical differences between the groups and regions of difference detected by each algorithm. Results show that the interpretation of DBM results is difficult. Four out of the five algorithms we have evaluated detect bilateral differences between the two groups in the insular cortex, the basal ganglia, orbitofrontal cortex, as well as in the cerebellum. These correspond to differences that have been reported in the literature and that are visible in our samples. But our results also show that some algorithms detect regions that are not detected by the others and that the extent of the detected regions varies from algorithm to algorithm. These results suggest that using more than one algorithm when performing DBM studies would increase confidence in the results. Properties of the algorithms such as the similarity measure they maximize and the regularity of the deformation fields, as well as the location of differences detected with DBM, also need to be taken into account in the interpretation process. PMID:22459439

  1. A Region Tracking-Based Vehicle Detection Algorithm in Nighttime Traffic Scenes

    PubMed Central

    Wang, Jianqiang; Sun, Xiaoyan; Guo, Junbin

    2013-01-01

    Detection of preceding vehicles in nighttime traffic scenes is an important part of an advanced driver assistance system (ADAS). This paper proposes a region tracking-based vehicle detection algorithm built on image processing techniques. First, the brightness of the taillights at night is used as the typical feature, and an existing global detection algorithm is used to detect and pair the taillights. Once a vehicle is detected, a time-series analysis model is introduced to predict vehicle positions and the possible region (PR) of the vehicle in the next frame. The vehicle is then detected only within the PR. This reduces the detection time and avoids false pairing between bright spots inside the PR and bright spots outside it. Additionally, we present a threshold-updating method that makes the thresholds adaptive. Finally, experimental studies demonstrate the application and substantiate the superiority of the proposed algorithm. The results show that the proposed algorithm can simultaneously reduce both the false negative detection rate and the false positive detection rate.
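
    The possible-region idea can be sketched as follows; this is a constant-velocity stand-in for the paper's time-series prediction model, and the `margin` factor is a guessed parameter.

    ```python
    def predict_possible_region(prev_box, curr_box, margin=1.5):
        """Predict the next-frame possible region (PR) for a tracked vehicle.

        prev_box/curr_box: (x, y, w, h) of the paired-taillight bounding box
        in the two most recent frames.  A constant-velocity step is applied
        and the box is inflated by `margin` so the vehicle stays in the PR.
        """
        (x0, y0, w0, h0), (x1, y1, w1, h1) = prev_box, curr_box
        dx, dy = x1 - x0, y1 - y0                # per-frame motion estimate
        cx, cy = x1 + w1 / 2 + dx, y1 + h1 / 2 + dy
        pw, ph = w1 * margin, h1 * margin
        return (cx - pw / 2, cy - ph / 2, pw, ph)

    # Detection in the next frame is restricted to this region, so bright
    # spots outside the PR can never be paired with spots inside it.
    pr = predict_possible_region((100, 200, 80, 30), (108, 200, 80, 30))
    print(pr)
    ```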

  2. Nonlinear Classification of AVO Attributes Using SVM

    NASA Astrophysics Data System (ADS)

    Zhao, B.; Zhou, H.

    2005-05-01

    A key research topic in reservoir characterization is the detection of the presence of fluids using seismic and well-log data. In particular, partial gas discrimination is very challenging because low and high gas saturation can result in similar anomalies in terms of Amplitude Variation with Offset (AVO), bright spot, and velocity sag. Hence, successful fluid detection requires a good understanding of the seismic signatures of the fluids, high-quality data, and a good detection methodology. Traditional attempts at partial gas discrimination employ neural network algorithms. A newer approach is to use the Support Vector Machine (SVM) (Vapnik, 1995; Liu and Sacchi, 2003). While the potential of the SVM has not been fully explored for reservoir fluid detection, current nonlinear methods classify seismic attributes without the use of rock physics constraints. The objective of this study is to improve the capability of distinguishing a fizz-water reservoir from a commercial gas reservoir by developing a new detection method using AVO attributes and rock physics constraints. This study will first test the SVM classification with synthetic data, and then apply the algorithm to field data from the King-Kong and Lisa-Anne fields in the Gulf of Mexico. While both field areas have high-amplitude seismic anomalies, the King-Kong field produces commercial gas but the Lisa-Anne field does not. We expect that the new SVM-based nonlinear classification of AVO attributes may be able to separate commercial gas from fizz water in these two fields.
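
    A hedged sketch of the nonlinear classification step using scikit-learn; the synthetic intercept/gradient features and class centers below are invented for illustration and are not calibrated to real AVO data.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    # Synthetic two-attribute AVO feature vectors (intercept A, gradient B);
    # labels: 0 = fizz water, 1 = commercial gas.  Values are illustrative.
    rng = np.random.default_rng(1)
    X_fizz = rng.normal([-0.05, -0.10], 0.03, (200, 2))
    X_gas = rng.normal([-0.08, -0.18], 0.03, (200, 2))
    X = np.vstack([X_fizz, X_gas])
    y = np.repeat([0, 1], 200)

    # RBF-kernel SVM; rock-physics constraints could enter as extra features
    # or by restricting training samples to physically plausible regions.
    clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
    print(clf.score(X, y))                  # accuracy on the synthetic data
    print(clf.predict([[-0.07, -0.16]]))    # classify one new (A, B) pair
    ```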

  3. Expert system constant false alarm rate processor

    NASA Astrophysics Data System (ADS)

    Baldygo, William J., Jr.; Wicks, Michael C.

    1993-10-01

    The requirements for high detection probability and low false alarm probability in modern wide area surveillance radars are rarely met due to spatial variations in clutter characteristics. Many filtering and CFAR detection algorithms have been developed to effectively deal with these variations; however, any single algorithm is likely to exhibit excessive false alarms and intolerably low detection probabilities in a dynamically changing environment. A great deal of research has led to advances in the state of the art in Artificial Intelligence (AI) and numerous areas have been identified for application to radar signal processing. The approach suggested here, discussed in a patent application submitted by the authors, is to intelligently select the filtering and CFAR detection algorithms being executed at any given time, based upon the observed characteristics of the interference environment. This approach requires sensing the environment, employing the most suitable algorithms, and applying an appropriate multiple algorithm fusion scheme or consensus algorithm to produce a global detection decision.
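
    One of the candidate detectors such an expert system would select among is the classical cell-averaging CFAR, sketched below under the usual exponential-noise assumption; the window sizes and design false-alarm probability are placeholders.

    ```python
    import numpy as np

    def ca_cfar(x, num_train=16, num_guard=4, pfa=1e-4):
        """Cell-averaging CFAR over a 1-D range profile.

        The noise level at each cell under test is estimated from
        `num_train` training cells on each side (excluding `num_guard`
        guard cells); the threshold scale alpha follows from the desired
        false-alarm probability for exponential (square-law) noise.
        """
        n = 2 * num_train
        alpha = n * (pfa ** (-1.0 / n) - 1.0)      # classical CA-CFAR scaling
        half = num_train + num_guard
        detections = np.zeros_like(x, dtype=bool)
        for i in range(half, len(x) - half):
            lead = x[i - half:i - num_guard]
            lag = x[i + num_guard + 1:i + half + 1]
            noise = (lead.sum() + lag.sum()) / n
            detections[i] = x[i] > alpha * noise
        return detections

    # An expert-system wrapper would choose among several such detectors
    # (CA, GO, SO, OS-CFAR, ...) based on the sensed clutter environment.
    rng = np.random.default_rng(2)
    power = rng.exponential(1.0, 1000)
    power[500] += 30.0                              # injected target
    print(np.flatnonzero(ca_cfar(power)))
    ```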

  4. Toward an Objective Enhanced-V Detection Algorithm

    NASA Technical Reports Server (NTRS)

    Moses, John F.; Brunner, Jason C.; Feltz, Wayne F.; Ackerman, Steven A.; Rabin, Robert M.

    2007-01-01

    The area of coldest cloud tops above thunderstorms sometimes has a distinct V or U shape. This pattern, often referred to as an "enhanced-V" signature, has been observed to occur during and preceding severe weather. This study describes an algorithmic approach to objectively detect overshooting tops, temperature couplets, and enhanced-V features with observations from the Geostationary Operational Environmental Satellite and Low Earth Orbit data. The methodology consists of temperature, temperature difference, and distance thresholds for the overshooting top and temperature couplet detection parts of the algorithm, and of cross-correlation statistics of pixels for the enhanced-V detection part. The effectiveness of the overshooting top and temperature couplet detection components is examined using GOES and MODIS image data for case studies from the 2003-2006 seasons. The main goal is for the algorithm to be useful for operations with future sensors, such as GOES-R.

  5. Evaluating the utility of syndromic surveillance algorithms for screening to detect potentially clonal hospital infection outbreaks

    PubMed Central

    Talbot, Thomas R; Schaffner, William; Bloch, Karen C; Daniels, Titus L; Miller, Randolph A

    2011-01-01

    Objective The authors evaluated algorithms commonly used in syndromic surveillance for use as screening tools to detect potentially clonal outbreaks for review by infection control practitioners. Design Study phase 1 applied four aberrancy detection algorithms (CUSUM, EWMA, space-time scan statistic, and WSARE) to retrospective microbiologic culture data, producing a list of past candidate outbreak clusters. In phase 2, four infectious disease physicians categorized the phase 1 algorithm-identified clusters to ascertain algorithm performance. In phase 3, project members combined the algorithms to create a unified screening system and conducted a retrospective pilot evaluation. Measurements The study calculated recall and precision for each algorithm, and created precision-recall curves for various methods of combining the algorithms into a unified screening tool. Results Individual algorithm recall and precision ranged from 0.21 to 0.31 and from 0.053 to 0.29, respectively. Few candidate outbreak clusters were identified by more than one algorithm. The best method of combining the algorithms yielded an area under the precision-recall curve of 0.553. The phase 3 combined system detected all infection control-confirmed outbreaks during the retrospective evaluation period. Limitations Lack of phase 2 reviewers' agreement indicates that subjective expert review was an imperfect gold standard. Less conservative filtering of culture results and alternate parameter selection for each algorithm might have improved algorithm performance. Conclusion Hospital outbreak detection presents different challenges than traditional syndromic surveillance. Nevertheless, algorithms developed for syndromic surveillance have potential to form the basis of a combined system that might perform clinically useful hospital outbreak screening. PMID:21606134

  6. Spectral Diffusion: An Algorithm for Robust Material Decomposition of Spectral CT Data

    PubMed Central

    Clark, Darin P.; Badea, Cristian T.

    2014-01-01

    Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piece-wise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg/mL), gold (0.9 mg/mL), and gadolinium (2.9 mg/mL) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen. PMID:25296173

  7. Spectral diffusion: an algorithm for robust material decomposition of spectral CT data.

    PubMed

    Clark, Darin P; Badea, Cristian T

    2014-11-07

    Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piecewise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg mL(-1)), gold (0.9 mg mL(-1)), and gadolinium (2.9 mg mL(-1)) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen.

  8. Pandemic preparedness in Hawaii: a multicenter verification of real-time RT-PCR for the direct detection of influenza virus types A and B.

    PubMed

    Whelen, A Christian; Bankowski, Matthew J; Furuya, Glenn; Honda, Stacey; Ueki, Robert; Chan, Amelia; Higa, Karen; Kumashiro, Diane; Moore, Nathaniel; Lee, Roland; Koyamatsu, Terrie; Effler, Paul V

    2010-01-01

    We integrated multicenter, real-time (RTi) reverse transcription polymerase chain reaction (RT-PCR) screening into a statewide laboratory algorithm for influenza surveillance and response. Each of three sites developed its own testing strategy and was challenged with one randomized and blinded panel of 50 specimens previously tested for respiratory viruses. Following testing, each participating laboratory reported its results to the Hawaii State Department of Health, State Laboratories Division for evaluation and possible discrepant analysis. Two of three laboratories reported a 100% sensitivity and specificity, resulting in a 100% positive predictive value and a 100% negative predictive value (NPV) for influenza type A. The third laboratory showed a 71% sensitivity for influenza type A (83% NPV) with 100% specificity. All three laboratories were 100% sensitive and specific for the detection of influenza type B. Discrepant analysis indicated that the lack of sensitivity experienced by the third laboratory may have been due to the analyte-specific reagent probe used by that laboratory. Use of a newer version of the product with a secondary panel of 20 specimens resulted in a sensitivity and specificity of 100%. All three laboratories successfully verified their ability to conduct clinical testing for influenza using diverse nucleic acid extraction and RTi RT-PCR platforms. Successful completion of the verification by all collaborating laboratories paved the way for the integration of those facilities into a statewide laboratory algorithm for influenza surveillance and response.

  9. Evaluation of focused ultrasound algorithms: Issues for reducing pre-focal heating and treatment time.

    PubMed

    Yiannakou, Marinos; Trimikliniotis, Michael; Yiallouras, Christos; Damianou, Christakis

    2016-02-01

    Due to heating in the pre-focal field, the delay between successive movements in high-intensity focused ultrasound (HIFU) is sometimes as long as 60 s, resulting in treatment times on the order of 2-3 h. Because there is generally a requirement to reduce treatment time, we were motivated to explore alternative transducer motion algorithms that reduce pre-focal heating and treatment time. A 1 MHz single-element transducer with 4 cm diameter and 10 cm focal length was used. A simulation model was developed that estimates the temperature, thermal dose, and lesion development in the pre-focal field. The simulated temperature history, combined with the motion algorithms, produced thermal maps in the pre-focal region. A polyacrylamide gel phantom was used to evaluate the pre-focal heating induced by each motion algorithm and to assess the accuracy of the simulation model. Three of the six algorithms, whose successive steps were close to each other, exhibited severe heating in the pre-focal field. Minimal heating was produced by the algorithms whose successive steps were far apart (square, square spiral, and random). These three algorithms were improved further (at a small cost in time), completely eliminating pre-focal heating and substantially reducing treatment time compared to traditional algorithms. Because these three algorithms required no delay between successive movements (except in the last part of the motion), the treatment time was reduced by 93%. It should therefore be possible in the future to complete focused ultrasound treatments in under 30 min. The ablated-volume rate achieved with one of the proposed algorithms was 71 cm³/h. The intention of this pilot study was to demonstrate that navigation algorithms play the most important role in reducing pre-focal heating. By evaluating all commercially available geometries in the future, it should be possible to reduce treatment times for thermal ablation protocols intended for oncological targets. Copyright © 2015 Elsevier B.V. All rights reserved.
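
    A loose illustration of the idea behind the successful patterns, keeping successive sonication points far apart so pre-focal heat can dissipate; this greedy randomized ordering is an assumption-laden analogue, not the authors' algorithms, and `min_step` is an invented parameter.

    ```python
    import numpy as np

    def spaced_order(points, min_step=3.0, seed=0):
        """Order grid points so that successive sonications are far apart.

        Greedy randomized ordering: repeatedly pick a random remaining point
        at least `min_step` (grid units) from the previous one, falling back
        to the farthest remaining point when none qualifies.
        """
        rng = np.random.default_rng(seed)
        remaining = list(range(len(points)))
        order = [remaining.pop(rng.integers(len(remaining)))]
        while remaining:
            prev = points[order[-1]]
            dists = np.linalg.norm(points[remaining] - prev, axis=1)
            ok = np.flatnonzero(dists >= min_step)
            pick = rng.choice(ok) if ok.size else int(np.argmax(dists))
            order.append(remaining.pop(pick))
        return order

    # 5x5 grid of focal spots; successive steps are kept >= 3 grid units
    # apart, giving the pre-focal field time to cool between sonications.
    grid = np.array([(i, j) for i in range(5) for j in range(5)], dtype=float)
    print(spaced_order(grid)[:10])
    ```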

  10. Evaluation schemes for video and image anomaly detection algorithms

    NASA Astrophysics Data System (ADS)

    Parameswaran, Shibin; Harguess, Josh; Barngrover, Christopher; Shafer, Scott; Reese, Michael

    2016-05-01

    Video anomaly detection is a critical research area in computer vision and a natural first step before applying object recognition algorithms. Many algorithms that detect anomalies (outliers) in videos and images have been introduced in recent years. However, these algorithms behave and perform differently depending on the domains and tasks to which they are applied. In order to better understand the strengths and weaknesses of outlier algorithms and their applicability to a particular domain or task of interest, it is important to measure and quantify their performance using appropriate evaluation metrics. Many evaluation metrics have been used in the literature, such as precision curves, precision-recall curves, and receiver operating characteristic (ROC) curves. In order to construct these different metrics, it is also important to choose an appropriate evaluation scheme that decides when a proposed detection is considered a true or a false detection. Choosing the right evaluation metric and the right scheme is critical, since the choice can introduce positive or negative bias in the measuring criterion and may favor (or work against) a particular algorithm or task. In this paper, we review evaluation metrics and popular evaluation schemes that are used to measure the performance of anomaly detection algorithms on videos and imagery with one or more anomalies. We analyze the biases introduced by these choices by measuring the performance of an existing anomaly detection algorithm.
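
    As a small illustration of the metrics discussed here, the snippet below builds ROC and precision-recall summaries for a hypothetical detector's scores using scikit-learn; the labels and scores are synthetic.

    ```python
    import numpy as np
    from sklearn.metrics import (roc_curve, precision_recall_curve,
                                 roc_auc_score, average_precision_score)

    # Scores from a hypothetical anomaly detector, with ground-truth labels
    # (1 = anomaly) produced by whatever evaluation scheme decides what
    # counts as a true detection.
    rng = np.random.default_rng(3)
    labels = rng.integers(0, 2, 500)
    scores = np.where(labels == 1,
                      rng.normal(0.7, 0.2, 500),
                      rng.normal(0.3, 0.2, 500))

    fpr, tpr, _ = roc_curve(labels, scores)          # ROC curve points
    prec, rec, _ = precision_recall_curve(labels, scores)
    print("ROC AUC:          ", roc_auc_score(labels, scores))
    print("Average precision:", average_precision_score(labels, scores))
    ```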

  11. Long-term evaluation of three satellite ocean color algorithms for identifying harmful algal blooms (Karenia brevis) along the west coast of Florida: A matchup assessment

    PubMed Central

    Carvalho, Gustavo A.; Minnett, Peter J.; Banzon, Viva F.; Baringer, Warner; Heil, Cynthia A.

    2011-01-01

    We present a simple algorithm to identify Karenia brevis blooms in the Gulf of Mexico along the west coast of Florida in satellite imagery. It is based on an empirical analysis of collocated matchups of satellite and in situ measurements. The results of this Empirical Approach are compared to those of a Bio-optical Technique (taken from the published literature) and the Operational Method currently implemented by the NOAA Harmful Algal Bloom Forecasting System for K. brevis blooms. These three algorithms are evaluated using a multi-year MODIS data set (from July 2002 to October 2006) and a long-term in situ database. Matchup pairs, consisting of remotely sensed ocean color parameters and near-coincident field measurements of K. brevis concentration, are used to assess the accuracy of the algorithms. Fair evaluation of the algorithms was only possible in the central west Florida shelf (i.e., between 25.75°N and 28.25°N) during the boreal summer and fall months (i.e., July to December) due to the availability of valid cloud-free matchups. Even though the predictive values of the three algorithms are similar, the statistical measure of success in red tide identification (defined as cell counts in excess of 1.5 × 10⁴ cells L⁻¹) varied considerably (sensitivity: Empirical 86%; Bio-optical 77%; Operational 26%), as did their effectiveness in identifying non-bloom cases (specificity: Empirical 53%; Bio-optical 65%; Operational 84%). As the Operational Method had an elevated frequency of false-negative cases (i.e., low accuracy in detecting known red tides), and because of the considerable overlap between the optical characteristics of the red tide and non-bloom populations, only the other two algorithms underwent a procedure for further inspecting possible detection improvements. The optimized versions of the Empirical and Bio-optical algorithms performed similarly, being equally specific and sensitive (~70% for both) and showing low levels of uncertainty (few cases of false negatives and false positives: ~30%); improved positive predictive values (~60%) were also observed, along with good negative predictive values (~80%). PMID:22180667
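
    The reported quantities follow directly from a 2x2 matchup contingency table, as in this small sketch (the counts are invented, not the paper's):

    ```python
    def matchup_metrics(tp, fp, tn, fn):
        """Sensitivity, specificity, and predictive values from matchup
        counts (tp = bloom flagged and observed, fp = flagged but not
        observed, tn/fn defined analogously)."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # Illustrative counts only, not the paper's matchup database.
    print(matchup_metrics(tp=43, fp=21, tn=24, fn=7))
    ```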

  12. One algorithm to rule them all? An evaluation and discussion of ten eye movement event-detection algorithms.

    PubMed

    Andersson, Richard; Larsson, Linnea; Holmqvist, Kenneth; Stridh, Martin; Nyström, Marcus

    2017-04-01

    Almost all eye-movement researchers use algorithms to parse raw data and detect distinct types of eye movement events, such as fixations, saccades, and pursuit, and then base their results on these. Surprisingly, these algorithms are rarely evaluated. We evaluated the classifications of ten eye-movement event detection algorithms, on data from an SMI HiSpeed 1250 system, and compared them to manual ratings of two human experts. The evaluation focused on fixations, saccades, and post-saccadic oscillations. The evaluation used both event duration parameters and sample-by-sample comparisons to rank the algorithms. The resulting event durations varied substantially as a function of which algorithm was used. This evaluation differed from previous evaluations by considering a relatively large set of algorithms, multiple event types, and data from both static and dynamic stimuli. The main conclusion is that current detectors of only fixations and saccades work reasonably well for static stimuli, but barely better than chance for dynamic stimuli. Differing results across evaluation methods make it difficult to select one winner for fixation detection. For saccade detection, however, the algorithm by Larsson, Nyström and Stridh (IEEE Transactions on Biomedical Engineering, 60(9):2484-2493, 2013) outperforms all algorithms in data from both static and dynamic stimuli. The data also show how improperly selected algorithms applied to dynamic data misestimate fixation and saccade properties.
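
    For orientation, a minimal velocity-threshold (I-VT-style) event detector is sketched below; it is far simpler than the ten algorithms evaluated in the paper, and the 100 deg/s threshold is just a common placeholder.

    ```python
    import numpy as np

    def ivt_events(x, y, fs=1250.0, vel_thresh=100.0):
        """Label samples as saccade (True) or fixation (False) by velocity.

        x, y: gaze position in degrees; fs: sampling rate in Hz (1250 Hz,
        as for the SMI HiSpeed system); vel_thresh: deg/s threshold.
        """
        vx = np.gradient(x) * fs
        vy = np.gradient(y) * fs
        speed = np.hypot(vx, vy)
        return speed > vel_thresh

    # Toy trace: fixation, fast 5-degree shift, fixation.
    t = np.arange(0, 0.5, 1 / 1250.0)
    x = np.where(t < 0.25, 0.0, 5.0)
    x = x + np.random.default_rng(4).normal(0, 0.02, t.size)
    y = np.zeros_like(x)
    print("saccade samples:", int(ivt_events(x, y).sum()))
    ```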

  13. Feasibility studies on explosive detection and homeland security applications using a neutron and x-ray combined computed tomography system

    NASA Astrophysics Data System (ADS)

    Sinha, V.; Srivastava, A.; Lee, H. K.; Liu, X.

    2013-05-01

    The successful creation and operation of a neutron and X-ray combined computed tomography (NXCT) system has been demonstrated by researchers at the Missouri University of Science and Technology. The NXCT system has numerous applications in the field of material characterization and object identification in materials with a mixture of atomic numbers represented. Presently, feasibility studies have been performed for explosive detection and homeland security applications, particularly in concealed material detection and determination of low atomic number materials. These materials cannot be detected using traditional X-ray imaging. The new system has the capability to provide complete structural and compositional information due to the complementary nature of X-ray and neutron interactions with materials. The design of the NXCT system facilitates simultaneous and instantaneous imaging operation, promising enhanced detection capabilities for explosive materials, low atomic number materials, and illicit materials for homeland security applications. In addition, a sample positioning system allowing the user to remotely and automatically manipulate the sample makes the system viable for commercial applications. Several explosives and weapon simulants have been imaged and the results are provided. The fusion algorithms, which combine the data from the neutron and X-ray imaging, produce superior images. This paper is a complete overview of the NXCT system for feasibility studies of explosive detection and homeland security applications. The design of the system, operation, algorithm development, and detection schemes are provided. This is the first combined neutron and X-ray computed tomography system in operation. Furthermore, the method of fusing neutron and X-ray images together is a new approach which provides high contrast images of the desired object. The system could serve as a standardized tool in nondestructive testing for many applications, especially in explosives detection and homeland security research.

  14. Development and validation of a dual sensing scheme to improve accuracy of bradycardia and pause detection in an insertable cardiac monitor.

    PubMed

    Passman, Rod S; Rogers, John D; Sarkar, Shantanu; Reiland, Jerry; Reisfeld, Erin; Koehler, Jodi; Mittal, Suneet

    2017-07-01

    Undersensing of premature ventricular beats and low-amplitude R waves is a primary cause of inappropriate bradycardia and pause detections in insertable cardiac monitors (ICMs). The purpose of this study was to develop and validate an enhanced algorithm to reduce inappropriately detected bradycardia and pause episodes. Independent data sets to develop and validate the enhanced algorithm were derived from a database of ICM-detected bradycardia and pause episodes in de-identified patients monitored for unexplained syncope. The original algorithm uses an auto-adjusting sensitivity threshold for R-wave sensing to detect tachycardia and avoid T-wave oversensing. In the enhanced algorithm, a second sensing threshold is used with a long blanking and fixed lower sensitivity threshold, looking for evidence of undersensed signals. The reported data include the percent change in appropriate and inappropriate bradycardia and pause detections as well as changes in episode detection sensitivity and positive predictive value with the enhanced algorithm. The validation data set, from 663 consecutive patients, consisted of 4904 (161 patients) bradycardia and 2582 (133 patients) pause episodes, of which 2976 (61%) and 996 (39%) were appropriately detected bradycardia and pause episodes. The enhanced algorithm reduced inappropriate bradycardia and pause episodes by 95% and 47%, respectively, with 1.7% and 0.6% reduction in appropriate episodes, respectively. The average episode positive predictive value improved by 62% (P < .001) for bradycardia detection and by 26% (P < .001) for pause detection, with an average relative sensitivity of 95% (P < .001) and 99% (P = .5), respectively. The enhanced dual sense algorithm for bradycardia and pause detection in ICMs substantially reduced inappropriate episode detection with a minimal reduction in true episode detection. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  15. Adapting detection sensitivity based on evidence of irregular sinus arrhythmia to improve atrial fibrillation detection in insertable cardiac monitors.

    PubMed

    Pürerfellner, Helmut; Sanders, Prashanthan; Sarkar, Shantanu; Reisfeld, Erin; Reiland, Jerry; Koehler, Jodi; Pokushalov, Evgeny; Urban, Luboš; Dekker, Lukas R C

    2017-10-03

    Intermittent change in P-wave discernibility during periods of ectopy and sinus arrhythmia is a cause of inappropriate atrial fibrillation (AF) detection in insertable cardiac monitors (ICM). To address this, we developed and validated an enhanced AF detection algorithm. Atrial fibrillation detection in the Reveal LINQ ICM uses patterns of incoherence in RR intervals and absence of P-wave evidence over a 2-min period. The enhanced algorithm includes P-wave evidence during RR irregularity as evidence of sinus arrhythmia or ectopy to adaptively optimize sensitivity for AF detection. The algorithm was developed and validated using Holter data from the XPECT and LINQ Usability studies, which collected surface electrocardiogram (ECG) and continuous ICM ECG over a 24-48 h period. The algorithm detections were compared with Holter annotations, performed by multiple reviewers, to compute episode and duration detection performance. The validation dataset comprised 3187 h of valid Holter and LINQ recordings from 138 patients, with true AF in 37 patients yielding 108 true AF episodes ≥2 min and 449 h of AF. The enhanced algorithm reduced inappropriately detected episodes by 49% and duration by 66% with <1% loss in true episodes or duration. The algorithm correctly identified 98.9% of total AF duration and 99.8% of total sinus or non-AF rhythm duration. The algorithm detected 97.2% (99.7% per-patient average) of all AF episodes ≥2 min, and 84.9% (95.3% per-patient average) of detected episodes involved AF. An enhancement that adapts sensitivity for AF detection reduced inappropriately detected episodes and duration with minimal reduction in sensitivity. © The Author 2017. Published by Oxford University Press on behalf of the European Society of Cardiology

  16. Low complexity feature extraction for classification of harmonic signals

    NASA Astrophysics Data System (ADS)

    William, Peter E.

    In this dissertation, feature extraction algorithms have been developed for the extraction of characteristic features from harmonic signals. The common theme for all developed algorithms is the simplicity of generating a significant set of features directly from the time domain harmonic signal. The features are a time domain representation of the composite, yet sparse, harmonic signature in the spectral domain. The algorithms are adequate for low-power unattended sensors which perform sensing, feature extraction, and classification in a standalone scenario. The first algorithm generates the characteristic features using only the durations between successive zero crossings. The second algorithm estimates the amplitudes of the harmonic structure employing a simplified least-squares method, without the need to estimate the true harmonic parameters of the source signal. The third algorithm, resulting from a collaborative effort with Daniel White at the DSP Lab, University of Nebraska-Lincoln, presents an analog front end approach that utilizes multichannel analog projection and integration to extract the sparse spectral features from the analog time domain signal. Classification is performed using a multilayer feedforward neural network. Evaluation of the proposed feature extraction algorithms for classification through the processing of several acoustic and vibration data sets (including military vehicles and rotating electric machines), with comparison to spectral features, shows that, for harmonic signals, time domain features are simpler to extract and provide equivalent or improved reliability over the spectral features in both the detection probabilities and false alarm rate.
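
    The first algorithm's feature idea can be sketched roughly as follows, assuming a histogram of zero-crossing interval durations serves as the fixed-length feature vector; the bin count and range are invented.

    ```python
    import numpy as np

    def zc_interval_features(signal, fs, num_bins=16):
        """Histogram of durations between successive zero crossings.

        A cheap time-domain stand-in for the harmonic signature: each
        interval (in ms) is binned, giving a fixed-length feature vector
        suitable for a small feedforward classifier.
        """
        sign = np.signbit(signal)
        idx = np.flatnonzero(sign[:-1] != sign[1:])    # crossing positions
        intervals = np.diff(idx) / fs * 1000.0         # durations in ms
        hist, _ = np.histogram(intervals, bins=num_bins, range=(0.0, 20.0))
        return hist / max(hist.sum(), 1)               # normalized histogram

    # Two-harmonic toy signal sampled at 8 kHz.
    fs = 8000.0
    t = np.arange(0, 1.0, 1 / fs)
    sig = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 360 * t)
    print(zc_interval_features(sig, fs))
    ```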

  17. Stride search: A general algorithm for storm detection in high resolution climate data

    DOE PAGES

    Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.; ...

    2015-09-08

    This article discusses the problem of identifying extreme climate events, such as intense storms, within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.
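
    A skeleton of the spatial-sweep idea, with sector spacing set in distance rather than in grid points so it remains valid near the poles; the flat-earth sector membership test and all parameter values are simplifications, not the published implementation.

    ```python
    import numpy as np

    def stride_search(field, lats, lons, stride_km=500.0, thresh=30.0):
        """Sweep circular search sectors spaced uniformly in distance.

        The longitude stride widens toward the poles, so sector spacing
        stays correct where longitude lines converge.  `field` could be,
        e.g., wind speed; `thresh` is the storm identification criterion.
        """
        R = 6371.0                                 # Earth radius, km
        dlat = np.degrees(stride_km / R)           # latitude stride, deg
        hits = []
        for lat0 in np.arange(-90.0, 90.0 + dlat, dlat):
            if abs(lat0) > 89.0:
                dlon = 360.0
            else:
                dlon = dlat / max(np.cos(np.radians(lat0)), 1e-6)
            for lon0 in np.arange(0.0, 360.0, dlon):
                # grid points in this sector (crude flat-earth test)
                mask = ((np.abs(lats - lat0) < dlat / 2)
                        & (np.abs(lons - lon0) < dlon / 2))
                if mask.any() and field[mask].max() > thresh:
                    hits.append((lat0, lon0, field[mask].max()))
        return hits

    # 1-degree global grid with one artificial "storm".
    lon_g, lat_g = np.meshgrid(np.arange(0, 360.0), np.arange(-90, 91.0))
    wind = np.full(lat_g.shape, 10.0)
    wind[110, 200] = 45.0
    print(stride_search(wind, lat_g, lon_g)[:3])
    ```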

  18. A scale-invariant keypoint detector in log-polar space

    NASA Astrophysics Data System (ADS)

    Tao, Tao; Zhang, Yun

    2017-02-01

    The scale-invariant feature transform (SIFT) algorithm detects keypoints via difference of Gaussian (DoG) images. However, the DoG data lack high-frequency information, which can lead to a performance drop of the algorithm. To address this issue, this paper proposes a novel log-polar feature detector (LPFD) to detect scale-invariant blobs (keypoints) in log-polar space, which, in contrast, retains all the image information. The algorithm consists of three components, viz. keypoint detection, descriptor extraction, and descriptor matching. The algorithm is evaluated on detecting keypoints from the INRIA dataset by comparing with the SIFT algorithm and one of its fast versions, the speeded-up robust features (SURF) algorithm, in terms of correspondences, repeatability, correct matches, and matching score.

  19. CONEDEP: COnvolutional Neural network based Earthquake DEtection and Phase Picking

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Huang, Y.; Yue, H.; Zhou, S.; An, S.; Yun, N.

    2017-12-01

    We developed an automatic local earthquake detection and phase picking algorithm based on a Fully Convolutional Neural Network (FCN). The FCN algorithm detects and segments certain features (phases) in three-component seismograms to realize efficient picking. We use the STA/LTA algorithm and a template matching algorithm to construct the training set from seismograms recorded one month before and after the Wenchuan earthquake. Precise P and S phases are identified and labeled to construct the training set. Noise data are produced by combining background noise and artificial synthetic noise to form a noise set equal in scale to the signal set. Training is performed on GPUs to achieve efficient convergence. Our algorithm significantly improves the detection rate and precision in comparison with the STA/LTA and template matching algorithms.
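
    The STA/LTA trigger used to build the training set is a standard technique; a minimal NumPy version is sketched below, with window lengths and the trigger ratio chosen arbitrarily.

    ```python
    import numpy as np

    def sta_lta_trigger(trace, fs, sta_win=0.5, lta_win=10.0, on=3.5):
        """Classical STA/LTA trigger.

        Returns sample indices where the short-term average of |trace|
        exceeds `on` times the long-term average.
        """
        nsta, nlta = int(sta_win * fs), int(lta_win * fs)
        env = np.abs(trace)
        csum = np.concatenate(([0.0], np.cumsum(env)))
        sta = (csum[nsta:] - csum[:-nsta]) / nsta      # short-term mean
        lta = (csum[nlta:] - csum[:-nlta]) / nlta      # long-term mean
        # align: compare STA and LTA windows ending at the same sample
        n = min(len(sta), len(lta))
        ratio = sta[-n:] / np.maximum(lta[-n:], 1e-12)
        return np.flatnonzero(ratio > on) + (len(trace) - n)

    # Toy seismogram: background noise plus a burst at t = 60 s.
    fs = 100.0
    rng = np.random.default_rng(5)
    trace = rng.normal(0, 1.0, int(120 * fs))
    trace[int(60 * fs):int(62 * fs)] += 8 * rng.normal(0, 1.0, int(2 * fs))
    picks = sta_lta_trigger(trace, fs)
    print("first trigger at t =", picks[0] / fs, "s")
    ```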

  20. Improvement and implementation for Canny edge detection algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Qiu, Yue-hong

    2015-07-01

    Edge detection is necessary for image segmentation and pattern recognition. In this paper, an improved Canny edge detection approach is proposed to address the defects of the traditional algorithm. A modified bilateral filter with a compensation function based on pixel-intensity similarity judgment is used to smooth the image instead of a Gaussian filter, which preserves edge features and removes noise effectively. To reduce the sensitivity to noise in the gradient calculation, the algorithm uses gradient templates in four directions. Finally, the Otsu algorithm adaptively obtains the dual thresholds. The algorithm was implemented with the OpenCV 2.4.0 library in the Visual Studio 2010 environment, and experimental analysis shows that the improved algorithm detects edge details more effectively and with greater adaptability.
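
    A rough OpenCV analogue of this pipeline is sketched below, with a bilateral filter in place of the Gaussian and Otsu's method supplying the dual thresholds; the parameter values are guesses, the paper's compensation function and four-direction gradient templates are not reproduced, and the file name `input.png` is hypothetical.

    ```python
    import cv2

    def improved_canny(gray):
        """Canny variant in the spirit of the pipeline above: bilateral
        smoothing preserves edges, and Otsu's method supplies the dual
        thresholds adaptively (high = Otsu level, low = half of it)."""
        smoothed = cv2.bilateralFilter(gray, 9, 50, 50)
        otsu, _ = cv2.threshold(smoothed, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return cv2.Canny(smoothed, 0.5 * otsu, otsu)

    img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    if img is not None:
        edges = improved_canny(img)
        cv2.imwrite("edges.png", edges)
    ```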

  1. Discriminative correlation filter tracking with occlusion detection

    NASA Astrophysics Data System (ADS)

    Zhang, Shuo; Chen, Zhong; Yu, XiPeng; Zhang, Ting; He, Jing

    2018-03-01

    To address the problem that correlation filter-based tracking algorithms cannot track a target under severe occlusion, a target re-detection mechanism is proposed. First, based on ECO, we propose a multi-peak detection model and use the response value to distinguish occlusion from deformation during tracking, which improves the tracking success rate. We then add a confidence model to the update mechanism to effectively prevent model drift caused by similar targets or background during the tracking process. Finally, a re-detection mechanism is added so that relocation is performed after the target is lost, which increases the accuracy of target positioning. The experimental results demonstrate that the proposed tracker performs favorably against state-of-the-art methods in terms of robustness and accuracy.

  2. An Integrated Intrusion Detection Model of Cluster-Based Wireless Sensor Network

    PubMed Central

    Sun, Xuemei; Yan, Bo; Zhang, Xinzhong; Rong, Chuitian

    2015-01-01

    Considering the characteristics of wireless sensor networks, this paper combines anomaly and misuse detection and proposes an integrated detection model for cluster-based wireless sensor networks, aiming at enhancing the detection rate and reducing the false-alarm rate. An Adaboost algorithm with hierarchical structures is used for anomaly detection at sensor nodes, cluster-head nodes, and sink nodes. A back-propagation network optimized by a cultural algorithm and an artificial fish swarm algorithm is applied to misuse detection at the sink node. Extensive simulations demonstrate that this integrated model has strong intrusion detection performance. PMID:26447696

  3. An Integrated Intrusion Detection Model of Cluster-Based Wireless Sensor Network.

    PubMed

    Sun, Xuemei; Yan, Bo; Zhang, Xinzhong; Rong, Chuitian

    2015-01-01

    Considering the characteristics of wireless sensor networks, this paper combines anomaly and misuse detection and proposes an integrated detection model for cluster-based wireless sensor networks, aiming at enhancing the detection rate and reducing the false-alarm rate. An Adaboost algorithm with hierarchical structures is used for anomaly detection at sensor nodes, cluster-head nodes, and sink nodes. A back-propagation network optimized by a cultural algorithm and an artificial fish swarm algorithm is applied to misuse detection at the sink node. Extensive simulations demonstrate that this integrated model has strong intrusion detection performance.

  4. Cognitive foundations for model-based sensor fusion

    NASA Astrophysics Data System (ADS)

    Perlovsky, Leonid I.; Weijers, Bertus; Mutz, Chris W.

    2003-08-01

    Target detection, tracking, and sensor fusion are complicated problems that are usually performed sequentially: first detecting targets, then tracking, then fusing multiple sensors, which reduces computation. This procedure, however, is inapplicable to difficult targets that cannot be reliably detected using individual sensors on individual scans or frames. In such more complicated cases, the functions of fusing, tracking, and detecting must be performed concurrently. This has often led to prohibitive combinatorial complexity and, as a consequence, to sub-optimal performance compared to the information-theoretic content of all the available data. It is well appreciated that the human mind is qualitatively far superior to existing mathematical methods of sensor fusion in this task; however, the human mind is limited in the amount of information and speed of computation it can cope with. Therefore, research efforts have been devoted toward incorporating "biological lessons" into smart algorithms, yet success has been limited. Why is this so, and how can existing limitations be overcome? The fundamental reasons for current limitations are analyzed and a potentially breakthrough research and development effort is outlined. We utilize the way our mind combines emotions and concepts in the thinking process and present a mathematical approach to accomplishing this on current-technology computers. The presentation will summarize the difficulties encountered by intelligent systems over the last 50 years related to combinatorial complexity, analyze the fundamental limitations of existing algorithms and neural networks, and relate them to the type of logic underlying the computational structure: formal, multivalued, and fuzzy logic. A new concept of dynamic logic will be introduced, along with algorithms capable of pulling together all the available information from multiple sources. This new mathematical technique, like our brain, combines conceptual understanding with emotional evaluation and overcomes the combinatorial complexity of concurrent fusion, tracking, and detection. The presentation will discuss examples of performance in which computational speedups of many orders of magnitude were attained, leading to performance improvements of 10 dB and better.

  5. Testing the Rapid Detection Capabilities of the Quake-Catcher Network

    NASA Astrophysics Data System (ADS)

    Chung, A. I.; Cochran, E.; Yildirim, B.; Christensen, C. M.; Kaiser, A. E.; Lawrence, J. F.

    2013-12-01

    The Quake-Catcher Network (QCN) is a versatile network of MEMS accelerometers that are used in combination with distributed volunteer computing to detect earthquakes around the world. Using a dense network of QCN stations installed in Christchurch, New Zealand after the 2010 M7.1 Darfield earthquake, hundreds of events in the Christchurch area were detected and rapidly characterized. When the M6.3 Christchurch event occurred on 21 February 2011, QCN sensors recorded the event and calculated its magnitude, location, and created a map of estimated shaking intensity within 7 seconds of the earthquake origin time. Successive iterations improved the calculations and, within 24 seconds of the earthquake, magnitude and location values were calculated that were comparable to those provided by GeoNet. We have rigorously tested numerous methods to create a working magnitude scaling relationship. In this presentation, we show a drastic improvement in the magnitude estimates using the maximum acceleration at the time of the first trigger and updated ground accelerations from one to three seconds after the initial trigger. 75% of the events rapidly detected and characterized by QCN are within 0.5 magnitude units of the official GeoNet reported magnitude values, with 95% of the events within 1 magnitude unit. We also test the QCN detection algorithms using higher quality data from the SCSN network in Southern California. We examine a dataset of M5 and larger earthquakes that occurred since 1995. We present the performance of the QCN algorithms for this dataset, including time to detection as well as location and magnitude accuracy.

  6. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    NASA Astrophysics Data System (ADS)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods for the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists) it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed at assessing the performance of an automatic CP segmentation algorithm are presented. The first is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either a success or a failure. We then design three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method, supervised classification, was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers (linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC)) were trained using the designed features and evaluated using leave-one-out cross validation. Results show that LR performs worst among the four classifiers and the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
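
    The box-whisker rule amounts to the standard interquartile-range fence, as in this small sketch (the feature values are invented):

    ```python
    import numpy as np

    def box_whisker_outliers(values, k=1.5):
        """Univariate non-parametric outlier rule behind a box-whisker
        plot: flag values outside [Q1 - k*IQR, Q3 + k*IQR]."""
        q1, q3 = np.percentile(values, [25, 75])
        iqr = q3 - q1
        return (values < q1 - k * iqr) | (values > q3 + k * iqr)

    # E.g., one QA feature per segmented scan (hypothetical values): an
    # unusually small tract volume would be flagged for manual review.
    volumes = np.array([4.1, 4.3, 3.9, 4.2, 4.0, 1.2, 4.4, 4.1])
    print(np.flatnonzero(box_whisker_outliers(volumes)))  # -> [5]
    ```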

  7. Automatic target detection using binary template matching

    NASA Astrophysics Data System (ADS)

    Jun, Dong-San; Sun, Sun-Gu; Park, HyunWook

    2005-03-01

    This paper presents a new automatic target detection (ATD) algorithm to detect targets such as battle tanks and armored personnel carriers in ground-to-ground scenarios. Whereas most ATD algorithms were developed for forward-looking infrared (FLIR) images, we have developed an ATD algorithm for charge-coupled device (CCD) images, which have superior quality to FLIR images in daylight. The proposed algorithm uses fast binary template matching with an adaptive binarization, which is robust to various lighting conditions in CCD images and saves computation time. Experimental results show that the proposed method has good detection performance.
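
    A hedged OpenCV sketch of binary template matching with adaptive binarization; the thresholds, block sizes, and the file names `scene.png` and `tank_template.png` are hypothetical.

    ```python
    import cv2
    import numpy as np

    def detect_binary_template(image, template, thresh=0.6):
        """Binary template matching with adaptive binarization.

        Both images are binarized with a locally adaptive threshold (robust
        to uneven daylight in CCD imagery), then correlated; peaks above
        `thresh` are reported as candidate target locations.
        """
        def binarize(g):
            return cv2.adaptiveThreshold(g, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                         cv2.THRESH_BINARY, 31, 5)
        score = cv2.matchTemplate(binarize(image), binarize(template),
                                  cv2.TM_CCOEFF_NORMED)
        ys, xs = np.where(score >= thresh)
        return list(zip(xs, ys, score[ys, xs]))

    scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)          # hypothetical
    target = cv2.imread("tank_template.png", cv2.IMREAD_GRAYSCALE) # hypothetical
    if scene is not None and target is not None:
        print(detect_binary_template(scene, target)[:5])
    ```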

  8. Lesion Detection in CT Images Using Deep Learning Semantic Segmentation Technique

    NASA Astrophysics Data System (ADS)

    Kalinovsky, A.; Liauchuk, V.; Tarasau, A.

    2017-05-01

    In this paper, the problem of automatic detection of tuberculosis lesions in 3D lung CT images is considered as a benchmark for testing algorithms based on the modern concept of deep learning. For training and testing of the algorithms, a domestic dataset of 338 3D CT scans of tuberculosis patients with manually labelled lesions was used. Algorithms based on deep convolutional networks were implemented and applied in three different ways: slice-wise lesion detection in 2D images using semantic segmentation, slice-wise lesion detection in 2D images using a sliding-window technique, and direct detection of lesions via semantic segmentation in whole 3D CT scans. The algorithms demonstrate superior performance compared to algorithms based on conventional image analysis methods.

  9. Frequency hopping signal detection based on wavelet decomposition and Hilbert-Huang transform

    NASA Astrophysics Data System (ADS)

    Zheng, Yang; Chen, Xihao; Zhu, Rui

    2017-07-01

    Frequency hopping (FH) signals are widely adopted in military communications as a kind of low-probability-of-intercept signal. It is therefore important to research FH signal detection algorithms. Existing detection algorithms for FH signals based on time-frequency analysis cannot satisfy the time and frequency resolution requirements at the same time due to the influence of the window function. To solve this problem, an algorithm based on wavelet decomposition and the Hilbert-Huang transform (HHT) is proposed. The proposed algorithm removes the noise from the received signals by wavelet decomposition and detects the FH signals by the Hilbert-Huang transform. Simulation results show that the proposed algorithm takes into account both time resolution and frequency resolution, and the accuracy of FH signal detection can correspondingly be improved.
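
    The wavelet denoising stage can be sketched with PyWavelets as below, using soft thresholding with the universal threshold; the wavelet choice, decomposition level, and toy FH signal are assumptions, and the HHT stage is not shown.

    ```python
    import numpy as np
    import pywt

    def wavelet_denoise(signal, wavelet="db4", level=4):
        """Soft-threshold wavelet denoising as a pre-processing step.

        The threshold is the common universal threshold estimated from the
        finest-scale coefficients; HHT analysis would follow on the output.
        """
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise estimate
        t = sigma * np.sqrt(2 * np.log(len(signal)))
        coeffs = [coeffs[0]] + [pywt.threshold(c, t, mode="soft")
                                for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[:len(signal)]

    # Toy FH-like signal: two hop frequencies plus noise.
    fs = 8000.0
    t = np.arange(0, 0.2, 1 / fs)
    sig = np.where(t < 0.1, np.sin(2 * np.pi * 600 * t),
                   np.sin(2 * np.pi * 1800 * t))
    noisy = sig + np.random.default_rng(6).normal(0, 0.8, t.size)
    print(np.std(noisy - sig), np.std(wavelet_denoise(noisy) - sig))
    ```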

  10. Robust automatic line scratch detection in films.

    PubMed

    Newson, Alasdair; Almansa, Andrés; Gousseau, Yann; Pérez, Patrick

    2014-03-01

    Line scratch detection in old films is a particularly challenging problem due to the variable spatiotemporal characteristics of this defect. Some of the main problems include sensitivity to noise and texture, and false detections due to thin vertical structures belonging to the scene. We propose a robust and automatic algorithm for frame-by-frame line scratch detection in old films, as well as a temporal algorithm for the filtering of false detections. In the frame-by-frame algorithm, we relax some of the hypotheses used in previous algorithms in order to detect a wider variety of scratches. This step's robustness and lack of external parameters are ensured by the combined use of an a contrario methodology and local statistical estimation. In this manner, over-detection in textured or cluttered areas is greatly reduced. The temporal filtering algorithm eliminates false detections due to thin vertical structures by exploiting the coherence of their motion with that of the underlying scene. Experiments demonstrate the ability of the resulting detection procedure to deal with difficult situations, in particular in the presence of noise, texture, and slanted or partial scratches. Comparisons show significant advantages over previous work.

  11. Automatic mouse ultrasound detector (A-MUD): A new tool for processing rodent vocalizations.

    PubMed

    Zala, Sarah M; Reitschmidt, Doris; Noll, Anton; Balazs, Peter; Penn, Dustin J

    2017-01-01

    House mice (Mus musculus) emit complex ultrasonic vocalizations (USVs) during social and sexual interactions, which have features similar to bird song (i.e., they are composed of several different types of syllables, uttered in succession over time to form a pattern of sequences). Manually processing complex vocalization data is time-consuming and potentially subjective, and therefore, we developed an algorithm that automatically detects mouse ultrasonic vocalizations (Automatic Mouse Ultrasound Detector or A-MUD). A-MUD is a script that runs on STx acoustic software (S_TOOLS-STx version 4.2.2), which is free for scientific use. This algorithm improved the efficiency of processing USV files, as it was 4-12 times faster than manual segmentation, depending upon the size of the file. We evaluated A-MUD error rates using manually segmented sound files as a 'gold standard' reference, and compared them to a commercially available program. A-MUD had lower error rates than the commercial software, as it detected significantly more correct positives, and fewer false positives and false negatives. The errors generated by A-MUD were mainly false negatives, rather than false positives. This study is the first to systematically compare error rates for automatic ultrasonic vocalization detection methods, and A-MUD and subsequent versions will be made available for the scientific community.

  12. CRISPR Detection From Short Reads Using Partial Overlap Graphs.

    PubMed

    Ben-Bassat, Ilan; Chor, Benny

    2016-06-01

    Clustered regularly interspaced short palindromic repeats (CRISPR) are structured regions in bacterial and archaeal genomes, which are part of an adaptive immune system against phages. CRISPRs are important for many microbial studies and are playing an essential role in current gene editing techniques. As such, they attract substantial research interest. The exponential growth in the amount of bacterial sequence data in recent years enables the exploration of CRISPR loci in more and more species. Most of the automated tools that detect CRISPR loci rely on fully assembled genomes. However, many assemblers do not handle repetitive regions successfully. The first tool to work directly on raw sequence data is Crass, which requires reads that are long enough to contain two copies of the same repeat. We present a method to identify CRISPR repeats from raw sequence data of short reads. The algorithm is based on an observation differentiating CRISPR repeats from other types of repeats, and it involves a series of partial constructions of the overlap graph. This enables us to avoid many of the difficulties that assemblers face, as we merely aim to identify the repeats that belong to CRISPR loci. A preliminary implementation of the algorithm shows good results and detects CRISPR repeats in cases where other existing tools fail to do so.

  13. Bridging the semantic gap in sports

    NASA Astrophysics Data System (ADS)

    Li, Baoxin; Errico, James; Pan, Hao; Sezan, M. Ibrahim

    2003-01-01

    One of the major challenges facing current media management systems and the related applications is the so-called "semantic gap" between the rich meaning that a user desires and the shallowness of the content descriptions that are automatically extracted from the media. In this paper, we address the problem of bridging this gap in the sports domain. We propose a general framework for indexing and summarizing sports broadcast programs. The framework is based on a high-level model of sports broadcast video using the concept of an event, defined according to domain-specific knowledge for different types of sports. Within this general framework, we develop automatic event detection algorithms that are based on automatic analysis of the visual and aural signals in the media. We have successfully applied the event detection algorithms to different types of sports including American football, baseball, Japanese sumo wrestling, and soccer. Event modeling and detection contribute to the reduction of the semantic gap by providing rudimentary semantic information obtained through media analysis. We further propose a novel approach, which makes use of independently generated rich textual metadata, to fill the gap completely through synchronization of the information-laden textual data with the basic event segments. An MPEG-7 compliant prototype browsing system has been implemented to demonstrate semantic retrieval and summarization of sports video.

  14. Use of a channelized Hotelling observer to assess CT image quality and optimize dose reduction for iteratively reconstructed images.

    PubMed

    Favazza, Christopher P; Ferrero, Andrea; Yu, Lifeng; Leng, Shuai; McMillan, Kyle L; McCollough, Cynthia H

    2017-07-01

    The use of iterative reconstruction (IR) algorithms in CT generally decreases image noise and enables dose reduction. However, the amount of dose reduction possible using IR without sacrificing diagnostic performance is difficult to assess with conventional image quality metrics. Through this investigation, achievable dose reduction using a commercially available IR algorithm without loss of low contrast spatial resolution was determined with a channelized Hotelling observer (CHO) model and used to optimize a clinical abdomen/pelvis exam protocol. A phantom containing 21 low contrast disks (three different contrast levels and seven different diameters) was imaged at different dose levels. Images were created with filtered backprojection (FBP) and IR. The CHO was tasked with detecting the low contrast disks. CHO performance indicated dose could be reduced by 22% to 25% without compromising low contrast detectability (as compared to full-dose FBP images) whereas 50% or more dose reduction significantly reduced detection performance. Importantly, default settings for the scanner and protocol investigated reduced dose by upward of 75%. Subsequently, CHO-based protocol changes to the default protocol yielded images of higher quality and doses more consistent with values from a larger, dose-optimized scanner fleet. CHO assessment provided objective data to successfully optimize a clinical CT acquisition protocol.

  15. Image based book cover recognition and retrieval

    NASA Astrophysics Data System (ADS)

    Sukhadan, Kalyani; Vijayarajan, V.; Krishnamoorthi, A.; Bessie Amali, D. Geraldine

    2017-11-01

    In this work, we develop a graphical user interface in MATLAB that lets users retrieve information about books in real time. A photo of the book cover is captured through the GUI; the MSER algorithm then automatically detects candidate features in the input image, after which non-text features are filtered out based on morphological differences between text and non-text regions. We implemented a text-character alignment algorithm that improves the accuracy of the original text detection. We also examine the built-in MATLAB OCR algorithm and a commonly used open-source OCR engine; a post-detection algorithm and natural language processing are applied to perform word correction and suppress false detections. Finally, the detection result is linked to the internet to perform online matching. More than 86% accuracy can be obtained by this algorithm.
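
    A minimal OpenCV sketch of the MSER-plus-filtering step; the shape heuristics and the file name `book_cover.jpg` are invented stand-ins for the MATLAB pipeline described above.

    ```python
    import cv2

    # Detect MSER regions and keep those with text-like shape, a rough
    # analogue of the morphological text/non-text filtering described
    # above (the aspect-ratio and size limits are guesses).
    img = cv2.imread("book_cover.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical
    if img is not None:
        mser = cv2.MSER_create()
        regions, _ = mser.detectRegions(img)
        text_boxes = []
        for pts in regions:
            x, y, w, h = cv2.boundingRect(pts.reshape(-1, 1, 2))
            aspect = w / float(h)
            fill = len(pts) / float(w * h)
            if 0.1 < aspect < 10 and h > 8 and fill > 0.2:  # text-like
                text_boxes.append((x, y, w, h))
        print(len(text_boxes), "candidate text regions")
    ```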

  16. Hazardous gas detection for FTIR-based hyperspectral imaging system using DNN and CNN

    NASA Astrophysics Data System (ADS)

    Kim, Yong Chan; Yu, Hyeong-Geun; Lee, Jae-Hoon; Park, Dong-Jo; Nam, Hyun-Woo

    2017-10-01

    Recently, a hyperspectral imaging system (HIS) with a Fourier Transform InfraRed (FTIR) spectrometer has been widely used due to its strengths in detecting gaseous fumes. Even though numerous algorithms for detecting gaseous fumes have already been studied, it is still difficult to detect target gases properly because of atmospheric interference substances and unclear characteristics of low concentration gases. In this paper, we propose detection algorithms for classifying hazardous gases using a deep neural network (DNN) and a convolutional neural network (CNN). In both the DNN and CNN, spectral signal preprocessing, e.g., offset, noise, and baseline removal, are carried out. In the DNN algorithm, the preprocessed spectral signals are used as feature maps of the DNN with five layers, and it is trained by a stochastic gradient descent (SGD) algorithm (50 batch size) and dropout regularization (0.7 ratio). In the CNN algorithm, preprocessed spectral signals are trained with 1 × 3 convolution layers and 1 × 2 max-pooling layers. As a result, the proposed algorithms improve the classification accuracy rate by 1.5% over the existing support vector machine (SVM) algorithm for detecting and classifying hazardous gases.

  17. A Motion Detection Algorithm Using Local Phase Information

    PubMed Central

    Lazar, Aurel A.; Ukani, Nikul H.; Zhou, Yiyin

    2016-01-01

    Previous research demonstrated that global phase alone can be used to faithfully represent visual scenes. Here we provide a reconstruction algorithm by using only local phase information. We also demonstrate that local phase alone can be effectively used to detect local motion. The local phase-based motion detector is akin to models employed to detect motion in biological vision, for example, the Reichardt detector. The local phase-based motion detection algorithm introduced here consists of two building blocks. The first building block measures/evaluates the temporal change of the local phase. The temporal derivative of the local phase is shown to exhibit the structure of a second order Volterra kernel with two normalized inputs. We provide an efficient, FFT-based algorithm for implementing the change of the local phase. The second processing building block implements the detector; it compares the maximum of the Radon transform of the local phase derivative with a chosen threshold. We demonstrate examples of applying the local phase-based motion detection algorithm on several video sequences. We also show how the locally detected motion can be used for segmenting moving objects in video scenes and compare our local phase-based algorithm to segmentation achieved with a widely used optic flow algorithm. PMID:26880882

  18. Detection of dechallenge in spontaneous reporting systems: a comparison of Bayes methods.

    PubMed

    Banu, A Bazila; Alias Balamurugan, S Appavu; Thirumalaikolundusubramanian, Ponniah

    2014-01-01

    Dechallenge is a response observed as the reduction or disappearance of adverse drug reactions (ADRs) on withdrawal of a drug from a patient. Currently available algorithms to detect dechallenge have limitations; hence, there is a need to compare newly available methods. To detect dechallenge in spontaneous reporting systems, the data-mining algorithms Naive Bayes and Improved Naive Bayes were applied, and their performance was compared in terms of accuracy and error. Analyzing factors of dechallenge such as outcome and disease category will help medical practitioners and pharmaceutical industries determine the reasons for dechallenge and take essential steps toward drug safety. Adverse drug reaction reports for 2011 and 2012 were downloaded from the United States Food and Drug Administration's database. The classification results showed that the Improved Naive Bayes algorithm outperformed Naive Bayes, detecting dechallenge with an accuracy of 90.11% and an error of 9.8%. Detecting dechallenge for unknown samples is essential for proper prescription. To overcome the issues exhibited by the Naive Bayes algorithm, the Improved Naive Bayes algorithm can be used to detect dechallenge with higher accuracy and minimal error.
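
    For illustration, a plain (not "Improved") Naive Bayes baseline scored by accuracy can be set up as below; the feature loader is hypothetical, since the abstract does not specify how the reports are encoded.

```python
# Illustrative baseline only: standard Naive Bayes on hypothetical ADR
# report features, scored by accuracy as in the comparison above.
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical loader returning non-negative count/one-hot features and
# dechallenge labels for the downloaded FDA reports.
X, y = load_adr_features()

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MultinomialNB().fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```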

  19. Crystal Identification in Dual-Layer-Offset DOI-PET Detectors Using Stratified Peak Tracking Based on SVD and Mean-Shift Algorithm

    NASA Astrophysics Data System (ADS)

    Wei, Qingyang; Dai, Tiantian; Ma, Tianyu; Liu, Yaqiang; Gu, Yu

    2016-10-01

    An Anger-logic based pixelated PET detector block requires a crystal position map (CPM) to assign the position of each detected event to a most probable crystal index. Accurate assignments are crucial to PET imaging performance. In this paper, we present a novel automatic approach to generating the CPMs for dual-layer-offset (DLO) PET detectors using a stratified peak-tracking method, in which the top and bottom layers are distinguished by their intensity difference and the peaks of the two layers are tracked in succession based on a singular value decomposition (SVD) and a mean-shift algorithm. The CPM is created by classifying each pixel to its nearest peak and assigning the pixel the crystal index of that peak. A MATLAB-based graphical user interface program was developed that includes the automatic algorithm and a manual interaction procedure. The algorithm was tested on three DLO PET detector blocks. Results show that the proposed method exhibits good performance as well as robustness for all three blocks. Compared to existing methods, our approach can directly distinguish the layer and crystal indices using the intensity information and the offset grid pattern.
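
    The peak-tracking idea can be approximated as follows; this hedged sketch clusters flood-map event positions with mean shift and builds the CPM by nearest-peak assignment, omitting the layer-separation and SVD-based ordering steps. The bandwidth is an assumed tuning parameter.

```python
# Hedged sketch: mean-shift peak finding plus nearest-peak CPM assignment.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import MeanShift

def find_crystal_peaks(event_xy, bandwidth=2.0):
    # Each mean-shift cluster centre approximates one crystal peak.
    return MeanShift(bandwidth=bandwidth).fit(event_xy).cluster_centers_

def crystal_position_map(shape, peaks):
    # Classify every pixel to its nearest peak and store that peak's index.
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    pix = np.stack([xx.ravel(), yy.ravel()], axis=1)
    _, idx = cKDTree(peaks).query(pix)
    return idx.reshape(shape)
```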

  20. Uplink transmit beamforming design for SINR maximization with full multiuser channel state information

    NASA Astrophysics Data System (ADS)

    Xi, Songnan; Zoltowski, Michael D.

    2008-04-01

    Multiuser multiple-input multiple-output (MIMO) systems are considered in this paper. We continue our research on uplink transmit beamforming design for multiple users under the assumption that the full multiuser channel state information, i.e., the collection of the channel state information between each of the users and the base station, is known not only to the receiver but also to all the transmitters. We propose an algorithm for designing optimal beamforming weights in terms of maximizing the signal-to-interference-plus-noise ratio (SINR). Through statistical modeling, we decouple the originally intractable optimization problem and achieve a closed-form solution. As in our previous work, the minimum mean-squared error (MMSE) receiver with successive interference cancellation (SIC) is adopted for multiuser detection. The proposed scheme is compared with an existing jointly optimized transceiver design, referred to as the joint transceiver in this paper, and with our previously proposed eigen-beamforming algorithm. Simulation results demonstrate that our algorithm, with much less computational burden, accomplishes almost the same performance as the joint transceiver for spatially independent MIMO channels and even better performance for spatially correlated MIMO channels. It also consistently outperforms our previously proposed eigen-beamforming algorithm.
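
    The abstract does not give the closed form, but the generic SINR-maximizing weight for a channel vector h with interference-plus-noise covariance R is proportional to R^{-1}h, which the following sketch computes:

```python
# Generic max-SINR weight sketch (not the paper's decoupled closed form).
import numpy as np

def max_sinr_weights(h, R):
    # The SINR-maximizing beamformer is proportional to R^{-1} h.
    w = np.linalg.solve(R, h)
    return w / np.linalg.norm(w)

def sinr(w, h, R):
    # SINR = |w^H h|^2 / (w^H R w), with R the interference-plus-noise
    # covariance seen by the detector.
    signal = np.abs(w.conj() @ h) ** 2
    interference = np.real(w.conj() @ R @ w)
    return signal / interference
```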

  1. Detection and Tracking of Moving Objects with Real-Time Onboard Vision System

    NASA Astrophysics Data System (ADS)

    Erokhin, D. Y.; Feldman, A. B.; Korepanov, S. E.

    2017-05-01

    Detection of moving objects in video sequences received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is to develop a set of algorithms that can detect and track moving objects in a real-time computer vision system. This set includes three main parts: an algorithm for estimating and compensating the geometric transformations of images, an algorithm for detecting moving objects, and an algorithm for tracking the detected objects and predicting their positions. The results can be applied to onboard vision systems for aircraft, including small and unmanned aircraft.
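
    Under simple assumptions (grayscale frames, camera motion approximated by a homography), the first two stages can be sketched with OpenCV as follows; the ORB/RANSAC choices and the difference threshold are illustrative, not the authors'.

```python
# Sketch: ego-motion compensation via a feature-based homography, then
# frame differencing to expose independently moving objects.
import cv2
import numpy as np

def moving_object_mask(prev, curr):
    # Estimate the global geometric transformation from ORB matches.
    orb = cv2.ORB_create()
    k1, d1 = orb.detectAndCompute(prev, None)
    k2, d2 = orb.detectAndCompute(curr, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    # Compensate the camera motion, then difference the aligned frames.
    warped = cv2.warpPerspective(prev, H, prev.shape[1::-1])
    diff = cv2.absdiff(warped, curr)
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    return mask  # residual motion: candidate moving objects
```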

  2. Research on improved edge extraction algorithm of rectangular piece

    NASA Astrophysics Data System (ADS)

    He, Yi-Bin; Zeng, Ya-Jun; Chen, Han-Xin; Xiao, San-Xia; Wang, Yan-Wei; Huang, Si-Yu

    Traditional edge detection operators such as the Prewitt, LoG, and Canny operators cannot meet the requirements of modern industrial measurement. This paper proposes an image edge detection algorithm based on an improved morphological gradient, which probes the image with structuring elements and thus operates directly on the image's characteristic information. By combining structuring elements of different shapes and sizes, the ideal image edge information can be detected. The experimental results show that the algorithm extracts edges well from noisy images, yielding clearer and more detailed edges than previous edge detection algorithms.
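
    The basic operation is directly available in OpenCV; this short sketch computes morphological gradients with two structuring elements of different shape and size and fuses them, as a stand-in for the paper's combination scheme.

```python
# Morphological gradient (dilation minus erosion) with two structuring
# elements; the pixelwise maximum fuses the two edge responses.
import cv2
import numpy as np

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)  # assumed input image
k_rect = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
k_ellipse = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

g1 = cv2.morphologyEx(img, cv2.MORPH_GRADIENT, k_rect)
g2 = cv2.morphologyEx(img, cv2.MORPH_GRADIENT, k_ellipse)
edges = np.maximum(g1, g2)  # combined edge map from both elements
```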

  3. Remote Sensing of Mars: Detection of Impact Craters on the Mars Global Surveyor DTM by Integrating Edge- and Region-Based Algorithms

    NASA Astrophysics Data System (ADS)

    Athanassas, C. D.; Vaiopoulos, A.; Kolokoussis, P.; Argialas, D.

    2018-03-01

    This study integrates two different computer vision approaches, namely the circular Hough transform (CHT) and the determinant of Hessian (DoH), to automatically detect as many craters as possible, of any size, on the digital terrain model (DTM) generated by the Mars Global Surveyor mission. Application of the standard version of the CHT to the DTM captured a great number of craters, but only those with diameters smaller than 50 km, failing to capture larger craters. On the other hand, the DoH succeeded in detecting craters that were undetected by the CHT, but its performance was degraded by the irregularity of the surrounding topographic surface: strongly undulating and inclined (trended) topographies hindered crater detection. When run on a de-trended DTM (with the topology kept unaltered), the DoH scored higher. The current results, although not optimal, encourage the combined use of CHT and DoH for routine crater detection undertakings.
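
    Both detectors have standard implementations in scikit-image; the following hedged sketch applies them to a DTM array, with radii, sigmas, and thresholds as illustrative assumptions (the DTM is assumed to be a float image scaled to a modest range).

```python
# Hedged sketch: circular Hough transform for smaller craters, determinant
# of Hessian blobs for larger ones, both from scikit-image.
import numpy as np
from skimage.feature import canny, blob_doh
from skimage.transform import hough_circle, hough_circle_peaks

def detect_craters(dtm):
    # CHT on an edge map of the DTM (radii in pixels are assumed).
    edges = canny(dtm, sigma=2.0)
    radii = np.arange(5, 50, 2)
    hspace = hough_circle(edges, radii)
    _, cx, cy, r = hough_circle_peaks(hspace, radii, total_num_peaks=200)
    small = list(zip(cy, cx, r))
    # DoH blobs capture the larger craters the CHT misses.
    large = blob_doh(dtm, min_sigma=25, max_sigma=120, threshold=0.01)
    return small, large
```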

  4. Computer-assisted detection (CAD) methodology for early detection of response to pharmaceutical therapy in tuberculosis patients

    NASA Astrophysics Data System (ADS)

    Lieberman, Robert; Kwong, Heston; Liu, Brent; Huang, H. K.

    2009-02-01

    The chest x-ray radiological features of tuberculosis patients are well documented, and the radiological features that change in response to successful pharmaceutical therapy can be followed with longitudinal studies over time. The patients can also be classified as either responsive or resistant to pharmaceutical therapy based on clinical improvement. We have retrospectively collected time series chest x-ray images of 200 patients diagnosed with tuberculosis receiving the standard pharmaceutical treatment. Computer algorithms can be created to utilize image texture features to assess the temporal changes in the chest x-rays of the tuberculosis patients. This methodology provides a framework for a computer-assisted detection (CAD) system that may provide physicians with the ability to detect poor treatment response earlier in pharmaceutical therapy. Early detection allows physicians to respond with more timely treatment alternatives and improved outcomes. Such a system has the potential to increase treatment efficacy for millions of patients each year.
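
    As a hypothetical illustration of following a texture feature over a longitudinal series, the sketch below tracks GLCM contrast across time-ordered chest x-rays; the specific texture features used in the study are not stated, so this choice is an assumption.

```python
# Hypothetical texture-trend sketch over a time series of chest x-rays,
# using GLCM contrast from scikit-image as the example feature.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_trend(images):
    # images: list of uint8 arrays, ordered in time for one patient.
    contrast = []
    for img in images:
        glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                            symmetric=True, normed=True)
        contrast.append(graycoprops(glcm, "contrast")[0, 0])
    # A consistent decrease might indicate treatment response; a flat
    # trend, resistance (an assumed heuristic, not the paper's rule).
    return np.diff(contrast)
```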

  5. Digital holographic microscopy for detection of Trypanosoma cruzi parasites in fresh blood mounts

    NASA Astrophysics Data System (ADS)

    Romero, G. G.; Monaldi, A. C.; Alanís, E. E.

    2012-03-01

    An off-axis holographic microscope, in a transmission mode, calibrated to automatically detect the presence of Trypanosoma cruzi in blood is developed as an alternative diagnosis tool for Chagas disease. Movements of the microorganisms are detected by measuring the phase shift they produce on the transmitted wave front. A thin layer of blood infected by Trypanosoma cruzi parasites is examined in the holographic microscope, the images of the visual field being registered with a CCD camera. Two consecutive holograms of the same visual field are subtracted point by point and a phase contrast image of the resulting hologram is reconstructed by means of the angular spectrum propagation algorithm. This method enables the measurement of phase distributions corresponding to temporal differences between digital holograms in order to detect whether parasites are present or not. Experimental results obtained using this technique show that it is an efficient alternative that can be incorporated successfully as a part of a fully automatic system for detection and counting of this type of microorganisms.
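
    The reconstruction step can be sketched with the standard angular spectrum method; the wavelength, pixel pitch, and propagation distance are assumed values, and the off-axis carrier demodulation is omitted.

```python
# Minimal angular-spectrum propagation sketch for hologram reconstruction.
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    # Propagation kernel; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

# Phase contrast of the difference of two consecutive holograms:
#   diff = hologram2 - hologram1
#   phase = np.angle(angular_spectrum(diff, 633e-9, 5e-6, 0.05))  # assumed
```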

  6. Heterogeneous Vision Data Fusion for Independently Moving Cameras

    DTIC Science & Technology

    2010-03-01

    ... target detection, tracking, and identification over a large terrain. The goal of the project is to investigate and evaluate the existing image ... fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms in moving target detection and tracking. The ... moving target detection and classification. Subject terms: image fusion, target detection, moving cameras, IR camera, EO camera.

  7. Prostate Brachytherapy Seed Reconstruction with Gaussian Blurring and Optimal Coverage Cost

    PubMed Central

    Lee, Junghoon; Liu, Xiaofeng; Jain, Ameet K.; Song, Danny Y.; Burdette, E. Clif; Prince, Jerry L.; Fichtinger, Gabor

    2009-01-01

    Intraoperative dosimetry in prostate brachytherapy requires localization of the implanted radioactive seeds. A tomosynthesis-based seed reconstruction method is proposed. A three-dimensional volume is reconstructed from Gaussian-blurred projection images and candidate seed locations are computed from the reconstructed volume. A false positive seed removal process, formulated as an optimal coverage problem, iteratively removes “ghost” seeds that are created by tomosynthesis reconstruction. In an effort to minimize pose errors that are common in conventional C-arms, initial pose parameter estimates are iteratively corrected by using the detected candidate seeds as fiducials, which automatically “focuses” the collected images and improves successive reconstructed volumes. Simulation results imply that the implanted seed locations can be estimated with a detection rate of ≥ 97.9% and ≥ 99.3% from three and four images, respectively, when the C-arm is calibrated and the pose of the C-arm is known. The algorithm was also validated on phantom data sets successfully localizing the implanted seeds from four or five images. In a Phase-1 clinical trial, we were able to localize the implanted seeds from five intraoperative fluoroscopy images with 98.8% (STD=1.6) overall detection rate. PMID:19605321

  8. An Automated Energy Detection Algorithm Based on Consecutive Mean Excision

    DTIC Science & Technology

    2018-01-01

    ... present in the RF spectrum. Subject terms: RF spectrum, detection threshold algorithm, consecutive mean excision, rank order filter, statistical ... [table-of-contents residue omitted: rank order filter (ROF), crest factor (CF), statistical summary, algorithm, conclusion, references] ... energy detection algorithm based on morphological filter processing with a semi-disk structure. Adelphi (MD): Army Research Laboratory (US); 2018 Jan.
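
    For context, a basic consecutive mean excision (CME) threshold, one of the components named in the subject terms, can be sketched as follows; the excision factor is an assumed tuning constant.

```python
# Sketch of a basic consecutive mean excision (CME) threshold: iteratively
# excise samples exceeding factor * mean of the retained set until stable.
import numpy as np

def cme_threshold(power, factor=2.0):
    kept = np.sort(np.asarray(power, dtype=float))
    while True:
        thresh = factor * kept.mean()
        remaining = kept[kept <= thresh]
        if remaining.size == kept.size:
            return thresh  # samples above this are treated as signals
        kept = remaining
```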

  9. A Methodology for Determining Statistical Performance Compliance for Airborne Doppler Radar with Forward-Looking Turbulence Detection Capability

    NASA Technical Reports Server (NTRS)

    Bowles, Roland L.; Buck, Bill K.

    2009-01-01

    The objective of the research developed and presented in this document was to statistically assess turbulence hazard detection performance employing airborne pulse Doppler radar systems. The FAA certification methodology for forward-looking airborne turbulence radars will require estimating the probabilities of missed and false hazard indications under operational conditions. Analytical approaches must be used due to the near impossibility of obtaining sufficient statistics experimentally. This report describes an end-to-end analytical technique for estimating these probabilities for Enhanced Turbulence (E-Turb) radar systems under noise-limited conditions, for a variety of aircraft types, as defined in FAA TSO-C134. This technique provides one means, but not the only means, by which an applicant can demonstrate compliance with the FAA-directed ATDS Working Group performance requirements. Turbulence hazard algorithms were developed that derived predictive estimates of aircraft hazards from basic radar observables. These algorithms were designed to prevent false turbulence indications while accurately predicting areas of elevated turbulence risk to aircraft, passengers, and crew; they were successfully flight tested on a NASA B757-200 and a Delta Air Lines B737-800. Application of this methodology for calculating the probabilities of missed and false hazard indications, taking into account the effects of the various algorithms used, is demonstrated for representative transport aircraft and radar performance characteristics.
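
    As a toy numerical illustration of the two certification quantities, the following assumes the estimated hazard metric has Gaussian error around the true level and applies a fixed decision threshold; this is not the report's end-to-end technique.

```python
# Toy illustration: miss / false-alarm probabilities for a threshold
# detector with an assumed Gaussian error on the estimated hazard metric.
from scipy.stats import norm

def miss_and_false_alarm(hazard_true, threshold, sigma_est):
    # Missed indication: true hazard present, estimate falls below threshold.
    p_miss = norm.cdf(threshold, loc=hazard_true, scale=sigma_est)
    # False indication: benign conditions (true level assumed 0) yield an
    # estimate above the threshold.
    p_false = 1.0 - norm.cdf(threshold, loc=0.0, scale=sigma_est)
    return p_miss, p_false
```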

  10. Image Analysis Algorithms for Immunohistochemical Assessment of Cell Death Events and Fibrosis in Tissue Sections

    PubMed Central

    Krajewska, Maryla; Smith, Layton H.; Rong, Juan; Huang, Xianshu; Hyer, Marc L.; Zeps, Nikolajs; Iacopetta, Barry; Linke, Steven P.; Olson, Allen H.; Reed, John C.; Krajewski, Stan

    2009-01-01

    Cell death is of broad physiological and pathological importance, making quantification of biochemical events associated with cell demise a high priority for experimental pathology. Fibrosis is a common consequence of tissue injury involving necrotic cell death. Using tissue specimens from experimental mouse models of traumatic brain injury, cardiac fibrosis, and cancer, as well as human tumor specimens assembled in tissue microarray (TMA) format, we undertook computer-assisted quantification of specific immunohistochemical and histological parameters that characterize processes associated with cell death. In this study, we demonstrated the utility of image analysis algorithms for color deconvolution, colocalization, and nuclear morphometry to characterize cell death events in tissue specimens: (a) subjected to immunostaining for detecting cleaved caspase-3, cleaved poly(ADP-ribose)-polymerase, cleaved lamin-A, phosphorylated histone H2AX, and Bcl-2; (b) analyzed by terminal deoxyribonucleotidyl transferase–mediated dUTP nick end labeling assay to detect DNA fragmentation; and (c) evaluated with Masson's trichrome staining. We developed novel algorithm-based scoring methods and validated them using TMAs as a high-throughput format. The proposed computer-assisted scoring methods for digital images by brightfield microscopy permit linear quantification of immunohistochemical and histochemical stainings. Examples are provided of digital image analysis performed in automated or semiautomated fashion for successful quantification of molecular events associated with cell death in tissue sections. (J Histochem Cytochem 57:649–663, 2009) PMID:19289554
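
    One of the named building blocks, color deconvolution, is available in scikit-image; this sketch separates the hematoxylin/eosin/DAB stains and reports the fraction of DAB-positive pixels. The positivity threshold is an assumption.

```python
# Colour deconvolution sketch: quantify DAB-positive staining (e.g., for
# cleaved caspase-3) in an RGB brightfield image.
from skimage.color import rgb2hed

def dab_positive_fraction(rgb_image, dab_threshold=0.03):
    hed = rgb2hed(rgb_image)   # channels: haematoxylin, eosin, DAB
    dab = hed[:, :, 2]
    # Fraction of pixels above an assumed DAB positivity threshold.
    return (dab > dab_threshold).mean()
```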

  11. A semianalytical MERIS green-red band algorithm for identifying phytoplankton bloom types in the East China Sea

    NASA Astrophysics Data System (ADS)

    Tao, Bangyi; Mao, Zhihua; Lei, Hui; Pan, Delu; Bai, Yan; Zhu, Qiankun; Zhang, Zhenglong

    2017-03-01

    A new bio-optical algorithm based on the green and red bands of the Medium Resolution Imaging Spectrometer (MERIS) is developed to differentiate the harmful algal blooms of Prorocentrum donghaiense Lu (P. donghaiense) from diatom blooms in the East China Sea (ECS). Specifically, a novel green-red index (GRI), actually an indicator for a(510) of bloom waters, is retrieved from a semianalytical bio-optical model based on the green and red bands of phytoplankton-absorption and backscattering spectra. In addition, a MERIS-based diatom index (DIMERIS) is derived by adjusting a Moderate Resolution Imaging Spectroradiometer (MODIS) diatom index algorithm to the MERIS bands. Finally, bloom types are effectively differentiated in the feature spaces of the green-red index and DIMERIS. Compared with three previous MERIS-based quasi-analytical algorithm (QAA) algorithms and three existing classification methods, the proposed GRI and classification method have the best discrimination performance when using the MERIS data. Further validations of the algorithm by using several MERIS image series and near-concurrent in situ observations indicate that our algorithm yields the best classification accuracy and thus can be used to reliably detect and classify P. donghaiense and diatom blooms in the ECS. This is the first time that the MERIS data have been used to identify bloom types in the ECS. Our algorithm can also be used for the successor of the MERIS, the Ocean and Land Color Instrument, which will aid the long-term observation of species succession in the ECS.

  12. Phenotyping for patient safety: algorithm development for electronic health record based automated adverse event and medical error detection in neonatal intensive care.

    PubMed

    Li, Qi; Melton, Kristin; Lingren, Todd; Kirkendall, Eric S; Hall, Eric; Zhai, Haijun; Ni, Yizhao; Kaiser, Megan; Stoutenborough, Laura; Solti, Imre

    2014-01-01

    Although electronic health records (EHRs) have the potential to provide a foundation for quality and safety algorithms, few studies have measured their impact on automated adverse event (AE) and medical error (ME) detection within the neonatal intensive care unit (NICU) environment. This paper presents two phenotyping AE and ME detection algorithms (ie, IV infiltrations, narcotic medication oversedation and dosing errors) and describes manual annotation of airway management and medication/fluid AEs from NICU EHRs. From 753 NICU patient EHRs from 2011, we developed two automatic AE/ME detection algorithms, and manually annotated 11 classes of AEs in 3263 clinical notes. Performance of the automatic AE/ME detection algorithms was compared to trigger tool and voluntary incident reporting results. AEs in clinical notes were double annotated and consensus achieved under neonatologist supervision. Sensitivity, positive predictive value (PPV), and specificity are reported. Twelve severe IV infiltrates were detected. The algorithm identified one more infiltrate than the trigger tool and eight more than incident reporting. One narcotic oversedation was detected demonstrating 100% agreement with the trigger tool. Additionally, 17 narcotic medication MEs were detected, an increase of 16 cases over voluntary incident reporting. Automated AE/ME detection algorithms provide higher sensitivity and PPV than currently used trigger tools or voluntary incident-reporting systems, including identification of potential dosing and frequency errors that current methods are unequipped to detect.
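
    The reported evaluation metrics reduce to simple count ratios; a small helper matching the evaluation described above (with illustrative count names) is:

```python
# Helper for the reported metrics, computed from validation counts
# (tp/fp/fn/tn values here would come from the annotated gold standard).
def sensitivity_ppv_specificity(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # fraction of true AEs/MEs detected
    ppv = tp / (tp + fp)           # fraction of detections that are real
    specificity = tn / (tn + fp)
    return sensitivity, ppv, specificity
```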

  13. EAGLE Monitors by Collecting Facts and Generating Obligations

    NASA Technical Reports Server (NTRS)

    Barringer, Howard; Goldberg, Allen; Havelund, Klaus; Sen, Koushik

    2003-01-01

    We present a rule-based framework, called EAGLE, that has been shown to be capable of defining and implementing a range of finite-trace monitoring logics, including future- and past-time temporal logic, extended regular expressions, real-time and metric temporal logics, interval logics, forms of quantified temporal logics, and so on. A monitor for an EAGLE formula checks whether a finite trace of states satisfies the given formula. We present, in detail, an algorithm for the synthesis of monitors for EAGLE. The algorithm is implemented as a Java application and involves novel techniques for rule definition, manipulation, and execution. Monitoring is achieved on a state-by-state basis, avoiding any need to store the input trace of states. Our initial experiments have been successful: EAGLE detected a previously unknown bug while testing a planetary rover controller.
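
    A toy state-by-state monitor in the spirit of rule-based runtime verification, though not EAGLE's actual rule language, can be written as follows; it checks the past-time property "alarm implies previously request" without storing the trace.

```python
# Toy state-by-state monitor: only a constant amount of summary state is
# kept, so the input trace itself never needs to be stored.
def monitor(trace):
    seen_request = False
    for state in trace:            # each state: a set of proposition names
        if "request" in state:
            seen_request = True
        if "alarm" in state and not seen_request:
            return False           # property violated at this state
    return True

assert monitor([{"request"}, set(), {"alarm"}]) is True
assert monitor([set(), {"alarm"}]) is False
```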

  14. Texture orientation-based algorithm for detecting infrared maritime targets.

    PubMed

    Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Xu, Wenhai

    2015-05-20

    Infrared maritime target detection is a key technology for maritime target searching systems. However, in infrared maritime images (IMIs) taken under complicated sea conditions, background clutter such as ocean waves, clouds, or sea fog usually has high intensity that can easily overwhelm the brightness of real targets, which is difficult for traditional target detection algorithms to handle. To mitigate this problem, this paper proposes a novel target detection algorithm based on texture orientation. The algorithm first extracts suspected targets by analyzing the inter-subband correlation between the horizontal and vertical wavelet subbands of the original IMI at the first scale. Then self-adaptive wavelet threshold denoising and local singularity analysis of the original IMI are combined to further remove false alarms. Experiments show that, compared with traditional algorithms, this algorithm suppresses background clutter much better and achieves better single-frame detection of infrared maritime targets. In addition, to further guarantee accurate target extraction, a pipeline-filtering algorithm is adopted to eliminate residual false alarms. The high practical value and applicability of the proposed strategy are strongly supported by experimental data acquired under different environmental conditions.
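
    The first stage can be approximated with PyWavelets as below; the wavelet choice is an assumption, and the product of the horizontal and vertical detail magnitudes stands in for the paper's inter-subband correlation measure.

```python
# Sketch of the first stage: one-level wavelet decomposition, then a joint
# horizontal/vertical subband response as a target saliency cue.
import numpy as np
import pywt

def subband_saliency(image, wavelet="db2"):
    _, (cH, cV, _) = pywt.dwt2(image.astype(float), wavelet)
    # Compact targets respond in both orientations, whereas clutter such
    # as waves is orientation-selective, so the product favours targets.
    return np.abs(cH) * np.abs(cV)
```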

  15. Investigating prior probabilities in a multiple hypothesis test for use in space domain awareness

    NASA Astrophysics Data System (ADS)

    Hardy, Tyler J.; Cain, Stephen C.

    2016-05-01

    The goal of this research effort is to improve the Space Domain Awareness (SDA) capabilities of current telescope systems through improved detection algorithms. Ground-based optical SDA telescopes are often spatially under-sampled, or aliased. This fact negatively impacts the detection performance of traditionally proposed binary and correlation-based detection algorithms. A Multiple Hypothesis Test (MHT) algorithm has previously been developed to mitigate the effects of spatial aliasing by testing potential Resident Space Objects (RSOs) against several sub-pixel-shifted Point Spread Functions (PSFs). An MHT has been shown to increase detection performance for the same false-alarm rate. In this paper, the prior probabilities assumed in an MHT algorithm are investigated. First, an analysis of the pixel decision space is completed to determine the prior probabilities of the alternative hypotheses. These probabilities are then implemented in an MHT algorithm, and the algorithm is tested against previous MHT algorithms using simulated RSO data. Results are reported with Receiver Operating Characteristic (ROC) curves and probability-of-detection (Pd) analysis.
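
    A minimal sketch of the multiple-hypothesis decision with priors might look like the following, where a pixel neighborhood is scored against several sub-pixel-shifted PSFs plus a noise-only hypothesis under additive Gaussian noise; the PSFs, priors, and amplitude estimate are placeholders.

```python
# Hedged MHT sketch: pick the highest posterior-like score among the
# noise-only hypothesis and several shifted-PSF hypotheses.
import numpy as np

def mht_detect(patch, shifted_psfs, priors, prior_noise, sigma):
    # Noise-only hypothesis score (log prior + Gaussian log-likelihood).
    scores = [np.log(prior_noise) - (patch ** 2).sum() / (2 * sigma ** 2)]
    for psf, prior in zip(shifted_psfs, priors):
        # Least-squares amplitude for this shift, then the residual score.
        amp = (patch * psf).sum() / (psf ** 2).sum()
        resid = patch - amp * psf
        scores.append(np.log(prior) - (resid ** 2).sum() / (2 * sigma ** 2))
    return int(np.argmax(scores)) != 0  # True if any RSO hypothesis wins
```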

  16. Robust crop and weed segmentation under uncontrolled outdoor illumination.

    PubMed

    Jeon, Hong Y; Tian, Lei F; Zhu, Heping

    2011-01-01

    An image processing algorithm for detecting individual weeds was developed and evaluated. The weed detection process included normalized excess-green conversion, statistical threshold value estimation, adaptive image segmentation, median filtering, morphological feature calculation, and an Artificial Neural Network (ANN). The developed algorithm was validated for its ability to identify and detect weeds and crop plants under uncontrolled outdoor illumination. A field robot with a machine vision system captured field images under outdoor illumination, and the image processing algorithm automatically processed them without manual adjustment. The errors of the algorithm when processing 666 field images ranged from 2.1 to 2.9%. The ANN correctly detected 72.6% of crop plants among the identified plants and considered the rest to be weeds; however, the ANN identification rate for crop plants improved to 95.1% once the error sources in the algorithm were addressed. The developed weed detection and image processing algorithm provides a novel method for identifying plants against a soil background under uncontrolled outdoor illumination and for differentiating weeds from crop plants. Thus, the proposed machine vision and processing algorithm may be useful for outdoor applications, including plant-specific direct applications (PSDA).
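
    The first two stages can be sketched as follows, with Otsu's method standing in for the paper's statistical threshold estimation; the OpenCV function choices are assumptions.

```python
# Sketch: normalized excess-green conversion, thresholding, median filter.
import cv2
import numpy as np

def plant_mask(bgr):
    b, g, r = [c.astype(float) for c in cv2.split(bgr)]
    s = b + g + r + 1e-6
    exg = 2 * g / s - r / s - b / s            # normalized excess green
    exg8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Otsu stands in here for the statistical threshold estimation step.
    _, mask = cv2.threshold(exg8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.medianBlur(mask, 5)             # median filter, as described
```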

  17. Evolutionary Design of a Robotic Material Defect Detection System

    NASA Technical Reports Server (NTRS)

    Ballard, Gary; Howsman, Tom; Craft, Mike; ONeil, Daniel; Steincamp, Jim; Howell, Joe T. (Technical Monitor)

    2002-01-01

    During the post-flight inspection of SSME engines, several inaccessible regions must be disassembled to inspect for defects such as cracks, scratches, and gouges. An improvement to the inspection process would be the design and development of very small robots capable of penetrating these inaccessible regions and detecting the defects. The goal of this research was to utilize an evolutionary design approach for the robotic detection of these types of defects. A simulation and visualization tool was developed, prior to receiving the hardware, as a development test bed. A small, commercial off-the-shelf (COTS) robot was selected from several candidates as the proof-of-concept robot. The basic approach to detecting the defects was to utilize cadmium sulfide (CdS) sensors to detect changes in the contrast of an illuminated surface. A neural network, optimally designed using a genetic algorithm, was employed to detect the presence of the defects (cracks). Using the COTS robot and CdS sensors, the research successfully demonstrated that an evolutionarily designed neural network can detect the presence of surface defects.
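
    A toy sketch of the evolutionary design idea follows: a genetic algorithm searches the weights of a tiny feed-forward network mapping CdS contrast readings to a crack/no-crack decision. The network size, population size, mutation rate, and fitness data are all assumptions.

```python
# Toy GA-evolved neural network: 4 sensor inputs -> 3 hidden -> 1 output.
import numpy as np

rng = np.random.default_rng(0)

def predict(w, x):
    W1, b1, W2, b2 = w[:12].reshape(3, 4), w[12:15], w[15:18], w[18]
    h = np.tanh(W1 @ x + b1)
    return 1 / (1 + np.exp(-(W2 @ h + b2)))   # crack probability

def fitness(w, X, y):
    preds = np.array([predict(w, x) for x in X])
    return -np.mean((preds - y) ** 2)          # higher is better

def evolve(X, y, pop=50, gens=100):
    P = rng.normal(size=(pop, 19))             # 19 weights per genome
    for _ in range(gens):
        f = np.array([fitness(w, X, y) for w in P])
        parents = P[np.argsort(f)[-pop // 2:]]                   # selection
        children = parents + rng.normal(0, 0.1, parents.shape)   # mutation
        P = np.vstack([parents, children])
    return P[np.argmax([fitness(w, X, y) for w in P])]
```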

  18. Robust Cell Detection of Histopathological Brain Tumor Images Using Sparse Reconstruction and Adaptive Dictionary Selection

    PubMed Central

    Su, Hai; Xing, Fuyong; Yang, Lin

    2016-01-01

    Successful diagnostic and prognostic stratification, treatment outcome prediction, and therapy planning depend on reproducible and accurate pathology analysis. Computer aided diagnosis (CAD) is a useful tool to help doctors make better decisions in cancer diagnosis and treatment. Accurate cell detection is often an essential prerequisite for subsequent cellular analysis. The major challenge of robust brain tumor nuclei/cell detection is to handle significant variations in cell appearance and to split touching cells. In this paper, we present an automatic cell detection framework using sparse reconstruction and adaptive dictionary learning. The main contributions of our method are: 1) A sparse reconstruction based approach to split touching cells; 2) An adaptive dictionary learning method used to handle cell appearance variations. The proposed method has been extensively tested on a data set with more than 2000 cells extracted from 32 whole slide scanned images. The automatic cell detection results are compared with the manually annotated ground truth and other state-of-the-art cell detection algorithms. The proposed method achieves the best cell detection accuracy with a F1 score = 0.96. PMID:26812706

  19. Superior Rhythm Discrimination With the SmartShock Technology Algorithm - Results of the Implantable Defibrillator With Enhanced Features and Settings for Reduction of Inaccurate Detection (DEFENSE) Trial.

    PubMed

    Oginosawa, Yasushi; Kohno, Ritsuko; Honda, Toshihiro; Kikuchi, Kan; Nozoe, Masatsugu; Uchida, Takayuki; Minamiguchi, Hitoshi; Sonoda, Koichiro; Ogawa, Masahiro; Ideguchi, Takeshi; Kizaki, Yoshihisa; Nakamura, Toshihiro; Oba, Kageyuki; Higa, Satoshi; Yoshida, Keiki; Tsunoda, Soichi; Fujino, Yoshihisa; Abe, Haruhiko

    2017-08-25

    Shocks delivered by implanted anti-tachyarrhythmia devices, even when appropriate, lower quality of life and survival. The new SmartShock Technology® (SST) discrimination algorithm was developed to prevent the delivery of inappropriate shocks. This prospective, multicenter, observational study compared the rate of inaccurate detection of ventricular tachyarrhythmia using the SST vs. a conventional discrimination algorithm. Methods and Results: Recipients of implantable cardioverter defibrillators (ICDs) or cardiac resynchronization therapy defibrillators (CRT-Ds) equipped with the SST algorithm were enrolled and followed up every 6 months. The tachycardia detection rate was set at ≥150 beats/min with the SST algorithm. The primary endpoint was the time to first inaccurate detection of ventricular tachycardia (VT) with the conventional vs. the SST discrimination algorithm, up to 2 years of follow-up. Between March 2012 and September 2013, 185 patients (mean age, 64.0±14.9 years; men, 74%; secondary prevention indication, 49.5%) were enrolled at 14 Japanese medical centers. Inaccurate detection was observed in 32 patients (17.6%) with the conventional algorithm vs. 19 patients (10.4%) with the SST algorithm. SST significantly lowered the rate of inaccurate detection by dual-chamber devices (HR, 0.50; 95% CI: 0.263-0.950; P=0.034). Compared with previous algorithms, the SST discrimination algorithm significantly lowered the rate of inaccurate detection of VT in recipients of dual-chamber ICDs or CRT-Ds.

  20. Data-driven approach of CUSUM algorithm in temporal aberrant event detection using interactive web applications.

    PubMed

    Li, Ye; Whelan, Michael; Hobbs, Leigh; Fan, Wen Qi; Fung, Cecilia; Wong, Kenny; Marchand-Austin, Alex; Badiani, Tina; Johnson, Ian

    2016-06-27

    In 2014/2015, Public Health Ontario developed disease-specific, cumulative sum (CUSUM)-based statistical algorithms for detecting aberrant increases in reportable infectious disease incidence in Ontario. The objective of this study was to determine whether the prospective application of these CUSUM algorithms, based on historical patterns, improved specificity and sensitivity compared with the currently used Early Aberration Reporting System (EARS) algorithm developed by the US Centers for Disease Control and Prevention. A total of seven algorithms were developed for the following diseases: cyclosporiasis, giardiasis, influenza (one each for type A and type B), mumps, pertussis, and invasive pneumococcal disease. Historical data were used as the baseline for assessing known outbreaks. Regression models were used to model seasonality, and CUSUM was applied to the difference between observed and expected counts. An interactive web application was developed allowing program staff to interact directly with the data and tune the parameters of the CUSUM algorithms using their expertise in the epidemiology of each disease. Using these parameters, a CUSUM detection system was applied prospectively and the results were compared with the outputs generated by EARS. The outcomes of interest were the detection of outbreaks or of the start of a known seasonal increase, and the prediction of the peak in activity. The CUSUM algorithms detected provincial outbreaks earlier than the EARS algorithm, identified the start of the influenza season in advance of traditional methods, and had fewer false-positive alerts. Additionally, involving staff in the creation of the algorithms improved their understanding of the algorithms and their use in practice. Using interactive web-based technology to tune CUSUM improved the sensitivity and specificity of the detection algorithms.
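
    A minimal one-sided CUSUM on regression residuals, matching the description above, can be written as follows; the reference value k and decision limit h are assumed tuning parameters.

```python
# One-sided CUSUM on the difference between observed and expected counts,
# with the expectation coming from a seasonal regression model.
def cusum_alerts(observed, expected, k=0.5, h=5.0):
    s, alerts = 0.0, []
    for t, (x, mu) in enumerate(zip(observed, expected)):
        s = max(0.0, s + (x - mu) - k)   # accumulate excess over expected
        if s > h:
            alerts.append(t)             # aberration flagged at time t
            s = 0.0                      # reset after an alert
    return alerts
```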
