Sample records for robust automatic high

  1. Robust output tracking control of a laboratory helicopter for automatic landing

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Lu, Geng; Zhong, Yisheng

    2014-11-01

    In this paper, the robust output tracking control problem of a laboratory helicopter for automatic landing in high seas is investigated. The motion of the helicopter is required to synchronise with that of an oscillating platform, e.g. the deck of a vessel subject to wave-induced motions. A robust linear time-invariant output feedback controller consisting of a nominal controller and a robust compensator is designed. The robust compensator is introduced to restrain the influences of parametric uncertainties, nonlinearities and external disturbances. It is shown that robust stability and the robust tracking property can be achieved simultaneously. Experimental results on the laboratory helicopter for automatic landing demonstrate the effectiveness of the designed control approach.

  2. SaRAD: a Simple and Robust Abbreviation Dictionary.

    PubMed

    Adar, Eytan

    2004-03-01

    Due to recent interest in the use of textual material to augment traditional experiments, it has become necessary to automatically cluster, classify and filter natural language information. The Simple and Robust Abbreviation Dictionary (SaRAD) provides an easy-to-implement, high-performance tool for the construction of a biomedical symbol dictionary. The algorithms, applied to the MEDLINE document set, result in a high-quality dictionary and toolset for disambiguating abbreviation symbols automatically.

  3. Markov random field based automatic image alignment for electron tomography.

    PubMed

    Amat, Fernando; Moussavi, Farshid; Comolli, Luis R; Elidan, Gal; Downing, Kenneth H; Horowitz, Mark

    2008-03-01

    We present a method for automatic full-precision alignment of the images in a tomographic tilt series. Full-precision automatic alignment of cryo electron microscopy images has remained a difficult challenge to date, due to the limited electron dose and low image contrast. These facts lead to poor signal to noise ratio (SNR) in the images, which causes automatic feature trackers to generate errors, even with high contrast gold particles as fiducial features. To enable fully automatic alignment for full-precision reconstructions, we frame the problem probabilistically as finding the most likely particle tracks given a set of noisy images, using contextual information to make the solution more robust to the noise in each image. To solve this maximum likelihood problem, we use Markov Random Fields (MRF) to establish the correspondence of features in alignment and robust optimization for projection model estimation. The resulting algorithm, called Robust Alignment and Projection Estimation for Tomographic Reconstruction, or RAPTOR, has not needed any manual intervention for the difficult datasets we have tried, and has provided sub-pixel alignment that is as good as the manual approach by an expert user. We are able to automatically map complete and partial marker trajectories and thus obtain highly accurate image alignment. Our method has been applied to challenging cryo electron tomographic datasets with low SNR from intact bacterial cells, as well as several plastic section and X-ray datasets.

  4. Automatic and robust extrinsic camera calibration for high-accuracy mobile mapping

    NASA Astrophysics Data System (ADS)

    Goeman, Werner; Douterloigne, Koen; Bogaert, Peter; Pires, Rui; Gautama, Sidharta

    2012-10-01

    A mobile mapping system (MMS) is the geoinformation community's answer to the exponentially growing demand for various geospatial data with increasingly higher accuracies, captured by multiple sensors. As mobile mapping technology is pushed towards applications on water, rail, or road, the need emerges for an external sensor calibration procedure which is portable, fast and easy to perform. This way, sensors can be mounted and demounted depending on the application requirements without the need for time-consuming calibration procedures. A new methodology is presented to provide a high-quality external calibration of cameras which is automatic, robust and foolproof. The MMS uses an Applanix POSLV420, which is a tightly coupled GPS/INS positioning system. The cameras used are Point Grey color video cameras synchronized with the GPS/INS system. The method uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well-studied absolute orientation problem needs to be solved. Here, a mutual-information-based image registration technique is studied for automatic alignment of the ranging pole. Finally, a few benchmarking tests under various lighting conditions prove the methodology's robustness, showing absolute stereo measurement accuracies of a few centimeters.

  5. Automated contour detection in X-ray left ventricular angiograms using multiview active appearance models and dynamic programming.

    PubMed

    Oost, Elco; Koning, Gerhard; Sonka, Milan; Oemrawsingh, Pranobe V; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2006-09-01

    This paper describes a new approach to the automated segmentation of X-ray left ventricular (LV) angiograms, based on active appearance models (AAMs) and dynamic programming. A coupling of shape and texture information between the end-diastolic (ED) and end-systolic (ES) frame was achieved by constructing a multiview AAM. Over-constraining of the model was compensated for by employing dynamic programming, integrating both intensity and motion features in the cost function. Two applications are compared: a semi-automatic method with manual model initialization, and a fully automatic algorithm. The first proved to be highly robust and accurate, demonstrating high clinical relevance. Based on experiments involving 70 patient data sets, the algorithm's success rate was 100% for ED and 99% for ES, with average unsigned border positioning errors of 0.68 mm for ED and 1.45 mm for ES. Calculated volumes were accurate and unbiased. The fully automatic algorithm, with intrinsically less user interaction, was less robust but showed high potential, mostly due to a controlled gradient descent in updating the model parameters. The success rate of the fully automatic method was 91% for ED and 83% for ES, with average unsigned border positioning errors of 0.79 mm for ED and 1.55 mm for ES.
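
    The dynamic programming step used here searches for a minimal-cost border path through a cost image built from intensity and motion features. The following is a minimal, generic sketch of that idea (not the authors' implementation): it assumes a hypothetical cost matrix whose rows are candidate border positions per image column and finds the cheapest left-to-right path with a bounded vertical step.

      import numpy as np

      def min_cost_path(cost, max_step=1):
          """Cheapest left-to-right path through a cost matrix (rows = candidate positions, cols = image columns)."""
          n_rows, n_cols = cost.shape
          acc = np.full_like(cost, np.inf, dtype=float)   # accumulated cost
          back = np.zeros((n_rows, n_cols), dtype=int)    # backpointers for path reconstruction
          acc[:, 0] = cost[:, 0]
          for j in range(1, n_cols):
              for i in range(n_rows):
                  lo, hi = max(0, i - max_step), min(n_rows, i + max_step + 1)
                  k = lo + np.argmin(acc[lo:hi, j - 1])   # best predecessor within the allowed step
                  acc[i, j] = cost[i, j] + acc[k, j - 1]
                  back[i, j] = k
          path = [int(np.argmin(acc[:, -1]))]
          for j in range(n_cols - 1, 0, -1):
              path.append(back[path[-1], j])
          return path[::-1]

      # toy usage: 5 candidate positions over 4 columns
      print(min_cost_path(np.random.rand(5, 4)))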

  6. Robust automatic measurement of 3D scanned models for the human body fat estimation.

    PubMed

    Giachetti, Andrea; Lovato, Christian; Piscitelli, Francesco; Milanese, Chiara; Zancanaro, Carlo

    2015-03-01

    In this paper, we present an automatic tool for estimating geometrical parameters from 3-D human scans, independently of pose and robustly against topological noise. It is based on an automatic segmentation of body parts exploiting curve-skeleton processing and ad hoc heuristics able to remove problems due to different acquisition poses and body types. The software is able to locate the body trunk and limbs, detect their directions, and compute parameters such as volumes, areas, girths, and lengths. Experimental results demonstrate that measurements provided by our system on 3-D body scans of normal and overweight subjects acquired in different poses are highly correlated with the body fat estimates obtained on the same subjects with dual-energy X-ray absorptiometry (DXA) scanning. In particular, maximal lengths and girths, which do not require precise localization of anatomical landmarks, show a good correlation (up to 96%) with body fat and trunk fat. Regression models based on our automatic measurements can be used to predict body fat values reasonably well.

  7. A real-time freehand ultrasound calibration system with automatic accuracy feedback and control.

    PubMed

    Chen, Thomas Kuiran; Thurston, Adrian D; Ellis, Randy E; Abolmaesumi, Purang

    2009-01-01

    This article describes a fully automatic, real-time, freehand ultrasound calibration system. The system was designed to be simple and sterilizable, intended for operating-room usage. The calibration system employed an automatic-error-retrieval and accuracy-control mechanism based on a set of ground-truth data. Extensive validations were conducted on a data set of 10,000 images in 50 independent calibration trials to thoroughly investigate the accuracy, robustness, and performance of the calibration system. On average, the calibration accuracy (measured in three-dimensional reconstruction error against a known ground truth) of all 50 trials was 0.66 mm. In addition, the calibration errors converged to submillimeter in 98% of all trials within 12.5 s on average. Overall, the calibration system was able to consistently, efficiently and robustly achieve high calibration accuracy with real-time performance.
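
    The calibration accuracy above is reported as a three-dimensional reconstruction error against a known ground truth. A minimal sketch of that metric, using hypothetical point arrays rather than the authors' system, is:

      import numpy as np

      def mean_reconstruction_error(reconstructed, ground_truth):
          """Mean Euclidean distance (e.g., in mm) between reconstructed and ground-truth 3D points, shape (N, 3)."""
          reconstructed = np.asarray(reconstructed, dtype=float)
          ground_truth = np.asarray(ground_truth, dtype=float)
          return float(np.linalg.norm(reconstructed - ground_truth, axis=1).mean())

      # toy example: three points, each displaced by about 0.66 mm
      gt = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 5.0], [0.0, 20.0, 1.0]])
      print(mean_reconstruction_error(gt + 0.66 / np.sqrt(3), gt))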

  8. A robust automatic phase correction method for signal dense spectra

    NASA Astrophysics Data System (ADS)

    Bao, Qingjia; Feng, Jiwen; Chen, Li; Chen, Fang; Liu, Zao; Jiang, Bin; Liu, Chaoyang

    2013-09-01

    A robust automatic phase correction method for Nuclear Magnetic Resonance (NMR) spectra is presented. In this work, a new strategy combining ‘coarse tuning’ with ‘fine tuning’ is introduced to correct various spectra accurately. In the ‘coarse tuning’ procedure, a new robust baseline recognition method is proposed to determine the positions of the tail ends of the peaks, and preliminary phased spectra are then obtained by minimizing an objective function based on the height differences of these tail ends. After the ‘coarse tuning’, the peaks in the preliminarily corrected spectra can be categorized into three classes: positive, negative, and distorted. Based on this classification, a custom negative penalty function for the ‘fine tuning’ step is constructed in which points belonging to negative and distorted peaks are excluded, so that genuine negative intensity is not penalized. Finally, the finely phased spectra are obtained by minimizing this penalty function. The method proves very robust: it tolerates low signal-to-noise ratios and large baseline distortions and is independent of the starting search points of the phasing parameters. Experimental results on both 1D metabonomics spectra with overcrowded peaks and 2D spectra demonstrate the high efficiency of this automatic method.
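
    As a rough, generic illustration of automatic phasing (not the authors' penalty function), zero- and first-order phase parameters can be chosen to minimize a penalty on negative intensity in the real part of the spectrum; the function names below are hypothetical.

      import numpy as np

      def phase(spectrum, ph0, ph1):
          """Apply zero-order (ph0) and linear first-order (ph1) phase, in radians, to a complex spectrum."""
          n = spectrum.size
          return spectrum * np.exp(1j * (ph0 + ph1 * np.arange(n) / n))

      def negative_penalty(spectrum, ph0, ph1):
          """Sum of squared negative real intensities after phasing (smaller is better)."""
          real = phase(spectrum, ph0, ph1).real
          return float(np.sum(np.minimum(real, 0.0) ** 2))

      def coarse_search(spectrum, steps=90):
          """Brute-force grid search over ph0/ph1; a starting point for finer optimization."""
          grid = np.linspace(-np.pi, np.pi, steps)
          return min(((negative_penalty(spectrum, p0, p1), p0, p1)
                      for p0 in grid for p1 in grid))[1:]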

  9. An automatic rat brain extraction method based on a deformable surface model.

    PubMed

    Li, Jiehua; Liu, Xiaofeng; Zhuo, Jiachen; Gullapalli, Rao P; Zara, Jason M

    2013-08-15

    The extraction of the brain from the skull in medical images is a necessary first step before image registration or segmentation. While pre-clinical MR imaging studies on small animals, such as rats, are increasing, fully automatic imaging processing techniques specific to small animal studies remain lacking. In this paper, we present an automatic rat brain extraction method, the Rat Brain Deformable model method (RBD), which adapts the popular human brain extraction tool (BET) through the incorporation of information on the brain geometry and MR image characteristics of the rat brain. The robustness of the method was demonstrated on T2-weighted MR images of 64 rats and compared with other brain extraction methods (BET, PCNN, PCNN-3D). The results demonstrate that RBD reliably extracts the rat brain with high accuracy (>92% volume overlap) and is robust against signal inhomogeneity in the images. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. AutoCellSeg: robust automatic colony forming unit (CFU)/cell analysis using adaptive image segmentation and easy-to-use post-editing techniques.

    PubMed

    Khan, Arif Ul Maula; Torelli, Angelo; Wolf, Ivo; Gretz, Norbert

    2018-05-08

    In biological assays, automated cell/colony segmentation and counting is imperative owing to huge image sets. Problems arising from drifting image acquisition conditions, background noise and high variation in colony features across experiments demand a user-friendly, adaptive and robust image processing/analysis method. We present AutoCellSeg (based on MATLAB), which implements a supervised, automatic and robust image segmentation method. AutoCellSeg utilizes multi-thresholding aided by a feedback-based watershed algorithm, taking segmentation plausibility criteria into account. It is usable in different operation modes and intuitively enables the user to select object features interactively for the supervised image segmentation method. It allows the user to correct results with a graphical interface. This publicly available tool outperforms tools like OpenCFU and CellProfiler in terms of accuracy and provides many additional useful features for end-users.
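
    AutoCellSeg itself is MATLAB-based; purely as a hedged illustration of the threshold-plus-watershed idea it builds on, a minimal Python sketch (assuming scikit-image 0.19 or newer; function and parameter choices here are hypothetical) might look as follows.

      import numpy as np
      from scipy import ndimage as ndi
      from skimage import filters, segmentation

      def count_colonies(gray):
          """Rough CFU count: Otsu threshold, then watershed on the distance transform to split touching colonies."""
          mask = gray > filters.threshold_otsu(gray)                # assumes bright colonies on a dark background
          distance = ndi.distance_transform_edt(mask)
          markers, _ = ndi.label(distance > 0.5 * distance.max())   # crude seeds; real tools pick these adaptively
          labels = segmentation.watershed(-distance, markers, mask=mask)
          return labels.max(), labels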

  11. Unified framework for automated iris segmentation using distantly acquired face images.

    PubMed

    Tan, Chun-Wei; Kumar, Ajay

    2012-09-01

    Remote human identification using iris biometrics has important civilian and surveillance applications, and its success requires the development of robust segmentation algorithms to automatically extract the iris region. This paper presents a new iris segmentation framework which can robustly segment iris images acquired using near infrared or visible illumination. The proposed approach exploits multiple higher order local pixel dependencies to robustly classify the eye region pixels into iris or noniris regions. Face and eye detection modules have been incorporated in the unified framework to automatically provide the localized eye region from the facial image for iris segmentation. We also develop robust postprocessing operations to effectively mitigate noisy pixels caused by misclassification. Experimental results presented in this paper suggest significant improvements in the average segmentation errors over previously proposed approaches, i.e., 47.5%, 34.1%, and 32.6% on the UBIRIS.v2, FRGC, and CASIA.v4 at-a-distance databases, respectively. The usefulness of the proposed approach is also ascertained from recognition experiments on three different publicly available databases.

  12. Automatic laser welding and milling with in situ inline coherent imaging.

    PubMed

    Webster, P J L; Wright, L G; Ji, Y; Galbraith, C M; Kinross, A W; Van Vlack, C; Fraser, J M

    2014-11-01

    Although new affordable high-power laser technologies enable many processing applications in science and industry, depth control remains a serious technical challenge. In this Letter we show that inline coherent imaging (ICI), with line rates up to 312 kHz and microsecond-duration capture times, is capable of directly measuring laser penetration depth, in a process as violent as kW-class keyhole welding. We exploit ICI's high speed, high dynamic range, and robustness to interference from other optical sources to achieve automatic, adaptive control of laser welding, as well as ablation, achieving 3D micron-scale sculpting in vastly different heterogeneous biological materials.

  13. Fully automatic segmentation of femurs with medullary canal definition in high and in low resolution CT scans.

    PubMed

    Almeida, Diogo F; Ruben, Rui B; Folgado, João; Fernandes, Paulo R; Audenaert, Emmanuel; Verhegghe, Benedict; De Beule, Matthieu

    2016-12-01

    Femur segmentation can be an important tool in orthopedic surgical planning. However, in order to overcome the need for an experienced user with extensive knowledge of the techniques, segmentation should be fully automatic. In this paper a new fully automatic femur segmentation method for CT images is presented. This method is also able to define the medullary canal automatically and performs well even in low resolution CT scans. Fully automatic femoral segmentation was performed by adapting a template mesh of the femoral volume to the medical images. To achieve this, an adaptation of the active shape model (ASM) technique based on the statistical shape model (SSM) and local appearance model (LAM) of the femur, with a novel initialization method, was used to drive the template mesh deformation to fit the in-image femoral shape in a time-effective approach. With the proposed method a 98% convergence rate was achieved. For the high resolution CT image group the average error is less than 1 mm. For the low resolution image group the results are also accurate, with an average error of less than 1.5 mm. The proposed segmentation pipeline is accurate, robust and completely user free. The method is robust to patient orientation, image artifacts and poorly defined edges. The results excelled even in CT images with a significant slice thickness, i.e., above 5 mm. Medullary canal segmentation increases the geometric information that can be used in orthopedic surgical planning or in finite element analysis. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.

  14. Automatic Correction Algorithm of Hydrology Feature Attribute in National Geographic Census

    NASA Astrophysics Data System (ADS)

    Li, C.; Guo, P.; Liu, X.

    2017-09-01

    Some attributes of hydrologic feature data in the national geographic census are not clear, and the current solution to this problem is manual filling, which is inefficient and error-prone. This paper therefore proposes an automatic correction algorithm for hydrologic feature attributes. Based on an analysis of structural characteristics and topological relations, we put forward three basic correction principles: network proximity, structural robustness and topological ductility. Based on the WJ-III map workstation, we realize the automatic correction of hydrologic features. Finally, practical data are used to validate the method. The results show that our method is highly reasonable and efficient.

  15. PaCeQuant: A Tool for High-Throughput Quantification of Pavement Cell Shape Characteristics

    PubMed Central

    Poeschl, Yvonne; Plötner, Romina

    2017-01-01

    Pavement cells (PCs) are the most frequently occurring cell type in the leaf epidermis and play important roles in leaf growth and function. In many plant species, PCs form highly complex jigsaw-puzzle-shaped cells with interlocking lobes. Understanding of their development is of high interest for plant science research because of their importance for leaf growth and hence for plant fitness and crop yield. Studies of PC development, however, are limited, because robust methods are lacking that enable automatic segmentation and quantification of PC shape parameters suitable to reflect their cellular complexity. Here, we present our new ImageJ-based tool, PaCeQuant, which provides a fully automatic image analysis workflow for PC shape quantification. PaCeQuant automatically detects cell boundaries of PCs from confocal input images and enables manual correction of automatic segmentation results or direct import of manually segmented cells. PaCeQuant simultaneously extracts 27 shape features that include global, contour-based, skeleton-based, and PC-specific object descriptors. In addition, we included a method for classification and analysis of lobes at two-cell junctions and three-cell junctions, respectively. We provide an R script for graphical visualization and statistical analysis. We validated PaCeQuant by extensive comparative analysis to manual segmentation and existing quantification tools and demonstrated its usability to analyze PC shape characteristics during development and between different genotypes. PaCeQuant thus provides a platform for robust, efficient, and reproducible quantitative analysis of PC shape characteristics that can easily be applied to study PC development in large data sets. PMID:28931626
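
    To give a feel for the kind of global and contour-based descriptors mentioned above, here is a minimal scikit-image sketch (not PaCeQuant itself, which is an ImageJ plugin) that computes a few simple shape features from a labeled cell mask; the function name is hypothetical.

      import numpy as np
      from skimage import measure

      def basic_shape_features(label_image):
          """Per-cell area, perimeter, solidity and circularity from a labeled segmentation mask."""
          feats = []
          for region in measure.regionprops(label_image):
              circularity = 4.0 * np.pi * region.area / (region.perimeter ** 2 + 1e-12)
              feats.append({"label": region.label,
                            "area": region.area,
                            "perimeter": region.perimeter,
                            "solidity": region.solidity,
                            "circularity": circularity})
          return feats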

  16. An automated, high-throughput plant phenotyping system using machine learning-based plant segmentation and image analysis.

    PubMed

    Lee, Unseok; Chang, Sungyul; Putra, Gian Anantrio; Kim, Hyoungseok; Kim, Dong Hwan

    2018-01-01

    A high-throughput plant phenotyping system automatically observes and grows many plant samples. Many plant sample images are acquired by the system to determine the characteristics of the plants (populations). Stable image acquisition and processing is very important to accurately determine the characteristics. However, hardware for acquiring plant images rapidly and stably, while minimizing plant stress, is lacking. Moreover, most software cannot adequately handle large-scale plant imaging. To address these problems, we developed a new, automated, high-throughput plant phenotyping system using simple and robust hardware, and an automated plant-imaging-analysis pipeline consisting of machine-learning-based plant segmentation. Our hardware acquires images reliably and quickly and minimizes plant stress. Furthermore, the images are processed automatically. In particular, large-scale plant-image datasets can be segmented precisely using a classifier developed using a superpixel-based machine-learning algorithm (Random Forest), and variations in plant parameters (such as area) over time can be assessed using the segmented images. We performed comparative evaluations to identify an appropriate learning algorithm for our proposed system, and tested three robust learning algorithms. We developed not only an automatic analysis pipeline but also a convenient means of plant-growth analysis that provides a learning data interface and visualization of plant growth trends. Thus, our system allows end-users such as plant biologists to analyze plant growth via large-scale plant image data easily.
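
    To make the superpixel-plus-Random-Forest idea concrete, here is a heavily simplified, hypothetical sketch using mean-color features only (the actual system uses richer features and its own pipeline); it assumes RGB images with channels last and binary plant masks as training labels.

      import numpy as np
      from skimage.segmentation import slic
      from sklearn.ensemble import RandomForestClassifier

      def superpixel_features(image, n_segments=400):
          """Split an RGB image into superpixels and describe each by its mean color."""
          segments = slic(image, n_segments=n_segments, compactness=10)
          ids = np.unique(segments)
          feats = np.array([image[segments == i].mean(axis=0) for i in ids])
          return segments, ids, feats

      def train_plant_classifier(images, masks):
          """Train a Random Forest that labels each superpixel as plant (1) or background (0)."""
          X, y = [], []
          for image, mask in zip(images, masks):
              segments, ids, feats = superpixel_features(image)
              X.append(feats)
              y.append([int(mask[segments == i].mean() > 0.5) for i in ids])
          clf = RandomForestClassifier(n_estimators=100, random_state=0)
          clf.fit(np.vstack(X), np.concatenate(y))
          return clf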

  17. Robust extraction of the aorta and pulmonary artery from 3D MDCT image data

    NASA Astrophysics Data System (ADS)

    Taeprasartsit, Pinyo; Higgins, William E.

    2010-03-01

    Accurate definition of the aorta and pulmonary artery from three-dimensional (3D) multi-detector CT (MDCT) images is important for pulmonary applications. This work presents robust methods for defining the aorta and pulmonary artery in the central chest. The methods work on both contrast enhanced and no-contrast 3D MDCT image data. The automatic methods use a common approach employing model fitting and selection and adaptive refinement. During the occasional event that more precise vascular extraction is desired or the method fails, we also have an alternate semi-automatic fail-safe method. The semi-automatic method extracts the vasculature by extending the medial axes into a user-guided direction. A ground-truth study over a series of 40 human 3D MDCT images demonstrates the efficacy, accuracy, robustness, and efficiency of the methods.

  18. Automatic detection of larynx cancer from contrast-enhanced magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Doshi, Trushali; Soraghan, John; Grose, Derek; MacKenzie, Kenneth; Petropoulakis, Lykourgos

    2015-03-01

    Detection of larynx cancer from medical imaging is important for quantification and for the definition of target volumes in radiotherapy treatment planning (RTP). Magnetic resonance imaging (MRI) is being increasingly used in RTP due to its high resolution and excellent soft tissue contrast. Manually detecting larynx cancer from sequential MRI is time consuming and subjective. The large diversity of cancers in terms of geometry and non-distinct boundaries, combined with the presence of normal anatomical regions close to the cancer regions, necessitates the development of automatic and robust algorithms for this task. A new automatic algorithm for the detection of larynx cancer from 2D gadolinium-enhanced T1-weighted (T1+Gd) MRI to assist clinicians in RTP is presented. The algorithm employs edge detection using spatial neighborhood information of pixels and incorporates this information in a fuzzy c-means clustering process to robustly separate different tissue types. Furthermore, it utilizes information about the expected cancer location for cancer region labeling. Comparison of this automatic detection system with manual clinical detection on real T1+Gd axial MRI slices of 2 patients (24 MRI slices) with visible larynx cancer yields an average Dice similarity coefficient of 0.78+/-0.04 and an average root mean square error of 1.82+/-0.28 mm. Preliminary results show that this fully automatic system can assist clinicians in RTP by obtaining quantifiable, non-subjective and repeatable detection results in a time-efficient and unbiased fashion.

  19. PaCeQuant: A Tool for High-Throughput Quantification of Pavement Cell Shape Characteristics.

    PubMed

    Möller, Birgit; Poeschl, Yvonne; Plötner, Romina; Bürstenbinder, Katharina

    2017-11-01

    Pavement cells (PCs) are the most frequently occurring cell type in the leaf epidermis and play important roles in leaf growth and function. In many plant species, PCs form highly complex jigsaw-puzzle-shaped cells with interlocking lobes. Understanding of their development is of high interest for plant science research because of their importance for leaf growth and hence for plant fitness and crop yield. Studies of PC development, however, are limited, because robust methods are lacking that enable automatic segmentation and quantification of PC shape parameters suitable to reflect their cellular complexity. Here, we present our new ImageJ-based tool, PaCeQuant, which provides a fully automatic image analysis workflow for PC shape quantification. PaCeQuant automatically detects cell boundaries of PCs from confocal input images and enables manual correction of automatic segmentation results or direct import of manually segmented cells. PaCeQuant simultaneously extracts 27 shape features that include global, contour-based, skeleton-based, and PC-specific object descriptors. In addition, we included a method for classification and analysis of lobes at two-cell junctions and three-cell junctions, respectively. We provide an R script for graphical visualization and statistical analysis. We validated PaCeQuant by extensive comparative analysis to manual segmentation and existing quantification tools and demonstrated its usability to analyze PC shape characteristics during development and between different genotypes. PaCeQuant thus provides a platform for robust, efficient, and reproducible quantitative analysis of PC shape characteristics that can easily be applied to study PC development in large data sets. © 2017 American Society of Plant Biologists. All Rights Reserved.

  20. Integrating hidden Markov model and PRAAT: a toolbox for robust automatic speech transcription

    NASA Astrophysics Data System (ADS)

    Kabir, A.; Barker, J.; Giurgiu, M.

    2010-09-01

    An automatic time-aligned phone transcription toolbox for English speech corpora has been developed. The toolbox is particularly useful for generating robust automatic transcriptions and can produce phone-level transcriptions using speaker-independent as well as speaker-dependent models without manual intervention. The system is based on the standard Hidden Markov Model (HMM) approach and was successfully tested on a large audiovisual speech corpus, namely the GRID corpus. One of the most powerful features of the toolbox is its increased flexibility in speech processing: the speech community can import the automatic transcription generated by the HMM Toolkit (HTK) into a popular transcription software package, PRAAT, and vice versa. The toolbox has been evaluated through statistical analysis on GRID data, which shows that the automatic transcription deviates by an average of 20 ms from the manual transcription.
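
    The 20 ms figure above is an average time deviation between automatic and manual phone boundaries. A trivial sketch of such a comparison, using hypothetical boundary lists given in seconds, is:

      def mean_boundary_deviation_ms(auto_boundaries, manual_boundaries):
          """Mean absolute deviation, in milliseconds, between paired phone boundary times given in seconds."""
          assert len(auto_boundaries) == len(manual_boundaries), "compare like-for-like boundaries"
          diffs = [abs(a - m) for a, m in zip(auto_boundaries, manual_boundaries)]
          return 1000.0 * sum(diffs) / len(diffs)

      # toy example: three boundaries with an average deviation of 20 ms
      print(mean_boundary_deviation_ms([0.10, 0.52, 0.98], [0.12, 0.50, 1.00]))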

  21. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines.

    PubMed

    Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus

    2016-01-01

    The parametrization of automatic image processing routines is time-consuming if a lot of image processing parameters are involved. An expert can tune parameters sequentially to get desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels in an image are present or images vary in their characteristics due to different acquisition conditions. Parameters are required to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is proposed to be evaluated by a benchmark data set that contains challenging image distortions in an increasing fashion. This promptly enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts.
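
    As a toy illustration of feedback-based parameter adaptation (not the authors' framework; the function name and quality criterion here are hypothetical), a segmentation parameter such as a global threshold can be nudged until the output satisfies a criterion, here an expected object count on an image normalized to [0, 1].

      import numpy as np
      from scipy import ndimage as ndi

      def adapt_threshold(image, expected_count, threshold=0.5, step=0.05, max_iter=50):
          """Adjust a global threshold until the number of segmented objects matches the expectation."""
          for _ in range(max_iter):
              labeled, count = ndi.label(image > threshold)
              if count == expected_count:
                  break
              # too many objects (often noise) -> raise the threshold; too few -> lower it
              threshold += step if count > expected_count else -step
              threshold = float(np.clip(threshold, 0.0, 1.0))
          return threshold, labeled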

  22. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines

    PubMed Central

    Mikut, Ralf; Reischl, Markus

    2016-01-01

    The parametrization of automatic image processing routines is time-consuming if a lot of image processing parameters are involved. An expert can tune parameters sequentially to get desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels in an image are present or images vary in their characteristics due to different acquisition conditions. Parameters are required to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is proposed to be evaluated by a benchmark data set that contains challenging image distortions in an increasing fashion. This promptly enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts. PMID:27764213

  23. Machine Learning Algorithms for Automatic Classification of Marmoset Vocalizations

    PubMed Central

    Ribeiro, Sidarta; Pereira, Danillo R.; Papa, João P.; de Albuquerque, Victor Hugo C.

    2016-01-01

    Automatic classification of vocalization type could potentially become a useful tool for the acoustic monitoring of captive colonies of highly vocal primates. However, for classification to be useful in practice, a reliable algorithm that can be successfully trained on small datasets is necessary. In this work, we consider seven different classification algorithms with the goal of finding a robust classifier that can be successfully trained on small datasets. We found good classification performance (accuracy > 0.83 and F1-score > 0.84) using the Optimum Path Forest classifier. The dataset and algorithms are made publicly available. PMID:27654941

  24. Automatic Image Registration of Multimodal Remotely Sensed Data with Global Shearlet Features

    NASA Technical Reports Server (NTRS)

    Murphy, James M.; Le Moigne, Jacqueline; Harding, David J.

    2015-01-01

    Automatic image registration is the process of aligning two or more images of approximately the same scene with minimal human assistance. Wavelet-based automatic registration methods are standard, but sometimes are not robust to the choice of initial conditions. That is, if the images to be registered are too far apart relative to the initial guess of the algorithm, the registration algorithm does not converge or has poor accuracy, and is thus not robust. These problems occur because wavelet techniques primarily identify isotropic textural features and are less effective at identifying linear and curvilinear edge features. We integrate the recently developed mathematical construction of shearlets, which is more effective at identifying sparse anisotropic edges, with an existing automatic wavelet-based registration algorithm. Our shearlet features algorithm produces more distinct features than wavelet features algorithms; the separation of edges from textures is even stronger than with wavelets. Our algorithm computes shearlet and wavelet features for the images to be registered, then performs least squares minimization on these features to compute a registration transformation. Our algorithm is two-staged and multiresolution in nature. First, a cascade of shearlet features is used to provide a robust, though approximate, registration. This is then refined by registering with a cascade of wavelet features. Experiments across a variety of image classes show an improved robustness to initial conditions, when compared to wavelet features alone.
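
    The least-squares minimization on matched features mentioned above can be illustrated generically: given matched feature coordinates from the two images, a 2D affine transform can be fit in closed form. This is a sketch under the assumption of clean correspondences, not the authors' multiresolution shearlet/wavelet cascade.

      import numpy as np

      def fit_affine(src, dst):
          """Least-squares 2D affine transform A (2x3) mapping src points to dst points, both (N, 2) arrays."""
          src = np.asarray(src, dtype=float)
          dst = np.asarray(dst, dtype=float)
          ones = np.ones((src.shape[0], 1))
          X = np.hstack([src, ones])                    # (N, 3) design matrix
          A, *_ = np.linalg.lstsq(X, dst, rcond=None)   # (3, 2) solution
          return A.T                                    # (2, 3): [linear part | translation]

      def apply_affine(A, pts):
          pts = np.asarray(pts, dtype=float)
          return pts @ A[:, :2].T + A[:, 2]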

  25. Automatic Image Registration of Multi-Modal Remotely Sensed Data with Global Shearlet Features

    PubMed Central

    Murphy, James M.; Le Moigne, Jacqueline; Harding, David J.

    2017-01-01

    Automatic image registration is the process of aligning two or more images of approximately the same scene with minimal human assistance. Wavelet-based automatic registration methods are standard, but sometimes are not robust to the choice of initial conditions. That is, if the images to be registered are too far apart relative to the initial guess of the algorithm, the registration algorithm does not converge or has poor accuracy, and is thus not robust. These problems occur because wavelet techniques primarily identify isotropic textural features and are less effective at identifying linear and curvilinear edge features. We integrate the recently developed mathematical construction of shearlets, which is more effective at identifying sparse anisotropic edges, with an existing automatic wavelet-based registration algorithm. Our shearlet features algorithm produces more distinct features than wavelet features algorithms; the separation of edges from textures is even stronger than with wavelets. Our algorithm computes shearlet and wavelet features for the images to be registered, then performs least squares minimization on these features to compute a registration transformation. Our algorithm is two-staged and multiresolution in nature. First, a cascade of shearlet features is used to provide a robust, though approximate, registration. This is then refined by registering with a cascade of wavelet features. Experiments across a variety of image classes show an improved robustness to initial conditions, when compared to wavelet features alone. PMID:29123329

  26. Fast automatic 3D liver segmentation based on a three-level AdaBoost-guided active shape model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Baochun; Huang, Cheng; Zhou, Shoujun

    Purpose: A robust, automatic, and rapid method for liver delineation is urgently needed for the diagnosis and treatment of liver disorders. Until now, the high variability in liver shape, local image artifacts, and the presence of tumors have complicated the development of automatic 3D liver segmentation. In this study, an automatic three-level AdaBoost-guided active shape model (ASM) is proposed for the segmentation of the liver based on enhanced computed tomography images in a robust and fast manner, with an emphasis on the detection of tumors. Methods: The AdaBoost voxel classifier and AdaBoost profile classifier were used to automatically guide three-level active shape modeling. In the first level of model initialization, fast automatic liver segmentation by an AdaBoost voxel classifier method is proposed. A shape model is then initialized by registration with the resulting rough segmentation. In the second level of active shape model fitting, a prior model based on the two-class AdaBoost profile classifier is proposed to identify the optimal surface. In the third level, a deformable simplex mesh with profile probability and curvature constraint as the external force is used to refine the shape fitting result. In total, three registration methods—3D similarity registration, probability atlas B-spline, and their proposed deformable closest point registration—are used to establish shape correspondence. Results: The proposed method was evaluated using three public challenge datasets: 3Dircadb1, SLIVER07, and Visceral Anatomy3. The results showed that our approach performs with promising efficiency, with an average of 35 s, and accuracy, with an average Dice similarity coefficient (DSC) of 0.94 ± 0.02, 0.96 ± 0.01, and 0.94 ± 0.02 for the 3Dircadb1, SLIVER07, and Anatomy3 training datasets, respectively. The DSC of the SLIVER07 testing and Anatomy3 unseen testing datasets were 0.964 and 0.933, respectively. Conclusions: The proposed automatic approach achieves robust, accurate, and fast liver segmentation for 3D CTce datasets. The AdaBoost voxel classifier can detect liver area quickly without errors and provides sufficient liver shape information for model initialization. The AdaBoost profile classifier achieves sufficient accuracy and greatly decreases segmentation time. These results show that the proposed segmentation method achieves a level of accuracy comparable to that of state-of-the-art automatic methods based on ASM.

  27. Fast automatic 3D liver segmentation based on a three-level AdaBoost-guided active shape model.

    PubMed

    He, Baochun; Huang, Cheng; Sharp, Gregory; Zhou, Shoujun; Hu, Qingmao; Fang, Chihua; Fan, Yingfang; Jia, Fucang

    2016-05-01

    A robust, automatic, and rapid method for liver delineation is urgently needed for the diagnosis and treatment of liver disorders. Until now, the high variability in liver shape, local image artifacts, and the presence of tumors have complicated the development of automatic 3D liver segmentation. In this study, an automatic three-level AdaBoost-guided active shape model (ASM) is proposed for the segmentation of the liver based on enhanced computed tomography images in a robust and fast manner, with an emphasis on the detection of tumors. The AdaBoost voxel classifier and AdaBoost profile classifier were used to automatically guide three-level active shape modeling. In the first level of model initialization, fast automatic liver segmentation by an AdaBoost voxel classifier method is proposed. A shape model is then initialized by registration with the resulting rough segmentation. In the second level of active shape model fitting, a prior model based on the two-class AdaBoost profile classifier is proposed to identify the optimal surface. In the third level, a deformable simplex mesh with profile probability and curvature constraint as the external force is used to refine the shape fitting result. In total, three registration methods-3D similarity registration, probability atlas B-spline, and their proposed deformable closest point registration-are used to establish shape correspondence. The proposed method was evaluated using three public challenge datasets: 3Dircadb1, SLIVER07, and Visceral Anatomy3. The results showed that our approach performs with promising efficiency, with an average of 35 s, and accuracy, with an average Dice similarity coefficient (DSC) of 0.94 ± 0.02, 0.96 ± 0.01, and 0.94 ± 0.02 for the 3Dircadb1, SLIVER07, and Anatomy3 training datasets, respectively. The DSC of the SLIVER07 testing and Anatomy3 unseen testing datasets were 0.964 and 0.933, respectively. The proposed automatic approach achieves robust, accurate, and fast liver segmentation for 3D CTce datasets. The AdaBoost voxel classifier can detect liver area quickly without errors and provides sufficient liver shape information for model initialization. The AdaBoost profile classifier achieves sufficient accuracy and greatly decreases segmentation time. These results show that the proposed segmentation method achieves a level of accuracy comparable to that of state-of-the-art automatic methods based on ASM.
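
    The Dice similarity coefficient (DSC) reported in these evaluations is straightforward to compute from two binary masks; the following is a minimal, generic sketch not tied to this particular pipeline.

      import numpy as np

      def dice_coefficient(mask_a, mask_b):
          """DSC = 2|A and B| / (|A| + |B|) for two binary segmentation masks of the same shape."""
          a = np.asarray(mask_a, dtype=bool)
          b = np.asarray(mask_b, dtype=bool)
          denom = a.sum() + b.sum()
          return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom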

  28. Robust and Effective Component-based Banknote Recognition for the Blind

    PubMed Central

    Hasanuzzaman, Faiz M.; Yang, Xiaodong; Tian, YingLi

    2012-01-01

    We develop a novel camera-based computer vision technology to automatically recognize banknotes for assisting visually impaired people. Our banknote recognition system is robust and effective with the following features: 1) high accuracy: high true recognition rate and low false recognition rate, 2) robustness: handles a variety of currency designs and bills in various conditions, 3) high efficiency: recognizes banknotes quickly, and 4) ease of use: helps blind users to aim the target for image capture. To make the system robust to a variety of conditions including occlusion, rotation, scaling, cluttered background, illumination change, viewpoint variation, and worn or wrinkled bills, we propose a component-based framework using Speeded Up Robust Features (SURF). Furthermore, we employ the spatial relationship of matched SURF features to detect whether there is a bill in the camera view. This process largely alleviates false recognition and can guide the user to correctly aim at the bill to be recognized. The robustness and generalizability of the proposed system are evaluated on a dataset including both positive images (with U.S. banknotes) and negative images (no U.S. banknotes) collected under a variety of conditions. The proposed algorithm achieves a 100% true recognition rate and a 0% false recognition rate. Our banknote recognition system has also been tested by blind users. PMID:22661884

  29. A novel automatic quantification method for high-content screening analysis of DNA double strand-break response.

    PubMed

    Feng, Jingwen; Lin, Jie; Zhang, Pengquan; Yang, Songnan; Sa, Yu; Feng, Yuanming

    2017-08-29

    High-content screening is commonly used in studies of the DNA damage response. The double-strand break (DSB) is one of the most harmful types of DNA damage lesions. The conventional method used to quantify DSBs is γH2AX foci counting, which requires manual adjustment and preset parameters and is usually regarded as imprecise, time-consuming, poorly reproducible, and inaccurate. Therefore, a robust automatic alternative method is highly desired. In this manuscript, we present a new method for quantifying DSBs which involves automatic image cropping, automatic foci-segmentation and fluorescent intensity measurement. Furthermore, an additional function was added for standardizing the measurement of DSB response inhibition based on co-localization analysis. We tested the method with a well-known inhibitor of DSB response. The new method requires only one preset parameter, which effectively minimizes operator-dependent variations. Compared with conventional methods, the new method detected a higher percentage difference of foci formation between different cells, which can improve measurement accuracy. The effects of the inhibitor on DSB response were successfully quantified with the new method (p = 0.000). The advantages of this method in terms of reliability, automation and simplicity show its potential in quantitative fluorescence imaging studies and high-content screening for compounds and factors involved in DSB response.

  30. A robust and hierarchical approach for the automatic co-registration of intensity and visible images

    NASA Astrophysics Data System (ADS)

    González-Aguilera, Diego; Rodríguez-Gonzálvez, Pablo; Hernández-López, David; Luis Lerma, José

    2012-09-01

    This paper presents a new robust approach to integrate intensity and visible images which have been acquired with a terrestrial laser scanner and a calibrated digital camera, respectively. In particular, an automatic and hierarchical method for the co-registration of both sensors is developed. The approach integrates several existing solutions to improve the performance of the co-registration between range-based and visible images: the Affine Scale-Invariant Feature Transform (A-SIFT), the epipolar geometry, the collinearity equations, the Groebner basis solution and the RANdom SAmple Consensus (RANSAC), integrating a voting scheme. The approach presented herein improves the existing co-registration approaches in automation, robustness, reliability and accuracy.
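
    RANSAC, one of the ingredients named above, can be sketched generically for estimating a 2D translation from putative (and partly wrong) point correspondences. The implementation below is a toy illustration with hypothetical names, not the authors' voting scheme.

      import numpy as np

      def ransac_translation(src, dst, n_iter=500, tol=2.0, rng=None):
          """Estimate a 2D translation dst ~ src + t despite outlier matches; returns (t, inlier mask)."""
          rng = np.random.default_rng(rng)
          src, dst = np.asarray(src, float), np.asarray(dst, float)
          best_t, best_inliers = np.zeros(2), np.zeros(len(src), dtype=bool)
          for _ in range(n_iter):
              i = rng.integers(len(src))                      # minimal sample: one correspondence
              t = dst[i] - src[i]
              inliers = np.linalg.norm(dst - (src + t), axis=1) < tol
              if inliers.sum() > best_inliers.sum():
                  best_t, best_inliers = t, inliers
          best_t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)  # refit on the inlier set
          return best_t, best_inliers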

  31. Enhanced video indirect ophthalmoscopy (VIO) via robust mosaicing.

    PubMed

    Estrada, Rolando; Tomasi, Carlo; Cabrera, Michelle T; Wallace, David K; Freedman, Sharon F; Farsiu, Sina

    2011-10-01

    Indirect ophthalmoscopy (IO) is the standard of care for evaluation of the neonatal retina. When recorded on video from a head-mounted camera, IO images have low quality and narrow Field of View (FOV). We present an image fusion methodology for converting a video IO recording into a single, high quality, wide-FOV mosaic that seamlessly blends the best frames in the video. To this end, we have developed fast and robust algorithms for automatic evaluation of video quality, artifact detection and removal, vessel mapping, registration, and multi-frame image fusion. Our experiments show the effectiveness of the proposed methods.

  32. Intra-operative adjustment of standard planes in C-arm CT image data.

    PubMed

    Brehler, Michael; Görres, Joseph; Franke, Jochen; Barth, Karl; Vetter, Sven Y; Grützner, Paul A; Meinzer, Hans-Peter; Wolf, Ivo; Nabers, Diana

    2016-03-01

    With the help of an intra-operative mobile C-arm CT, medical interventions can be verified and corrected, avoiding the need for a post-operative CT and a second intervention. Exact adjustment of standard plane positions is necessary for the best possible assessment of the anatomical regions of interest, but the mobility of the C-arm makes a time-consuming manual adjustment necessary. In this article, we present an automatic plane adjustment using the example of calcaneal fractures. We developed two feature detection methods (2D and pseudo-3D) based on SURF key points and also transferred the SURF approach to 3D. Combined with an atlas-based registration, our algorithm adjusts the standard planes of the calcaneal C-arm images automatically. The robustness of the algorithms is evaluated using a clinical data set. Additionally, we tested the algorithm's performance for two registration approaches, two resolutions of C-arm images and two methods for metal artifact reduction. For the feature extraction, the novel 3D-SURF approach performs best. As expected, a higher resolution ([Formula: see text] voxel) also leads to more robust feature points and is therefore slightly better than the [Formula: see text] voxel images (standard setting of the device). Our comparison of two different artifact reduction methods and the complete removal of metal in the images shows that our approach is highly robust against artifacts and against the number and position of metal implants. With our fast algorithmic processing pipeline, we have taken the first steps toward a fully automatic assistance system for the assessment of C-arm CT images.

  33. A Robust Automatic Ionospheric O/X Mode Separation Technique for Vertical Incidence Sounders

    NASA Astrophysics Data System (ADS)

    Harris, T. J.; Pederick, L. H.

    2017-12-01

    The sounding of the ionosphere by a vertical incidence sounder (VIS) is the oldest and most common technique for determining the state of the ionosphere. The automatic extraction of relevant ionospheric parameters from the ionogram image, referred to as scaling, is important for the effective utilization of data from large ionospheric sounder networks. Due to the Earth's magnetic field, the ionosphere is birefringent at radio frequencies, so a VIS will typically see two distinct returns for each frequency. For the automatic scaling of ionograms, it is highly desirable to be able to separate the two modes. Defence Science and Technology Group has developed a new VIS solution which is based on direct digital receiver technology and includes an algorithm to separate the O and X modes. This algorithm can provide high-quality separation even in difficult ionospheric conditions. In this paper we describe the algorithm and demonstrate its consistency and reliability in successfully separating 99.4% of the ionograms during a 27 day experimental campaign under sometimes demanding ionospheric conditions.

  34. Fully automatic adjoints: a robust and efficient mechanism for generating adjoint ocean models

    NASA Astrophysics Data System (ADS)

    Ham, D. A.; Farrell, P. E.; Funke, S. W.; Rognes, M. E.

    2012-04-01

    The problem of generating and maintaining adjoint models is sufficiently difficult that typically only the most advanced and well-resourced community ocean models achieve it. There are two current technologies which each suffer from their own limitations. Algorithmic differentiation, also called automatic differentiation, is employed by models such as the MITGCM [2] and the Alfred Wegener Institute model FESOM [3]. This technique is very difficult to apply to existing code, and requires a major initial investment to prepare the code for automatic adjoint generation. AD tools may also have difficulty with code employing modern software constructs such as derived data types. An alternative is to formulate the adjoint differential equation and to discretise this separately. This approach, known as the continuous adjoint and employed in ROMS [4], has the disadvantage that two different model code bases must be maintained and manually kept synchronised as the model develops. The discretisation of the continuous adjoint is not automatically consistent with that of the forward model, producing an additional source of error. The alternative presented here is to formulate the flow model in the high level language UFL (Unified Form Language) and to automatically generate the model using the software of the FEniCS project. In this approach it is the high level code specification which is differentiated, a task very similar to the formulation of the continuous adjoint [5]. However since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy is managed by libadjoint [1]. In contrast to the conventional algorithmic differentiation description of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction of the simulation process as a sequence of discrete equations which are assembled and solved. It is the coupling of the respective abstractions employed by libadjoint and the FEniCS project which produces the adjoint model automatically, without further intervention from the model developer. This presentation will demonstrate this new technology through linear and non-linear shallow water test cases. The exceptionally simple model syntax will be highlighted and the correctness of the resulting adjoint simulations will be demonstrated using rigorous convergence tests.
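
    The high-level-specification workflow described above is closely related to the dolfin-adjoint software. Purely as a hedged sketch of the style of code involved (assuming a FEniCS installation with the fenics_adjoint/pyadjoint interface, whose exact import names and API vary between versions), deriving an adjoint-based gradient of a functional of a Poisson solve might look like this:

      # Hedged sketch only: requires FEniCS plus dolfin-adjoint (pyadjoint); API names vary by version.
      from fenics import *
      from fenics_adjoint import *

      mesh = UnitSquareMesh(32, 32)
      V = FunctionSpace(mesh, "CG", 1)

      f = interpolate(Constant(1.0), V)    # control (e.g., a source term)
      u = Function(V)
      v = TestFunction(V)

      F = inner(grad(u), grad(v)) * dx - f * v * dx      # weak form written in UFL
      bc = DirichletBC(V, Constant(0.0), "on_boundary")
      solve(F == 0, u, bc)                               # forward solve, recorded on the tape

      J = assemble(0.5 * u * u * dx)                     # scalar functional of the solution
      dJdf = compute_gradient(J, Control(f))             # adjoint-based gradient, generated automatically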

  35. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.

    PubMed

    Wang, Jun Yi; Ngo, Michael M; Hessl, David; Hagerman, Randi J; Rivera, Susan M

    2016-01-01

    Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundary. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling size of training set, differences in head coil usage, and amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans only decreased the Dice coefficient ≤0.002 for the cerebellum and ≤ 0.005 for the brainstem compared to the use of training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by <0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well.

  36. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem

    PubMed Central

    Wang, Jun Yi; Ngo, Michael M.; Hessl, David; Hagerman, Randi J.; Rivera, Susan M.

    2016-01-01

    Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundary. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling size of training set, differences in head coil usage, and amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer’s segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans only decreased the Dice coefficient ≤0.002 for the cerebellum and ≤ 0.005 for the brainstem compared to the use of training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by <0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well. PMID:27213683

  37. Automatic speech recognition using a predictive echo state network classifier.

    PubMed

    Skowronski, Mark D; Harris, John G

    2007-04-01

    We have combined an echo state network (ESN) with a competitive state machine framework to create a classification engine called the predictive ESN classifier. We derive the expressions for training the predictive ESN classifier and show that the model was significantly more robust to noise than a hidden Markov model in noisy speech classification experiments, by 8+/-1 dB of signal-to-noise ratio. The simple training algorithm and noise robustness of the predictive ESN classifier make it an attractive classification engine for automatic speech recognition.
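
    For readers unfamiliar with echo state networks, a minimal reservoir-update sketch is shown below (random weights scaled to a chosen spectral radius; the readout and the authors' predictive-classifier training are omitted, and the class name is hypothetical).

      import numpy as np

      class EchoStateReservoir:
          """Minimal ESN reservoir: x(t+1) = tanh(W_in u(t+1) + W x(t)); readout/training omitted."""

          def __init__(self, n_inputs, n_reservoir=200, spectral_radius=0.9, seed=0):
              rng = np.random.default_rng(seed)
              self.W_in = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_inputs))
              W = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_reservoir))
              W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))   # heuristic echo-state scaling
              self.W = W
              self.state = np.zeros(n_reservoir)

          def step(self, u):
              self.state = np.tanh(self.W_in @ np.asarray(u) + self.W @ self.state)
              return self.state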

  18. Fully automatic detection and segmentation of abdominal aortic thrombus in post-operative CTA images using Deep Convolutional Neural Networks.

    PubMed

    López-Linares, Karen; Aranjuelo, Nerea; Kabongo, Luis; Maclair, Gregory; Lete, Nerea; Ceresa, Mario; García-Familiar, Ainhoa; Macía, Iván; González Ballester, Miguel A

    2018-05-01

    Computerized Tomography Angiography (CTA) based follow-up of Abdominal Aortic Aneurysms (AAA) treated with Endovascular Aneurysm Repair (EVAR) is essential to evaluate the progress of the patient and detect complications. In this context, accurate quantification of post-operative thrombus volume is required. However, a proper evaluation is hindered by the lack of automatic, robust and reproducible thrombus segmentation algorithms. We propose a new fully automatic approach based on Deep Convolutional Neural Networks (DCNN) for robust and reproducible thrombus region of interest detection and subsequent fine thrombus segmentation. The DetecNet detection network is adapted to perform region of interest extraction from a complete CTA and a new segmentation network architecture, based on Fully Convolutional Networks and a Holistically-Nested Edge Detection Network, is presented. These networks are trained, validated and tested in 13 post-operative CTA volumes of different patients using a 4-fold cross-validation approach to provide more robustness to the results. Our pipeline achieves a Dice score of more than 82% for post-operative thrombus segmentation and provides a mean relative volume difference between ground truth and automatic segmentation that lies within the experienced human observer variance without the need for human intervention in most common cases. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Automatic cardiac LV segmentation in MRI using modified graph cuts with smoothness and interslice constraints.

    PubMed

    Albà, Xènia; Figueras I Ventura, Rosa M; Lekadir, Karim; Tobon-Gomez, Catalina; Hoogendoorn, Corné; Frangi, Alejandro F

    2014-12-01

    Magnetic resonance imaging (MRI), specifically late-enhanced MRI, is the standard clinical imaging protocol to assess cardiac viability. Segmentation of myocardial walls is a prerequisite for this assessment. Automatic and robust multisequence segmentation is required to support processing massive quantities of data. A generic rule-based framework to automatically segment the left ventricle myocardium is presented here. We use intensity information, and include shape and interslice smoothness constraints, providing robustness to subject- and study-specific changes. Our automatic initialization considers the geometrical and appearance properties of the left ventricle, as well as interslice information. The segmentation algorithm uses a decoupled, modified graph cut approach with control points, providing a good balance between flexibility and robustness. The method was evaluated on late-enhanced MRI images from a 20-patient in-house database, and on cine-MRI images from a 15-patient open access database, both using manually delineated contours as reference. Segmentation agreement, measured using the Dice coefficient, was 0.81±0.05 and 0.92±0.04 for late-enhanced MRI and cine-MRI, respectively. The method also compared favorably with a three-dimensional Active Shape Model approach. The experimental validation with two magnetic resonance sequences demonstrates increased accuracy and versatility. © 2013 Wiley Periodicals, Inc.

  20. Combining the AFLOW GIBBS and elastic libraries to efficiently and robustly screen thermomechanical properties of solids

    NASA Astrophysics Data System (ADS)

    Toher, Cormac; Oses, Corey; Plata, Jose J.; Hicks, David; Rose, Frisco; Levy, Ohad; de Jong, Maarten; Asta, Mark; Fornari, Marco; Buongiorno Nardelli, Marco; Curtarolo, Stefano

    2017-06-01

    Thorough characterization of the thermomechanical properties of materials requires difficult and time-consuming experiments. This severely limits the availability of data and is one of the main obstacles for the development of effective accelerated materials design strategies. The rapid screening of new potential materials requires highly integrated, sophisticated, and robust computational approaches. We tackled the challenge by developing an automated, integrated workflow with robust error-correction within the AFLOW framework which combines the newly developed "Automatic Elasticity Library" with the previously implemented GIBBS method. The first extracts the mechanical properties from automatic self-consistent stress-strain calculations, while the latter employs those mechanical properties to evaluate the thermodynamics within the Debye model. This new thermoelastic workflow is benchmarked against a set of 74 experimentally characterized systems to pinpoint a robust computational methodology for the evaluation of bulk and shear moduli, Poisson ratios, Debye temperatures, Grüneisen parameters, and thermal conductivities of a wide variety of materials. The effect of different choices of equations of state and exchange-correlation functionals is examined and the optimum combination of properties for the Leibfried-Schlömann prediction of thermal conductivity is identified, leading to better agreement with experimental results than the GIBBS-only approach. The framework has been applied to the AFLOW.org data repositories to compute the thermoelastic properties of over 3500 unique materials. The results are now available online by using an expanded version of the REST-API described in the Appendix.

  1. Fully automatic segmentation of the femur from 3D-CT images using primitive shape recognition and statistical shape models.

    PubMed

    Ben Younes, Lassad; Nakajima, Yoshikazu; Saito, Toki

    2014-03-01

    Femur segmentation is well established and widely used in computer-assisted orthopedic surgery. However, most of the robust segmentation methods such as statistical shape models (SSM) require human intervention to provide an initial position for the SSM. In this paper, we propose to overcome this problem and provide a fully automatic femur segmentation method for CT images based on primitive shape recognition and SSM. Femur segmentation in CT scans was performed using primitive shape recognition based on robust algorithms such as the Hough transform and RANdom SAmple Consensus (RANSAC). The proposed method is divided into 3 steps: (1) detection of the femoral head as a sphere and the femoral shaft as a cylinder in the SSM and the CT images, (2) rigid registration between primitives of SSM and CT image to initialize the SSM into the CT image, and (3) fitting of the SSM to the CT image edge using an affine transformation followed by a nonlinear fitting. The automated method provided good results even with a high number of outliers. The difference in segmentation error between the proposed automatic initialization method and a manual initialization method is less than 1 mm. The proposed method detects primitive shape position to initialize the SSM into the target image. Based on primitive shapes, this method overcomes the problem of inter-patient variability. Moreover, the results demonstrate that our method of primitive shape recognition can be used for 3D SSM initialization to achieve fully automatic segmentation of the femur.
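
    Detecting the femoral head as a sphere is the kind of primitive fit RANSAC handles well: repeatedly fit a sphere to a minimal sample of points and keep the model with the most inliers. A minimal Python/NumPy sketch under that reading; the four-point linear solve, threshold and iteration count are illustrative assumptions, not the paper's settings:

      import numpy as np

      def sphere_from_points(p):
          """Sphere through 4 non-coplanar points; returns (center, radius)."""
          A = np.c_[p, np.ones(4)]
          b = -(p ** 2).sum(axis=1)
          d, e, f, g = np.linalg.solve(A, b)
          center = -0.5 * np.array([d, e, f])
          return center, np.sqrt(center @ center - g)

      def ransac_sphere(points, n_iter=500, tol=1.0, rng=np.random.default_rng(0)):
          best, best_inliers = None, 0
          for _ in range(n_iter):
              sample = points[rng.choice(len(points), 4, replace=False)]
              try:
                  center, radius = sphere_from_points(sample)
              except np.linalg.LinAlgError:          # degenerate (coplanar) sample
                  continue
              dist = np.abs(np.linalg.norm(points - center, axis=1) - radius)
              inliers = (dist < tol).sum()
              if inliers > best_inliers:
                  best, best_inliers = (center, radius), inliers
          return best, best_inliers

      # Toy check: noisy points on a sphere of radius 25 centred at the origin.
      rng = np.random.default_rng(1)
      u = rng.normal(size=(400, 3)); u /= np.linalg.norm(u, axis=1, keepdims=True)
      pts = 25.0 * u + rng.normal(scale=0.3, size=u.shape)
      model, n_in = ransac_sphere(pts)
      print(np.round(model[0], 1), round(model[1], 1), n_in)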

  2. Automatic limb identification and sleeping parameters assessment for pressure ulcer prevention.

    PubMed

    Baran Pouyan, Maziyar; Birjandtalab, Javad; Nourani, Mehrdad; Matthew Pompeo, M D

    2016-08-01

    Pressure ulcers (PUs) are common among vulnerable patients such as the elderly, bedridden and diabetic. PUs are very painful for patients and costly for hospitals and nursing homes. Assessment of sleeping parameters on at-risk limbs is critical for ulcer prevention. An effective assessment depends on automatic identification and tracking of at-risk limbs. An accurate limb identification can be used to analyze the pressure distribution and assess risk for each limb. In this paper, we propose a graph-based clustering approach to extract the body limbs from the pressure data collected by a commercial pressure map system. A robust signature-based technique is employed to automatically label each limb. Finally, an assessment technique is applied to evaluate the stress experienced by each limb over time. The experimental results indicate high performance and more than 94% average accuracy of the proposed approach. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Segmentation of Nerve Bundles and Ganglia in Spine MRI Using Particle Filters

    PubMed Central

    Dalca, Adrian; Danagoulian, Giovanna; Kikinis, Ron; Schmidt, Ehud; Golland, Polina

    2011-01-01

    Automatic segmentation of spinal nerve bundles that originate within the dural sac and exit the spinal canal is important for diagnosis and surgical planning. The variability in intensity, contrast, shape and direction of nerves seen in high resolution myelographic MR images makes segmentation a challenging task. In this paper, we present an automatic tracking method for nerve segmentation based on particle filters. We develop a novel approach to particle representation and dynamics, based on Bézier splines. Moreover, we introduce a robust image likelihood model that enables delineation of nerve bundles and ganglia from the surrounding anatomical structures. We demonstrate accurate and fast nerve tracking and compare it to expert manual segmentation. PMID:22003741
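
    Representing each particle's nerve-path hypothesis with a Bézier spline keeps the state low-dimensional and smooth. A minimal Python/NumPy sketch of evaluating one cubic Bézier segment from its four control points, the geometric primitive such a particle representation rests on (the control point values are illustrative, not from the paper):

      import numpy as np

      def cubic_bezier(p0, p1, p2, p3, t):
          """Evaluate a cubic Bézier curve at parameter values t in [0, 1]."""
          t = np.asarray(t)[:, None]
          return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
                  + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

      # One particle: four 3D control points describing a short nerve-bundle segment.
      ctrl = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 0.5], [2.0, 2.5, 1.0], [3.0, 1.0, 1.5]])
      samples = cubic_bezier(*ctrl, np.linspace(0.0, 1.0, 20))
      print(samples.shape)   # (20, 3) points along the hypothesised bundle centreline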

  4. Automatic multi-organ segmentation using learning-based segmentation and level set optimization.

    PubMed

    Kohlberger, Timo; Sofka, Michal; Zhang, Jingdan; Birkbeck, Neil; Wetzl, Jens; Kaftan, Jens; Declerck, Jérôme; Zhou, S Kevin

    2011-01-01

    We present a novel generic segmentation system for the fully automatic multi-organ segmentation from CT medical images. Thereby we combine the advantages of learning-based approaches on point cloud-based shape representation, such as speed, robustness, and point correspondences, with those of PDE-optimization-based level set approaches, such as high accuracy and the straightforward prevention of segment overlaps. In a benchmark on 10-100 annotated datasets for the liver, the lungs, and the kidneys we show that the proposed system yields segmentation accuracies of 1.17-2.89 mm average surface errors. Thereby the level set segmentation (which is initialized by the learning-based segmentations) contributes a 20%-40% increase in accuracy.

  5. Segmentation of nerve bundles and ganglia in spine MRI using particle filters.

    PubMed

    Dalca, Adrian; Danagoulian, Giovanna; Kikinis, Ron; Schmidt, Ehud; Golland, Polina

    2011-01-01

    Automatic segmentation of spinal nerve bundles that originate within the dural sac and exit the spinal canal is important for diagnosis and surgical planning. The variability in intensity, contrast, shape and direction of nerves seen in high resolution myelographic MR images makes segmentation a challenging task. In this paper, we present an automatic tracking method for nerve segmentation based on particle filters. We develop a novel approach to particle representation and dynamics, based on Bézier splines. Moreover, we introduce a robust image likelihood model that enables delineation of nerve bundles and ganglia from the surrounding anatomical structures. We demonstrate accurate and fast nerve tracking and compare it to expert manual segmentation.

  6. Wireless sensor network effectively controls center pivot irrigation of sorghum

    USDA-ARS?s Scientific Manuscript database

    Robust automatic irrigation scheduling has been demonstrated using wired sensors and sensor network systems with subsurface drip and moving irrigation systems. However, there are limited studies that report on crop yield and water use efficiency resulting from the use of wireless networks to automat...

  7. Automatic atlas-based three-label cartilage segmentation from MR knee images

    PubMed Central

    Shan, Liang; Zach, Christopher; Charles, Cecil; Niethammer, Marc

    2016-01-01

    Osteoarthritis (OA) is the most common form of joint disease and often characterized by cartilage changes. Accurate quantitative methods are needed to rapidly screen large image databases to assess changes in cartilage morphology. We therefore propose a new automatic atlas-based cartilage segmentation method for future automatic OA studies. Atlas-based segmentation methods have been demonstrated to be robust and accurate in brain imaging and therefore also hold high promise to allow for reliable and high-quality segmentations of cartilage. Nevertheless, atlas-based methods have not been well explored for cartilage segmentation. A particular challenge is the thinness of cartilage, its relatively small volume in comparison to surrounding tissue and the difficulty of locating cartilage interfaces – for example the interface between femoral and tibial cartilage. This paper focuses on the segmentation of femoral and tibial cartilage, proposing a multi-atlas segmentation strategy with non-local patch-based label fusion which can robustly identify candidate regions of cartilage. This method is combined with a novel three-label segmentation method which guarantees the spatial separation of femoral and tibial cartilage, and ensures spatial regularity while preserving the thin cartilage shape through anisotropic regularization. Our segmentation energy is convex and therefore guarantees globally optimal solutions. We perform an extensive validation of the proposed method on 706 images of the Pfizer Longitudinal Study. Our validation includes comparisons of different atlas segmentation strategies, different local classifiers, and different types of regularizers. To compare to other cartilage segmentation approaches we validate based on the 50 images of the SKI10 dataset. PMID:25128683
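
    The multi-atlas strategy registers several labelled atlases to the target image and then fuses the propagated labels voxel by voxel. The paper uses non-local patch-based fusion; the sketch below shows only the simpler majority-vote fusion step, as a Python/NumPy illustration of how propagated label maps are combined (the arrays are toy data):

      import numpy as np

      def majority_vote_fusion(propagated_labels):
          """Fuse N propagated integer label maps by per-voxel majority vote."""
          stack = np.stack(propagated_labels)
          labels = np.unique(stack)
          votes = np.stack([(stack == lab).sum(axis=0) for lab in labels])
          return labels[np.argmax(votes, axis=0)]

      # Three toy atlases voting on a 2x2 slice (0 = background, 1 = femoral, 2 = tibial cartilage).
      atlases = [np.array([[1, 2], [0, 2]]),
                 np.array([[1, 1], [0, 2]]),
                 np.array([[1, 2], [2, 2]])]
      print(majority_vote_fusion(atlases))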

  8. Noise-robust unsupervised spike sorting based on discriminative subspace learning with outlier handling.

    PubMed

    Keshtkaran, Mohammad Reza; Yang, Zhi

    2017-06-01

    Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. Most of the feature extraction and dimensionality reduction techniques that have been used for spike sorting give a projection subspace which is not necessarily the most discriminative one. Therefore, the clusters which appear inherently separable in some discriminative subspace may overlap if projected using conventional feature extraction approaches leading to a poor sorting accuracy especially when the noise level is high. In this paper, we propose a noise-robust and unsupervised spike sorting algorithm based on learning discriminative spike features for clustering. The proposed algorithm uses discriminative subspace learning to extract low dimensional and most discriminative features from the spike waveforms and perform clustering with automatic detection of the number of the clusters. The core part of the algorithm involves iterative subspace selection using linear discriminant analysis and clustering using Gaussian mixture model with outlier detection. A statistical test in the discriminative subspace is proposed to automatically detect the number of the clusters. Comparative results on publicly available simulated and real in vivo datasets demonstrate that our algorithm achieves substantially improved cluster distinction leading to higher sorting accuracy and more reliable detection of clusters which are highly overlapping and not detectable using conventional feature extraction techniques such as principal component analysis or wavelets. By providing more accurate information about the activity of a larger number of individual neurons with high robustness to neural noise and outliers, the proposed unsupervised spike sorting algorithm facilitates more detailed and accurate analysis of single- and multi-unit activities in neuroscience and brain machine interface studies.
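
    The core of the algorithm alternates between projecting the spike waveforms into a discriminative subspace (linear discriminant analysis using the current cluster labels) and re-clustering in that subspace (Gaussian mixture model). A minimal sketch of that alternation using scikit-learn, with the outlier handling and the automatic cluster-number test omitted; the waveforms are synthetic and the loop settings are illustrative, not the authors' implementation:

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.mixture import GaussianMixture
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      waveforms = np.vstack([rng.normal(m, 1.0, size=(200, 32)) for m in (-2.0, 0.0, 2.0)])

      n_clusters = 3
      labels = KMeans(n_clusters, n_init=10, random_state=0).fit_predict(waveforms)  # rough start
      for _ in range(5):                       # iterate subspace selection and clustering
          lda = LinearDiscriminantAnalysis(n_components=n_clusters - 1)
          features = lda.fit_transform(waveforms, labels)      # discriminative subspace
          gmm = GaussianMixture(n_components=n_clusters, random_state=0).fit(features)
          labels = gmm.predict(features)
      print(np.bincount(labels))               # cluster sizes after the final iteration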

  9. Noise-robust unsupervised spike sorting based on discriminative subspace learning with outlier handling

    NASA Astrophysics Data System (ADS)

    Keshtkaran, Mohammad Reza; Yang, Zhi

    2017-06-01

    Objective. Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. Most of the feature extraction and dimensionality reduction techniques that have been used for spike sorting give a projection subspace which is not necessarily the most discriminative one. Therefore, the clusters which appear inherently separable in some discriminative subspace may overlap if projected using conventional feature extraction approaches leading to a poor sorting accuracy especially when the noise level is high. In this paper, we propose a noise-robust and unsupervised spike sorting algorithm based on learning discriminative spike features for clustering. Approach. The proposed algorithm uses discriminative subspace learning to extract low dimensional and most discriminative features from the spike waveforms and perform clustering with automatic detection of the number of the clusters. The core part of the algorithm involves iterative subspace selection using linear discriminant analysis and clustering using Gaussian mixture model with outlier detection. A statistical test in the discriminative subspace is proposed to automatically detect the number of the clusters. Main results. Comparative results on publicly available simulated and real in vivo datasets demonstrate that our algorithm achieves substantially improved cluster distinction leading to higher sorting accuracy and more reliable detection of clusters which are highly overlapping and not detectable using conventional feature extraction techniques such as principal component analysis or wavelets. Significance. By providing more accurate information about the activity of a larger number of individual neurons with high robustness to neural noise and outliers, the proposed unsupervised spike sorting algorithm facilitates more detailed and accurate analysis of single- and multi-unit activities in neuroscience and brain machine interface studies.

  10. Galaxy And Mass Assembly (GAMA): AUTOZ spectral redshift measurements, confidence and errors

    NASA Astrophysics Data System (ADS)

    Baldry, I. K.; Alpaslan, M.; Bauer, A. E.; Bland-Hawthorn, J.; Brough, S.; Cluver, M. E.; Croom, S. M.; Davies, L. J. M.; Driver, S. P.; Gunawardhana, M. L. P.; Holwerda, B. W.; Hopkins, A. M.; Kelvin, L. S.; Liske, J.; López-Sánchez, Á. R.; Loveday, J.; Norberg, P.; Peacock, J.; Robotham, A. S. G.; Taylor, E. N.

    2014-07-01

    The Galaxy And Mass Assembly (GAMA) survey has obtained spectra of over 230 000 targets using the Anglo-Australian Telescope. To homogenize the redshift measurements and improve the reliability, a fully automatic redshift code was developed (AUTOZ). The measurements were made using a cross-correlation method for both the absorption- and the emission-line spectra. Large deviations in the high-pass-filtered spectra are partially clipped in order to be robust against uncorrected artefacts and to reduce the weight given to single-line matches. A single figure of merit (FOM) was developed that puts all template matches on to a similar confidence scale. The redshift confidence as a function of the FOM was fitted with a tanh function using a maximum likelihood method applied to repeat observations of targets. The method could be adapted to provide robust automatic redshifts for other large galaxy redshift surveys. For the GAMA survey, there was a substantial improvement in the reliability of assigned redshifts and in the lowering of redshift uncertainties with a median velocity uncertainty of 33 km s-1.
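
    The redshift confidence is modelled as a tanh function of the figure of merit, with parameters fitted by maximum likelihood to repeat observations. A minimal Python/NumPy sketch of that idea; the Bernoulli "redshifts agree" likelihood, the synthetic data and the grid search are illustrative assumptions, not the AUTOZ implementation:

      import numpy as np

      def confidence(fom, a, b):
          """Probability that a redshift with figure of merit `fom` is correct."""
          return 0.5 * (1.0 + np.tanh((fom - a) / b))

      # Synthetic repeats: 1 if the two independent redshifts of a target agree, else 0.
      rng = np.random.default_rng(0)
      fom = rng.uniform(2.0, 10.0, 2000)
      agree = rng.random(2000) < confidence(fom, a=5.0, b=1.5)   # "true" parameters

      def neg_log_likelihood(a, b):
          p = np.clip(confidence(fom, a, b), 1e-9, 1 - 1e-9)
          return -np.sum(agree * np.log(p) + (~agree) * np.log(1 - p))

      grid_a, grid_b = np.linspace(3, 7, 81), np.linspace(0.5, 3.0, 51)
      scores = np.array([[neg_log_likelihood(a, b) for b in grid_b] for a in grid_a])
      ia, ib = np.unravel_index(scores.argmin(), scores.shape)
      print(round(grid_a[ia], 2), round(grid_b[ib], 2))   # recovered (a, b)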

  11. Performance of a wireless sensor network for crop monitoring and irrigation control

    USDA-ARS?s Scientific Manuscript database

    Robust automatic irrigation scheduling has been demonstrated using wired sensors and sensor network systems with subsurface drip and moving irrigation systems. However, there are limited studies that report on crop yield and water use efficiency resulting from the use of wireless networks to automat...

  12. Automatic segmentation of brain MRIs and mapping neuroanatomy across the human lifespan

    NASA Astrophysics Data System (ADS)

    Keihaninejad, Shiva; Heckemann, Rolf A.; Gousias, Ioannis S.; Rueckert, Daniel; Aljabar, Paul; Hajnal, Joseph V.; Hammers, Alexander

    2009-02-01

    A robust model for the automatic segmentation of human brain images into anatomically defined regions across the human lifespan would be highly desirable, but such structural segmentations of brain MRI are challenging due to age-related changes. We have developed a new method, based on established algorithms for automatic segmentation of young adults' brains. We used prior information from 30 anatomical atlases, which had been manually segmented into 83 anatomical structures. Target MRIs came from 80 subjects (~12 individuals/decade) from 20 to 90 years, with equal numbers of men and women; data came from two different scanners (1.5T, 3T), using the IXI database. Each of the adult atlases was registered to each target MR image. By using additional information from segmentation into tissue classes (GM, WM and CSF) to initialise the warping based on label consistency similarity before feeding this into the previous normalised mutual information non-rigid registration, the registration became robust enough to accommodate atrophy and ventricular enlargement with age. The final segmentation was obtained by combination of the 30 propagated atlases using decision fusion. Kernel smoothing was used for modelling the structural volume changes with aging. Example linear correlation coefficients with age were, for lateral ventricular volume, r_male = 0.76, r_female = 0.58 and, for hippocampal volume, r_male = -0.6, r_female = -0.4 (all p < 0.01).

  13. High Performance Automatic Character Skinning Based on Projection Distance

    NASA Astrophysics Data System (ADS)

    Li, Jun; Lin, Feng; Liu, Xiuling; Wang, Hongrui

    2018-03-01

    Skeleton-driven deformation methods have been commonly used in character deformation. The process of painting skin weights for character deformation is a long-winded task requiring manual tweaking. We present a novel method to calculate skinning weights automatically from a 3D human geometric model and its corresponding skeleton. The method first groups each mesh vertex of the 3D human model to a skeleton bone by the minimum distance from the vertex to each bone. Second, it calculates each vertex's weights for the adjacent bones from the projection-point distances to the bone joints. Our method's output can not only be applied to any kind of skeleton-driven deformation, but also to motion capture driven (mocap-driven) deformation. Experimental results show that our method not only has strong generality and robustness, but also has high performance.
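
    The weighting step projects each vertex onto its assigned bone and splits the weight between the bone's two joints according to the projection distances. A minimal Python/NumPy sketch under the simplifying assumption of a single bone defined by two joints and a linear split of the weights; the names and the exact weighting rule are illustrative, not the paper's formula:

      import numpy as np

      def projection_weights(vertex, joint_a, joint_b):
          """Linear blend weights of `vertex` for the two joints of its assigned bone.

          The vertex is projected onto the bone segment; each joint's weight is
          proportional to the projection distance to the opposite joint."""
          bone = joint_b - joint_a
          t = np.dot(vertex - joint_a, bone) / np.dot(bone, bone)
          t = np.clip(t, 0.0, 1.0)            # projection point along the bone, in [0, 1]
          return 1.0 - t, t                   # weights for (joint_a, joint_b), summing to 1

      v = np.array([0.3, 0.8, 0.0])
      wa, wb = projection_weights(v, np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
      print(round(wa, 2), round(wb, 2))       # 0.7 0.3 for a vertex 30% along the bone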

  14. Robust automatic control system of vessel descent-rise device for plant with distributed parameters “cable – towed underwater vehicle”

    NASA Astrophysics Data System (ADS)

    Chupina, K. V.; Kataev, E. V.; Khannanov, A. M.; Korshunov, V. N.; Sennikov, I. A.

    2018-05-01

    The paper is devoted to the problem of synthesizing a robust control system for a distributed-parameter plant. The vessel descent-rise device has a heave compensation function for stabilizing the towed underwater vehicle at a set depth. The sea state code and the parameters of the underwater vehicle and cable vary during underwater operations, and the vessel heave is a stochastic process. This means that both the plant and the external disturbances are uncertain. That is why it is necessary to use robust control theory for the synthesis of an automatic control system, but without traditional optimization methods, because the cable has distributed parameters. The proposed technique allows one to design an effective control system for stabilizing the immersion depth of the towed underwater vehicle for various degrees of sea roughness and to ensure its robustness to deviations in the vehicle parameters and the cable length.

  15. An automatic, stagnation point based algorithm for the delineation of Wellhead Protection Areas

    NASA Astrophysics Data System (ADS)

    Tosco, Tiziana; Sethi, Rajandrea; di Molfetta, Antonio

    2008-07-01

    Time-related capture areas are usually delineated using the backward particle tracking method, releasing circles of equally spaced particles around each well. In this way, an accurate delineation often requires both a very high number of particles and a manual capture zone encirclement. The aim of this work was to propose an Automatic Protection Area (APA) delineation algorithm, which can be coupled with any model of flow and particle tracking. The computational time is here reduced, thanks to the use of a limited number of nonequally spaced particles. The particle starting positions are determined coupling forward particle tracking from the stagnation point, and backward particle tracking from the pumping well. The pathlines are postprocessed for a completely automatic delineation of closed perimeters of time-related capture zones. The APA algorithm was tested for a two-dimensional geometry, in homogeneous and nonhomogeneous aquifers, steady state flow conditions, single and multiple wells. Results show that the APA algorithm is robust and able to automatically and accurately reconstruct protection areas with a very small number of particles, also in complex scenarios.

  16. Automatic identification of mobile and rigid substructures in molecular dynamics simulations and fractional structural fluctuation analysis.

    PubMed

    Martínez, Leandro

    2015-01-01

    The analysis of structural mobility in molecular dynamics plays a key role in data interpretation, particularly in the simulation of biomolecules. The most common mobility measures computed from simulations are the Root Mean Square Deviation (RMSD) and Root Mean Square Fluctuations (RMSF) of the structures. These are computed after the alignment of atomic coordinates in each trajectory step to a reference structure. This rigid-body alignment is not robust, in the sense that if a small portion of the structure is highly mobile, the RMSD and RMSF increase for all atoms, possibly resulting in poor quantification of the structural fluctuations and, often, in overlooking important fluctuations associated with biological function. The motivation of this work is to provide a robust measure of structural mobility that is practical, and easy to interpret. We propose a Low-Order-Value-Optimization (LOVO) strategy for the robust alignment of the least mobile substructures in a simulation. These substructures are automatically identified by the method. The algorithm consists of the iterative superposition of the fraction of structure displaying the smallest displacements. Therefore, the least mobile substructures are identified, providing a clearer picture of the overall structural fluctuations. Examples are given to illustrate the interpretative advantages of this strategy. The software for performing the alignments was named MDLovoFit and it is available as free-software at: http://leandro.iqm.unicamp.br/mdlovofit.
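
    The LOVO loop superposes each frame onto the reference using only the fraction of atoms with the smallest displacements, then re-selects that fraction and repeats. A minimal Python/NumPy sketch of the idea with a Kabsch rigid superposition; this is an illustration of the strategy, not the MDLovoFit implementation, and the fraction and toy data are assumptions:

      import numpy as np

      def kabsch(P, Q):
          """Rotation and translation that best superpose P onto Q (least squares)."""
          cp, cq = P.mean(axis=0), Q.mean(axis=0)
          H = (P - cp).T @ (Q - cq)
          U, _, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T))
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
          return R, cq - R @ cp

      def lovo_align(frame, reference, fraction=0.7, n_iter=10):
          """Align `frame` to `reference` using only the least-mobile `fraction` of atoms."""
          idx = np.arange(len(frame))                     # start with all atoms
          for _ in range(n_iter):
              R, t = kabsch(frame[idx], reference[idx])
              moved = frame @ R.T + t
              disp = np.linalg.norm(moved - reference, axis=1)
              n_keep = int(fraction * len(frame))
              idx = np.argsort(disp)[:n_keep]             # re-select the least mobile atoms
          return moved, disp, idx

      # Toy use: a 100-atom frame equal to the reference except for a mobile 30-atom tail.
      rng = np.random.default_rng(0)
      ref = rng.normal(size=(100, 3))
      frm = ref.copy(); frm[70:] += rng.normal(scale=2.0, size=(30, 3))
      aligned, disp, core = lovo_align(frm, ref)
      print(len(core), round(disp[:70].mean(), 2), round(disp[70:].mean(), 2))

    The displacements in `disp` are then measured relative to the rigid core, so the mobile tail stands out without inflating the apparent fluctuations of the rest of the structure.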

  17. Automatic Identification of Mobile and Rigid Substructures in Molecular Dynamics Simulations and Fractional Structural Fluctuation Analysis

    PubMed Central

    Martínez, Leandro

    2015-01-01

    The analysis of structural mobility in molecular dynamics plays a key role in data interpretation, particularly in the simulation of biomolecules. The most common mobility measures computed from simulations are the Root Mean Square Deviation (RMSD) and Root Mean Square Fluctuations (RMSF) of the structures. These are computed after the alignment of atomic coordinates in each trajectory step to a reference structure. This rigid-body alignment is not robust, in the sense that if a small portion of the structure is highly mobile, the RMSD and RMSF increase for all atoms, possibly resulting in poor quantification of the structural fluctuations and, often, in overlooking important fluctuations associated with biological function. The motivation of this work is to provide a robust measure of structural mobility that is practical, and easy to interpret. We propose a Low-Order-Value-Optimization (LOVO) strategy for the robust alignment of the least mobile substructures in a simulation. These substructures are automatically identified by the method. The algorithm consists of the iterative superposition of the fraction of structure displaying the smallest displacements. Therefore, the least mobile substructures are identified, providing a clearer picture of the overall structural fluctuations. Examples are given to illustrate the interpretative advantages of this strategy. The software for performing the alignments was named MDLovoFit and it is available as free-software at: http://leandro.iqm.unicamp.br/mdlovofit PMID:25816325

  18. Balanced excitation and inhibition are required for high-capacity, noise-robust neuronal selectivity

    PubMed Central

    Abbott, L. F.; Sompolinsky, Haim

    2017-01-01

    Neurons and networks in the cerebral cortex must operate reliably despite multiple sources of noise. To evaluate the impact of both input and output noise, we determine the robustness of single-neuron stimulus selective responses, as well as the robustness of attractor states of networks of neurons performing memory tasks. We find that robustness to output noise requires synaptic connections to be in a balanced regime in which excitation and inhibition are strong and largely cancel each other. We evaluate the conditions required for this regime to exist and determine the properties of networks operating within it. A plausible synaptic plasticity rule for learning that balances weight configurations is presented. Our theory predicts an optimal ratio of the number of excitatory and inhibitory synapses for maximizing the encoding capacity of balanced networks for given statistics of afferent activations. Previous work has shown that balanced networks amplify spatiotemporal variability and account for observed asynchronous irregular states. Here we present a distinct type of balanced network that amplifies small changes in the impinging signals and emerges automatically from learning to perform neuronal and network functions robustly. PMID:29042519

  19. Parameter-tolerant design of high contrast gratings

    NASA Astrophysics Data System (ADS)

    Chevallier, Christyves; Fressengeas, Nicolas; Jacquet, Joel; Almuneau, Guilhem; Laaroussi, Youness; Gauthier-Lafaye, Olivier; Cerutti, Laurent; Genty, Frédéric

    2015-02-01

    This work is devoted to the design of high contrast grating mirrors taking into account the technological constraints and tolerance of fabrication. First, a global optimization algorithm has been combined with a numerical analysis of grating structures (RCWA) to automatically design HCG mirrors. Then, the tolerances of the grating dimensions have been precisely studied to develop a robust optimization algorithm with which high contrast gratings, exhibiting not only a high efficiency but also large tolerance values, could be designed. Finally, several structures integrating previously designed HCGs have been simulated to validate and illustrate the interest of such gratings.

  20. Optimization and automation of quantitative NMR data extraction.

    PubMed

    Bernstein, Michael A; Sýkora, Stan; Peng, Chen; Barba, Agustín; Cobas, Carlos

    2013-06-18

    NMR is routinely used to quantitate chemical species. The necessary experimental procedures to acquire quantitative data are well-known, but relatively little attention has been paid to data processing and analysis. We describe here a robust expert system that can be used to automatically choose the best signals in a sample for overall concentration determination and determine analyte concentration using all accepted methods. The algorithm is based on the complete deconvolution of the spectrum which makes it tolerant of cases where signals are very close to one another and includes robust methods for the automatic classification of NMR resonances and molecule-to-spectrum multiplet assignments. With the functionality in place and optimized, it is then a relatively simple matter to apply the same workflow to data in a fully automatic way. The procedure is desirable for both its inherent performance and applicability to NMR data acquired for very large sample sets.

  1. CADLIVE toolbox for MATLAB: automatic dynamic modeling of biochemical networks with comprehensive system analysis.

    PubMed

    Inoue, Kentaro; Maeda, Kazuhiro; Miyabe, Takaaki; Matsuoka, Yu; Kurata, Hiroyuki

    2014-09-01

    Mathematical modeling has become a standard technique to understand the dynamics of complex biochemical systems. To promote the modeling, we previously developed the CADLIVE dynamic simulator that automatically converted a biochemical map into its associated mathematical model, simulated its dynamic behaviors and analyzed its robustness. To enhance the feasibility of CADLIVE and extend its functions, we propose the CADLIVE toolbox available for MATLAB, which implements not only the existing functions of the CADLIVE dynamic simulator, but also the latest tools including global parameter search methods with robustness analysis. The seamless, bottom-up processes consisting of biochemical network construction, automatic construction of its dynamic model, simulation, optimization, and S-system analysis greatly facilitate dynamic modeling, contributing to the research of systems biology and synthetic biology. This application can be freely downloaded from http://www.cadlive.jp/CADLIVE_MATLAB/ together with instructions.

  2. Robust automatic P-phase picking: an on-line implementation in the analysis of broadband seismogram recordings

    NASA Astrophysics Data System (ADS)

    Sleeman, Reinoud; van Eck, Torild

    1999-06-01

    The onset of a seismic signal is determined through joint AR modeling of the noise and the seismic signal, and the application of the Akaike Information Criterion (AIC) using the onset time as parameter. This so-called AR-AIC phase picker has been tested successfully and implemented on the Z-component of the broadband station HGN to provide automatic P-phase picks for a rapid warning system. The AR-AIC picker is shown to provide accurate and robust automatic picks on a large experimental database. Out of 1109 P-phase onsets with signal-to-noise ratio (SNR) above 1 from local, regional and teleseismic earthquakes, our implementation detects 71% and gives a mean difference with manual picks of 0.1 s. An optimal version of the well-established picker of Baer and Kradolfer [Baer, M., Kradolfer, U., An automatic phase picker for local and teleseismic events, Bull. Seism. Soc. Am. 77 (1987) 1437-1445] detects less than 41% and gives a mean difference with manual picks of 0.3 s using the same dataset.
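
    The onset-as-parameter idea can be illustrated with the simpler variance-based two-segment AIC picker: for every candidate split point the AIC of "noise before, signal after" is evaluated, and the onset is taken at the minimum of that curve (the full method fits AR models to the two segments instead of using raw variances). A minimal Python/NumPy sketch under that simplification, on a synthetic trace:

      import numpy as np

      def aic_pick(trace):
          """Return the sample index minimising the two-segment AIC of the trace."""
          x = np.asarray(trace, dtype=float)
          n = len(x)
          aic = np.full(n, np.inf)
          for k in range(10, n - 10):          # keep a few samples on each side
              v1, v2 = np.var(x[:k]), np.var(x[k:])
              aic[k] = k * np.log(max(v1, 1e-12)) + (n - k - 1) * np.log(max(v2, 1e-12))
          return int(np.argmin(aic))

      # Synthetic seismogram: noise followed by a higher-amplitude arrival at sample 500.
      rng = np.random.default_rng(0)
      trace = np.concatenate([rng.normal(0, 1, 500), rng.normal(0, 6, 300)])
      print(aic_pick(trace))                   # close to 500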

  3. Robust Spacecraft Component Detection in Point Clouds.

    PubMed

    Wei, Quanmao; Jiang, Zhiguo; Zhang, Haopeng

    2018-03-21

    Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives like plane, cuboid and cylinder. Based on this prior, we propose a robust automatic detection scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected in the iteration of the energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by Hough transform and further described as bounded patches with their minimum bounding rectangles. Finally, the cuboids are detected with pair-wise geometry relations from the detected patches. After successive detection of cylinders, planar patches and cuboids, a mid-level geometry representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized by computer-aided design (CAD) models and those recovered by image-based reconstruction, respectively. Experimental results illustrate that the proposed scheme can detect the basic geometric components effectively and has fine robustness against noise and point distribution density.

  4. Robust Spacecraft Component Detection in Point Clouds

    PubMed Central

    Wei, Quanmao; Jiang, Zhiguo

    2018-01-01

    Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives like plane, cuboid and cylinder. Based on this prior, we propose a robust automatic detection scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected in the iteration of the energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by Hough transform and further described as bounded patches with their minimum bounding rectangles. Finally, the cuboids are detected with pair-wise geometry relations from the detected patches. After successive detection of cylinders, planar patches and cuboids, a mid-level geometry representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized by computer-aided design (CAD) models and those recovered by image-based reconstruction, respectively. Experimental results illustrate that the proposed scheme can detect the basic geometric components effectively and has fine robustness against noise and point distribution density. PMID:29561828

  5. Real-time measurement system for evaluation of the carotid intima-media thickness with a robust edge operator.

    PubMed

    Faita, Francesco; Gemignani, Vincenzo; Bianchini, Elisabetta; Giannarelli, Chiara; Ghiadoni, Lorenzo; Demi, Marcello

    2008-09-01

    The purpose of this report is to describe an automatic real-time system for evaluation of the carotid intima-media thickness (CIMT) characterized by 3 main features: minimal interobserver and intraobserver variability, real-time capabilities, and great robustness against noise. One hundred fifty carotid B-mode ultrasound images were used to validate the system. Two skilled operators were involved in the analysis. Agreement with the gold standard, defined as the mean of 2 manual measurements of a skilled operator, and the interobserver and intraobserver variability were quantitatively evaluated by regression analysis and Bland-Altman statistics. The automatic measure of the CIMT showed a mean bias +/- SD of 0.001 +/- 0.035 mm toward the manual measurement. The intraobserver variability, evaluated with Bland-Altman plots, showed a bias that was not significantly different from 0, whereas the SD of the differences was greater in the manual analysis (0.038 mm) than in the automatic analysis (0.006 mm). For interobserver variability, the automatic measurement had a bias that was not significantly different from 0, with a satisfactory SD of the differences (0.01 mm), whereas in the manual measurement, a small bias was present (0.012 mm), and the SD of the differences was noticeably greater (0.044 mm). The CIMT has been accepted as a noninvasive marker of early vascular alteration. At present, the manual approach is largely used to estimate CIMT values. However, that method is highly operator dependent and time-consuming. For these reasons, we developed a new system for the CIMT measurement that combines precision with real-time analysis, thus providing considerable advantages in clinical practice.
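
    The interobserver and intraobserver comparisons use Bland-Altman statistics: the bias is the mean of the paired differences and the 95% limits of agreement are bias ± 1.96 SD. A minimal Python/NumPy sketch of that computation (the CIMT values below are made up, not the study's data):

      import numpy as np

      def bland_altman(measure_a, measure_b):
          """Bias and 95% limits of agreement between two paired measurement series."""
          diff = np.asarray(measure_a) - np.asarray(measure_b)
          bias = diff.mean()
          sd = diff.std(ddof=1)
          return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

      # Paired CIMT readings (mm) from two observers on the same images (illustrative values).
      obs1 = np.array([0.62, 0.71, 0.55, 0.80, 0.66])
      obs2 = np.array([0.61, 0.73, 0.54, 0.82, 0.65])
      bias, limits = bland_altman(obs1, obs2)
      print(round(bias, 3), tuple(round(v, 3) for v in limits))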

  6. Non-rigid registration of 3D ultrasound for neurosurgery using automatic feature detection and matching.

    PubMed

    Machado, Inês; Toews, Matthew; Luo, Jie; Unadkat, Prashin; Essayed, Walid; George, Elizabeth; Teodoro, Pedro; Carvalho, Herculano; Martins, Jorge; Golland, Polina; Pieper, Steve; Frisken, Sarah; Golby, Alexandra; Wells, William

    2018-06-04

    The brain undergoes significant structural change over the course of neurosurgery, including highly nonlinear deformation and resection. It can be informative to recover the spatial mapping between structures identified in preoperative surgical planning and the intraoperative state of the brain. We present a novel feature-based method for achieving robust, fully automatic deformable registration of intraoperative neurosurgical ultrasound images. A sparse set of local image feature correspondences is first estimated between ultrasound image pairs, after which rigid, affine and thin-plate spline models are used to estimate dense mappings throughout the image. Correspondences are derived from 3D features, distinctive generic image patterns that are automatically extracted from 3D ultrasound images and characterized in terms of their geometry (i.e., location, scale, and orientation) and a descriptor of local image appearance. Feature correspondences between ultrasound images are achieved based on a nearest-neighbor descriptor matching and probabilistic voting model similar to the Hough transform. Experiments demonstrate our method on intraoperative ultrasound images acquired before and after opening of the dura mater, during resection and after resection in nine clinical cases. A total of 1620 automatically extracted 3D feature correspondences were manually validated by eleven experts and used to guide the registration. Then, using manually labeled corresponding landmarks in the pre- and post-resection ultrasound images, we show that our feature-based registration reduces the mean target registration error from an initial value of 3.3 to 1.5 mm. This result demonstrates that the 3D features promise to offer a robust and accurate solution for 3D ultrasound registration and to correct for brain shift in image-guided neurosurgery.

  7. Key features for ATA / ATR database design in missile systems

    NASA Astrophysics Data System (ADS)

    Özertem, Kemal Arda

    2017-05-01

    Automatic target acquisition (ATA) and automatic target recognition (ATR) are two vital tasks for missile systems, and having a robust detection and recognition algorithm is crucial for overall system performance. In order to have a robust target detection and recognition algorithm, an extensive image database is required. Automatic target recognition algorithms use the database of images in the training and testing steps of the algorithm. This directly affects the recognition performance, since the training accuracy is driven by the quality of the image database. In addition, the performance of an automatic target detection algorithm can be measured effectively by using an image database. There are two main ways for designing an ATA / ATR database. The first and easy way is to use a scene generator. A scene generator can model the objects by considering their material information, the atmospheric conditions, the detector type and the territory. Designing an image database with a scene generator is inexpensive and allows creating many different scenarios quickly and easily. However, the major drawback of using a scene generator is its low fidelity, since the images are created virtually. The second and more difficult way is designing it using real-world images. Designing an image database with real-world images is a lot more costly and time-consuming; however, it offers high fidelity, which is critical for missile algorithms. In this paper, critical concepts in ATA / ATR database design with real-world images are discussed. Each concept is discussed from the perspective of ATA and ATR separately. For the implementation stage, some possible solutions and trade-offs for creating the database are proposed, and all proposed approaches are compared to each other with regard to their pros and cons.

  8. Robust multiple cue fusion-based high-speed and nonrigid object tracking algorithm for short track speed skating

    NASA Astrophysics Data System (ADS)

    Liu, Chenguang; Cheng, Heng-Da; Zhang, Yingtao; Wang, Yuxuan; Xian, Min

    2016-01-01

    This paper presents a methodology for tracking multiple skaters in short track speed skating competitions. Nonrigid skaters move at high speed with severe occlusions happening frequently among them. The camera is panned quickly in order to capture the skaters in a large and dynamic scene. Automatically tracking the skaters and precisely outputting their trajectories is a challenging object tracking task. We employ the global rink information to compensate for camera motion and obtain the global spatial information of the skaters, utilize a random forest to fuse multiple cues and predict the blob of each skater, and finally apply a silhouette- and edge-based template-matching and blob-evolving method to label pixels as belonging to a skater. The effectiveness and robustness of the proposed method are verified through thorough experiments.

  9. Hierarchical layered and semantic-based image segmentation using ergodicity map

    NASA Astrophysics Data System (ADS)

    Yadegar, Jacob; Liu, Xiaoqing

    2010-04-01

    Image segmentation plays a foundational role in image understanding and computer vision. Although great strides have been made and progress achieved on automatic/semi-automatic image segmentation algorithms, designing a generic, robust, and efficient image segmentation algorithm is still challenging. Human vision is still far superior to computer vision, especially in interpreting semantic meanings/objects in images. We present a hierarchical/layered semantic image segmentation algorithm that can automatically and efficiently segment images into hierarchical layered/multi-scaled semantic regions/objects with contextual topological relationships. The proposed algorithm bridges the gap between high-level semantics and low-level visual features/cues (such as color, intensity, edge, etc.) through utilizing a layered/hierarchical ergodicity map, where ergodicity is computed based on a space filling fractal concept and used as a region dissimilarity measurement. The algorithm applies a highly scalable, efficient, and adaptive Peano-Cesaro triangulation/tiling technique to decompose the given image into a set of similar/homogeneous regions based on low-level visual cues in a top-down manner. The layered/hierarchical ergodicity map is built through a bottom-up region dissimilarity analysis. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides efficient neighbor retrieval mechanisms for contextual topological object/region relationship generation. Experiments have been conducted within the maritime image environment where the segmented layered semantic objects include the basic level objects (i.e. sky/land/water) and deeper level objects in the sky/land/water surfaces. Experimental results demonstrate the proposed algorithm has the capability to robustly and efficiently segment images into layered semantic objects/regions with contextual topological relationships.

  10. Automatic vibration mode selection and excitation; combining modal filtering with autoresonance

    NASA Astrophysics Data System (ADS)

    Davis, Solomon; Bucher, Izhak

    2018-02-01

    Autoresonance is a well-known nonlinear feedback method used for automatically exciting a system at its natural frequency. Though highly effective in exciting single degree of freedom systems, in its simplest form it lacks a mechanism for choosing the mode of excitation when more than one is present. In this case a single mode will be automatically excited, but this mode cannot be chosen or changed. In this paper a new method for automatically exciting a general second-order system at any desired natural frequency using Autoresonance is proposed. The article begins by deriving a concise expression for the frequency of the limit cycle induced by an Autoresonance feedback loop enclosed on the system. The expression is based on modal decomposition, and provides valuable insight into the behavior of a system controlled in this way. With this expression, a method for selecting and exciting a desired mode naturally follows by combining Autoresonance with Modal Filtering. By taking various linear combinations of the sensor signals, by orthogonality one can "filter out" all the unwanted modes effectively. The desired mode's natural frequency is then automatically reflected in the limit cycle. In experiments the technique has proven extremely robust, even when the amplitude of the desired mode is significantly smaller than that of the others and the modal filters are quite inaccurate.
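
    Modal filtering forms linear combinations of the sensor signals whose weights cancel the unwanted mode shapes by orthogonality; with a known mode-shape matrix, one common choice of weights is the rows of its pseudo-inverse. A minimal Python/NumPy sketch of isolating one mode before it would be fed to the Autoresonance loop; the mode shapes and signals are synthetic and the pseudo-inverse weighting is an illustrative choice, not necessarily the paper's:

      import numpy as np

      rng = np.random.default_rng(0)
      # Mode shapes sampled at 4 sensor locations (columns = modes 1..3), illustrative values.
      Phi = np.array([[ 1.0,  1.0,  1.0],
                      [ 0.8,  0.0, -0.8],
                      [ 0.5, -0.7,  0.5],
                      [ 0.2, -1.0, -0.2]])
      W = np.linalg.pinv(Phi)                  # modal-filter weights: W @ Phi is the identity here

      t = np.linspace(0.0, 1.0, 1000)
      q = np.vstack([np.sin(2 * np.pi * 12 * t),       # modal coordinates; mode 2 is the target
                     0.1 * np.sin(2 * np.pi * 37 * t),
                     np.sin(2 * np.pi * 80 * t)])
      sensors = Phi @ q + 0.01 * rng.normal(size=(4, len(t)))

      mode2 = W[1] @ sensors                   # filtered signal dominated by mode 2
      print(np.round(np.corrcoef(mode2, q[1])[0, 1], 3))   # close to 1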

  11. An intelligent system for real time automatic defect inspection on specular coated surfaces

    NASA Astrophysics Data System (ADS)

    Li, Jinhua; Parker, Johné M.; Hou, Zhen

    2005-07-01

    Product visual inspection is still performed manually or semi-automatically in most industries, from simple ceramic tile grading to complex automotive body panel paint defect and surface quality inspection. Moreover, specular surfaces present an additional challenge to conventional vision systems due to specular reflections, which may mask the true location of objects and lead to incorrect measurements. Some sophisticated visual inspection methods have been developed in recent years. Unfortunately, most of them are computationally intensive. Systems built on those methods are either inapplicable or very costly for real-time inspection. In this paper, we describe an integrated low-cost intelligent system developed to automatically capture, extract, and segment defects on specular surfaces with uniform color coatings. The system inspects and locates regular surface defects with lateral dimensions as small as a millimeter. The proposed system is implemented on a group of smart cameras using their on-board processing ability to achieve real-time inspection. The experimental results on real test panels demonstrate the effectiveness and robustness of the proposed system.

  12. Automatic Network Fingerprinting through Single-Node Motifs

    PubMed Central

    Echtermeyer, Christoph; da Fontoura Costa, Luciano; Rodrigues, Francisco A.; Kaiser, Marcus

    2011-01-01

    Complex networks have been characterised by their specific connectivity patterns (network motifs), but their building blocks can also be identified and described by node-motifs—a combination of local network features. One technique to identify single node-motifs has been presented by Costa et al. (L. D. F. Costa, F. A. Rodrigues, C. C. Hilgetag, and M. Kaiser, Europhys. Lett., 87, 1, 2009). Here, we first suggest improvements to the method including how its parameters can be determined automatically. Such automatic routines make high-throughput studies of many networks feasible. Second, the new routines are validated in different network-series. Third, we provide an example of how the method can be used to analyse network time-series. In conclusion, we provide a robust method for systematically discovering and classifying characteristic nodes of a network. In contrast to classical motif analysis, our approach can identify individual components (here: nodes) that are specific to a network. Such special nodes, as hubs before, might be found to play critical roles in real-world networks. PMID:21297963

  13. A Plane Target Detection Algorithm in Remote Sensing Images based on Deep Learning Network Technology

    NASA Astrophysics Data System (ADS)

    Shuxin, Li; Zhilong, Zhang; Biao, Li

    2018-01-01

    The plane is an important target category in remote sensing, and it is of great value to detect plane targets automatically. As remote imaging technology develops continuously, the resolution of remote sensing images has become very high, and more detailed information is available for detecting remote sensing targets automatically. Deep learning network technology is the most advanced technology in image target detection and recognition, and it has provided great performance improvements in target detection and recognition in everyday scenes. We applied this technology to remote sensing target detection and proposed an end-to-end deep network algorithm, which can learn from remote sensing images to detect targets in new images automatically and robustly. Our experiments show that the algorithm can capture the feature information of the plane target and outperforms older methods in target detection.

  14. Feasibility of online IMPT adaptation using fast, automatic and robust dose restoration

    NASA Astrophysics Data System (ADS)

    Bernatowicz, Kinga; Geets, Xavier; Barragan, Ana; Janssens, Guillaume; Souris, Kevin; Sterpin, Edmond

    2018-04-01

    Intensity-modulated proton therapy (IMPT) offers excellent dose conformity and healthy tissue sparing, but it can be substantially compromised in the presence of anatomical changes. A major dosimetric effect is caused by density changes, which alter the planned proton range in the patient. Three different methods, which automatically restore an IMPT plan dose on a daily CT image, were implemented and compared: (1) simple dose restoration (DR) using optimization objectives of the initial plan, (2) voxel-wise dose restoration (vDR), and (3) isodose volume dose restoration (iDR). Dose restorations were calculated for three different clinical cases, selected to test different capabilities of the restoration methods: large range adaptation, complex dose distributions and robust re-optimization. All dose restorations were obtained in less than 5 min, without manual adjustments of the optimization settings. The evaluation of initial plans on repeated CTs showed large dose distortions, which were substantially reduced after restoration. In general, all dose restoration methods improved DVH-based scores in propagated target volumes and OARs. Analysis of local dose differences showed that, although all dose restorations performed similarly in high dose regions, iDR restored the initial dose with higher precision and accuracy in the whole patient anatomy. Median dose errors decreased from 13.55 Gy in the distorted plan to 9.75 Gy (vDR), 6.2 Gy (DR) and 4.3 Gy (iDR). High quality dose restoration is essential to minimize or eventually by-pass the physician approval of the restored plan, as long as dose stability can be assumed. Motion (as well as setup and range uncertainties) can be taken into account by including robust optimization in the dose restoration. Restoring a clinically-approved dose distribution on repeated CTs does not require new ROI segmentation and is compatible with an online adaptive workflow.

  15. Liquid chromatography-mass spectrometry platform for both small neurotransmitters and neuropeptides in blood, with automatic and robust solid phase extraction

    NASA Astrophysics Data System (ADS)

    Johnsen, Elin; Leknes, Siri; Wilson, Steven Ray; Lundanes, Elsa

    2015-03-01

    Neurons communicate via chemical signals called neurotransmitters (NTs). The numerous identified NTs can have very different physiochemical properties (solubility, charge, size etc.), so quantification of the various NT classes traditionally requires several analytical platforms/methodologies. We here report that a diverse range of NTs, e.g. peptides oxytocin and vasopressin, monoamines adrenaline and serotonin, and amino acid GABA, can be simultaneously identified/measured in small samples, using an analytical platform based on liquid chromatography and high-resolution mass spectrometry (LC-MS). The automated platform is cost-efficient as manual sample preparation steps and one-time-use equipment are kept to a minimum. Zwitter-ionic HILIC stationary phases were used for both on-line solid phase extraction (SPE) and liquid chromatography (capillary format, cLC). This approach enabled compounds from all NT classes to elute in small volumes producing sharp and symmetric signals, and allowing precise quantifications of small samples, demonstrated with whole blood (100 microliters per sample). An additional robustness-enhancing feature is automatic filtration/filter back-flushing (AFFL), allowing hundreds of samples to be analyzed without any parts needing replacement. The platform can be installed by simple modification of a conventional LC-MS system.

  16. Development of a novel constellation based landmark detection algorithm

    NASA Astrophysics Data System (ADS)

    Ghayoor, Ali; Vaidya, Jatin G.; Johnson, Hans J.

    2013-03-01

    Anatomical landmarks such as the anterior commissure (AC) and posterior commissure (PC) are commonly used by researchers for co-registration of images. In this paper, we present a novel, automated approach for landmark detection that combines morphometric constraining and statistical shape models to provide accurate estimation of landmark points. The method is made robust to large rotations in initial head orientation by extracting additional information about the eye centers using a radial Hough transform and by exploiting the centroid of head mass (CM) using a novel estimation approach. To evaluate the effectiveness of this method, the algorithm is trained on a set of 20 images with manually selected landmarks, and a test dataset is used to compare the automatically detected against the manually detected landmark locations of the AC, PC, midbrain-pons junction (MPJ), and fourth ventricle notch (VN4). The results show that the proposed method is accurate, as the average error between the automatically and manually labeled landmark points is less than 1 mm. The algorithm is also highly robust, as it was successfully run on a large dataset that included different kinds of images with various orientations, spacings, and origins.

  17. Intelligent and robust optimization frameworks for smart grids

    NASA Astrophysics Data System (ADS)

    Dhansri, Naren Reddy

    A smart grid implies a cyberspace real-time distributed power control system to optimally deliver electricity based on varying consumer characteristics. Although smart grids solve many contemporary problems, they give rise to new control and optimization problems with the growing role of renewable energy sources such as wind or solar energy. Given the highly dynamic nature of distributed power generation and the varying consumer demand and cost requirements, the total power output of the grid should be controlled such that the load demand is met while giving a higher priority to renewable energy sources. Hence, the power generated from renewable energy sources should be optimized while minimizing the generation from non-renewable energy sources. This research develops a demand-based automatic generation control and optimization framework for real-time smart grid operations by integrating conventional and renewable energy sources under varying consumer demand and cost requirements. Focusing on the renewable energy sources, the intelligent and robust control frameworks optimize the power generation by tracking the consumer demand in a closed-loop control framework, yielding superior economic and ecological benefits, circumventing nonlinear model complexities and handling uncertainties for superior real-time operation. The proposed intelligent system framework optimizes the smart grid power generation for maximum economic and ecological benefit under an uncertain renewable wind energy source. The numerical results demonstrate that the proposed framework is a viable approach to integrating various energy sources for real-time smart grid implementations. The robust optimization framework results demonstrate the effectiveness of the robust controllers under bounded power plant model uncertainties and exogenous wind input excitation while maximizing economic and ecological performance objectives. The proposed framework therefore offers a new worst-case deterministic optimization algorithm for smart grid automatic generation control.

  18. Robust fusion-based processing for military polarimetric imaging systems

    NASA Astrophysics Data System (ADS)

    Hickman, Duncan L.; Smith, Moira I.; Kim, Kyung Su; Choi, Hyun-Jin

    2017-05-01

    Polarisation information within a scene can be exploited in military systems to give enhanced automatic target detection and recognition (ATD/R) performance. However, the performance gain achieved is highly dependent on factors such as the geometry, viewing conditions, and the surface finish of the target. Such performance sensitivities are highly undesirable in many tactical military systems where operational conditions can vary significantly and rapidly during a mission. Within this paper, a range of processing architectures and fusion methods is considered in terms of their practical viability and operational robustness for systems requiring ATD/R. It is shown that polarisation information can give useful performance gains but, to retain system robustness, the introduction of polarimetric processing should be done in such a way as not to compromise other discriminatory scene information in the spectral and spatial domains. The analysis concludes that polarimetric data can be effectively integrated with conventional intensity-based ATD/R either by adapting the ATD/R processing function based on the scene polarisation or by detection-level fusion. Both of these approaches avoid the introduction of processing bottlenecks and limit the impact of processing on system latency.

  19. Multi-stage robust scheme for citrus identification from high resolution airborne images

    NASA Astrophysics Data System (ADS)

    Amorós-López, Julia; Izquierdo Verdiguier, Emma; Gómez-Chova, Luis; Muñoz-Marí, Jordi; Zoilo Rodríguez-Barreiro, Jorge; Camps-Valls, Gustavo; Calpe-Maravilla, Javier

    2008-10-01

    Identification of land cover types is one of the most critical activities in remote sensing. Nowadays, managing land resources using remote sensing techniques is becoming a common procedure to speed up the process while reducing costs. However, data analysis procedures should satisfy the accuracy figures demanded by institutions and governments for further administrative actions. This paper presents a methodological scheme to update the citrus Geographical Information System (GIS) of the Comunidad Valenciana autonomous region (Spain). The proposed approach introduces a multi-stage automatic scheme to reduce visual photointerpretation and ground validation tasks. First, an object-oriented feature extraction process is carried out for each cadastral parcel from very high spatial resolution (VHR) images (0.5 m) acquired in the visible and near infrared. Next, several automatic classifiers (decision trees, multilayer perceptron, and support vector machines) are trained and combined to improve the final accuracy of the results. The proposed strategy fulfills the high accuracy demanded by policy makers by combining automatic classification methods with the available visual photointerpretation resources. A level of confidence based on the agreement between classifiers allows effective management by fixing the number of parcels to be reviewed. The proposed methodology can be applied to similar problems and applications.
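
    A rough sketch of the classifier-combination idea with an agreement-based confidence, using scikit-learn on synthetic stand-in features (the classifier families follow those named above, but the data, feature set and review rule are invented for illustration):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for per-parcel spectral/textural descriptors.
X, y = make_classification(n_samples=600, n_features=12, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = [
    DecisionTreeClassifier(random_state=0),
    MLPClassifier(max_iter=1000, random_state=0),
    SVC(kernel="rbf", random_state=0),
]
predictions = np.array([clf.fit(X_tr, y_tr).predict(X_te) for clf in classifiers])

# Majority vote plus an agreement-based confidence per parcel:
# parcels where the classifiers disagree would be flagged for photointerpretation.
votes = (predictions.mean(axis=0) > 0.5).astype(int)
agreement = (predictions == votes).mean(axis=0)
needs_review = agreement < 1.0
print(f"accuracy={np.mean(votes == y_te):.3f}, parcels flagged={int(needs_review.sum())}")
```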

  20. A distributed automatic target recognition system using multiple low resolution sensors

    NASA Astrophysics Data System (ADS)

    Yue, Zhanfeng; Lakshmi Narasimha, Pramod; Topiwala, Pankaj

    2008-04-01

    In this paper, we propose a multi-agent system which uses swarming techniques to perform high accuracy Automatic Target Recognition (ATR) in a distributed manner. The proposed system can co-operatively share the information from low-resolution images of different looks and use this information to perform high accuracy ATR. An advanced, multiple-agent Unmanned Aerial Vehicle (UAV) systems-based approach is proposed which integrates the processing capabilities, combines detection reporting with live video exchange, and exhibits swarm behavior modalities that dramatically surpass individual sensor system performance levels. We employ a real-time block-based motion analysis and compensation scheme for efficient estimation and correction of camera jitter, global motion of the camera/scene, and the effects of atmospheric turbulence. Our optimized Partition Weighted Sum (PWS) approach requires only bitshifts and additions, yet achieves a 16X pixel resolution enhancement and is moreover parallelizable. We develop advanced, adaptive particle-filtering based algorithms to robustly track multiple mobile targets by adaptively changing the appearance model of the selected targets. The collaborative ATR system utilizes the homographies between the sensors induced by the ground plane to overlap the local observation with the images received from other UAVs. The motion of the UAVs distorts the estimated homography from frame to frame. A robust dynamic homography estimation algorithm is proposed to address this, using homography decomposition and ground plane surface estimation.

  1. Affordable non-traditional source data mining for context assessment to improve distributed fusion system robustness

    NASA Astrophysics Data System (ADS)

    Bowman, Christopher; Haith, Gary; Steinberg, Alan; Morefield, Charles; Morefield, Michael

    2013-05-01

    This paper describes methods to affordably improve the robustness of distributed fusion systems by opportunistically leveraging non-traditional data sources. Adaptive methods help find relevant data, create models, and characterize model quality. These methods can also measure the conformity of this non-traditional data with fusion system products, including situation modeling and mission impact prediction. Non-traditional data can improve the quantity, quality, availability, timeliness, and diversity of the baseline fusion system sources and can therefore improve prediction and estimation accuracy and robustness at all levels of fusion. Techniques are described that automatically learn to characterize and search non-traditional contextual data to enable operators to integrate the data with high-level fusion systems and ontologies. These techniques apply the extension of the Data Fusion & Resource Management Dual Node Network (DNN) technical architecture at Level 4. The DNN architecture supports effective assessment and management of the expanded portfolio of data sources, entities of interest, models, and algorithms, including data pattern discovery and context conformity. Affordable model-driven and data-driven data mining methods to discover unknown models from non-traditional and `big data' sources are used to automatically learn entity behaviors and correlations with fusion products [14, 15]. This paper describes our context assessment software development and demonstrates context assessment of non-traditional data compared against an intelligence, surveillance and reconnaissance fusion product based upon an IED POI workflow.

  2. High-order sliding-mode control for blood glucose regulation in the presence of uncertain dynamics.

    PubMed

    Hernández, Ana Gabriela Gallardo; Fridman, Leonid; Leder, Ron; Andrade, Sergio Islas; Monsalve, Cristina Revilla; Shtessel, Yuri; Levant, Arie

    2011-01-01

    The success of automatic blood glucose regulation depends on the robustness of the control algorithm used. It is a difficult task to perform due to the complexity of the glucose-insulin regulation system. The variety of existing models reflects the large number of phenomena involved in the process, and the inter-patient variability of the parameters represents another challenge. In this research a High-Order Sliding-Mode Control approach is proposed. It is applied to two well-known models, the Bergman Minimal Model and the Sorensen Model, to test its robustness with respect to uncertain dynamics and patients' parameter variability. The controller designed on the basis of the simulations is tested with the specific Bergman Minimal Model of a diabetic patient whose parameters were identified from an in vivo assay. To minimize the insulin infusion rate and avoid the risk of hypoglycemia, the glucose target is a dynamic profile.

  3. An advanced robust method for speed control of switched reluctance motor

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Ming, Zhengfeng; Su, Zhanping; Cai, Zhuang

    2018-05-01

    This paper presents an advanced robust controller for the speed system of a switched reluctance motor (SRM) in the presence of nonlinearities, speed ripple, and external disturbances. Adaptive fuzzy control is applied to regulate the motor speed in the outer loop, and a detector is used to obtain the rotor position in the inner loop. The new fuzzy logic tuning rules are derived from operator experience and specialist knowledge. The fuzzy parameters are automatically adjusted online according to the speed error and its rate of change during the transient period. The designed detector can obtain the rotor's position accurately in each phase module. Furthermore, a series of comparative simulations is carried out between the proposed controller and a proportional-integral-derivative controller at low, medium, and high speeds. Simulations show that the proposed robust controller reduces overshoot by at least 3%, rise time by 6%, and settling time by 20%, especially under external disturbances. Moreover, an actual SRM control system is constructed at 220 V and 370 W. The experimental results further prove that the proposed robust controller has excellent dynamic performance and strong robustness.

  4. Towards Automatically Detecting Whether Student Learning Is Shallow

    ERIC Educational Resources Information Center

    Gowda, Sujith M.; Baker, Ryan S.; Corbett, Albert T.; Rossi, Lisa M.

    2013-01-01

    Recent research has extended student modeling to infer not just whether a student knows a skill or set of skills, but also whether the student has achieved robust learning--learning that enables the student to transfer their knowledge and prepares them for future learning (PFL). However, a student may fail to have robust learning in two fashions:…

  5. Automatic processing of high-rate, high-density multibeam echosounder data

    NASA Astrophysics Data System (ADS)

    Calder, B. R.; Mayer, L. A.

    2003-06-01

    Multibeam echosounders (MBES) are currently the best way to determine the bathymetry of large regions of the seabed with high accuracy. They are becoming the standard instrument for hydrographic surveying and are also used in geological studies, mineral exploration and scientific investigation of the earth's crustal deformations and life cycle. The significantly increased data density provided by an MBES has significant advantages in accurately delineating the morphology of the seabed, but comes with the attendant disadvantage of having to handle and process a much greater volume of data. Current data processing approaches typically involve (computer aided) human inspection of all data, with time-consuming and subjective assessment of all data points. As data rates increase with each new generation of instrument and required turn-around times decrease, manual approaches become unwieldy and automatic methods of processing essential. We propose a new method for automatically processing MBES data that attempts to address concerns of efficiency, objectivity, robustness and accuracy. The method attributes each sounding with an estimate of vertical and horizontal error, and then uses a model of information propagation to transfer information about the depth from each sounding to its local neighborhood. Embedded in the survey area are estimation nodes that aim to determine the true depth at an absolutely defined location, along with its associated uncertainty. As soon as soundings are made available, the nodes independently assimilate propagated information to form depth hypotheses which are then tracked and updated on-line as more data is gathered. Consequently, we can extract at any time a "current-best" estimate for all nodes, plus co-located uncertainties and other metrics. The method can assimilate data from multiple surveys, multiple instruments or repeated passes of the same instrument in real-time as data is being gathered. The data assimilation scheme is sufficiently robust to deal with typical survey echosounder errors. Robustness is improved by pre-conditioning the data, and allowing the depth model to be incrementally defined. A model monitoring scheme ensures that inconsistent data are maintained as separate but internally consistent depth hypotheses. A disambiguation of these competing hypotheses is only carried out when required by the user. The algorithm has a low memory footprint, runs faster than data can currently be gathered, and is suitable for real-time use. We call this algorithm CUBE (Combined Uncertainty and Bathymetry Estimator). We illustrate CUBE on two data sets gathered in shallow water with different instruments and for different purposes. We show that the algorithm is robust to even gross failure modes, and reliably processes the vast majority of the data. In both cases, we confirm that the estimates made by CUBE are statistically similar to those generated by hand.
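
    The following toy sketch illustrates only the uncertainty-weighted assimilation idea at a single estimation node, not the actual CUBE algorithm: each sounding arrives with a vertical standard deviation and is merged by inverse-variance weighting (hypothesis tracking, horizontal information propagation and disambiguation are omitted, and the numbers are invented):

```python
import numpy as np

class DepthNode:
    """Toy estimation node: assimilates soundings (depth, vertical std. dev.)
    with an inverse-variance weighted update. The real CUBE additionally tracks
    multiple depth hypotheses and propagates information spatially."""
    def __init__(self):
        self.depth = None
        self.variance = None

    def assimilate(self, depth, sigma):
        var = sigma ** 2
        if self.depth is None:
            self.depth, self.variance = depth, var
            return
        gain = self.variance / (self.variance + var)
        self.depth += gain * (depth - self.depth)
        self.variance = self.variance * var / (self.variance + var)

node = DepthNode()
for d, s in [(25.3, 0.30), (25.1, 0.25), (26.9, 1.50), (25.2, 0.20)]:  # metres
    node.assimilate(d, s)
print(f"depth={node.depth:.2f} m, uncertainty={node.variance ** 0.5:.2f} m")
```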

  6. Kevlar based nanofibrous particles as robust, effective and recyclable absorbents for water purification.

    PubMed

    Nie, Chuanxiong; Peng, Zihang; Yang, Ye; Cheng, Chong; Ma, Lang; Zhao, Changsheng

    2016-11-15

    Developing robust and recyclable absorbents for water purification is in great demand to control water pollution and to provide sustainable water resources. Herein, for the first time, we report the fabrication of Kevlar nanofiber (KNF) based composite particles for water purification. Both the KNF and KNF-carbon nanotube composite particles can be produced on a large scale by automatic injection of the casting solution into ethanol. The resulting nanofibrous particles showed high adsorption capacities towards various pollutants, including metal ions, phenylic compounds and various dyes. Meanwhile, the adsorption process towards dyes was found to fit well with the pseudo-second-order model, while the adsorption speed was controlled by intraparticle diffusion. Furthermore, the adsorption capacities of the nanofibrous particles could be easily recovered by washing with ethanol. In general, the KNF based particles integrate the advantages of easy production, robust and effective adsorption performance, and good recyclability; they can be used as robust absorbents to remove toxic molecules and advance the application of absorbents in water purification. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Peaks Over Threshold (POT): A methodology for automatic threshold estimation using goodness of fit p-value

    NASA Astrophysics Data System (ADS)

    Solari, Sebastián.; Egüen, Marta; Polo, María. José; Losada, Miguel A.

    2017-04-01

    Threshold estimation in the Peaks Over Threshold (POT) method and the impact of the estimation method on the calculation of high return period quantiles and their uncertainty (or confidence intervals) are issues that are still unresolved. In the past, methods based on goodness of fit tests and EDF-statistics have yielded satisfactory results, but their use has not yet been systematized. This paper proposes a methodology for automatic threshold estimation, based on the Anderson-Darling EDF-statistic and goodness of fit test. When combined with bootstrapping techniques, this methodology can be used to quantify both the uncertainty of threshold estimation and its impact on the uncertainty of high return period quantiles. This methodology was applied to several simulated series and to four precipitation/river flow data series. The results obtained confirmed its robustness. For the measured series, the estimated thresholds corresponded to those obtained by nonautomatic methods. Moreover, even though the uncertainty of the threshold estimation was high, this did not have a significant effect on the width of the confidence intervals of high return period quantiles.
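
    A simplified sketch of automatic threshold selection from goodness-of-fit p-values, assuming exceedances are tested against a fitted generalized Pareto distribution; a Kolmogorov-Smirnov test stands in here for the Anderson-Darling statistic and bootstrap procedure used by the authors, and the candidate grid, acceptance level and synthetic series are illustrative:

```python
import numpy as np
from scipy import stats

def automatic_pot_threshold(series, candidate_thresholds, alpha=0.10, min_exceedances=30):
    """Return the lowest candidate threshold whose exceedances pass a
    goodness-of-fit test against a fitted generalized Pareto distribution."""
    for u in np.sort(candidate_thresholds):
        exceedances = series[series > u] - u
        if exceedances.size < min_exceedances:
            continue
        c, loc, scale = stats.genpareto.fit(exceedances, floc=0.0)
        p_value = stats.kstest(exceedances, "genpareto", args=(c, loc, scale)).pvalue
        if p_value >= alpha:
            return u, p_value
    return None, None

rng = np.random.default_rng(1)
daily_rain = rng.gamma(shape=0.6, scale=8.0, size=10_000)  # synthetic series
candidates = np.quantile(daily_rain, np.linspace(0.80, 0.99, 20))
print(automatic_pot_threshold(daily_rain, candidates))
```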

  8. Automatic computation of 2D cardiac measurements from B-mode echocardiography

    NASA Astrophysics Data System (ADS)

    Park, JinHyeong; Feng, Shaolei; Zhou, S. Kevin

    2012-03-01

    We propose a robust and fully automatic algorithm which computes the 2D echocardiography measurements recommended by the American Society of Echocardiography. The algorithm employs knowledge-based imaging technologies which learn the expert's knowledge from training images and expert annotations. Based on the models constructed in the learning stage, the algorithm searches for the initial locations of the landmark points for the measurements by utilizing the structure of the left ventricle, including the mitral valve and aortic valve. It employs a pseudo anatomic M-mode image, generated by accumulating the line images in the 2D parasternal long-axis view over time, to refine the measurement landmark points. Experimental results on a large volume of data show that the algorithm runs fast and is robust, with accuracy comparable to that of an expert.

  9. A VxD-based automatic blending system using multithreaded programming.

    PubMed

    Wang, L; Jiang, X; Chen, Y; Tan, K C

    2004-01-01

    This paper discusses the object-oriented software design of an automatic blending system. By combining the advantages of a programmable logic controller (PLC) and an industrial control PC (ICPC), an automatic blending control system is developed for a chemical plant. The system structure and the multithread-based communication approach are first presented in this paper. The overall software design issues, such as system requirements and functionalities, are then discussed in detail. Furthermore, by replacing the conventional dynamic link library (DLL) with virtual device drivers (VxDs), a practical and cost-effective solution is provided to improve the robustness of the Windows platform-based automatic blending system in small- and medium-sized plants.

  10. Experimental circular quantum secret sharing over telecom fiber network.

    PubMed

    Wei, Ke-Jin; Ma, Hai-Qiang; Yang, Jian-Hui

    2013-07-15

    We present a robust single-photon circular quantum secret sharing (QSS) scheme with phase encoding over a 50 km single-mode fiber network using a circular QSS protocol. Our scheme can automatically provide perfect compensation of birefringence and remains stable for a long time. A high visibility of 99.3% is obtained. Furthermore, our scheme employs polarization-insensitive phase modulators. The visibility of this system can be maintained without any adjustment to the system each time it is tested.

  11. Automatic segmentation of vessels in in-vivo ultrasound scans

    NASA Astrophysics Data System (ADS)

    Tamimi-Sarnikowski, Philip; Brink-Kjær, Andreas; Moshavegh, Ramin; Arendt Jensen, Jørgen

    2017-03-01

    Ultrasound has become a popular modality for monitoring atherosclerosis by scanning the carotid artery. The screening involves measuring the thickness of the vessel wall and the diameter of the lumen. An automatic segmentation of the vessel lumen can enable the determination of the lumen diameter. This paper presents a fully automatic segmentation algorithm for robustly segmenting the vessel lumen in longitudinal B-mode ultrasound images. The automatic segmentation is performed using a combination of B-mode and power Doppler images. The proposed algorithm includes a series of preprocessing steps and performs vessel segmentation using the marker-controlled watershed transform. The ultrasound images used in the study were acquired using the bk3000 ultrasound scanner (BK Ultrasound, Herlev, Denmark) with two transducers, "8L2 Linear" and "10L2w Wide Linear" (BK Ultrasound, Herlev, Denmark). The algorithm was evaluated empirically and applied to a dataset of 1770 in-vivo images recorded from 8 healthy subjects. The segmentation results were compared to manual delineations performed by two experienced users. The results showed a sensitivity and specificity of 90.41+/-11.2% and 97.93+/-5.7% (mean+/-standard deviation), respectively. The overlap between the automatic and manual segmentations, measured by the Dice similarity coefficient, was 91.25+/-11.6%. The empirical results demonstrate the feasibility of segmenting the vessel lumen in ultrasound scans using a fully automatic algorithm.
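
    A rough sketch of marker-controlled watershed segmentation with scikit-image on a synthetic B-mode-like frame; the intensity-threshold markers and preprocessing below are simplified assumptions (in the paper, power Doppler also contributes to seeding the lumen):

```python
import numpy as np
from skimage.filters import gaussian, sobel
from skimage.segmentation import watershed

# Synthetic stand-in for a longitudinal B-mode frame: a dark (hypoechoic)
# lumen band between brighter wall echoes.
rng = np.random.default_rng(0)
img = rng.normal(0.6, 0.1, size=(200, 400))
img[80:120, :] = rng.normal(0.15, 0.05, size=(40, 400))
img = gaussian(img, sigma=2)

# Markers: confident lumen (very dark) and confident tissue (bright).
markers = np.zeros(img.shape, dtype=int)
markers[img < 0.25] = 1   # lumen seed
markers[img > 0.55] = 2   # tissue seed

# Watershed on the gradient magnitude, guided by the markers.
labels = watershed(sobel(img), markers)
lumen_mask = labels == 1
print("lumen pixels:", int(lumen_mask.sum()))
```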

  12. An automatic method to detect and track the glottal gap from high speed videoendoscopic images.

    PubMed

    Andrade-Miranda, Gustavo; Godino-Llorente, Juan I; Moro-Velázquez, Laureano; Gómez-García, Jorge Andrés

    2015-10-29

    The image-based analysis of vocal fold vibration plays an important role in the diagnosis of voice disorders. The analysis is based not only on direct observation of the video sequences, but also on an objective characterization of the phonation process by means of features extracted from the recorded images. However, such analysis relies on a prior, accurate identification of the glottal gap, which is the most challenging step for any further automatic assessment of vocal fold vibration. In this work, a complete framework to automatically segment and track the glottal area (or glottal gap) is proposed. The algorithm identifies a region of interest (ROI) that is adapted over time and combines active contours and the watershed transform for the final delineation of the glottis; an automatic procedure for synthesizing different videokymograms is also proposed. Thanks to the adaptive ROI, the technique is robust to camera shifting, and objective tests proved the effectiveness and performance of the approach in the most challenging scenario, namely when the vocal folds close incompletely. The novelty of the proposed algorithm lies in the use of temporal information to identify an adaptive ROI and in the combination of watershed merging with active contours for glottis delineation. Additionally, an automatic procedure for synthesizing multiline videokymograms (VKG) by identifying the glottal main axis is developed.

  13. Computer-operated analytical platform for the determination of nutrients in hydroponic systems.

    PubMed

    Rius-Ruiz, F Xavier; Andrade, Francisco J; Riu, Jordi; Rius, F Xavier

    2014-03-15

    Hydroponics is a water-, energy-, space-, and cost-efficient system for growing plants in constrained spaces or exhausted land. Precise control of hydroponic nutrients is essential for growing healthy plants and producing high yields. In this article we report for the first time on a new computer-operated analytical platform which can be readily used for the determination of essential nutrients in hydroponic growing systems. The liquid-handling system uses inexpensive components (i.e., a peristaltic pump and solenoid valves), which are discretely computer-operated to automatically condition, calibrate and clean a multi-probe of solid-contact ion-selective electrodes (ISEs). These ISEs, which are based on carbon nanotubes, offer high portability, robustness and easy maintenance and storage. With this new computer-operated analytical platform we performed automatic measurements of K(+), Ca(2+), NO3(-) and Cl(-) during tomato plant growth in order to assure optimal nutritional uptake and tomato production. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Determination of rifampicin in human plasma by high-performance liquid chromatography coupled with ultraviolet detection after automatized solid-liquid extraction.

    PubMed

    Louveau, B; Fernandez, C; Zahr, N; Sauvageon-Martre, H; Maslanka, P; Faure, P; Mourah, S; Goldwirt, L

    2016-12-01

    A precise and accurate high-performance liquid chromatography (HPLC) method for the quantification of rifampicin in human plasma was developed and validated, using ultraviolet detection after automated solid-phase extraction. The method was validated with respect to selectivity, extraction recovery, linearity, intra- and inter-day precision, accuracy, lower limit of quantification and stability. Chromatographic separation was performed on a Chromolith RP 8 column using a mixture of 0.05 M acetate buffer (pH 5.7) and acetonitrile (35:65, v/v) as the mobile phase. The compounds were detected at a wavelength of 335 nm, with a lower limit of quantification of 0.05 mg/L in human plasma. Retention times for rifampicin and for 6,7-dimethyl-2,3-di(2-pyridyl)quinoxaline, used as internal standard, were 3.77 and 4.81 min, respectively. This robust and accurate method was successfully applied in routine practice for therapeutic drug monitoring in patients treated with rifampicin. Copyright © 2016 John Wiley & Sons, Ltd.

  15. An automatic multigrid method for the solution of sparse linear systems

    NASA Technical Reports Server (NTRS)

    Shapira, Yair; Israeli, Moshe; Sidi, Avram

    1993-01-01

    An automatic version of the multigrid method for the solution of linear systems arising from the discretization of elliptic PDE's is presented. This version is based on the structure of the algebraic system solely, and does not use the original partial differential operator. Numerical experiments show that for the Poisson equation the rate of convergence of our method is equal to that of classical multigrid methods. Moreover, the method is robust in the sense that its high rate of convergence is conserved for other classes of problems: non-symmetric, hyperbolic (even with closed characteristics) and problems on non-uniform grids. No double discretization or special treatment of sub-domains (e.g. boundaries) is needed. When supplemented with a vector extrapolation method, high rates of convergence are achieved also for anisotropic and discontinuous problems and also for indefinite Helmholtz equations. A new double discretization strategy is proposed for finite and spectral element schemes and is found better than known strategies.

  16. A software architecture for hard real-time execution of automatically synthesized plans or control laws

    NASA Technical Reports Server (NTRS)

    Schoppers, Marcel

    1994-01-01

    The design of a flexible, real-time software architecture for trajectory planning and automatic control of redundant manipulators is described. Emphasis is placed on a technique of designing control systems that are both flexible and robust yet have good real-time performance. The solution presented involves an artificial intelligence algorithm that dynamically reprograms the real-time control system while planning system behavior.

  17. Automatic selection of arterial input function using tri-exponential models

    NASA Astrophysics Data System (ADS)

    Yao, Jianhua; Chen, Jeremy; Castro, Marcelo; Thomasson, David

    2009-02-01

    Dynamic Contrast Enhanced MRI (DCE-MRI) is one method for drug and tumor assessment. Selecting a consistent arterial input function (AIF) is necessary to calculate tissue and tumor pharmacokinetic parameters in DCE-MRI. This paper presents an automatic and robust method to select the AIF. The first stage is artery detection and segmentation, which employs knowledge about artery structure and the temporal properties of DCE-MRI signal intensity. The second stage is AIF model fitting and selection. A tri-exponential model is fitted to every candidate AIF using the Levenberg-Marquardt method, and the best-fitted AIF is selected. Our method has been applied to DCE-MRI of four different body parts: breast, brain, liver and prostate. The success rate in artery segmentation for 19 cases was 89.6%+/-15.9%. The pharmacokinetic parameters computed from the automatically selected AIFs are highly correlated with those from manually determined AIFs (R2=0.946, P(T<=t)=0.09).
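
    A minimal sketch of fitting a tri-exponential model to a candidate AIF curve with SciPy's Levenberg-Marquardt solver; the model parameterization, time axis and noise level are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np
from scipy.optimize import curve_fit

def tri_exponential(t, a1, a2, a3, b1, b2, b3):
    """Illustrative tri-exponential AIF model: sum of three decaying terms."""
    return a1 * np.exp(-b1 * t) + a2 * np.exp(-b2 * t) + a3 * np.exp(-b3 * t)

t = np.linspace(0, 5, 120)                                 # minutes
true_curve = tri_exponential(t, 6.0, 1.5, 0.8, 3.0, 0.6, 0.08)
signal = true_curve + np.random.default_rng(0).normal(0, 0.05, t.size)  # candidate AIF

# method="lm" selects Levenberg-Marquardt (unconstrained least squares).
popt, _ = curve_fit(tri_exponential, t, signal,
                    p0=[5, 1, 1, 2, 0.5, 0.1], method="lm", maxfev=10000)
residual = np.sum((tri_exponential(t, *popt) - signal) ** 2)
print(popt, residual)  # the best-fitted candidate would be kept as the AIF
```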

  18. Automatic segmentation of left ventricle in cardiac cine MRI images based on deep learning

    NASA Astrophysics Data System (ADS)

    Zhou, Tian; Icke, Ilknur; Dogdas, Belma; Parimal, Sarayu; Sampath, Smita; Forbes, Joseph; Bagchi, Ansuman; Chin, Chih-Liang; Chen, Antong

    2017-02-01

    In developing treatment of cardiovascular diseases, short axis cine MRI has been used as a standard technique for understanding the global structural and functional characteristics of the heart, e.g. ventricle dimensions, stroke volume and ejection fraction. To conduct an accurate assessment, heart structures need to be segmented from the cine MRI images with high precision, which could be a laborious task when performed manually. Herein a fully automatic framework is proposed for the segmentation of the left ventricle from the slices of short axis cine MRI scans of porcine subjects using a deep learning approach. For training the deep learning models, which generally requires a large set of data, a public database of human cine MRI scans is used. Experiments on the 3150 cine slices of 7 porcine subjects have shown that when comparing the automatic and manual segmentations the mean slice-wise Dice coefficient is about 0.930, the point-to-curve error is 1.07 mm, and the mean slice-wise Hausdorff distance is around 3.70 mm, which demonstrates the accuracy and robustness of the proposed inter-species translational approach.
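
    For reference, the slice-wise Dice coefficient and Hausdorff distance quoted above can be computed for a pair of binary masks roughly as follows (toy masks and an assumed isotropic pixel spacing; the point-to-curve error is omitted):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(a, b):
    """Dice similarity between two boolean masks."""
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

def hausdorff_distance_mm(a, b, pixel_spacing_mm=1.0):
    """Symmetric Hausdorff distance between the foreground point sets,
    assuming isotropic in-plane spacing."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
    return d * pixel_spacing_mm

auto = np.zeros((128, 128), bool); auto[40:90, 40:90] = True      # toy automatic mask
manual = np.zeros((128, 128), bool); manual[42:92, 38:88] = True  # toy manual mask
print(dice_coefficient(auto, manual), hausdorff_distance_mm(auto, manual, 1.37))
```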

  19. Automated quadrilateral surface discretization method and apparatus usable to generate mesh in a finite element analysis system

    DOEpatents

    Blacker, Teddy D.

    1994-01-01

    An automatic quadrilateral surface discretization method and apparatus is provided for automatically discretizing a geometric region without decomposing the region. The automated quadrilateral surface discretization method and apparatus automatically generates a mesh of all quadrilateral elements which is particularly useful in finite element analysis. The generated mesh of all quadrilateral elements is boundary sensitive, orientation insensitive and has few irregular nodes on the boundary. A permanent boundary of the geometric region is input and rows are iteratively layered toward the interior of the geometric region. Also, an exterior permanent boundary and an interior permanent boundary for a geometric region may be input and the rows are iteratively layered inward from the exterior boundary in a first counter clockwise direction while the rows are iteratively layered from the interior permanent boundary toward the exterior of the region in a second clockwise direction. As a result, a high quality mesh for an arbitrary geometry may be generated with a technique that is robust and fast for complex geometric regions and extreme mesh gradations.

  20. Online automatic tuning and control for fed-batch cultivation

    PubMed Central

    van Straten, Gerrit; van der Pol, Leo A.; van Boxtel, Anton J. B.

    2007-01-01

    Performance of controllers applied in biotechnological production is often below expectation. Online automatic tuning has the capability to improve control performance by adjusting control parameters. This work presents automatic tuning approaches for model reference specific growth rate control during fed-batch cultivation. The approaches are direct methods that use the error between observed specific growth rate and its set point; systematic perturbations of the cultivation are not necessary. Two automatic tuning methods proved to be efficient, in which the adaptation rate is based on a combination of the error, squared error and integral error. These methods are relatively simple and robust against disturbances, parameter uncertainties, and initialization errors. Application of the specific growth rate controller yields a stable system. The controller and automatic tuning methods are qualified by simulations and laboratory experiments with Bordetella pertussis. PMID:18157554
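
    Purely as an illustration of a direct adaptation law whose increment combines the error, a squared-error term and the integral error, the following toy loop adapts a single controller gain against an invented first-order growth response; the constants, plant model and exact structure are assumptions, not the published tuning rules:

```python
def simulate(k_p=0.5, gamma=(0.02, 0.01, 0.005), mu_set=0.1, dt=0.1, steps=600):
    """Toy direct tuning of a specific-growth-rate controller gain."""
    mu, int_err = 0.0, 0.0
    for _ in range(steps):
        err = mu_set - mu
        int_err += err * dt
        # Adaptation increment built from error, a signed squared-error term
        # and the integral error (illustrative combination).
        k_p += dt * (gamma[0] * err + gamma[1] * err * abs(err) + gamma[2] * int_err)
        feed = max(0.0, k_p * err)
        mu += dt * (0.8 * feed - 0.3 * mu)   # invented first-order growth response
    return mu, k_p

print(simulate())  # final specific growth rate and adapted gain
```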

  1. Automatic SAR/optical cross-matching for GCP monograph generation

    NASA Astrophysics Data System (ADS)

    Nutricato, Raffaele; Morea, Alberto; Nitti, Davide Oscar; La Mantia, Claudio; Agrimano, Luigi; Samarelli, Sergio; Chiaradia, Maria Teresa

    2016-10-01

    Ground Control Points (GCP), automatically extracted from Synthetic Aperture Radar (SAR) images through 3D stereo analysis, can be effectively exploited for automatic orthorectification of optical imagery if they can be robustly located in the basic optical images. The present study outlines a SAR/optical cross-matching procedure that allows a robust alignment of radar and optical images and, consequently, automatic derivation of the corresponding sub-pixel position of the GCPs in the input optical image, expressed as fractional pixel/line image coordinates. The cross-matching is performed in two subsequent steps, in order to gradually achieve better precision. The first step is based on Mutual Information (MI) maximization between optical and SAR chips, while the second uses Normalized Cross-Correlation as the similarity metric. This work outlines the designed algorithmic solution and discusses the results derived over the urban area of Pisa (Italy), where more than ten COSMO-SkyMed Enhanced Spotlight stereo images with different beams and passes are available. The experimental analysis involves different satellite images, in order to evaluate the performance of the algorithm with respect to the optical spatial resolution. An assessment of the performance of the algorithm has been carried out, with errors computed by measuring the distance between the GCP pixel/line position in the optical image automatically estimated by the tool and the "true" position of the GCP visually identified by an expert user in the optical images.
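
    A small sketch of the two similarity metrics named above, computed between equally sized optical and SAR chips (joint-histogram mutual information, then zero-mean normalized cross-correlation); chip extraction, the search strategy and sub-pixel refinement are omitted, and the test chips are synthetic:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI of two equally sized image chips, from their joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def normalized_cross_correlation(a, b):
    """Zero-mean NCC of two equally sized chips."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
optical_chip = rng.random((64, 64))
sar_chip = 0.7 * optical_chip + 0.3 * rng.random((64, 64))  # partly related chip
print(mutual_information(optical_chip, sar_chip),
      normalized_cross_correlation(optical_chip, sar_chip))
```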

  2. Object Detection from MMS Imagery Using Deep Learning for Generation of Road Orthophotos

    NASA Astrophysics Data System (ADS)

    Li, Y.; Sakamoto, M.; Shinohara, T.; Satoh, T.

    2018-05-01

    In recent years, extensive research has been conducted to automatically generate high-accuracy and high-precision road orthophotos using images and laser point cloud data acquired from a mobile mapping system (MMS). However, it is necessary to mask out non-road objects such as vehicles, bicycles, pedestrians and their shadows in MMS images in order to eliminate erroneous textures from the road orthophoto. Hence, we propose a novel vehicle-and-shadow detection model based on Faster R-CNN for automatically and accurately detecting the regions of vehicles and their shadows in MMS images. The experimental results show that the maximum recall of the proposed model was high - 0.963 (intersection-over-union > 0.7) - and the model could identify the regions of vehicles and their shadows accurately and robustly from MMS images, even when they contain varied vehicles, different shadow directions, and partial occlusions. Furthermore, it was confirmed that the quality of the road orthophotos generated using vehicle-and-shadow masks was significantly improved compared to those generated using no masks or vehicle masks only.
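
    For clarity, the intersection-over-union criterion behind the recall figure above can be computed for axis-aligned boxes as follows (the box coordinates are illustrative; the 0.7 threshold matches the one quoted):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    intersection = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return intersection / (area_a + area_b - intersection)

# A detection counts toward recall only if it overlaps a ground-truth
# vehicle/shadow region with IoU > 0.7.
detection = (100, 50, 220, 140)
ground_truth = (110, 55, 230, 150)
print(iou(detection, ground_truth) > 0.7)
```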

  3. Nonlinear Krylov and moving nodes in the method of lines

    NASA Astrophysics Data System (ADS)

    Miller, Keith

    2005-11-01

    We report on some successes and problem areas in the Method of Lines from our work with moving node finite element methods. First, we report on our "nonlinear Krylov accelerator" for the modified Newton's method on the nonlinear equations of our stiff ODE solver. Since 1990 it has been robust, simple, cheap, and automatic on all our moving node computations. We publicize further trials with it here because it should be of great general usefulness to all those solving evolutionary equations. Second, we discuss the need for reliable automatic choice of spatially variable time steps. Third, we discuss the need for robust and efficient iterative solvers for the difficult linearized equations (Jx=b) of our stiff ODE solver. Here, the 1997 thesis of Zulu Xaba has made significant progress.

  4. Robust estimation of fetal heart rate from US Doppler signals

    NASA Astrophysics Data System (ADS)

    Voicu, Iulian; Girault, Jean-Marc; Roussel, Catherine; Decock, Aliette; Kouame, Denis

    2010-01-01

    Introduction: In utero monitoring of fetal wellbeing or distress is today an open challenge, due to the high number of clinical parameters to be considered. Automatic monitoring of fetal activity, dedicated to quantifying fetal wellbeing, therefore becomes necessary. For this purpose, and with a view to providing an alternative to the Manning test, we used an ultrasound multitransducer multigate Doppler system. One important issue (and the first step in our investigation) is the accurate estimation of the fetal heart rate (FHR). An estimate of the FHR is obtained by evaluating the autocorrelation function of the Doppler signals for both healthy and at-risk fetuses. However, this estimator is not robust enough, since about 20% of FHR values are not detected in comparison with a reference system. These non-detections are principally due to the fact that the Doppler signal generated by fetal movement is strongly disturbed by several other Doppler sources (maternal movement, pseudo-breathing, etc.). By modifying the existing autocorrelation method and by proposing new time- and frequency-domain estimators borrowed from the audio domain, we reduce the probability of non-detection of the fetal heart rate to 5%. These results are encouraging and allow us to plan the use of automatic classification techniques in order to discriminate between healthy and distressed fetuses.
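
    A simplified sketch of the baseline autocorrelation estimator: the heart rate is taken from the autocorrelation peak of a Doppler envelope within a plausible lag range. The synthetic envelope, sampling rate and bpm bounds are assumptions for illustration, and none of the multigate or audio-domain refinements are included:

```python
import numpy as np

def fhr_from_autocorrelation(envelope, fs, min_bpm=90, max_bpm=240):
    """Estimate heart rate (bpm) as the lag of the autocorrelation peak
    within a physiologically plausible range."""
    x = envelope - envelope.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]      # non-negative lags
    lag_min = int(fs * 60.0 / max_bpm)
    lag_max = int(fs * 60.0 / min_bpm)
    best_lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return 60.0 * fs / best_lag

fs = 100.0                                                 # envelope sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
envelope = 1 + 0.5 * np.cos(2 * np.pi * 2.3 * t)           # ~138 bpm beat
envelope += 0.3 * np.random.default_rng(0).normal(size=t.size)  # disturbances
print(round(fhr_from_autocorrelation(envelope, fs), 1))
```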

  5. A system for learning statistical motion patterns.

    PubMed

    Hu, Weiming; Xiao, Xuejuan; Fu, Zhouyu; Xie, Dan; Tan, Tieniu; Maybank, Steve

    2006-09-01

    Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns which reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast accurate fuzzy K-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information and then each motion pattern is represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of algorithms for anomaly detection and behavior prediction.

  6. Multiple sclerosis lesion segmentation using an automatic multimodal graph cuts.

    PubMed

    García-Lorenzo, Daniel; Lecoeur, Jeremy; Arnold, Douglas L; Collins, D Louis; Barillot, Christian

    2009-01-01

    Graph cuts have been shown to be a powerful interactive segmentation technique in several medical domains. We propose to automate the graph cuts approach in order to automatically segment Multiple Sclerosis (MS) lesions in MRI. We replace the manual interaction with a robust EM-based approach in order to discriminate between MS lesions and the Normal Appearing Brain Tissues (NABT). Evaluation is performed on synthetic and real images, showing good agreement between the automatic segmentation and the target segmentation. We compare our algorithm with state-of-the-art techniques and with several manual segmentations. An advantage of our algorithm over previously published ones is the possibility of semi-automatically improving the segmentation thanks to the interactive feature of graph cuts.

  7. Peak Detection Method Evaluation for Ion Mobility Spectrometry by Using Machine Learning Approaches

    PubMed Central

    Hauschild, Anne-Christin; Kopczynski, Dominik; D’Addario, Marianna; Baumbach, Jörg Ingo; Rahmann, Sven; Baumbach, Jan

    2013-01-01

    Ion mobility spectrometry with pre-separation by multi-capillary columns (MCC/IMS) has become an established inexpensive, non-invasive bioanalytics technology for detecting volatile organic compounds (VOCs) with various metabolomics applications in medical research. To pave the way for this technology towards daily usage in medical practice, different steps still have to be taken. With respect to modern biomarker research, one of the most important tasks is the automatic classification of patient-specific data sets into different groups, healthy or not, for instance. Although sophisticated machine learning methods exist, an inevitable preprocessing step is reliable and robust peak detection without manual intervention. In this work we evaluate four state-of-the-art approaches for automated IMS-based peak detection: local maxima search, watershed transformation with IPHEx, region-merging with VisualNow, and peak model estimation (PME). We manually generated a gold standard with the aid of a domain expert (manual) and compare the performance of the four peak calling methods with respect to two distinct criteria. We first utilize established machine learning methods and systematically study their classification performance based on the four peak detectors’ results. Second, we investigate the classification variance and robustness regarding perturbation and overfitting. Our main finding is that the power of the classification accuracy is almost equally good for all methods, the manually created gold standard as well as the four automatic peak finding methods. In addition, we note that all tools, manual and automatic, are similarly robust against perturbations. However, the classification performance is more robust against overfitting when using the PME as peak calling preprocessor. In summary, we conclude that all methods, though small differences exist, are largely reliable and enable a wide spectrum of real-world biomedical applications. PMID:24957992

  8. Peak detection method evaluation for ion mobility spectrometry by using machine learning approaches.

    PubMed

    Hauschild, Anne-Christin; Kopczynski, Dominik; D'Addario, Marianna; Baumbach, Jörg Ingo; Rahmann, Sven; Baumbach, Jan

    2013-04-16

    Ion mobility spectrometry with pre-separation by multi-capillary columns (MCC/IMS) has become an established inexpensive, non-invasive bioanalytics technology for detecting volatile organic compounds (VOCs) with various metabolomics applications in medical research. To pave the way for this technology towards daily usage in medical practice, different steps still have to be taken. With respect to modern biomarker research, one of the most important tasks is the automatic classification of patient-specific data sets into different groups, healthy or not, for instance. Although sophisticated machine learning methods exist, an inevitable preprocessing step is reliable and robust peak detection without manual intervention. In this work we evaluate four state-of-the-art approaches for automated IMS-based peak detection: local maxima search, watershed transformation with IPHEx, region-merging with VisualNow, and peak model estimation (PME). We manually generated a gold standard with the aid of a domain expert (manual) and compare the performance of the four peak calling methods with respect to two distinct criteria. We first utilize established machine learning methods and systematically study their classification performance based on the four peak detectors' results. Second, we investigate the classification variance and robustness regarding perturbation and overfitting. Our main finding is that the power of the classification accuracy is almost equally good for all methods, the manually created gold standard as well as the four automatic peak finding methods. In addition, we note that all tools, manual and automatic, are similarly robust against perturbations. However, the classification performance is more robust against overfitting when using the PME as peak calling preprocessor. In summary, we conclude that all methods, though small differences exist, are largely reliable and enable a wide spectrum of real-world biomedical applications.

  9. Automatic allograft bone selection through band registration and its application to distal femur.

    PubMed

    Zhang, Yu; Qiu, Lei; Li, Fengzan; Zhang, Qing; Zhang, Li; Niu, Xiaohui

    2017-09-01

    Clinical reports suggest that large bone defects can be effectively restored by allograft bone transplantation, in which allograft bone selection plays an important role. In addition, there is strong demand for automatic allograft bone selection methods, as they could greatly improve the management efficiency of large bone banks. Although several automatic methods have been presented to select the most suitable allograft bone from a massive allograft bone bank, these methods still suffer from inaccuracy. In this paper, we propose an effective allograft bone selection method that does not use the contralateral bones. First, the allograft bone is globally aligned to the recipient bone by surface registration. Then, the global alignment is further refined through band registration. The band, defined as the recipient points lying between the lifted and lowered cutting planes, can capture more of the local structure of the defective segment. Therefore, our method achieves robust alignment and high registration accuracy between allograft and recipient. Moreover, the existing contour method and surface method can be unified into one framework under our method by adjusting the lifting and lowering distances of the cutting planes. Finally, our method has been validated on a database of distal femurs. The experimental results indicate that our method outperforms the surface method and the contour method.

  10. Automatic segmentation of the bone and extraction of the bone cartilage interface from magnetic resonance images of the knee

    NASA Astrophysics Data System (ADS)

    Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K.; Ourselin, Sébastien

    2007-03-01

    The accurate segmentation of the articular cartilages from magnetic resonance (MR) images of the knee is important for clinical studies and drug trials into conditions like osteoarthritis. Currently, segmentations are obtained using time-consuming manual or semi-automatic algorithms which have high inter- and intra-observer variabilities. This paper presents an important step towards obtaining automatic and accurate segmentations of the cartilages, namely an approach to automatically segment the bones and extract the bone-cartilage interfaces (BCI) in the knee. The segmentation is performed using three-dimensional active shape models, which are initialized using an affine registration to an atlas. The BCI are then extracted using image information and prior knowledge about the likelihood of each point belonging to the interface. The accuracy and robustness of the approach was experimentally validated using an MR database of fat suppressed spoiled gradient recall images. The (femur, tibia, patella) bone segmentation had a median Dice similarity coefficient of (0.96, 0.96, 0.89) and an average point-to-surface error of 0.16 mm on the BCI. The extracted BCI had a median surface overlap of 0.94 with the real interface, demonstrating its usefulness for subsequent cartilage segmentation or quantitative analysis.

  11. DALMATIAN: An Algorithm for Automatic Cell Detection and Counting in 3D.

    PubMed

    Shuvaev, Sergey A; Lazutkin, Alexander A; Kedrov, Alexander V; Anokhin, Konstantin V; Enikolopov, Grigori N; Koulakov, Alexei A

    2017-01-01

    Current 3D imaging methods, including optical projection tomography, light-sheet microscopy, block-face imaging, and serial two-photon tomography, enable visualization of large samples of biological tissue. Large volumes of data obtained at high resolution require the development of automatic image processing techniques, such as algorithms for automatic cell detection or, more generally, point-like object detection. Current approaches to automated cell detection suffer from difficulties in detecting particular cell types and in handling cell populations of different brightness, non-uniform staining, and overlapping cells. In this study, we present a set of algorithms for robust automatic cell detection in 3D. Our algorithms are suitable for, but not limited to, whole brain regions and individual brain sections. We used a watershed procedure to split regional maxima representing overlapping cells. We developed a bootstrap Gaussian fit procedure to evaluate the statistical significance of detected cells. We compared the cell detection quality of our algorithm and other software using 42 samples representing 6 staining and imaging techniques. The results provided by our algorithm matched manual expert quantification with signal-to-noise dependent confidence, including samples with cells of different brightness, non-uniform staining, and overlapping cells, for whole brain regions and individual tissue sections. Our algorithm provided the best cell detection quality among the tested free and commercial software.

  12. Automatic coronary artery segmentation based on multi-domains remapping and quantile regression in angiographies.

    PubMed

    Li, Zhixun; Zhang, Yingtao; Gong, Huiling; Li, Weimin; Tang, Xianglong

    2016-12-01

    Coronary artery disease has become one of the most dangerous diseases to human life, and coronary artery segmentation is the basis of computer-aided diagnosis and analysis. Existing segmentation methods struggle to handle the complex vascular texture caused by the projective nature of conventional coronary angiography. Due to the large amount of data and complex vascular shapes, manual annotation has become increasingly unrealistic, and a fully automatic segmentation method is necessary in clinical practice. In this work, we study a method based on reliable boundaries via multi-domain remapping and robust discrepancy correction via distance balance and quantile regression for automatic coronary artery segmentation of angiography images. The proposed method can not only segment overlapping vascular structures robustly, but also achieves good performance in low contrast regions. The effectiveness of our approach is demonstrated on a variety of coronary blood vessels in comparison with existing methods. The overall segmentation performances si, fnvf, fvpf and tpvf were 95.135%, 3.733%, 6.113%, and 96.268%, respectively. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Automatic Screening and Grading of Age-Related Macular Degeneration from Texture Analysis of Fundus Images

    PubMed Central

    Phan, Thanh Vân; Seoud, Lama; Chakor, Hadi; Cheriet, Farida

    2016-01-01

    Age-related macular degeneration (AMD) is a disease which causes visual deficiency and irreversible blindness in the elderly. In this paper, an automatic classification method for AMD is proposed to perform robust and reproducible assessments in a telemedicine context. First, a study was carried out to highlight the most relevant features for AMD characterization based on texture, color, and visual context in fundus images. A support vector machine and a random forest were used to classify images according to the different AMD stages following the AREDS protocol and to evaluate the features' relevance. Experiments were conducted on a database of 279 fundus images coming from a telemedicine platform. The results demonstrate that local binary patterns in multiresolution are the most relevant for AMD classification, regardless of the classifier used. Depending on the classification task, our method achieves promising performance, with areas under the ROC curve between 0.739 and 0.874 for screening and between 0.469 and 0.685 for grading. Moreover, the proposed automatic AMD classification system is robust with respect to image quality. PMID:27190636

  14. Fully automatic registration and segmentation of first-pass myocardial perfusion MR image sequences.

    PubMed

    Gupta, Vikas; Hendriks, Emile A; Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2010-11-01

    Derivation of diagnostically relevant parameters from first-pass myocardial perfusion magnetic resonance images involves the tedious and time-consuming manual segmentation of the myocardium in a large number of images. To reduce the manual interaction and expedite the perfusion analysis, we propose an automatic registration and segmentation method for the derivation of perfusion linked parameters. A complete automation was accomplished by first registering misaligned images using a method based on independent component analysis, and then using the registered data to automatically segment the myocardium with active appearance models. We used 18 perfusion studies (100 images per study) for validation in which the automatically obtained (AO) contours were compared with expert drawn contours on the basis of point-to-curve error, Dice index, and relative perfusion upslope in the myocardium. Visual inspection revealed successful segmentation in 15 out of 18 studies. Comparison of the AO contours with expert drawn contours yielded 2.23 ± 0.53 mm and 0.91 ± 0.02 as point-to-curve error and Dice index, respectively. The average difference between manually and automatically obtained relative upslope parameters was found to be statistically insignificant (P = .37). Moreover, the analysis time per slice was reduced from 20 minutes (manual) to 1.5 minutes (automatic). We proposed an automatic method that significantly reduced the time required for analysis of first-pass cardiac magnetic resonance perfusion images. The robustness and accuracy of the proposed method were demonstrated by the high spatial correspondence and statistically insignificant difference in perfusion parameters, when AO contours were compared with expert drawn contours. Copyright © 2010 AUR. Published by Elsevier Inc. All rights reserved.

  15. A novel fully automatic scheme for fiducial marker-based alignment in electron tomography.

    PubMed

    Han, Renmin; Wang, Liansan; Liu, Zhiyong; Sun, Fei; Zhang, Fa

    2015-12-01

    Although the topic of fiducial marker-based alignment in electron tomography (ET) has been widely discussed for decades, alignment without human intervention remains a difficult problem. Specifically, the emergence of subtomogram averaging has increased the demand for batch processing during tomographic reconstruction; fully automatic fiducial marker-based alignment is the main technique in this process. However, the lack of an accurate method for detecting and tracking fiducial markers precludes fully automatic alignment. In this paper, we present a novel, fully automatic alignment scheme for ET. Our scheme has two main contributions: First, we present a series of algorithms to ensure a high recognition rate and precise localization during the detection of fiducial markers. Our proposed solution reduces fiducial marker detection to a sampling and classification problem and further introduces an algorithm to solve the parameter dependence of marker diameter and marker number. Second, we propose a novel algorithm to solve the tracking of fiducial markers by reducing the tracking problem to an incomplete point set registration problem. Because the point set registration is solved by global optimization, the result of our tracking is independent of the initial image position in the tilt series, allowing for the robust tracking of fiducial markers without pre-alignment. The experimental results indicate that our method can achieve accurate tracking, almost identical to the current best result obtained with the semi-automatic scheme in IMOD. Furthermore, our scheme is fully automatic, depends on fewer parameters (only a rough value of the marker diameter is required) and does not require any manual interaction, providing the possibility of automatic batch processing of electron tomographic reconstruction. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Repliscan: a tool for classifying replication timing regions.

    PubMed

    Zynda, Gregory J; Song, Jawon; Concia, Lorenzo; Wear, Emily E; Hanley-Bowdoin, Linda; Thompson, William F; Vaughn, Matthew W

    2017-08-07

    Replication timing experiments that use label incorporation and high throughput sequencing produce peaked data similar to ChIP-Seq experiments. However, the differences in experimental design, coverage density, and possible results make traditional ChIP-Seq analysis methods inappropriate for use with replication timing. To accurately detect and classify regions of replication across the genome, we present Repliscan. Repliscan robustly normalizes, automatically removes outlying and uninformative data points, and classifies Repli-seq signals into discrete combinations of replication signatures. The quality control steps and self-fitting methods make Repliscan generally applicable and more robust than previous methods that classify regions based on thresholds. Repliscan is simple and effective to use on organisms with different genome sizes. Even with analysis window sizes as small as 1 kilobase, reliable profiles can be generated with as little as 2.4x coverage.

  17. Automatic segmentation and classification of mycobacterium tuberculosis with conventional light microscopy

    NASA Astrophysics Data System (ADS)

    Xu, Chao; Zhou, Dongxiang; Zhai, Yongping; Liu, Yunhui

    2015-12-01

    This paper realizes the automatic segmentation and classification of Mycobacterium tuberculosis with conventional light microscopy. First, the candidate bacillus objects are segmented by the marker-based watershed transform. The markers are obtained by an adaptive threshold segmentation based on an adaptive-scale Gaussian filter. The scale of the Gaussian filter is determined according to the color model of the bacillus objects. Then the candidate objects are extracted integrally after region merging and contamination elimination. Second, the shape features of the bacillus objects are characterized by the Hu moments, compactness, eccentricity, and roughness, which are used to classify the single, touching and non-bacillus objects. We evaluated logistic regression, random forest, and intersection-kernel support vector machine classifiers for classifying the bacillus objects. Experimental results demonstrate that the proposed method yields high robustness and accuracy. The logistic regression classifier performs best with an accuracy of 91.68%.
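
    A minimal sketch of marker-controlled watershed segmentation in the spirit of the first stage above is shown below; the fixed Gaussian scale, Otsu threshold, and distance-transform markers are simplifying assumptions, whereas the paper adapts the filter scale to a color model of the bacilli.

```python
# Minimal marker-controlled watershed sketch (not the paper's full adaptive pipeline).
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, measure, segmentation

def segment_candidates(gray):
    """gray: 2-D float image with dark objects on a brighter background (assumed)."""
    smoothed = filters.gaussian(gray, sigma=2)          # fixed scale; the paper adapts it
    mask = smoothed < filters.threshold_otsu(smoothed)  # foreground candidates
    # Markers: peaks of the distance transform inside the mask.
    distance = ndi.distance_transform_edt(mask)
    markers = measure.label(distance > 0.5 * distance.max())
    labels = segmentation.watershed(-distance, markers, mask=mask)
    return labels  # one integer label per candidate object
```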

  18. Robust automatic line scratch detection in films.

    PubMed

    Newson, Alasdair; Almansa, Andrés; Gousseau, Yann; Pérez, Patrick

    2014-03-01

    Line scratch detection in old films is a particularly challenging problem due to the variable spatiotemporal characteristics of this defect. Some of the main problems include sensitivity to noise and texture, and false detections due to thin vertical structures belonging to the scene. We propose a robust and automatic algorithm for frame-by-frame line scratch detection in old films, as well as a temporal algorithm for the filtering of false detections. In the frame-by-frame algorithm, we relax some of the hypotheses used in previous algorithms in order to detect a wider variety of scratches. This step's robustness and lack of external parameters is ensured by the combined use of an a contrario methodology and local statistical estimation. In this manner, over-detection in textured or cluttered areas is greatly reduced. The temporal filtering algorithm eliminates false detections due to thin vertical structures by exploiting the coherence of their motion with that of the underlying scene. Experiments demonstrate the ability of the resulting detection procedure to deal with difficult situations, in particular in the presence of noise, texture, and slanted or partial scratches. Comparisons show significant advantages over previous work.

  19. Automated sequence-specific protein NMR assignment using the memetic algorithm MATCH.

    PubMed

    Volk, Jochen; Herrmann, Torsten; Wüthrich, Kurt

    2008-07-01

    MATCH (Memetic Algorithm and Combinatorial Optimization Heuristics) is a new memetic algorithm for automated sequence-specific polypeptide backbone NMR assignment of proteins. MATCH employs local optimization for tracing partial sequence-specific assignments within a global, population-based search environment, where the simultaneous application of local and global optimization heuristics guarantees high efficiency and robustness. MATCH thus makes combined use of the two predominant concepts in use for automated NMR assignment of proteins. Dynamic transition and inherent mutation are new techniques that enable automatic adaptation to variable quality of the experimental input data. The concept of dynamic transition is incorporated in all major building blocks of the algorithm, where it enables switching between local and global optimization heuristics at any time during the assignment process. Inherent mutation restricts the intrinsically required randomness of the evolutionary algorithm to those regions of the conformation space that are compatible with the experimental input data. Using intact and artificially deteriorated APSY-NMR input data of proteins, MATCH performed sequence-specific resonance assignment with high efficiency and robustness.

  20. Hierarchically Structured Non-Intrusive Sign Language Recognition. Chapter 2

    NASA Technical Reports Server (NTRS)

    Zieren, Jorg; Kraiss, Karl-Friedrich

    2007-01-01

    This work presents a hierarchically structured approach to the nonintrusive recognition of sign language from a monocular frontal view. Robustness is achieved through sophisticated localization and tracking methods, including a combined EM/CAMSHIFT overlap resolution procedure and the parallel pursuit of multiple hypotheses about hand position and movement. This allows handling of ambiguities and automatically corrects tracking errors. A biomechanical skeleton model and dynamic motion prediction using Kalman filters represent high-level knowledge. Classification is performed by Hidden Markov Models. 152 signs from German sign language were recognized with an accuracy of 97.6%.

  1. Computing 3-D steady supersonic flow via a new Lagrangian approach

    NASA Technical Reports Server (NTRS)

    Loh, C. Y.; Liou, M.-S.

    1993-01-01

    The new Lagrangian method introduced by Loh and Hui (1990) is extended for 3-D steady supersonic flow computation. Details of the conservation form, the implementation of the local Riemann solver, and the Godunov and the high resolution TVD schemes are presented. The new approach is robust yet accurate, capable of handling complicated geometry and reactions between discontinuous waves. It keeps all the advantages claimed in the 2-D method of Loh and Hui, e.g., crisp resolution for a slip surface (contact discontinuity) and automatic grid generation along the stream.

  2. Onboard Image Registration from Invariant Features

    NASA Technical Reports Server (NTRS)

    Wang, Yi; Ng, Justin; Garay, Michael J.; Burl, Michael C

    2008-01-01

    This paper describes a feature-based image registration technique that is potentially well-suited for onboard deployment. The overall goal is to provide a fast, robust method for dynamically combining observations from multiple platforms into sensor webs that respond quickly to short-lived events and provide rich observations of objects that evolve in space and time. The approach, which has enjoyed considerable success in mainstream computer vision applications, uses invariant SIFT descriptors extracted at image interest points together with the RANSAC algorithm to robustly estimate transformation parameters that relate one image to another. Experimental results for two satellite image registration tasks are presented: (1) automatic registration of images from the MODIS instrument on Terra to the MODIS instrument on Aqua and (2) automatic stabilization of a multi-day sequence of GOES-West images collected during the October 2007 Southern California wildfires.
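
    The SIFT-plus-RANSAC registration idea summarized above is standard enough to sketch with OpenCV; the file paths, ratio-test threshold, and homography model are assumptions for illustration rather than details of the onboard implementation.

```python
# Feature-based registration sketch: SIFT keypoints + RANSAC homography (OpenCV).
import cv2
import numpy as np

def register(reference_path, moving_path):
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    mov = cv2.imread(moving_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref, None)
    kp2, des2 = sift.detectAndCompute(mov, None)
    # Ratio-test matching of invariant descriptors (assumes two neighbors per query).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des2, des1, k=2)
    matches = [m for m, n in pairs if m.distance < 0.75 * n.distance]
    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects outlier correspondences while estimating the transform.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return cv2.warpPerspective(mov, H, (ref.shape[1], ref.shape[0]))
```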

  3. Automatic and integrated micro-enzyme assay (AIμEA) platform for highly sensitive thrombin analysis via an engineered fluorescence protein-functionalized monolithic capillary column.

    PubMed

    Lin, Lihua; Liu, Shengquan; Nie, Zhou; Chen, Yingzhuang; Lei, Chunyang; Wang, Zhen; Yin, Chao; Hu, Huiping; Huang, Yan; Yao, Shouzhuo

    2015-04-21

    Nowadays, large-scale screening for enzyme discovery, engineering, and drug discovery processes requires simple, fast, and sensitive enzyme activity assay platforms with high integration and potential for high-throughput detection. Herein, a novel automatic and integrated micro-enzyme assay (AIμEA) platform was proposed based on a unique microreaction system fabricated with an engineered green fluorescent protein (GFP)-functionalized monolithic capillary column, with thrombin as an example. The recombinant GFP probe was rationally engineered to possess a His-tag and a substrate sequence of thrombin, which enable it to be immobilized on the monolith via metal affinity binding, and to be released after thrombin digestion. Combined with capillary electrophoresis-laser-induced fluorescence (CE-LIF), all the procedures, including thrombin injection, online enzymatic digestion in the microreaction system, and label-free detection of the released GFP, were integrated in a single electrophoretic process. By taking advantage of the ultrahigh loading capacity of the AIμEA platform and the CE automatic programming setup, one microreaction column was sufficient for many rounds of digestion without replacement. The novel microreaction system showed significantly enhanced catalytic efficiency, about 30-fold higher than that of the equivalent bulk reaction. Accordingly, the AIμEA platform was highly sensitive with a limit of detection down to 1 pM of thrombin. Moreover, the AIμEA platform was robust and reliable for detecting thrombin in human serum samples and its inhibition by hirudin. Hence, this AIμEA platform exhibits great potential for high-throughput analysis in future biological applications, disease diagnostics, and drug screening.

  4. Development of a Highly Automated and Multiplexed Targeted Proteome Pipeline and Assay for 112 Rat Brain Synaptic Proteins

    PubMed Central

    Colangelo, Christopher M.; Ivosev, Gordana; Chung, Lisa; Abbott, Thomas; Shifman, Mark; Sakaue, Fumika; Cox, David; Kitchen, Rob R.; Burton, Lyle; Tate, Stephen A; Gulcicek, Erol; Bonner, Ron; Rinehart, Jesse; Nairn, Angus C.; Williams, Kenneth R.

    2015-01-01

    We present a comprehensive workflow for large scale (>1000 transitions/run) label-free LC-MRM proteome assays. Innovations include automated MRM transition selection, intelligent retention time scheduling (xMRM) that improves Signal/Noise by >2-fold, and automatic peak modeling. Improvements to data analysis include a novel Q/C metric, Normalized Group Area Ratio (NGAR), MLR normalization, weighted regression analysis, and data dissemination through the Yale Protein Expression Database. As a proof of principle we developed a robust 90 minute LC-MRM assay for Mouse/Rat Post-Synaptic Density (PSD) fractions which resulted in the routine quantification of 337 peptides from 112 proteins based on 15 observations per protein. Parallel analyses with stable isotope dilution peptide standards (SIS), demonstrate very high correlation in retention time (1.0) and protein fold change (0.94) between the label-free and SIS analyses. Overall, our first method achieved a technical CV of 11.4% with >97.5% of the 1697 transitions being quantified without user intervention, resulting in a highly efficient, robust, and single injection LC-MRM assay. PMID:25476245

  5. Feasibility and robustness of dose painting by numbers in proton therapy with contour-driven plan optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barragán, A. M., E-mail: ana.barragan@uclouvain.be; Differding, S.; Lee, J. A.

    Purpose: To prove the ability of protons to reproduce a dose gradient that matches a dose painting by numbers (DPBN) prescription in the presence of setup and range errors, by using contours and structure-based optimization in a commercial treatment planning system. Methods: For two patients with head and neck cancer, a voxel-by-voxel prescription to the target volume (GTVPET) was calculated from 18FDG-PET images and approximated with several discrete prescription subcontours. Treatments were planned with proton pencil beam scanning. In order to determine the optimal plan parameters to approach the DPBN prescription, the effects of the scanning pattern, number of fields, number of subcontours, and use of a range shifter were separately tested on each patient. Different constant scanning grids (i.e., spot spacing = Δx = Δy = 3.5, 4, and 5 mm) and uniform energy layer separations [4 and 5 mm WED (water equivalent distance)] were analyzed versus a dynamic and automatic selection of the spot grid. The number of subcontours was increased from 3 to 11 while the number of beams was set to 3, 5, or 7. Conventional PTV-based and robust clinical target volume (CTV)-based optimization strategies were considered and their robustness against range and setup errors assessed. Because of the nonuniform prescription, ensuring robustness for coverage of GTVPET inevitably leads to overdosing, which was compared for both optimization schemes. Results: The optimal number of subcontours ranged from 5 to 7 for both patients. All considered scanning grids achieved accurate dose painting (1% average difference between the prescribed and planned doses). PTV-based plans led to nonrobust target coverage while robust-optimized plans improved it considerably (the difference between the worst-case CTV dose and the clinical constraint was up to 3 Gy for PTV-based plans and did not exceed 1 Gy for robust CTV-based plans). Also, only 15% of the points in the GTVPET (worst case) were above 5% of the DPBN prescription for robust-optimized plans, while they were more than 50% for PTV plans. Low dose to organs at risk (OARs) could be achieved for both PTV and robust-optimized plans. Conclusions: DPBN in proton therapy is feasible with the use of a sufficient number of subcontours and automatically generated scanning patterns, and no more than three beams are needed. Robust optimization ensured the required target coverage and minimal overdosing, while the PTV approach led to nonrobust plans with excessive overdose. Low dose to OARs can be achieved even in the presence of a high-dose escalation as in DPBN.

  6. Automatic Masking for Robust 3D-2D Image Registration in Image-Guided Spine Surgery.

    PubMed

    Ketcha, M D; De Silva, T; Uneri, A; Kleinszig, G; Vogt, S; Wolinsky, J-P; Siewerdsen, J H

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.
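
    One way to realize the "projection masking" idea, weighting the 2D similarity metric so that unreliable regions contribute less, is a weighted normalized cross-correlation; this sketch uses NCC and a generic weight image as assumptions, not necessarily the similarity metric used in the study.

```python
# Masked similarity sketch: normalized cross-correlation restricted by a weight mask.
import numpy as np

def masked_ncc(drr, radiograph, weights):
    """drr, radiograph: 2-D arrays (projected CT and intraoperative image);
    weights: same-shape array in [0, 1] down-weighting unreliable regions (assumed input)."""
    w = weights / weights.sum()
    a = drr - (w * drr).sum()
    b = radiograph - (w * radiograph).sum()
    num = (w * a * b).sum()
    den = np.sqrt((w * a ** 2).sum() * (w * b ** 2).sum())
    return num / den  # higher is better; evaluated inside the registration optimizer
```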

  7. Automatic masking for robust 3D-2D image registration in image-guided spine surgery

    NASA Astrophysics Data System (ADS)

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-03-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.

  8. Robust Arm and Hand Tracking by Unsupervised Context Learning

    PubMed Central

    Spruyt, Vincent; Ledda, Alessandro; Philips, Wilfried

    2014-01-01

    Hand tracking in video is an increasingly popular research field due to the rise of novel human-computer interaction methods. However, robust and real-time hand tracking in unconstrained environments remains a challenging task due to the high number of degrees of freedom and the non-rigid character of the human hand. In this paper, we propose an unsupervised method to automatically learn the context in which a hand is embedded. This context includes the arm and any other object that coherently moves along with the hand. We introduce two novel methods to incorporate this context information into a probabilistic tracking framework, and introduce a simple yet effective solution to estimate the position of the arm. Finally, we show that our method greatly increases robustness against occlusion and cluttered background, without degrading tracking performance if no contextual information is available. The proposed real-time algorithm is shown to outperform the current state-of-the-art by evaluating it on three publicly available video datasets. Furthermore, a novel dataset is created and made publicly available for the research community. PMID:25004155

  9. Modulation of Visually Evoked Postural Responses by Contextual Visual, Haptic and Auditory Information: A ‘Virtual Reality Check’

    PubMed Central

    Meyer, Georg F.; Shao, Fei; White, Mark D.; Hopkins, Carl; Robotham, Antony J.

    2013-01-01

    Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR. PMID:23840760

  10. Automatic Coregistration for Multiview SAR Images in Urban Areas

    NASA Astrophysics Data System (ADS)

    Xiang, Y.; Kang, W.; Wang, F.; You, H.

    2017-09-01

    Due to the high resolution property and the side-looking mechanism of SAR sensors, complex building structures make the registration of SAR images in urban areas very hard. In order to solve the problem, an automatic and robust coregistration approach for multiview high resolution SAR images is proposed in the paper, which consists of three main modules. First, both the reference image and the sensed image are segmented into two parts, urban areas and nonurban areas. Urban areas caused by double or multiple scattering in a SAR image have a tendency to show higher local mean and local variance values compared with general homogeneous regions due to the complex structural information. Based on this criterion, building areas are extracted. After obtaining the target regions, L-shape structures are detected using the SAR phase congruency model and Hough transform. The double bounce scatterings formed by wall and ground are shown as strong L- or T-shapes, which are usually taken as the most reliable indicator for building detection. According to the assumption that buildings are rectangular and flat models, planimetric buildings are delineated using the L-shapes, then the reconstructed target areas are obtained. For the original areas and the reconstructed target areas, the SAR-SIFT matching algorithm is implemented. Finally, correct corresponding points are extracted by the fast sample consensus (FSC) and the transformation model is also derived. The experimental results on a pair of multiview TerraSAR images with 1-m resolution show that the proposed approach gives a robust and precise registration performance, compared with the original SAR-SIFT method.

  11. Automatic generation of smart earthquake-resistant building system: Hybrid system of base-isolation and building-connection.

    PubMed

    Kasagi, M; Fujita, K; Tsuji, M; Takewaki, I

    2016-02-01

    A base-isolated building may sometimes exhibit an undesirably large response to a long-duration, long-period earthquake ground motion, and a connected building system without base-isolation may show a large response to a near-fault (rather high-frequency) earthquake ground motion. To overcome both deficiencies, a new hybrid control system of base-isolation and building-connection is proposed and investigated. In this new hybrid building system, a base-isolated building is connected to a stiffer free wall with oil dampers. It has been demonstrated in preliminary research that the proposed hybrid system is effective both for near-fault (rather high-frequency) and long-duration, long-period earthquake ground motions and has sufficient redundancy and robustness for a broad range of earthquake ground motions. An automatic generation algorithm for this kind of smart base-isolation and building-connection hybrid system is presented in this paper. It is shown that, while the proposed algorithm does not work well in a building without the connecting-damper system, it works well in the proposed smart hybrid system with the connecting damper system.

  12. Towards the automatic detection and analysis of sunspot rotation

    NASA Astrophysics Data System (ADS)

    Brown, Daniel S.; Walker, Andrew P.

    2016-10-01

    Torsional rotation of sunspots has been noted by many authors over the past century. Sunspots have been observed to rotate up to the order of 200 degrees over 8-10 days, and these rotations have often been linked with eruptive behaviour such as solar flares and coronal mass ejections. However, most studies in the literature are case studies or small-number studies which suffer from selection bias. In order to better understand sunspot rotation and its impact on the corona, unbiased large-sample statistical studies are required (including both rotating and non-rotating sunspots). While this can be done manually, a better approach is to automate the detection and analysis of rotating sunspots using robust methods with well characterised uncertainties. The SDO/HMI instrument provides long-duration, high-resolution and high-cadence continuum observations suitable for extracting a large number of examples of rotating sunspots. This presentation will outline the analysis of SDO/HMI data to determine the rotation (and non-rotation) profiles of sunspots for the complete duration of their transit across the solar disk, along with how this can be extended to automatically identify sunspots and initiate their analysis.

  13. Phantom study and accuracy evaluation of an image-to-world registration approach used with electro-magnetic tracking system for neurosurgery

    NASA Astrophysics Data System (ADS)

    Li, Senhu; Sarment, David

    2015-12-01

    Minimally invasive neurosurgery needs intraoperative imaging updates and a highly efficient image guidance system to facilitate the procedure. An automatic image-guided system used with a compact, mobile intraoperative CT imager is introduced in this work. A tracking frame that can be easily attached onto the commercially available skull clamp was designed. With the known geometry of the fiducials and tracking sensor arranged on this rigid frame, which was fabricated through high precision 3D printing, not only was an accurate, fully automatic registration method developed in a simple and less costly approach, but it also helped in estimating the errors from fiducial localization in image space through image processing, and in patient space through the calibration of the tracking frame. Our phantom study shows a fiducial registration error of 0.348 +/- 0.028 mm, compared with a manual registration error of 1.976 +/- 0.778 mm. The system in this study provided robust and accurate image-to-patient registration without interrupting the routine surgical workflow and without any user interaction during the neurosurgery.
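
    Point-based image-to-patient registration of the kind evaluated here typically reduces to a rigid least-squares fit between corresponding fiducial coordinates; a minimal Kabsch/SVD sketch with the fiducial registration error (FRE) as the reported accuracy measure is given below, with the coordinate arrays assumed to come from image processing and the tracking system.

```python
# Rigid point-based registration (Kabsch/SVD) and fiducial registration error (FRE).
import numpy as np

def rigid_register(image_pts, patient_pts):
    """Both inputs are (N, 3) arrays of corresponding fiducial positions."""
    ci, cp = image_pts.mean(axis=0), patient_pts.mean(axis=0)
    A, B = image_pts - ci, patient_pts - cp
    U, _, Vt = np.linalg.svd(A.T @ B)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cp - R @ ci
    return R, t

def fre(image_pts, patient_pts, R, t):
    """Mean residual distance after mapping image-space fiducials into patient space."""
    residual = patient_pts - (image_pts @ R.T + t)
    return np.sqrt((residual ** 2).sum(axis=1)).mean()
```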

  14. Image-Based 3D Face Modeling System

    NASA Astrophysics Data System (ADS)

    Park, In Kyu; Zhang, Hui; Vezhnevets, Vladimir

    2005-12-01

    This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems including frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eye, nose, mouth, and ear. The shape deformation module utilizes the detected features to deform the generic head mesh model such that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images and the synthesized texture and mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a bunch of individual computer vision algorithms. The experimental results show a highly automated process of modeling, which is sufficiently robust to various imaging conditions. The whole model creation including all the optional manual corrections takes only 2-3 minutes.

  15. Investigation on the separability of slums by multi-aspect TerraSAR-X dual-co-polarized high resolution spotlight images based on the multi-scale evaluation of local distributions

    NASA Astrophysics Data System (ADS)

    Schmitt, Andreas; Sieg, Tobias; Wurm, Michael; Taubenböck, Hannes

    2018-02-01

    Following recent advances in distinguishing settlements vs. non-settlement areas from latest SAR data, the question arises whether a further automatic intra-urban delineation and characterization of different structural types is possible. This paper studies the appearance of the structural type 'slums' in high resolution SAR images. Geocoded Kennaugh elements are used as backscatter information and Schmittlet indices as descriptor of local texture. Three cities with a significant share of slums (Cape Town, Manila, Mumbai) are chosen as test sites. These are imaged by TerraSAR-X in the dual-co-polarized high resolution spotlight mode in any available aspect angle. Representative distributions are estimated and fused by a robust approach. Our observations identify a high similarity of slums throughout all three test sites. The derived similarity maps are validated with reference data sets from visual interpretation and ground truth. The final validation strategy is based on completeness and correctness versus other classes in relation to the similarity. High accuracies (up to 87%) in identifying morphologic slums are reached for Cape Town. For Manila (up to 60%) and Mumbai (up to 54%), the distinction is more difficult due to their complex structural configuration. Concluding, high resolution SAR data can be suitable to automatically trace potential locations of slums. Polarimetric information and the incidence angle seem to have a negligible impact on the results whereas the intensity patterns and the passing direction of the satellite are playing a key role. Hence, the combination of intensity images (brightness) acquired from ascending and descending orbits together with Schmittlet indices (spatial pattern) promises best results. The transfer from the automatically recognized physical similarity to the semantic interpretation remains challenging.

  16. Automatic 3D reconstruction of electrophysiology catheters from two-view monoplane C-arm image sequences.

    PubMed

    Baur, Christoph; Milletari, Fausto; Belagiannis, Vasileios; Navab, Nassir; Fallavollita, Pascal

    2016-07-01

    Catheter guidance is a vital task for the success of electrophysiology interventions. It is usually provided through fluoroscopic images that are taken intra-operatively. The cardiologists, who are typically equipped with C-arm systems, scan the patient from multiple views rotating the fluoroscope around one of its axes. The resulting sequences allow the cardiologists to build a mental model of the 3D position of the catheters and interest points from the multiple views. We describe and compare different 3D catheter reconstruction strategies and ultimately propose a novel and robust method for the automatic reconstruction of 3D catheters in non-synchronized fluoroscopic sequences. This approach does not purely rely on triangulation but incorporates prior knowledge about the catheters. In conjunction with an automatic detection method, we demonstrate the performance of our method compared to ground truth annotations. In our experiments that include 20 biplane datasets, we achieve an average reprojection error of 0.43 mm and an average reconstruction error of 0.67 mm compared to gold standard annotation. In clinical practice, catheters suffer from complex motion due to the combined effect of heartbeat and respiratory motion. As a result, any 3D reconstruction algorithm via triangulation is imprecise. We have proposed a new method that is fully automatic and highly accurate to reconstruct catheters in three dimensions.

  17. Automatic Pedestrian Crossing Detection and Impairment Analysis Based on Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Liu, X.; Zhang, Y.; Li, Q.

    2017-09-01

    Pedestrian crossings, as an important part of transportation infrastructure, serve to secure pedestrians' lives and possessions and keep traffic flow in order. As a prominent feature in the street scene, detection of pedestrian crossings contributes to 3D road marking reconstruction and diminishes the adverse impact of outliers in 3D street scene reconstruction. Since pedestrian crossings are subject to wear and tear from heavy traffic flow, it is imperative to monitor their condition. On this account, an approach for automatic pedestrian crossing detection using images from a vehicle-based Mobile Mapping System is put forward, and crossing defilement and impairment are analyzed in this paper. First, a pedestrian crossing classifier is trained with a low recall rate. Then initial detections are refined by utilizing projection filtering, contour information analysis, and monocular vision. Finally, a pedestrian crossing detection and analysis system with high recall rate, precision and robustness is achieved. This system works for pedestrian crossing detection under different situations and light conditions. It can also recognize defiled and impaired crossings automatically, which facilitates monitoring and maintenance of traffic facilities, so as to reduce potential traffic safety problems and secure lives and property.

  18. Road Network Extraction from Dsm by Mathematical Morphology and Reasoning

    NASA Astrophysics Data System (ADS)

    Li, Yan; Wu, Jianliang; Zhu, Lin; Tachibana, Kikuo

    2016-06-01

    The objective of this research is the automatic extraction of the road network in an urban scene from a high resolution digital surface model (DSM). Automatic road extraction and modeling from remotely sensed data has been studied for more than one decade. The methods vary greatly due to differences in data types, regions, resolutions, and so on. An advanced automatic road network extraction scheme is proposed to address the issue of tedious steps in segmentation, recognition and grouping. It is based on a geometric road model which describes a multiple-level structure. The 0-dimensional element is the intersection. The 1-dimensional elements are the central line and the sides. The 2-dimensional element is the plane, which is generated from the 1-dimensional elements. The key feature of the presented approach is the cross validation of the three road elements, which runs through the entire procedure of their extraction. The advantage of our model and method is that linear elements of the road can be derived directly, without any complex, non-robust connection hypothesis. An example of a Japanese scene is presented to display the procedure and the performance of the approach.

  19. Real-time people counting system using a single video camera

    NASA Astrophysics Data System (ADS)

    Lefloch, Damien; Cheikh, Faouzi A.; Hardeberg, Jon Y.; Gouton, Pierre; Picot-Clemente, Romain

    2008-02-01

    There is growing interest in video-based solutions for people monitoring and counting in business and security applications. Compared to classic sensor-based solutions, video-based ones allow for more versatile functionality and improved performance at lower cost. In this paper, we propose a real-time system for people counting based on a single low-end non-calibrated video camera. The two main challenges addressed in this paper are: robust estimation of the scene background and of the number of real persons in merge-split scenarios. The latter is likely to occur whenever multiple persons move closely, e.g. in shopping centers. Several persons may be considered to be a single person by automatic segmentation algorithms, due to occlusions or shadows, leading to under-counting. Therefore, to account for noise, illumination changes and static object changes, background subtraction is performed using an adaptive background model (updated over time based on motion information) and automatic thresholding. Furthermore, post-processing of the segmentation results is performed, in the HSV color space, to remove shadows. Moving objects are tracked using an adaptive Kalman filter, allowing a robust estimation of the objects' future positions even under heavy occlusion. The system is implemented in Matlab, and gives encouraging results even at high frame rates. Experimental results obtained based on the PETS2006 datasets are presented at the end of the paper.
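
    The adaptive background subtraction step can be sketched with OpenCV's built-in mixture-of-Gaussians model; the video path, thresholds, and morphological clean-up below are assumptions, and shadows are suppressed via the detector's shadow flag rather than the HSV post-processing used in the paper.

```python
# Adaptive background subtraction sketch for people counting (OpenCV MOG2).
import cv2
import numpy as np

def count_foreground_blobs(video_path, min_area=500):
    cap = cv2.VideoCapture(video_path)
    bg = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
    counts = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = bg.apply(frame)  # background model is updated over time
        # Shadow pixels are marked 127 by MOG2; keep only confident foreground (255).
        fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        counts.append(sum(cv2.contourArea(c) > min_area for c in contours))
    cap.release()
    return counts  # rough per-frame blob counts, before any merge-split reasoning
```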

  20. Advances in Modal Analysis Using a Robust and Multiscale Method

    NASA Astrophysics Data System (ADS)

    Picard, Cécile; Frisson, Christian; Faure, François; Drettakis, George; Kry, Paul G.

    2010-12-01

    This paper presents a new approach to modal synthesis for rendering sounds of virtual objects. We propose a generic method that preserves sound variety across the surface of an object at different scales of resolution and for a variety of complex geometries. The technique performs automatic voxelization of a surface model and automatic tuning of the parameters of hexahedral finite elements, based on the distribution of material in each cell. The voxelization is performed using a sparse regular grid embedding of the object, which permits the construction of plausible lower resolution approximations of the modal model. We can compute the audible impulse response of a variety of objects. Our solution is robust and can handle nonmanifold geometries that include both volumetric and surface parts. We present a system which allows us to manipulate and tune sounding objects in an appropriate way for games, training simulations, and other interactive virtual environments.

  1. Vision Systems with the Human in the Loop

    NASA Astrophysics Data System (ADS)

    Bauckhage, Christian; Hanheide, Marc; Wrede, Sebastian; Käster, Thomas; Pfeiffer, Michael; Sagerer, Gerhard

    2005-12-01

    The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment and thus will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval, the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After we will have discussed adaptive content-based image retrieval and object and action recognition in an office environment, the issue of assessing cognitive systems will be raised. Experiences from psychologically evaluated human-machine interactions will be reported and the promising potential of psychologically-based usability experiments will be stressed.

  2. Multi-stage learning for robust lung segmentation in challenging CT volumes.

    PubMed

    Sofka, Michal; Wetzl, Jens; Birkbeck, Neil; Zhang, Jingdan; Kohlberger, Timo; Kaftan, Jens; Declerck, Jérôme; Zhou, S Kevin

    2011-01-01

    Simple algorithms for segmenting healthy lung parenchyma in CT are unable to deal with high density tissue common in pulmonary diseases. To overcome this problem, we propose a multi-stage learning-based approach that combines anatomical information to predict an initialization of a statistical shape model of the lungs. The initialization first detects the carina of the trachea, and uses this to detect a set of automatically selected stable landmarks on regions near the lung (e.g., ribs, spine). These landmarks are used to align the shape model, which is then refined through boundary detection to obtain fine-grained segmentation. Robustness is obtained through hierarchical use of discriminative classifiers that are trained on a range of manually annotated data of diseased and healthy lungs. We demonstrate fast detection (35s per volume on average) and segmentation of 2 mm accuracy on challenging data.

  3. Multi-stream LSTM-HMM decoding and histogram equalization for noise robust keyword spotting.

    PubMed

    Wöllmer, Martin; Marchi, Erik; Squartini, Stefano; Schuller, Björn

    2011-09-01

    Highly spontaneous, conversational, and potentially emotional and noisy speech is known to be a challenge for today's automatic speech recognition (ASR) systems, which highlights the need for advanced algorithms that improve speech features and models. Histogram equalization is an efficient method to reduce the mismatch between clean and noisy conditions by normalizing all moments of the probability distribution of the feature vector components. In this article, we propose to combine histogram equalization and multi-condition training for robust keyword detection in noisy speech. To better cope with conversational speaking styles, we show how contextual information can be effectively exploited in a multi-stream ASR framework that dynamically models context-sensitive phoneme estimates generated by a long short-term memory neural network. The proposed techniques are evaluated on the SEMAINE database, a corpus containing emotionally colored conversations with a cognitive system for "Sensitive Artificial Listening".
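
    Histogram equalization of feature vector components, as used above to reduce the clean/noisy mismatch, amounts to a per-dimension quantile mapping onto a reference distribution; the sketch below is a generic empirical-CDF implementation with placeholder feature matrices, not the authors' exact formulation.

```python
# Per-dimension histogram equalization sketch: map test features onto reference quantiles.
import numpy as np

def histogram_equalize(test_features, reference_features):
    """Both arrays are (frames, dims); each test dimension is remapped so that its
    empirical distribution matches the corresponding reference dimension."""
    equalized = np.empty_like(test_features, dtype=float)
    n = test_features.shape[0]
    for d in range(test_features.shape[1]):
        ref_sorted = np.sort(reference_features[:, d])
        # Rank of each test value within its own dimension, expressed as a quantile in (0, 1).
        ranks = np.argsort(np.argsort(test_features[:, d]))
        quantiles = (ranks + 0.5) / n
        # Look up the reference value at the same quantile.
        equalized[:, d] = np.quantile(ref_sorted, quantiles)
    return equalized
```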

  4. Learning Optimized Local Difference Binaries for Scalable Augmented Reality on Mobile Devices.

    PubMed

    Xin Yang; Kwang-Ting Cheng

    2014-06-01

    The efficiency, robustness and distinctiveness of a feature descriptor are critical to the user experience and scalability of a mobile augmented reality (AR) system. However, existing descriptors are either too computationally expensive to achieve real-time performance on a mobile device such as a smartphone or tablet, or not sufficiently robust and distinctive to identify correct matches from a large database. As a result, current mobile AR systems still only have limited capabilities, which greatly restrict their deployment in practice. In this paper, we propose a highly efficient, robust and distinctive binary descriptor, called Learning-based Local Difference Binary (LLDB). LLDB directly computes a binary string for an image patch using simple intensity and gradient difference tests on pairwise grid cells within the patch. To select an optimized set of grid cell pairs, we densely sample grid cells from an image patch and then leverage a modified AdaBoost algorithm to automatically extract a small set of critical ones with the goal of maximizing the Hamming distance between mismatches while minimizing it between matches. Experimental results demonstrate that LLDB is extremely fast to compute and to match against a large database due to its high robustness and distinctiveness. Compared to the state-of-the-art binary descriptors, primarily designed for speed, LLDB has similar efficiency for descriptor construction, while achieving a greater accuracy and faster matching speed when matching over a large database with 2.3M descriptors on mobile devices.
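
    The core idea of such a descriptor, binary tests on intensity and gradient differences between grid cells matched by Hamming distance, can be illustrated in a few lines; the 4x4 grid and the exhaustive cell pairing below are simplifications, since LLDB learns a small optimized subset of pairs with a modified AdaBoost.

```python
# Simplified grid-cell binary descriptor with Hamming matching (not the learned LLDB pairs).
import numpy as np
from itertools import combinations

def grid_descriptor(patch, grid=4):
    """patch: square 2-D grayscale array, assumed much larger than the grid."""
    h, w = patch.shape
    cells = [patch[i * h // grid:(i + 1) * h // grid, j * w // grid:(j + 1) * w // grid]
             for i in range(grid) for j in range(grid)]
    means = np.array([c.mean() for c in cells])
    dx = np.array([np.diff(c, axis=1).mean() for c in cells])  # mean horizontal gradient
    bits = []
    for a, b in combinations(range(grid * grid), 2):  # exhaustive pairs; LLDB keeps a learned subset
        bits.append(means[a] > means[b])
        bits.append(dx[a] > dx[b])
    return np.array(bits)

def hamming(d1, d2):
    return int(np.count_nonzero(d1 != d2))  # smaller distance means a better match
```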

  5. Return Difference Feedback Design for Robust Uncertainty Tolerance in Stochastic Multivariable Control Systems.

    DTIC Science & Technology

    1984-07-01

    "robustness" analysis for multiloop feedback systems. Reference [55] describes a simple method based on the Perron-Frobenius theory of non-negative...Viewpoint," Operator Theory: Advances and Applications, 12, pp. 277-302, 1984. - E. A. Jonckheere, "New Bound on the Sensitivity -- of the Solution of...Reidel, Dordrecht, Holland, 1984. M. G. Safonov, "Comments on Singular Value Theory in Uncertain Feedback Systems," to appear IEEE Trans. on Automatic

  6. SU-F-BRD-05: Robustness of Dose Painting by Numbers in Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montero, A Barragan; Sterpin, E; Lee, J

    Purpose: Proton range uncertainties may cause important dose perturbations within the target volume, especially when steep dose gradients are present as in dose painting. The aim of this study is to assess the robustness against setup and range errors for highly heterogeneous dose prescriptions (i.e., dose painting by numbers), delivered by proton pencil beam scanning. Methods: An automatic workflow, based on MATLAB functions, was implemented through scripting in RayStation (RaySearch Laboratories). It performs a gradient-based segmentation of the dose painting volume from 18FDG-PET images (GTVPET), and calculates the dose prescription as a linear function of the FDG-uptake value on each voxel. The workflow was applied to two patients with head and neck cancer. Robustness against setup and range errors of the conventional PTV margin strategy (prescription dilated by 2.5 mm) versus CTV-based (minimax) robust optimization (2.5 mm setup, 3% range error) was assessed by comparing the prescription with the planned dose for a set of error scenarios. Results: In order to ensure dose coverage above 95% of the prescribed dose in more than 95% of the GTVPET voxels while compensating for the uncertainties, the plans with a PTV generated a high overdose. For the nominal case, up to 35% of the GTVPET received doses 5% beyond prescription. For the worst of the evaluated error scenarios, the volume with 5% overdose increased to 50%. In contrast, for CTV-based plans this 5% overdose was present only in a small fraction of the GTVPET, which ranged from 7% in the nominal case to 15% in the worst of the evaluated scenarios. Conclusion: The use of a PTV leads to non-robust dose distributions with excessive overdose in the painted volume. In contrast, robust optimization yields robust dose distributions with limited overdose. RaySearch Laboratories is sincerely acknowledged for providing us with the RayStation treatment planning system and for the support provided.

  7. Automatic identification of the reference system based on the fourth ventricular landmarks in T1-weighted MR images.

    PubMed

    Fu, Yili; Gao, Wenpeng; Chen, Xiaoguang; Zhu, Minwei; Shen, Weigao; Wang, Shuguo

    2010-01-01

    The reference system based on the fourth ventricular landmarks (including the fastigial point and ventricular floor plane) is used in medical image analysis of the brain stem. The objective of this study was to develop a rapid, robust, and accurate method for the automatic identification of this reference system on T1-weighted magnetic resonance images. The fully automated method developed in this study consisted of four stages: preprocessing of the data set, expectation-maximization algorithm-based extraction of the fourth ventricle in the region of interest, a coarse-to-fine strategy for identifying the fastigial point, and localization of the base point. The method was evaluated qualitatively on 27 BrainWeb data sets, and quantitatively on 18 Internet Brain Segmentation Repository data sets and 30 clinical scans. The results of qualitative evaluation indicated that the method was robust to rotation, landmark variation, noise, and inhomogeneity. The results of quantitative evaluation indicated that the method was able to identify the reference system with an accuracy of 0.7 +/- 0.2 mm for the fastigial point and 1.1 +/- 0.3 mm for the base point. It took <6 seconds for the method to identify the related landmarks on a personal computer with an Intel Core 2 6300 processor and 2 GB of random-access memory. The proposed method for the automatic identification of the reference system based on the fourth ventricular landmarks was shown to be rapid, robust, and accurate. The method has potential utility in image registration and computer-aided surgery.
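
    The expectation-maximization stage that separates the fourth ventricle (cerebrospinal fluid) from surrounding tissue within the region of interest can be sketched with a two-component Gaussian mixture; the component count and the rule that the darkest class corresponds to CSF on T1-weighted images are assumptions, not necessarily the paper's exact model.

```python
# EM-based extraction sketch: fit a Gaussian mixture to ROI intensities, keep the darkest class.
import numpy as np
from sklearn.mixture import GaussianMixture

def extract_ventricle(roi, n_components=2):
    """roi: 3-D T1-weighted sub-volume around the fourth ventricle (assumed input)."""
    intensities = roi.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(intensities)
    labels = gmm.predict(intensities).reshape(roi.shape)
    csf_class = int(np.argmin(gmm.means_.ravel()))  # CSF appears dark on T1 (assumption)
    return labels == csf_class  # boolean mask of the ventricle candidate
```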

  8. Popular song and lyrics synchronization and its application to music information retrieval

    NASA Astrophysics Data System (ADS)

    Chen, Kai; Gao, Sheng; Zhu, Yongwei; Sun, Qibin

    2006-01-01

    An automatic synchronization system for popular songs and their lyrics is presented in the paper. The system includes two main components: a) automatically detecting vocal/non-vocal segments in the audio signal and b) automatically aligning the acoustic signal of the song with its lyrics using speech recognition techniques and positioning the boundaries of the lyrics in the acoustic realization at multiple levels simultaneously (e.g. the word/syllable level and the phrase level). GMM models and a set of HMM-based acoustic model units are carefully designed and trained for the detection and alignment. To eliminate the severe mismatch due to the diversity of musical signals and the sparse training data available, an unsupervised adaptation technique, maximum likelihood linear regression (MLLR), is exploited for tailoring the models to the real environment, which improves the robustness of the synchronization system. To further reduce the effect of missed non-vocal music on alignment, a novel grammar net is built to direct the alignment. To our knowledge, this is the first automatic synchronization system based only on low-level acoustic features such as MFCCs. We evaluate the system on a Chinese song dataset collected from 3 popular singers. We obtain 76.1% for the boundary accuracy at the syllable level (BAS) and 81.5% for the boundary accuracy at the phrase level (BAP) using fully automatic vocal/non-vocal detection and alignment. The synchronization system has many applications such as multi-modality (audio and textual) content-based popular song browsing and retrieval. Through the study, we would like to open up the discussion of some challenging problems when developing a robust synchronization system for a large-scale database.

  9. Automatic system for 3D reconstruction of the chick eye based on digital photographs.

    PubMed

    Wong, Alexander; Genest, Reno; Chandrashekar, Naveen; Choh, Vivian; Irving, Elizabeth L

    2012-01-01

    The geometry of anatomical specimens is very complex and accurate 3D reconstruction is important for morphological studies, finite element analysis (FEA) and rapid prototyping. Although magnetic resonance imaging, computed tomography and laser scanners can be used for reconstructing biological structures, the cost of the equipment is fairly high and specialised technicians are required to operate the equipment, making such approaches limiting in terms of accessibility. In this paper, a novel automatic system for 3D surface reconstruction of the chick eye from digital photographs of a serially sectioned specimen is presented as a potential cost-effective and practical alternative. The system is designed to allow for automatic detection of the external surface of the chick eye. Automatic alignment of the photographs is performed using a combination of coloured markers and an algorithm based on complex phase order likelihood that is robust to noise and illumination variations. Automatic segmentation of the external boundaries of the eye from the aligned photographs is performed using a novel level-set segmentation approach based on a complex phase order energy functional. The extracted boundaries are sampled to construct a 3D point cloud, and a combination of Delaunay triangulation and subdivision surfaces is employed to construct the final triangular mesh. Experimental results using digital photographs of the chick eye show that the proposed system is capable of producing accurate 3D reconstructions of the external surface of the eye. The 3D model geometry is similar to a real chick eye and could be used for morphological studies and FEA.

  10. Automatic humidification system to support the assessment of food drying processes

    NASA Astrophysics Data System (ADS)

    Ortiz Hernández, B. D.; Carreño Olejua, A. R.; Castellanos Olarte, J. M.

    2016-07-01

    This work shows the main features of an automatic humidification system that provides drying air matching the environmental conditions of different climate zones. This conditioned air is then used to assess the drying process of different agro-industrial products at the Automation and Control for Agro-industrial Processes Laboratory of the Pontifical Bolivarian University of Bucaramanga, Colombia. The automatic system allows the creation and improvement of control strategies to supply drying air under specified conditions of temperature and humidity. The development of automatic routines to control and acquire real-time data was made possible by the use of robust control systems and suitable instrumentation. The signals are read and directed to a controller memory where they are scaled and transferred to a memory unit. Using the IP address, it is possible to access the data to perform supervision tasks. One important characteristic of this automatic system is the Dynamic Data Exchange (DDE) server, which allows direct communication between the control unit and the computer used to build experimental curves.

  11. X33 Reusable Launch Vehicle Control on Sliding Modes: Concepts for a Control System Development

    NASA Technical Reports Server (NTRS)

    Shtessel, Yuri B.

    1998-01-01

    Control of the X33 reusable launch vehicle is considered. The launch control problem consists of automatic tracking of the launch trajectory which is assumed to be optimally precalculated. It requires development of a reliable, robust control algorithm that can automatically adjust to some changes in mission specifications (mass of payload, target orbit) and the operating environment (atmospheric perturbations, interconnection perturbations from the other subsystems of the vehicle, thrust deficiencies, failure scenarios). One of the effective control strategies successfully applied in nonlinear systems is the Sliding Mode Control. The main advantage of the Sliding Mode Control is that the system's state response in the sliding surface remains insensitive to certain parameter variations, nonlinearities and disturbances. Employing the time scaling concept, a new two (three)-loop structure of the control system for the X33 launch vehicle was developed. Smoothed sliding mode controllers were designed to robustly enforce the given closed-loop dynamics. Simulations of the 3-DOF model of the X33 launch vehicle with the table-look-up models for Euler angle reference profiles and disturbance torque profiles showed a very accurate, robust tracking performance.
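
    A smoothed sliding mode controller of the kind mentioned above can be illustrated on a toy single-axis tracking problem; the double-integrator plant, gains, boundary-layer width, and disturbance below are invented for illustration and are unrelated to the X33 models.

```python
# Smoothed sliding mode tracking sketch for a double integrator with a bounded disturbance.
import numpy as np

def simulate(t_end=10.0, dt=1e-3, lam=2.0, k=5.0, phi=0.05):
    x, v = 0.0, 0.0                                      # position and velocity
    errors = []
    for step in range(int(t_end / dt)):
        t = step * dt
        xd, vd, ad = np.sin(t), np.cos(t), -np.sin(t)    # reference trajectory and its derivatives
        e, edot = x - xd, v - vd
        s = edot + lam * e                               # sliding surface
        u = ad - lam * edot - k * np.tanh(s / phi)       # tanh replaces sign() to smooth chattering
        d = 0.5 * np.sin(5 * t)                          # unknown bounded disturbance (|d| < k)
        a = u + d                                        # double-integrator plant: x'' = u + d
        v += a * dt
        x += v * dt
        errors.append(e)
    return np.array(errors)                              # tracking error settles near zero
```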

  12. A real-time, practical sensor fault-tolerant module for robust EMG pattern recognition.

    PubMed

    Zhang, Xiaorong; Huang, He

    2015-02-19

    Unreliability of surface EMG recordings over time is a challenge for applying EMG pattern recognition (PR)-controlled prostheses in clinical practice. Our previous study proposed a sensor fault-tolerant module (SFTM) by utilizing redundant information in multiple EMG signals. The SFTM consists of multiple sensor fault detectors and a self-recovery mechanism that can identify anomalies in EMG signals and remove the recordings of the disturbed signals from the input of the pattern classifier to recover the PR performance. While the proposed SFTM has shown great promise, the previous design is impractical. A practical SFTM has to be fast enough, lightweight, automatic, and robust under different conditions with or without disturbances. This paper presented a real-time, practical SFTM towards robust EMG PR. A novel fast LDA retraining algorithm and a fully automatic sensor fault detector based on outlier detection were developed, which allowed the SFTM to promptly detect disturbances and recover the PR performance immediately. These components of the SFTM were then integrated with the EMG PR module and tested on five able-bodied subjects and a transradial amputee in real time for classifying multiple hand and wrist motions under different conditions with different disturbance types and levels. The proposed fast LDA retraining algorithm significantly shortened the retraining time from nearly 1 s to less than 4 ms when tested on the embedded system prototype, which demonstrated the feasibility of a nearly "zero-delay" SFTM that is imperceptible to the users. The results of the real-time tests suggested that the SFTM was able to handle different types of disturbances investigated in this study and significantly improve the classification performance when one or multiple EMG signals were disturbed. In addition, the SFTM could also maintain the system's classification performance when there was no disturbance. This paper presented a real-time, lightweight, and automatic SFTM, which paved the way for reliable and robust EMG PR for prosthesis control.
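
    The core recovery idea, dropping the features of a flagged channel and refitting the LDA classifier, can be sketched as below. This is only a conceptual illustration on synthetic data; the paper's fast retraining avoids a full refit by reusing precomputed statistics, and the channel count, feature count and labels here are invented for the example.

```python
# Conceptual sketch: recover EMG pattern-recognition performance by removing
# the features of a disturbed channel and refitting an LDA classifier.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_channels, n_feats = 8, 4                    # e.g. 4 time-domain features per channel
X = rng.normal(size=(600, n_channels * n_feats))
y = rng.integers(0, 5, size=600)              # 5 motion classes (toy labels)

def fit_lda(X, y, excluded=()):
    """Fit LDA on all features except those of the excluded channels."""
    keep = [i for i in range(X.shape[1]) if i // n_feats not in excluded]
    return LinearDiscriminantAnalysis().fit(X[:, keep], y), keep

clf_full, cols_full = fit_lda(X, y)

# Suppose a fault detector flags channel 2 as disturbed at run time:
clf_reduced, cols_reduced = fit_lda(X, y, excluded={2})

x_new = rng.normal(size=(1, n_channels * n_feats))   # incoming feature vector
print("full-set prediction    :", clf_full.predict(x_new[:, cols_full]))
print("fault-tolerant predict :", clf_reduced.predict(x_new[:, cols_reduced]))
```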

  13. Automatic plankton image classification combining multiple view features via multiple kernel learning.

    PubMed

    Zheng, Haiyong; Wang, Ruchen; Yu, Zhibin; Wang, Nan; Gu, Zhaorui; Zheng, Bing

    2017-12-28

    Plankton, including phytoplankton and zooplankton, are the main source of food for organisms in the ocean and form the base of the marine food chain. As fundamental components of marine ecosystems, plankton are very sensitive to environmental changes, and the study of plankton abundance and distribution is crucial in order to understand environmental changes and protect marine ecosystems. This study was carried out to develop a widely applicable plankton classification system with high accuracy for the increasing number of various imaging devices. The literature shows that most plankton image classification systems were limited to only one specific imaging device and a relatively narrow taxonomic scope, and a truly practical system for automatic plankton classification is still lacking; this study partly fills that gap. Inspired by the analysis of the literature and the development of technology, we focused on the requirements of practical application and proposed an automatic system for plankton image classification combining multiple view features via multiple kernel learning (MKL). First, in order to describe the biomorphic characteristics of plankton more completely and comprehensively, we combined general features with robust features, in particular adding features such as the Inner-Distance Shape Context for morphological representation. Second, we divided all the features into different types from multiple views and fed them to multiple classifiers instead of only one, combining the different kernel matrices computed from the different feature types optimally via multiple kernel learning. Moreover, we also applied a feature selection method to choose the optimal feature subsets from redundant features, so as to suit different datasets from different imaging devices. We implemented our proposed classification system on three different datasets across more than 20 categories from phytoplankton to zooplankton. The experimental results validated that our system outperforms state-of-the-art plankton image classification systems in terms of accuracy and robustness. This study demonstrated an automatic plankton image classification system combining multiple view features using multiple kernel learning. The results indicated that multiple view features combined by NLMKL using three kernel functions (linear, polynomial and Gaussian kernel functions) describe and use the information in the features better and thus achieve higher classification accuracy.
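
    The multiple-kernel idea can be illustrated with a small sketch: compute linear, polynomial and Gaussian kernel matrices from different feature views, combine them with weights, and train an SVM on the combined precomputed kernel. The feature views, labels and fixed weights below are invented for illustration; a real MKL method would learn the weights.

```python
# Sketch of combining kernels from multiple feature views (not the paper's NLMKL).
import numpy as np
from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X_shape   = rng.normal(size=(300, 30))   # e.g. shape descriptors (one view)
X_texture = rng.normal(size=(300, 50))   # e.g. texture descriptors (another view)
y = rng.integers(0, 4, size=300)         # toy plankton categories

kernels = [linear_kernel(X_shape),
           polynomial_kernel(X_texture, degree=2),
           rbf_kernel(X_texture, gamma=0.02)]
weights = [0.3, 0.3, 0.4]                # fixed here; learned by MKL in practice

K = sum(w * k for w, k in zip(weights, kernels))   # combined kernel matrix
clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```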

  14. Robust Modal Filtering and Control of the X-56A Model with Simulated Fiber Optic Sensor Failures

    NASA Technical Reports Server (NTRS)

    Suh, Peter M.; Chin, Alexander W.; Mavris, Dimitri N.

    2014-01-01

    The X-56A aircraft is a remotely-piloted aircraft with flutter modes intentionally designed into the flight envelope. The X-56A program must demonstrate flight control while suppressing all unstable modes. A previous X-56A model study demonstrated a distributed-sensing-based active shape and active flutter suppression controller. The controller relies on an estimator which is sensitive to bias. This estimator is improved herein, and a real-time robust estimator is derived and demonstrated on 1530 fiber optic sensors. It is shown in simulation that the estimator can simultaneously reject 230 worst-case fiber optic sensor failures automatically. These sensor failures include locations with high leverage (or importance). To reduce the impact of leverage outliers, concentration based on a Mahalanobis trim criterion is introduced. A redescending M-estimator with Tukey bisquare weights is used to improve location and dispersion estimates within each concentration step in the presence of asymmetry (or leverage). A dynamic simulation is used to compare the concentrated robust estimator to a state-of-the-art real-time robust multivariate estimator. The estimators support a previously-derived mu-optimal shape controller. It is found that during the failure scenario, the concentrated modal estimator keeps the system stable.
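
    The redescending M-estimation step named above can be sketched as a simple iteratively reweighted estimate of location with Tukey bisquare weights. This is only the core weighting rule on toy data; the flight estimator additionally handles dispersion, leverage and the concentration (Mahalanobis trim) steps, and the tuning constant c below is the conventional textbook value, not necessarily the one used in the study.

```python
# Minimal sketch of a redescending M-estimator of location with Tukey bisquare weights.
import numpy as np

def tukey_location(x, c=4.685, n_iter=20):
    """Robust location estimate; samples far from the current estimate
    (scaled by the MAD) receive zero weight (redescending influence)."""
    mu = np.median(x)
    for _ in range(n_iter):
        scale = 1.4826 * np.median(np.abs(x - mu)) + 1e-12     # MAD scale
        u = (x - mu) / (c * scale)
        w = np.where(np.abs(u) < 1.0, (1.0 - u**2) ** 2, 0.0)  # bisquare weights
        mu = np.sum(w * x) / np.sum(w)
    return mu

readings = np.concatenate([np.random.normal(5.0, 0.1, 1000),   # healthy sensors
                           np.full(150, 80.0)])                # failed sensors
print("sample mean     :", readings.mean())
print("robust location :", tukey_location(readings))
```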

  15. Robust Modal Filtering and Control of the X-56A Model with Simulated Fiber Optic Sensor Failures

    NASA Technical Reports Server (NTRS)

    Suh, Peter M.; Chin, Alexander W.; Mavris, Dimitri N.

    2016-01-01

    The X-56A aircraft is a remotely-piloted aircraft with flutter modes intentionally designed into the flight envelope. The X-56A program must demonstrate flight control while suppressing all unstable modes. A previous X-56A model study demonstrated a distributed-sensing-based active shape and active flutter suppression controller. The controller relies on an estimator which is sensitive to bias. This estimator is improved herein, and a real-time robust estimator is derived and demonstrated on 1530 fiber optic sensors. It is shown in simulation that the estimator can simultaneously reject 230 worst-case fiber optic sensor failures automatically. These sensor failures include locations with high leverage (or importance). To reduce the impact of leverage outliers, concentration based on a Mahalanobis trim criterion is introduced. A redescending M-estimator with Tukey bisquare weights is used to improve location and dispersion estimates within each concentration step in the presence of asymmetry (or leverage). A dynamic simulation is used to compare the concentrated robust estimator to a state-of-the-art real-time robust multivariate estimator. The estimators support a previously-derived mu-optimal shape controller. It is found that during the failure scenario, the concentrated modal estimator keeps the system stable.

  16. RAPID COMMUNICATION: A novel time frequency-based 3D Lissajous figure method and its application to the determination of oxygen saturation from the photoplethysmogram

    NASA Astrophysics Data System (ADS)

    Addison, Paul S.; Watson, James N.

    2004-11-01

    We present a novel time-frequency method for the measurement of oxygen saturation using the photoplethysmogram (PPG) signals from a standard pulse oximeter machine. The method utilizes the time-frequency transformation of the red and infrared PPGs to derive a 3D Lissajous figure. By selecting the optimal Lissajous, the method provides an inherently robust basis for the determination of oxygen saturation as regions of the time-frequency plane where high- and low-frequency signal artefacts are to be found are automatically avoided.

  17. A challenging issue: Detection of white matter hyperintensities in neonatal brain MRI.

    PubMed

    Morel, Baptiste; Yongchao Xu; Virzi, Alessio; Geraud, Thierry; Adamsbaum, Catherine; Bloch, Isabelle

    2016-08-01

    The progress of magnetic resonance imaging (MRI) allows for a precise exploration of the brain of premature infants at term equivalent age. The so-called DEHSI (diffuse excessive high signal intensity) of the white matter of premature brains remains a challenging issue in terms of definition, and thus of interpretation. We propose a semi-automatic detection and quantification method of white matter hyperintensities in MRI relying on morphological operators and max-tree representations, which constitutes a powerful tool to help radiologists to improve their interpretation. Results show better reproducibility and robustness than interactive segmentation.

  18. GPU-accelerated automatic identification of robust beam setups for proton and carbon-ion radiotherapy

    NASA Astrophysics Data System (ADS)

    Ammazzalorso, F.; Bednarz, T.; Jelen, U.

    2014-03-01

    We demonstrate acceleration on graphic processing units (GPU) of automatic identification of robust particle therapy beam setups, minimizing negative dosimetric effects of Bragg peak displacement caused by treatment-time patient positioning errors. Our particle therapy research toolkit, RobuR, was extended with OpenCL support and used to implement calculation on GPU of the Port Homogeneity Index, a metric scoring irradiation port robustness through analysis of tissue density patterns prior to dose optimization and computation. Results were benchmarked against an independent native CPU implementation. Numerical results were in agreement between the GPU implementation and native CPU implementation. For 10 skull base cases, the GPU-accelerated implementation was employed to select beam setups for proton and carbon ion treatment plans, which proved to be dosimetrically robust, when recomputed in presence of various simulated positioning errors. From the point of view of performance, average running time on the GPU decreased by at least one order of magnitude compared to the CPU, rendering the GPU-accelerated analysis a feasible step in a clinical treatment planning interactive session. In conclusion, selection of robust particle therapy beam setups can be effectively accelerated on a GPU and become an unintrusive part of the particle therapy treatment planning workflow. Additionally, the speed gain opens new usage scenarios, like interactive analysis manipulation (e.g. constraining of some setup) and re-execution. Finally, through OpenCL portable parallelism, the new implementation is suitable also for CPU-only use, taking advantage of multiple cores, and can potentially exploit types of accelerators other than GPUs.

  19. Robust decentralized power system controller design: Integrated approach

    NASA Astrophysics Data System (ADS)

    Veselý, Vojtech

    2017-09-01

    A unique approach to the design of a gain-scheduled controller (GSC) is presented. The proposed design procedure is based on the Bellman-Lyapunov equation, guaranteed cost and robust stability conditions using the parameter-dependent quadratic stability approach. The obtained feasible design procedures for robust GSC design are in the form of BMI with guaranteed convex stability conditions. The obtained design results and their properties are illustrated in the simultaneous design of controllers for a simple sixth-order turbogenerator model. The results of the obtained design procedure are a PI automatic voltage regulator (AVR) for the synchronous generator, a PI governor controller and a power system stabilizer for the excitation system.

  20. Enabling Rapid and Robust Structural Analysis During Conceptual Design

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.; Padula, Sharon L.; Li, Wu

    2015-01-01

    This paper describes a multi-year effort to add a structural analysis subprocess to a supersonic aircraft conceptual design process. The desired capabilities include parametric geometry, automatic finite element mesh generation, static and aeroelastic analysis, and structural sizing. The paper discusses implementation details of the new subprocess, captures lessons learned, and suggests future improvements. The subprocess quickly compares concepts and robustly handles large changes in wing or fuselage geometry. The subprocess can rank concepts with regard to their structural feasibility and can identify promising regions of the design space. The automated structural analysis subprocess is deemed robust and rapid enough to be included in multidisciplinary conceptual design and optimization studies.

  1. An advancing front Delaunay triangulation algorithm designed for robustness

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1992-01-01

    A new algorithm is described for generating an unstructured mesh about an arbitrary two-dimensional configuration. Mesh points are generated automatically by the algorithm in a manner which ensures a smooth variation of elements, and the resulting triangulation constitutes the Delaunay triangulation of these points. The algorithm combines the mathematical elegance and efficiency of Delaunay triangulation algorithms with the desirable point placement features, boundary integrity, and robustness traditionally associated with advancing-front-type mesh generation strategies. The method offers increased robustness over previous algorithms in that it cannot fail regardless of the initial boundary point distribution and the prescribed cell size distribution throughout the flow-field.

  2. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu, J; Washington University in St Louis, St Louis, MO; Li, H. Harold

    Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters, which is inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance the 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization which was based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, the CLAHE algorithm block size and clip limiting parameters. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by the basic window-level adjustment process were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and is able to significantly outperform the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.
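
    The processing chain (high-pass filtering by subtracting a Gaussian-smoothed copy, then CLAHE, scored by entropy) can be sketched with standard OpenCV calls. The parameter values and the file name "setup_image.png" are placeholders, not the optimised values found by the paper's interior-point search.

```python
# Sketch of the enhancement chain: high-pass filter, CLAHE, entropy score.
import cv2
import numpy as np

def enhance(img, gauss_weight=0.7, clip_limit=2.0, tile=8):
    img = img.astype(np.float32)
    blur = cv2.GaussianBlur(img, (0, 0), sigmaX=15)
    hp = img - gauss_weight * blur                      # high-pass component
    hp = cv2.normalize(hp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(tile, tile))
    return clahe.apply(hp)

def entropy(img):
    hist, _ = np.histogram(img, bins=256, range=(0, 256), density=True)
    hist = hist[hist > 0]
    return float(-(hist * np.log2(hist)).sum())

img = cv2.imread("setup_image.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
out = enhance(img)
print("entropy before/after:", entropy(img), entropy(out))
```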

  3. Ratbot automatic navigation by electrical reward stimulation based on distance measurement in unknown environments.

    PubMed

    Gao, Liqiang; Sun, Chao; Zhang, Chen; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang

    2013-01-01

    Traditional automatic navigation methods for bio-robots are constrained to configured environments and thus cannot be applied to tasks in unknown environments. By disregarding the bio-robot's own innate abilities and treating bio-robots in the same way as mechanical robots, those methods neglect the intelligent behavior of animals. This paper proposes a novel ratbot automatic navigation method in unknown environments using only reward stimulation and distance measurement. By utilizing the rat's habit of thigmotaxis and its reward-seeking behavior, this method is able to incorporate the rat's intrinsic intelligence of obstacle avoidance and path searching into navigation. Experimental results show that this method works robustly and can successfully navigate the ratbot to a target in the unknown environment. This work might provide a solid basis for the application of ratbots and also has significant implications for the automatic navigation of other bio-robots.

  4. Aerodynamic design applying automatic differentiation and using robust variable fidelity optimization

    NASA Astrophysics Data System (ADS)

    Takemiya, Tetsushi

    In modern aerospace engineering, the physics-based computational design method is becoming more important, as it is more efficient than experiments and more suitable for designing new types of aircraft (e.g., unmanned aerial vehicles or supersonic business jets) than the conventional design method, which heavily relies on historical data. To enhance the reliability of the physics-based computational design method, researchers have made tremendous efforts to improve the fidelity of models. However, high-fidelity models require longer computational time, so the advantage of efficiency is partially lost. This problem has been overcome with the development of variable fidelity optimization (VFO). In VFO, different fidelity models are simultaneously employed in order to improve the speed and the accuracy of convergence in an optimization process. Among the various types of VFO methods, one of the most promising methods is the approximation management framework (AMF). In the AMF, objective and constraint functions of a low-fidelity model are scaled at a design point so that the scaled functions, which are referred to as "surrogate functions," match those of a high-fidelity model. Since the scaling functions and the low-fidelity model constitute the surrogate functions, evaluating the surrogate functions is faster than evaluating the high-fidelity model. Therefore, in the optimization process, in which gradient-based optimization is implemented and thus many function calls are required, the surrogate functions are used instead of the high-fidelity model to obtain a new design point. The best feature of the AMF is that it may converge to a local optimum of the high-fidelity model in much less computational time than optimizing with the high-fidelity model alone. However, through literature surveys and implementations of the AMF, the author found that (1) the AMF is very vulnerable when the computational analysis models have numerical noise, which is very common in high-fidelity models, and that (2) the AMF terminates optimization erroneously when the optimization problems have constraints. The first problem is due to inaccuracy in computing derivatives in the AMF, and the second problem is due to erroneous treatment of the trust region ratio, which sets the size of the domain for an optimization in the AMF. In order to solve the first problem of the AMF, the automatic differentiation (AD) technique, which reads the codes of analysis models and automatically generates new derivative codes based on some mathematical rules, is applied. If derivatives are computed with the generated derivative code, they are analytical, and the required computational time is independent of the number of design variables, which is very advantageous for realistic aerospace engineering problems. However, if analysis models implement iterative computations such as computational fluid dynamics (CFD), which solves system partial differential equations iteratively, computing derivatives through AD requires a massive amount of memory. The author solved this deficiency by modifying the AD approach and developing a more efficient implementation with CFD, and successfully applied AD to general CFD software. In order to solve the second problem of the AMF, the governing equation of the trust region ratio, which is very strict against the violation of constraints, is modified so that it can accept the violation of constraints within some tolerance.
By accepting violations of constraints during the optimization process, the AMF can continue optimization without terminating prematurely and eventually find the true optimum design point. With these modifications, the AMF is referred to as "Robust AMF," and it is applied to airfoil and wing aerodynamic design problems using Euler CFD software. The former problem has 21 design variables, and the latter 64. In both problems, derivatives computed with the proposed AD method are first compared with those computed with the finite difference (FD) method, and then, the Robust AMF is implemented along with the sequential quadratic programming (SQP) optimization method with only high-fidelity models. The proposed AD method computes derivatives more accurately and faster than the FD method, and the Robust AMF successfully optimizes shapes of the airfoil and the wing in a much shorter time than SQP with only high-fidelity models. These results clearly show the effectiveness of the Robust AMF. Finally, the feasibility of reducing computational time for calculating derivatives and the necessity of an AMF with an optimum design point always in the feasible region are discussed as future work.
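
    The automatic-differentiation idea referred to above can be illustrated with a toy forward-mode implementation based on dual numbers: derivatives propagate exactly through arithmetic, with no finite-difference step size to tune. The function pressure_coeff below is an invented smooth stand-in for an analysis code, not anything from the thesis.

```python
# Toy forward-mode automatic differentiation with dual numbers.
import math

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)   # product rule
    __rmul__ = __mul__

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)   # chain rule

def pressure_coeff(alpha):
    """Hypothetical smooth function standing in for an aerodynamic analysis code."""
    return 0.5 * alpha * alpha + sin(alpha) * alpha

a = Dual(0.3, 1.0)             # seed derivative d(alpha)/d(alpha) = 1
out = pressure_coeff(a)
print("value     :", out.val)
print("derivative:", out.dot)  # exact d/d(alpha), no step-size error
```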

  5. Automatic online spike sorting with singular value decomposition and fuzzy C-mean clustering

    PubMed Central

    2012-01-01

    Background Understanding how neurons contribute to perception, motor functions and cognition requires the reliable detection of spiking activity of individual neurons during a number of different experimental conditions. An important problem in computational neuroscience is thus to develop algorithms to automatically detect and sort the spiking activity of individual neurons from extracellular recordings. While many algorithms for spike sorting exist, the problem of accurate and fast online sorting still remains a challenging issue. Results Here we present a novel software tool, called FSPS (Fuzzy SPike Sorting), which is designed to optimize: (i) fast and accurate detection, (ii) offline sorting and (iii) online classification of neuronal spikes with very limited or null human intervention. The method is based on a combination of Singular Value Decomposition for fast and highly accurate pre-processing of spike shapes, unsupervised Fuzzy C-mean, high-resolution alignment of extracted spike waveforms, optimal selection of the number of features to retain, automatic identification of the number of clusters, and quantitative quality assessment of resulting clusters independent of their size. After being trained on a short testing data stream, the method can reliably perform supervised online classification and monitoring of single neuron activity. The generalized procedure has been implemented in our FSPS spike sorting software (available free for non-commercial academic applications at the address: http://www.spikesorting.com) using LabVIEW (National Instruments, USA). We evaluated the performance of our algorithm both on benchmark simulated datasets with different levels of background noise and on real extracellular recordings from premotor cortex of Macaque monkeys. The results of these tests showed an excellent accuracy in discriminating low-amplitude and overlapping spikes under strong background noise. The performance of our method is competitive with respect to other robust spike sorting algorithms. Conclusions This new software provides neuroscience laboratories with a new tool for fast and robust online classification of single neuron activity. This feature could become crucial in situations when online spike detection from multiple electrodes is paramount, such as in human clinical recordings or in brain-computer interfaces. PMID:22871125
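
    The two core ingredients named above, SVD-based reduction of spike waveforms and fuzzy C-means clustering, can be sketched on synthetic spikes as below. Alignment, automatic cluster-count selection and the quality checks of the full method are omitted, and all waveforms are invented toy data.

```python
# Sketch: SVD features of spike waveforms followed by fuzzy C-means clustering.
import numpy as np

def fuzzy_cmeans(X, n_clusters, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((n_clusters, len(X)))
    U /= U.sum(axis=0)                                  # random initial memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=0)                              # normalise memberships per sample
    return centers, U

# Toy "spike waveforms": two templates plus noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 48)
templates = np.stack([np.exp(-((t - 0.3) / 0.05) ** 2),
                      -np.exp(-((t - 0.5) / 0.08) ** 2)])
spikes = templates[rng.integers(0, 2, 400)] + 0.05 * rng.normal(size=(400, 48))

# SVD for fast, low-dimensional spike features (first 3 components).
U_svd, S, Vt = np.linalg.svd(spikes - spikes.mean(axis=0), full_matrices=False)
features = U_svd[:, :3] * S[:3]

centers, memberships = fuzzy_cmeans(features, n_clusters=2)
labels = memberships.argmax(axis=0)
print("cluster sizes:", np.bincount(labels))
```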

  6. Automatic online spike sorting with singular value decomposition and fuzzy C-mean clustering.

    PubMed

    Oliynyk, Andriy; Bonifazzi, Claudio; Montani, Fernando; Fadiga, Luciano

    2012-08-08

    Understanding how neurons contribute to perception, motor functions and cognition requires the reliable detection of spiking activity of individual neurons during a number of different experimental conditions. An important problem in computational neuroscience is thus to develop algorithms to automatically detect and sort the spiking activity of individual neurons from extracellular recordings. While many algorithms for spike sorting exist, the problem of accurate and fast online sorting still remains a challenging issue. Here we present a novel software tool, called FSPS (Fuzzy SPike Sorting), which is designed to optimize: (i) fast and accurate detection, (ii) offline sorting and (iii) online classification of neuronal spikes with very limited or null human intervention. The method is based on a combination of Singular Value Decomposition for fast and highly accurate pre-processing of spike shapes, unsupervised Fuzzy C-mean, high-resolution alignment of extracted spike waveforms, optimal selection of the number of features to retain, automatic identification of the number of clusters, and quantitative quality assessment of resulting clusters independent of their size. After being trained on a short testing data stream, the method can reliably perform supervised online classification and monitoring of single neuron activity. The generalized procedure has been implemented in our FSPS spike sorting software (available free for non-commercial academic applications at the address: http://www.spikesorting.com) using LabVIEW (National Instruments, USA). We evaluated the performance of our algorithm both on benchmark simulated datasets with different levels of background noise and on real extracellular recordings from premotor cortex of Macaque monkeys. The results of these tests showed an excellent accuracy in discriminating low-amplitude and overlapping spikes under strong background noise. The performance of our method is competitive with respect to other robust spike sorting algorithms. This new software provides neuroscience laboratories with a new tool for fast and robust online classification of single neuron activity. This feature could become crucial in situations when online spike detection from multiple electrodes is paramount, such as in human clinical recordings or in brain-computer interfaces.

  7. Standard cell electrical and physical variability analysis based on automatic physical measurement for design-for-manufacturing purposes

    NASA Astrophysics Data System (ADS)

    Shauly, Eitan; Parag, Allon; Khmaisy, Hafez; Krispil, Uri; Adan, Ofer; Levi, Shimon; Latinski, Sergey; Schwarzband, Ishai; Rotstein, Israel

    2011-04-01

    A fully automated system for process variability analysis of high-density standard cells was developed. The system consists of layout analysis with device mapping: device type, location, configuration and more. The mapping step was created by a simple DRC run-set. This database was then used as an input for choosing locations for SEM images and for specific layout parameter extraction, used by SPICE simulation. This method was used to analyze large arrays of standard cell blocks, manufactured using Tower TS013LV (Low Voltage for high-speed applications) platforms. Variability of physical parameters such as Lgate and line-width roughness, as well as of electrical parameters such as drive current (Ion) and off current (Ioff), was calculated and statistically analyzed in order to understand the variability root cause. Comparison between transistors having the same W/L but with different layout configurations and different layout environments (around the transistor) was made in terms of performance as well as process variability. We successfully defined "robust" and "less-robust" transistor configurations, and updated guidelines for Design-for-Manufacturing (DfM).

  8. Automatic target detection using binary template matching

    NASA Astrophysics Data System (ADS)

    Jun, Dong-San; Sun, Sun-Gu; Park, HyunWook

    2005-03-01

    This paper presents a new automatic target detection (ATD) algorithm to detect targets such as battle tanks and armored personnel carriers in ground-to-ground scenarios. Whereas most ATD algorithms were developed for forward-looking infrared (FLIR) images, we have developed an ATD algorithm for charge-coupled device (CCD) images, which have superior quality to FLIR images in daylight. The proposed algorithm uses fast binary template matching with an adaptive binarization, which is robust to various light conditions in CCD images and saves computation time. Experimental results show that the proposed method has good detection performance.

  9. Modeling of electromagnetic brakes for enhanced braking capabilities

    NASA Astrophysics Data System (ADS)

    Kachroo, Pushkin; Ming, Qian

    1998-01-01

    In automatic highway systems, automatic brake actuation is a very important part of the overall control of the vehicle. Hence, a faster response and a robust braking system are crucial. This paper describes electromagnetic brakes as a supplementary system for regular friction brakes. This system provides better response time for emergency situations, and in general keeps the friction brake working longer and safer. A new mathematical model for electromagnetic brakes is proposed to describe their static characteristics. The performance of the new mathematical model is better than the other three models available in the literature.

  10. Shape and texture fused recognition of flying targets

    NASA Astrophysics Data System (ADS)

    Kovács, Levente; Utasi, Ákos; Kovács, Andrea; Szirányi, Tamás

    2011-06-01

    This paper presents visual detection and recognition of flying targets (e.g. planes, missiles) based on automatically extracted shape and object texture information, for application areas like alerting, recognition and tracking. Targets are extracted based on robust background modeling and a novel contour extraction approach, and object recognition is done by comparisons to shape and texture based query results on a previously gathered real life object dataset. Application areas involve passive defense scenarios, including automatic object detection and tracking with cheap commodity hardware components (CPU, camera and GPS).

  11. Automatic threshold optimization in nonlinear energy operator based spike detection.

    PubMed

    Malik, Muhammad H; Saeed, Maryam; Kamboh, Awais M

    2016-08-01

    In neural spike sorting systems, the performance of the spike detector has to be maximized because it affects the performance of all subsequent blocks. The non-linear energy operator (NEO) is a popular spike detector due to its detection accuracy and its hardware-friendly architecture. However, it involves a thresholding stage, whose value is usually approximated and is thus not optimal. This approximation deteriorates the performance in real-time systems where signal-to-noise ratio (SNR) estimation is a challenge, especially at lower SNRs. In this paper, we propose an automatic and robust threshold calculation method using an empirical gradient technique. The method is tested on two different datasets. The results show that our optimized threshold improves the detection accuracy in both high SNR and low SNR signals. Boxplots are presented that provide a statistical analysis of the improvements in accuracy; for instance, the 75th percentile was at 98.7% and 93.5% for the optimized NEO threshold and the traditional NEO threshold, respectively.
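
    For reference, NEO computes psi[n] = x[n]^2 - x[n-1]*x[n+1] and compares it to a threshold. The sketch below uses the common "C times the mean NEO energy" heuristic threshold on synthetic data, not the gradient-optimised threshold proposed in the paper; the constants and the injected spike shape are illustrative.

```python
# Sketch of NEO-based spike detection with a simple automatic threshold.
import numpy as np

def neo(x):
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x, c=8.0, refractory=30):
    psi = neo(x)
    thr = c * psi.mean()                     # heuristic automatic threshold
    idx = np.flatnonzero(psi > thr)
    spikes, last = [], -refractory
    for i in idx:                            # enforce a refractory period
        if i - last >= refractory:
            spikes.append(i)
            last = i
    return np.array(spikes), thr

rng = np.random.default_rng(0)
x = 0.1 * rng.normal(size=30000)
true_pos = rng.choice(np.arange(100, 29900), 60, replace=False)
for p in true_pos:
    x[p:p + 5] += np.array([0.4, 1.0, 0.6, -0.5, -0.2])   # injected toy spikes
spikes, thr = detect_spikes(x)
print("threshold:", thr, "detected:", len(spikes), "true:", len(true_pos))
```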

  12. Potential fault region detection in TFDS images based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Sun, Junhua; Xiao, Zhongwen

    2016-10-01

    In recent years, more than 300 sets of the Trouble of Running Freight Train Detection System (TFDS) have been installed on railways to monitor the safety of running freight trains in China. However, TFDS is simply responsible for capturing, transmitting, and storing images, and fails to recognize faults automatically due to difficulties such as the diversity and complexity of faults and some low-quality images. To improve the performance of automatic fault recognition, it is of great importance to locate the potential fault areas. In this paper, we first introduce a convolutional neural network (CNN) model to TFDS and propose a potential fault region detection system (PFRDS) for simultaneously detecting four typical types of potential fault regions (PFRs). The experimental results show that this system has a higher performance of image detection for PFRs in TFDS. An average detection recall of 98.95% and precision of 100% are obtained, demonstrating the high detection ability and robustness against various poor imaging situations.

  13. Multi-frame super-resolution with quality self-assessment for retinal fundus videos.

    PubMed

    Köhler, Thomas; Brost, Alexander; Mogalle, Katja; Zhang, Qianyi; Köhler, Christiane; Michelson, Georg; Hornegger, Joachim; Tornow, Ralf P

    2014-01-01

    This paper proposes a novel super-resolution framework to reconstruct high-resolution fundus images from multiple low-resolution video frames in retinal fundus imaging. Natural eye movements during an examination are used as a cue for super-resolution in a robust maximum a-posteriori scheme. In order to compensate heterogeneous illumination on the fundus, we integrate retrospective illumination correction for photometric registration to the underlying imaging model. Our method utilizes quality self-assessment to provide objective quality scores for reconstructed images as well as to select regularization parameters automatically. In our evaluation on real data acquired from six human subjects with a low-cost video camera, the proposed method achieved considerable enhancements of low-resolution frames and improved noise and sharpness characteristics by 74%. In terms of image analysis, we demonstrate the importance of our method for the improvement of automatic blood vessel segmentation as an example application, where the sensitivity was increased by 13% using super-resolution reconstruction.

  14. The Fortran-P Translator: Towards Automatic Translation of Fortran 77 Programs for Massively Parallel Processors

    DOE PAGES

    O'keefe, Matthew; Parr, Terence; Edgar, B. Kevin; ...

    1995-01-01

    Massively parallel processors (MPPs) hold the promise of extremely high performance that, if realized, could be used to study problems of unprecedented size and complexity. One of the primary stumbling blocks to this promise has been the lack of tools to translate application codes to MPP form. In this article we show how application codes written in a subset of Fortran 77, called Fortran-P, can be translated to achieve good performance on several massively parallel machines. This subset can express codes that are self-similar, where the algorithm applied to the global data domain is also applied to each subdomain. We have found many codes that match the Fortran-P programming style and have converted them using our tools. We believe a self-similar coding style will accomplish what a vectorizable style has accomplished for vector machines by allowing the construction of robust, user-friendly, automatic translation systems that increase programmer productivity and generate fast, efficient code for MPPs.

  15. Computer-based route-definition system for peripheral bronchoscopy.

    PubMed

    Graham, Michael W; Gibbs, Jason D; Higgins, William E

    2012-04-01

    Multi-detector computed tomography (MDCT) scanners produce high-resolution images of the chest. Given a patient's MDCT scan, a physician can use an image-guided intervention system to first plan and later perform bronchoscopy to diagnostic sites situated deep in the lung periphery. An accurate definition of complete routes through the airway tree leading to the diagnostic sites, however, is vital for avoiding navigation errors during image-guided bronchoscopy. We present a system for the robust definition of complete airway routes suitable for image-guided bronchoscopy. The system incorporates both automatic and semiautomatic MDCT analysis methods for this purpose. Using an intuitive graphical user interface, the user invokes automatic analysis on a patient's MDCT scan to produce a series of preliminary routes. Next, the user visually inspects each route and quickly corrects the observed route defects using the built-in semiautomatic methods. Application of the system to a human study for the planning and guidance of peripheral bronchoscopy demonstrates the efficacy of the system.

  16. The ALICE-HMPID Detector Control System: Its evolution towards an expert and adaptive system

    NASA Astrophysics Data System (ADS)

    De Cataldo, G.; Franco, A.; Pastore, C.; Sgura, I.; Volpe, G.

    2011-05-01

    The High Momentum Particle IDentification (HMPID) detector is a proximity focusing Ring Imaging Cherenkov (RICH) for charged hadron identification. The HMPID is based on liquid C6F14 as the radiator medium and on a 10 m2 CsI-coated, pad-segmented photocathode of MWPCs for UV Cherenkov photon detection. To ensure full remote control, the HMPID is equipped with a detector control system (DCS) responding to industrial standards for robustness and reliability. It has been implemented using PVSS as the Slow Control And Data Acquisition (SCADA) environment, Programmable Logic Controllers as control devices and Finite State Machines for modular and automatic command execution. In the perspective of reducing human presence at the experiment site, this paper focuses on the DCS evolution towards an expert and adaptive control system, providing, respectively, automatic error recovery and stable detector performance. HAL9000, the first prototype of the HMPID expert system, is then presented. Finally an analysis of the possible application of the adaptive features is provided.

  17. Multichannel analysis of surface wave method with the autojuggie

    USGS Publications Warehouse

    Tian, G.; Steeples, D.W.; Xia, J.; Miller, R.D.; Spikes, K.T.; Ralston, M.D.

    2003-01-01

    The shear (S)-wave velocity of near-surface materials and its effect on seismic-wave propagation are of fundamental interest in many engineering, environmental, and groundwater studies. The multichannel analysis of surface wave (MASW) method provides a robust, efficient, and accurate tool to observe near-surface S-wave velocity. A recently developed device used to place large numbers of closely spaced geophones simultaneously and automatically (the 'autojuggie') is shown here to be applicable to the collection of MASW data. In order to demonstrate the use of the autojuggie in the MASW method, we compared high-frequency surface-wave data acquired from conventionally planted geophones (control line) to data collected in parallel with the automatically planted geophones attached to steel bars (test line). The results demonstrate that the autojuggie can be applied in the MASW method. Implementation of the autojuggie in very shallow MASW surveys could drastically reduce the time required and costs incurred in such surveys. © 2003 Elsevier Science Ltd. All rights reserved.

  18. Automated planning of MRI scans of knee joints

    NASA Astrophysics Data System (ADS)

    Bystrov, Daniel; Pekar, Vladimir; Young, Stewart; Dries, Sebastian P. M.; Heese, Harald S.; van Muiswinkel, Arianne M.

    2007-03-01

    A novel and robust method for automatic scan planning of MRI examinations of knee joints is presented. Clinical knee examinations require acquisition of a 'scout' image, in which the operator manually specifies the scan volume orientations (off-centres, angulations, field-of-view) for the subsequent diagnostic scans. This planning task is time-consuming and requires skilled operators. The proposed automated planning system determines orientations for the diagnostic scan by using a set of anatomical landmarks derived by adapting active shape models of the femur, patella and tibia to the acquired scout images. The expert knowledge required to position scan geometries is learned from previous manually planned scans, allowing individual preferences to be taken into account. The system is able to automatically discriminate between left and right knees. This makes it possible to use and merge training data from both left and right knees, and to automatically transform all learned scan geometries to the side for which a plan is required, providing convenient integration of the automated scan planning system into the clinical routine. Assessment of the method on the basis of 88 images from 31 different individuals, exhibiting strong anatomical and positional variability, demonstrates the success, robustness and efficiency of all parts of the proposed approach, which thus has the potential to significantly improve the clinical workflow.

  19. Automatic segmentation of white matter hyperintensities robust to multicentre acquisition and pathological variability

    NASA Astrophysics Data System (ADS)

    Samaille, T.; Colliot, O.; Cuingnet, R.; Jouvent, E.; Chabriat, H.; Dormont, D.; Chupin, M.

    2012-02-01

    White matter hyperintensities (WMH), commonly seen on FLAIR images in elderly people, are a risk factor for dementia onset and have been associated with motor and cognitive deficits. We present here a method to fully automatically segment WMH from T1 and FLAIR images. Iterative steps of non-linear diffusion followed by watershed segmentation were applied to the FLAIR images until convergence. The diffusivity function and associated contrast parameter were carefully designed to adapt to WMH segmentation. This resulted in piecewise-constant images with enhanced contrast between lesions and surrounding tissues. Selection of WMH areas was based on two characteristics: 1) a threshold automatically computed for intensity selection, 2) main location of areas in white matter. False positive areas were finally removed based on their proximity to the cerebrospinal fluid/grey matter interface. Evaluation was performed on 67 patients: 24 with amnestic mild cognitive impairment (MCI), from five different centres, and 43 with Cerebral Autosomal Dominant Arteriopathy with Subcortical Infarcts and Leukoaraiosis (CADASIL) acquired in a single centre. Results showed excellent volume agreement with manual delineation (Pearson coefficient: r=0.97, p<0.001) and substantial spatial correspondence (Similarity Index: 72%+/-16%). Our method appeared robust to acquisition differences across the centres as well as to pathological variability.
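
    The non-linear diffusion step can be illustrated with the standard Perona-Malik scheme below, in which smoothing is suppressed across strong edges via an edge-stopping diffusivity. The paper designs a specific diffusivity and contrast parameter tailored to WMH; the function, parameter values and random stand-in image here are only illustrative.

```python
# Generic Perona-Malik style non-linear diffusion (illustrative only).
import numpy as np

def diffuse(img, n_iter=30, kappa=15.0, dt=0.15):
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # differences to the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        g = lambda d: np.exp(-(d / kappa) ** 2)        # edge-stopping diffusivity
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

flair = np.random.default_rng(0).random((128, 128)) * 100   # stand-in for a FLAIR slice
smoothed = diffuse(flair)
print("intensity range before/after:", np.ptp(flair), np.ptp(smoothed))
```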

  20. Automatic Cone Photoreceptor Localisation in Healthy and Stargardt Afflicted Retinas Using Deep Learning.

    PubMed

    Davidson, Benjamin; Kalitzeos, Angelos; Carroll, Joseph; Dubra, Alfredo; Ourselin, Sebastien; Michaelides, Michel; Bergeles, Christos

    2018-05-21

    We present a robust deep learning framework for the automatic localisation of cone photoreceptor cells in Adaptive Optics Scanning Light Ophthalmoscope (AOSLO) split-detection images. Monitoring cone photoreceptors with AOSLO imaging grants an excellent view into retinal structure and health, provides new perspectives into well known pathologies, and allows clinicians to monitor the effectiveness of experimental treatments. The MultiDimensional Recurrent Neural Network (MDRNN) approach developed in this paper is the first method capable of reliably and automatically identifying cones in both healthy retinas and retinas afflicted with Stargardt disease. Therefore, it represents a leap forward in the computational image processing of AOSLO images, and can provide clinical support in on-going longitudinal studies of disease progression and therapy. We validate our method using images from healthy subjects and subjects with the inherited retinal pathology Stargardt disease, which significantly alters image quality and cone density. We conduct a thorough comparison of our method with current state-of-the-art methods, and demonstrate that the proposed approach is both more accurate and appreciably faster in localizing cones. As further validation to the method's robustness, we demonstrate it can be successfully applied to images of retinas with pathologies not present in the training data: achromatopsia, and retinitis pigmentosa.

  1. Automatic quantitative analysis of in-stent restenosis using FD-OCT in vivo intra-arterial imaging.

    PubMed

    Mandelias, Kostas; Tsantis, Stavros; Spiliopoulos, Stavros; Katsakiori, Paraskevi F; Karnabatidis, Dimitris; Nikiforidis, George C; Kagadis, George C

    2013-06-01

    A new segmentation technique is implemented for automatic lumen area extraction and stent strut detection in intravascular optical coherence tomography (OCT) images for the purpose of quantitative analysis of in-stent restenosis (ISR). In addition, a user-friendly graphical user interface (GUI) is developed based on the employed algorithm toward clinical use. Four clinical datasets of frequency-domain OCT scans of the human femoral artery were analyzed. First, a segmentation method based on fuzzy C means (FCM) clustering and wavelet transform (WT) was applied toward inner luminal contour extraction. Subsequently, stent strut positions were detected by utilizing metrics derived from the local maxima of the wavelet transform into the FCM membership function. The inner lumen contour and the position of stent strut were extracted with high precision. Compared to manual segmentation by an expert physician, the automatic lumen contour delineation had an average overlap value of 0.917 ± 0.065 for all OCT images included in the study. The strut detection procedure achieved an overall accuracy of 93.80% and successfully identified 9.57 ± 0.5 struts for every OCT image. Processing time was confined to approximately 2.5 s per OCT frame. A new fast and robust automatic segmentation technique combining FCM and WT for lumen border extraction and strut detection in intravascular OCT images was designed and implemented. The proposed algorithm integrated in a GUI represents a step forward toward the employment of automated quantitative analysis of ISR in clinical practice.

  2. A fast automatic target detection method for detecting ships in infrared scenes

    NASA Astrophysics Data System (ADS)

    Özertem, Kemal Arda

    2016-05-01

    Automatic target detection in infrared scenes is a vital task for many application areas like defense, security and border surveillance. For anti-ship missiles, having a fast and robust ship detection algorithm is crucial for overall system performance. In this paper, a straight-forward yet effective ship detection method for infrared scenes is introduced. First, morphological grayscale reconstruction is applied to the input image, followed by automatic thresholding of the suppressed image. For the segmentation step, connected component analysis is employed to obtain target candidate regions. At this point, the detection is still vulnerable to outliers such as small objects with relatively high intensity values or clouds. To deal with this drawback, a post-processing stage is introduced. For the post-processing stage, two different methods are used. First, noisy detection results are rejected with respect to target size. Second, the waterline is detected by using the Hough transform, and the detection results that are located above the waterline with a small margin are rejected. After the post-processing stage, there are still undesired holes remaining, which cause one object to be detected as multiple objects or prevent an object from being detected as a whole. To improve the detection performance, another automatic thresholding is applied only to the target candidate regions. Finally, the two detection results are fused and the post-processing stage is repeated to obtain the final detection result. The performance of the overall methodology is tested with real-world infrared test data.
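
    The main stages named above can be sketched with common OpenCV building blocks: automatic (Otsu) thresholding, connected components for target candidates, size-based rejection, and a Hough-transform waterline to discard detections above the horizon. The file name, margins and area threshold are placeholders, and the details differ from the paper's method.

```python
# Sketch of an infrared ship-detection pipeline using standard OpenCV calls.
import cv2
import numpy as np

img = cv2.imread("ir_scene.png", cv2.IMREAD_GRAYSCALE)      # hypothetical frame

# 1) automatic thresholding of the (pre-suppressed) image
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 2) connected-component analysis for candidate regions
n, labels, stats, _ = cv2.connectedComponentsWithStats(bw)

# 3) waterline estimation with the Hough transform on edges (near-horizontal lines)
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=150)
horiz = ([l[0][0] for l in lines if abs(l[0][1] - np.pi / 2) < 0.2]
         if lines is not None else [])
waterline_y = int(np.mean(horiz)) if horiz else 0

# 4) reject small candidates and those above the waterline (with a margin)
margin, min_area = 5, 40
targets = [stats[i] for i in range(1, n)
           if stats[i, cv2.CC_STAT_AREA] >= min_area
           and stats[i, cv2.CC_STAT_TOP] >= waterline_y - margin]
print("candidate targets:", len(targets))
```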

  3. Robust detrending, rereferencing, outlier detection, and inpainting for multichannel data.

    PubMed

    de Cheveigné, Alain; Arzounian, Dorothée

    2018-05-15

    Electroencephalography (EEG), magnetoencephalography (MEG) and related techniques are prone to glitches, slow drift, steps, etc., that contaminate the data and interfere with the analysis and interpretation. These artifacts are usually addressed in a preprocessing phase that attempts to remove them or minimize their impact. This paper offers a set of useful techniques for this purpose: robust detrending, robust rereferencing, outlier detection, data interpolation (inpainting), step removal, and filter ringing artifact removal. These techniques provide a less wasteful alternative to discarding corrupted trials or channels, and they are relatively immune to artifacts that disrupt alternative approaches such as filtering. Robust detrending allows slow drifts and common mode signals to be factored out while avoiding the deleterious effects of glitches. Robust rereferencing reduces the impact of artifacts on the reference. Inpainting allows corrupt data to be interpolated from intact parts based on the correlation structure estimated over the intact parts. Outlier detection allows the corrupt parts to be identified. Step removal fixes the high-amplitude flux jump artifacts that are common with some MEG systems. Ringing removal allows the ringing response of the antialiasing filter to glitches (steps, pulses) to be suppressed. The performance of the methods is illustrated and evaluated using synthetic data and data from real EEG and MEG systems. These methods, which are mainly automatic and require little tuning, can greatly improve the quality of the data. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
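
    The robust-detrending idea can be sketched as follows: fit a slow polynomial trend by weighted least squares, downweight samples that deviate strongly (glitches), and iterate so the fit is driven by clean data. This is a minimal illustration on synthetic data, not the toolbox implementation described in the paper; the polynomial order and outlier threshold are arbitrary.

```python
# Minimal sketch of robust (outlier-insensitive) polynomial detrending.
import numpy as np

def robust_detrend(x, order=3, n_iter=5, thresh=3.0):
    t = np.linspace(-1.0, 1.0, len(x))
    basis = np.vander(t, order + 1)                # polynomial basis
    w = np.ones(len(x))
    for _ in range(n_iter):
        coef, *_ = np.linalg.lstsq(basis * w[:, None], x * w, rcond=None)
        resid = x - basis @ coef
        sigma = 1.4826 * np.median(np.abs(resid)) + 1e-12
        w = (np.abs(resid) < thresh * sigma).astype(float)   # mask glitch samples
    return x - basis @ coef

rng = np.random.default_rng(0)
n = 2000
signal = np.sin(2 * np.pi * 10 * np.arange(n) / 500.0)
drift = 5e-6 * (np.arange(n) - n / 2) ** 2          # slow drift
data = signal + drift + rng.normal(scale=0.2, size=n)
data[800:820] += 50.0                               # a large glitch
clean = robust_detrend(data)
print("residual drift (std):", np.std(clean - signal))
```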

  4. "Rate My Therapist": Automated Detection of Empathy in Drug and Alcohol Counseling via Speech and Language Processing.

    PubMed

    Xiao, Bo; Imel, Zac E; Georgiou, Panayiotis G; Atkins, David C; Narayanan, Shrikanth S

    2015-01-01

    The technology for evaluating patient-provider interactions in psychotherapy (observational coding) has not changed in 70 years. It is labor-intensive, error prone, and expensive, limiting its use in evaluating psychotherapy in the real world. Engineering solutions from speech and language processing provide new methods for the automatic evaluation of provider ratings from session recordings. The primary data are 200 Motivational Interviewing (MI) sessions from a study on MI training methods with observer ratings of counselor empathy. Automatic Speech Recognition (ASR) was used to transcribe sessions, and the resulting words were used in a text-based predictive model of empathy. Two supporting datasets trained the speech processing tasks including ASR (1200 transcripts from heterogeneous psychotherapy sessions and 153 transcripts and session recordings from 5 MI clinical trials). The accuracy of computationally-derived empathy ratings was evaluated against human ratings for each provider. Computationally-derived empathy scores and classifications (high vs. low) were highly accurate against human-based codes and classifications, with a correlation of 0.65 and F-score (a weighted average of sensitivity and specificity) of 0.86, respectively. Empathy prediction using human transcription as input (as opposed to ASR) resulted in a slight increase in prediction accuracies, suggesting that the fully automatic system with ASR is relatively robust. Using speech and language processing methods, it is possible to generate accurate predictions of provider performance in psychotherapy from audio recordings alone. This technology can support large-scale evaluation of psychotherapy for dissemination and process studies.

  5. Sma3s: a three-step modular annotator for large sequence datasets.

    PubMed

    Muñoz-Mérida, Antonio; Viguera, Enrique; Claros, M Gonzalo; Trelles, Oswaldo; Pérez-Pulido, Antonio J

    2014-08-01

    Automatic sequence annotation is an essential component of modern 'omics' studies, which aim to extract information from large collections of sequence data. Most existing tools use sequence homology to establish evolutionary relationships and assign putative functions to sequences. However, it can be difficult to define a similarity threshold that achieves sufficient coverage without sacrificing annotation quality. Defining the correct configuration is critical and can be challenging for non-specialist users. Thus, the development of robust automatic annotation techniques that generate high-quality annotations without needing expert knowledge would be very valuable for the research community. We present Sma3s, a tool for automatically annotating very large collections of biological sequences from any kind of gene library or genome. Sma3s is composed of three modules that progressively annotate query sequences using either: (i) very similar homologues, (ii) orthologous sequences or (iii) terms enriched in groups of homologous sequences. We trained the system using several random sets of known sequences, demonstrating average sensitivity and specificity values of ~85%. In conclusion, Sma3s is a versatile tool for high-throughput annotation of a wide variety of sequence datasets that outperforms the accuracy of other well-established annotation algorithms, and it can enrich existing database annotations and uncover previously hidden features. Importantly, Sma3s has already been used in the functional annotation of two published transcriptomes. © The Author 2014. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.

  6. Separable spectro-temporal Gabor filter bank features: Reducing the complexity of robust features for automatic speech recognition.

    PubMed

    Schädler, Marc René; Kollmeier, Birger

    2015-04-01

    To test if simultaneous spectral and temporal processing is required to extract robust features for automatic speech recognition (ASR), the robust spectro-temporal two-dimensional-Gabor filter bank (GBFB) front-end from Schädler, Meyer, and Kollmeier [J. Acoust. Soc. Am. 131, 4134-4151 (2012)] was decomposed into a spectral one-dimensional-Gabor filter bank and a temporal one-dimensional-Gabor filter bank. A feature set that is extracted with these separate spectral and temporal modulation filter banks was introduced, the separate Gabor filter bank (SGBFB) features, and evaluated on the CHiME (Computational Hearing in Multisource Environments) keywords-in-noise recognition task. From the perspective of robust ASR, the results showed that spectral and temporal processing can be performed independently and are not required to interact with each other. Using SGBFB features permitted the signal-to-noise ratio (SNR) to be lowered by 1.2 dB while still performing as well as the GBFB-based reference system, which corresponds to a relative improvement of the word error rate by 12.8%. Additionally, the real time factor of the spectro-temporal processing could be reduced by more than an order of magnitude. Compared to human listeners, the SNR needed to be 13 dB higher when using Mel-frequency cepstral coefficient features, 11 dB higher when using GBFB features, and 9 dB higher when using SGBFB features to achieve the same recognition performance.
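
    The separable idea can be sketched by building 1-D Gabor (Gaussian-windowed cosine) filters and applying them independently along the spectral axis and the temporal axis of a log-mel spectrogram, instead of using joint 2-D spectro-temporal filters. The filter lengths, periods and the random stand-in spectrogram below are illustrative, not the SGBFB parameter set of the paper.

```python
# Sketch of separate spectral and temporal 1-D Gabor modulation filters.
import numpy as np
from scipy.ndimage import convolve1d

def gabor_1d(length, period, sigma):
    n = np.arange(length) - length // 2
    g = np.exp(-0.5 * (n / sigma) ** 2) * np.cos(2 * np.pi * n / period)
    return g - g.mean()                      # zero-DC, so it acts as a modulation filter

# toy log-mel spectrogram: (frequency bands, time frames)
rng = np.random.default_rng(0)
spectrogram = rng.normal(size=(40, 300))

spectral_filters = [gabor_1d(15, period=p, sigma=p / 2) for p in (4, 8, 16)]
temporal_filters = [gabor_1d(25, period=p, sigma=p / 2) for p in (5, 10, 20)]

features = []
for fsp in spectral_filters:
    features.append(convolve1d(spectrogram, fsp, axis=0))   # along frequency
for ftm in temporal_filters:
    features.append(convolve1d(spectrogram, ftm, axis=1))   # along time
features = np.stack(features)            # (n_filters, bands, frames) feature maps
print("feature tensor shape:", features.shape)
```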

  7. Learning-based image preprocessing for robust computer-aided detection

    NASA Astrophysics Data System (ADS)

    Raghupathi, Laks; Devarakota, Pandu R.; Wolf, Matthias

    2013-03-01

    Recent studies have shown that low-dose computed tomography (LDCT) can be an effective screening tool to reduce lung cancer mortality. Computer-aided detection (CAD) would be a beneficial second reader for radiologists in such cases. Studies demonstrate that while iterative reconstruction (IR) improves LDCT diagnostic quality, it significantly degrades CAD performance (increased false positives) when applied directly. For improving CAD performance, solutions such as retraining with newer data or applying a standard preprocessing technique may not suffice due to the high prevalence of CT scanners and non-uniform acquisition protocols. Here, we present a learning-based framework that can adaptively transform a wide variety of input data to boost an existing CAD's performance. This enhances not only its robustness but also its applicability in clinical workflows. Our solution consists of automatically applying a suitable preprocessing filter to the given image based on its characteristics. This requires the preparation of ground truth (GT) indicating which filter choice results in improved CAD performance. Accordingly, we propose an efficient consolidation process with a novel metric. Using key anatomical landmarks, we then derive consistent feature descriptors for the classification scheme, which uses a priority mechanism to automatically choose an optimal preprocessing filter. We demonstrate CAD prototype performance improvement using hospital-scale datasets acquired from North America, Europe and Asia. Though we demonstrated our results for a lung nodule CAD, this scheme is straightforward to extend to other post-processing tools dedicated to other organs and modalities.

  8. A method for the automatic reconstruction of fetal cardiac signals from magnetocardiographic recordings

    NASA Astrophysics Data System (ADS)

    Mantini, D.; Alleva, G.; Comani, S.

    2005-10-01

    Fetal magnetocardiography (fMCG) allows monitoring the fetal heart function through algorithms able to retrieve the fetal cardiac signal, but no standardized automatic model has become available so far. In this paper, we describe an automatic method that restores the fetal cardiac trace from fMCG recordings by means of a weighted summation of fetal components separated with independent component analysis (ICA) and identified through dedicated algorithms that analyse the frequency content and temporal structure of each source signal. Multichannel fMCG datasets of 66 healthy and 4 arrhythmic fetuses were used to validate the automatic method with respect to a classical procedure requiring the manual classification of fetal components by an expert investigator. ICA was run with input clusters of different dimensions to simulate various MCG systems. Detection rates, true negative and false positive component categorization, QRS amplitude, standard deviation and signal-to-noise ratio of reconstructed fetal signals, and real and per cent QRS differences between paired fetal traces retrieved automatically and manually were calculated to quantify the performances of the automatic method. Its robustness and reliability, particularly evident with the use of large input clusters, might increase the diagnostic role of fMCG during the prenatal period.
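
    The weighted-summation idea can be illustrated with a small sketch. Assumptions (not taken from the paper): scikit-learn's FastICA as the separation step, a 1-3.5 Hz band as a proxy for fetal cardiac content, and a simple in-band power-ratio weighting; the published identification algorithms are more elaborate.

      import numpy as np
      from sklearn.decomposition import FastICA

      def reconstruct_fetal_trace(recordings, fs, band=(1.0, 3.5)):
          # Toy reconstruction: ICA separation followed by a spectrally weighted sum
          # of the components (input is channels x samples).
          ica = FastICA(n_components=recordings.shape[0], random_state=0)
          sources = ica.fit_transform(recordings.T).T          # components x samples
          freqs = np.fft.rfftfreq(sources.shape[1], d=1.0 / fs)
          in_band = (freqs >= band[0]) & (freqs <= band[1])
          weights = []
          for s in sources:
              spectrum = np.abs(np.fft.rfft(s)) ** 2
              weights.append(spectrum[in_band].sum() / spectrum.sum())  # fraction of power in the fetal band
          weights = np.asarray(weights)
          weights /= weights.sum()
          return weights @ sources                              # weighted summation of components

      # Usage: 30 channels, 60 s of simulated data at 250 Hz.
      fetal = reconstruct_fetal_trace(np.random.randn(30, 15000), fs=250.0)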

  9. NET-VISA, a Bayesian method next-generation automatic association software. Latest developments and operational assessment.

    NASA Astrophysics Data System (ADS)

    Le Bras, Ronan; Kushida, Noriyuki; Mialle, Pierrick; Tomuta, Elena; Arora, Nimar

    2017-04-01

    The Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) has been developing a Bayesian method and software to perform the key step of automatic association of seismological, hydroacoustic, and infrasound (SHI) parametric data. In our preliminary testing at the CTBTO, NET-VISA shows much better performance than the currently operating automatic association module: the rate of automatic events matching analyst-reviewed events increased by 10%, meaning that the percentage of missed events is lowered by 40%. Initial tests involving analysts also showed that the new software will complete the automatic bulletins of the CTBTO by adding previously missed events. Because CTBTO products are widely distributed to its member States as well as throughout the seismological community, the introduction of a new technology must be carried out carefully, and the first step of operational integration is to use NET-VISA results within the interactive analysts' software so that the analysts can check the robustness of the Bayesian approach. We report on the latest results, both on the progress of automatic processing and on the initial introduction of NET-VISA results into the analyst review process.

  10. Robust spike classification based on frequency domain neural waveform features.

    PubMed

    Yang, Chenhui; Yuan, Yuan; Si, Jennie

    2013-12-01

    We introduce a new spike classification algorithm based on frequency domain features of the spike snippets. The goal of the algorithm is to provide high classification accuracy, a low misclassification rate, ease of implementation, robustness to signal degradation, and objectivity in classification outcomes. In this paper, we propose a spike classification algorithm based on frequency domain features (CFDF). It makes use of the frequency domain content of the recorded neural waveforms for spike classification. The self-organizing map (SOM) is used as a tool to determine the cluster number intuitively and directly by viewing the SOM output map. After that, spike classification can be easily performed using clustering algorithms such as k-means. In conjunction with our previously developed multiscale correlation of wavelet coefficients (MCWC) spike detection algorithm, we show that the combined MCWC and CFDF detection and classification system is robust when tested on several sets of artificial and real neural waveforms. The CFDF is comparable to or outperforms some popular automatic spike classification algorithms on artificial and real neural data. The detection and classification of neural action potentials, or neural spikes, is an important step in single-unit-based neuroscientific studies and applications. After the detection of neural snippets potentially containing neural spikes, a robust classification algorithm is applied to (1) extract similar waveforms into one class so that they can be considered as coming from one unit, and (2) remove noise snippets that do not contain any features of an action potential. Usually, a snippet is a small 2 or 3 ms segment of the recorded waveform, and differences in neural action potentials can be subtle from one unit to another. Therefore, a robust, high-performance classification system like the CFDF is necessary. In addition, the proposed algorithm does not require any assumptions on the statistical properties of the noise and proves to be robust under noise contamination.
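
    A minimal sketch of the classification idea, under the simplifying assumption that the cluster number is given (the paper selects it by inspecting a self-organizing map): each snippet is described by the magnitudes of its low-frequency FFT bins, and the snippets are then grouped with k-means.

      import numpy as np
      from sklearn.cluster import KMeans

      def classify_spikes_fft(snippets, n_clusters=3, keep_bins=16):
          # Cluster spike snippets on low-frequency FFT magnitude features
          # (a toy stand-in for CFDF; the cluster count is assumed known here).
          spectra = np.abs(np.fft.rfft(snippets, axis=1))[:, :keep_bins]
          spectra /= spectra.max(axis=1, keepdims=True) + 1e-12   # per-snippet normalization
          return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(spectra)

      # Usage: 500 snippets of 48 samples each.
      labels = classify_spikes_fft(np.random.randn(500, 48))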

  11. Comparison of spike-sorting algorithms for future hardware implementation.

    PubMed

    Gibson, Sarah; Judy, Jack W; Markovic, Dejan

    2008-01-01

    Applications such as brain-machine interfaces require hardware spike sorting in order to (1) obtain single-unit activity and (2) perform data reduction for wireless transmission of data. Such systems must be low-power, low-area, high-accuracy, automatic, and able to operate in real time. Several detection and feature extraction algorithms for spike sorting are described briefly and evaluated in terms of accuracy versus computational complexity. The nonlinear energy operator method is chosen as the optimal spike detection algorithm, being the most robust to noise while remaining relatively simple. The discrete derivatives method [1] is chosen as the optimal feature extraction method, maintaining high accuracy across SNRs with a complexity orders of magnitude lower than that of traditional methods such as PCA.
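
    The nonlinear energy operator named above has a simple closed form, psi[n] = x[n]^2 - x[n-1]*x[n+1]. A toy detector built on it might look like the following; the threshold rule (a multiple of the mean NEO output) is an illustrative choice, not necessarily the one used in the paper.

      import numpy as np

      def neo(x):
          # Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1].
          psi = np.zeros_like(x)
          psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
          return psi

      def detect_spikes(x, k=8.0):
          # Flag samples whose NEO output exceeds k times its mean (illustrative threshold).
          psi = neo(x)
          return np.flatnonzero(psi > k * psi.mean())

      crossings = detect_spikes(np.random.randn(10000))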

  12. Subgrouping Automata: automatic sequence subgrouping using phylogenetic tree-based optimum subgrouping algorithm.

    PubMed

    Seo, Joo-Hyun; Park, Jihyang; Kim, Eun-Mi; Kim, Juhan; Joo, Keehyoung; Lee, Jooyoung; Kim, Byung-Gee

    2014-02-01

    Sequence subgrouping for a given sequence set can enable various informative tasks such as the functional discrimination of sequence subsets and the functional inference of unknown sequences. Because the identity threshold for sequence subgrouping may vary with the given sequence set, it is highly desirable to construct a robust subgrouping algorithm that automatically identifies an optimal identity threshold and generates subgroups for a given sequence set. To this end, an automatic sequence subgrouping method named 'Subgrouping Automata' (SA) was constructed. First, the tree analysis module analyzes the structure of the phylogenetic tree and enumerates all possible subgroups at each node. The sequence similarity analysis module then calculates the average sequence similarity for all subgroups at each node. The representative sequence generation module finds a representative sequence for each subgroup using profile analysis and self-scoring. Average sequence similarities are calculated for all nodes, and 'Subgrouping Automata' searches for the node showing the statistically maximal increase in sequence similarity using Student's t-value. The node with the maximum t-value, which gives the most significant difference in average sequence similarity between two adjacent nodes, is determined to be the optimum subgrouping node in the phylogenetic tree. Further analysis showed that the optimum subgrouping node from SA prevents both under-subgrouping and over-subgrouping. Copyright © 2013. Published by Elsevier Ltd.
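
    The node-selection rule can be sketched as follows, assuming the per-subgroup average identities at each successive tree cut are already available; this is a simplification of the full SA pipeline and the function and data names are placeholders.

      import numpy as np
      from scipy.stats import ttest_ind

      def optimum_cut(similarities_per_cut):
          # Pick the tree cut where the average within-subgroup similarity jumps the most,
          # measured by Student's t between adjacent cuts (toy version of the selection rule).
          best_cut, best_t = None, -np.inf
          for i in range(1, len(similarities_per_cut)):
              t, _ = ttest_ind(similarities_per_cut[i], similarities_per_cut[i - 1])
              if t > best_t:
                  best_cut, best_t = i, t
          return best_cut, best_t

      # Usage: lists of per-subgroup average identities (%) at three successive cuts.
      cut, t = optimum_cut([[42, 45, 40], [61, 58, 64, 60], [63, 62, 65, 61]])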

  13. Robust Multi-unit Auction Protocol against False-name Bids

    NASA Astrophysics Data System (ADS)

    Yokoo, Makoto; Sakurai, Yuko; Matsubara, Shigeo

    This paper presents a new multi-unit auction protocol (IR protocol) that is robust against false-name bids. Internet auctions have become an integral part of Electronic Commerce and a promising field for applying agent and Artificial Intelligence technologies. Although the Internet provides an excellent infrastructure for executing auctions, the possibility of a new type of cheating called false-name bids has been pointed out. A false-name bid is a bid submitted under a fictitious name. A protocol called LDS has been developed for combinatorial auctions of multiple different items and has proven to be robust against false-name bids. Although we can modify the LDS protocol to handle multi-unit auctions, in which multiple units of an identical item are auctioned, the protocol is complicated and requires the auctioneer to carefully pre-determine the combination of bundles to obtain a high social surplus or revenue. For the auctioneer, our newly developed IR protocol is easier to use than the LDS, since the combination of bundles is automatically determined in a flexible manner according to the declared evaluation values of agents. The evaluation results show that the IR protocol can obtain a better social surplus than that obtained by the LDS protocol.

  14. Development of a robust MRI fiducial system for automated fusion of MR-US abdominal images.

    PubMed

    Favazza, Christopher P; Gorny, Krzysztof R; Callstrom, Matthew R; Kurup, Anil N; Washburn, Michael; Trester, Pamela S; Fowler, Charles L; Hangiandreou, Nicholas J

    2018-05-21

    We present the development of a two-component magnetic resonance (MR) fiducial system, that is, a fiducial marker device combined with an auto-segmentation algorithm, designed to be paired with existing ultrasound probe tracking and image fusion technology to automatically fuse MR and ultrasound (US) images. The fiducial device consisted of four ~6.4 mL cylindrical wells filled with 1 g/L copper sulfate solution. The algorithm was designed to automatically segment the device in clinical abdominal MR images. The algorithm's detection rate and repeatability were investigated through a phantom study and in human volunteers. The detection rate was 100% in all phantom and human images. The center-of-mass of the fiducial device was robustly identified with maximum variations of 2.9 mm in position and 0.9° in angular orientation. In volunteer images, average differences between algorithm-measured inter-marker spacings and actual separation distances were 0.53 ± 0.36 mm. "Proof-of-concept" automatic MR-US fusions were conducted with sets of images from both a phantom and volunteer using a commercial prototype system, which was built based on the above findings. Image fusion accuracy was measured to be within 5 mm for breath-hold scanning. These results demonstrate the capability of this approach to automatically fuse US and MR images acquired across a wide range of clinical abdominal pulse sequences. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  15. SPEQTACLE: An automated generalized fuzzy C-means algorithm for tumor delineation in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapuyade-Lahorgue, Jérôme; Visvikis, Dimitris; Hatt, Mathieu, E-mail: hatt@univ-brest.fr

    Purpose: Accurate tumor delineation in positron emission tomography (PET) images is crucial in oncology. Although recent methods achieved good results, there is still room for improvement regarding tumors with complex shapes, low signal-to-noise ratio, and high levels of uptake heterogeneity. Methods: The authors developed and evaluated an original clustering-based method called spatial positron emission quantification of tumor-Automatic Lp-norm estimation (SPEQTACLE), based on the fuzzy C-means (FCM) algorithm with a generalization exploiting a Hilbertian norm to more accurately account for the fuzzy and non-Gaussian distributions of PET images. An automatic and reproducible estimation scheme of the norm on an image-by-image basis was developed. Robustness was assessed by studying the consistency of results obtained on multiple acquisitions of the NEMA phantom on three different scanners with varying acquisition parameters. Accuracy was evaluated using classification errors (CEs) on simulated and clinical images. SPEQTACLE was compared to another FCM implementation, fuzzy local information C-means (FLICM), and to fuzzy locally adaptive Bayesian (FLAB). Results: SPEQTACLE demonstrated a level of robustness similar to FLAB (variability of 14% ± 9% vs 14% ± 7%, p = 0.15) and higher than FLICM (45% ± 18%, p < 0.0001), and improved accuracy with lower CE (14% ± 11%) over both FLICM (29% ± 29%) and FLAB (22% ± 20%) on simulated images. Improvement was significant for the more challenging cases, with CE of 17% ± 11% for SPEQTACLE vs 28% ± 22% for FLAB (p = 0.009) and 40% ± 35% for FLICM (p < 0.0001). For the clinical cases, SPEQTACLE outperformed FLAB and FLICM (15% ± 6% vs 37% ± 14% and 30% ± 17%, p < 0.004). Conclusions: SPEQTACLE benefited from the fully automatic estimation of the norm on a case-by-case basis. This promising approach will be extended to multimodal images and multiclass estimation in future developments.
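
    For orientation, the alternating updates of the underlying (standard) fuzzy C-means algorithm are sketched below for a 1-D intensity array, with a fixed Euclidean norm and fuzziness m = 2; SPEQTACLE's contribution, the automatic per-image estimation of a generalized norm, is not reproduced here.

      import numpy as np

      def fuzzy_cmeans(x, n_clusters=2, m=2.0, n_iter=100, seed=0):
          # Standard FCM on a 1-D array of intensities: alternate membership and centroid updates.
          rng = np.random.default_rng(seed)
          u = rng.random((len(x), n_clusters))
          u /= u.sum(axis=1, keepdims=True)                         # fuzzy memberships
          for _ in range(n_iter):
              w = u ** m
              centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
              d = np.abs(x[:, None] - centers[None, :]) + 1e-12     # distances to each center
              inv = d ** (-2.0 / (m - 1.0))
              u = inv / inv.sum(axis=1, keepdims=True)
          return centers, u

      centers, memberships = fuzzy_cmeans(np.random.rand(1000) * 10.0, n_clusters=2)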

  16. Characterization of Intraventricular and Intracerebral Hematomas in Non-Contrast CT

    PubMed Central

    Nowinski, Wieslaw L; Gomolka, Ryszard S; Qian, Guoyu; Gupta, Varsha; Ullman, Natalie L; Hanley, Daniel F

    2014-01-01

    Characterization of hematomas is essential in scan reading, manual delineation, and designing automatic segmentation algorithms. Our purpose is to characterize the distribution of intraventricular (IVH) and intracerebral hematomas (ICH) in NCCT scans, study their relationship to gray matter (GM), and to introduce a new tool for quantitative hematoma delineation. We used 289 serial retrospective scans of 51 patients. Hematomas were manually delineated in a two-stage process. Hematoma contours generated in the first stage were quantified and enhanced in the second stage. Delineation was based on new quantitative rules and hematoma profiling, and assisted by a dedicated tool superimposing quantitative information on scans with 3D hematoma display. The tool provides: density maps (40-85HU), contrast maps (8/15HU), mean horizontal/vertical contrasts for hematoma contours, and hematoma contours below a specified mean contrast (8HU). White matter (WM) and GM were segmented automatically. IVH/ICH on serial NCCT is characterized by 59.0HU mean, 60.0HU median, 11.6HU standard deviation, 23.9HU mean contrast, -0.99HU/day slope, and -0.24 skewness (changing over time from negative to positive). Its 0.1st-99.9th percentile range corresponds to the 25-88HU range. WM and GM are highly correlated (R^2=0.88; p<10^-10) whereas the GM-GS correlation is weak (R^2=0.14; p<10^-10). The intersection point of mean GM-hematoma density distributions is at 55.6±5.8HU with the corresponding GM/hematoma percentiles of 88th/40th. Objective characterization of IVH/ICH and stating the rules quantitatively will aid raters to delineate hematomas more robustly and facilitate designing algorithms for automatic hematoma segmentation. Our two-stage process is general and potentially applicable to delineate other pathologies on various modalities more robustly and quantitatively. PMID:24976197

  17. Characterization of intraventricular and intracerebral hematomas in non-contrast CT.

    PubMed

    Nowinski, Wieslaw L; Gomolka, Ryszard S; Qian, Guoyu; Gupta, Varsha; Ullman, Natalie L; Hanley, Daniel F

    2014-06-01

    Characterization of hematomas is essential in scan reading, manual delineation, and designing automatic segmentation algorithms. Our purpose is to characterize the distribution of intraventricular (IVH) and intracerebral hematomas (ICH) in NCCT scans, study their relationship to gray matter (GM), and to introduce a new tool for quantitative hematoma delineation. We used 289 serial retrospective scans of 51 patients. Hematomas were manually delineated in a two-stage process. Hematoma contours generated in the first stage were quantified and enhanced in the second stage. Delineation was based on new quantitative rules and hematoma profiling, and assisted by a dedicated tool superimposing quantitative information on scans with 3D hematoma display. The tool provides: density maps (40-85HU), contrast maps (8/15HU), mean horizontal/vertical contrasts for hematoma contours, and hematoma contours below a specified mean contrast (8HU). White matter (WM) and GM were segmented automatically. IVH/ICH on serial NCCT is characterized by 59.0HU mean, 60.0HU median, 11.6HU standard deviation, 23.9HU mean contrast, -0.99HU/day slope, and -0.24 skewness (changing over time from negative to positive). Its 0.1st-99.9th percentile range corresponds to the 25-88HU range. WM and GM are highly correlated (R^2=0.88; p<10^-10) whereas the GM-GS correlation is weak (R^2=0.14; p<10^-10). The intersection point of mean GM-hematoma density distributions is at 55.6±5.8HU with the corresponding GM/hematoma percentiles of 88th/40th. Objective characterization of IVH/ICH and stating the rules quantitatively will aid raters to delineate hematomas more robustly and facilitate designing algorithms for automatic hematoma segmentation. Our two-stage process is general and potentially applicable to delineate other pathologies on various modalities more robustly and quantitatively.

  18. ATLAS (Automatic Tool for Local Assembly Structures) - A Comprehensive Infrastructure for Assembly, Annotation, and Genomic Binning of Metagenomic and Metatranscriptomic Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Richard A.; Brown, Joseph M.; Colby, Sean M.

    ATLAS (Automatic Tool for Local Assembly Structures) is a comprehensive multiomics data analysis pipeline that is massively parallel and scalable. ATLAS contains a modular analysis pipeline for assembly, annotation, quantification and genome binning of metagenomics and metatranscriptomics data, and a framework for reference metaproteomic database construction. ATLAS transforms raw sequence data into functional and taxonomic data at the microbial population level and provides genome-centric resolution through genome binning. ATLAS provides robust taxonomy based on majority voting of protein-coding open reading frames rolled up at the contig level using modified lowest common ancestor (LCA) analysis. ATLAS is user-friendly, easy to install through Bioconda, maintained as open source on GitHub, and implemented in Snakemake for modular, customizable workflows.

  19. Hierarchical classification of dynamically varying radar pulse repetition interval modulation patterns.

    PubMed

    Kauppi, Jukka-Pekka; Martikainen, Kalle; Ruotsalainen, Ulla

    2010-12-01

    The central purpose of passive signal intercept receivers is to perform automatic categorization of unknown radar signals. Currently, there is an urgent need to develop intelligent classification algorithms for these devices due to emerging complexity of radar waveforms. Especially multifunction radars (MFRs) capable of performing several simultaneous tasks by utilizing complex, dynamically varying scheduled waveforms are a major challenge for automatic pattern classification systems. To assist recognition of complex radar emissions in modern intercept receivers, we have developed a novel method to recognize dynamically varying pulse repetition interval (PRI) modulation patterns emitted by MFRs. We use robust feature extraction and classifier design techniques to assist recognition in unpredictable real-world signal environments. We classify received pulse trains hierarchically which allows unambiguous detection of the subpatterns using a sliding window. Accuracy, robustness and reliability of the technique are demonstrated with extensive simulations using both static and dynamically varying PRI modulation patterns. Copyright © 2010 Elsevier Ltd. All rights reserved.

  20. Automatic Rooftop Extraction in Stereo Imagery Using Distance and Building Shape Regularized Level Set Evolution

    NASA Astrophysics Data System (ADS)

    Tian, J.; Krauß, T.; d'Angelo, P.

    2017-05-01

    Automatic rooftop extraction is one of the most challenging problems in remote sensing image analysis. Classical 2D image processing techniques are expensive due to the large number of features required to locate buildings. This problem can be avoided when 3D information is available. In this paper, we show how to fuse the spectral and height information of stereo imagery to achieve efficient and robust rooftop extraction. In the first step, the digital terrain model (DTM), and in turn the normalized digital surface model (nDSM), is generated using a new step-edge approach. In the second step, the initial building locations and rooftop boundaries are derived by removing the low-level pixels and the high-level pixels that are more likely to be trees or shadows. This boundary then serves as the initial level set function, which is further refined to fit the best possible boundaries through distance-regularized level-set curve evolution. During the fitting procedure, the edge-based active contour model is adopted and implemented using the edge indicators extracted from the panchromatic image. The performance of the proposed approach is tested using WorldView-2 satellite data captured over Munich.

  1. Automatic Cell Segmentation in Fluorescence Images of Confluent Cell Monolayers Using Multi-object Geometric Deformable Model.

    PubMed

    Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L

    2013-03-13

    With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve the accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and principal curvature based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results, and guarantees no overlaps and gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.

  2. Audio-visual imposture

    NASA Astrophysics Data System (ADS)

    Karam, Walid; Mokbel, Chafic; Greige, Hanna; Chollet, Gerard

    2006-05-01

    A GMM-based audio-visual speaker verification system is described, and an Active Appearance Model with a linear speaker transformation system is used to evaluate the robustness of the verification. An Active Appearance Model (AAM) is used to automatically locate and track a speaker's face in a video recording. A Gaussian Mixture Model (GMM) based classifier (BECARS) is used for face verification. GMM training and testing are accomplished on DCT-based features extracted from the detected faces. On the audio side, speech features are extracted and used for speaker verification with the GMM-based classifier. Fusion of both audio and video modalities for audio-visual speaker verification is compared with face verification and speaker verification systems. To improve the robustness of the multimodal biometric identity verification system, an audio-visual imposture system is envisioned. It consists of an automatic voice transformation technique that an impostor may use to assume the identity of an authorized client. Features of the transformed voice are then combined with the corresponding appearance features and fed into the GMM-based system BECARS for training. An attempt is made to increase the acceptance rate of the impostor and to analyze the robustness of the verification system. Experiments are being conducted on the BANCA database, with a prospect of experimenting on the newly developed PDAtabase created within the scope of the SecurePhone project.

  3. Autofocusing and Polar Body Detection in Automated Cell Manipulation.

    PubMed

    Wang, Zenan; Feng, Chen; Ang, Wei Tech; Tan, Steven Yih Min; Latt, Win Tun

    2017-05-01

    Autofocusing and feature detection are two essential processes for performing automated biological cell manipulation tasks. In this paper, we have introduced a technique capable of focusing on a holding pipette and a mammalian cell under a bright-field microscope automatically, and a technique that can detect and track the presence and orientation of the polar body of an oocyte that is rotated at the tip of a micropipette. Both algorithms were evaluated by using mouse oocytes. Experimental results show that both algorithms achieve very high success rates: 100% and 96%. As robust and accurate image processing methods, they can be widely applied to perform various automated biological cell manipulations.

  4. Singularity-robustness and task-prioritization in configuration control of redundant robots

    NASA Technical Reports Server (NTRS)

    Seraji, H.; Colbaugh, R.

    1990-01-01

    The authors present a singularity-robust, task-prioritized reformulation of configuration control for redundant robot manipulators. This reformulation suppresses large joint velocities, inducing only minimal errors in task performance through modification of the task trajectories. Furthermore, the same framework provides a means for assigning priorities between the basic task of end-effector motion and the user-defined additional task for utilizing redundancy. This allows automatic relaxation of the additional task constraints in favor of the desired end-effector motion when both cannot be achieved exactly.

  5. Explicit robust schemes for implementation of general principal value-based constitutive models

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.; Saleeb, A. F.; Tan, H. Q.; Zhang, Y.

    1993-01-01

    The issue of developing effective and robust schemes to implement general hyperelastic constitutive models is addressed. To this end, special-purpose functions are used to symbolically derive, evaluate, and automatically generate the associated FORTRAN code for the explicit forms of the corresponding stress function and material tangent stiffness tensors. These explicit forms are valid for the entire deformation range. The analytical form of these explicit expressions is given here for the case in which the strain-energy potential is taken as a nonseparable polynomial function of the principal stretches.

  6. Why Does Rapid Naming Predict Chinese Word Reading?

    ERIC Educational Resources Information Center

    Shum, Kathy Kar-man; Au, Terry Kit-fong

    2017-01-01

    Rapid automatized naming (RAN) robustly predicts early reading abilities across languages, but its underlying mechanism remains unclear. This study found that RAN associated significantly with processing speed but not with phonological awareness or orthographic knowledge in 89 Hong Kong Chinese second-graders. RAN overlaps more with processing…

  7. Application of new methodologies based on design of experiments, independent component analysis and design space for robust optimization in liquid chromatography.

    PubMed

    Debrus, Benjamin; Lebrun, Pierre; Ceccato, Attilio; Caliaro, Gabriel; Rozet, Eric; Nistor, Iolanda; Oprean, Radu; Rupérez, Francisco J; Barbas, Coral; Boulanger, Bruno; Hubert, Philippe

    2011-04-08

    HPLC separations of an unknown sample mixture and a pharmaceutical formulation have been optimized using a recently developed chemometric methodology proposed by W. Dewé et al. in 2004 and improved by P. Lebrun et al. in 2008. This methodology is based on experimental designs which are used to model retention times of compounds of interest. Then, the prediction accuracy and the optimal separation robustness, including the uncertainty study, were evaluated. Finally, the design space (ICH Q8(R1) guideline) was computed as the probability for a criterion to lie in a selected range of acceptance. Furthermore, the chromatograms were automatically read. Peak detection and peak matching were carried out with a previously developed methodology using independent component analysis published by B. Debrus et al. in 2009. The present successful applications strengthen the high potential of these methodologies for the automated development of chromatographic methods. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. Uncertainty quantification-based robust aerodynamic optimization of laminar flow nacelle

    NASA Astrophysics Data System (ADS)

    Xiong, Neng; Tao, Yang; Liu, Zhiyong; Lin, Jun

    2018-05-01

    The aerodynamic performance of a laminar flow nacelle is highly sensitive to uncertain working conditions, especially surface roughness. An efficient robust aerodynamic optimization method based on non-deterministic computational fluid dynamics (CFD) simulation and the Efficient Global Optimization (EGO) algorithm was employed. A non-intrusive polynomial chaos method is used in conjunction with an existing, well-verified CFD module to quantify uncertainty propagation in the flow field. This paper investigates the roughness modeling behavior of the γ-Reθ shear stress transport model, which accounts for flow transition and surface roughness effects. The roughness effects are modeled to simulate sand-grain roughness. A Class-Shape Transformation-based parametric description of the nacelle contour, as part of an automatic design evaluation process, is presented. A Design of Experiments (DoE) was performed and a surrogate model was built by the Kriging method. The new nacelle design process demonstrates that significant improvements of both the mean and the variance of the efficiency are achieved, and the proposed method can be applied successfully to laminar flow nacelle design.
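
    The DoE-plus-surrogate step can be sketched with scikit-learn's Gaussian-process regressor standing in for the Kriging model. The two design variables, the toy "mean drag" objective, and the lower-confidence-bound grid search replacing the EGO acquisition loop are all placeholders, not the paper's setup.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import Matern

      def mean_drag(x):
          # Placeholder objective standing in for the expensive, non-deterministic CFD evaluation.
          return np.sum((x - 0.3) ** 2, axis=1) + 0.05 * np.sin(10 * x[:, 0])

      rng = np.random.default_rng(0)
      X_doe = rng.random((30, 2))                 # Design of Experiments over two shape parameters
      y_doe = mean_drag(X_doe)

      # Kriging surrogate (Gaussian-process regression with a Matern kernel).
      gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X_doe, y_doe)

      # Cheap surrogate search over a dense candidate grid (a stand-in for the EGO loop).
      cand = rng.random((5000, 2))
      mu, sigma = gp.predict(cand, return_std=True)
      best = cand[np.argmin(mu - 1.96 * sigma)]   # simple lower-confidence-bound pick
      print("surrogate optimum:", best)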

  9. Facial expression recognition based on weber local descriptor and sparse representation

    NASA Astrophysics Data System (ADS)

    Ouyang, Yan

    2018-03-01

    Automatic facial expression recognition has been one of the research hotspots in computer vision for nearly ten years. During the decade, many state-of-the-art methods have been proposed which achieve very high accuracy on face images without any interference. Nowadays, many researchers have begun to tackle the task of classifying facial expression images with corruptions and occlusions, and the Sparse Representation based Classification (SRC) framework has been widely used because it is robust to corruptions and occlusions. Therefore, this paper proposes a novel facial expression recognition method based on the Weber local descriptor (WLD) and sparse representation. The method includes three parts: first, the face images are divided into local patches; then, the WLD histogram of each patch is extracted; finally, all the WLD histograms are concatenated into a feature vector and combined with SRC to classify the facial expressions. Experimental results on the Cohn-Kanade database show that the proposed method is robust to occlusions and corruptions.
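
    A hedged sketch of the feature-extraction step only: the differential-excitation component of WLD for one grey-level patch, binned into a histogram. The orientation component and the SRC classifier are omitted, and the 3x3 neighbourhood and bin count are illustrative choices.

      import numpy as np
      from scipy.ndimage import convolve

      def wld_excitation_histogram(patch, n_bins=32):
          # Differential excitation: arctan of (sum of neighbour differences / centre intensity).
          patch = patch.astype(float) + 1e-6                     # avoid division by zero
          kernel = np.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]])  # neighbour sum minus 8x centre
          diff_sum = convolve(patch, kernel, mode='reflect')
          xi = np.arctan(diff_sum / patch)                       # values in (-pi/2, pi/2)
          hist, _ = np.histogram(xi, bins=n_bins, range=(-np.pi / 2, np.pi / 2))
          return hist / hist.sum()

      h = wld_excitation_histogram(np.random.randint(0, 256, (16, 16)))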

  10. Robustness and quality of precipitation and river flow data obtained through participatory monitoring and citizen science

    NASA Astrophysics Data System (ADS)

    Buytaert, W.; Ochoa-Tocachi, B. F.

    2016-12-01

    Apart from the most basic measurements with manual rain and staff gauges, hydrology and water resources are not evident disciplines for the application of citizen science. High-resolution measurements require elaborate equipment, installation, and maintenance that are typically beyond the scope of non-scientists. Additionally, hydrological analysis has traditionally relied upon long time series of consistent accuracy and precision. Nevertheless, new opportunities for public participation in hydrological research are emerging, driven by increasingly affordable, robust, and more user-friendly technology. Here we analyse the results generated by participatory monitoring of river flow and precipitation in around 30 catchments in the tropical Andes. This monitoring network was set up through a collaborative effort between scientists, NGOs and local communities, with the intention of generating evidence about the impact of land-use change on streamflow. Monitoring was implemented using automatic but low-cost sensors operated and maintained by local users. Tipping-bucket rain gauges are used for precipitation, and river flow is monitored with pressure transducers in combination with a V-notch weir to obtain a stable stage-discharge relation. Jointly, the sensors have now collected the equivalent of more than 30 years of data, with a measurement interval of typically 5 or 15 minutes. Analysing the data, we find that the observations themselves tend to be of a quality comparable to scientific observations. The main issues, however, relate to the continuity of the time series, as sensors eventually fail or run out of datalogger capacity or battery in the most remote locations. Despite these shortcomings, the data have proven useful in characterizing land-use impacts well beyond what can be achieved with conventional data collection, thus filling long-standing gaps in local hydrological knowledge. Furthermore, we expect that the advent of new, more robust, resilient, and automated sensor technologies will alleviate some of the current issues.

  11. Semi-automatic brain tumor segmentation by constrained MRFs using structural trajectories.

    PubMed

    Zhao, Liang; Wu, Wei; Corso, Jason J

    2013-01-01

    Quantifying the volume and growth of a brain tumor is a primary prognostic measure and hence has received much attention in the medical imaging community. Most methods have sought a fully automatic segmentation, but the variability in shape and appearance of brain tumors has limited their success and further adoption in the clinic. In reaction, we present a semi-automatic brain tumor segmentation framework for multi-channel magnetic resonance (MR) images. This framework does not require prior model construction and only requires manual labels on one automatically selected slice. All other slices are labeled by an iterative multi-label Markov random field optimization with hard constraints. Structural trajectories (the medical image analog of optical flow) and 3D image over-segmentation are used to capture pixel correspondences between consecutive slices for pixel labeling. We show robustness and effectiveness through an evaluation on the 2012 MICCAI BRATS Challenge dataset; our results indicate superior performance to baselines and demonstrate the utility of the constrained MRF formulation.

  12. Chemometric strategy for automatic chromatographic peak detection and background drift correction in chromatographic data.

    PubMed

    Yu, Yong-Jie; Xia, Qiao-Ling; Wang, Sheng; Wang, Bing; Xie, Fu-Wei; Zhang, Xiao-Bing; Ma, Yun-Ming; Wu, Hai-Long

    2014-09-12

    Peak detection and background drift correction (BDC) are the key stages in using chemometric methods to analyze chromatographic fingerprints of complex samples. This study developed a novel chemometric strategy for simultaneous automatic chromatographic peak detection and BDC. A robust statistical method was used for intelligent estimation of instrumental noise level coupled with first-order derivative of chromatographic signal to automatically extract chromatographic peaks in the data. A local curve-fitting strategy was then employed for BDC. Simulated and real liquid chromatographic data were designed with various kinds of background drift and degree of overlapped chromatographic peaks to verify the performance of the proposed strategy. The underlying chromatographic peaks can be automatically detected and reasonably integrated by this strategy. Meanwhile, chromatograms with BDC can be precisely obtained. The proposed method was used to analyze a complex gas chromatography dataset that monitored quality changes in plant extracts during storage procedure. Copyright © 2014 Elsevier B.V. All rights reserved.
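
    The two stages can be illustrated with a small sketch, under the assumptions that the robust noise estimate is a median-absolute-deviation of the first derivative and that the background drift is approximated by a single low-order polynomial fit to non-peak points (the published method uses a local curve-fitting strategy).

      import numpy as np

      def detect_peaks_and_baseline(y, k=5.0, poly_order=3):
          # Stage 1: flag peak regions where the first derivative exceeds a robust threshold.
          dy = np.diff(y, prepend=y[0])
          noise = 1.4826 * np.median(np.abs(dy - np.median(dy)))   # MAD-based noise estimate
          peak_mask = np.abs(dy) > k * noise
          # Stage 2: fit a low-order polynomial to non-peak points as the background drift.
          x = np.arange(len(y))
          coeffs = np.polyfit(x[~peak_mask], y[~peak_mask], poly_order)
          baseline = np.polyval(coeffs, x)
          return peak_mask, y - baseline                           # drift-corrected chromatogram

      # Toy chromatogram: slow drift plus one sharp peak plus noise.
      x = np.linspace(0, 10, 2000)
      signal = 0.5 * x + np.exp(-(x - 4) ** 2 / 0.01) + 0.01 * np.random.randn(x.size)
      mask, corrected = detect_peaks_and_baseline(signal)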

  13. Neural networks: Alternatives to conventional techniques for automatic docking

    NASA Technical Reports Server (NTRS)

    Vinz, Bradley L.

    1994-01-01

    Automatic docking of orbiting spacecraft is a crucial operation involving the identification of vehicle orientation as well as complex approach dynamics. The chaser spacecraft must be able to recognize the target spacecraft within a scene and achieve accurate closing maneuvers. In a video-based system, a target scene must be captured and transformed into a pattern of pixels. Successful recognition lies in the interpretation of this pattern. Due to their powerful pattern recognition capabilities, artificial neural networks offer a potential role in interpretation and automatic docking processes. Neural networks can reduce the computational time required by existing image processing and control software. In addition, neural networks are capable of recognizing and adapting to changes in their dynamic environment, enabling enhanced performance, redundancy, and fault tolerance. Most neural networks are robust to failure, capable of continued operation with a slight degradation in performance after minor failures. This paper discusses the particular automatic docking tasks neural networks can perform as viable alternatives to conventional techniques.

  14. An automatic multi-atlas prostate segmentation in MRI using a multiscale representation and a label fusion strategy

    NASA Astrophysics Data System (ADS)

    Álvarez, Charlens; Martínez, Fabio; Romero, Eduardo

    2015-01-01

    Pelvic magnetic resonance images (MRI) are used in prostate cancer radiotherapy (RT) as part of radiation planning. Modern protocols require a manual delineation, a tedious and variable activity that may take about 20 minutes per patient, even for trained experts. That considerable time is an important workflow burden in most radiological services. Automatic or semi-automatic methods might improve efficiency by decreasing the delineation time while preserving the required accuracy. This work presents a fully automatic atlas-based segmentation strategy that selects the most similar templates for a new MR image using a robust multi-scale SURF analysis. A new segmentation is then achieved by a linear combination of the selected templates, which are first non-rigidly registered to the new image. The proposed method shows reliable segmentations, obtaining an average Dice coefficient of 79% when compared with expert manual segmentation, under a leave-one-out scheme on the training database.
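
    The final fusion step, a linear combination of the registered templates, can be sketched as weighted voting over propagated label masks; the similarity-derived weights and the 0.5 threshold are illustrative assumptions.

      import numpy as np

      def fuse_labels(propagated_masks, similarity_scores, threshold=0.5):
          # Weighted label fusion: average the registered template masks with
          # similarity-derived weights and threshold the result (weighting is illustrative).
          w = np.asarray(similarity_scores, dtype=float)
          w /= w.sum()
          fused = np.tensordot(w, np.asarray(propagated_masks, dtype=float), axes=1)
          return fused >= threshold

      # Usage: five 64x64x32 binary masks propagated from the selected atlases.
      masks = np.random.rand(5, 64, 64, 32) > 0.5
      segmentation = fuse_labels(masks, similarity_scores=[0.9, 0.8, 0.85, 0.7, 0.75])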

  15. Outpatient Treatment of Dyslexia through Stimulation of the Cerebral Hemispheres.

    ERIC Educational Resources Information Center

    Kappers, E. Jan

    1997-01-01

    Integrated treatment methods of neuropsychological and cognitive origin were evaluated with 80 Dutch children (ages 6-15) with severe dyslexia. Treatment with flash cards, which exercised automatic letter-sound conversions, had a robust and slight effect in preclinical and clinical phases respectively, whereas hemisphere stimulation produced…

  16. Early, Equivalent ERP Masked Priming Effects for Regular and Irregular Morphology

    ERIC Educational Resources Information Center

    Morris, Joanna; Stockall, Linnaea

    2012-01-01

    Converging evidence from behavioral masked priming (Rastle & Davis, 2008), EEG masked priming (Morris, Frank, Grainger, & Holcomb, 2007) and single word MEG (Zweig & Pylkkanen, 2008) experiments has provided robust support for a model of lexical processing which includes an early, automatic, visual word form based stage of morphological parsing…

  17. Automatic multiresolution age-related macular degeneration detection from fundus images

    NASA Astrophysics Data System (ADS)

    Garnier, Mickaël.; Hurtut, Thomas; Ben Tahar, Houssem; Cheriet, Farida

    2014-03-01

    Age-related macular degeneration (AMD) is a leading cause of legal blindness. As the disease progresses, visual loss occurs rapidly, so early diagnosis is required for timely treatment. Automatic, fast and robust screening of this widespread disease should allow early detection. Most of the automatic diagnosis methods in the literature are based on a complex segmentation of the drusen, targeting a specific symptom of the disease. In this paper, we present a preliminary study for AMD detection from color fundus photographs using multiresolution texture analysis. We analyze the texture at several scales by using a wavelet decomposition in order to identify all the relevant texture patterns. Textural information is captured using both the sign and magnitude components of the completed model of Local Binary Patterns. An image is finally described with the textural pattern distributions of the wavelet coefficient images obtained at each level of decomposition. We use Linear Discriminant Analysis for feature dimension reduction, to avoid the curse-of-dimensionality problem, and for image classification. Experiments were conducted on a dataset containing 45 images (23 healthy and 22 diseased) of variable quality captured by different cameras. Our method achieved a recognition rate of 93.3%, with a specificity of 95.5% and a sensitivity of 91.3%. This approach shows promising results at low cost, in agreement with medical experts, as well as robustness to both image quality and fundus camera model.
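
    A simplified sketch of the descriptor pipeline, assuming standard uniform LBP in place of the completed sign/magnitude model and using PyWavelets, scikit-image, and scikit-learn; the wavelet choice, decomposition depth, and the random toy data are placeholders.

      import numpy as np
      import pywt
      from skimage.feature import local_binary_pattern
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def lbp_hist(img, P=8, R=1.0):
          # Uniform LBP codes of one image, summarized as a normalized histogram.
          codes = local_binary_pattern(img, P, R, method='uniform')
          hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
          return hist

      def amd_features(image, levels=2):
          # Multiresolution texture descriptor: LBP histograms of the original image
          # and of the wavelet sub-band images at each decomposition level.
          feats = [lbp_hist(image)]
          coeffs = pywt.wavedec2(image, 'db2', level=levels)
          for detail in coeffs[1:]:                 # (horizontal, vertical, diagonal) per level
              feats.extend(lbp_hist(np.abs(d)) for d in detail)
          return np.concatenate(feats)

      # Toy training on random images (2 classes); real use needs labelled fundus photographs.
      X = np.stack([amd_features(np.random.rand(128, 128)) for _ in range(20)])
      y = np.array([0, 1] * 10)
      clf = LinearDiscriminantAnalysis().fit(X, y)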

  18. iElectrodes: A Comprehensive Open-Source Toolbox for Depth and Subdural Grid Electrode Localization.

    PubMed

    Blenkmann, Alejandro O; Phillips, Holly N; Princich, Juan P; Rowe, James B; Bekinschtein, Tristan A; Muravchik, Carlos H; Kochen, Silvia

    2017-01-01

    The localization of intracranial electrodes is a fundamental step in the analysis of invasive electroencephalography (EEG) recordings in research and clinical practice. The conclusions reached from the analysis of these recordings rely on the accuracy of electrode localization in relationship to brain anatomy. However, currently available techniques for localizing electrodes from magnetic resonance (MR) and/or computerized tomography (CT) images are time consuming and/or limited to particular electrode types or shapes. Here we present iElectrodes, an open-source toolbox that provides robust and accurate semi-automatic localization of both subdural grids and depth electrodes. Using pre- and post-implantation images, the method takes 2-3 min to localize the coordinates in each electrode array and automatically number the electrodes. The proposed pre-processing pipeline allows one to work in a normalized space and to automatically obtain anatomical labels of the localized electrodes without neuroimaging experts. We validated the method with data from 22 patients implanted with a total of 1,242 electrodes. We show that localization distances were within 0.56 mm of those achieved by experienced manual evaluators. iElectrodes provided additional advantages in terms of robustness (even with severe perioperative cerebral distortions), speed (less than half the operator time compared to expert manual localization), simplicity, utility across multiple electrode types (surface and depth electrodes) and all brain regions.

  19. iElectrodes: A Comprehensive Open-Source Toolbox for Depth and Subdural Grid Electrode Localization

    PubMed Central

    Blenkmann, Alejandro O.; Phillips, Holly N.; Princich, Juan P.; Rowe, James B.; Bekinschtein, Tristan A.; Muravchik, Carlos H.; Kochen, Silvia

    2017-01-01

    The localization of intracranial electrodes is a fundamental step in the analysis of invasive electroencephalography (EEG) recordings in research and clinical practice. The conclusions reached from the analysis of these recordings rely on the accuracy of electrode localization in relationship to brain anatomy. However, currently available techniques for localizing electrodes from magnetic resonance (MR) and/or computerized tomography (CT) images are time consuming and/or limited to particular electrode types or shapes. Here we present iElectrodes, an open-source toolbox that provides robust and accurate semi-automatic localization of both subdural grids and depth electrodes. Using pre- and post-implantation images, the method takes 2–3 min to localize the coordinates in each electrode array and automatically number the electrodes. The proposed pre-processing pipeline allows one to work in a normalized space and to automatically obtain anatomical labels of the localized electrodes without neuroimaging experts. We validated the method with data from 22 patients implanted with a total of 1,242 electrodes. We show that localization distances were within 0.56 mm of those achieved by experienced manual evaluators. iElectrodes provided additional advantages in terms of robustness (even with severe perioperative cerebral distortions), speed (less than half the operator time compared to expert manual localization), simplicity, utility across multiple electrode types (surface and depth electrodes) and all brain regions. PMID:28303098

  20. Automatic segmentation and co-registration of gated CT angiography datasets: measuring abdominal aortic pulsatility

    NASA Astrophysics Data System (ADS)

    Wentz, Robert; Manduca, Armando; Fletcher, J. G.; Siddiki, Hassan; Shields, Raymond C.; Vrtiska, Terri; Spencer, Garrett; Primak, Andrew N.; Zhang, Jie; Nielson, Theresa; McCollough, Cynthia; Yu, Lifeng

    2007-03-01

    Purpose: To develop robust, novel segmentation and co-registration software to analyze temporally overlapping CT angiography datasets, with an aim to permit automated measurement of regional aortic pulsatility in patients with abdominal aortic aneurysms. Methods: We perform retrospective gated CT angiography in patients with abdominal aortic aneurysms. Multiple, temporally overlapping, time-resolved CT angiography datasets are reconstructed over the cardiac cycle, with aortic segmentation performed using a priori anatomic assumptions for the aorta and heart. Visual quality assessment is performed following automatic segmentation with manual editing. Following subsequent centerline generation, centerlines are cross-registered across phases, with internal validation of co-registration performed by examining registration at the regions of greatest diameter change (i.e. when the second derivative is maximal). Results: We have performed gated CT angiography in 60 patients. Automatic seed placement is successful in 79% of datasets, requiring either no editing (70%) or minimal editing (less than 1 minute; 12%). Causes of error include segmentation into adjacent, high-attenuating, nonvascular tissues; small segmentation errors associated with calcified plaque; and segmentation of non-renal, small paralumbar arteries. Internal validation of cross-registration demonstrates appropriate registration in our patient population. In general, we observed that aortic pulsatility can vary along the course of the abdominal aorta. Pulsation can also vary within an aneurysm as well as between aneurysms, but the clinical significance of these findings remain unknown. Conclusions: Visualization of large vessel pulsatility is possible using ECG-gated CT angiography, partial scan reconstruction, automatic segmentation, centerline generation, and coregistration of temporally resolved datasets.

  1. Robust quantum control using smooth pulses and topological winding

    NASA Astrophysics Data System (ADS)

    Barnes, Edwin; Wang, Xin

    2015-03-01

    Perhaps the greatest challenge in achieving control of microscopic quantum systems is the decoherence induced by the environment, a problem which pervades experimental quantum physics and is particularly severe in the context of solid state quantum computing and nanoscale quantum devices because of the inherently strong coupling to the surrounding material. We present an analytical approach to constructing intrinsically robust driving fields which automatically cancel the leading-order noise-induced errors in a qubit's evolution exactly. We address two of the most common types of non-Markovian noise that arise in qubits: slow fluctuations of the qubit energy splitting and fluctuations in the driving field itself. We demonstrate our method by constructing robust quantum gates for several types of spin qubits, including phosphorous donors in silicon and nitrogen-vacancy centers in diamond. Our results constitute an important step toward achieving robust generic control of quantum systems, bringing their novel applications closer to realization. Work supported by LPS-CMTC.

  2. System transfer modelling for automatic target recognizer evaluations

    NASA Astrophysics Data System (ADS)

    Clark, Lloyd G.

    1991-11-01

    Image processing to accomplish automatic recognition of military vehicles has promised increased weapons systems effectiveness and reduced timelines for a number of Department of Defense missions. Automatic Target Recognizers (ATR) are often claimed to be able to recognize many different ground vehicles as possible targets in military air-to- surface targeting applications. The targeting scenario conditions include different vehicle poses and histories as well as a variety of imaging geometries, intervening atmospheres, and background environments. Testing these ATR subsystems in most cases has been limited to a handful of the scenario conditions of interest, as is represented by imagery collected with the desired imaging sensor. The question naturally arises as to how robust the performance of the ATR is for all scenario conditions of interest, not just for the set of imagery upon which an algorithm was trained.

  3. Disentangling Complexity in Bayesian Automatic Adaptive Quadrature

    NASA Astrophysics Data System (ADS)

    Adam, Gheorghe; Adam, Sanda

    2018-02-01

    The paper describes a Bayesian automatic adaptive quadrature (BAAQ) solution for numerical integration which is simultaneously robust, reliable, and efficient. Detailed discussion is provided of three main factors which contribute to the enhancement of these features: (1) refinement of the m-panel automatic adaptive scheme through the use of integration-domain-length-scale-adapted quadrature sums; (2) fast early problem complexity assessment - enables the non-transitive choice among three execution paths: (i) immediate termination (exceptional cases); (ii) pessimistic - involves time and resource consuming Bayesian inference resulting in radical reformulation of the problem to be solved; (iii) optimistic - asks exclusively for subrange subdivision by bisection; (3) use of the weaker accuracy target from the two possible ones (the input accuracy specifications and the intrinsic integrand properties respectively) - results in maximum possible solution accuracy under minimum possible computing time.
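
    The "optimistic" path, subrange subdivision by bisection, is essentially classical adaptive quadrature; a minimal sketch with a Simpson-rule error estimate (none of the Bayesian machinery) follows.

      import math

      def adaptive_simpson(f, a, b, tol=1e-10):
          # Recursive bisection with a Simpson-rule error estimate
          # (only the bisection path; no complexity assessment is modelled).
          def simpson(a, b):
              c = 0.5 * (a + b)
              return (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b))

          def recurse(a, b, whole, tol):
              c = 0.5 * (a + b)
              left, right = simpson(a, c), simpson(c, b)
              if abs(left + right - whole) < 15.0 * tol:        # standard Richardson-type test
                  return left + right + (left + right - whole) / 15.0
              return recurse(a, c, left, tol / 2) + recurse(c, b, right, tol / 2)

          return recurse(a, b, simpson(a, b), tol)

      print(adaptive_simpson(math.sin, 0.0, math.pi))           # ~2.0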

  4. Grammar-Supported 3d Indoor Reconstruction from Point Clouds for As-Built Bim

    NASA Astrophysics Data System (ADS)

    Becker, S.; Peter, M.; Fritsch, D.

    2015-03-01

    The paper presents a grammar-based approach for the robust automatic reconstruction of 3D interiors from raw point clouds. The core of the approach is a 3D indoor grammar which is an extension of our previously published grammar concept for the modeling of 2D floor plans. The grammar allows for the modeling of buildings whose horizontal, continuous floors are traversed by hallways providing access to the rooms, as is the case for most office buildings or public buildings like schools, hospitals or hotels. The grammar is designed in such a way that it can be embedded in an iterative automatic learning process providing a seamless transition from LOD3 to LOD4 building models. Starting from an initial low-level grammar, automatically derived from the window representations of an available LOD3 building model, hypotheses about indoor geometries can be generated. The hypothesized indoor geometries are checked against observation data - here 3D point clouds - collected in the interior of the building. The verified and accepted geometries form the basis for an automatic update of the initial grammar. In this way, the knowledge content of the initial grammar is enriched, leading to a grammar with increased quality. This higher-level grammar can then be applied to predict realistic geometries for building parts where only sparse observation data are available. Thus, our approach allows for the robust generation of complete 3D indoor models whose quality can be improved continuously as soon as new observation data are fed into the grammar-based reconstruction process. The feasibility of our approach is demonstrated on a real-world example.

  5. Geometric registration of remotely sensed data with SAMIR

    NASA Astrophysics Data System (ADS)

    Gianinetto, Marco; Barazzetti, Luigi; Dini, Luigi; Fusiello, Andrea; Toldo, Roberto

    2015-06-01

    The commercial market offers several software packages for the registration of remotely sensed data through standard one-to-one image matching. Although very rapid and simple, this strategy does not take into consideration all the interconnections among the images of a multi-temporal data set. This paper presents a new scientific software, called Satellite Automatic Multi-Image Registration (SAMIR), able to extend the traditional registration approach towards multi-image global processing. Tests carried out with high-resolution optical (IKONOS) and high-resolution radar (COSMO-SkyMed) data showed that SAMIR can improve the registration phase with a more rigorous and robust workflow without initial approximations, user's interaction or limitation in spatial/spectral data size. The validation highlighted a sub-pixel accuracy in image co-registration for the considered imaging technologies, including optical and radar imagery.

  6. Joint detection and localization of multiple anatomical landmarks through learning

    NASA Astrophysics Data System (ADS)

    Dikmen, Mert; Zhan, Yiqiang; Zhou, Xiang Sean

    2008-03-01

    Reliable landmark detection in medical images provides the essential groundwork for successful automation of various open problems such as localization, segmentation, and registration of anatomical structures. In this paper, we present a learning-based system to jointly detect (is it there?) and localize (where?) multiple anatomical landmarks in medical images. The contributions of this work are twofold. First, the method takes advantage of a learning framework that automatically extracts the most distinctive features for multi-landmark detection. Therefore, it is easily adaptable to detect arbitrary landmarks in various kinds of imaging modalities, e.g., CT, MRI and PET. Second, the use of a multi-class/cascaded classifier architecture in different phases of the detection stage, combined with robust features that are highly efficient in terms of computation time, enables near real-time performance with very high localization accuracy. This method is validated on CT scans of different body sections, e.g., whole body scans, chest scans and abdominal scans. Aside from improved robustness (due to the exploitation of spatial correlations), it gains run-time efficiency in landmark detection. It also scales well as the number of landmarks increases.

  7. "Rate My Therapist": Automated Detection of Empathy in Drug and Alcohol Counseling via Speech and Language Processing

    PubMed Central

    Xiao, Bo; Imel, Zac E.; Georgiou, Panayiotis G.; Atkins, David C.; Narayanan, Shrikanth S.

    2015-01-01

    The technology for evaluating patient-provider interactions in psychotherapy, observational coding, has not changed in 70 years. It is labor-intensive, error-prone, and expensive, limiting its use in evaluating psychotherapy in the real world. Engineering solutions from speech and language processing provide new methods for the automatic evaluation of provider ratings from session recordings. The primary data are 200 Motivational Interviewing (MI) sessions from a study on MI training methods with observer ratings of counselor empathy. Automatic Speech Recognition (ASR) was used to transcribe sessions, and the resulting words were used in a text-based predictive model of empathy. Two supporting datasets trained the speech processing tasks including ASR (1200 transcripts from heterogeneous psychotherapy sessions and 153 transcripts and session recordings from 5 MI clinical trials). The accuracy of computationally-derived empathy ratings was evaluated against human ratings for each provider. Computationally-derived empathy scores and classifications (high vs. low) were highly accurate against human-based codes and classifications, with a correlation of 0.65 and F-score (a weighted average of sensitivity and specificity) of 0.86, respectively. Empathy prediction using human transcription as input (as opposed to ASR) resulted in a slight increase in prediction accuracies, suggesting that the fully automatic system with ASR is relatively robust. Using speech and language processing methods, it is possible to generate accurate predictions of provider performance in psychotherapy from audio recordings alone. This technology can support large-scale evaluation of psychotherapy for dissemination and process studies. PMID:26630392
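
    As a rough illustration of a text-based predictive model of a provider rating, the sketch below fits TF-IDF features with a ridge regression and reports the correlation of predictions with the ratings. The tiny transcripts and scores are invented placeholders, and this is not the authors' actual feature set or model.

```python
# Illustrative text-based rating-prediction pipeline (TF-IDF + linear model).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from scipy.stats import pearsonr

transcripts = [
    "tell me more about how that felt for you",
    "you need to stop drinking it is that simple",
    "it sounds like this has been really hard",
    "just follow the plan and come back next week",
]
empathy_scores = [6.0, 2.0, 5.5, 2.5]   # hypothetical observer ratings

vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
X = vectorizer.fit_transform(transcripts)

model = Ridge(alpha=1.0).fit(X, empathy_scores)
predicted = model.predict(X)

r, _ = pearsonr(predicted, empathy_scores)
print(f"correlation with human ratings (training data only): {r:.2f}")
```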

  8. Accurate and Standardized Coronary Wave Intensity Analysis.

    PubMed

    Rivolo, Simone; Patterson, Tiffany; Asrress, Kaleab N; Marber, Michael; Redwood, Simon; Smith, Nicolas P; Lee, Jack

    2017-05-01

    Coronary wave intensity analysis (cWIA) has increasingly been applied in the clinical research setting to distinguish between the proximal and distal mechanical influences on coronary blood flow. Recently, a cWIA-derived clinical index demonstrated prognostic value in predicting functional recovery after myocardial infarction. Nevertheless, the known operator dependence of the cWIA metrics currently hampers its routine application in clinical practice. Specifically, it was recently demonstrated that the cWIA metrics are highly dependent on the chosen Savitzky-Golay filter parameters used to smooth the acquired traces. Therefore, a novel method to make cWIA standardized and automatic was proposed and evaluated in vivo. The novel approach combines an adaptive Savitzky-Golay filter with high-order central finite differencing after ensemble-averaging the acquired waveforms. Its accuracy was assessed using in vivo human data. The proposed approach was then modified to automatically perform beat-wise cWIA. Finally, the feasibility (accuracy and robustness) of the method was evaluated. The automatic cWIA algorithm provided satisfactory accuracy under a wide range of noise scenarios (≤10% and ≤20% error in the estimation of wave areas and peaks, respectively). These results were confirmed when beat-by-beat cWIA was performed. An accurate, standardized, and automated cWIA was developed. Moreover, the feasibility of beat-wise cWIA was demonstrated for the first time. The proposed algorithm provides practitioners with a standardized technique that could broaden the application of cWIA in clinical practice by enabling multicenter trials. Furthermore, the demonstrated potential of beat-wise cWIA opens the possibility of investigating coronary physiology in real time.
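
    The central numerical ingredient named above, Savitzky-Golay smoothing followed by differentiation of the ensemble-averaged traces, can be sketched as below. The window length, polynomial order and synthetic pressure/velocity signals are illustrative; the published method chooses the filter parameters adaptively.

```python
# Savitzky-Golay smoothing and differentiation of pressure/velocity traces,
# followed by the conventional net wave intensity product of the derivatives.
import numpy as np
from scipy.signal import savgol_filter

fs = 1000.0                          # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
pressure = 90 + 25 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 1.0, t.size)
velocity = 0.3 + 0.2 * np.sin(2 * np.pi * 1.2 * t + 0.4) + np.random.normal(0, 0.01, t.size)

window, order = 51, 3                # fixed here; the paper adapts these automatically
dP = savgol_filter(pressure, window, order, deriv=1, delta=1.0 / fs)
dU = savgol_filter(velocity, window, order, deriv=1, delta=1.0 / fs)

wave_intensity = dP * dU             # net wave intensity per sample
print("peak net wave intensity:", wave_intensity.max())
```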

  9. Automatic identification of agricultural terraces through object-oriented analysis of very high resolution DSMs and multispectral imagery obtained from an unmanned aerial vehicle.

    PubMed

    Diaz-Varela, R A; Zarco-Tejada, P J; Angileri, V; Loudjani, P

    2014-02-15

    Agricultural terraces are features that provide a number of ecosystem services. As a result, their maintenance is supported by measures established by the European Common Agricultural Policy (CAP). In the framework of CAP implementation and monitoring, there is a current and future need for the development of robust, repeatable and cost-effective methodologies for the automatic identification and monitoring of these features at farm scale. This is a complex task, particularly when terraces are associated with complex vegetation cover patterns, as happens with permanent crops (e.g. olive trees). In this study we present a novel methodology for automatic and cost-efficient identification of terraces using only imagery from commercial off-the-shelf (COTS) cameras on board unmanned aerial vehicles (UAVs). Using state-of-the-art computer vision techniques, we generated orthoimagery and digital surface models (DSMs) at 11 cm spatial resolution with low user intervention. In a second stage, these data were used to identify terraces using a multi-scale object-oriented classification method. Results show the potential of this method even in highly complex agricultural areas, both regarding DSM reconstruction and image classification. The UAV-derived DSM had a root mean square error (RMSE) lower than 0.5 m when the height of the terraces was assessed against field GPS data. The subsequent automated terrace classification yielded an overall accuracy of 90% based exclusively on spectral and elevation data derived from the UAV imagery. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Automated segmentation of myocardial scar in late enhancement MRI using combined intensity and spatial information.

    PubMed

    Tao, Qian; Milles, Julien; Zeppenfeld, Katja; Lamb, Hildo J; Bax, Jeroen J; Reiber, Johan H C; van der Geest, Rob J

    2010-08-01

    Accurate assessment of the size and distribution of a myocardial infarction (MI) from late gadolinium enhancement (LGE) MRI is of significant prognostic value for postinfarction patients. In this paper, an automatic MI identification method combining both intensity and spatial information is presented in a clear framework of (i) initialization, (ii) false acceptance removal, and (iii) false rejection removal. The method was validated on LGE MR images of 20 chronic postinfarction patients, using manually traced MI contours from two independent observers as reference. Good agreement was observed between automatic and manual MI identification. Validation results showed that the average Dice indices, which describe the percentage of overlap between two regions, were 0.83 +/- 0.07 and 0.79 +/- 0.08 between the automatic identification and the manual tracing from observer 1 and observer 2, and the errors in estimated infarct percentage were 0.0 +/- 1.9% and 3.8 +/- 4.7% compared with observer 1 and observer 2. The difference between the automatic method and manual tracing is in the order of interobserver variation. In conclusion, the developed automatic method is accurate and robust in MI delineation, providing an objective tool for quantitative assessment of MI in LGE MR imaging.
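
    The Dice index used for the validation above is straightforward to compute for two binary masks; a minimal sketch with synthetic masks follows.

```python
# Dice similarity coefficient between an automatic and a manual binary mask.
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((64, 64), dtype=bool)
manual = np.zeros((64, 64), dtype=bool)
auto[20:40, 20:40] = True
manual[22:42, 22:42] = True
print(f"Dice = {dice(auto, manual):.2f}")
```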

  11. Automatic detection and notification of "wrong patient-wrong location" errors in the operating room.

    PubMed

    Sandberg, Warren S; Häkkinen, Matti; Egan, Marie; Curran, Paige K; Fairbrother, Pamela; Choquette, Ken; Daily, Bethany; Sarkka, Jukka-Pekka; Rattner, David

    2005-09-01

    When procedures and processes to assure patient location based on human performance do not work as expected, patients are brought incrementally closer to a possible "wrong patient-wrong procedure" error. We developed a system for automated patient location monitoring and management. Real-time data from an active infrared/radio frequency identification tracking system provides patient location data that are robust and can be compared with an "expected process" model to automatically flag wrong-location events as soon as they occur. The system also generates messages that are automatically sent to process managers via the hospital paging system, thus creating an active alerting function to annunciate errors. We deployed the system to detect and annunciate "patient-in-wrong-OR" events. The system detected all "wrong-operating room (OR)" events, and all "wrong-OR" locations were correctly assigned within 0.50+/-0.28 minutes (mean+/-SD). This corresponded to the measured latency of the tracking system. All wrong-OR events were correctly annunciated via the paging function. This experiment demonstrates that current technology can automatically collect sufficient data to remotely monitor patient flow through a hospital, provide decision support based on predefined rules, and automatically notify stakeholders of errors.

  12. Semi-automatic segmentation of brain tumors using population and individual information.

    PubMed

    Wu, Yao; Yang, Wei; Jiang, Jun; Li, Shuanqian; Feng, Qianjin; Chen, Wufan

    2013-08-01

    Efficient segmentation of tumors in medical images is of great practical importance in early diagnosis and radiation planning. This paper proposes a novel semi-automatic segmentation method based on population and individual statistical information to segment brain tumors in magnetic resonance (MR) images. First, high-dimensional image features are extracted. Neighborhood components analysis is proposed to learn two optimal distance metrics, which contain population and patient-specific information, respectively. The probability of each pixel belonging to the foreground (tumor) and the background is estimated by the k-nearest neighbor classifier under the learned optimal distance metrics. A cost function for segmentation is constructed through these probabilities and is optimized using graph cuts. Finally, some morphological operations are performed to improve the achieved segmentation results. Our dataset consists of 137 brain MR images, including 68 for training and 69 for testing. The proposed method overcomes segmentation difficulties caused by the uneven gray level distribution of the tumors and can achieve satisfactory results even if the tumors have fuzzy edges. Experimental results demonstrate that the proposed method is robust for brain tumor segmentation.
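
    A minimal sketch of the "learned metric plus k-NN probability" idea follows: a distance metric is learned with Neighborhood Components Analysis and class probabilities are then estimated with a k-NN classifier in the learned space. The synthetic feature vectors stand in for the high-dimensional image features, and the subsequent graph-cut optimization and morphological post-processing are omitted.

```python
# NCA metric learning followed by k-NN foreground-probability estimation.
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis, KNeighborsClassifier
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X_tumor = rng.normal(loc=1.0, scale=0.5, size=(100, 8))
X_background = rng.normal(loc=-1.0, scale=0.5, size=(100, 8))
X = np.vstack([X_tumor, X_background])
y = np.array([1] * 100 + [0] * 100)           # 1 = tumor, 0 = background

model = Pipeline([
    ("nca", NeighborhoodComponentsAnalysis(n_components=4, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=7)),
]).fit(X, y)

# Per-pixel foreground probabilities such as these would feed the graph-cut
# cost function in the full method.
proba = model.predict_proba(rng.normal(size=(5, 8)))[:, 1]
print(proba)
```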

  13. Automatic segmentation of 4D cardiac MR images for extraction of ventricular chambers using a spatio-temporal approach

    NASA Astrophysics Data System (ADS)

    Atehortúa, Angélica; Zuluaga, Maria A.; Ourselin, Sébastien; Giraldo, Diana; Romero, Eduardo

    2016-03-01

    An accurate ventricular function quantification is important to support evaluation, diagnosis and prognosis of several cardiac pathologies. However, expert heart delineation, specifically for the right ventricle, is a time consuming task with high inter- and intra-observer variability. A fully automatic 3D+time heart segmentation framework is herein proposed for short-axis cardiac MRI sequences. This approach estimates the heart using exclusively information from the sequence itself without tuning any parameters. The proposed framework uses a coarse-to-fine approach, which starts by localizing the heart via spatio-temporal analysis, followed by a segmentation of the basal heart that is then propagated to the apex by using a non-rigid registration strategy. The obtained volume is then refined by estimating the ventricular muscle by locally searching a prior endocardium-pericardium intensity pattern. The proposed framework was applied to 48 patient datasets supplied by the organizers of the MICCAI 2012 Right Ventricle segmentation challenge. Results show the robustness, efficiency and competitiveness of the proposed method both in terms of accuracy and computational load.

  14. Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy.

    PubMed

    Kervrann, C; Legland, D; Pardini, L

    2004-06-01

    Fluorescent signal intensities from confocal laser scanning microscopes (CLSM) suffer from several distortions inherent to the method. Namely, layers which lie deeper within the specimen are relatively dark due to absorption and scattering of both excitation and fluorescent light, photobleaching and/or other factors. Because of these effects, a quantitative analysis of images is not always possible without correction. Under certain assumptions, the decay of intensities can be estimated and used for a partial depth intensity correction. In this paper we propose an original robust incremental method for compensating the attenuation of intensity signals. Most previous correction methods are more or less empirical and based on fitting a decreasing parametric function to the section mean intensity curve computed by summing all pixel values in each section. The fitted curve is then used for the calculation of correction factors for each section and a new compensated sections series is computed. However, these methods do not perfectly correct the images. Hence, the algorithm we propose for the automatic correction of intensities relies on robust estimation, which automatically ignores pixels where measurements deviate from the decay model. It is based on techniques adopted from the computer vision literature for image motion estimation. The resulting algorithm is used to correct volumes acquired in CLSM. An implementation of such a restoration filter is discussed and examples of successful restorations are given.

  15. Geometrically Flexible and Efficient Flow Analysis of High Speed Vehicles Via Domain Decomposition, Part 1: Unstructured-Grid Solver for High Speed Flows

    NASA Technical Reports Server (NTRS)

    White, Jeffery A.; Baurle, Robert A.; Passe, Bradley J.; Spiegel, Seth C.; Nishikawa, Hiroaki

    2017-01-01

    The ability to solve the equations governing the hypersonic turbulent flow of a real gas on unstructured grids using a spatially-elliptic, 2nd-order accurate, cell-centered, finite-volume method has been recently implemented in the VULCAN-CFD code. This paper describes the key numerical methods and techniques that were found to be required to robustly obtain accurate solutions to hypersonic flows on non-hex-dominant unstructured grids. The methods and techniques described include: an augmented stencil, weighted linear least squares, cell-average gradient method, a robust multidimensional cell-average gradient-limiter process that is consistent with the augmented stencil of the cell-average gradient method and a cell-face gradient method that contains a cell skewness sensitive damping term derived using hyperbolic diffusion based concepts. A data-parallel matrix-based symmetric Gauss-Seidel point-implicit scheme, used to solve the governing equations, is described and shown to be more robust and efficient than a matrix-free alternative. In addition, a y+ adaptive turbulent wall boundary condition methodology is presented. This boundary condition methodology is designed to automatically switch between a solve-to-the-wall and a wall-matching-function boundary condition based on the local y+ of the 1st cell center off the wall. The aforementioned methods and techniques are then applied to a series of hypersonic and supersonic turbulent flat plate unit tests to examine the efficiency, robustness and convergence behavior of the implicit scheme and to determine the ability of the solve-to-the-wall and y+ adaptive turbulent wall boundary conditions to reproduce the turbulent law-of-the-wall. Finally, the thermally perfect, chemically frozen, Mach 7.8 turbulent flow of air through a scramjet flow-path is computed and compared with experimental data to demonstrate the robustness, accuracy and convergence behavior of the unstructured-grid solver for a realistic 3-D geometry on a non-hex-dominant grid.
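
    The y+ adaptive switching idea can be illustrated with a few lines: compute the non-dimensional wall distance of the first cell centre and pick the wall treatment accordingly. The crossover value and the flow quantities below are placeholders, not VULCAN-CFD settings.

```python
# Toy y+-based switch between solve-to-the-wall and wall-function treatments.

def y_plus(rho, u_tau, y_first_cell, mu):
    """Non-dimensional wall distance of the first cell centre: y+ = rho*u_tau*y/mu."""
    return rho * u_tau * y_first_cell / mu

def wall_treatment(yp, crossover=11.0):
    # Below the crossover the cell centre sits in the viscous sublayer, so the
    # equations can be integrated to the wall; above it a wall function is used.
    return "solve-to-the-wall" if yp < crossover else "wall-matching-function"

rho, mu = 0.4, 1.8e-5          # kg/m^3, Pa*s (illustrative values)
u_tau, y1 = 45.0, 2.0e-6       # friction velocity (m/s), first cell height (m)
yp = y_plus(rho, u_tau, y1, mu)
print(f"y+ = {yp:.2f} -> {wall_treatment(yp)}")
```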

  16. Explicit robust schemes for implementation of a class of principal value-based constitutive models: Symbolic and numeric implementation

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.; Saleeb, A. F.; Tan, H. Q.; Zhang, Y.

    1993-01-01

    The issue of developing effective and robust schemes to implement a class of the Ogden-type hyperelastic constitutive models is addressed. To this end, special purpose functions (running under MACSYMA) are developed for the symbolic derivation, evaluation, and automatic FORTRAN code generation of explicit expressions for the corresponding stress function and material tangent stiffness tensors. These explicit forms are valid over the entire deformation range, since the singularities resulting from repeated principal-stretch values have been theoretically removed. The required computational algorithms are outlined, and the resulting FORTRAN computer code is presented.

  17. Infrared target recognition based on improved joint local ternary pattern

    NASA Astrophysics Data System (ADS)

    Sun, Junding; Wu, Xiaosheng

    2016-05-01

    This paper presents a simple, efficient, yet robust approach, named joint orthogonal combination of local ternary pattern, for automatic forward-looking infrared target recognition. Compared with traditional LBP-based methods, it describes macroscopic and microscopic textures better by fusing a variety of scales. In addition, it can effectively reduce the feature dimensionality. Further, the rotation invariant and uniform scheme, the robust LTP, and soft concave-convex partition are introduced to enhance its discriminative power. Experimental results demonstrate that the proposed method can achieve competitive results compared with the state-of-the-art methods.

  18. Optimized feature-detection for on-board vision-based surveillance

    NASA Astrophysics Data System (ADS)

    Gond, Laetitia; Monnin, David; Schneider, Armin

    2012-06-01

    The detection and matching of robust features in images is an important step in many computer vision applications. In this paper, the importance of the keypoint detection algorithms and their inherent parameters in the particular context of an image-based change detection system for IED detection is studied. Through extensive application-oriented experiments, we evaluate and compare the most popular feature detectors proposed by the computer vision community. We analyze how to automatically adjust these algorithms to changing imaging conditions and suggest improvements in order to achieve more flexibility and robustness in their practical implementation.

  19. Model-based vision using geometric hashing

    NASA Astrophysics Data System (ADS)

    Akerman, Alexander, III; Patton, Ronald

    1991-04-01

    The Geometric Hashing technique developed by the NYU Courant Institute has been applied to various automatic target recognition applications. In particular, I-MATH has extended the hashing algorithm to perform automatic target recognition of synthetic aperture radar (SAR) imagery. For this application, the hashing is performed upon the geometric locations of dominant scatterers. In addition to being a robust model-based matching algorithm -- invariant under translation, scale, and 3D rotations of the target -- hashing is of particular utility because it can still perform effective matching when the target is partially obscured. Moreover, hashing is very amenable to a SIMD parallel processing architecture, and thus potentially implementable in real time.

  20. Morphological self-organizing feature map neural network with applications to automatic target recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Shijun; Jing, Zhongliang; Li, Jianxun

    2005-01-01

    The rotation invariant feature of the target is obtained using the multi-direction feature extraction property of the steerable filter. Combining the morphological operation top-hat transform with the self-organizing feature map neural network, the adaptive topological region is selected. Using the erosion operation, the topological region shrinkage is achieved. The steerable filter based morphological self-organizing feature map neural network is applied to automatic target recognition of binary standard patterns and real-world infrared sequence images. Compared with the Hamming network and morphological shared-weight networks, the proposed method achieves a higher correct recognition rate, robust adaptability, quick training, and better generalization.

  1. [Medical image elastic registration smoothed by unconstrained optimized thin-plate spline].

    PubMed

    Zhang, Yu; Li, Shuxiang; Chen, Wufan; Liu, Zhexing

    2003-12-01

    Elastic registration of medical images is an important subject in medical image processing. Previous work has concentrated on selecting the corresponding landmarks manually and then using thin-plate spline interpolation to obtain the elastic transformation. However, landmark extraction is always prone to error, which will influence the registration results. Localizing the landmarks manually is also difficult and time-consuming. We used optimization theory to improve the thin-plate spline interpolation and, based on it, an automatic method to extract the landmarks. Combining these two steps, we have proposed an automatic, exact and robust registration method and have gained satisfactory registration results.

  2. Automatic identification of bacterial types using statistical imaging methods

    NASA Astrophysics Data System (ADS)

    Trattner, Sigal; Greenspan, Hayit; Tepper, Gapi; Abboud, Shimon

    2003-05-01

    The objective of the current study is to develop an automatic tool to identify bacterial types using computer-vision and statistical modeling techniques. Bacteriophage (phage)-typing methods are used to identify and extract representative profiles of bacterial types, such as Staphylococcus aureus. Current systems rely on the subjective reading of plaque profiles by a human expert. This process is time-consuming and prone to errors, especially as technology is enabling the increase in the number of phages used for typing. The statistical methodology presented in this work provides for an automated, objective and robust analysis of visual data, along with the ability to cope with increasing data volumes.

  3. Research on gait-based human identification

    NASA Astrophysics Data System (ADS)

    Li, Youguo

    Gait recognition refers to the automatic identification of an individual based on his or her style of walking. This paper proposes a gait recognition method based on a Continuous Hidden Markov Model with a Mixture of Gaussians (G-CHMM). First, we initialize a Gaussian mixture model for the training image sequence with the K-means algorithm, and then train the HMM parameters using the Baum-Welch algorithm. A continuous HMM is trained from the gait feature sequences of every person, so that the 7 key frames and the obtained HMM represent each person's gait sequence. Finally, recognition is performed with the forward algorithm. Experiments on the CASIA gait databases achieve a comparatively high correct identification rate and comparatively strong robustness over a variety of body angles.

  4. Practical gigahertz quantum key distribution robust against channel disturbance.

    PubMed

    Wang, Shuang; Chen, Wei; Yin, Zhen-Qiang; He, De-Yong; Hui, Cong; Hao, Peng-Lei; Fan-Yuan, Guan-Jie; Wang, Chao; Zhang, Li-Jun; Kuang, Jie; Liu, Shu-Feng; Zhou, Zheng; Wang, Yong-Gang; Guo, Guang-Can; Han, Zheng-Fu

    2018-05-01

    Quantum key distribution (QKD) provides an attractive solution for secure communication. However, channel disturbance severely limits its application when a QKD system is transferred from the laboratory to the field. Here a high-speed Faraday-Sagnac-Michelson QKD system is proposed that can automatically compensate for the channel polarization disturbance, which largely avoids the intermittent operation caused by environmental changes. Over a 50 km fiber channel with 30 Hz polarization scrambling, the practicality of this phase-coding QKD system was characterized with an interference fringe visibility of 99.35% over 24 h and a stable secure key rate of 306 kbits/s over seven days without active polarization alignment.

  5. Mobile/android application for QRS detection using zero cross method

    NASA Astrophysics Data System (ADS)

    Rizqyawan, M. I.; Simbolon, A. I.; Suhendra, M. A.; Amri, M. F.; Kusumandari, D. E.

    2018-03-01

    In automatic ECG signal processing, one of the main topics of research is QRS complex detection. Correctly detecting the QRS complex, or R peak, is important since it is used to derive several other ECG metrics. One of the robust methods for QRS detection is the Zero Cross method. This method adds a high-frequency signal to the ECG and counts zero crossings to detect the QRS complex, which exhibits a low-frequency oscillation. This paper presents an application of QRS detection using the Zero Cross algorithm in an Android-based system. The performance of the algorithm in the mobile environment is measured. The result shows that this method is suitable for real-time QRS detection in a mobile application.
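
    A highly simplified sketch of the zero-cross idea follows: after band-pass filtering, an alternating high-frequency term is added and zero crossings are counted in a sliding window; the count drops inside the high-amplitude QRS complex, which can then be thresholded. The filter band, constants and synthetic ECG are illustrative and not the tuned values of the original method.

```python
# Simplified zero-crossing feature for QRS detection (illustrative only).
import numpy as np
from scipy.signal import butter, filtfilt

def zero_cross_feature(ecg, fs, win=0.10, k=0.05):
    b, a = butter(2, [10.0 / (fs / 2), 30.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    boosted = filtered ** 2 * np.sign(filtered)               # keep sign, emphasize QRS
    n = np.arange(boosted.size)
    with_hf = boosted + k * np.std(boosted) * (-1.0) ** n     # add alternating HF term
    crossings = (np.diff(np.signbit(with_hf).astype(np.int8)) != 0).astype(float)
    w = int(win * fs)
    return np.convolve(crossings, np.ones(w) / w, mode="same")

if __name__ == "__main__":
    fs = 360.0
    t = np.arange(0, 5, 1 / fs)
    ecg = np.random.normal(0, 0.05, t.size)
    ecg[(np.arange(t.size) % int(fs)) == 0] += 1.5            # crude synthetic R peaks
    feature = zero_cross_feature(ecg, fs)
    qrs_region = feature < 0.5 * feature.mean()               # low count => candidate QRS
    print("samples flagged as QRS:", int(qrs_region.sum()))
```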

  6. Multivariable control of a twin lift helicopter system using the LQG/LTR design methodology

    NASA Technical Reports Server (NTRS)

    Rodriguez, A. A.; Athans, M.

    1986-01-01

    Guidelines for developing a multivariable centralized automatic flight control system (AFCS) for a twin lift helicopter system (TLHS) are presented. Singular value ideas are used to formulate performance and stability robustness specifications. A linear Quadratic Gaussian with Loop Transfer Recovery (LQG/LTR) design is obtained and evaluated.

  7. Automatic segmentation of multimodal brain tumor images based on classification of super-voxels.

    PubMed

    Kadkhodaei, M; Samavi, S; Karimi, N; Mohaghegh, H; Soroushmehr, S M R; Ward, K; All, A; Najarian, K

    2016-08-01

    Despite the rapid growth in brain tumor segmentation approaches, there are still many challenges in this field. Automatic segmentation of brain images has a critical role in decreasing the burden of manual labeling and increasing robustness of brain tumor diagnosis. We consider segmentation of glioma tumors, which have a wide variation in size, shape and appearance properties. In this paper images are enhanced and normalized to the same scale in a preprocessing step. The enhanced images are then segmented based on their intensities using 3D super-voxels. Usually in images a tumor region can be regarded as a salient object. Inspired by this observation, we propose a new feature which uses a saliency detection algorithm. An edge-aware filtering technique is employed to align edges of the original image to the saliency map which enhances the boundaries of the tumor. Then, for classification of tumors in brain images, a set of robust texture features are extracted from super-voxels. Experimental results indicate that our proposed method outperforms a comparable state-of-the-art algorithm in terms of Dice score.

  8. Multiscale CNNs for Brain Tumor Segmentation and Diagnosis.

    PubMed

    Zhao, Liya; Jia, Kebin

    2016-01-01

    Early brain tumor detection and diagnosis are critical to clinics. Thus segmentation of focused tumor area needs to be accurate, efficient, and robust. In this paper, we propose an automatic brain tumor segmentation method based on Convolutional Neural Networks (CNNs). Traditional CNNs focus only on local features and ignore global region features, which are both important for pixel classification and recognition. Besides, brain tumor can appear in any place of the brain and be any size and shape in patients. We design a three-stream framework named as multiscale CNNs which could automatically detect the optimum top-three scales of the image sizes and combine information from different scales of the regions around that pixel. Datasets provided by Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized by MICCAI 2013 are utilized for both training and testing. The designed multiscale CNNs framework also combines multimodal features from T1, T1-enhanced, T2, and FLAIR MRI images. By comparison with traditional CNNs and the best two methods in BRATS 2012 and 2013, our framework shows advances in brain tumor segmentation accuracy and robustness.

  9. Robust Cell Detection of Histopathological Brain Tumor Images Using Sparse Reconstruction and Adaptive Dictionary Selection

    PubMed Central

    Su, Hai; Xing, Fuyong; Yang, Lin

    2016-01-01

    Successful diagnostic and prognostic stratification, treatment outcome prediction, and therapy planning depend on reproducible and accurate pathology analysis. Computer aided diagnosis (CAD) is a useful tool to help doctors make better decisions in cancer diagnosis and treatment. Accurate cell detection is often an essential prerequisite for subsequent cellular analysis. The major challenge of robust brain tumor nuclei/cell detection is to handle significant variations in cell appearance and to split touching cells. In this paper, we present an automatic cell detection framework using sparse reconstruction and adaptive dictionary learning. The main contributions of our method are: 1) A sparse reconstruction based approach to split touching cells; 2) An adaptive dictionary learning method used to handle cell appearance variations. The proposed method has been extensively tested on a data set with more than 2000 cells extracted from 32 whole slide scanned images. The automatic cell detection results are compared with the manually annotated ground truth and other state-of-the-art cell detection algorithms. The proposed method achieves the best cell detection accuracy with an F1 score of 0.96. PMID:26812706

  10. A Kalman-Filter-Based Common Algorithm Approach for Object Detection in Surgery Scene to Assist Surgeon's Situation Awareness in Robot-Assisted Laparoscopic Surgery

    PubMed Central

    2018-01-01

    Although the use of the surgical robot is rapidly expanding for various medical treatments, there still exist safety issues and concerns about robot-assisted surgeries due to limited vision through a laparoscope, which may cause compromised situation awareness and surgical errors requiring rapid emergency conversion to open surgery. To assist the surgeon's situation awareness and preventive emergency response, this study proposes situation information guidance through a vision-based common algorithm architecture for automatic detection and tracking of intraoperative hemorrhage and surgical instruments. The proposed common architecture comprises localization of the object of interest using texture features and morphological information, and tracking of the object based on a Kalman filter for robustness with reduced error. The average recall and precision of the instrument detection in four prostate surgery videos were 96% and 86%, and the accuracy of the hemorrhage detection in two prostate surgery videos was 98%. Results demonstrate the robustness of the automatic intraoperative object detection and tracking which can be used to enhance the surgeon's preventive state recognition during robot-assisted surgery. PMID:29854366
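
    The tracking stage mentioned above can be illustrated with a minimal constant-velocity Kalman filter for a 2D object centre; the matrices and noise levels below are generic placeholders rather than the authors' settings.

```python
# Constant-velocity Kalman filter for tracking a detected object centre.
import numpy as np

dt = 1.0                                    # one frame
F = np.array([[1, 0, dt, 0],                # state transition for (x, y, vx, vy)
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                 # only position is measured
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01                        # process noise
R = np.eye(2) * 4.0                         # measurement noise (pixels^2)

x = np.zeros(4)                             # initial state
P = np.eye(4) * 100.0                       # initial uncertainty

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z (detected object centre)
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in [np.array([10.0, 12.0]), np.array([13.0, 15.0]), np.array([16.5, 18.2])]:
    x, P = kalman_step(x, P, z)
    print("filtered position:", x[:2])
```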

  11. Low-Power Photoplethysmogram Acquisition Integrated Circuit with Robust Light Interference Compensation.

    PubMed

    Kim, Jongpal; Kim, Jihoon; Ko, Hyoungho

    2015-12-31

    To overcome light interference, including a large DC offset and ambient light variation, a robust photoplethysmogram (PPG) readout chip is fabricated using a 0.13-μm complementary metal-oxide-semiconductor (CMOS) process. Against the large DC offset, a saturation detection and current feedback circuit is proposed to compensate for an offset current of up to 30 μA. For robustness against optical path variation, an automatic emitted light compensation method is adopted. To prevent ambient light interference, an alternating sampling and charge redistribution technique is also proposed. In the proposed technique, no additional power is consumed, and only three differential switches and one capacitor are required. The PPG readout channel consumes 26.4 μW and has an input referred current noise of 260 pArms.

  12. Low-Power Photoplethysmogram Acquisition Integrated Circuit with Robust Light Interference Compensation

    PubMed Central

    Kim, Jongpal; Kim, Jihoon; Ko, Hyoungho

    2015-01-01

    To overcome light interference, including a large DC offset and ambient light variation, a robust photoplethysmogram (PPG) readout chip is fabricated using a 0.13-μm complementary metal–oxide–semiconductor (CMOS) process. Against the large DC offset, a saturation detection and current feedback circuit is proposed to compensate for an offset current of up to 30 μA. For robustness against optical path variation, an automatic emitted light compensation method is adopted. To prevent ambient light interference, an alternating sampling and charge redistribution technique is also proposed. In the proposed technique, no additional power is consumed, and only three differential switches and one capacitor are required. The PPG readout channel consumes 26.4 μW and has an input referred current noise of 260 pArms. PMID:26729122

  13. Generic and robust method for automatic segmentation of PET images using an active contour model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhuang, Mingzan

    Purpose: Although positron emission tomography (PET) images have shown potential to improve the accuracy of targeting in radiation therapy planning and assessment of response to treatment, the boundaries of tumors are not easily distinguishable from surrounding normal tissue owing to the low spatial resolution and inherent noisy characteristics of PET images. The objective of this study is to develop a generic and robust method for automatic delineation of tumor volumes using an active contour model and to evaluate its performance using phantom and clinical studies. Methods: MASAC, a method for automatic segmentation using an active contour model, incorporates the histogram, fuzzy C-means clustering, and localized and textural information to constrain the active contour to detect boundaries in an accurate and robust manner. Moreover, the lattice Boltzmann method is used as an alternative approach for solving the level set equation to make it faster and suitable for parallel programming. Twenty simulated phantom studies and 16 clinical studies, including six cases of pharyngolaryngeal squamous cell carcinoma and ten cases of nonsmall cell lung cancer, were included to evaluate its performance. Besides, the proposed method was also compared with the contourlet-based active contour algorithm (CAC) and Schaefer’s thresholding method (ST). The relative volume error (RE), Dice similarity coefficient (DSC), and classification error (CE) metrics were used to analyze the results quantitatively. Results: For the simulated phantom studies (PSs), MASAC and CAC provide similar segmentations of the different lesions, while ST fails to achieve reliable results. For the clinical datasets (2 cases with connected high-uptake regions excluded) (CSs), CAC provides for the lowest mean RE (−8.38% ± 27.49%), while MASAC achieves the best mean DSC (0.71 ± 0.09) and mean CE (53.92% ± 12.65%), respectively. MASAC could reliably quantify different types of lesions assessed in this work with good accuracy, resulting in a mean RE of −13.35% ± 11.87% and −11.15% ± 23.66%, a mean DSC of 0.89 ± 0.05 and 0.71 ± 0.09, and a mean CE of 19.19% ± 7.89% and 53.92% ± 12.65%, for PSs and CSs, respectively. Conclusions: The authors’ results demonstrate that the developed novel PET segmentation algorithm is applicable to various types of lesions in the authors’ study and is capable of producing accurate and consistent target volume delineations, potentially resulting in reduced intraobserver and interobserver variabilities observed when using manual delineation and improved accuracy in treatment planning and outcome evaluation.

  14. Robust and sparse correlation matrix estimation for the analysis of high-dimensional genomics data.

    PubMed

    Serra, Angela; Coretto, Pietro; Fratello, Michele; Tagliaferri, Roberto; Stegle, Oliver

    2018-02-15

    Microarray technology can be used to study the expression of thousands of genes across a number of different experimental conditions, usually hundreds. The underlying principle is that genes sharing similar expression patterns, across different samples, can be part of the same co-expression system, or they may share the same biological functions. Groups of genes are usually identified based on cluster analysis. Clustering methods rely on the similarity matrix between genes. A common choice to measure similarity is to compute the sample correlation matrix. Dimensionality reduction is another popular data analysis task which is also based on covariance/correlation matrix estimates. Unfortunately, covariance/correlation matrix estimation suffers from the intrinsic noise present in high-dimensional data. Sources of noise are: sampling variations, the presence of outlying sample units, and the fact that in most cases the number of units is much smaller than the number of genes. In this paper, we propose a robust correlation matrix estimator that is regularized based on adaptive thresholding. The resulting method jointly tames the effects of high dimensionality and data contamination. Computations are easy to implement and do not require hand tuning. Both simulated and real data are analyzed. A Monte Carlo experiment shows that the proposed method is capable of remarkable performance. Our correlation metric is more robust to outliers compared with the existing alternatives in two gene expression datasets. It is also shown how the regularization allows spurious correlations to be detected and filtered automatically. The same regularization is also extended to other less robust correlation measures. Finally, we apply the ARACNE algorithm on the SyNTreN gene expression data. Sensitivity and specificity of the reconstructed network are compared with the gold standard. We show that ARACNE performs better when it takes the proposed correlation matrix estimator as input. The R software is available at https://github.com/angy89/RobustSparseCorrelation. aserra@unisa.it or robtag@unisa.it. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
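
    A generic sketch of "robust correlation plus thresholding" follows: a rank-based (Spearman) correlation matrix whose small entries are hard-thresholded to zero. This only illustrates the idea; it is not the adaptive estimator proposed in the paper, and the data are synthetic.

```python
# Rank-based correlation matrix with hard thresholding of small entries.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_samples, n_genes = 40, 200
expression = rng.normal(size=(n_samples, n_genes))
expression[:5] += rng.normal(scale=8.0, size=(5, n_genes))    # a few outlying samples

corr, _ = spearmanr(expression)             # (n_genes x n_genes) rank correlation

threshold = np.sqrt(np.log(n_genes) / n_samples)   # a common universal choice
sparse_corr = np.where(np.abs(corr) >= threshold, corr, 0.0)
np.fill_diagonal(sparse_corr, 1.0)

print("fraction of off-diagonal entries kept:",
      (np.count_nonzero(sparse_corr) - n_genes) / (n_genes * (n_genes - 1)))
```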

  15. A novel knowledge-based system for interpreting complex engineering drawings: theory, representation, and implementation.

    PubMed

    Lu, Tong; Tai, Chiew-Lan; Yang, Huafei; Cai, Shijie

    2009-08-01

    We present a novel knowledge-based system to automatically convert real-life engineering drawings to content-oriented high-level descriptions. The proposed method essentially divides the complex interpretation process into two parts: knowledge representation and knowledge-based interpretation. We propose a new hierarchical descriptor-based knowledge representation method to organize the various types of engineering objects and their complex high-level relations. The descriptors are defined using an Extended Backus Naur Form (EBNF), facilitating modification and maintenance. When interpreting a set of related engineering drawings, the knowledge-based interpretation system first constructs an EBNF-tree from the knowledge representation file, then searches for potential engineering objects guided by a depth-first order of the nodes in the EBNF-tree. Experimental results and comparisons with other interpretation systems demonstrate that our knowledge-based system is accurate and robust for high-level interpretation of complex real-life engineering projects.

  16. Automatic macroscopic characterization of diesel sprays by means of a new image processing algorithm

    NASA Astrophysics Data System (ADS)

    Rubio-Gómez, Guillermo; Martínez-Martínez, S.; Rua-Mojica, Luis F.; Gómez-Gordo, Pablo; de la Garza, Oscar A.

    2018-05-01

    A novel algorithm is proposed for the automatic segmentation of diesel spray images and the calculation of their macroscopic parameters. The algorithm automatically detects each spray present in an image, and therefore it is able to work with diesel injectors with a different number of nozzle holes without any modification. The main characteristic of the algorithm is that it splits each spray into three different regions and then segments each one with an individually calculated binarization threshold. Each threshold level is calculated from the analysis of a representative luminosity profile of each region. This approach makes it robust to irregular light distribution along a single spray and between different sprays of an image. Once the sprays are segmented, the macroscopic parameters of each one are calculated. The algorithm is tested with two sets of diesel spray images taken under normal and irregular illumination setups.
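
    The region-wise thresholding idea can be sketched as below: the spray axis is split into three bands and each band is binarized with its own threshold (Otsu here, whereas the paper derives the threshold from a representative luminosity profile of each region). The image is synthetic.

```python
# Per-region thresholding of a synthetic spray image, then a macroscopic parameter.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
img = rng.normal(20, 5, size=(120, 300))              # background
img[40:80, :] += np.linspace(120, 30, 300)            # fake spray, fading with distance

segments = []
for cols in np.array_split(np.arange(img.shape[1]), 3):   # near / mid / far regions
    region = img[:, cols]
    segments.append(region > threshold_otsu(region))      # per-region threshold
mask = np.concatenate(segments, axis=1)

# Macroscopic parameters such as penetration follow from the segmented mask.
penetration_px = int(np.max(np.nonzero(mask.any(axis=0))[0])) + 1
print("spray penetration (pixels):", penetration_px)
```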

  17. Automatic detection of small surface targets with electro-optical sensors in a harbor environment

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; de Lange, Dirk-Jan J.; van den Broek, Sebastiaan P.; Kemp, Rob A. W.; Schwering, Piet B. W.

    2008-10-01

    In modern warfare scenarios naval ships must operate in coastal environments. These complex environments, in bays and narrow straits with cluttered littoral backgrounds and many civilian ships, may contain asymmetric threats from fast targets such as RHIBs, cabin boats and jet-skis. Optical sensors, in combination with image enhancement and automatic detection, assist an operator in reducing the response time, which is crucial for the protection of the naval and land-based supporting forces. In this paper, we present our work on automatic detection of small surface targets, which includes multi-scale horizon detection and robust estimation of the background intensity. To evaluate the performance of our detection technology, data were recorded with both infrared and visual-light cameras in a coastal zone and in a harbor environment. During these trials multiple small targets were used. Results of this evaluation are shown in this paper.

  18. Automatic draft reading based on image processing

    NASA Astrophysics Data System (ADS)

    Tsujii, Takahiro; Yoshida, Hiromi; Iiguni, Youji

    2016-10-01

    In marine transportation, a draft survey is a means to determine the quantity of bulk cargo. Automatic draft reading based on computer image processing has been proposed. However, conventional draft mark segmentation may fail when the video sequence contains many regions other than the draft marks and the hull, and the estimated waterline is inherently higher than the true one. To solve these problems, we propose an automatic draft reading method that uses morphological operations to detect draft marks and estimates the waterline for every frame with Canny edge detection and robust estimation. Moreover, we emulate the surveyors' draft reading process so that the result can be understood and accepted by both the shipper and the receiver. In an experiment in a towing tank, the draft reading error of the proposed method was <1 cm, showing its advantage. It is also shown that accurate draft reading has been achieved in a real-world scene.
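
    The waterline-estimation step, Canny edges followed by a robust line fit, can be sketched as follows; RANSAC is used here as the robust estimator, the image is synthetic, and the thresholds are illustrative.

```python
# Canny edges + RANSAC line fit as a robust waterline estimator (illustrative).
import numpy as np
from skimage.feature import canny
from skimage.measure import ransac, LineModelND

rng = np.random.default_rng(0)
img = np.full((200, 300), 0.8)
img[130:, :] = 0.3                                   # darker water below the waterline
img += rng.normal(0, 0.02, img.shape)

edges = canny(img, sigma=2.0)
ys, xs = np.nonzero(edges)
points = np.column_stack([xs, ys]).astype(float)

model, inliers = ransac(points, LineModelND,
                        min_samples=2, residual_threshold=2.0, max_trials=500)
origin, direction = model.params
print("estimated waterline passes through", origin, "with direction", direction)
```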

  19. Quantifying the robustness of [18F]FDG-PET/CT radiomic features with respect to tumor delineation in head and neck and pancreatic cancer patients.

    PubMed

    Belli, Maria Luisa; Mori, Martina; Broggi, Sara; Cattaneo, Giovanni Mauro; Bettinardi, Valentino; Dell'Oca, Italo; Fallanca, Federico; Passoni, Paolo; Vanoli, Emilia Giovanna; Calandrino, Riccardo; Di Muzio, Nadia; Picchio, Maria; Fiorino, Claudio

    2018-05-01

    To investigate the robustness of PET radiomic features (RF) against tumour delineation uncertainty in two clinically relevant situations. Twenty-five head-and-neck (HN) and 25 pancreatic cancer patients previously treated with 18F-fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT)-based planning optimization were considered. Seven FDG-based contours were delineated for tumour (T) and positive lymph nodes (N, for HN patients only) following manual (2 observers), semi-automatic (based on SUV maximum gradient: PET_Edge) and automatic (40%, 50%, 60%, 70% SUV_max thresholds) methods. Seventy-three RF (14 of first order and 59 of higher order) were extracted using the CGITA software (v.1.4). The impact of delineation on volume agreement and RF was assessed by DICE and Intra-class Correlation Coefficients (ICC). A large disagreement between the manual and SUV_max-based methods was found for thresholds ≥50%. Inter-observer variability showed median DICE values between 0.81 (HN-T) and 0.73 (pancreas). Volumes defined by PET_Edge were more consistent with the manual ones than those defined by SUV40%. Regarding RF, 19%/19%/47% of the features showed ICC < 0.80 between observers for HN-N/HN-T/pancreas, mostly in the Voxel-alignment matrix and in the intensity-size zone matrix families. RFs with ICC < 0.80 against manual delineation (taking the worst value) increased to 44%/36%/61% for PET_Edge and to 69%/53%/75% for SUV40%. About 80%/50% of 72 RF were consistent between observers for HN/pancreas patients. PET_Edge was sufficiently robust against manual delineation while SUV40% showed a worse performance. This result suggests the possibility of replacing manual with semi-automatic delineation of HN and pancreas tumours in studies including PET radiomic analyses. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  20. Automatic detection of Parkinson's disease in running speech spoken in three different languages.

    PubMed

    Orozco-Arroyave, J R; Hönig, F; Arias-Londoño, J D; Vargas-Bonilla, J F; Daqrouq, K; Skodda, S; Rusz, J; Nöth, E

    2016-01-01

    The aim of this study is the analysis of continuous speech signals of people with Parkinson's disease (PD) considering recordings in different languages (Spanish, German, and Czech). A method for the characterization of the speech signals, based on the automatic segmentation of utterances into voiced and unvoiced frames, is addressed here. The energy content of the unvoiced sounds is modeled using 12 Mel-frequency cepstral coefficients and 25 bands scaled according to the Bark scale. Four speech tasks comprising isolated words, rapid repetition of the syllables /pa/-/ta/-/ka/, sentences, and read texts are evaluated. The method proves to be more accurate than classical approaches in the automatic classification of speech of people with PD and healthy controls. The accuracies range from 85% to 99% depending on the language and the speech task. Cross-language experiments are also performed confirming the robustness and generalization capability of the method, with accuracies ranging from 60% to 99%. This work comprises a step forward for the development of computer aided tools for the automatic assessment of dysarthric speech signals in multiple languages.
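
    A rough sketch of the feature-extraction idea follows: frames are labelled unvoiced with a crude energy/zero-crossing heuristic and 12 MFCCs are computed for those frames only. This is a simplification of the published voiced/unvoiced segmentation (which also uses Bark-scaled band energies), and the synthetic signal and thresholds are placeholders.

```python
# MFCCs restricted to (heuristically) unvoiced frames of a synthetic signal.
import numpy as np
import librosa

sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
voiced_like = 0.5 * np.sin(2 * np.pi * 150 * t)                    # stands in for a vowel
unvoiced_like = 0.1 * np.random.default_rng(0).normal(size=sr)     # stands in for /s/
y = np.concatenate([voiced_like, unvoiced_like]).astype(np.float32)

frame, hop = 512, 256
rms = librosa.feature.rms(y=y, frame_length=frame, hop_length=hop)[0]
zcr = librosa.feature.zero_crossing_rate(y, frame_length=frame, hop_length=hop)[0]
unvoiced = (zcr > np.median(zcr)) & (rms < np.median(rms))          # crude heuristic

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=12, n_fft=frame, hop_length=hop)
m = min(mfcc.shape[1], unvoiced.size)
unvoiced_mfcc = mfcc[:, :m][:, unvoiced[:m]]

# Per-recording statistics of these coefficients would feed a classifier (e.g. an SVM).
features = np.concatenate([unvoiced_mfcc.mean(axis=1), unvoiced_mfcc.std(axis=1)])
print("feature vector length:", features.size)
```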

  1. Complex-valued Multidirectional Associative Memory

    NASA Astrophysics Data System (ADS)

    Kobayashi, Masaki; Yamazaki, Haruaki

    The Hopfield model is a representative associative memory. It was extended to the Bidirectional Associative Memory (BAM) by Kosko and to the Multidirectional Associative Memory (MAM) by Hagiwara. These models have two layers or multiple layers. Since they have symmetric connections between layers, they are guaranteed to converge. MAM can deal with multiples of many patterns, such as (x1, x2,…), where xm is the pattern on layer-m. Noest, Hirose and Nemoto proposed the complex-valued Hopfield model. Lee proposed a complex-valued Bidirectional Associative Memory. Zemel proved the rotation invariance of the complex-valued Hopfield model, which means that rotated patterns are also stored. In this paper, a complex-valued Multidirectional Associative Memory is proposed and its rotation invariance is proved. Moreover, it is shown by computer simulation that the differences between the angles of given patterns are automatically reduced. We first define the complex-valued Multidirectional Associative Memory and the energy function of the network, and use the energy function to prove that the network is guaranteed to converge. Next, we define the learning law and show a characteristic of the recall process, namely that the differences between the angles of given patterns are automatically reduced. In particular, we prove the following theorem: in the case that only one multiple of patterns is stored, if patterns with different angles are given to the layers, the differences are automatically reduced. Finally, we investigate how these angle differences influence noise robustness. They reduce noise robustness because the input to each layer becomes small. We show this by computer simulations.

  2. Patient-specific and global convolutional neural networks for robust automatic liver tumor delineation in follow-up CT studies.

    PubMed

    Vivanti, Refael; Joskowicz, Leo; Lev-Cohain, Naama; Ephrat, Ariel; Sosna, Jacob

    2018-03-10

    Radiological longitudinal follow-up of tumors in CT scans is essential for disease assessment and liver tumor therapy. Currently, most tumor size measurements follow the RECIST guidelines, which can be off by as much as 50%. True volumetric measurements are more accurate but require manual delineation, which is time-consuming and user-dependent. We present a convolutional neural network (CNN)-based method for robust automatic liver tumor delineation in longitudinal CT studies that uses both global and patient-specific CNNs trained on a small database of delineated images. The inputs are the baseline scan and the tumor delineation, a follow-up scan, and a liver tumor global CNN voxel classifier built from radiologist-validated liver tumor delineations. The outputs are the tumor delineations in the follow-up CT scan. The baseline scan tumor delineation serves as a high-quality prior for the tumor characterization in the follow-up scans. It is used to evaluate the global CNN performance on the new case and to reliably predict failures of the global CNN on the follow-up scan. High-scoring cases are segmented with a global CNN; low-scoring cases, which are predicted to be failures of the global CNN, are segmented with a patient-specific CNN built from the baseline scan. Our experimental results on 222 tumors from 31 patients yield an average overlap error of 17% (std = 11.2) and surface distance of 2.1 mm (std = 1.8), far better than stand-alone segmentation. Importantly, the robustness of our method improved from 67% for stand-alone global CNN segmentation to 100%. Unlike other medical imaging deep learning approaches, which require large annotated training datasets, our method exploits the follow-up framework to yield accurate tumor tracking and failure detection and correction with a small training dataset. Graphical abstract: Flow diagram of the proposed method. In the offline mode (orange), a global CNN is trained as a voxel classifier to segment liver tumors as in [31]. The online mode (blue) is used for each new case. The input is the baseline scan with its delineation and the follow-up CT scan to be segmented. The main novelty is the ability to predict failures by trying the system on the baseline scan and the ability to correct them using the patient-specific CNN.

  3. ATMAD: robust image analysis for Automatic Tissue MicroArray De-arraying.

    PubMed

    Nguyen, Hoai Nam; Paveau, Vincent; Cauchois, Cyril; Kervrann, Charles

    2018-04-19

    Over the last two decades, an innovative technology called Tissue Microarray (TMA), which combines multi-tissue and DNA microarray concepts, has been widely used in the field of histology. It consists of a collection of several (up to 1000 or more) tissue samples that are assembled onto a single support - typically a glass slide - according to a design grid (array) layout, in order to allow multiplex analysis by treating numerous samples under identical and standardized conditions. However, during the TMA manufacturing process, the sample positions can be highly distorted from the design grid due to the imprecision when assembling tissue samples and the deformation of the embedding waxes. Consequently, these distortions may lead to severe errors of (histological) assay results when the sample identities are mismatched between the design and its manufactured output. The development of a robust method for de-arraying TMA, which localizes and matches TMA samples with their design grid, is therefore crucial to overcome the bottleneck of this prominent technology. In this paper, we propose an Automatic, fast and robust TMA De-arraying (ATMAD) approach dedicated to images acquired with brightfield and fluorescence microscopes (or scanners). First, tissue samples are localized in the large image by applying a locally adaptive thresholding on the isotropic wavelet transform of the input TMA image. To reduce false detections, a parametric shape model is considered for segmenting ellipse-shaped objects at each detected position. Segmented objects that do not meet the size and the roundness criteria are discarded from the list of tissue samples before being matched with the design grid. Sample matching is performed by estimating the TMA grid deformation under the thin-plate model. Finally, thanks to the estimated deformation, the true tissue samples that were preliminarily rejected in the early image processing step are recognized by running a second segmentation step. We developed a novel de-arraying approach for TMA analysis. By combining wavelet-based detection, active contour segmentation, and thin-plate spline interpolation, our approach is able to handle TMA images with high dynamic range, poor signal-to-noise ratio, complex background and non-linear deformation of the TMA grid. In addition, the deformation estimation produces quantitative information to assess the manufacturing quality of TMAs.
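
    The grid-deformation step can be illustrated with a thin-plate-spline mapping: given a few confidently matched (design-grid, detected) core positions, the spline predicts where the remaining design positions should fall in the image. The coordinates below are invented, and this is only a sketch of the interpolation step, not the full ATMAD pipeline.

```python
# Thin-plate-spline mapping from design-grid coordinates to detected positions.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Design-grid coordinates (columns, rows) of confidently matched cores...
design = np.array([[0, 0], [4, 0], [0, 4], [4, 4], [2, 2]], dtype=float)
# ...and the pixel positions where those cores were actually detected.
detected = np.array([[102, 98], [501, 95], [99, 507], [505, 512], [303, 301]], dtype=float)

tps = RBFInterpolator(design, detected, kernel="thin_plate_spline")

# Predict image positions for design positions that were not matched yet.
missing = np.array([[1, 3], [3, 1]], dtype=float)
print(tps(missing))
```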

  4. Auto-tracking system for human lumbar motion analysis.

    PubMed

    Sui, Fuge; Zhang, Da; Lam, Shing Chun Benny; Zhao, Lifeng; Wang, Dongjun; Bi, Zhenggang; Hu, Yong

    2011-01-01

    Previous lumbar motion analyses suggest the usefulness of quantitatively characterizing spine motion. However, the application of such measurements is still limited by the lack of user-friendly automatic spine motion analysis systems. This paper describes an automatic analysis system to measure lumbar spine disorders that consists of a spine motion guidance device, an X-ray imaging modality to acquire digitized video fluoroscopy (DVF) sequences and an automated tracking module with a graphical user interface (GUI). DVF sequences of the lumbar spine are recorded during flexion-extension under a guidance device. The automatic tracking software utilizing a particle filter locates the vertebra-of-interest in every frame of the sequence, and the tracking result is displayed on the GUI. Kinematic parameters are also extracted from the tracking results for motion analysis. We observed that, in a bone model test, the maximum fiducial error was 3.7%, and the maximum repeatability error in translation and rotation was 1.2% and 2.6%, respectively. In our simulated DVF sequence study, the automatic tracking was not successful when the noise intensity was greater than 0.50. In a noisy situation, the maximal difference was 1.3 mm in translation and 1° in the rotation angle. The errors were calculated in translation (fiducial error: 2.4%, repeatability error: 0.5%) and in the rotation angle (fiducial error: 1.0%, repeatability error: 0.7%). However, the automatic tracking software could successfully track simulated sequences contaminated by noise at a density ≤ 0.5 with very high accuracy, providing good reliability and robustness. A clinical trial enrolling 10 healthy subjects and 2 lumbar spondylolisthesis patients was conducted in this study. The measurement with automatic tracking of DVF provided some information not seen in conventional X-ray images. The results suggest the potential of the proposed system for clinical applications.

  5. Acoustic emission source location in complex structures using full automatic delta T mapping technique

    NASA Astrophysics Data System (ADS)

    Al-Jumaili, Safaa Kh.; Pearson, Matthew R.; Holford, Karen M.; Eaton, Mark J.; Pullin, Rhys

    2016-05-01

    An easy to use, fast to apply, cost-effective, and very accurate non-destructive testing (NDT) technique for damage localisation in complex structures is key for the uptake of structural health monitoring systems (SHM). Acoustic emission (AE) is a viable technique that can be used for SHM and one of the most attractive features is the ability to locate AE sources. The time of arrival (TOA) technique is traditionally used to locate AE sources, and relies on the assumption of constant wave speed within the material and uninterrupted propagation path between the source and the sensor. In complex structural geometries and complex materials such as composites, this assumption is no longer valid. Delta T mapping was developed in Cardiff in order to overcome these limitations; this technique uses artificial sources on an area of interest to create training maps. These are used to locate subsequent AE sources. However operator expertise is required to select the best data from the training maps and to choose the correct parameter to locate the sources, which can be a time consuming process. This paper presents a new and improved fully automatic delta T mapping technique where a clustering algorithm is used to automatically identify and select the highly correlated events at each grid point whilst the "Minimum Difference" approach is used to determine the source location. This removes the requirement for operator expertise, saving time and preventing human errors. A thorough assessment is conducted to evaluate the performance and the robustness of the new technique. In the initial test, the results showed excellent reduction in running time as well as improved accuracy of locating AE sources, as a result of the automatic selection of the training data. Furthermore, because the process is performed automatically, this is now a very simple and reliable technique due to the prevention of the potential source of error related to manual manipulation.

  6. Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets.

    PubMed

    Hu, Peijun; Wu, Fa; Peng, Jialin; Bao, Yuanyuan; Chen, Feng; Kong, Dexing

    2017-03-01

    Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. Therefore, we propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, which is further refined by a time-implicit multi-phase evolution method. Firstly, a 3D CNN is trained to automatically localize and delineate the organs of interest with a probability prediction map. The learned probability map provides both subject-specific spatial priors and initialization for subsequent fine segmentation. Then, for the refinement of the multi-organ segmentation, image intensity models, probability priors as well as a disjoint region constraint are incorporated into a unified energy functional. Finally, a novel time-implicit multi-phase level-set algorithm is utilized to efficiently optimize the proposed energy functional model. Our method has been evaluated on 140 abdominal CT scans for the segmentation of four organs (liver, spleen and both kidneys). With respect to the ground truth, average Dice overlap ratios for the liver, spleen and both kidneys are 96.0, 94.2 and 95.4%, respectively, and the average symmetric surface distance is less than 1.3 mm for all the segmented organs. The computation time for a CT volume is 125 s on average. The achieved accuracy compares well with state-of-the-art methods, with much higher efficiency. A fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated. The results demonstrated its potential in clinical usage with high effectiveness, robustness and efficiency.

  7. Evaluation of an Enhanced Stimulus-Stimulus Pairing Procedure to Increase Early Vocalizations of Children with Autism

    ERIC Educational Resources Information Center

    Esch, Barbara E.; Carr, James E.; Grow, Laura L.

    2009-01-01

    Evidence to support stimulus-stimulus pairing (SSP) in speech acquisition is less than robust, calling into question the ability of SSP to reliably establish automatically reinforcing properties of speech and limiting the procedure's clinical utility for increasing vocalizations. We evaluated the effects of a modified SSP procedure on…

  8. Contingency Software in Autonomous Systems: Technical Level Briefing

    NASA Technical Reports Server (NTRS)

    Lutz, Robyn R.; Patterson-Hines, Ann

    2006-01-01

    Contingency management is essential to the robust operation of complex systems such as spacecraft and Unpiloted Aerial Vehicles (UAVs). Automatic contingency handling allows a faster response to unsafe scenarios with reduced human intervention on low-cost and extended missions. Results, applied to the Autonomous Rotorcraft Project and Mars Science Lab, pave the way to more resilient autonomous systems.

  9. Optimized swimmer tracking system based on a novel multi-related-targets approach

    NASA Astrophysics Data System (ADS)

    Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.

    2017-02-01

    Robust tracking is a crucial step in automatic swimmer evaluation from video sequences. We designed a robust swimmer tracking system using a new multi-related-targets approach. The main idea is to consider the swimmer as a block of connected subtargets that advance at the same speed. If one of the subtargets is partially or totally occluded, it can be localized by knowing the position of the others. In this paper, we first introduce the two-dimensional direct linear transformation technique that we used to calibrate the videos. Then, we present the classical tracking approach based on dynamic fusion. Next, we highlight the main contribution of our work, which is the multi-related-targets tracking approach. This approach, the classical head-only approach and the ground truth are then compared, through testing on a database of high-level swimmers in training, national and international competitions (French National Championships, Limoges 2015, and World Championships, Kazan 2015). Tracking percentage and the accuracy of the instantaneous speed are evaluated, and the findings show that our new approach is significantly more accurate than the classical approach.
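
    The two-dimensional direct linear transformation mentioned above amounts to estimating a plane-to-plane homography from at least four point correspondences. A minimal sketch (the coordinates below are illustrative, not the calibration data used in the paper):

        # Two-dimensional DLT: homography mapping pool-plane coordinates (metres)
        # to image coordinates (pixels) from >= 4 point correspondences.
        import numpy as np

        def dlt_homography(world_xy, image_xy):
            A = []
            for (X, Y), (u, v) in zip(world_xy, image_xy):
                A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
                A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
            _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
            H = vt[-1].reshape(3, 3)        # null-space vector = homography entries
            return H / H[2, 2]

        def world_to_image(H, X, Y):
            p = H @ np.array([X, Y, 1.0])
            return p[:2] / p[2]

        # Illustrative correspondences (lane markings at known pool positions).
        world = [(0, 0), (25, 0), (25, 2.5), (0, 2.5)]
        image = [(102, 540), (1700, 515), (1685, 610), (95, 640)]
        H = dlt_homography(world, image)
        print(world_to_image(H, 12.5, 1.25))   # image position of the lane mid-point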

  10. SparCLeS: dynamic l₁ sparse classifiers with level sets for robust beard/moustache detection and segmentation.

    PubMed

    Le, T Hoang Ngan; Luu, Khoa; Savvides, Marios

    2013-08-01

    Robust facial hair detection and segmentation is a highly valued soft biometric attribute for carrying out forensic facial analysis. In this paper, we propose a novel and fully automatic system, called SparCLeS, for beard/moustache detection and segmentation in challenging facial images. SparCLeS uses the multiscale self-quotient (MSQ) algorithm to preprocess facial images and deal with illumination variation. Histogram of oriented gradients (HOG) features are extracted from the preprocessed images and a dynamic sparse classifier is built using these features to classify a facial region as either containing skin or facial hair. A level set based approach, which makes use of the advantages of both global and local information, is then used to segment the regions of a face containing facial hair. Experimental results demonstrate the effectiveness of our proposed system in detecting and segmenting facial hair regions in images drawn from three databases, i.e., the NIST Multiple Biometric Grand Challenge (MBGC) still face database, the NIST Color Facial Recognition Technology FERET database, and the Labeled Faces in the Wild (LFW) database.

  11. Automatic remote monitoring utilizing daily transmissions: transmission reliability and implantable cardioverter defibrillator battery longevity in the TRUST trial.

    PubMed

    Varma, Niraj; Love, Charles J; Schweikert, Robert; Moll, Philip; Michalski, Justin; Epstein, Andrew E

    2018-04-01

    Benefits of automatic remote home monitoring (HM) among implantable cardioverter defibrillator (ICD) patients may require high transmission frequency. However, transmission reliability and effects on battery longevity remain uncertain. We hypothesized that HM would have high transmission success permitting punctual guideline based follow-up, and improve battery longevity. This was tested in the prospective randomized TRUST trial. Implantable cardioverter defibrillator patients were randomized post-implant 2:1 to HM (n = 908) (transmit daily) or to Conventional in-person monitoring [conventional management (CM), n = 431 (HM disabled)]. In both groups, five evaluations were scheduled every 3 months for 15 months. Home Monitoring technology performance was assessed by transmissions received vs. total possible, and number of scheduled HM checks failing because of missed transmissions. Battery longevity was compared in HM vs. CM at 15 months, and again in HM 3 years post-implant using continuously transmitted data. Transmission success per patient was 91% (median follow-up of 434 days). Overall, daily HM transmissions were received in 315 795 of a potential 363 450 days (87%). Only 55/3759 (1.46%) of unsuccessful scheduled evaluations in HM were attributed to transmission loss. Shock frequency and pacing percentage were similar in HM vs. CM. Fifteen month battery longevity was 12% greater in HM (93.2 ± 8.8% vs. 83.5 ± 6.0% CM, P < 0.001). In extended follow-up of HM patients, estimated battery longevity was 50.9 ± 9.1% (median 52%) at 36 months. Automatic remote HM demonstrated robust transmission reliability. Daily transmission load may be sustained without reducing battery longevity. Home Monitoring conserves battery longevity and tracks long term device performance. ClinicalTrials.gov; NCT00336284.

  12. An efficient robust sound classification algorithm for hearing aids.

    PubMed

    Nordqvist, Peter; Leijon, Arne

    2004-06-01

    An efficient robust sound classification algorithm based on hidden Markov models is presented. The system would enable a hearing aid to automatically change its behavior for differing listening environments according to the user's preferences. This work attempts to distinguish between three listening environment categories: speech in traffic noise, speech in babble, and clean speech, regardless of the signal-to-noise ratio. The classifier uses only the modulation characteristics of the signal. The classifier ignores the absolute sound pressure level and the absolute spectrum shape, resulting in an algorithm that is robust against irrelevant acoustic variations. The measured classification hit rate was 96.7%-99.5% when the classifier was tested with sounds representing one of the three environment categories included in the classifier. False-alarm rates were 0.2%-1.7% in these tests. The algorithm is robust and efficient, requiring few instructions and little memory. It is fully possible to implement the classifier in a DSP-based hearing instrument.
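
    A minimal sketch of the classification scheme, assuming the hmmlearn package and pre-computed modulation-style feature sequences (the real feature set and model structure are more elaborate): one Gaussian HMM is trained per listening environment, and an incoming segment is assigned to the model with the highest log-likelihood.

        # One hidden Markov model per listening environment; classify by likelihood.
        import numpy as np
        from hmmlearn import hmm

        def train_models(features_per_class, n_states=3):
            # features_per_class: dict mapping environment name -> (T, D) feature array
            models = {}
            for name, X in features_per_class.items():
                m = hmm.GaussianHMM(n_components=n_states, covariance_type='diag', n_iter=50)
                m.fit(X)
                models[name] = m
            return models

        def classify(models, X):
            # X: (T, D) feature sequence of an unknown sound segment
            return max(models, key=lambda name: models[name].score(X))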

  13. Morphological change in machines accelerates the evolution of robust behavior

    PubMed Central

    Bongard, Josh

    2011-01-01

    Most animals exhibit significant neurological and morphological change throughout their lifetime. No robots to date, however, grow new morphological structure while behaving. This is due to technological limitations but also because it is unclear that morphological change provides a benefit to the acquisition of robust behavior in machines. Here I show that in evolving populations of simulated robots, if robots grow from anguilliform into legged robots during their lifetime in the early stages of evolution, and the anguilliform body plan is gradually lost during later stages of evolution, gaits are evolved for the final, legged form of the robot more rapidly—and the evolved gaits are more robust—compared to evolving populations of legged robots that do not transition through the anguilliform body plan. This suggests that morphological change, as well as the evolution of development, are two important processes that improve the automatic generation of robust behaviors for machines. It also provides an experimental platform for investigating the relationship between the evolution of development and robust behavior in biological organisms. PMID:21220304

  14. A dynamically adaptive multigrid algorithm for the incompressible Navier-Stokes equations: Validation and model problems

    NASA Technical Reports Server (NTRS)

    Thompson, C. P.; Leaf, G. K.; Vanrosendale, J.

    1991-01-01

    An algorithm is described for the solution of the laminar, incompressible Navier-Stokes equations. The basic algorithm is a multigrid based on a robust, box-based smoothing step. Its most important feature is the incorporation of automatic, dynamic mesh refinement. This algorithm supports generalized simple domains. The program is based on a standard staggered-grid formulation of the Navier-Stokes equations for robustness and efficiency. Special grid transfer operators were introduced at grid interfaces in the multigrid algorithm to ensure discrete mass conservation. Results are presented for three models: the driven-cavity, a backward-facing step, and a sudden expansion/contraction.

  15. Robust Mosaicking of Stereo Digital Elevation Models from the Ames Stereo Pipeline

    NASA Technical Reports Server (NTRS)

    Kim, Tae Min; Moratto, Zachary M.; Nefian, Ara Victor

    2010-01-01

    A robust estimation method is proposed to combine multiple observations and create consistent, accurate, dense Digital Elevation Models (DEMs) from lunar orbital imagery. The NASA Ames Intelligent Robotics Group (IRG) aims to produce higher-quality terrain reconstructions of the Moon from Apollo Metric Camera (AMC) data than is currently possible. In particular, IRG makes use of a stereo vision process, the Ames Stereo Pipeline (ASP), to automatically generate DEMs from consecutive AMC image pairs. However, the DEMs currently produced by the ASP often contain errors and inconsistencies due to image noise, shadows, etc. The proposed method addresses this problem by making use of multiple observations and by considering their goodness of fit to improve both the accuracy and robustness of the estimate. The stepwise regression method is applied to estimate the relaxed weight of each observation.
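
    The following is a small sketch of the general idea of robustly fusing multiple co-registered DEM observations per pixel, using iteratively reweighted least squares with Huber-type weights; it is illustrative only and not the Ames Stereo Pipeline mosaicking code.

        # Robust per-pixel fusion of stacked, co-registered DEMs (H, W, K) via IRLS.
        import numpy as np

        def robust_mosaic(dem_stack, n_iter=10, k=1.345):
            est = np.nanmedian(dem_stack, axis=-1)              # robust initial estimate
            for _ in range(n_iter):
                r = dem_stack - est[..., None]                  # residual of each observation
                scale = 1.4826 * np.nanmedian(np.abs(r), axis=-1, keepdims=True) + 1e-6
                u = np.abs(r) / (k * scale)
                w = np.where(u <= 1.0, 1.0, 1.0 / u)            # Huber-type down-weighting
                w = np.where(np.isnan(r), 0.0, w)               # missing observations get zero weight
                est = np.nansum(w * dem_stack, axis=-1) / (w.sum(axis=-1) + 1e-12)
            return est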

  16. Automatic picker of P & S first arrivals and robust event locator

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Polozov, A.; Hofstetter, A.

    2003-12-01

    We report on further development of an automatic all-distance location procedure designed for a regional network. The procedure generalizes the previous "local" (R < 500 km) and "regional" (500 < R < 2000 km) routines and comprises: a) preliminary data processing (filtering and de-spiking), b) phase identification, c) P and S first-arrival picking, d) preliminary location and e) a robust grid-search optimization procedure. Innovations concern phase identification, automatic picking and teleseismic location. A platform-free, flexible Java interface was recently created, allowing easy parameter tuning and on/off switching to the full-scale manual picking mode. Identification of the regional P and S phases is provided by choosing between the two largest peaks in the envelope curve. For automatic onset-time estimation we now utilize the ratio of two STAs, calculated in two consecutive and equal time windows (instead of the previously used Akaike Information Criterion). Teleseismic location is split into two stages: a preliminary and a final one. The preliminary part estimates azimuth and apparent velocity by fitting a plane wave to the automatic P pickings. The apparent velocity criterion is used to decide the strategy of the following computations: teleseismic or regional. The preliminary estimates of azimuth and apparent velocity provide starting values for the final teleseismic and regional location. Apparent velocity is used to obtain a first-approximation distance to the source on the basis of the P, Pn, Pg travel-time tables. The distance estimate, together with the preliminary azimuth estimate, provides first approximations of the source latitude and longitude via the sine and cosine theorems formulated for the spherical triangle. Final location is based on a robust grid-search optimization procedure, weighting the number of pickings that simultaneously fit the model travel times. The grid covers the initial location and becomes finer while approaching the true hypocenter. The target function is a sum of bell-shaped characteristic functions, used to emphasize true pickings and eliminate outliers. The final solution is the grid point that maximizes the target function. The procedure was applied to a list of ML > 4 earthquakes recorded by the Israel Seismic Network (ISN) in the 1999-2002 time period. Most of them are badly constrained relative to the network. Nevertheless, locations with an average normalized error relative to bulletin solutions, e = dr/R, of 5% were obtained in each of the distance ranges. The first version of the procedure was incorporated in the national Early Warning System in 2001. Recently, we started to send automatic Early Warning reports to the EMSC Real Time Bulletin. Some initially reported teleseismic location discrepancies have been eliminated by the introduction of station corrections.
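
    A toy sketch of the final grid-search stage, assuming a constant-velocity half-space and NumPy: the target function sums bell-shaped (here Gaussian) characteristic functions of the travel-time residuals, so outlier picks contribute almost nothing to the score.

        # Grid-search epicentre location with a bell-shaped target function (illustrative).
        import numpy as np

        def locate(station_xy, picks_t, v=6.0, sigma=0.5, grid_step=1.0, extent=200.0):
            xs = np.arange(-extent, extent, grid_step)
            ys = np.arange(-extent, extent, grid_step)
            best, best_score = None, -np.inf
            for x in xs:
                for y in ys:
                    dist = np.hypot(station_xy[:, 0] - x, station_xy[:, 1] - y)
                    t0 = np.median(picks_t - dist / v)          # robust origin-time estimate
                    res = picks_t - (t0 + dist / v)             # travel-time residuals
                    score = np.exp(-(res / sigma) ** 2).sum()   # bell-shaped characteristic functions
                    if score > best_score:
                        best, best_score = (x, y, t0), score
            return best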

  17. Control concepts for the alleviation of windshears and gusts

    NASA Technical Reports Server (NTRS)

    Rynaski, E. G.; Govindaraj, K. S.

    1982-01-01

    Automatic control system design methods for gust and shear alleviation were studied. It is shown that automatic gust/shear alleviation systems can be quite effective if both throttle and elevator are used in harmony to produce the forces and moments required to counter the effects of the windshear. Regulation with respect to ground speed or airspeed results in very similar system designs. The application of the NASA total energy probe in the detection of windshear and criteria for alleviation is considered. The theory and application of robust output observers is extended. Design examples show how implementation of the control laws can be accomplished using observers, and thereby resulting in less complex control system configurations.

  18. Designed tools for analysis of lithography patterns and nanostructures

    NASA Astrophysics Data System (ADS)

    Dervillé, Alexandre; Baderot, Julien; Bernard, Guilhem; Foucher, Johann; Grönqvist, Hanna; Labrosse, Aurélien; Martinez, Sergio; Zimmermann, Yann

    2017-03-01

    We introduce a set of designed tools for the analysis of lithography patterns and nano structures. The classical metrological analysis of these objects has the drawbacks of being time consuming, requiring manual tuning and lacking robustness and user friendliness. With the goal of improving the current situation, we propose new image processing tools at different levels: semi automatic, automatic and machine-learning enhanced tools. The complete set of tools has been integrated into a software platform designed to transform the lab into a virtual fab. The underlying idea is to master nano processes at the research and development level by accelerating the access to knowledge and hence speed up the implementation in product lines.

  19. Robust autofocus algorithm for ISAR imaging of moving targets

    NASA Astrophysics Data System (ADS)

    Li, Jian; Wu, Renbiao; Chen, Victor C.

    2000-08-01

    A robust autofocus approach, referred to as AUTOCLEAN (AUTOfocus via CLEAN), is proposed for motion compensation in ISAR (inverse synthetic aperture radar) imaging of moving targets. It is a parametric algorithm based on a very flexible data model which takes into account arbitrary range migration and arbitrary phase errors across the synthetic aperture that may be induced by unwanted radial motion of the target as well as propagation or system instability. AUTOCLEAN can be classified as a multiple scatterer algorithm (MSA), but it differs considerably from other existing MSAs in several aspects: (1) dominant scatterers are selected automatically in the two-dimensional (2-D) image domain; (2) scatterers may not be well-isolated or very dominant; (3) phase and RCS (radar cross section) information from each selected scatterer are combined in an optimal way; (4) the troublesome phase unwrapping step is avoided. AUTOCLEAN is computationally efficient and involves only a sequence of FFTs (fast Fourier transforms). Another good feature of AUTOCLEAN is that its performance can be progressively improved by assuming a larger number of dominant scatterers for the target. Hence it can be easily configured for real-time applications, for example ATR (automatic target recognition) of non-cooperative moving targets, as well as for applications where image quality, rather than computation time, is the main concern, such as the development and maintenance of low-observable aircraft. Numerical and experimental results have shown that AUTOCLEAN is a very robust autofocus tool for ISAR imaging.

  20. The emotional impact of being myself: Emotions and foreign-language processing.

    PubMed

    Ivaz, Lela; Costa, Albert; Duñabeitia, Jon Andoni

    2016-03-01

    Native languages are acquired in emotionally rich contexts, whereas foreign languages are typically acquired in emotionally neutral academic environments. As a consequence of this difference, it has been suggested that bilinguals' emotional reactivity in foreign-language contexts is reduced as compared with native language contexts. In the current study, we investigated whether this emotional distance associated with foreign languages could modulate automatic responses to self-related linguistic stimuli. Self-related stimuli enhance performance by boosting memory, speed, and accuracy as compared with stimuli unrelated to the self (the so-called self-bias effect). We explored whether this effect depends on the language context by comparing self-biases in a native and a foreign language. Two experiments were conducted with native Spanish speakers with a high level of English proficiency in which they were asked to complete a perceptual matching task during which they associated simple geometric shapes (circles, squares, and triangles) with the labels "you," "friend," and "other" either in their native or foreign language. Results showed a robust asymmetry in the self-bias in the native- and foreign-language contexts: A larger self-bias was found in the native than in the foreign language. An additional control experiment demonstrated that the same materials administered to a group of native English speakers yielded robust self-bias effects that were comparable in magnitude to the ones obtained with the Spanish speakers when tested in their native language (but not in their foreign language). We suggest that the emotional distance evoked by the foreign-language contexts caused these differential effects across language contexts. These results demonstrate that the foreign-language effects are pervasive enough to affect automatic stages of emotional processing. (c) 2016 APA, all rights reserved).

  1. 14CO2 processing using an improved and robust molecular sieve cartridge

    NASA Astrophysics Data System (ADS)

    Wotte, Anja; Wordell-Dietrich, Patrick; Wacker, Lukas; Don, Axel; Rethemeyer, Janet

    2017-06-01

    Radiocarbon (14C) analysis on CO2 can provide valuable information on the carbon cycle as different carbon pools differ in their 14C signature. While fresh, biogenic carbon shows atmospheric 14C concentrations, fossil carbon is 14C free. As shown in previous studies, CO2 can be collected for 14C analysis using molecular sieve cartridges (MSC). These devices have previously been made of plastic and glass, which can easily be damaged during transport. We thus constructed a robust MSC suitable for field application under tough conditions or in remote areas, which is entirely made of stainless steel. The new MSC should also be tight over several months to allow long sampling campaigns and transport times, which was proven by a one year storage test. The reliability of the 14CO2 results obtained with the MSC was evaluated by detailed tests of different procedures to clean the molecular sieve (zeolite type 13X) and for the adsorption and desorption of CO2 from the zeolite using a vacuum rig. We show that the 14CO2 results are not affected by any contamination of modern or fossil origin, cross contamination from previous samples, and by carbon isotopic fractionation. In addition, we evaluated the direct CO2 transfer from the MSC into the automatic graphitization equipment AGE with the subsequent 14C AMS analysis as graphite. This semi-automatic approach can be fully automated in the future, which would allow a high sample throughput. We obtained very promising, low blank values between 0.0018 and 0.0028 F14C (equivalent to 50,800 and 47,200 yrs BP), which are within the analytical background and lower than results obtained in previous studies.

  2. Automatic streak endpoint localization from the cornerness metric

    NASA Astrophysics Data System (ADS)

    Sease, Brad; Flewelling, Brien; Black, Jonathan

    2017-05-01

    Streaked point sources are a common occurrence when imaging unresolved space objects from both ground- and space-based platforms. Effective localization of streak endpoints is a key component of traditional techniques in space situational awareness related to orbit estimation and attitude determination. To further that goal, this paper derives a general detection and localization method for streak endpoints based on the cornerness metric. Corner detection involves searching an image for strong bi-directional gradients. These locations typically correspond to robust structural features in an image. In the case of unresolved imagery, regions with a high cornerness score correspond directly to the endpoints of streaks. This paper explores three approaches for global extraction of streak endpoints and applies them to an attitude and rate estimation routine.
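
    As a rough illustration of the cornerness idea (using OpenCV's Harris response rather than the exact metric and extraction schemes studied in the paper), the strongest responses of the cornerness map can be taken as candidate streak endpoints:

        # Cornerness-based endpoint candidates for a streaked star image (illustrative).
        import cv2
        import numpy as np

        def streak_endpoint_candidates(image_u8, n_candidates=50):
            gray = np.float32(image_u8)
            cornerness = cv2.cornerHarris(gray, blockSize=5, ksize=3, k=0.04)
            cornerness = cv2.dilate(cornerness, None)      # spread local maxima slightly
            flat = cornerness.ravel()
            idx = np.argpartition(flat, -n_candidates)[-n_candidates:]
            ys, xs = np.unravel_index(idx, cornerness.shape)
            return np.column_stack([xs, ys]), cornerness   # (x, y) candidates and the raw map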

  3. Automated imaging system for single molecules

    DOEpatents

    Schwartz, David Charles; Runnheim, Rodney; Forrest, Daniel

    2012-09-18

    There is provided a high throughput automated single molecule image collection and processing system that requires minimal initial user input. The unique features embodied in the present disclosure allow automated collection and initial processing of optical images of single molecules and their assemblies. Correct focus may be automatically maintained while images are collected. Uneven illumination in fluorescence microscopy is accounted for, and an overall robust imaging operation is provided yielding individual images prepared for further processing in external systems. Embodiments described herein are useful in studies of any macromolecules such as DNA, RNA, peptides and proteins. The automated image collection and processing system and method of same may be implemented and deployed over a computer network, and may be ergonomically optimized to facilitate user interaction.

  4. Genotyping in the cloud with Crossbow.

    PubMed

    Gurtowski, James; Schatz, Michael C; Langmead, Ben

    2012-09-01

    Crossbow is a scalable, portable, and automatic cloud computing tool for identifying SNPs from high-coverage, short-read resequencing data. It is built on Apache Hadoop, an implementation of the MapReduce software framework. Hadoop allows Crossbow to distribute read alignment and SNP calling subtasks over a cluster of commodity computers. Two robust tools, Bowtie and SOAPsnp, implement the fundamental alignment and variant calling operations respectively, and have demonstrated capabilities within Crossbow of analyzing approximately one billion short reads per hour on a commodity Hadoop cluster with 320 cores. Through protocol examples, this unit will demonstrate the use of Crossbow for identifying variations in three different operating modes: on a Hadoop cluster, on a single computer, and on the Amazon Elastic MapReduce cloud computing service.

  5. TnpPred: A Web Service for the Robust Prediction of Prokaryotic Transposases

    PubMed Central

    Riadi, Gonzalo; Medina-Moenne, Cristobal; Holmes, David S.

    2012-01-01

    Transposases (Tnps) are enzymes that participate in the movement of insertion sequences (ISs) within and between genomes. Genes that encode Tnps are amongst the most abundant and widely distributed genes in nature. However, they are difficult to predict bioinformatically and given the increasing availability of prokaryotic genomes and metagenomes, it is incumbent to develop rapid, high quality automatic annotation of ISs. This need prompted us to develop a web service, termed TnpPred for Tnp discovery. It provides better sensitivity and specificity for Tnp predictions than given by currently available programs as determined by ROC analysis. TnpPred should be useful for improving genome annotation. The TnpPred web service is freely available for noncommercial use. PMID:23251097

  6. A new Lagrangian method for three-dimensional steady supersonic flows

    NASA Technical Reports Server (NTRS)

    Loh, Ching-Yuen; Liou, Meng-Sing

    1993-01-01

    In this report, the new Lagrangian method introduced by Loh and Hui is extended for three-dimensional, steady supersonic flow computation. The derivation of the conservation form and the solution of the local Riemann solver using the Godunov and the high-resolution TVD (total variation diminished) scheme is presented. This new approach is accurate and robust, capable of handling complicated geometry and interactions between discontinuous waves. Test problems show that the extended Lagrangian method retains all the advantages of the two-dimensional method (e.g., crisp resolution of a slip-surface (contact discontinuity) and automatic grid generation). In this report, we also suggest a novel three dimensional Riemann problem in which interesting and intricate flow features are present.

  7. Four-chamber heart modeling and automatic segmentation for 3-D cardiac CT volumes using marginal space learning and steerable features.

    PubMed

    Zheng, Yefeng; Barbu, Adrian; Georgescu, Bogdan; Scheuering, Michael; Comaniciu, Dorin

    2008-11-01

    We propose an automatic four-chamber heart segmentation system for the quantitative functional analysis of the heart from cardiac computed tomography (CT) volumes. Two topics are discussed: heart modeling and automatic model fitting to an unseen volume. Heart modeling is a nontrivial task since the heart is a complex nonrigid organ. The model must be anatomically accurate, allow manual editing, and provide sufficient information to guide automatic detection and segmentation. Unlike previous work, we explicitly represent important landmarks (such as the valves and the ventricular septum cusps) among the control points of the model. The control points can be detected reliably to guide the automatic model fitting process. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3-D CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. In both steps, we exploit the recent advances in learning discriminative models. A novel algorithm, marginal space learning (MSL), is introduced to solve the 9-D similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3-D shape through learning-based boundary delineation. The proposed method has been extensively tested on the largest dataset (with 323 volumes from 137 patients) ever reported in the literature. To the best of our knowledge, our system is the fastest with a speed of 4.0 s per volume (on a dual-core 3.2-GHz processor) for the automatic segmentation of all four chambers.

  8. On the feasibility of automatically selecting similar patients in highly individualized radiotherapy dose reconstruction for historic data of pediatric cancer survivors.

    PubMed

    Virgolin, Marco; van Dijk, Irma W E M; Wiersma, Jan; Ronckers, Cécile M; Witteveen, Cees; Bel, Arjan; Alderliesten, Tanja; Bosman, Peter A N

    2018-04-01

    The aim of this study is to establish the first step toward a novel and highly individualized three-dimensional (3D) dose distribution reconstruction method, based on CT scans and organ delineations of recently treated patients. Specifically, the feasibility of automatically selecting the CT scan of a recently treated childhood cancer patient who is similar to a given historically treated child who suffered from Wilms' tumor is assessed. A cohort of 37 recently treated children between 2 and 6 yr old is considered. Five potential notions of ground-truth similarity are proposed, each focusing on different anatomical aspects. These notions are automatically computed from CT scans of the abdomen and 3D organ delineations (liver, spleen, spinal cord, external body contour). The first is based on deformable image registration, the second on the Dice similarity coefficient, the third on the Hausdorff distance, the fourth on pairwise organ distances, and the last is computed by means of the overlap volume histogram. The relationship between typically available features of historically treated patients and the proposed ground-truth notions of similarity is studied by adopting state-of-the-art machine learning techniques, including random forest. Also, the feasibility of automatically selecting the most similar patient is assessed by comparing ground-truth rankings of similarity with predicted rankings. Similarities (mainly) based on the external abdomen shape and on the pairwise organ distances are highly correlated (Pearson r_p ≥ 0.70) and are successfully modeled with random forests based on historically recorded features (pseudo-R2 ≥ 0.69). In contrast, similarities based on the shape of internal organs cannot be modeled. For the similarities that random forest can reliably model, an estimation of feature relevance indicates that abdominal diameters and weight are the most important. Experiments on automatically selecting similar patients lead to coarse, yet quite robust results: the most similar patient is retrieved only 22% of the times, however, the error in worst-case scenarios is limited, with the fourth most similar patient being retrieved. Results demonstrate that automatically selecting similar patients is feasible when focusing on the shape of the external abdomen and on the position of internal organs. Moreover, whereas the common practice in phantom-based dose reconstruction is to select a representative phantom using age, height, and weight as discriminant factors for any treatment scenario, our analysis on abdominal tumor treatment for children shows that the most relevant features are weight and the anterior-posterior and left-right abdominal diameters. © 2018 American Association of Physicists in Medicine.
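
    Two of the proposed ground-truth similarity notions are standard measures and easy to sketch with NumPy and SciPy; the snippet below shows the Dice similarity coefficient on binary organ masks and the symmetric Hausdorff distance on organ surface points (illustrative, per organ pair):

        # Dice coefficient and symmetric Hausdorff distance between two delineated organs.
        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        def dice(mask_a, mask_b):
            # mask_a, mask_b: boolean voxel masks of the same organ in two patients
            inter = np.logical_and(mask_a, mask_b).sum()
            return 2.0 * inter / (mask_a.sum() + mask_b.sum() + 1e-12)

        def hausdorff(points_a, points_b):
            # points_*: (N, 3) surface voxel coordinates of the delineated organ
            return max(directed_hausdorff(points_a, points_b)[0],
                       directed_hausdorff(points_b, points_a)[0])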

  9. Workflow oriented software support for image guided radiofrequency ablation of focal liver malignancies

    NASA Astrophysics Data System (ADS)

    Weihusen, Andreas; Ritter, Felix; Kröger, Tim; Preusser, Tobias; Zidowitz, Stephan; Peitgen, Heinz-Otto

    2007-03-01

    Image guided radiofrequency (RF) ablation has taken a significant part in the clinical routine as a minimally invasive method for the treatment of focal liver malignancies. Medical imaging is used in all parts of the clinical workflow of an RF ablation, incorporating treatment planning, interventional targeting and result assessment. This paper describes a software application, which has been designed to support the RF ablation workflow under consideration of the requirements of clinical routine, such as easy user interaction and a high degree of robust and fast automatic procedures, in order to keep the physician from spending too much time at the computer. The application therefore provides a collection of specialized image processing and visualization methods for treatment planning and result assessment. The algorithms are adapted to CT as well as to MR imaging. The planning support contains semi-automatic methods for the segmentation of liver tumors and the surrounding vascular system as well as an interactive virtual positioning of RF applicators and a concluding numerical estimation of the achievable heat distribution. The assessment of the ablation result is supported by the segmentation of the coagulative necrosis and an interactive registration of pre- and post-interventional image data for the comparison of tumor and necrosis segmentation masks. An automatic quantification of surface distances is performed to verify the embedding of the tumor area into the thermal lesion area. The visualization methods support representations in the commonly used orthogonal 2D view as well as in 3D scenes.

  10. Improving left ventricular segmentation in four-dimensional flow MRI using intramodality image registration for cardiac blood flow analysis.

    PubMed

    Gupta, Vikas; Bustamante, Mariana; Fredriksson, Alexandru; Carlhäll, Carl-Johan; Ebbers, Tino

    2018-01-01

    Assessment of blood flow in the left ventricle using four-dimensional flow MRI requires accurate left ventricle segmentation that is often hampered by the low contrast between blood and the myocardium. The purpose of this work is to improve left-ventricular segmentation in four-dimensional flow MRI for reliable blood flow analysis. The left ventricle segmentations are first obtained using morphological cine-MRI with better in-plane resolution and contrast, and then aligned to four-dimensional flow MRI data. This alignment is, however, not trivial due to inter-slice misalignment errors caused by patient motion and respiratory drift during breath-hold based cine-MRI acquisition. A robust image registration based framework is proposed to mitigate such errors automatically. Data from 20 subjects, including healthy volunteers and patients, was used to evaluate its geometric accuracy and impact on blood flow analysis. High spatial correspondence was observed between manually and automatically aligned segmentations, and the improvements in alignment compared to uncorrected segmentations were significant (P < 0.01). Blood flow analysis from manual and automatically corrected segmentations did not differ significantly (P > 0.05). Our results demonstrate the efficacy of the proposed approach in improving left-ventricular segmentation in four-dimensional flow MRI, and its potential for reliable blood flow analysis. Magn Reson Med 79:554-560, 2018. © 2017 International Society for Magnetic Resonance in Medicine. © 2017 International Society for Magnetic Resonance in Medicine.

  11. Template-based automatic extraction of the joint space of foot bones from CT scan

    NASA Astrophysics Data System (ADS)

    Park, Eunbi; Kim, Taeho; Park, Jinah

    2016-03-01

    Clean bone segmentation is critical in studying the joint anatomy for measuring the spacing between the bones. However, separation of the coupled bones in CT images is sometimes difficult due to ambiguous gray values coming from the noise and the heterogeneity of bone materials as well as narrowing of the joint space. For fine reconstruction of the individual local boundaries, manual operation is a common practice where the segmentation remains to be a bottleneck. In this paper, we present an automatic method for extracting the joint space by applying graph cut on Markov random field model to the region of interest (ROI) which is identified by a template of 3D bone structures. The template includes encoded articular surface which identifies the tight region of the high-intensity bone boundaries together with the fuzzy joint area of interest. The localized shape information from the template model within the ROI effectively separates the bones nearby. By narrowing the ROI down to the region including two types of tissue, the object extraction problem was reduced to binary segmentation and solved via graph cut. Based on the shape of a joint space marked by the template, the hard constraint was set by the initial seeds which were automatically generated from thresholding and morphological operations. The performance and the robustness of the proposed method are evaluated on 12 volumes of ankle CT data, where each volume includes a set of 4 tarsal bones (calcaneus, talus, navicular and cuboid).

  12. Methods for automatic detection of artifacts in microelectrode recordings.

    PubMed

    Bakštein, Eduard; Sieger, Tomáš; Wild, Jiří; Novák, Daniel; Schneider, Jakub; Vostatek, Pavel; Urgošík, Dušan; Jech, Robert

    2017-10-01

    Extracellular microelectrode recording (MER) is a prominent technique for studies of extracellular single-unit neuronal activity. In order to achieve robust results in more complex analysis pipelines, it is necessary to have high quality input data with a low amount of artifacts. We show that noise (mainly electromagnetic interference and motion artifacts) may affect more than 25% of the recording length in a clinical MER database. We present several methods for automatic detection of noise in MER signals, based on (i) unsupervised detection of stationary segments, (ii) large peaks in the power spectral density, and (iii) a classifier based on multiple time- and frequency-domain features. We evaluate the proposed methods on a manually annotated database of 5735 ten-second MER signals from 58 Parkinson's disease patients. The existing methods for artifact detection in single-channel MER that have been rigorously tested, are based on unsupervised change-point detection. We show on an extensive real MER database that the presented techniques are better suited for the task of artifact identification and achieve much better results. The best-performing classifiers (bagging and decision tree) achieved artifact classification accuracy of up to 89% on an unseen test set and outperformed the unsupervised techniques by 5-10%. This was close to the level of agreement among raters using manual annotation (93.5%). We conclude that the proposed methods are suitable for automatic MER denoising and may help in the efficient elimination of undesirable signal artifacts. Copyright © 2017 Elsevier B.V. All rights reserved.
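
    A simplified sketch of the spectral part of such a feature set and a bagged decision-tree classifier, assuming SciPy and scikit-learn; the features and the sampling rate are illustrative stand-ins for the authors' full time- and frequency-domain set.

        # Simple PSD features for MER segments plus a bagged tree classifier (sketch).
        import numpy as np
        from scipy.signal import welch
        from sklearn.ensemble import BaggingClassifier
        from sklearn.tree import DecisionTreeClassifier

        def psd_features(segment, fs=24000):
            f, pxx = welch(segment, fs=fs, nperseg=4096)
            pxx = pxx / (pxx.sum() + 1e-12)
            return np.array([
                pxx.max(),                              # strongest narrow-band peak (e.g. mains hum)
                f[np.argmax(pxx)],                      # frequency of that peak
                -(pxx * np.log(pxx + 1e-12)).sum(),     # spectral entropy
                pxx[f < 100].sum(),                     # low-frequency (motion artifact) power share
            ])

        # X: (n_segments, n_features) stacked psd_features, y: 0 = clean, 1 = artifact
        # clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50).fit(X, y)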

  13. Automatic diagnosis of imbalanced ophthalmic images using a cost-sensitive deep convolutional neural network.

    PubMed

    Jiang, Jiewei; Liu, Xiyang; Zhang, Kai; Long, Erping; Wang, Liming; Li, Wangting; Liu, Lin; Wang, Shuai; Zhu, Mingmin; Cui, Jiangtao; Liu, Zhenzhen; Lin, Zhuoling; Li, Xiaoyan; Chen, Jingjing; Cao, Qianzhong; Li, Jing; Wu, Xiaohang; Wang, Dongni; Wang, Jinghui; Lin, Haotian

    2017-11-21

    Ocular images play an essential role in ophthalmological diagnoses. Having an imbalanced dataset is an inevitable issue in automated ocular disease diagnosis; the scarcity of positive samples tends to result in the misdiagnosis of severely affected patients during the classification task. Exploring an effective computer-aided diagnostic method to deal with imbalanced ophthalmological datasets is therefore crucial. In this paper, we develop an effective cost-sensitive deep residual convolutional neural network (CS-ResCNN) classifier to diagnose ophthalmic diseases using retro-illumination images. First, the regions of interest (crystalline lens) are automatically identified via twice-applied Canny detection and Hough transformation. Then, the localized zones are fed into the CS-ResCNN to extract high-level features for subsequent use in automatic diagnosis. Second, the impacts of cost factors on the CS-ResCNN are further analyzed using a grid-search procedure to verify that our proposed system is robust and efficient. Qualitative analyses and quantitative experimental results demonstrate that our proposed method outperforms other conventional approaches and offers exceptional mean accuracy (92.24%), specificity (93.19%), sensitivity (89.66%) and AUC (97.11%) results. Moreover, the sensitivity of the CS-ResCNN is enhanced by over 13.6% compared to the native CNN method. Our study provides a practical strategy for addressing imbalanced ophthalmological datasets and has the potential to be applied to other medical images. The developed and deployed CS-ResCNN could serve as computer-aided diagnosis software for ophthalmologists in clinical application.
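
    A rough OpenCV sketch of the lens-localisation step (an explicit edge map plus a circular Hough transform, which itself applies Canny internally with param1 as its upper threshold); thresholds and radii are illustrative and would need tuning for real retro-illumination images.

        # Locate the crystalline-lens region of interest via edge + Hough circle detection.
        import cv2
        import numpy as np

        def lens_roi(gray_u8):
            blurred = cv2.GaussianBlur(gray_u8, (9, 9), 2)
            edges = cv2.Canny(blurred, 50, 150)            # edge map kept for inspection
            circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5, minDist=200,
                                       param1=150, param2=40, minRadius=80, maxRadius=300)
            if circles is None:
                return None, edges
            x, y, r = np.round(circles[0, 0]).astype(int)  # strongest circle: centre and radius
            y0, x0 = max(y - r, 0), max(x - r, 0)
            return gray_u8[y0:y + r, x0:x + r], edges      # square crop around the lens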

  14. Ontology-Based High-Level Context Inference for Human Behavior Identification

    PubMed Central

    Villalonga, Claudia; Razzaq, Muhammad Asif; Khan, Wajahat Ali; Pomares, Hector; Rojas, Ignacio; Lee, Sungyoung; Banos, Oresti

    2016-01-01

    Recent years have witnessed a huge progress in the automatic identification of individual primitives of human behavior, such as activities or locations. However, the complex nature of human behavior demands more abstract contextual information for its analysis. This work presents an ontology-based method that combines low-level primitives of behavior, namely activity, locations and emotions, unprecedented to date, to intelligently derive more meaningful high-level context information. The paper contributes with a new open ontology describing both low-level and high-level context information, as well as their relationships. Furthermore, a framework building on the developed ontology and reasoning models is presented and evaluated. The proposed method proves to be robust while identifying high-level contexts even in the event of erroneously-detected low-level contexts. Despite reasonable inference times being obtained for a relevant set of users and instances, additional work is required to scale to long-term scenarios with a large number of users. PMID:27690050

  15. Automatic Requirements Specification Extraction from Natural Language (ARSENAL)

    DTIC Science & Technology

    2014-10-01

    Natural language is used to communicate technical descriptions between the various stakeholders (e.g., customers, designers, implementers) involved in the design of software systems. However, natural language descriptions can be informal, incomplete and imprecise. Key concerns for automatic requirements extraction are the accuracy of the natural language processing stage, the degree of automation, and robustness to noise.

  16. Are Children's Memory Illusions Created Differently from Those of Adults? Evidence from Levels-of-Processing and Divided Attention Paradigms

    ERIC Educational Resources Information Center

    Wimmer, Marina C.; Howe, Mark L.

    2010-01-01

    In two experiments, we investigated the robustness and automaticity of adults' and children's generation of false memories by using a levels-of-processing paradigm (Experiment 1) and a divided attention paradigm (Experiment 2). The first experiment revealed that when information was encoded at a shallow level, true recognition rates decreased for…

  17. Combatting Inherent Vulnerabilities of CFAR Algorithms and a New Robust CFAR Design

    DTIC Science & Technology

    1993-09-01

    Constant false alarm rate (CFAR) detectors are key elements of any automatic radar system. Unfortunately, CFAR systems are inherently vulnerable to degradation caused by large clutter edges, multiple targets, and electronic countermeasures (ECM) environments. This thesis presents eight popular and well-studied CFAR algorithms and a new robust CFAR design.
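
    For reference, the baseline scheme whose weaknesses motivate such robust designs is cell-averaging CFAR; a minimal one-dimensional sketch (illustrative parameters, assuming exponentially distributed noise power) is:

        # Minimal 1-D cell-averaging CFAR detector (baseline scheme, illustrative).
        import numpy as np

        def ca_cfar(power, n_train=16, n_guard=2, pfa=1e-4):
            n = len(power)
            detections = np.zeros(n, dtype=bool)
            # threshold multiplier for the desired false-alarm probability
            alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
            half = n_train // 2
            for i in range(half + n_guard, n - half - n_guard):
                lead = power[i - n_guard - half:i - n_guard]       # training cells before the CUT
                lag = power[i + n_guard + 1:i + n_guard + half + 1]  # training cells after the CUT
                noise = np.concatenate([lead, lag]).mean()
                detections[i] = power[i] > alpha * noise
            return detections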

  18. Artificial Epigenetic Networks: Automatic Decomposition of Dynamical Control Tasks Using Topological Self-Modification.

    PubMed

    Turner, Alexander P; Caves, Leo S D; Stepney, Susan; Tyrrell, Andy M; Lones, Michael A

    2017-01-01

    This paper describes the artificial epigenetic network, a recurrent connectionist architecture that is able to dynamically modify its topology in order to automatically decompose and solve dynamical problems. The approach is motivated by the behavior of gene regulatory networks, particularly the epigenetic process of chromatin remodeling that leads to topological change and which underlies the differentiation of cells within complex biological organisms. We expected this approach to be useful in situations where there is a need to switch between different dynamical behaviors, and do so in a sensitive and robust manner in the absence of a priori information about problem structure. This hypothesis was tested using a series of dynamical control tasks, each requiring solutions that could express different dynamical behaviors at different stages within the task. In each case, the addition of topological self-modification was shown to improve the performance and robustness of controllers. We believe this is due to the ability of topological changes to stabilize attractors, promoting stability within a dynamical regime while allowing rapid switching between different regimes. Post hoc analysis of the controllers also demonstrated how the partitioning of the networks could provide new insights into problem structure.

  19. Automatic Non-Destructive Growth Measurement of Leafy Vegetables Based on Kinect

    PubMed Central

    Hu, Yang; Wang, Le; Xiang, Lirong; Wu, Qian; Jiang, Huanyu

    2018-01-01

    Non-destructive plant growth measurement is essential for plant growth and health research. As a 3D sensor, Kinect v2 has huge potential in agricultural applications, benefiting from its low price and strong robustness. This paper proposes a Kinect-based automatic system for non-destructive growth measurement of leafy vegetables. The system used a turntable to acquire multi-view point clouds of the measured plant. A series of suitable algorithms were then applied to obtain a fine 3D reconstruction of the plant while measuring the key growth parameters, including relative/absolute height, total/projected leaf area and volume. In the experiment, 63 pots of lettuce in different growth stages were measured. The results show that the Kinect-measured height and projected area have a good linear relationship with the reference measurements, while the measured total area and volume both follow power-law relationships with the reference data. All these fits show good goodness of fit (R2 = 0.9457–0.9914). In the study of biomass correlations, the Kinect-measured volume was found to have a good power-law relationship (R2 = 0.9281) with fresh weight. In addition, the system's practicality was validated by performance and robustness analysis. PMID:29518958

  20. Automatic segmentation of the left ventricle cavity and myocardium in MRI data.

    PubMed

    Lynch, M; Ghita, O; Whelan, P F

    2006-04-01

    A novel approach for automatic segmentation has been developed to extract the epi-cardium and endo-cardium boundaries of the left ventricle (lv) of the heart. The developed segmentation scheme takes multi-slice and multi-phase magnetic resonance (MR) images of the heart, traversing the short-axis length from the base to the apex. Each image is taken at one instant in the heart's phase. The images are segmented using a diffusion-based filter followed by an unsupervised clustering technique, and the resulting labels are checked to locate the (lv) cavity. From cardiac anatomy, the closest pool of blood to the lv cavity is the right ventricle cavity. The wall between these two blood-pools (interventricular septum) is measured to give an approximate thickness for the myocardium. This value is used when a radial search is performed on a gradient image to find appropriate robust segments of the epi-cardium boundary. The robust edge segments are then joined using a normal spline curve. Experimental results are presented with very encouraging qualitative and quantitative results, and a comparison is made against the state-of-the-art level-sets method.

  1. Two-dimensional statistical linear discriminant analysis for real-time robust vehicle-type recognition

    NASA Astrophysics Data System (ADS)

    Zafar, I.; Edirisinghe, E. A.; Acar, S.; Bez, H. E.

    2007-02-01

    Automatic vehicle Make and Model Recognition (MMR) systems provide useful performance enhancements to vehicle recognition systems that are solely based on Automatic License Plate Recognition (ALPR) systems. Several car MMR systems have been proposed in the literature. However, these approaches are based on feature detection algorithms that can perform sub-optimally under adverse lighting and/or occlusion conditions. In this paper we propose a real-time, appearance-based car MMR approach using Two-Dimensional Linear Discriminant Analysis (2D-LDA) that is capable of addressing this limitation. We provide experimental results to analyse the proposed algorithm's robustness under varying illumination and occlusion conditions. We have shown that the best performance with the proposed 2D-LDA based car MMR approach is obtained when the eigenvectors of lower significance are ignored. For the given database of 200 car images of 25 different make-model classifications, a best accuracy of 91% was obtained with the 2D-LDA approach. We use a direct Principal Component Analysis (PCA) based approach as a benchmark to compare and contrast the performance of the proposed 2D-LDA approach to car MMR. We conclude that in general the 2D-LDA based algorithm surpasses the performance of the PCA based approach.
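
    A compact sketch of a 2D-LDA projection learned directly on image matrices (column-direction scatter matrices, keeping only the leading eigenvectors); this is an illustrative NumPy version under those assumptions, not the authors' implementation.

        # Two-dimensional LDA on image matrices: learn a column-space projection W.
        import numpy as np

        def two_d_lda(images, labels, n_components=8):
            # images: (N, H, W) grayscale car images, labels: (N,) make/model ids
            classes = np.unique(labels)
            mean_all = images.mean(axis=0)
            W = images.shape[2]
            Sb = np.zeros((W, W))
            Sw = np.zeros((W, W))
            for c in classes:
                Xc = images[labels == c]
                mean_c = Xc.mean(axis=0)
                d = mean_c - mean_all
                Sb += len(Xc) * d.T @ d                  # between-class image scatter
                for x in Xc:
                    e = x - mean_c
                    Sw += e.T @ e                        # within-class image scatter
            # generalized eigenproblem Sb w = lambda Sw w (regularized for stability)
            evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(W), Sb))
            order = np.argsort(evals.real)[::-1]
            return evecs[:, order[:n_components]].real   # (W, n_components) projection

        # A feature matrix for an image X is then X @ Wproj, compared for instance by
        # Frobenius distance to class templates.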

  2. Fractional-N phase-locked loop for split and direct automatic frequency control in A-GPS

    NASA Astrophysics Data System (ADS)

    Park, Chester Sungchung; Park, Sungkyung

    2018-07-01

    A low-power mixed-signal phase-locked loop (PLL) is modelled and designed for the DigRF interface between the RF chip and the modem chip. An assisted-GPS or A-GPS multi-standard system includes the DigRF interface and uses the split automatic frequency control (AFC) technique. The PLL circuitry uses the direct AFC technique and is based on the fractional-N architecture using a digital delta-sigma modulator along with a digital counter, fulfilling simple ultra-high-resolution AFC with robust digital circuitry and its timing. Relative to the output frequency, the measured AFC resolution or accuracy is <5 parts per billion (ppb) or on the order of a Hertz. The cycle-to-cycle rms jitter is <6 ps and the typical settling time is <30 μs. A spur reduction technique is adopted and implemented as well, demonstrating spur reduction without employing dithering. The proposed PLL includes a low-leakage phase-frequency detector, a low-drop-out regulator, power-on-reset circuitry and precharge circuitry. The PLL is implemented in a 90-nm CMOS process technology with 1.2 V single supply. The overall PLL draws about 1.1 mA from the supply.

  3. Personal photograph enhancement using internet photo collections.

    PubMed

    Zhang, Chenxi; Gao, Jizhou; Wang, Oliver; Georgel, Pierre; Yang, Ruigang; Davis, James; Frahm, Jan-Michael; Pollefeys, Marc

    2014-02-01

    Given the growth of Internet photo collections, we now have a visual index of all major cities and tourist sites in the world. However, it is still a difficult task to capture that perfect shot with your own camera when visiting these places, especially when your camera itself has limitations, such as a limited field of view. In this paper, we propose a framework to overcome the imperfections of personal photographs of tourist sites using the rich information provided by large-scale Internet photo collections. Our method deploys state-of-the-art techniques for constructing initial 3D models from photo collections. The same techniques are then used to register personal photographs to these models, allowing us to augment personal 2D images with 3D information. This strong available scene prior allows us to address a number of traditionally challenging image enhancement techniques and achieve high-quality results using simple and robust algorithms. Specifically, we demonstrate automatic foreground segmentation, mono-to-stereo conversion, field-of-view expansion, photometric enhancement, and additionally automatic annotation with geolocation and tags. Our method clearly demonstrates some possible benefits of employing the rich information contained in online photo databases to efficiently enhance and augment one's own personal photographs.

  4. Automatic FDG-PET-based tumor and metastatic lymph node segmentation in cervical cancer

    NASA Astrophysics Data System (ADS)

    Arbonès, Dídac R.; Jensen, Henrik G.; Loft, Annika; Munck af Rosenschöld, Per; Hansen, Anders Elias; Igel, Christian; Darkner, Sune

    2014-03-01

    Treatment of cervical cancer, one of the three most commonly diagnosed cancers worldwide, often relies on delineations of the tumour and metastases based on PET imaging with the radiotracer 18F-Fluorodeoxyglucose (FDG). We present a robust automatic algorithm for segmenting the gross tumour volume (GTV) and metastatic lymph nodes in such images. As the cervix is located next to the bladder and FDG is washed out through the urine, the PET-positive GTV and the bladder cannot be easily separated. Our processing pipeline starts with a histogram-based region-of-interest detection followed by level set segmentation. After that, morphological image operations combined with clustering, region growing, and nearest neighbour labelling allow the bladder to be removed and the tumour and metastatic lymph nodes to be identified. The proposed method was applied to 125 patients and no failure could be detected by visual inspection. We compared our segmentations with results from manual delineations of corresponding MR and CT images, showing that at least 97.5% of the detected GTV lies within the MR/CT delineations. We conclude that the algorithm has a very high potential for substituting the tedious manual delineation of PET-positive areas.

  5. ABISM: an interactive image quality assessment tool for adaptive optics instruments

    NASA Astrophysics Data System (ADS)

    Girard, Julien H.; Tourneboeuf, Martin

    2016-07-01

    ABISM (Automatic Background Interactive Strehl Meter) is an interactive tool for evaluating the image quality of astronomical images. It works on seeing-limited point spread functions (PSFs) but was developed in particular for diffraction-limited PSFs produced by adaptive optics (AO) systems. In the VLT service mode (SM) operations framework, ABISM is designed to help support astronomers or telescope and instrument operators (TIOs) quickly measure the Strehl ratio (SR) during or right after an observing block (OB), to evaluate whether it meets the requirements/predictions or whether it has to be repeated and will remain in the SM queue. It is a Python-based tool with a graphical user interface (GUI) that can be used with little AO knowledge. The night astronomer (NA) or Telescope and Instrument Operator (TIO) can launch ABISM in one click, and the program is able to read keywords from the FITS header to avoid mistakes. A significant effort was also made to make ABISM robust (and forgiving), with a high rate of repeatability. In particular, ABISM is able to automatically correct for bad pixels, eliminate stellar neighbours, properly estimate/fit the background, etc.
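
    ABISM's own Strehl computation is more elaborate, but the quantity it measures can be sketched in a few lines. The Python sketch below assumes a well-sampled PSF from an unobstructed circular aperture; the function name, the crude median background estimate and the neglect of central obstruction and detector effects are simplifying assumptions, not the tool's implementation.

    ```python
    import numpy as np

    def strehl_ratio(psf, wavelength, diameter, pixel_scale_rad, sky_level=None):
        """Rough Strehl ratio estimate for a well-sampled PSF image.

        Ratio of the background-subtracted, flux-normalized peak to the peak
        of a unit-energy diffraction pattern of an unobstructed circular
        aperture, whose on-axis value is pi*D^2/(4*lambda^2) per steradian.
        """
        psf = np.asarray(psf, dtype=float)
        if sky_level is None:
            sky_level = np.median(psf)      # crude background estimate
        psf = psf - sky_level

        measured_peak = psf.max() / psf.sum()   # fraction of flux in peak pixel

        # Diffraction-limited peak, integrated over one pixel's solid angle.
        diffraction_peak = (np.pi * diameter**2 / (4.0 * wavelength**2)) \
            * pixel_scale_rad**2
        return measured_peak / diffraction_peak
    ```

    For example, for an 8.2 m telescope observing at 2.2 μm with a 13 mas pixel scale one would call `strehl_ratio(psf, 2.2e-6, 8.2, 13e-3 / 206265.0)`; those numbers are purely illustrative.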

  6. Automatic registration of panoramic image sequence and mobile laser scanning data using semantic features

    NASA Astrophysics Data System (ADS)

    Li, Jianping; Yang, Bisheng; Chen, Chi; Huang, Ronggang; Dong, Zhen; Xiao, Wen

    2018-02-01

    Inaccurate exterior orientation parameters (EoPs) between sensors obtained by pre-calibration lead to failure of the registration between a panoramic image sequence and mobile laser scanning data. To address this challenge, this paper proposes an automatic registration method based on semantic features extracted from panoramic images and point clouds. Firstly, accurate rotation parameters between the panoramic camera and the laser scanner are estimated using GPS- and IMU-aided structure from motion (SfM). The initial EoPs of the panoramic images are obtained at the same time. Secondly, vehicles in the panoramic images are extracted by Faster-RCNN as candidate primitives to be matched with potential corresponding primitives in the point clouds according to the initial EoPs. Finally, the translation between the panoramic camera and the laser scanner is refined by maximizing the overlapping area of corresponding primitive pairs using Particle Swarm Optimization (PSO), resulting in a finer registration between the panoramic image sequence and the point clouds. Two challenging urban scenes were used to assess the proposed method, and the final registration errors for both scenes were less than three pixels, which demonstrates a high level of automation, robustness and accuracy.

  7. Interactive contour delineation and refinement in treatment planning of image‐guided radiation therapy

    PubMed Central

    Zhou, Wu

    2014-01-01

    The accurate contour delineation of the target and/or organs at risk (OAR) is essential in treatment planning for image‐guided radiation therapy (IGRT). Although many automatic contour delineation approaches have been proposed, few of them fulfill the requirements of clinical applications in terms of accuracy and efficiency. Moreover, clinicians would like to analyze the characteristics of regions of interest (ROIs) and adjust contours manually during IGRT. An interactive tool for contour delineation is necessary in such cases. In this work, a novel approach of curve fitting for interactive contour delineation is proposed. It allows users to quickly improve contours with a simple mouse click. Initially, a region containing the object of interest is selected in the image; the program then automatically selects important control points from the region boundary, and the method of Hermite cubic curves is used to fit the control points. The fitted curve can then be revised by moving its control points interactively. Several curve fitting methods are presented for comparison. Finally, in order to improve the accuracy of contour delineation, a curve refinement process based on the maximum gradient magnitude is proposed: all points on the curve are automatically moved towards the positions of maximum gradient magnitude. Experimental results show that Hermite cubic curves and the gradient-magnitude-based curve refinement perform best on the proposed platform in terms of accuracy, robustness, and computation time. Experimental results on real medical images demonstrate the efficiency, accuracy, and robustness of the proposed process in clinical applications. PACS number: 87.53.Tf PMID:24423846
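
    The two main ingredients described above, a piecewise Hermite cubic through user-editable control points and a refinement that snaps curve samples to the nearby maximum of the gradient magnitude, can be sketched as follows. This is an illustrative Python/NumPy/SciPy sketch, not the paper's implementation; the Catmull-Rom style tangents and the fixed search window are assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    def hermite_segment(p0, p1, m0, m1, ts):
        """Evaluate one cubic Hermite segment at parameter values ts in [0, 1]."""
        ts = ts[:, None]
        h00 = 2 * ts**3 - 3 * ts**2 + 1
        h10 = ts**3 - 2 * ts**2 + ts
        h01 = -2 * ts**3 + 3 * ts**2
        h11 = ts**3 - ts**2
        return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

    def hermite_curve(control_points, samples_per_segment=20):
        """Closed piecewise-cubic Hermite curve through the control points.

        Tangents are Catmull-Rom style central differences, so moving a single
        control point only deforms the curve locally.
        """
        P = np.asarray(control_points, dtype=float)     # (n, 2) as (x, y)
        n = len(P)
        tangents = 0.5 * (np.roll(P, -1, axis=0) - np.roll(P, 1, axis=0))
        ts = np.linspace(0.0, 1.0, samples_per_segment, endpoint=False)
        segments = [hermite_segment(P[i], P[(i + 1) % n],
                                    tangents[i], tangents[(i + 1) % n], ts)
                    for i in range(n)]
        return np.vstack(segments)

    def refine_to_max_gradient(curve, image, search_radius=3):
        """Snap each curve point to the nearby pixel with maximum gradient magnitude."""
        gx = ndimage.sobel(image.astype(float), axis=1)
        gy = ndimage.sobel(image.astype(float), axis=0)
        grad_mag = np.hypot(gx, gy)
        refined = []
        for x, y in curve:
            r0, c0 = int(round(y)), int(round(x))
            r_lo, c_lo = max(r0 - search_radius, 0), max(c0 - search_radius, 0)
            window = grad_mag[r_lo:r0 + search_radius + 1,
                              c_lo:c0 + search_radius + 1]
            dr, dc = np.unravel_index(np.argmax(window), window.shape)
            refined.append((c_lo + dc, r_lo + dr))
        return np.asarray(refined, dtype=float)
    ```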

  8. Towards automatic SAR-optical stereogrammetry over urban areas using very high resolution imagery

    NASA Astrophysics Data System (ADS)

    Qiu, Chunping; Schmitt, Michael; Zhu, Xiao Xiang

    2018-04-01

    In this paper we discuss the potential and challenges regarding SAR-optical stereogrammetry for urban areas, using very-high-resolution (VHR) remote sensing imagery. Since we do this mainly from a geometrical point of view, we first analyze the height reconstruction accuracy to be expected for different stereogrammetric configurations. Then, we propose a strategy for simultaneous tie point matching and 3D reconstruction, which exploits an epipolar-like search window constraint. To drive the matching and ensure some robustness, we combine different established hand-crafted similarity measures. For the experiments, we use real test data acquired by the WorldView-2, TerraSAR-X and MEMPHIS sensors. Our results show that SAR-optical stereogrammetry using VHR imagery is generally feasible with 3D positioning accuracies in the meter-domain, although the matching of these strongly heterogeneous multi-sensor data remains very challenging.

  9. Towards automatic SAR-optical stereogrammetry over urban areas using very high resolution imagery.

    PubMed

    Qiu, Chunping; Schmitt, Michael; Zhu, Xiao Xiang

    2018-04-01

    In this paper we discuss the potential and challenges regarding SAR-optical stereogrammetry for urban areas, using very-high-resolution (VHR) remote sensing imagery. Since we do this mainly from a geometrical point of view, we first analyze the height reconstruction accuracy to be expected for different stereogrammetric configurations. Then, we propose a strategy for simultaneous tie point matching and 3D reconstruction, which exploits an epipolar-like search window constraint. To drive the matching and ensure some robustness, we combine different established hand-crafted similarity measures. For the experiments, we use real test data acquired by the WorldView-2, TerraSAR-X and MEMPHIS sensors. Our results show that SAR-optical stereogrammetry using VHR imagery is generally feasible with 3D positioning accuracies in the meter-domain, although the matching of these strongly heterogeneous multi-sensor data remains very challenging.

  10. A model predictive speed tracking control approach for autonomous ground vehicles

    NASA Astrophysics Data System (ADS)

    Zhu, Min; Chen, Huiyan; Xiong, Guangming

    2017-03-01

    This paper presents a novel speed tracking control approach based on a model predictive control (MPC) framework for autonomous ground vehicles. A switching algorithm without calibration is proposed to determine whether drive or brake control is applied. Combined with a simple inverse longitudinal vehicle model and adaptive regulation of the MPC, this algorithm can make use of the engine brake torque under various driving conditions and automatically avoid high-frequency oscillations. A simplified quadratic program (QP) solving algorithm is used to reduce the computational time, and the approach has been implemented on a 16-bit microcontroller. The performance of the proposed approach is evaluated via simulations and vehicle tests, which were carried out in a range of speed-profile tracking tasks. With a well-designed system structure, high-precision speed control is achieved. The system is robust to model uncertainty and external disturbances, and yields a faster response with less overshoot than a PI controller.
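
    To make the receding-horizon idea concrete, the sketch below sets up one MPC step for speed tracking with a scalar first-order longitudinal model and solves the unconstrained quadratic program in condensed form. The model, weights, horizon and the drive/brake sign rule are illustrative assumptions; the paper's QP solver, constraints and inverse vehicle model are not reproduced here.

    ```python
    import numpy as np

    def mpc_speed_control(v_now, v_ref, a=0.98, b=0.05, horizon=10,
                          q=1.0, r=0.05):
        """One receding-horizon step for speed tracking with a scalar model.

        Assumes a simple discrete longitudinal model v[k+1] = a*v[k] + b*u[k]
        (illustrative, not the paper's vehicle model). The unconstrained QP
        is solved in condensed form via least squares; only the first control
        move is applied, as in standard MPC.
        """
        # Prediction matrices: v_pred = F*v_now + G @ u
        F = np.array([a ** (k + 1) for k in range(horizon)])
        G = np.zeros((horizon, horizon))
        for i in range(horizon):
            for j in range(i + 1):
                G[i, j] = a ** (i - j) * b

        v_ref = np.full(horizon, v_ref, dtype=float)
        # Minimize sum q*(v_pred - v_ref)^2 + r*u^2 as stacked least squares.
        A = np.vstack([np.sqrt(q) * G, np.sqrt(r) * np.eye(horizon)])
        y = np.concatenate([np.sqrt(q) * (v_ref - F * v_now),
                            np.zeros(horizon)])
        u = np.linalg.lstsq(A, y, rcond=None)[0]
        u0 = u[0]
        # A positive command maps to drive torque, a negative one to braking
        # (including engine braking), echoing the switching idea above.
        mode = "drive" if u0 >= 0 else "brake"
        return u0, mode
    ```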

  11. Fully automatic method for the determination of fat soluble vitamins and vitamin D metabolites in serum.

    PubMed

    Mata-Granados, J M; Quesada Gómez, J M; Luque de Castro, M D

    2009-05-01

    Fat soluble vitamins and vitamin D metabolites are key compounds in bone metabolism. Unfortunately, variability among 25(OH)D assays limits clinicians' ability to monitor vitamin D status, supplementation, and toxicity. A 0.5 ml serum sample was mixed with 0.5 ml of 60% acetonitrile containing 150 mM sodium dodecyl sulfate, vortexed for 30 s and injected into an automatic solid-phase extraction (SPE) system for cleanup-preconcentration, then transferred on-line to a reversed-phase analytical column by a 15% methanol-acetonitrile mobile phase at 1.0 ml/min for individual separation of the target analytes. Ultraviolet detection was performed at 265 nm, 325 nm and 292 nm for vitamin D metabolites, vitamin A, and alpha- and delta-tocopherols, respectively. Detection limits were between 0.0015 and 0.26 microg/ml for the target compounds; the precision (expressed as relative standard deviation) was between 0.83 and 3.6% for repeatability and between 1.8 and 4.62% for within-laboratory reproducibility. Recoveries between 97-100.2% and 95-99% were obtained for low and high concentrations of the target analytes in serum. The total analysis time was 20 min. The on-line coupling of SPE-HPLC endows the proposed method with reliability, robustness, and unattended operation, making it a useful tool for high-throughput analysis in clinical and research laboratories.

  12. An intelligent control scheme for precise tip-motion control in atomic force microscopy.

    PubMed

    Wang, Yanyan; Hu, Xiaodong; Xu, Linyan

    2016-01-01

    The paper proposes a new intelligent control method to precisely control the tip motion of the atomic force microscope (AFM). The tip moves up and down at a high rate along the z direction during scanning, requiring a rapid feedback controller. The standard proportional-integral (PI) feedback controller is commonly used in commercial AFMs to enable topography measurements. The controller's response performance is determined by the setting of the proportional (P) and integral (I) parameters. However, the two parameters cannot be automatically adjusted together according to the scanning speed and the surface topography during continuous scanning, leading to inaccurate measurements. Thus a new intelligent controller combining a fuzzy controller and the PI controller is put forward in the paper. The new controller automatically selects the most appropriate PI parameters to achieve a fast response rate on the basis of the tracking errors. In the experimental setup, the new controller is realized with a digital signal processor (DSP) system and implemented in a conventional AFM system. Experiments are carried out comparing the new method with the standard PI controller. The results demonstrate that the new method is more robust and effective for precise tip motion control, yielding higher-quality images by shortening the controller's response time. © Wiley Periodicals, Inc.
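
    The fuzzy/PI combination described above amounts to scheduling the PI gains from fuzzy rules on the tracking error. The sketch below is an illustrative Python version with assumed membership breakpoints and gain sets; it is not the published controller, which runs on a DSP and is tuned to the specific AFM dynamics.

    ```python
    import numpy as np

    def triangular(x, left, center, right):
        """Triangular membership function on [left, right] peaking at center."""
        if x <= left or x >= right:
            return 0.0
        if x <= center:
            return (x - left) / (center - left)
        return (right - x) / (right - center)

    class FuzzyPI:
        """PI controller whose gains are blended by fuzzy rules on |error|.

        Small errors select conservative gains, large errors aggressive gains,
        with smooth interpolation in between. Breakpoints and gain sets below
        are illustrative placeholders.
        """

        def __init__(self, dt=1e-4):
            self.dt = dt
            self.integral = 0.0
            # (Kp, Ki) rule consequents for small / medium / large |error|
            self.gain_sets = [(0.5, 50.0), (1.5, 200.0), (4.0, 600.0)]

        def memberships(self, abs_err):
            return np.array([
                triangular(abs_err, -1e-9, 0.0, 0.5),   # small
                triangular(abs_err, 0.0, 0.5, 1.0),     # medium
                triangular(abs_err, 0.5, 1.0, 1e9),     # large
            ])

        def update(self, error):
            mu = self.memberships(abs(error))
            mu = mu / (mu.sum() + 1e-12)                # normalize rule weights
            kp = sum(m * g[0] for m, g in zip(mu, self.gain_sets))
            ki = sum(m * g[1] for m, g in zip(mu, self.gain_sets))
            self.integral += error * self.dt
            return kp * error + ki * self.integral
    ```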

  13. An automatic microseismic or acoustic emission arrival identification scheme with deep recurrent neural networks

    NASA Astrophysics Data System (ADS)

    Zheng, Jing; Lu, Jiren; Peng, Suping; Jiang, Tianqi

    2018-02-01

    Conventional arrival pick-up algorithms cannot avoid manual modification of parameters when multiple events must be identified simultaneously under different signal-to-noise ratios (SNRs). Therefore, in order to automatically obtain the arrivals of multiple events with high precision under different SNRs, this study proposes an algorithm that picks the arrivals of microseismic or acoustic emission events using deep recurrent neural networks. The arrival identification is performed in two steps, a training phase and a testing phase. The training process is modelled by deep recurrent neural networks with a Long Short-Term Memory architecture. During the testing phase, the learned weights are used to identify the arrivals in microseismic/acoustic emission data sets. The data sets were obtained from rock-physics acoustic emission experiments. In order to obtain data sets under different SNRs, random noise was added to the raw experimental data. The results show that the proposed method attains a hit rate above 80 per cent at an SNR of 0 dB, and of approximately 70 per cent at an SNR of -5 dB, within an absolute error of 10 sampling points. These results indicate that the proposed method has high picking precision and robustness.
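
    The abstract does not give network details, so the sketch below only illustrates the general recipe: a recurrent (LSTM) network mapping a waveform to a per-sample probability, trained with before/after-arrival labels, with the pick taken at the first threshold crossing. It is written in PyTorch; the layer sizes, the bidirectional choice and the thresholding rule are assumptions rather than the authors' architecture.

    ```python
    import torch
    import torch.nn as nn

    class ArrivalPicker(nn.Module):
        """Bidirectional LSTM that outputs a per-sample arrival probability."""

        def __init__(self, n_features=1, hidden=64, layers=2):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, num_layers=layers,
                                batch_first=True, bidirectional=True)
            self.head = nn.Linear(2 * hidden, 1)

        def forward(self, x):                    # x: (batch, time, n_features)
            out, _ = self.lstm(x)
            return torch.sigmoid(self.head(out)).squeeze(-1)   # (batch, time)

    def train_step(model, optimizer, waveforms, labels):
        """One supervised step: labels are 0 before the arrival and 1 after."""
        model.train()
        optimizer.zero_grad()
        probs = model(waveforms)
        loss = nn.functional.binary_cross_entropy(probs, labels)
        loss.backward()
        optimizer.step()
        return loss.item()

    def pick_arrival(model, waveform, threshold=0.5):
        """Return the index of the first sample whose probability exceeds threshold."""
        model.eval()
        with torch.no_grad():
            probs = model(waveform.unsqueeze(0))[0]
        above = (probs >= threshold).nonzero(as_tuple=False)
        return int(above[0]) if len(above) else None
    ```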

  14. Effective System for Automatic Bundle Block Adjustment and Ortho Image Generation from Multi Sensor Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Akilan, A.; Nagasubramanian, V.; Chaudhry, A.; Reddy, D. Rajesh; Sudheer Reddy, D.; Usha Devi, R.; Tirupati, T.; Radhadevi, P. V.; Varadan, G.

    2014-11-01

    Block adjustment is a technique for large-area mapping from images obtained from different remote sensing satellites. The challenge in this process is to handle, at the system level, huge numbers of satellite images from different sources with different resolutions and accuracies. This paper describes a system with various tools and techniques to effectively handle the end-to-end chain in large-area mapping and production with a good level of automation, and with provisions for intuitive analysis of the final results in 3D and 2D environments. In addition, the interface for using open source ortho and DEM references, viz. ETM, SRTM, etc., and for displaying ESRI shapes for the image footprints is explained. Rigorous theory, mathematical modelling, workflow automation and sophisticated software engineering tools are included to ensure high photogrammetric accuracy and productivity. Major building blocks of the block adjustment solution, such as the georeferencing, geo-capturing and geo-modelling tools, are explained in this paper. To provide an optimal bundle block adjustment solution with high-precision results, the system has been optimized in many stages to fully exploit the available hardware resources. The robustness of the system is ensured by handling failures in the automatic procedure and saving the process state at every stage for subsequent restoration from the point of interruption. The results obtained from the various stages of the system are presented in the paper.

  15. The impact of OCR accuracy on automated cancer classification of pathology reports.

    PubMed

    Zuccon, Guido; Nguyen, Anthony N; Bergheim, Anton; Wickman, Sandra; Grayson, Narelle

    2012-01-01

    To evaluate the effects of Optical Character Recognition (OCR) on the automatic cancer classification of pathology reports, scanned images of pathology reports were converted to electronic free text using a commercial OCR system. A state-of-the-art cancer classification system, the Medical Text Extraction (MEDTEX) system, was used to automatically classify the OCR reports. Classifications produced by MEDTEX on the OCR versions of the reports were compared with the classifications from a human-amended version of the OCR reports. The employed OCR system was found to recognise scanned pathology reports with up to 99.12% character accuracy and up to 98.95% word accuracy. Errors in the OCR processing were found to have minimal impact on the automatic classification of scanned pathology reports into notifiable groups. However, the impact of OCR errors is not negligible when considering the extraction of cancer notification items, such as primary site, histological type, etc. The automatic cancer classification system used in this work, MEDTEX, has proven to be robust to errors produced by the acquisition of free-text pathology reports from scanned images through OCR software. However, issues emerge when considering the extraction of cancer notification items.

  16. [A wavelet-transform-based method for the automatic detection of late-type stars].

    PubMed

    Liu, Zhong-tian; Zhao, Rui-zhen; Zhao, Yong-heng; Wu, Fu-chao

    2005-07-01

    The LAMOST project, the world's largest sky survey project, urgently needs an automatic late-type star detection system. However, to our knowledge, no effective methods for automatic late-type star detection have been reported in the literature up to now. The present work is intended to explore possible ways to deal with this issue. Here, by "late-type stars" we mean those stars with strong molecular absorption bands, including oxygen-rich M, L and T type stars and carbon-rich C stars. Based on experimental results, the authors find that after a wavelet transform with 5 scales on the late-type star spectra, the frequency spectrum of the transformed coefficients on the 5th scale consistently shows a unimodal distribution, with the energy largely concentrated in a small neighbourhood centred around the unique peak. For the spectra of other celestial bodies, however, the corresponding frequency spectrum is multimodal and its energy is dispersed. Based on this finding, the authors present a wavelet-transform-based automatic late-type star detection method. Extensive experiments show the proposed method to be practical and robust.
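
    The detection criterion described above, energy concentration around a single peak in the Fourier magnitude of the scale-5 wavelet coefficients, can be sketched directly. The Python sketch below uses the PyWavelets library; the wavelet family, window size and energy threshold are illustrative assumptions, since the abstract does not specify them.

    ```python
    import numpy as np
    import pywt

    def is_late_type(spectrum, wavelet="db4", energy_fraction=0.7, window=5):
        """Flag a 1D spectrum as late-type using the criterion described above.

        A 5-scale wavelet decomposition is applied, the Fourier magnitude of
        the scale-5 detail coefficients is computed, and the spectrum is
        flagged when that magnitude is effectively unimodal, i.e. most of its
        energy is concentrated around its single peak.
        """
        coeffs = pywt.wavedec(np.asarray(spectrum, dtype=float), wavelet, level=5)
        scale5 = coeffs[1]                      # detail coefficients at scale 5
        mag = np.abs(np.fft.rfft(scale5))

        peak = int(np.argmax(mag))
        lo, hi = max(peak - window, 0), min(peak + window + 1, len(mag))
        concentration = (mag[lo:hi] ** 2).sum() / ((mag ** 2).sum() + 1e-12)
        return concentration >= energy_fraction
    ```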

  17. MIAQuant, a novel system for automatic segmentation, measurement, and localization comparison of different biomarkers from serialized histological slices.

    PubMed

    Casiraghi, Elena; Cossa, Mara; Huber, Veronica; Rivoltini, Licia; Tozzi, Matteo; Villa, Antonello; Vergani, Barbara

    2017-11-02

    In clinical practice, automatic image analysis methods that quickly quantify histological results by objective and replicable means are becoming more and more necessary and widespread. Although several commercial software products are available for this task, they offer very little flexibility and are provided as black boxes without modifiable source code. To overcome these problems, we used the commonly available MATLAB platform to develop an automatic method, MIAQuant, for the analysis of histochemical and immunohistochemical images stained with various methods and acquired by different tools. It automatically extracts and quantifies markers characterized by various colors and shapes; furthermore, it aligns contiguous tissue slices stained with different markers and overlaps them in differing colors for visual comparison of their localization. Application of MIAQuant in clinical research fields, such as oncology and cardiovascular disease studies, has proven its efficacy, robustness and flexibility with respect to various problems; we highlight that the flexibility of MIAQuant makes it an important tool for basic research, where needs are constantly changing. The MIAQuant software and its user manual are freely available for clinical studies, pathological research, and diagnosis.

  18. Robust, Self-Healing Superhydrophobic Fabrics Prepared by One-Step Coating of PDMS and Octadecylamine

    PubMed Central

    Xue, Chao-Hua; Bai, Xue; Jia, Shun-Tian

    2016-01-01

    A robust, self-healing superhydrophobic poly(ethylene terephthalate) (PET) fabric was fabricated by a convenient solution-dipping method using an easily available material system consisting of polydimethylsiloxane and octadecylamine (ODA). The surface roughness was formed by self-roughening of the ODA coating on the PET fibers, without any lithography steps or added nanomaterials. The fabric coating was durable enough to withstand 120 laundering cycles and 5000 abrasion cycles without apparent change in superhydrophobicity. More interestingly, the fabric can restore its super liquid-repellent property within 72 h at room temperature even after 20000 abrasion cycles. Likewise, after being damaged chemically, the fabric restores its superhydrophobicity automatically within 12 h at room temperature or after a short heat treatment. We envision that this simple but effective coating system may lead to the development of robust protective clothing for various applications. PMID:27264995

  19. A robust dataset-agnostic heart disease classifier from Phonocardiogram.

    PubMed

    Banerjee, Rohan; Dutta Choudhury, Anirban; Deshpande, Parijat; Bhattacharya, Sakyajit; Pal, Arpan; Mandana, K M

    2017-07-01

    Automatic classification of normal and abnormal heart sounds is a popular area of research. However, building a robust algorithm unaffected by signal quality and patient demography is a challenge. In this paper we analyse a wide range of Phonocardiogram (PCG) features in the time and frequency domains, along with morphological and statistical features, to construct a robust and discriminative feature set for dataset-agnostic classification of normal and cardiac patients. The large, open-access database made available in the PhysioNet 2016 challenge was used for feature selection, internal validation and creation of training models. A second dataset of 41 PCG segments, collected with our in-house smartphone-based digital stethoscope at an Indian hospital, was used for performance evaluation. Our proposed methodology yielded sensitivity and specificity scores of 0.76 and 0.75 respectively on the test dataset in classifying cardiovascular diseases. The methodology also outperformed three popular prior-art approaches when applied to the same dataset.

  20. Automatic control design procedures for restructurable aircraft control

    NASA Technical Reports Server (NTRS)

    Looze, D. P.; Krolewski, S.; Weiss, J.; Barrett, N.; Eterno, J.

    1985-01-01

    A simple, reliable automatic redesign procedure for restructurable control is discussed. This procedure is based on Linear Quadratic (LQ) design methodologies. It employs a robust control system design for the unfailed aircraft to minimize the effects of failed surfaces and to extend the time available for restructuring the Flight Control System. The procedure uses the LQ design parameters of the unfailed system as a basis for choosing the design parameters of the failed system. This philosophy allows the engineering trade-offs that were present in the nominal design to be inherited by the restructurable design. In particular, it allows bandwidth limitations and performance trade-offs to be incorporated in the redesigned system. The procedure also has several other desirable features. It effectively redistributes authority among the available control effectors to maximize system performance subject to actuator limitations and constraints. It provides a graceful performance degradation as the amount of control authority lessens. When given the parameters of the unfailed aircraft, the automatic redesign procedure reproduces the nominal control system design.
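
    The redesign philosophy, reusing the nominal LQ weights while the control effectiveness changes, can be illustrated with a few lines of Python using SciPy's Riccati solver. The failure model (zeroing the B-matrix columns of failed effectors) and the function names are illustrative assumptions, not the report's procedure.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    def lqr_gain(A, B, Q, R):
        """Continuous-time LQR gain K = R^-1 B' P from the Riccati equation."""
        P = solve_continuous_are(A, B, Q, R)
        return np.linalg.solve(R, B.T @ P)

    def redesign_after_failure(A, B_nominal, Q, R, failed_effectors):
        """Recompute the LQR gain with failed control effectors zeroed out.

        The Q and R weights chosen for the unfailed aircraft are reused
        unchanged, so the engineering trade-offs of the nominal design are
        inherited, while authority is automatically redistributed among the
        remaining effectors. The zeroed-column failure model is an
        illustrative simplification.
        """
        B_failed = B_nominal.copy()
        B_failed[:, failed_effectors] = 0.0
        return lqr_gain(A, B_failed, Q, R)
    ```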

  1. Robust Segmentation of Planar and Linear Features of Terrestrial Laser Scanner Point Clouds Acquired from Construction Sites.

    PubMed

    Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y

    2018-03-08

    Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites.
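
    The classification step described above, deciding whether a local neighbourhood of points is planar or linear from a robust covariance estimate, can be sketched as follows. The sketch substitutes scikit-learn's Minimum Covariance Determinant for the authors' robust principal components procedure, and the eigenvalue-ratio thresholds are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.covariance import MinCovDet

    def classify_neighbourhood(points, planar_ratio=0.01, linear_ratio=0.01):
        """Classify a local point-cloud neighbourhood as planar, linear or neither.

        A robust covariance is estimated with the Minimum Covariance
        Determinant so that outliers from moving objects, occlusions or dust
        have little influence, and the eigenvalue pattern of that covariance
        decides the class.
        """
        pts = np.asarray(points, dtype=float)           # (N, 3)
        cov = MinCovDet().fit(pts).covariance_
        eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # descending
        total = eigvals.sum() + 1e-12

        if eigvals[2] / total < planar_ratio:       # one near-zero eigenvalue
            if eigvals[1] / total < linear_ratio:   # two near-zero eigenvalues
                return "linear"
            return "planar"
        return "volumetric"
    ```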

  2. Robust Segmentation of Planar and Linear Features of Terrestrial Laser Scanner Point Clouds Acquired from Construction Sites

    PubMed Central

    Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y

    2018-01-01

    Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites. PMID:29518062

  3. Robust Bayesian clustering.

    PubMed

    Archambeau, Cédric; Verleysen, Michel

    2007-01-01

    A new variational Bayesian learning algorithm for Student-t mixture models is introduced. This algorithm leads to (i) robust density estimation, (ii) robust clustering and (iii) robust automatic model selection. Gaussian mixture models are learning machines which are based on a divide-and-conquer approach. They are commonly used for density estimation and clustering tasks, but are sensitive to outliers. The Student-t distribution has heavier tails than the Gaussian distribution and is therefore less sensitive to any departure of the empirical distribution from Gaussianity. As a consequence, the Student-t distribution is suitable for constructing robust mixture models. In this work, we formalize the Bayesian Student-t mixture model as a latent variable model in a different way from Svensén and Bishop [Svensén, M., & Bishop, C. M. (2005). Robust Bayesian mixture modelling. Neurocomputing, 64, 235-252]. The main difference resides in the fact that it is not necessary to assume a factorized approximation of the posterior distribution on the latent indicator variables and the latent scale variables in order to obtain a tractable solution. Not neglecting the correlations between these unobserved random variables leads to a Bayesian model having an increased robustness. Furthermore, it is expected that the lower bound on the log-evidence is tighter. Based on this bound, the model complexity, i.e. the number of components in the mixture, can be inferred with a higher confidence.

  4. Semi automatic indexing of PostScript files using Medical Text Indexer in medical education.

    PubMed

    Mollah, Shamim Ara; Cimino, Christopher

    2007-10-11

    At Albert Einstein College of Medicine a large part of online lecture materials contain PostScript files. As the collection grows it becomes essential to create a digital library to have easy access to relevant sections of the lecture material that is full-text indexed; to create this index it is necessary to extract all the text from the document files that constitute the originals of the lectures. In this study we present a semi automatic indexing method using robust technique for extracting text from PostScript files and National Library of Medicine's Medical Text Indexer (MTI) program for indexing the text. This model can be applied to other medical schools for indexing purposes.

  5. Autoclass: An automatic classification system

    NASA Technical Reports Server (NTRS)

    Stutz, John; Cheeseman, Peter; Hanson, Robin

    1991-01-01

    The task of inferring a set of classes and class descriptions most likely to explain a given data set can be placed on a firm theoretical foundation using Bayesian statistics. Within this framework, and using various mathematical and algorithmic approximations, the AutoClass System searches for the most probable classifications, automatically choosing the number of classes and complexity of class descriptions. A simpler version of AutoClass has been applied to many large real data sets, has discovered new independently-verified phenomena, and has been released as a robust software package. Recent extensions allow attributes to be selectively correlated within particular classes, and allow classes to inherit, or share, model parameters through a class hierarchy. The mathematical foundations of AutoClass are summarized.

  6. Automatic Classification of Extensive Aftershock Sequences Using Empirical Matched Field Processing

    NASA Astrophysics Data System (ADS)

    Gibbons, Steven J.; Harris, David B.; Kværna, Tormod; Dodge, Douglas A.

    2013-04-01

    The aftershock sequences that follow large earthquakes create considerable problems for data centers attempting to produce comprehensive event bulletins in near real-time. The greatly increased number of events which require processing can overwhelm analyst resources and reduce the capacity for analyzing events of monitoring interest. This exacerbates a potentially reduced detection capability at key stations, due to the noise generated by the sequence, and a deterioration in the quality of the fully automatic preliminary event bulletins caused by the difficulty in associating the vast numbers of closely spaced arrivals over the network. Considerable success has been enjoyed by waveform correlation methods for the automatic identification of groups of events belonging to the same geographical source region, facilitating the more time-efficient analysis of event ensembles as opposed to individual events. There are, however, formidable challenges associated with the automation of correlation procedures. The signal generated by a very large earthquake seldom correlates well enough with the signals generated by far smaller aftershocks for a correlation detector to produce statistically significant triggers at the correct times. Correlation between events within clusters of aftershocks is significantly better, although the issues of when and how to initiate new pattern detectors are still being investigated. Empirical Matched Field Processing (EMFP) is a highly promising method for detecting event waveforms suitable as templates for correlation detectors. EMFP is a quasi-frequency-domain technique that calibrates the spatial structure of a wavefront crossing a seismic array in a collection of narrow frequency bands. The amplitude and phase weights that result are applied in a frequency-domain beamforming operation that compensates for scattering and refraction effects not properly modeled by plane-wave beams. It has been demonstrated to outperform waveform correlation as a classifier of ripple-fired mining blasts since the narrowband procedure is insensitive to differences in the source-time functions. For sequences in which the spectral content and time-histories of the signals from the main shock and aftershocks vary greatly, the spatial structure calibrated by EMFP is an invariant that permits reliable detection of events in the specific source region. Examples from the 2005 Kashmir and 2011 Van earthquakes demonstrate how EMFP templates from the main events detect arrivals from the aftershock sequences with high sensitivity and exceptionally low false alarm rates. Classical waveform correlation detectors are demonstrated to fail for these examples. Even arrivals with SNR below unity can produce significant EMFP triggers as the spatial pattern of the incoming wavefront is identified, leading to robust detections at a greater number of stations and potentially more reliable automatic bulletins. False EMFP triggers are readily screened by scanning a space of phase shifts relative to the imposed template. EMFP has the potential to produce a rapid and robust overview of the evolving aftershock sequence such that correlation and subspace detectors can be applied semi-autonomously, with well-chosen parameter specifications, to identify and classify clusters of very closely spaced aftershocks.

  7. Image Hashes as Templates for Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janik, Tadeusz; Jarman, Kenneth D.; Robinson, Sean M.

    2012-07-17

    Imaging systems can provide measurements that confidently assess characteristics of nuclear weapons and dismantled weapon components, and such assessment will be needed in future verification for arms control. Yet imaging is often viewed as too intrusive, raising concern about the ability to protect sensitive information. In particular, the prospect of using image-based templates for verifying the presence or absence of a warhead, or of the declared configuration of fissile material in storage, may be rejected out of hand as being too vulnerable to violation of information barrier (IB) principles. Development of a rigorous approach for generating and comparing reduced-information templates from images, and for assessing the security, sensitivity, and robustness of verification using such templates, is needed to address these concerns. We discuss our efforts to develop such a rigorous approach based on a combination of image-feature extraction and encryption, utilizing hash functions to confirm proffered declarations while providing strong classified-data security and maintaining high confidence for verification. The proposed work is focused on developing secure, robust, tamper-sensitive and automatic techniques that may enable the comparison of non-sensitive hashed image data outside an IB. It is rooted in research on so-called perceptual hash functions for image comparison, at the interface of signal/image processing, pattern recognition, cryptography, and information theory. Such perceptual or robust image hashing, which, strictly speaking, is not truly cryptographic hashing, has extensive application in content authentication, information retrieval, database search, and security assurance. Applying and extending the principles of perceptual hashing to imaging for arms control, we propose techniques that are sensitive to altering, forging and tampering of the imaged object yet robust and tolerant to content-preserving image distortions and noise. Ensuring that the information contained in the hashed image data (available outside the IB) cannot be used to extract sensitive information about the imaged object is of primary concern; thus the techniques are characterized by high unpredictability to guarantee security. We present an assessment of the performance of our techniques with respect to security, sensitivity and robustness on the basis of a methodical and mathematically precise framework.
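
    To make the notion of a perceptual (robust) image hash concrete, the Python sketch below shows a minimal DCT-based hash of the kind cited in the content-authentication literature. It is not the authors' verification scheme, which additionally involves feature extraction and information-barrier protections; the block sizes and the median threshold are illustrative choices.

    ```python
    import numpy as np
    from scipy.fft import dctn
    from skimage.transform import resize

    def perceptual_hash(image, hash_size=8, highfreq_factor=4):
        """DCT-based perceptual hash, robust to content-preserving distortions.

        The image is reduced to its low-frequency DCT coefficients and
        thresholded at their median, so small amounts of noise, compression
        or rescaling barely change the resulting bit string, while altering
        the imaged object changes many bits.
        """
        side = hash_size * highfreq_factor
        small = resize(np.asarray(image, dtype=float), (side, side),
                       anti_aliasing=True)
        dct = dctn(small, norm="ortho")
        low = dct[:hash_size, :hash_size]            # keep low-frequency block
        return (low > np.median(low)).flatten()      # 64-bit boolean hash

    def hamming_distance(hash_a, hash_b):
        """Number of differing bits; small distances indicate matching content."""
        return int(np.count_nonzero(hash_a != hash_b))
    ```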

  8. Fully automatic left ventricular myocardial strain estimation in 2D short-axis tagged magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Morais, Pedro; Queirós, Sandro; Heyde, Brecht; Engvall, Jan; D'hooge, Jan; Vilaça, João L.

    2017-09-01

    Cardiovascular diseases are among the leading causes of death and frequently result in local myocardial dysfunction. Among the numerous imaging modalities available to detect these dysfunctional regions, cardiac deformation imaging through tagged magnetic resonance imaging (t-MRI) has been an attractive approach. Nevertheless, fully automatic analysis of these data sets is still challenging. In this work, we present a fully automatic framework to estimate left ventricular myocardial deformation from t-MRI. This strategy performs automatic myocardial segmentation based on B-spline explicit active surfaces, which are initialized using an annular model. A non-rigid image-registration technique is then used to assess myocardial deformation. Three experiments were set up to validate the proposed framework using a clinical database of 75 patients. First, automatic segmentation accuracy was evaluated by comparing against manual delineations at one specific cardiac phase. The proposed solution showed an average perpendicular distance error of 2.35  ±  1.21 mm and 2.27  ±  1.02 mm for the endo- and epicardium, respectively. Second, starting from either manual or automatic segmentation, myocardial tracking was performed and the resulting strain curves were compared. It is shown that the automatic segmentation adds negligible differences during the strain-estimation stage, corroborating its accuracy. Finally, segmental strain was compared with scar tissue extent determined by delay-enhanced MRI. The results proved that both strain components were able to distinguish between normal and infarct regions. Overall, the proposed framework was shown to be accurate, robust, and attractive for clinical practice, as it overcomes several limitations of a manual analysis.

  9. Numerical Nonlinear Robust Control with Applications to Humanoid Robots

    DTIC Science & Technology

    2015-07-01

    automatically. While optimization and optimal control theory have been widely applied in humanoid robot control, it is not without drawbacks. A blind... drawback of Galerkin-based approaches is the need to successively produce discrete forms, which is difficult to implement in practice. Related... universal function approximation ability, these approaches are not without drawbacks. In practice, while a single hidden layer neural network can

  10. USSR Report: Machine Tools and Metalworking Equipment.

    DTIC Science & Technology

    1986-01-23

    between satellite stop and the camshaft of the programmer unit. The line has 23 positions including 12 automatic ones. Specification of line Number... technological processes, automated research, etc.) are as follows: a monochannel based on a shared trunk line, ring, star and tree (polychannel... line or ring networks based on decentralized control of data exchange between subscribers are very robust. A tree-form network has a star structure

  11. Robust parameter design for automatically controlled systems and nanostructure synthesis

    NASA Astrophysics Data System (ADS)

    Dasgupta, Tirthankar

    2007-12-01

    This research focuses on developing comprehensive frameworks for robust parameter design methodology for dynamic systems with automatic control and for the synthesis of nanostructures. In many automatically controlled dynamic processes, the optimal feedback control law depends on the parameter design solution and vice versa, and therefore an integrated approach is necessary. A parameter design methodology in the presence of feedback control is developed for processes of long duration under the assumption that experimental noise factors are uncorrelated over time. Systems that follow a pure-gain dynamic model are considered, and the best proportional-integral and minimum mean squared error control strategies are developed using robust parameter design. The proposed method is illustrated using a simulated example and a case study in a urea packing plant. This idea is also extended to cases with on-line noise factors. The possibility of integrating feedforward control with a minimum mean squared error feedback control scheme is explored. To meet the needs of large scale synthesis of nanostructures, it is critical to systematically find experimental conditions under which the desired nanostructures are synthesized reproducibly, at large quantity and with controlled morphology. The first part of the research in this area focuses on modeling and optimization of existing experimental data. Through a rigorous statistical analysis of experimental data, models linking the probabilities of obtaining specific morphologies to the process variables are developed. A new iterative algorithm for fitting a Multinomial GLM is proposed and used. The optimum process conditions, which maximize the above probabilities and make the synthesis process less sensitive to variations of process variables around set values, are derived from the fitted models using Monte-Carlo simulations. The second part of the research deals with the development of an experimental design methodology tailor-made to address the unique phenomena associated with nanostructure synthesis. A sequential space-filling design called Sequential Minimum Energy Design (SMED) is proposed for exploring the best process conditions for the synthesis of nanowires. SMED is a novel approach to generating sequential designs that are model independent, can quickly "carve out" regions with no observable nanostructure morphology, and allow for the exploration of complex response surfaces.

  12. Individual differences in automatic emotion regulation affect the asymmetry of the LPP component.

    PubMed

    Zhang, Jing; Zhou, Renlai

    2014-01-01

    The main goal of this study was to investigate how automatic emotion regulation alters the hemispheric asymmetry of ERPs elicited by emotion processing. We examined the effect of individual differences in automatic emotion regulation on the late positive potential (LPP) when participants were viewing blocks of positive high-arousal, positive low-arousal, negative high-arousal and negative low-arousal pictures from the International Affective Picture System (IAPS). Two participant groups were categorized by the Emotion Regulation-Implicit Association Test, which has been used in previous research to identify groups of participants with automatic emotion control and with automatic emotion expression. The main finding was that the automatic emotion expression group showed a right dominance of the LPP component at posterior electrodes, especially in high-arousal conditions, whereas no right dominance of the LPP component was observed for the automatic emotion control group. We also found that the group with automatic emotion control showed no differences in right posterior LPP amplitude between high- and low-arousal emotion conditions, while the participants with automatic emotion expression showed larger LPP amplitudes over the right posterior region in high-arousal conditions compared to low-arousal conditions. This result suggests that automatic emotion regulation (AER) modulates the hemispheric asymmetry of the LPP at posterior electrodes and supports the right hemisphere hypothesis.

  13. Detection of the barium daughter in 136Xe -->136Ba + 2e- by in situ single-molecule fluorescence imaging

    NASA Astrophysics Data System (ADS)

    Nygren, David

    2015-10-01

    To proceed toward effective ``discovery class'' ton-scale detectors in the search for neutrino-less double beta decay, a robust technique for rejection of all radioactivity-induced backgrounds is urgently needed. An efficient technique for detection of the barium daughter in the decay 136Xe -->136Ba + 2e- would provide a long-sought pathway toward this goal. Single-molecule fluorescent imaging appears to offer a new way to detect the barium daughter atom, which emerges naturally in an ionized state in pure xenon. A doubly charged barium ion can initiate a chelation process with a non-fluorescent precursor molecule, leading to a highly fluorescent complex. Repeated photo-excitation of the complex can reveal both presence and location of a single ionized atom with high precision and selectivity. Detection within the active volume of a xenon gas Time Projection Chamber operating at high pressure would be automatic, and with a capability for redundant confirmation.

  14. Removing interference-based effects from the infrared transflectance spectra of thin films on metallic substrates: a fast and wave optics conform solution.

    PubMed

    Mayerhöfer, Thomas G; Pahlow, Susanne; Hübner, Uwe; Popp, Jürgen

    2018-06-25

    A hybrid formalism combining elements from Kramers-Kronig based analyses and dispersion analysis was developed, which allows removing interference-based effects in the infrared spectra of layers on highly reflecting substrates. In order to enable a highly convenient application, the correction procedure is fully automatized and usually requires less than a minute with non-optimized software on a typical office PC. The formalism was tested with both synthetic and experimental spectra of poly(methyl methacrylate) on gold. The results confirmed the usefulness of the formalism: apparent peak ratios as well as the interference fringes in the original spectra were successfully corrected. Accordingly, the introduced formalism makes it possible to use inexpensive and robust highly reflecting substrates for routine infrared spectroscopic investigations of layers or films the thickness of which is limited by the imperative that reflectance absorbance must be smaller than about 1. For thicker films the formalism is still useful, but requires estimates for the optical constants.

  15. Deformable templates guided discriminative models for robust 3D brain MRI segmentation.

    PubMed

    Liu, Cheng-Yi; Iglesias, Juan Eugenio; Tu, Zhuowen

    2013-10-01

    Automatically segmenting anatomical structures from 3D brain MRI images is an important task in neuroimaging. One major challenge is to design and learn effective image models accounting for the large variability in anatomy and data acquisition protocols. A deformable template is a type of generative model that attempts to explicitly match an input image with a template (atlas), and thus, they are robust against global intensity changes. On the other hand, discriminative models combine local image features to capture complex image patterns. In this paper, we propose a robust brain image segmentation algorithm that fuses together deformable templates and informative features. It takes advantage of the adaptation capability of the generative model and the classification power of the discriminative models. The proposed algorithm achieves both robustness and efficiency, and can be used to segment brain MRI images with large anatomical variations. We perform an extensive experimental study on four datasets of T1-weighted brain MRI data from different sources (1,082 MRI scans in total) and observe consistent improvement over the state-of-the-art systems.

  16. A blood pressure monitor with robust noise reduction system under linear cuff inflation and deflation.

    PubMed

    Usuda, Takashi; Kobayashi, Naoki; Takeda, Sunao; Kotake, Yoshifumi

    2010-01-01

    We have developed a non-invasive blood pressure monitor that can measure blood pressure quickly and robustly. The monitor combines two measurement modes: linear inflation and linear deflation. The inflation mode provides a faster measurement through a rapid inflation rate; the deflation mode provides robust noise reduction. When there is neither noise nor arrhythmia, the inflation mode provides a precise, quick and comfortable measurement. If the inflation mode fails to calculate an appropriate blood pressure due to body movement or arrhythmia, the monitor automatically switches to the deflation mode and measures blood pressure using digital signal processing techniques such as wavelet analysis, filter banks, and filtering combined with FFT and inverse FFT. The inflation mode succeeded in 2440 of 3099 measurements (79%) in an operating room and a rehabilitation room. The newly designed blood pressure monitor provides the fastest measurement for patients with normal circulation and robust measurement for patients with body movement or severe arrhythmia. This fast measurement method also provides comfort for patients.

  17. Robust mislabel logistic regression without modeling mislabel probabilities.

    PubMed

    Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun

    2018-03-01

    Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses, and fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) it does not need to model the mislabel probabilities; (2) the minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term, that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.
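
    The weighting idea can be illustrated with a simple iteratively reweighted fit. The Python sketch below is an assumption-laden approximation of the γ-divergence approach and does not reproduce the paper's estimating equation or algorithms: observations are weighted by the fitted probability of their observed label raised to the power γ, which automatically down-weights suspected mislabels, and the fit and weights are iterated.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def gamma_logistic(X, y, gamma=0.5, n_iter=20):
        """Robust logistic regression via gamma-divergence style reweighting."""
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        weights = np.ones(len(y))
        model = LogisticRegression()
        for _ in range(n_iter):
            model.fit(X, y, sample_weight=weights)
            p = model.predict_proba(X)[:, 1]
            lik = np.where(y == 1, p, 1.0 - p)   # fitted prob. of observed label
            weights = lik ** gamma               # gamma-power down-weighting
        return model, weights
    ```

    In this sketch the returned weights can themselves be inspected: observations with very small weights are the ones the fitted model regards as likely mislabels.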

  18. Estimation of mean response via effective balancing score

    PubMed Central

    Hu, Zonghui; Follmann, Dean A.; Wang, Naisyin

    2015-01-01

    Summary We introduce effective balancing scores for estimation of the mean response under a missing at random mechanism. Unlike conventional balancing scores, the effective balancing scores are constructed via dimension reduction free of model specification. Three types of effective balancing scores are introduced: those that carry the covariate information about the missingness, the response, or both. They lead to consistent estimation with little or no loss in efficiency. Compared to existing estimators, the effective balancing score based estimator relieves the burden of model specification and is the most robust. It is a near-automatic procedure which is most appealing when high dimensional covariates are involved. We investigate both the asymptotic and the numerical properties, and demonstrate the proposed method in a study on Human Immunodeficiency Virus disease. PMID:25797955

  19. Support Vector Machine-Based Endmember Extraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filippi, Anthony M; Archibald, Richard K

    Introduced in this paper is the utilization of Support Vector Machines (SVMs) to automatically perform endmember extraction from hyperspectral data. The strengths of SVM are exploited to provide a fast and accurate calculated representation of high-dimensional data sets that may consist of multiple distributions. Once this representation is computed, the number of distributions can be determined without prior knowledge. For each distribution, an optimal transform can be determined that preserves informational content while reducing the data dimensionality, and hence, the computational cost. Finally, endmember extraction for the whole data set is accomplished. Results indicate that this Support Vector Machine-Based Endmember Extraction (SVM-BEE) algorithm has the capability of autonomously determining endmembers from multiple clusters with computational speed and accuracy, while maintaining a robust tolerance to noise.

  20. Autonomous navigation of structured city roads

    NASA Astrophysics Data System (ADS)

    Aubert, Didier; Kluge, Karl C.; Thorpe, Chuck E.

    1991-03-01

    Autonomous road following is a domain which spans a range of complexity from poorly defined, unmarked dirt roads to well-defined, well-marked, highly structured highways. The YARF system (for Yet Another Road Follower) is designed to operate in the middle of this range of complexity, driving on urban streets. Our research program has focused on the use of feature- and situation-specific segmentation techniques driven by an explicit model of the appearance and geometry of the road features in the environment. We report results in robust detection of white and yellow painted stripes, fitting a road model to detected feature locations to determine vehicle position and local road geometry, and automatic location of road features in an initial image. We also describe our planned extensions to include intersection navigation.

  1. An extended Lagrangian method

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing

    1993-01-01

    A unique formulation of describing fluid motion is presented. The method, referred to as 'extended Lagrangian method', is interesting from both theoretical and numerical points of view. The formulation offers accuracy in numerical solution by avoiding numerical diffusion resulting from mixing of fluxes in the Eulerian description. Meanwhile, it also avoids the inaccuracy incurred due to geometry and variable interpolations used by the previous Lagrangian methods. The present method is general and capable of treating subsonic flows as well as supersonic flows. The method proposed in this paper is robust and stable. It automatically adapts to flow features without resorting to clustering, thereby maintaining rather uniform grid spacing throughout and large time step. Moreover, the method is shown to resolve multidimensional discontinuities with a high level of accuracy, similar to that found in 1D problems.

  2. ESTADIUS: A High Motion "One Arcsec" Daytime Attitude Estimation System for Stratospheric Applications

    NASA Astrophysics Data System (ADS)

    Montel, J.; Andre, Y.; Mirc, F.; Etcheto, P.; Evrard, J.; Bray, N.; Saccoccio, M.; Tomasini, L.; Perot, E.

    2015-09-01

    ESTADIUS is an autonomous, accurate, daytime attitude estimation system for stratospheric balloons that require a high level of attitude measurement and stability. The system has been developed by CNES. ESTADIUS is based on the fusion of star sensor and gyrometer data within an extended Kalman filter. The star sensor is composed of a 16 Mpixel visible CCD camera and a large-aperture camera lens (focal length of 135 mm, aperture f/1.8, 10ºx15º field of view or FOV), which provides very accurate star measurements thanks to the very small angular size of each pixel. This also allows stars to be detected against a bright sky background. The gyrometer is a 0.01º/h performance class Fiber Optic Gyroscope (FOG). The system is designed to work down to an altitude of ~25 km, even under high angular rate conditions. Key elements of ESTADIUS are: daytime use (as well as night time), autonomy (automatic recognition of constellations), high angular rate robustness (a few deg/s, thanks to the high performance of the attitude propagation), stray-light robustness (thanks to a high-performance baffle), and high accuracy (<1", 1σ). Four stratospheric qualification flights were performed very successfully in 2010/2011 and 2013/2014 in Kiruna (Sweden) and Timmins (Canada). ESTADIUS will allow long stratospheric flights with a single attitude estimation system, avoiding the restriction of night/day conditions at launch. The first operational flight of ESTADIUS will be in 2015 for the PILOT scientific mission (led by IRAP and CNES in France). Further balloon missions such as CIDRE will use the system. ESTADIUS is probably the first autonomous, large-FOV, daytime stellar attitude measurement system. This paper details the technical features and in-flight results.

  3. Robust, automatic GPS station velocities and velocity time series

    NASA Astrophysics Data System (ADS)

    Blewitt, G.; Kreemer, C.; Hammond, W. C.

    2014-12-01

    Automation in GPS coordinate time series analysis makes results more objective and reproducible, but not necessarily as robust as the human eye to detect problems. Moreover, it is not a realistic option to manually scan our current load of >20,000 time series per day. This motivates us to find an automatic way to estimate station velocities that is robust to outliers, discontinuities, seasonality, and noise characteristics (e.g., heteroscedasticity). Here we present a non-parametric method based on the Theil-Sen estimator, defined as the median of velocities vij = (xj-xi)/(tj-ti) computed between all pairs (i, j). Theil-Sen estimators produce statistically identical solutions to ordinary least squares for normally distributed data, but they can tolerate up to 29% of data being problematic. To mitigate seasonality, our proposed estimator only uses pairs approximately separated by an integer number of years, (N-δt) < (tj-ti) < (N+δt), where δt is chosen to be small enough to capture seasonality, yet large enough to reduce random error. We fix N=1 to maximally protect against discontinuities. In addition to estimating an overall velocity, we also use these pairs to estimate velocity time series. To test our methods, we process real data sets that have already been used with velocities published in the NA12 reference frame. Accuracy can be tested by the scatter of horizontal velocities in the North American plate interior, which is known to be stable to ~0.3 mm/yr. This presents new opportunities for time series interpretation. For example, the pattern of velocity variations at the interannual scale can help separate tectonic from hydrological processes. Without any step detection, velocity estimates prove to be robust for stations affected by the Mw7.2 2010 El Mayor-Cucapah earthquake, and velocity time series show a clear change after the earthquake, without any of the usual parametric constraints, such as relaxation of postseismic velocities to their preseismic values.
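
    To make the pair-selection idea above concrete, the following sketch (a minimal illustration, not the authors' implementation; the tolerance value and synthetic data are assumptions) computes a velocity as the median of slopes over coordinate pairs separated by roughly one year, which suppresses seasonal signals while remaining robust to outliers.

        import numpy as np

        def annual_pair_velocity(t, x, N=1.0, dt_tol=0.1):
            """Median slope over pairs separated by ~N years (illustrative sketch).

            t: epochs in decimal years, x: coordinate in mm.
            Only pairs with N - dt_tol < tj - ti < N + dt_tol contribute.
            """
            t, x = np.asarray(t, float), np.asarray(x, float)
            slopes = []
            for i in range(len(t)):
                dt = t - t[i]
                sel = (dt > N - dt_tol) & (dt < N + dt_tol)
                slopes.extend((x[sel] - x[i]) / dt[sel])
            return np.median(slopes) if slopes else np.nan  # mm/yr

        # Synthetic example: 3 mm/yr trend, annual cycle, one large outlier.
        rng = np.random.default_rng(0)
        t = np.arange(0.0, 5.0, 1.0 / 52.0)
        x = 3.0 * t + 2.0 * np.sin(2.0 * np.pi * t) + rng.normal(0.0, 1.0, t.size)
        x[100] += 50.0
        print(annual_pair_velocity(t, x))   # close to 3 mm/yr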

  4. An Alternative Flight Software Trigger Paradigm: Applying Multivariate Logistic Regression to Sense Trigger Conditions Using Inaccurate or Scarce Information

    NASA Technical Reports Server (NTRS)

    Smith, Kelly M.; Gay, Robert S.; Stachowiak, Susan J.

    2013-01-01

    In late 2014, NASA will fly the Orion capsule on a Delta IV-Heavy rocket for the Exploration Flight Test-1 (EFT-1) mission. For EFT-1, the Orion capsule will be flying with a new GPS receiver and new navigation software. Given the experimental nature of the flight, the flight software must be robust to the loss of GPS measurements. Once the high-speed entry is complete, the drogue parachutes must be deployed within the proper conditions to stabilize the vehicle prior to deploying the main parachutes. When GPS is available in nominal operations, the vehicle will deploy the drogue parachutes based on an altitude trigger. However, when GPS is unavailable, the navigated altitude errors become excessively large, driving the need for a backup barometric altimeter to improve altitude knowledge. In order to increase overall robustness, the vehicle also has an alternate method of triggering the parachute deployment sequence based on planet-relative velocity if both the GPS and the barometric altimeter fail. However, this backup trigger results in large altitude errors relative to the targeted altitude. Motivated by this challenge, this paper demonstrates how logistic regression may be employed to semi-automatically generate robust triggers based on statistical analysis. Logistic regression is used as a ground processor pre-flight to develop a statistical classifier. The classifier would then be implemented in flight software and executed in real-time. This technique offers improved performance even in the face of highly inaccurate measurements. Although the logistic regression-based trigger approach will not be implemented within EFT-1 flight software, the methodology can be carried forward for future missions and vehicles.
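
    As a rough illustration of the pre-flight training step described above (not the EFT-1 flight software; the feature set, deploy window and synthetic Monte Carlo data are assumptions), a logistic-regression classifier can be fit offline to noisy navigated states and later evaluated in real time as a deploy/no-deploy trigger:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        n = 5000
        true_alt = rng.uniform(4000.0, 12000.0, n)        # metres (synthetic truth)
        vel = rng.uniform(100.0, 250.0, n)                # m/s planet-relative speed
        nav_alt = true_alt + rng.normal(0.0, 800.0, n)    # large navigation error
        label = (true_alt < 7600.0).astype(int)           # 1 = inside the assumed deploy window

        X = np.column_stack([nav_alt, vel])
        clf = LogisticRegression(max_iter=1000).fit(X, label)

        # In "flight", only the pre-trained coefficients would be evaluated in real time:
        p_deploy = clf.predict_proba([[8000.0, 180.0]])[0, 1]
        print(f"deploy probability: {p_deploy:.2f}")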

  5. An Alternative Flight Software Paradigm: Applying Multivariate Logistic Regression to Sense Trigger Conditions using Inaccurate or Scarce Information

    NASA Technical Reports Server (NTRS)

    Smith, Kelly; Gay, Robert; Stachowiak, Susan

    2013-01-01

    In late 2014, NASA will fly the Orion capsule on a Delta IV-Heavy rocket for the Exploration Flight Test-1 (EFT-1) mission. For EFT-1, the Orion capsule will be flying with a new GPS receiver and new navigation software. Given the experimental nature of the flight, the flight software must be robust to the loss of GPS measurements. Once the high-speed entry is complete, the drogue parachutes must be deployed within the proper conditions to stabilize the vehicle prior to deploying the main parachutes. When GPS is available in nominal operations, the vehicle will deploy the drogue parachutes based on an altitude trigger. However, when GPS is unavailable, the navigated altitude errors become excessively large, driving the need for a backup barometric altimeter to improve altitude knowledge. In order to increase overall robustness, the vehicle also has an alternate method of triggering the parachute deployment sequence based on planet-relative velocity if both the GPS and the barometric altimeter fail. However, this backup trigger results in large altitude errors relative to the targeted altitude. Motivated by this challenge, this paper demonstrates how logistic regression may be employed to semi-automatically generate robust triggers based on statistical analysis. Logistic regression is used as a ground processor pre-flight to develop a statistical classifier. The classifier would then be implemented in flight software and executed in real-time. This technique offers improved performance even in the face of highly inaccurate measurements. Although the logistic regression-based trigger approach will not be implemented within EFT-1 flight software, the methodology can be carried forward for future missions and vehicles.

  6. An Alternative Flight Software Trigger Paradigm: Applying Multivariate Logistic Regression to Sense Trigger Conditions using Inaccurate or Scarce Information

    NASA Technical Reports Server (NTRS)

    Smith, Kelly M.; Gay, Robert S.; Stachowiak, Susan J.

    2013-01-01

    In late 2014, NASA will fly the Orion capsule on a Delta IV-Heavy rocket for the Exploration Flight Test-1 (EFT-1) mission. For EFT-1, the Orion capsule will be flying with a new GPS receiver and new navigation software. Given the experimental nature of the flight, the flight software must be robust to the loss of GPS measurements. Once the high-speed entry is complete, the drogue parachutes must be deployed within the proper conditions to stabilize the vehicle prior to deploying the main parachutes. When GPS is available in nominal operations, the vehicle will deploy the drogue parachutes based on an altitude trigger. However, when GPS is unavailable, the navigated altitude errors become excessively large, driving the need for a backup barometric altimeter. In order to increase overall robustness, the vehicle also has an alternate method of triggering the drogue parachute deployment based on planet-relative velocity if both the GPS and the barometric altimeter fail. However, this velocity-based trigger results in large altitude errors relative to the targeted altitude. Motivated by this challenge, this paper demonstrates how logistic regression may be employed to automatically generate robust triggers based on statistical analysis. Logistic regression is used as a ground processor pre-flight to develop a classifier. The classifier would then be implemented in flight software and executed in real-time. This technique offers excellent performance even in the face of highly inaccurate measurements. Although the logistic regression-based trigger approach will not be implemented within EFT-1 flight software, the methodology can be carried forward for future missions and vehicles.

  7. Improved Feature Matching for Mobile Devices with IMU.

    PubMed

    Masiero, Andrea; Vettore, Antonio

    2016-08-05

    Thanks to the recent diffusion of low-cost, high-resolution digital cameras and to the development of mostly automated procedures for image-based 3D reconstruction, the popularity of photogrammetry for environment surveys has been constantly increasing in recent years. Automatic feature matching is an important step for successfully completing the photogrammetric 3D reconstruction: this step is the fundamental basis for the subsequent estimation of the geometry of the scene. This paper reconsiders the feature matching problem when dealing with smart mobile devices (e.g., when using the standard camera embedded in a smartphone as the imaging sensor). More specifically, this paper aims at exploiting the information on camera movements provided by the inertial navigation system (INS) in order to make the feature matching step more robust and, possibly, computationally more efficient. First, a revised version of the affine scale-invariant feature transform (ASIFT) is considered: this version reduces the computational complexity of the original ASIFT, while still ensuring an increase of correct feature matches with respect to SIFT. Furthermore, a new two-step procedure for the estimation of the essential matrix E (and the camera pose) is proposed in order to increase its estimation robustness and computational efficiency.
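
    A hedged sketch of the general idea (OpenCV-based, not the paper's exact two-step procedure): estimate the essential matrix from matched keypoints, recover the relative rotation, and use the rotation predicted by the INS/IMU as a consistency check. The inputs pts1, pts2 (Nx2 arrays of matched pixel coordinates), the intrinsic matrix K, and the IMU-predicted rotation R_imu are assumed to be available.

        import numpy as np
        import cv2

        def relative_pose(pts1, pts2, K, R_imu, max_angle_deg=10.0):
            # Robustly estimate the essential matrix from the matches.
            E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                              method=cv2.RANSAC,
                                              prob=0.999, threshold=1.0)
            _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

            # Angle between the vision-based and IMU-predicted rotations.
            dR = R @ R_imu.T
            cos_angle = np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0)
            angle = np.degrees(np.arccos(cos_angle))
            if angle > max_angle_deg:
                raise RuntimeError("vision pose inconsistent with IMU prediction")
            return R, t, inliers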

  8. Spatio-temporal diffusion of dynamic PET images

    NASA Astrophysics Data System (ADS)

    Tauber, C.; Stute, S.; Chau, M.; Spiteri, P.; Chalon, S.; Guilloteau, D.; Buvat, I.

    2011-10-01

    Positron emission tomography (PET) images are corrupted by noise. This is especially true in dynamic PET imaging where short frames are required to capture the peak of activity concentration after the radiotracer injection. High noise results in a possible bias in quantification, as the compartmental models used to estimate the kinetic parameters are sensitive to noise. This paper describes a new post-reconstruction filter to increase the signal-to-noise ratio in dynamic PET imaging. It consists in a spatio-temporal robust diffusion of the 4D image based on the time activity curve (TAC) in each voxel. It reduces the noise in homogeneous areas while preserving the distinct kinetics in regions of interest corresponding to different underlying physiological processes. Neither anatomical priors nor the kinetic model are required. We propose an automatic selection of the scale parameter involved in the diffusion process based on a robust statistical analysis of the distances between TACs. The method is evaluated using Monte Carlo simulations of brain activity distributions. We demonstrate the usefulness of the method and its superior performance over two other post-reconstruction spatial and temporal filters. Our simulations suggest that the proposed method can be used to significantly increase the signal-to-noise ratio in dynamic PET imaging.

  9. The ALICE data quality monitoring system

    NASA Astrophysics Data System (ADS)

    von Haller, B.; Telesca, A.; Chapeland, S.; Carena, F.; Carena, W.; Chibante Barroso, V.; Costa, F.; Denes, E.; Divià, R.; Fuchs, U.; Simonetti, G.; Soós, C.; Vande Vyvre, P.; ALICE Collaboration

    2011-12-01

    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). The online Data Quality Monitoring (DQM) is a key element of the Data Acquisition's software chain. It provides shifters with precise and complete information to quickly identify and overcome problems, and as a consequence to ensure acquisition of high-quality data. DQM typically involves the online gathering of monitored data, their analysis by user-defined algorithms, and their visualization. This paper describes the final design of ALICE's DQM framework, called AMORE (Automatic MOnitoRing Environment), as well as its latest and upcoming features, such as integration with the offline analysis and reconstruction framework, better use of multi-core processors through a parallelization effort, and its interface with the eLogBook. The concurrent collection and analysis of data in an online environment requires the framework to be highly efficient, robust and scalable. We will describe what has been implemented to achieve these goals and the procedures we follow to ensure appropriate robustness and performance. We finally review the wide range of uses people make of this framework, from the basic monitoring of a single sub-detector to the most complex ones within the High Level Trigger farm or using the Prompt Reconstruction, and we describe the various ways of accessing the monitoring results. We conclude with our experience, before and after the LHC startup, of monitoring the data quality in a challenging environment.

  10. Linking quality indicators to clinical trials: an automated approach

    PubMed Central

    Coiera, Enrico; Choong, Miew Keen; Tsafnat, Guy; Hibbert, Peter; Runciman, William B.

    2017-01-01

    Abstract Objective Quality improvement of health care requires robust measurable indicators to track performance. However identifying which indicators are supported by strong clinical evidence, typically from clinical trials, is often laborious. This study tests a novel method for automatically linking indicators to clinical trial registrations. Design A set of 522 quality of care indicators for 22 common conditions drawn from the CareTrack study were automatically mapped to outcome measures reported in 13 971 trials from ClinicalTrials.gov. Intervention Text mining methods extracted phrases mentioning indicators and outcome phrases, and these were compared using the Levenshtein edit distance ratio to measure similarity. Main Outcome Measure Number of care indicators that mapped to outcome measures in clinical trials. Results While only 13% of the 522 CareTrack indicators were thought to have Level I or II evidence behind them, 353 (68%) could be directly linked to randomized controlled trials. Within these 522, 50 of 70 (71%) Level I and II evidence-based indicators, and 268 of 370 (72%) Level V (consensus-based) indicators could be linked to evidence. Of the indicators known to have evidence behind them, only 5.7% (4 of 70) were mentioned in the trial reports but were missed by our method. Conclusions We automatically linked indicators to clinical trial registrations with high precision. Whilst the majority of quality indicators studied could be directly linked to research evidence, a small portion could not and these require closer scrutiny. It is feasible to support the process of indicator development using automated methods to identify research evidence. PMID:28651340
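
    As a toy illustration of the matching step (the study used the Levenshtein edit distance ratio; Python's standard difflib ratio is used here as a readily available stand-in, and the phrases and the 0.8 threshold are invented for the example):

        from difflib import SequenceMatcher

        def similarity(a: str, b: str) -> float:
            # String-similarity ratio in [0, 1]; a stand-in for the Levenshtein ratio.
            return SequenceMatcher(None, a.lower(), b.lower()).ratio()

        indicator = "HbA1c measured at least every 6 months"
        outcomes = ["Change in HbA1c from baseline",
                    "glycated haemoglobin (HbA1c) measured every six months",
                    "All-cause mortality"]

        scores = sorted(((o, similarity(indicator, o)) for o in outcomes),
                        key=lambda m: -m[1])
        linked = [o for o, s in scores if s >= 0.8]   # candidate indicator-trial links
        print(scores)
        print(linked)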

  11. Gap-free segmentation of vascular networks with automatic image processing pipeline.

    PubMed

    Hsu, Chih-Yang; Ghaffari, Mahsa; Alaraj, Ali; Flannery, Michael; Zhou, Xiaohong Joe; Linninger, Andreas

    2017-03-01

    Current image processing techniques capture large vessels reliably but often fail to preserve connectivity in bifurcations and small vessels. Imaging artifacts and noise can create gaps and discontinuity of intensity that hinders segmentation of vascular trees. However, topological analysis of vascular trees require proper connectivity without gaps, loops or dangling segments. Proper tree connectivity is also important for high quality rendering of surface meshes for scientific visualization or 3D printing. We present a fully automated vessel enhancement pipeline with automated parameter settings for vessel enhancement of tree-like structures from customary imaging sources, including 3D rotational angiography, magnetic resonance angiography, magnetic resonance venography, and computed tomography angiography. The output of the filter pipeline is a vessel-enhanced image which is ideal for generating anatomical consistent network representations of the cerebral angioarchitecture for further topological or statistical analysis. The filter pipeline combined with computational modeling can potentially improve computer-aided diagnosis of cerebrovascular diseases by delivering biometrics and anatomy of the vasculature. It may serve as the first step in fully automatic epidemiological analysis of large clinical datasets. The automatic analysis would enable rigorous statistical comparison of biometrics in subject-specific vascular trees. The robust and accurate image segmentation using a validated filter pipeline would also eliminate operator dependency that has been observed in manual segmentation. Moreover, manual segmentation is time prohibitive given that vascular trees have more than thousands of segments and bifurcations so that interactive segmentation consumes excessive human resources. Subject-specific trees are a first step toward patient-specific hemodynamic simulations for assessing treatment outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Automatic 3D segmentation of spinal cord MRI using propagated deformable models

    NASA Astrophysics Data System (ADS)

    De Leener, B.; Cohen-Adad, J.; Kadoury, S.

    2014-03-01

    Spinal cord diseases or injuries can cause dysfunction of the sensory and locomotor systems. Segmentation of the spinal cord provides measures of atrophy and allows group analysis of multi-parametric MRI via inter-subject registration to a template. All these measures were shown to improve diagnostic and surgical intervention. We developed a framework to automatically segment the spinal cord on T2-weighted MR images, based on the propagation of a deformable model. The algorithm is divided into three parts: first, an initialization step detects the spinal cord position and orientation by using the elliptical Hough transform on multiple adjacent axial slices to produce an initial tubular mesh. Second, a low-resolution deformable model is iteratively propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a contrast adaptation at each iteration. Third, a refinement process and a global deformation are applied on the low-resolution mesh to provide an accurate segmentation of the spinal cord. Our method was evaluated against a semi-automatic edge-based snake method implemented in ITK-SNAP (with heavy manual adjustment) by computing the 3D Dice coefficient, mean and maximum distance errors. Accuracy and robustness were assessed from 8 healthy subjects. Each subject had two volumes: one at the cervical and one at the thoracolumbar region. Results show a precision of 0.30 +/- 0.05 mm (mean absolute distance error) in the cervical region and 0.27 +/- 0.06 mm in the thoracolumbar region. The 3D Dice coefficient was of 0.93 for both regions.

  13. Data mining spacecraft telemetry: towards generic solutions to automatic health monitoring and status characterisation

    NASA Astrophysics Data System (ADS)

    Royer, P.; De Ridder, J.; Vandenbussche, B.; Regibo, S.; Huygen, R.; De Meester, W.; Evans, D. J.; Martinez, J.; Korte-Stapff, M.

    2016-07-01

    We present the first results of a study aimed at finding new and efficient ways to automatically process spacecraft telemetry for health monitoring. The goal is to reduce the load on the flight control team while extending the "checkability" to the entire telemetry database, and to provide efficient, robust and more accurate detection of anomalies in near real time. We present a set of effective methods to (a) detect outliers in the telemetry or in its statistical properties, (b) uncover and visualise special properties of the telemetry and (c) detect new behaviour. Our results are structured around two main families of solutions. For parameters visiting a restricted set of signal values, i.e. all status parameters and about one third of all the others, we focus on a transition analysis, exploiting properties of Poincaré plots. For parameters with an arbitrarily high number of possible signal values, we describe the statistical properties of the signal via its kernel density estimate. We demonstrate that this allows for a generic and dynamic approach to the soft-limit definition. Thanks to a much more accurate description of the signal and of its time evolution, we are more sensitive and more responsive to outliers than the traditional checks against hard limits. Our methods were validated on two years of Venus Express telemetry. They are generic for assisting in health monitoring of any complex system with large amounts of diagnostic sensor data. Not only spacecraft systems but also present-day astronomical observatories can benefit from them.
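
    A minimal sketch of the kernel-density-estimate idea (illustrative only, not the study's implementation; the channel, quantile and sample values are assumptions): describe a parameter by the KDE of its history and flag new samples that fall in low-density regions, rather than samples that exceed fixed hard limits.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(2)
        history = rng.normal(20.0, 0.5, 5000)      # e.g. a temperature channel, degC
        kde = gaussian_kde(history)

        # A "soft limit": the density below which only 0.1% of historical samples fall.
        density_floor = np.quantile(kde(history), 0.001)

        new_samples = np.array([20.2, 21.1, 27.5])
        outliers = new_samples[kde(new_samples) < density_floor]
        print(outliers)    # 27.5 is flagged, the other samples are not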

  14. MIDAS robust trend estimator for accurate GPS station velocities without step detection

    PubMed Central

    Kreemer, Corné; Hammond, William C.; Gazeaux, Julien

    2016-01-01

    Abstract Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil‐Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj–xi)/(tj–ti) computed between all data pairs i > j. For normally distributed data, Theil‐Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil‐Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one‐sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root‐mean‐square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences. PMID:27668140
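
    A rough MIDAS-style sketch (for illustration only, not the published implementation; the pair tolerance, the trimming factor and the uncertainty scaling are simplifications of what the abstract describes): take the median of one-year slopes, trim outlying slopes, recompute the median, and scale an uncertainty from the median absolute deviation.

        import numpy as np

        def midas_like_velocity(t, x, tol=0.01):
            """t in decimal years, x in mm; returns (velocity, uncertainty) in mm/yr."""
            t, x = np.asarray(t, float), np.asarray(x, float)
            slopes = []
            for i in range(len(t)):
                dt = t - t[i]
                sel = np.abs(dt - 1.0) < tol           # pairs ~1 year apart
                slopes.extend((x[sel] - x[i]) / dt[sel])
            slopes = np.asarray(slopes)

            v0 = np.median(slopes)                      # first median
            scatter = 1.4826 * np.median(np.abs(slopes - v0))
            keep = np.abs(slopes - v0) < 2.0 * scatter  # trim one-sided outliers
            v = np.median(slopes[keep])                 # recomputed median
            sigma = 1.2533 * 1.4826 * np.median(np.abs(slopes[keep] - v))
            sigma /= np.sqrt(max(keep.sum(), 1) / 4.0)  # crude allowance for pair correlation
            return v, sigma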

  15. MIDAS robust trend estimator for accurate GPS station velocities without step detection.

    PubMed

    Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C; Gazeaux, Julien

    2016-03-01

    Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj - xi)/(tj - ti) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.

  16. Fully automatic and precise data analysis developed for time-of-flight mass spectrometry.

    PubMed

    Meyer, Stefan; Riedo, Andreas; Neuland, Maike B; Tulej, Marek; Wurz, Peter

    2017-09-01

    Scientific objectives of current and future space missions are focused on the investigation of the origin and evolution of the solar system, with particular emphasis on habitability and signatures of past and present life. For in situ measurements of the chemical composition of solid samples on planetary surfaces, the neutral atmospheric gas and the thermal plasma of planetary atmospheres, the application of mass spectrometers making use of time-of-flight mass analysers is a widely used technique. However, such investigations imply measurements with good statistics and, thus, a large amount of data to be analysed. Therefore, faster and especially robust automated data analysis with enhanced accuracy is required. In this contribution, an automatic data analysis software package, which allows fast and precise quantitative data analysis of time-of-flight mass spectrometric data, is presented and discussed in detail. A crucial part of this software is a robust and fast peak finding algorithm with a consecutive numerical integration method allowing precise data analysis. We tested our analysis software with data from different time-of-flight mass spectrometers and different measurement campaigns thereof. The quantitative analysis of isotopes, using automatic data analysis, yields results with an accuracy of isotope ratios up to 100 ppm for a signal-to-noise ratio (SNR) of 10^4. We show that the accuracy of isotope ratios is in fact proportional to SNR^-1. Furthermore, we observe that the accuracy of isotope ratios is inversely proportional to the mass resolution. Additionally, we show that the accuracy of isotope ratios depends on the sample width Ts as Ts^0.5. Copyright © 2017 John Wiley & Sons, Ltd.
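
    The two ingredients named above, peak finding followed by numerical integration of each peak, can be sketched with SciPy as follows (the synthetic spectrum, thresholds and integration window are assumptions, not the authors' settings):

        import numpy as np
        from scipy.signal import find_peaks

        rng = np.random.default_rng(3)
        tof = np.linspace(0.0, 10.0, 20000)                  # time of flight, us
        spectrum = rng.normal(0.0, 0.5, tof.size)            # baseline noise
        for centre, amp in [(2.0, 50.0), (2.03, 5.0), (7.5, 80.0)]:
            spectrum += amp * np.exp(-0.5 * ((tof - centre) / 0.002) ** 2)

        dt = tof[1] - tof[0]
        peaks, _ = find_peaks(spectrum, height=3.0, distance=20)
        for p in peaks:
            lo, hi = max(p - 50, 0), min(p + 50, tof.size - 1)
            area = np.sum(spectrum[lo:hi]) * dt              # simple peak integral
            print(f"t = {tof[p]:.4f} us, area = {area:.1f}")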

  17. MIDAS robust trend estimator for accurate GPS station velocities without step detection

    NASA Astrophysics Data System (ADS)

    Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C.; Gazeaux, Julien

    2016-03-01

    Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj-xi)/(tj-ti) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.

  18. The Space-Time Conservation Element and Solution Element Method: A New High-Resolution and Genuinely Multidimensional Paradigm for Solving Conservation Laws. 1; The Two Dimensional Time Marching Schemes

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Wang, Xiao-Yen; Chow, Chuen-Yen

    1998-01-01

    A new high-resolution and genuinely multidimensional numerical method for solving conservation laws is being developed. It was designed to avoid the limitations of the traditional methods and was built from the ground up with extensive physics considerations. Nevertheless, its foundation is mathematically simple enough that one can build from it a coherent, robust, efficient and accurate numerical framework. Two basic beliefs that set the new method apart from the established methods are at the core of its development. The first belief is that, in order to capture physics more efficiently and realistically, the modeling focus should be placed on the original integral form of the physical conservation laws, rather than the differential form. The latter form follows from the integral form under the additional assumption that the physical solution is smooth, an assumption that is difficult to realize numerically in a region of rapid change, such as a boundary layer or a shock. The second belief is that, with proper modeling of the integral and differential forms themselves, the resulting numerical solution should automatically be consistent with the properties derived from the integral and differential forms, e.g., the jump conditions across a shock and the properties of characteristics. Therefore a much simpler and more robust method can be developed by not using the above derived properties explicitly.

  19. An efficient global energy optimization approach for robust 3D plane segmentation of point clouds

    NASA Astrophysics Data System (ADS)

    Dong, Zhen; Yang, Bisheng; Hu, Pingbo; Scherer, Sebastian

    2018-03-01

    Automatic 3D plane segmentation is necessary for many applications including point cloud registration, building information model (BIM) reconstruction, simultaneous localization and mapping (SLAM), and point cloud compression. However, most of the existing 3D plane segmentation methods still suffer from low precision and recall, and inaccurate and incomplete boundaries, especially for low-quality point clouds collected by RGB-D sensors. To overcome these challenges, this paper formulates the plane segmentation problem as a global energy optimization because it is robust to high levels of noise and clutter. First, the proposed method divides the raw point cloud into multiscale supervoxels, and considers planar supervoxels and individual points corresponding to nonplanar supervoxels as basic units. Then, an efficient hybrid region growing algorithm is utilized to generate initial plane set by incrementally merging adjacent basic units with similar features. Next, the initial plane set is further enriched and refined in a mutually reinforcing manner under the framework of global energy optimization. Finally, the performances of the proposed method are evaluated with respect to six metrics (i.e., plane precision, plane recall, under-segmentation rate, over-segmentation rate, boundary precision, and boundary recall) on two benchmark datasets. Comprehensive experiments demonstrate that the proposed method obtained good performances both in high-quality TLS point clouds (i.e., http://SEMANTIC3D.NET)

  20. Improved Robustness and Efficiency for Automatic Visual Site Monitoring

    DTIC Science & Technology

    2009-09-01

    the space of expected poses. To avoid having to compare each test window with the whole training corpus, he builds a template hierarchy by...directions of motion. In a second layer of clustering, it also learns how the low-level clusters co-occur with each other. An infinite mixture model is used...implementation. We demonstrate the utility of this detector by modeling scene-level activities with a Hierarchical

  1. Retina Image Vessel Segmentation Using a Hybrid CGLI Level Set Method

    PubMed Central

    Chen, Meizhu; Li, Jichun; Zhang, Encai

    2017-01-01

    As a nonintrusive method, retina imaging provides a better way to diagnose ophthalmologic diseases. Extracting the vessel profile automatically from the retina image is an important step in analyzing retina images. A novel hybrid active contour model is proposed in this paper to segment the fundus image automatically. It combines the signed pressure force function introduced by the Selective Binary and Gaussian Filtering Regularized Level Set (SBGFRLS) model with the local intensity property introduced by the Local Binary Fitting (LBF) model to overcome the difficulty posed by low contrast in the segmentation process. It is more robust to the initial condition than traditional methods and is easily implemented compared to supervised vessel extraction methods. The proposed segmentation method was evaluated on two public datasets, DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (Structured Analysis of the Retina), achieving an average accuracy of 0.9390 with 0.7358 sensitivity and 0.9680 specificity on the DRIVE dataset and an average accuracy of 0.9409 with 0.7449 sensitivity and 0.9690 specificity on the STARE dataset. The experimental results show that our method is effective and is also robust to some kinds of pathological images compared with traditional level set methods. PMID:28840122

  2. Examining the robustness of automated aural classification of active sonar echoes.

    PubMed

    Murphy, Stefan M; Hines, Paul C

    2014-02-01

    Active sonar systems are used to detect underwater man-made objects of interest (targets) that are too quiet to be reliably detected with passive sonar. Performance of active sonar can be degraded by false alarms caused by echoes returned from geological seabed structures (clutter) in shallow regions. To reduce false alarms, a method of distinguishing target echoes from clutter echoes is required. Research has demonstrated that perceptual-based signal features similar to those employed in the human auditory system can be used to automatically discriminate between target and clutter echoes, thereby reducing the number of false alarms and improving sonar performance. An active sonar experiment on the Malta Plateau in the Mediterranean Sea was conducted during the Clutter07 sea trial and repeated during the Clutter09 sea trial. The dataset consists of more than 95,000 pulse-compressed echoes returned from two targets and many geological clutter objects. These echoes were processed using an automatic classifier that quantifies the timbre of each echo using a number of perceptual signal features. Using echoes from 2007, the aural classifier was trained to establish a boundary between targets and clutter in the feature space. Temporal robustness was then investigated by testing the classifier on echoes from the 2009 experiment.

  3. Automatic lung nodule matching for the follow-up in temporal chest CT scans

    NASA Astrophysics Data System (ADS)

    Hong, Helen; Lee, Jeongjin; Shin, Yeong Gil

    2006-03-01

    We propose a fast and robust registration method for matching lung nodules in temporal chest CT scans. Our method is composed of four stages. First, the lungs are extracted from the chest CT scans by an automatic segmentation method. Second, the gross translational mismatch is corrected by optimal cube registration. This initial registration does not require extracting any anatomical landmarks. Third, the initial alignment is refined step by step by iterative surface registration. To evaluate the distance measure between surface boundary points, a 3D distance map is generated by narrow-band distance propagation, which drives fast and robust convergence to the optimal location. Fourth, nodule correspondences are established as the pairs with the smallest Euclidean distances. The results of pulmonary nodule alignment for twenty patients are reported on a per-center-of-mass point basis using the average Euclidean distance (AED) error between corresponding nodules of initial and follow-up scans. The average AED error over the twenty patients is significantly reduced from 30.0 mm to 4.7 mm by our registration. Experimental results show that our registration method aligns the lung nodules much faster than conventional ones using a distance measure. The accurate and fast results of our method should be useful for the radiologist's evaluation of pulmonary nodules on chest CT scans.
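
    A small sketch of the distance-map idea (not the authors' narrow-band propagation code; the toy surface and points are invented): precompute a Euclidean distance transform of the fixed surface once, so the point-to-surface distance needed at every iteration of the surface registration becomes a simple array look-up.

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        surface = np.zeros((64, 64, 64), dtype=bool)
        surface[32, 10:54, 10:54] = True                 # toy "surface" voxels

        # Distance (in voxels) from every voxel to the nearest surface voxel.
        dist_map = distance_transform_edt(~surface)

        moving_points = np.array([[30, 20, 20], [40, 30, 30]])   # voxel coordinates
        d = dist_map[tuple(moving_points.T)]
        print(d)    # distances that would drive the surface registration cost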

  4. [Medical image segmentation based on the minimum variation snake model].

    PubMed

    Zhou, Changxiong; Yu, Shenglin

    2007-02-01

    It is difficult for the traditional parametric active contour (snake) model to handle automatic segmentation of weak-edge medical images. After analyzing the snake and geometric active contour models, a minimum variation snake model is proposed and successfully applied to weak-edge medical image segmentation. The proposed model replaces the constant force of the balloon snake model with a variable force that incorporates information from both the foreground and background regions. It drives the curve to evolve under the criterion of minimum variation between the foreground and background regions. Experiments have shown that the proposed model is robust to initial contour placement and can segment weak-edge medical images automatically. In addition, segmentation testing on noisy medical images filtered by a curvature flow filter, which preserves edge features, shows a significant effect.

  5. Analysis of separation test for automatic brake adjuster based on linear radon transformation

    NASA Astrophysics Data System (ADS)

    Luo, Zai; Jiang, Wensong; Guo, Bin; Fan, Weijun; Lu, Yi

    2015-01-01

    The linear Radon transform is applied to extract inflection points for an online test system under noisy conditions. The linear Radon transform has strong resistance to noise and interference because it fits the online test curve in several parts, which makes it easy to handle consecutive inflection points. We applied the linear Radon transform to the separation test system to determine the separating clearance of an automatic brake adjuster. The experimental results show that the feature point extraction error of the gradient-maximum optimal method is approximately ±0.100, while that of the linear Radon transform method can reach ±0.010, a lower error than the former. In addition, the linear Radon transform is robust.

  6. IMNN: Information Maximizing Neural Networks

    NASA Astrophysics Data System (ADS)

    Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-04-01

    This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). Although compressing large data sets vastly simplifies both frequentist and Bayesian inference, important information may be inadvertently lost in the process. Likelihood-free inference based on automatically derived IMNN summaries produces summaries that are good approximations to sufficient statistics. IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of a Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.

  7. A novel architecture of recovered data comparison for high speed clock and data recovery

    NASA Astrophysics Data System (ADS)

    Gao, Susan; Li, Fei; Wang, Zhigong; Cui, Hongliang

    2005-05-01

    A clock and data recovery (CDR) circuit is one of the crucial blocks in high-speed serial link communication systems. The data received in these systems are asynchronous and noisy, requiring that a clock be extracted to allow synchronous operations. Furthermore, the data must be "retimed" so that the jitter accumulated during transmission is removed. This paper presents a novel CDR architecture that is very tolerant of long sequences of serial ones or zeros and is also robust to occasional long absences of transitions. The design is based on the observation that a basic clock recovery scheme with a separate clock recovery circuit (CRC) and data decision circuit would generate a high-jitter clock when the received non-return-to-zero (NRZ) data contain long sequences of ones or zeros. To eliminate this drawback, the proposed architecture incorporates the data decision circuit within the phase-locked loop (PLL) CRC. In addition, a new phase detector (PD) is proposed, which is easy to implement and robust at high speed. This PD works with random input data and automatically disables itself both in the locked state and during long absences of transitions. The voltage-controlled oscillator (VCO) is also carefully designed to suppress jitter. Owing to the high stability, jitter is greatly reduced when the loop is locked. Simulation results of this CDR operating at 1.25 Gb/s, particularly for 1000BASE-X Gigabit Ethernet, using TSMC 0.25 μm technology are presented to prove the feasibility of the architecture. A second CDR based on an edge-detection architecture is also built into the circuit for performance comparison.

  8. Color normalization for robust evaluation of microscopy images

    NASA Astrophysics Data System (ADS)

    Švihlík, Jan; Kybic, Jan; Habart, David

    2015-09-01

    This paper deals with color normalization of microscopy images of Langerhans islets in order to increase robustness of the islet segmentation to illumination changes. The main application is automatic quantitative evaluation of the islet parameters, useful for determining the feasibility of islet transplantation in diabetes. First, background illumination inhomogeneity is compensated and a preliminary foreground/background segmentation is performed. The color normalization itself is done in either lαβ or logarithmic RGB color spaces, by comparison with a reference image. The color-normalized images are segmented using color-based features and pixel-wise logistic regression, trained on manually labeled images. Finally, relevant statistics such as the total islet area are evaluated in order to determine the success likelihood of the transplantation.
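
    A hedged sketch of reference-based colour normalization (CIELAB via scikit-image is used here as a convenient stand-in for the lαβ and logarithmic RGB spaces mentioned above; this is not the authors' code): each channel's mean and standard deviation are matched to those of a reference image.

        import numpy as np
        from skimage import color

        def normalize_to_reference(img_rgb, ref_rgb):
            # Convert both images to a perceptual colour space and match channel statistics.
            img, ref = color.rgb2lab(img_rgb), color.rgb2lab(ref_rgb)
            out = np.empty_like(img)
            for c in range(3):
                mu_i, sd_i = img[..., c].mean(), img[..., c].std()
                mu_r, sd_r = ref[..., c].mean(), ref[..., c].std()
                out[..., c] = (img[..., c] - mu_i) * (sd_r / (sd_i + 1e-8)) + mu_r
            return np.clip(color.lab2rgb(out), 0.0, 1.0)

        # Usage (hypothetical images): normalized = normalize_to_reference(islet_image, reference_image)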

  9. Automatic anatomy partitioning of the torso region on CT images by using multiple organ localizations with a group-wise calibration technique

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Morita, Syoichi; Zhou, Xinxin; Chen, Huayue; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Fujita, Hiroshi

    2015-03-01

    This paper describes an automatic approach for anatomy partitioning on three-dimensional (3D) computed tomography (CT) images that divides the human torso into several volume-of-interest (VOI) images based on anatomical definitions. The proposed approach combines several individual organ-location detections with a group-wise organ-location calibration and correction to achieve an automatic and robust multiple-organ localization task. The essence of the proposed method is to jointly detect the 3D minimum bounding box for each type of organ shown on CT images based on intra-organ image textures and inter-organ spatial relationships in the anatomy. Machine-learning-based template matching and generalized Hough transform-based point-distribution estimation are used in the detection and calibration processes. We apply this approach to the automatic partitioning of the torso region on CT images, which is divided into 35 VOIs representing the major organ regions and tissues required by routine diagnosis in clinical medicine. A database containing 4,300 patient cases of high-resolution 3D torso CT images is used for training and performance evaluations. We confirmed that the proposed method was successful in target organ localization in more than 95% of the CT cases. Only two organs (gallbladder and pancreas) showed a lower success rate: 71% and 78%, respectively. In addition, we applied this approach to another database that included 287 patient cases of whole-body CT images scanned for positron emission tomography (PET) studies, used for additional performance evaluation. The experimental results showed no significant difference between the anatomy partitioning results from the two databases except for the spleen. All experimental results showed that the proposed approach was efficient and useful in accomplishing localization tasks for major organs and tissues on CT images scanned using different protocols.

  10. A Numerical Study of Three Moving-Grid Methods for One-Dimensional Partial Differential Equations Which Are Based on the Method of Lines

    NASA Astrophysics Data System (ADS)

    Furzeland, R. M.; Verwer, J. G.; Zegeling, P. A.

    1990-08-01

    In recent years, several sophisticated packages based on the method of lines (MOL) have been developed for the automatic numerical integration of time-dependent problems in partial differential equations (PDEs), notably for problems in one space dimension. These packages greatly benefit from the very successful developments of automatic stiff ordinary differential equation solvers. However, from the PDE point of view, they integrate only in a semiautomatic way in the sense that they automatically adjust the time step sizes, but use just a fixed space grid, chosen a priori, for the entire calculation. For solutions possessing sharp spatial transitions that move, e.g., travelling wave fronts or emerging boundary and interior layers, a grid held fixed for the entire calculation is computationally inefficient, since for a good solution this grid often must contain a very large number of nodes. In such cases methods which attempt automatically to adjust the sizes of both the space and the time steps are likely to be more successful in efficiently resolving critical regions of high spatial and temporal activity. Methods and codes that operate this way belong to the realm of adaptive or moving-grid methods. Following the MOL approach, this paper is devoted to an evaluation and comparison, mainly based on extensive numerical tests, of three moving-grid methods for 1D problems, viz., the finite-element method of Miller and co-workers, the method published by Petzold, and a method based on ideas adopted from Dorfi and Drury. Our examination of these three methods is aimed at assessing which is the most suitable from the point of view of retaining the acknowledged features of reliability, robustness, and efficiency of the conventional MOL approach. Therefore, considerable attention is paid to the temporal performance of the methods.

  11. MeSH Now: automatic MeSH indexing at PubMed scale via learning to rank.

    PubMed

    Mao, Yuqing; Lu, Zhiyong

    2017-04-17

    MeSH indexing is the task of assigning relevant MeSH terms based on a manual reading of scholarly publications by human indexers. The task is highly important for improving literature retrieval and many other scientific investigations in biomedical research. Unfortunately, given its manual nature, the process of MeSH indexing is both time-consuming (new articles are often not indexed until 2 or 3 months after publication) and costly (approximately ten dollars per article). In response, automatic indexing by computers has been previously proposed and attempted but remains challenging. In order to advance the state of the art in automatic MeSH indexing, a community-wide shared task called BioASQ was recently organized. We propose MeSH Now, an integrated approach that first uses multiple strategies to generate a combined list of candidate MeSH terms for a target article. Through a novel learning-to-rank framework, MeSH Now then ranks the list of candidate terms based on their relevance to the target article. Finally, MeSH Now selects the highest-ranked MeSH terms via a post-processing module. We assessed MeSH Now on two separate benchmarking datasets using traditional precision, recall and F1-score metrics. In both evaluations, MeSH Now consistently achieved over 0.60 in F1-score, ranging from 0.610 to 0.612. Furthermore, additional experiments show that MeSH Now can be optimized by parallel computing in order to process MEDLINE documents on a large scale. We conclude that MeSH Now is a robust approach with state-of-the-art performance for automatic MeSH indexing and that MeSH Now is capable of processing PubMed-scale documents within a reasonable time frame. http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/MeSHNow/ .

  12. Designing and Implementing a Retrospective Earthquake Detection Framework at the U.S. Geological Survey National Earthquake Information Center

    NASA Astrophysics Data System (ADS)

    Patton, J.; Yeck, W.; Benz, H.

    2017-12-01

    The U.S. Geological Survey National Earthquake Information Center (USGS NEIC) is implementing and integrating new signal detection methods such as subspace correlation, continuous beamforming, multi-band picking and automatic phase identification into near-real-time monitoring operations. Leveraging the additional information from these techniques helps the NEIC utilize a large and varied network on local to global scales. The NEIC is developing an ordered, rapid, robust, and decentralized framework for distributing seismic detection data, as well as a set of formalized formatting standards. These frameworks and standards enable the NEIC to implement a seismic event detection framework that supports basic tasks, including automatic arrival-time picking, social-media-based event detection, and automatic association of different seismic detection data into seismic earthquake events. In addition, this framework enables retrospective detection processing such as automated S-wave arrival-time picking given a detected event, discrimination and classification of detected events by type, back-azimuth and slowness calculations, and ensuring aftershock and induced-sequence detection completeness. These processes and infrastructure improve the NEIC's capabilities, accuracy, and speed of response. In addition, this same infrastructure provides an improved and convenient structure to support access to automatic detection data for both research and algorithmic development.

  13. Think the thought, walk the walk - social priming reduces the Stroop effect.

    PubMed

    Goldfarb, Liat; Aisenberg, Daniela; Henik, Avishai

    2011-02-01

    In the Stroop task, participants name the color of the ink that a color word is written in and ignore the meaning of the word. Naming the color of an incongruent color word (e.g., RED printed in blue) is slower than naming the color of a congruent color word (e.g., RED printed in red). This robust effect is known as the Stroop effect and it suggests that the intentional instruction - "do not read the word" - has limited influence on one's behavior, as word reading is being executed via an automatic path. Herein is examined the influence of a non-intentional instruction - "do not read the word" - on the Stroop effect. Social concept priming tends to trigger automatic behavior that is in line with the primed concept. Here participants were primed with the social concept "dyslexia" before performing the Stroop task. Because dyslectic people are perceived as having reading difficulties, the Stroop effect was reduced and even failed to reach significance after the dyslectic person priming. A similar effect was replicated in a further experiment, and overall it suggests that the human cognitive system has more success in decreasing the influence of another automatic process via an automatic path rather than via an intentional path. Copyright © 2010 Elsevier B.V. All rights reserved.

  14. A novel automatic segmentation workflow of axial breast DCE-MRI

    NASA Astrophysics Data System (ADS)

    Besbes, Feten; Gargouri, Norhene; Damak, Alima; Sellami, Dorra

    2018-04-01

    In this paper we propose a novel, fully automatic breast tissue segmentation process that is independent of expert calibration and contrast. The proposed algorithm is composed of two major steps. The first step is the detection of the breast boundaries. It is based on image content analysis and the Moore-Neighbour tracing algorithm; as preprocessing, Otsu thresholding and a neighbours algorithm are applied. Then, the area outside the breast is removed to obtain an approximate breast region. The second step is the delineation of the chest wall, which is treated as the lowest-cost path linking three key points. These points are located automatically on the breast: they are, respectively, the left and right boundary points and the upper middle point placed in the sternum region using a statistical method. The minimum-cost path search problem is solved with Dijkstra's algorithm. Evaluation results reveal the robustness of our process in the face of different breast densities, complex shapes, and challenging cases. In fact, the mean overlap between manual segmentation and automatic segmentation with our method is 96.5%. A comparative study shows that our proposed process is competitive and faster than existing methods; the segmentation of 120 slices with our method is achieved in 20.57+/-5.2 s.
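
    The lowest-cost-path step can be sketched with a plain heapq-based Dijkstra search over a 2D cost image (illustrative only; the cost definition, neighbourhood and key points used in the paper are not reproduced here):

        import heapq
        import numpy as np

        def lowest_cost_path(cost, start, goal):
            # Dijkstra over a 4-connected pixel grid; cost is a 2D array of positive values.
            h, w = cost.shape
            dist = np.full((h, w), np.inf)
            prev = {}
            dist[start] = cost[start]
            heap = [(cost[start], start)]
            while heap:
                d, (r, c) = heapq.heappop(heap)
                if (r, c) == goal:
                    break
                if d > dist[r, c]:
                    continue
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w and d + cost[rr, cc] < dist[rr, cc]:
                        dist[rr, cc] = d + cost[rr, cc]
                        prev[(rr, cc)] = (r, c)
                        heapq.heappush(heap, (dist[rr, cc], (rr, cc)))
            path, node = [goal], goal
            while node != start:
                node = prev[node]
                path.append(node)
            return path[::-1]

        # Example: low-cost (e.g. dark) pixels are cheap, so the path hugs such ridges.
        cost = np.random.default_rng(4).random((50, 80)) + 0.01
        print(len(lowest_cost_path(cost, (25, 0), (25, 79))))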

  15. Presentation of the results of a Bayesian automatic event detection and localization program to human analysts

    NASA Astrophysics Data System (ADS)

    Kushida, N.; Kebede, F.; Feitio, P.; Le Bras, R.

    2016-12-01

    The Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) has been developing and testing NET-VISA (Arora et al., 2013), a Bayesian automatic event detection and localization program, and evaluating its performance in a realistic operational mode. In our preliminary testing at the CTBTO, NET-VISA shows better performance than the currently operating automatic localization program. However, given the CTBTO's role and its international context, a new technology should be introduced cautiously when it replaces a key piece of the automatic processing. We integrated the results of NET-VISA into the Analyst Review Station, extensively used by the analysts, so that they can check the accuracy and robustness of the Bayesian approach. We expect the workload of the analysts to be reduced because of the better performance of NET-VISA in finding missed events and in obtaining a more complete set of stations than the current system, which has been operating for nearly twenty years. The results of a series of tests indicate that the expectations arising from the automatic tests (an overall overlap improvement of 11%, meaning that the missed-event rate is cut by 42%) hold for the integrated interactive module as well. Analysts find new events that qualify for the CTBTO Reviewed Event Bulletin, beyond the ones analyzed through the standard procedures. Arora, N., Russell, S., and Sudderth, E., NET-VISA: Network Processing Vertically Integrated Seismic Analysis, 2013, Bull. Seismol. Soc. Am., 103, 709-729.

  16. State Recognition of High Voltage Isolation Switch Based on Background Difference and Iterative Search

    NASA Astrophysics Data System (ADS)

    Xu, Jiayuan; Yu, Chengtao; Bo, Bin; Xue, Yu; Xu, Changfu; Chaminda, P. R. Dushantha; Hu, Chengbo; Peng, Kai

    2018-03-01

    The automatic recognition of high voltage isolation switches through remote video monitoring is an effective means of ensuring the safety of personnel and equipment. Existing methods mainly take two routes: improving monitoring accuracy and adopting target detection technology through equipment modification. Such methods are often tied to specific scenarios, with limited applicability and high cost. To solve this problem, a high voltage isolation switch state recognition method based on background difference and iterative search is proposed in this paper. The initial position of the switch is detected in real time by the background difference method. When the switch starts to open or close, a target tracking algorithm is used to track the motion trajectory of the switch. The opening and closing state of the switch is determined from the variation of the angle between the switch tracking point and the center line. The effectiveness of the method is verified by experiments on video frames of different switching states. Compared with traditional methods, this method is more robust and effective.
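
    A toy OpenCV sketch of the background-difference step only (file names, threshold and minimum blob area are illustrative assumptions; the tracking and angle-measurement stages are not shown). Written against OpenCV 4.x:

        import cv2

        background = cv2.imread("switch_background.png", cv2.IMREAD_GRAYSCALE)
        frame = cv2.imread("switch_frame.png", cv2.IMREAD_GRAYSCALE)

        # Absolute difference between the current frame and the static background.
        diff = cv2.absdiff(frame, background)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        moving = [c for c in contours if cv2.contourArea(c) > 200]   # switch-blade candidates
        print(f"{len(moving)} moving region(s) detected")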

  17. Automatic registration of terrestrial point clouds based on panoramic reflectance images and efficient BaySAC

    NASA Astrophysics Data System (ADS)

    Kang, Zhizhong

    2013-10-01

    This paper presents a new approach to automatic registration of terrestrial laser scanning (TLS) point clouds utilizing a novel robust estimation method, an efficient BaySAC (BAYes SAmpling Consensus). The proposed method directly generates reflectance images from 3D point clouds and then uses the SIFT algorithm to extract keypoints and identify corresponding image points. The 3D corresponding points, from which transformation parameters between point clouds are computed, are acquired by mapping the 2D ones onto the point cloud. To remove falsely accepted correspondences, we implement a conditional sampling method that selects the n data points with the highest inlier probabilities as a hypothesis set and updates the inlier probabilities of each data point using a simplified Bayes' rule, which improves computational efficiency. The prior probability is estimated by verifying the distance invariance between correspondences. The proposed approach is tested on four data sets acquired by three different scanners. The results show that, compared with RANSAC, BaySAC requires fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers. The registration results also indicate that the proposed algorithm can achieve high registration accuracy on all experimental datasets.
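    The sketch below illustrates the conditional-sampling idea in a generic model-fitting loop: hypothesis points are chosen deterministically by inlier probability rather than at random. The probability update factors, the simple consistency test and the callback names are illustrative placeholders; the paper's exact Bayes update and its prior estimation from distance invariance are not reproduced.

```python
import numpy as np

def baysac(data, fit_model, residual, n_sample, inlier_tol, n_iters=100):
    """BaySAC-style conditional sampling (simplified sketch).

    data      : (N, d) array of candidate correspondences
    fit_model : function(subset) -> model parameters
    residual  : function(model, data) -> per-point residuals
    """
    n = len(data)
    prob = np.full(n, 0.5)                     # prior inlier probabilities
    best_model, best_support = None, -1
    for _ in range(n_iters):
        idx = np.argsort(prob)[-n_sample:]     # most probable points form the hypothesis set
        model = fit_model(data[idx])
        inlier = residual(model, data) < inlier_tol
        if inlier.sum() > best_support:
            best_model, best_support = model, inlier.sum()
        # Simplified Bayes-style update: hypothesis points that were not all
        # consistent with the fitted model lose probability, otherwise they gain.
        consistent = bool(inlier[idx].all())
        prob[idx] = np.minimum(prob[idx] * 1.1, 0.99) if consistent else prob[idx] * 0.8
    return best_model
```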

  18. Perceptual Learning Induces Persistent Attentional Capture by Nonsalient Shapes.

    PubMed

    Qu, Zhe; Hillyard, Steven A; Ding, Yulong

    2017-02-01

    Visual attention can be attracted automatically by salient simple features, but whether and how nonsalient complex stimuli such as shapes may capture attention in humans remains unclear. Here, we present strong electrophysiological evidence that a nonsalient shape presented among similar shapes can provoke a robust and persistent capture of attention as a consequence of extensive training in visual search (VS) for that shape. Strikingly, this attentional capture that followed perceptual learning (PL) was evident even when the trained shape was task-irrelevant, was presented outside the focus of top-down spatial attention, and was undetected by the observer. Moreover, this attentional capture persisted for at least 3-5 months after training had been terminated. This involuntary capture of attention was indexed by electrophysiological recordings of the N2pc component of the event-related brain potential, which was localized to ventral extrastriate visual cortex, and was highly predictive of stimulus-specific improvement in VS ability following PL. These findings provide the first evidence that nonsalient shapes can capture visual attention automatically following PL and challenge the prominent view that detection of feature conjunctions requires top-down focal attention. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  19. Automatic CDR Estimation for Early Glaucoma Diagnosis

    PubMed Central

    Sarmiento, A.; Sanchez-Morillo, D.; Jiménez, S.; Alemany, P.

    2017-01-01

    Glaucoma is a degenerative disease that constitutes the second cause of blindness in developed countries. Although it cannot be cured, its progression can be prevented through early diagnosis. In this paper, we propose a new algorithm for automatic glaucoma diagnosis based on retinal colour images. We focus on capturing the inherent colour changes of optic disc (OD) and cup borders by computing several colour derivatives in CIE L∗a∗b∗ colour space with CIE94 colour distance. In addition, we consider spatial information retaining these colour derivatives and the original CIE L∗a∗b∗ values of the pixel and adding other characteristics such as its distance to the OD centre. The proposed strategy is robust due to a simple structure that does not need neither initial segmentation nor removal of the vascular tree or detection of vessel bends. The method has been extensively validated with two datasets (one public and one private), each one comprising 60 images of high variability of appearances. Achieved class-wise-averaged accuracy of 95.02% and 81.19% demonstrates that this automated approach could support physicians in the diagnosis of glaucoma in its early stage, and therefore, it could be seen as an opportunity for developing low-cost solutions for mass screening programs. PMID:29279773
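    For reference, a straightforward implementation of the CIE94 colour difference between two L∗a∗b∗ triplets is sketched below, using the common graphic-arts weighting constants; how the paper combines such distances into directional colour derivatives is not reproduced here.

```python
import numpy as np

def delta_e_cie94(lab1, lab2, kL=1.0, K1=0.045, K2=0.015):
    """CIE94 colour difference between two CIE L*a*b* triplets."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    dL = L1 - L2
    C1, C2 = np.hypot(a1, b1), np.hypot(a2, b2)
    dC = C1 - C2
    da, db = a1 - a2, b1 - b2
    dH_sq = max(da ** 2 + db ** 2 - dC ** 2, 0.0)   # clamp numerical noise
    SL, SC, SH = 1.0, 1.0 + K1 * C1, 1.0 + K2 * C1
    return np.sqrt((dL / (kL * SL)) ** 2 + (dC / SC) ** 2 + dH_sq / SH ** 2)
```

    A colour derivative at a pixel can then be approximated by evaluating this distance between the pixel and its neighbour along a given direction.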

  20. An Automatic and Robust Algorithm of Reestablishment of Digital Dental Occlusion

    PubMed Central

    Chang, Yu-Bing; Xia, James J.; Gateno, Jaime; Xiong, Zixiang; Zhou, Xiaobo; Wong, Stephen T. C.

    2017-01-01

    In the field of craniomaxillofacial (CMF) surgery, surgical planning can be performed on composite 3-D models that are generated by merging a computerized tomography scan with digital dental models. Digital dental models can be generated by scanning the surfaces of plaster dental models or dental impressions with a high-resolution laser scanner. During the planning process, one of the essential steps is to reestablish the dental occlusion. Unfortunately, this task is time-consuming and often inaccurate. This paper presents a new approach to automatically and efficiently reestablish dental occlusion. It includes two steps. The first step is to initially position the models based on dental curves and a point matching technique. The second step is to reposition the models to the final desired occlusion based on iterative surface-based minimum distance mapping with collision constraints. With linearization of the rotation matrix, the alignment is modeled as a quadratic programming problem. The simulation was completed on 12 sets of digital dental models. Two sets of dental models were partially edentulous, and another two sets had first premolar extractions for orthodontic treatment. Two validation methods were applied to the articulated models. The results show that using our method, the dental models can be successfully articulated with a small degree of deviation from the occlusion achieved with the gold-standard method. PMID:20529735

  1. Personal Photo Enhancement Using Internet Photo Collections.

    PubMed

    Zhang, Chenxi; Gao, Jizhou; Wang, Oliver; Georgel, Pierre; Yang, Ruigang; Davis, James; Frahm, Jan-Michael; Pollefeys, Marc

    2013-04-26

    Given the growth of Internet photo collections we now have a visual index of all major cities and tourist sites in the world. However, it is still a difficult task to capture that perfect shot with your own camera when visiting these places, especially when your camera itself has limitations, such as a limited field of view. In this paper, we propose a framework to overcome the imperfections of personal photos of tourist sites using the rich information provided by large scale Internet photo collections. Our method deploys state-of-the-art techniques for constructing initial 3D models from photo collections. The same techniques are then used to register personal photos to these models, allowing us to augment personal 2D images with 3D information. This strong available scene prior allows us to address a number of traditionally challenging image enhancement techniques, and achieve high quality results using simple and robust algorithms. Specifically, we demonstrate automatic foreground segmentation, mono-to-stereo conversion, the field of view expansion, photometric enhancement, and additionally automatic annotation with geo-location and tags. Our method clearly demonstrates some possible benefits of employing the rich information contained in on-line photo databases to efficiently enhance and augment one’s own personal photos.

  2. Optimized spatio-temporal descriptors for real-time fall detection: comparison of support vector machine and Adaboost-based classification

    NASA Astrophysics Data System (ADS)

    Charfi, Imen; Miteran, Johel; Dubois, Julien; Atri, Mohamed; Tourki, Rached

    2013-10-01

    We propose a supervised approach to detect falls in a home environment using an optimized descriptor adapted to real-time tasks. We introduce a realistic dataset of 222 videos, a new metric allowing evaluation of fall detection performance in a video stream, and an automatically optimized set of spatio-temporal descriptors which feeds a supervised classifier. We build the initial spatio-temporal descriptor named STHF using several combinations of transformations of geometrical features (height and width of the human body bounding box, the user's trajectory with her/his orientation, projection histograms, and moments of orders 0, 1, and 2). We study combinations of the usual transformations of the features (Fourier transform, wavelet transform, first and second derivatives), and we show experimentally that it is possible to achieve high performance using support vector machine and Adaboost classifiers. Automatic feature selection shows that the best tradeoff between classification performance and processing time is obtained by combining the original low-level features with their first derivative. We also evaluate the robustness of the fall detection to location changes, and we propose a realistic and pragmatic protocol that enables performance to be improved by updating the training in the current location with records of normal activities.
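    A minimal sketch of the classification stage is shown below, assuming the spatio-temporal descriptors have already been computed; the feature dimension, the synthetic data and the SVM hyper-parameters are placeholders rather than values from the paper.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row per temporal window of low-level features plus first derivatives
# y: 1 = fall, 0 = normal activity (synthetic placeholders)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))
y = rng.integers(0, 2, size=500)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```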

  3. Automatic RST-based system for a rapid detection of man-made disasters

    NASA Astrophysics Data System (ADS)

    Tramutoli, Valerio; Corrado, Rosita; Filizzola, Carolina; Livia Grimaldi, Caterina Sara; Mazzeo, Giuseppe; Marchese, Francesco; Pergola, Nicola

    2010-05-01

    Man-made disasters may cause injuries to citizens and damage to critical infrastructures. When it is not possible to prevent or foresee such disasters, the goal is at least to detect the accident rapidly in order to intervene as soon as possible and minimize the damage. In this context, the combination of a Robust Satellite Technique (RST), able to identify actual accidents reliably (i.e. with no false alarms), and satellite sensors with high temporal resolution seems to assure both reliable and timely detection of abrupt Thermal Infrared (TIR) transients related to dangerous explosions. A processing chain based on the RST approach has been developed by the DIFA-UNIBAS team in the framework of the GMOSS and G-MOSAIC projects, suitable for automatically identifying harmful events on MSG-SEVIRI images. Maps of thermal anomalies are generated every 15 minutes (i.e. the SEVIRI temporal repetition rate) over a selected area, together with kml files (containing information on the latitude and longitude of the centre of each "thermally" anomalous SEVIRI pixel, the time of image acquisition, the relative intensity of anomalies, etc.) for a rapid visualization of the accident position even on Google Earth. Results achieved in the cases of gas pipelines that recently exploded or were attacked in Russia and in Iraq are presented in this work.
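    The core of RST-style change detection is a pixel-wise standardized departure from a historical reference field, roughly as in the sketch below; in the operational chain the reference mean and standard deviation are built from multi-year, homogeneous SEVIRI acquisitions, for which the historical stack here is only a placeholder.

```python
import numpy as np

def thermal_anomaly_index(tir_now, tir_history):
    """RST-style local change index for one TIR image (illustrative).

    tir_now     : 2D array, current brightness-temperature image
    tir_history : 3D array (time, rows, cols) of co-located historical images
    The index measures how many historical standard deviations the current
    signal departs from its pixel-wise historical mean; large positive
    values flag candidate thermal anomalies.
    """
    mean = tir_history.mean(axis=0)
    std = tir_history.std(axis=0)
    return (tir_now - mean) / np.where(std > 0, std, np.nan)
```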

  4. Estimating psycho-physiological state of a human by speech analysis

    NASA Astrophysics Data System (ADS)

    Ronzhin, A. L.

    2005-05-01

    Adverse effects of intoxication, fatigue and boredom could degrade performance of highly trained operators of complex technical systems with potentially catastrophic consequences. Existing physiological fitness for duty tests are time consuming, costly, invasive, and highly unpopular. Known non-physiological tests constitute a secondary task and interfere with the busy workload of the tested operator. Various attempts to assess the current status of the operator by processing of "normal operational data" often lead to excessive amount of computations, poorly justified metrics, and ambiguity of results. At the same time, speech analysis presents a natural, non-invasive approach based upon well-established efficient data processing. In addition, it supports both behavioral and physiological biometric. This paper presents an approach facilitating robust speech analysis/understanding process in spite of natural speech variability and background noise. Automatic speech recognition is suggested as a technique for the detection of changes in the psycho-physiological state of a human that typically manifest themselves by changes of characteristics of voice tract and semantic-syntactic connectivity of conversation. Preliminary tests have confirmed that the statistically significant correlation between the error rate of automatic speech recognition and the extent of alcohol intoxication does exist. In addition, the obtained data allowed exploring some interesting correlations and establishing some quantitative models. It is proposed to utilize this approach as a part of fitness for duty test and compare its efficiency with analyses of iris, face geometry, thermography and other popular non-invasive biometric techniques.

  5. Modified automatic R-peak detection algorithm for patients with epilepsy using a portable electrocardiogram recorder.

    PubMed

    Jeppesen, J; Beniczky, S; Fuglsang Frederiksen, A; Sidenius, P; Johansen, P

    2017-07-01

    Earlier studies have shown that short-term heart rate variability (HRV) analysis of the ECG seems promising for detection of epileptic seizures. A precise and accurate automatic R-peak detection algorithm is a necessity for real-time, continuous measurement of HRV in a portable ECG device. We used the portable CE-marked ePatch® heart monitor to record the ECG of 14 patients, who were enrolled in the video-EEG long-term monitoring unit for clinical workup of epilepsy. Recordings of the first 7 patients were used as the training set of data for the R-peak detection algorithm and the recordings of the last 7 patients (467.6 recording hours) were used to test the performance of the algorithm. We aimed to modify an existing QRS-detection algorithm into a more precise R-peak detection algorithm to avoid the possible jitter that Q- and S-peaks can create in the tachogram, which causes errors in short-term HRV analysis. The proposed R-peak detection algorithm showed a high sensitivity (Se = 99.979%) and positive predictive value (P+ = 99.976%), which was comparable with a previously published QRS-detection algorithm for the ePatch® ECG device, when testing the same dataset. The novel R-peak detection algorithm designed to avoid jitter has very high sensitivity and specificity and thus is a suitable tool for robust, fast, real-time HRV analysis in patients with epilepsy, creating the possibility of real-time seizure detection for these patients.
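    The sketch below shows a generic band-pass-and-threshold R-peak detector for illustration only; the filter band, threshold and refractory distance are assumptions, and this is not the ePatch® algorithm described in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_r_peaks(ecg, fs):
    """Simple R-peak detector (illustrative).

    ecg : 1D ECG signal, fs : sampling frequency in Hz.
    Band-pass filtering emphasises the QRS complex, then local maxima
    above a crude amplitude threshold are taken as R-peaks, with a
    minimum separation acting as a refractory period.
    """
    b, a = butter(3, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    threshold = 0.5 * np.max(np.abs(filtered))
    peaks, _ = find_peaks(filtered, height=threshold, distance=int(0.25 * fs))
    return peaks          # sample indices; np.diff(peaks) / fs gives RR intervals
```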

  6. User-guided segmentation for volumetric retinal optical coherence tomography images

    PubMed Central

    Yin, Xin; Chao, Jennifer R.; Wang, Ruikang K.

    2014-01-01

    Abstract. Despite the existence of automatic segmentation techniques, trained graders still rely on manual segmentation to provide retinal layers and features from clinical optical coherence tomography (OCT) images for accurate measurements. To bridge the gap between this time-consuming need of manual segmentation and currently available automatic segmentation techniques, this paper proposes a user-guided segmentation method to perform the segmentation of retinal layers and features in OCT images. With this method, by interactively navigating three-dimensional (3-D) OCT images, the user first manually defines user-defined (or sketched) lines at regions where the retinal layers appear very irregular for which the automatic segmentation method often fails to provide satisfactory results. The algorithm is then guided by these sketched lines to trace the entire 3-D retinal layer and anatomical features by the use of novel layer and edge detectors that are based on robust likelihood estimation. The layer and edge boundaries are finally obtained to achieve segmentation. Segmentation of retinal layers in mouse and human OCT images demonstrates the reliability and efficiency of the proposed user-guided segmentation method. PMID:25147962

  7. Level set method with automatic selective local statistics for brain tumor segmentation in MR images.

    PubMed

    Thapaliya, Kiran; Pyun, Jae-Young; Park, Chun-Su; Kwon, Goo-Rak

    2013-01-01

    The level set approach is a powerful tool for segmenting images. This paper proposes a method for segmenting brain tumors in MR images. A new signed pressure function (SPF) that can efficiently stop the contours at weak or blurred edges is introduced. The local statistics of the different objects present in the MR images were calculated, and using these local statistics the tumor objects were identified among the different objects. In level set methods, the calculation of the parameters is a challenging task; here, the different parameters were calculated automatically for different types of images. The basic thresholding value was updated and adjusted automatically for different MR images, and this thresholding value was used to calculate the different parameters in the proposed algorithm. The proposed algorithm was tested on magnetic resonance images of the brain for tumor segmentation and its performance was evaluated visually and quantitatively. Numerical experiments on brain tumor images highlighted the efficiency and robustness of this method. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
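    For orientation, the sketch below shows the classic global form of a signed pressure function used in region-based level set models; the paper's own SPF replaces the global inside/outside means with automatically selected local statistics, which are not reproduced here, and the sign convention (negative phi inside the contour) is an assumption.

```python
import numpy as np

def signed_pressure_function(image, phi):
    """Generic signed pressure function (SPF) for region-based level sets.

    image : 2D intensity image
    phi   : current level set function (assumed negative inside the contour)
    The SPF is positive where the intensity is closer to the outside mean
    and negative where it is closer to the inside mean, so it pushes the
    contour towards object boundaries.
    """
    c1 = image[phi < 0].mean()     # mean intensity inside the contour
    c2 = image[phi >= 0].mean()    # mean intensity outside the contour
    spf = image - (c1 + c2) / 2.0
    return spf / (np.max(np.abs(spf)) + 1e-12)
```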

  8. A Complete System for Automatic Extraction of Left Ventricular Myocardium From CT Images Using Shape Segmentation and Contour Evolution

    PubMed Central

    Zhu, Liangjia; Gao, Yi; Appia, Vikram; Yezzi, Anthony; Arepalli, Chesnal; Faber, Tracy; Stillman, Arthur; Tannenbaum, Allen

    2014-01-01

    The left ventricular myocardium plays a key role in the entire circulation system and an automatic delineation of the myocardium is a prerequisite for most of the subsequent functional analysis. In this paper, we present a complete system for an automatic segmentation of the left ventricular myocardium from cardiac computed tomography (CT) images using the shape information from images to be segmented. The system follows a coarse-to-fine strategy by first localizing the left ventricle and then deforming the myocardial surfaces of the left ventricle to refine the segmentation. In particular, the blood pool of a CT image is extracted and represented as a triangulated surface. Then, the left ventricle is localized as a salient component on this surface using geometric and anatomical characteristics. After that, the myocardial surfaces are initialized from the localization result and evolved by applying forces from the image intensities with a constraint based on the initial myocardial surface locations. The proposed framework has been validated on 34 human and 12 pig CT images, and its robustness and accuracy are demonstrated. PMID:24723531

  9. Automatic neutron dosimetry system based on fluorescent nuclear track detector technology.

    PubMed

    Akselrod, M S; Fomenko, V V; Bartz, J A; Haslett, T L

    2014-10-01

    For the first time, the authors are describing an automatic fluorescent nuclear track detector (FNTD) reader for neutron dosimetry. FNTD is a luminescent integrating type of detector made of aluminium oxide crystals that does not require electronics or batteries during irradiation. Non-destructive optical readout of the detector is performed using a confocal laser scanning fluorescence imaging with near-diffraction limited resolution. The fully automatic table-top reader allows one to load up to 216 detectors on a tray, read their engraved IDs using a CCD camera and optical character recognition, scan and process simultaneously two types of images in fluorescent and reflected laser light contrast to eliminate false-positive tracks related to surface and volume crystal imperfections. The FNTD dosimetry system allows one to measure neutron doses from 0.1 mSv to 20 Sv and covers neutron energies from thermal to 20 MeV. The reader is characterised by a robust, compact optical design, fast data processing electronics and user-friendly software. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. User-guided segmentation for volumetric retinal optical coherence tomography images.

    PubMed

    Yin, Xin; Chao, Jennifer R; Wang, Ruikang K

    2014-08-01

    Despite the existence of automatic segmentation techniques, trained graders still rely on manual segmentation to provide retinal layers and features from clinical optical coherence tomography (OCT) images for accurate measurements. To bridge the gap between this time-consuming need of manual segmentation and currently available automatic segmentation techniques, this paper proposes a user-guided segmentation method to perform the segmentation of retinal layers and features in OCT images. With this method, by interactively navigating three-dimensional (3-D) OCT images, the user first manually defines user-defined (or sketched) lines at regions where the retinal layers appear very irregular for which the automatic segmentation method often fails to provide satisfactory results. The algorithm is then guided by these sketched lines to trace the entire 3-D retinal layer and anatomical features by the use of novel layer and edge detectors that are based on robust likelihood estimation. The layer and edge boundaries are finally obtained to achieve segmentation. Segmentation of retinal layers in mouse and human OCT images demonstrates the reliability and efficiency of the proposed user-guided segmentation method.

  11. Automatic Diabetic Macular Edema Detection in Fundus Images Using Publicly Available Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giancardo, Luca; Meriaudeau, Fabrice; Karnowski, Thomas Paul

    2011-01-01

    Diabetic macular edema (DME) is a common vision-threatening complication of diabetic retinopathy. In a large-scale screening environment DME can be assessed by detecting exudates (a type of bright lesion) in fundus images. In this work, we introduce a new methodology for diagnosis of DME using a novel set of features based on colour, wavelet decomposition and automatic lesion segmentation. These features are employed to train a classifier able to automatically diagnose DME. We present a new publicly available dataset with ground-truth data containing 169 patients from various ethnic groups and levels of DME. This and two other publicly available datasets are employed to evaluate our algorithm. We are able to achieve diagnosis performance comparable to retina experts on the MESSIDOR dataset (an independently labelled dataset with 1200 images) with cross-dataset testing. Our algorithm is robust to segmentation uncertainties, does not need ground truth at the lesion level, and is very fast, generating a diagnosis in an average of 4.4 seconds per image on a 2.6 GHz platform with an unoptimised Matlab implementation.

  12. Exploiting Acoustic and Syntactic Features for Automatic Prosody Labeling in a Maximum Entropy Framework

    PubMed Central

    Sridhar, Vivek Kumar Rangarajan; Bangalore, Srinivas; Narayanan, Shrikanth S.

    2009-01-01

    In this paper, we describe a maximum entropy-based automatic prosody labeling framework that exploits both language and speech information. We apply the proposed framework to both prominence and phrase structure detection within the Tones and Break Indices (ToBI) annotation scheme. Our framework utilizes novel syntactic features in the form of supertags and a quantized acoustic–prosodic feature representation that is similar to linear parameterizations of the prosodic contour. The proposed model is trained discriminatively and is robust in the selection of appropriate features for the task of prosody detection. The proposed maximum entropy acoustic–syntactic model achieves pitch accent and boundary tone detection accuracies of 86.0% and 93.1% on the Boston University Radio News corpus, and 79.8% and 90.3% on the Boston Directions corpus. The phrase structure detection through prosodic break index labeling provides accuracies of 84% and 87% on the two corpora, respectively. The reported results are significantly better than those previously reported and demonstrate the strength of the maximum entropy model in jointly modeling simple lexical, syntactic, and acoustic features for automatic prosody labeling. PMID:19603083

  13. A Critical Review of the Literature on Attentional Bias in Cocaine Use Disorder and Suggestions for Future Research

    PubMed Central

    Leeman, Robert F.; Robinson, Cendrine D.; Waters, Andrew J.; Sofuoglu, Mehmet

    2014-01-01

    Cocaine use disorder (CUD) continues to be an important public health problem and novel approaches are needed to improve the effectiveness of treatments for CUD. Recently, there has been increased interest in the role of automatic cognition such as attentional bias (AB) in addictive behaviors and AB has been proposed to be a cognitive marker for addictions. Automatic cognition may be particularly relevant to CUD as there is evidence for particularly robust AB to cocaine cues and strong relationships to craving for cocaine and other illicit drugs. Further, the wide-ranging cognitive deficits (e.g., in response inhibition and working memory) evinced by many cocaine users enhance the potential importance of interventions targeting automatic cognition in this population. In the current paper, we discuss relevant addiction theories, followed by a review of studies that examined AB in CUD. We then consider the neural substrates of attentional bias including human neuroimaging, neurobiological and pharmacological studies. We conclude with a discussion of research gaps and future directions for attentional bias in CUD. PMID:25222545

  14. Automatic classification of canine PRG neuronal discharge patterns using K-means clustering.

    PubMed

    Zuperku, Edward J; Prkic, Ivana; Stucke, Astrid G; Miller, Justin R; Hopp, Francis A; Stuth, Eckehard A

    2015-02-01

    Respiratory-related neurons in the parabrachial-Kölliker-Fuse (PB-KF) region of the pons play a key role in the control of breathing. The neuronal activities of these pontine respiratory group (PRG) neurons exhibit a variety of inspiratory (I), expiratory (E), phase-spanning and non-respiratory related (NRM) discharge patterns. Due to the variety of patterns, it can be difficult to classify them into distinct subgroups according to their discharge contours. This report presents a method that automatically classifies neurons according to their discharge patterns and derives an average subgroup contour for each class. It is based on the K-means clustering technique and is implemented via SigmaPlot User-Defined transform scripts. The discharge patterns of 135 canine PRG neurons were classified into seven distinct subgroups. Additional methods for choosing the optimal number of clusters are described. Analysis of the results suggests that the K-means clustering method offers a robust, objective means of both automatically categorizing neuron patterns and establishing the underlying archetypical contours of subtypes based on the discharge patterns of groups of neurons. Published by Elsevier B.V.
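    The same analysis is easy to reproduce outside SigmaPlot; the sketch below uses scikit-learn, with synthetic contours standing in for the recorded discharge patterns and the silhouette score as one possible criterion (among several) for choosing the number of clusters.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Each row is one neuron's cycle-triggered average discharge contour
# (synthetic placeholder for the 135 canine PRG neurons).
rng = np.random.default_rng(1)
contours = rng.normal(size=(135, 50))

best_k, best_score = 2, -1.0
for k in range(2, 10):                      # try candidate cluster counts
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(contours)
    score = silhouette_score(contours, labels)
    if score > best_score:
        best_k, best_score = k, score

labels = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(contours)
# Average contour of each subgroup = the archetypical discharge pattern
archetypes = np.vstack([contours[labels == c].mean(axis=0) for c in range(best_k)])
```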

  15. Automatic Marker-free Longitudinal Infrared Image Registration by Shape Context Based Matching and Competitive Winner-guided Optimal Corresponding

    PubMed Central

    Lee, Chia-Yen; Wang, Hao-Jen; Lai, Jhih-Hao; Chang, Yeun-Chung; Huang, Chiun-Sheng

    2017-01-01

    Long-term comparison of infrared images can facilitate the assessment of breast cancer tissue growth and early tumor detection, for which longitudinal infrared image registration is a necessary step. However, it is hard to keep markers attached to the body surface for weeks, and rather difficult to detect anatomic fiducial markers and match them in the infrared images during the registration process. The proposed automatic longitudinal infrared registration algorithm develops an automatic vascular intersection detection method and establishes feature descriptors by shape context to achieve robust matching, as well as to obtain control points for the deformation model. In addition, a competitive winner-guided mechanism is developed for optimal correspondence. The proposed algorithm is evaluated in two ways. Results show that the algorithm quickly leads to accurate image registration and that its effectiveness is superior to manual registration, with a mean error of 0.91 pixels. These findings demonstrate that the proposed registration algorithm is reasonably accurate and provides a novel way of extracting a greater amount of useful data from infrared images. PMID:28145474

  16. Spectral saliency via automatic adaptive amplitude spectrum analysis

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Dai, Jialun; Zhu, Yafei; Zheng, Haiyong; Qiao, Xiaoyan

    2016-03-01

    Suppressing nonsalient patterns by smoothing the amplitude spectrum at an appropriate scale has been shown to detect visual saliency effectively in the frequency domain. Different filter scales are required for different types of salient objects. We observe that the optimal scale for smoothing the amplitude spectrum shares a specific relation with the size of the salient region. Based on this observation and on bottom-up saliency detection characterized by spectrum scale-space analysis for natural images, we propose to detect visual saliency, especially for salient objects of different sizes and locations, via automatic adaptive amplitude spectrum analysis. We not only provide a new criterion for automatic optimal scale selection but also preserve the saliency maps corresponding to different salient objects with meaningful saliency information by adaptive weighted combination. Quantitative and qualitative performance is evaluated with three different kinds of metrics on the four most widely used datasets and one up-to-date large-scale dataset. The experimental results validate that our method outperforms the existing state-of-the-art saliency models for predicting human eye fixations in terms of accuracy and robustness.
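    As background, the sketch below implements the closely related spectral-residual flavour of frequency-domain saliency, in which the log-amplitude spectrum is smoothed with a Gaussian at a fixed, hand-chosen scale; the paper's contribution, smoothing the amplitude spectrum over a scale space and selecting and combining scales automatically, is not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_saliency(image, sigma_spectrum=3.0, sigma_map=4.0):
    """Saliency by smoothing the amplitude spectrum (illustrative sketch).

    image : 2D greyscale image as a float array
    Smoothing the log-amplitude spectrum suppresses repeated (non-salient)
    patterns; the residual spectrum is transformed back to the image domain
    and squared to form a saliency map. sigma_spectrum plays the role of the
    filter scale that the paper selects automatically.
    """
    f = np.fft.fft2(image)
    log_amp = np.log(np.abs(f) + 1e-12)
    phase = np.angle(f)
    residual = log_amp - gaussian_filter(log_amp, sigma_spectrum)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(sal, sigma_map)
```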

  17. Robust augmented reality registration method for localization of solid organs' tumors using CT-derived virtual biomechanical model and fluorescent fiducials.

    PubMed

    Kong, Seong-Ho; Haouchine, Nazim; Soares, Renato; Klymchenko, Andrey; Andreiuk, Bohdan; Marques, Bruno; Shabat, Galyna; Piechaud, Thierry; Diana, Michele; Cotin, Stéphane; Marescaux, Jacques

    2017-07-01

    Augmented reality (AR) is the fusion of computer-generated and real-time images. AR can be used in surgery as a navigation tool, by creating a patient-specific virtual model through 3D software manipulation of DICOM imaging (e.g., CT scan). The virtual model can be superimposed to real-time images enabling transparency visualization of internal anatomy and accurate localization of tumors. However, the 3D model is rigid and does not take into account inner structures' deformations. We present a concept of automated AR registration, while the organs undergo deformation during surgical manipulation, based on finite element modeling (FEM) coupled with optical imaging of fluorescent surface fiducials. Two 10 × 1 mm wires (pseudo-tumors) and six 10 × 0.9 mm fluorescent fiducials were placed in ex vivo porcine kidneys (n = 10). Biomechanical FEM-based models were generated from CT scan. Kidneys were deformed and the shape changes were identified by tracking the fiducials, using a near-infrared optical system. The changes were registered automatically with the virtual model, which was deformed accordingly. Accuracy of prediction of pseudo-tumors' location was evaluated with a CT scan in the deformed status (ground truth). In vivo: fluorescent fiducials were inserted under ultrasound guidance in the kidney of one pig, followed by a CT scan. The FEM-based virtual model was superimposed on laparoscopic images by automatic registration of the fiducials. Biomechanical models were successfully generated and accurately superimposed on optical images. The mean measured distance between the estimated tumor by biomechanical propagation and the scanned tumor (ground truth) was 0.84 ± 0.42 mm. All fiducials were successfully placed in in vivo kidney and well visualized in near-infrared mode enabling accurate automatic registration of the virtual model on the laparoscopic images. Our preliminary experiments showed the potential of a biomechanical model with fluorescent fiducials to propagate the deformation of solid organs' surface to their inner structures including tumors with good accuracy and automatized robust tracking.

  18. Automatic Localization of Vertebral Levels in X-Ray Fluoroscopy Using 3D-2D Registration: A Tool to Reduce Wrong-Site Surgery

    PubMed Central

    Otake, Y.; Schafer, S.; Stayman, J. W.; Zbijewski, W.; Kleinszig, G.; Graumann, R.; Khanna, A. J.; Siewerdsen, J. H.

    2012-01-01

    Surgical targeting of the incorrect vertebral level (“wrong-level” surgery) is among the more common wrong-site surgical errors, attributed primarily to a lack of uniquely identifiable radiographic landmarks in the mid-thoracic spine. Conventional localization method involves manual counting of vertebral bodies under fluoroscopy, is prone to human error, and carries additional time and dose. We propose an image registration and visualization system (referred to as LevelCheck), for decision support in spine surgery by automatically labeling vertebral levels in fluoroscopy using a GPU-accelerated, intensity-based 3D-2D (viz., CT-to-fluoroscopy) registration. A gradient information (GI) similarity metric and CMA-ES optimizer were chosen due to their robustness and inherent suitability for parallelization. Simulation studies involved 10 patient CT datasets from which 50,000 simulated fluoroscopic images were generated from C-arm poses selected to approximate C-arm operator and positioning variability. Physical experiments used an anthropomorphic chest phantom imaged under real fluoroscopy. The registration accuracy was evaluated as the mean projection distance (mPD) between the estimated and true center of vertebral levels. Trials were defined as successful if the estimated position was within the projection of the vertebral body (viz., mPD < 5mm). Simulation studies showed a success rate of 99.998% (1 failure in 50,000 trials) and computation time of 4.7 sec on a midrange GPU. Analysis of failure modes identified cases of false local optima in the search space arising from longitudinal periodicity in vertebral structures. Physical experiments demonstrated robustness of the algorithm against quantum noise and x-ray scatter. The ability to automatically localize target anatomy in fluoroscopy in near-real-time could be valuable in reducing the occurrence of wrong-site surgery while helping to reduce radiation exposure. The method is applicable beyond the specific case of vertebral labeling, since any structure defined in pre-operative (or intra-operative) CT or cone-beam CT can be automatically registered to the fluoroscopic scene. PMID:22864366

  19. The OPEnSampler: A Low-Cost, Low-Weight, Customizable and Modular Open Source 24-Unit Automatic Water Sampler

    NASA Astrophysics Data System (ADS)

    Nelke, M.; Selker, J. S.; Udell, C.

    2017-12-01

    Reliable automatic water samplers allow repetitive sampling of various water sources over long periods of time without requiring a researcher on site, reducing human error as well as the monetary and time costs of traveling to the field, particularly when the scale of the sample period is hours or days. The high fixed cost of buying a commercial sampler with little customizability can be a barrier to research requiring repetitive samples, such as the analysis of septic water pre- and post-treatment. DIY automatic samplers proposed in the past sacrifice maximum volume, customizability, or scope of applications, among other features, in exchange for a lower net cost. The purpose of this project was to develop a low-cost, highly customizable, robust water sampler that is capable of sampling many sources of water for various analytes. A lightweight aluminum-extrusion frame was designed and assembled, chosen for its mounting system, strength, and low cost. Water is drawn from two peristaltic pumps through silicone tubing and directed into 24 foil-lined 250mL bags using solenoid valves. A programmable Arduino Uno microcontroller connected to a circuit board communicates with a battery operated real-time clock, initiating sampling stages. Period and volume settings are programmable in-field by the user via serial commands. The OPEnSampler is an open design, allowing the user to decide what components to use and the modular theme of the frame allows fast mounting of new manufactured or 3D printed components. The 24-bag system weighs less than 10kg and the material cost is under $450. Up to 6L of sample water can be drawn at a rate of 100mL/minute in either direction. Faster flowrates are achieved by using more powerful peristaltic pumps. Future design changes could allow a greater maximum volume by filling the unused space with more containers and adding GSM communications to send real time status information.

  20. CNN universal machine as classification platform: an ART-like clustering algorithm.

    PubMed

    Bálya, David

    2003-12-01

    Fast and robust classification of feature vectors is a crucial task in a number of real-time systems. A cellular neural/nonlinear network universal machine (CNN-UM) can be very efficient as a feature detector. The next step is to post-process the results for object recognition. This paper shows how a robust classification scheme based on adaptive resonance theory (ART) can be mapped to the CNN-UM. Moreover, this mapping is general enough to include different types of feed-forward neural networks. The designed analogic CNN algorithm is capable of classifying the extracted feature vectors while keeping the advantages of ART networks, such as robust, plastic and fault-tolerant behavior. An analogic algorithm is presented for unsupervised classification with tunable sensitivity and automatic new class creation, and the algorithm is extended to supervised classification. The presented binary feature vector classification is implemented on existing standard CNN-UM chips for fast classification. The experimental evaluation shows promising performance, with 100% accuracy on the training set.
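    A minimal software sketch of the ART-style behaviour described above (vigilance-controlled matching, prototype refinement, automatic new class creation) is given below for binary feature vectors; it is a didactic simplification in Python, not the analogic CNN-UM implementation, and it omits the ART choice function.

```python
import numpy as np

def art_like_cluster(vectors, vigilance=0.7):
    """Minimal ART-style unsupervised clustering of binary feature vectors.

    vectors   : (N, d) array of 0/1 features
    vigilance : match threshold in [0, 1]; higher values create more classes
    Each vector is compared with the existing prototypes; if the best match
    passes the vigilance test, that prototype is refined (fast learning by
    logical AND), otherwise a new class is created automatically.
    """
    prototypes, labels = [], []
    for v in vectors.astype(bool):
        best, best_match = None, -1.0
        for k, p in enumerate(prototypes):
            match = (v & p).sum() / max(v.sum(), 1)
            if match > best_match:
                best, best_match = k, match
        if best is not None and best_match >= vigilance:
            prototypes[best] = prototypes[best] & v     # refine the winning class
            labels.append(best)
        else:
            prototypes.append(v.copy())                 # automatic new class creation
            labels.append(len(prototypes) - 1)
    return np.array(labels), prototypes
```

    The vigilance parameter plays the role of the tunable sensitivity mentioned in the abstract: raising it makes the match test stricter and so produces more, narrower classes.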

  1. Automatic detection of left and right ventricles from CTA enables efficient alignment of anatomy with myocardial perfusion data.

    PubMed

    Piccinelli, Marina; Faber, Tracy L; Arepalli, Chesnal D; Appia, Vikram; Vinten-Johansen, Jakob; Schmarkey, Susan L; Folks, Russell D; Garcia, Ernest V; Yezzi, Anthony

    2014-02-01

    Accurate alignment between cardiac CT angiographic studies (CTA) and nuclear perfusion images is crucial for improved diagnosis of coronary artery disease. This study evaluated in an animal model the accuracy of a CTA fully automated biventricular segmentation algorithm, a necessary step for automatic and thus efficient PET/CT alignment. Twelve pigs with acute infarcts were imaged using Rb-82 PET and 64-slice CTA. Post-mortem myocardium mass measurements were obtained. Endocardial and epicardial myocardial boundaries were manually and automatically detected on the CTA and both segmentations used to perform PET/CT alignment. To assess the segmentation performance, image-based myocardial masses were compared to experimental data; the hand-traced profiles were used as a reference standard to assess the global and slice-by-slice robustness of the automated algorithm in extracting myocardium, LV, and RV. Mean distances between the automated and the manual 3D segmented surfaces were computed. Finally, differences in rotations and translations between the manual and automatic surfaces were estimated post-PET/CT alignment. The largest, smallest, and median distances between interactive and automatic surfaces averaged 1.2 ± 2.1, 0.2 ± 1.6, and 0.7 ± 1.9 mm. The average angular and translational differences in CT/PET alignments were 0.4°, -0.6°, and -2.3° about x, y, and z axes, and 1.8, -2.1, and 2.0 mm in x, y, and z directions. Our automatic myocardial boundary detection algorithm creates surfaces from CTA that are similar in accuracy and provide similar alignments with PET as those obtained from interactive tracing. Specific difficulties in a reliable segmentation of the apex and base regions will require further improvements in the automated technique.

  2. Ultra-Wideband EMI Sensing: Non-Metallic Target Detection and Automatic Classification of Unexploded Ordnance

    NASA Astrophysics Data System (ADS)

    Sigman, John Brevard

    Buried explosive hazards present a pressing problem worldwide. Millions of acres and thousands of sites are contaminated in the United States alone [1, 2]. There are three categories of explosive hazards: metallic, intermediate-electrical conducting (IEC), and non-conducting targets. Metallic target detection and classification by electromagnetic (EM) signature has been the subject of research for many years. Key to the success of this research is modern multi-static Electromagnetic Induction (EMI) sensors, which are able to measure the wideband EMI response from metallic buried targets. However, no hardware solutions exist which can characterize IEC and non-conducting targets. While high-conducting metallic targets exhibit a quadrature peak response for frequencies in a traditional EMI regime under 100 kHz, the response of intermediate-conducting objects manifests at higher frequencies, between 100 kHz and 15 MHz. In addition to high-quality electromagnetic sensor data and robust electromagnetic models, a classification procedure is required to discriminate Targets of Interest (TOI) from clutter. Currently, costly human experts are used for this task. This expense and effort can be spared by using statistical signal processing and machine learning. This thesis has two main parts. In the first part, we explore using the high frequency EMI (HFEMI) band (100 kHz-15 MHz) for detection of carbon fiber UXO, voids, and of materials with characteristics that may be associated with improvised explosive devices (IED). We constructed an HFEMI sensing instrument, and apply the techniques of metal detection to sensing in a band of frequencies which are the transition between the induction and radar bands. In this transition domain, physical considerations and technological issues arise that cannot be solved via the approaches used in either of the bracketing lower and higher frequency ranges. In the second half of this thesis, we present a procedure for automatic classification of UXO. For maximum generality, our algorithm is robust and can handle sparse training examples of multi-class data. This procedure uses an unsupervised starter, semi-supervised techniques to gather training data, and concludes with supervised learning until all TOI are found. Additionally, an inference method for estimating the number of remaining true positives from a partial Receiver Operating Characteristic (ROC) curve is presented and applied to live-site dig histories.

  3. Potential of dynamically harmonized Fourier transform ion cyclotron resonance cell for high-throughput metabolomics fingerprinting: control of data quality.

    PubMed

    Habchi, Baninia; Alves, Sandra; Jouan-Rimbaud Bouveresse, Delphine; Appenzeller, Brice; Paris, Alain; Rutledge, Douglas N; Rathahao-Paris, Estelle

    2018-01-01

    Due to the presence of pollutants in the environment and food, the assessment of human exposure is required. This necessitates high-throughput approaches enabling large-scale analysis and, as a consequence, the use of high-performance analytical instruments to obtain highly informative metabolomic profiles. In this study, direct introduction mass spectrometry (DIMS) was performed using a Fourier transform ion cyclotron resonance (FT-ICR) instrument equipped with a dynamically harmonized cell. Data quality was evaluated based on mass resolving power (RP), mass measurement accuracy, and ion intensity drifts from the repeated injections of a quality control sample (QC) along the analytical process. The large DIMS data size entails the use of bioinformatic tools for the automatic selection of common ions found in all QC injections and for robustness assessment and correction of possible technical drifts. RP values greater than 10^6 and mass measurement accuracies lower than 1 ppm were obtained in broadband mode, resulting in the detection of isotopic fine structure. Hence, a very accurate relative isotopic mass defect (RΔm) value was calculated. This significantly reduces the number of elemental composition (EC) candidates and greatly improves compound annotation. A very satisfactory estimate of the repeatability of both peak intensity and mass measurement was demonstrated. Although a non-negligible ion intensity drift was observed for negative ion mode data, a normalization procedure was easily applied to correct this phenomenon. This study illustrates the performance and robustness of the dynamically harmonized FT-ICR cell for performing large-scale high-throughput metabolomic analyses in routine conditions. Graphical abstract: Analytical performance of an FT-ICR instrument equipped with a dynamically harmonized cell.

  4. An intergrated image matching algorithm and its application in the production of lunar map based on Chang'E-2 images

    NASA Astrophysics Data System (ADS)

    Wang, F.; Ren, X.; Liu, J.; Li, C.

    2012-12-01

    An accurate topographic map is a requisite for nearly every phase of research on the lunar surface, as well as an essential tool for spacecraft mission planning and operation. Automatic image matching is a key component in this process that can ensure both quality and efficiency in the production of a digital topographic map for the whole lunar coverage. It also provides the basis for lunar photogrammetric block adjustment. Image matching is relatively easy when image texture conditions are good. However, on lunar images with characteristics such as constantly changing lighting conditions, large rotation angles, scarce or homogeneous texture and low image contrast, it becomes a difficult and challenging job. Thus, we require a robust algorithm that is capable of dealing with lighting effects and image deformation to fulfill this task. In order to obtain a comprehensive review of the currently dominant feature point extraction operators and test whether they are suitable for lunar images, we applied several operators, such as Harris, Forstner, Moravec and SIFT, to images from the Chang'E-2 spacecraft. We found that SIFT (Scale Invariant Feature Transform) is a scale-invariant interest point detector that provides robustness against errors caused by image distortions from scale, orientation or illumination condition changes, and its capability in detecting blob-like interest points suits the image characteristics of Chang'E-2. However, the unevenly distributed and low-accuracy matching results cannot meet the practical requirements of lunar photogrammetry. In contrast, some high-precision corner detectors, such as Harris, Forstner and Moravec, are limited by their sensitivity to geometric rotation. Therefore, this paper proposes a least squares matching algorithm that combines the advantages of both local feature detectors and corner detectors. We tested this method at several sites. The accuracy assessment shows that the overall matching error is within 0.3 pixel and the matching reliability can reach 98%, which proves its robustness. This method has been successfully applied to over 700 scenes of lunar images covering the entire Moon, finding corresponding pixels in pairs of images from adjacent tracks and aiding automatic lunar image mosaicking. The completion of the 7 meter resolution lunar map shows the promise of this least squares matching algorithm in applications with a large quantity of images to be processed.
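    The candidate-matching stage described above can be sketched with OpenCV as below (assuming a build with SIFT support); the ratio-test threshold is an assumption, and the least squares matching refinement that the paper adds on top of such candidate matches is not reproduced here.

```python
import cv2
import numpy as np

def match_keypoints(img1, img2, ratio=0.8):
    """SIFT matching with Lowe's ratio test (starting point for LSM refinement).

    img1, img2 : greyscale uint8 images, e.g. two adjacent-orbit frames
    Returns two (N, 2) arrays of corresponding pixel coordinates that could
    subsequently be refined by least squares matching.
    """
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pts1, pts2 = [], []
    for m, n in matcher.knnMatch(des1, des2, k=2):
        if m.distance < ratio * n.distance:        # keep distinctive matches only
            pts1.append(kp1[m.queryIdx].pt)
            pts2.append(kp2[m.trainIdx].pt)
    return np.float32(pts1), np.float32(pts2)
```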

  5. Return Difference Feedback Design for Robust Uncertainty Tolerance in Stochastic Multivariable Control Systems.

    DTIC Science & Technology

    1982-11-01

    Only OCR fragments of the DTIC report documentation page are available; no coherent abstract is recoverable. Identifiable details: performing organization University of Southern California, Los Angeles, Department of Electrical Engineering; subject terms: systems theory; control; feedback; automatic control.

  6. The Rotated Speeded-Up Robust Features Algorithm (R-SURF)

    DTIC Science & Technology

    2014-06-01

    Only OCR fragments of the thesis front matter, executive summary and reference list are available; no coherent abstract is recoverable. Identifiable fragments mention RGB and YUV colour models and cite J. Sivic and A. Zisserman, "Efficient visual search of videos cast as text retrieval."

  7. Turbomachinery Airfoil Design Optimization Using Differential Evolution

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    An aerodynamic design optimization procedure that is based on an evolutionary algorithm known as Differential Evolution is described. Differential Evolution is a simple, fast, and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems, including highly nonlinear systems with discontinuities and multiple local optima. The method is combined with a Navier-Stokes solver that evaluates the various intermediate designs and provides inputs to the optimization procedure. An efficient constraint handling mechanism is also incorporated. Results are presented for the inverse design of a turbine airfoil from a modern jet engine. The capability of the method to search large design spaces and obtain the optimal airfoils in an automatic fashion is demonstrated. Substantial reductions in the overall computing time requirements are achieved by using the algorithm in conjunction with neural networks.
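    A generic DE/rand/1/bin loop is sketched below to show the mutation, crossover and greedy selection steps; the control parameters are illustrative defaults, and the objective (a toy quadratic in the usage line) would, in the paper's setting, wrap a Navier-Stokes evaluation or a neural-network surrogate.

```python
import numpy as np

def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9, n_gen=200):
    """Minimal DE/rand/1/bin optimizer (illustrative sketch).

    objective : function mapping a parameter vector to a scalar cost
    bounds    : (d, 2) array of lower/upper limits per design variable
    F, CR     : differential weight and crossover rate
    """
    rng = np.random.default_rng(0)
    d = len(bounds)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = lo + rng.random((pop_size, d)) * (hi - lo)
    cost = np.array([objective(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)          # mutation
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True                      # at least one gene from the mutant
            trial = np.where(cross, mutant, pop[i])            # binomial crossover
            trial_cost = objective(trial)
            if trial_cost <= cost[i]:                          # greedy selection
                pop[i], cost[i] = trial, trial_cost
    return pop[np.argmin(cost)], cost.min()
```

    For example, `differential_evolution(lambda x: np.sum(x ** 2), np.array([[-5.0, 5.0]] * 4))` returns a point near the origin of a four-dimensional design space.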

  8. Phase-unwrapping algorithm by a rounding-least-squares approach

    NASA Astrophysics Data System (ADS)

    Juarez-Salazar, Rigoberto; Robledo-Sanchez, Carlos; Guerrero-Sanchez, Fermin

    2014-02-01

    A simple and efficient phase-unwrapping algorithm based on a rounding procedure and a global least-squares minimization is proposed. Instead of processing the gradient of the wrapped phase, this algorithm operates on the gradient of the phase jumps by a robust and noniterative scheme. Thus, the residue-spreading and over-smoothing effects are reduced. The algorithm's performance is compared with four well-known phase-unwrapping methods: minimum cost network flow (MCNF), fast Fourier transform (FFT), quality-guided, and branch-cut. A computer simulation and experimental results show that the proposed algorithm reaches a higher accuracy level than the MCNF method with a low computing time similar to that of the FFT phase-unwrapping method. Moreover, since the proposed algorithm is simple, fast, and free of user-set parameters, it could be used in metrological interferometric and fringe-projection automatic real-time applications.
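    For context, the sketch below implements the standard unweighted least-squares unwrapper that solves a Poisson equation for the phase via the type-II cosine transform; it shows the baseline that such rounding procedures build on, and it is not the authors' rounding-least-squares algorithm itself.

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(p):
    """Wrap phase values into (-pi, pi]."""
    return (p + np.pi) % (2 * np.pi) - np.pi

def unwrap_least_squares(psi):
    """Unweighted least-squares unwrapping of a 2D wrapped phase psi (radians)."""
    M, N = psi.shape
    # Wrapped forward differences (implicitly zero at the far boundaries)
    dx = np.zeros_like(psi)
    dy = np.zeros_like(psi)
    dx[:, :-1] = wrap(np.diff(psi, axis=1))
    dy[:-1, :] = wrap(np.diff(psi, axis=0))
    # Divergence of the wrapped gradient field (right-hand side of the Poisson equation)
    rho = dx.copy()
    rho[:, 1:] -= dx[:, :-1]
    rho_y = dy.copy()
    rho_y[1:, :] -= dy[:-1, :]
    rho += rho_y
    # Solve the discrete Neumann Poisson problem with the 2D DCT
    i = np.arange(M)[:, None]
    j = np.arange(N)[None, :]
    denom = 2.0 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2.0)
    denom[0, 0] = 1.0                      # avoid dividing the DC term by zero
    phi_hat = dctn(rho, norm="ortho") / denom
    phi_hat[0, 0] = 0.0                    # the unwrapped phase is defined up to a constant
    return idctn(phi_hat, norm="ortho")
```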

  9. An image processing approach to analyze morphological features of microscopic images of muscle fibers.

    PubMed

    Comin, Cesar Henrique; Xu, Xiaoyin; Wang, Yaming; Costa, Luciano da Fontoura; Yang, Zhong

    2014-12-01

    We present an image processing approach to automatically analyze duo-channel microscopic images of muscle fiber nuclei and cytoplasm. Nuclei and cytoplasm play a critical role in determining the health and functioning of muscle fibers, as changes in nuclei and cytoplasm manifest in many diseases such as muscular dystrophy and hypertrophy. Quantitative evaluation of muscle fiber nuclei and cytoplasm is thus of great importance to researchers in musculoskeletal studies. The proposed computational approach consists of image processing steps to segment and delineate the cytoplasm and identify nuclei in the two-channel images. Morphological operations such as skeletonization are applied to extract the length of the cytoplasm for quantification. We tested the approach on real images and found that it can achieve high accuracy, objectivity, and robustness. Copyright © 2014 Elsevier Ltd. All rights reserved.
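    The length-measurement idea can be sketched with scikit-image as below; the pixel calibration value and the plain pixel count (which ignores the extra length of diagonal steps) are illustrative simplifications rather than the paper's exact quantification.

```python
from skimage.morphology import skeletonize

def cytoplasm_length(mask, pixel_size_um=1.0):
    """Estimate fiber length from a binary cytoplasm mask (illustrative).

    mask          : 2D boolean array, True inside the segmented cytoplasm
    pixel_size_um : physical size of one pixel (assumed calibration value)
    Skeletonization reduces the segmented region to a one-pixel-wide
    centerline; counting skeleton pixels gives a simple length estimate.
    """
    skeleton = skeletonize(mask)
    return skeleton.sum() * pixel_size_um, skeleton
```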

  10. APPHi: Automated Photometry Pipeline for High Cadence Large Volume Data

    NASA Astrophysics Data System (ADS)

    Sánchez, E.; Castro, J.; Silva, J.; Hernández, J.; Reyes, M.; Hernández, B.; Alvarez, F.; García T.

    2018-04-01

    APPHi (Automated Photometry Pipeline) carries out aperture and differential photometry of TAOS-II project data. It is computationally efficient and can also be used with other astronomical wide-field image data. APPHi works with large volumes of data and handles both FITS and HDF5 formats. Due to the large number of stars that the software has to handle in an enormous number of frames, it is optimized to automatically find the best values for the parameters needed to carry out the photometry, such as the mask size for the aperture, the size of the window for extraction of a single star, and the number of counts for the threshold for detecting a faint star. Although intended to work with TAOS-II data, APPHi can analyze any set of astronomical images and is a robust and versatile tool for performing stellar aperture and differential photometry.

  11. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery.

    PubMed

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-07-19

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of large area threshold prohibits detection of small buildings and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly area and high vegetation. However, the empirical tuning of large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes with a transparent or opaque roof can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting and all parameters are set automatically from the data. The other post processing stages including variance, point density and shadow elimination are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets by using the object and pixel based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roof. When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing methods in various evaluation metrics.
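    The gradient analysis at the heart of the method can be illustrated as below: within a local window, a planar roof produces a nearly constant height gradient while foliage produces an erratic one. The window size, the variance threshold and the cell labelling scheme are illustrative assumptions, not the values or rules used by GBE.

```python
import numpy as np

def classify_height_cells(height_image, cell=9, grad_var_thresh=0.5):
    """Toy gradient test separating planar roofs from trees (illustrative).

    height_image    : 2D array of LiDAR heights rasterised to a grid
    cell            : side length (pixels) of the local analysis window
    grad_var_thresh : assumed variance threshold; low gradient variance
                      suggests a planar roof, high variance suggests a tree
    Returns a coarse label grid: 1 = roof-like, 2 = tree-like.
    """
    gy, gx = np.gradient(height_image)
    grad_mag = np.hypot(gx, gy)
    rows, cols = height_image.shape
    labels = np.zeros((rows // cell, cols // cell), dtype=np.uint8)
    for i in range(labels.shape[0]):
        for j in range(labels.shape[1]):
            patch = grad_mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            labels[i, j] = 1 if patch.var() < grad_var_thresh else 2
    return labels
```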

  12. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery

    PubMed Central

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-01-01

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of large area threshold prohibits detection of small buildings and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly area and high vegetation. However, the empirical tuning of large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes with a transparent or opaque roof can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting and all parameters are set automatically from the data. The other post processing stages including variance, point density and shadow elimination are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets by using the object and pixel based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roof. When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing methods in various evaluation metrics. PMID:27447631

  13. Automatic segmentation of the wire frame of stent grafts from CT data.

    PubMed

    Klein, Almar; van der Vliet, J Adam; Oostveen, Luuk J; Hoogeveen, Yvonne; Kool, Leo J Schultze; Renema, W Klaas Jan; Slump, Cornelis H

    2012-01-01

    Endovascular aortic repair (EVAR) is an established technique, which uses stent grafts to treat aortic aneurysms in patients at risk of aneurysm rupture. Late stent graft failure is a serious complication in endovascular repair of aortic aneurysms. Better understanding of the motion characteristics of stent grafts will be beneficial for designing future devices. In addition, analysis of stent graft movement in individual patients in vivo can be valuable for predicting stent graft failure in these patients. To be able to gather information on stent graft motion in a quick and robust fashion, we propose an automatic method to segment stent grafts from CT data, consisting of three steps: the detection of seed points, finding the connections between these points to produce a graph, and graph processing to obtain the final geometric model in the form of an undirected graph. Using annotated reference data, the method was optimized and its accuracy was evaluated. The experiments were performed using data containing the AneuRx and Zenith stent grafts. The algorithm is robust to noise and to small variations in the parameter values, has modest memory requirements by modern standards, and is fast enough to be used in a clinical setting (65 and 30s for the two stent types, respectively). Further, it is shown that the resulting graphs have a 95% (AneuRx) and 92% (Zenith) correspondence with the annotated data. The geometric model produced by the algorithm allows incorporation of high-level information and material properties. This enables us to study the in vivo motions and forces that act on the frame of the stent. We believe that such studies will provide new insights into the behavior of the stent graft in vivo, enable the detection and prediction of stent failure in individual patients, and help in designing better stent grafts in the future. Copyright © 2011 Elsevier B.V. All rights reserved.
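
    A minimal sketch of the three-stage pipeline described above (seed-point detection, connection into a graph, graph clean-up), run on a synthetic volume. The intensity threshold, connection radius, pruning rule and the toy "wire" are assumptions for illustration, not the published parameter values.

      import numpy as np
      import networkx as nx
      from scipy import ndimage
      from scipy.spatial import cKDTree

      def extract_wire_graph(volume, intensity_thr=800, connect_radius=3.0, min_nodes=10):
          # 1) Seed points: bright local maxima (the metal stent wire is hyperdense in CT).
          local_max = (volume == ndimage.maximum_filter(volume, size=3))
          seeds = np.argwhere(local_max & (volume > intensity_thr))
          # 2) Connections: link seeds closer than connect_radius voxels.
          graph = nx.Graph()
          graph.add_nodes_from(map(tuple, seeds))
          tree = cKDTree(seeds)
          for i, j in tree.query_pairs(r=connect_radius):
              graph.add_edge(tuple(seeds[i]), tuple(seeds[j]))
          # 3) Graph processing: drop small spurious components (noise responses).
          keep = [c for c in nx.connected_components(graph) if len(c) >= min_nodes]
          return graph.subgraph(set().union(*keep)).copy() if keep else nx.Graph()

      if __name__ == "__main__":
          vol = np.zeros((40, 40, 40), dtype=float)
          zs = np.arange(5, 35)
          vol[zs, (20 + 8 * np.sin(zs / 5.0)).astype(int), 20] = 1000  # sinuous "wire"
          g = extract_wire_graph(vol)
          print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")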

  14. Interdependent selves show face-induced facilitation of error processing: cultural neuroscience of self-threat

    PubMed Central

    Kitayama, Shinobu

    2014-01-01

    The fundamentally social nature of humans is revealed in their exquisitely high sensitivity to potentially negative evaluations held by others. At present, however, little is known about neurocortical correlates of the response to such social-evaluative threat. Here, we addressed this issue by showing that mere exposure to an image of a watching face is sufficient to automatically evoke a social-evaluative threat for those who are relatively high in interdependent self-construal. Both European American and Asian participants performed a flanker task while primed with a face (vs control) image. The relative increase of the error-related negativity (ERN) in the face (vs control) priming condition became more pronounced as a function of interdependent (vs independent) self-construal. Relative to European Americans, Asians were more interdependent and, as predicted, they showed a reliably stronger ERN in the face (vs control) priming condition. Our findings suggest that the ERN can serve as a robust empirical marker of self-threat that is closely modulated by socio-cultural variables. PMID:23160814

  15. A group filter algorithm for sea mine detection

    NASA Astrophysics Data System (ADS)

    Cobb, J. Tory; An, Myoung; Tolimieri, Richard

    2005-06-01

    Automatic detection of sea mines in coastal regions is a difficult task due to the highly variable sea bottom conditions present in the underwater environment. Detection systems must be able to discriminate objects which vary in size, shape, and orientation from naturally occurring and man-made clutter. Additionally, these automated systems must be computationally efficient to be incorporated into unmanned underwater vehicle (UUV) sensor systems characterized by high sensor data rates and limited processing abilities. Using noncommutative group harmonic analysis, a fast, robust sea mine detection system is created. A family of unitary image transforms associated to noncommutative groups is generated and applied to side scan sonar image files supplied by Naval Surface Warfare Center Panama City (NSWC PC). These transforms project key image features, geometrically defined structures with orientations, and localized spectral information into distinct orthogonal components or feature subspaces of the image. The performance of the detection system is compared against the performance of an independent detection system in terms of probability of detection (Pd) and probability of false alarm (Pfa).

  16. The discriminatory power of ribotyping as automatable technique for differentiation of bacteria.

    PubMed

    Schumann, Peter; Pukall, Rüdiger

    2013-09-01

    Since the introduction of ribonucleic acid gene restriction patterns as taxonomic tools in 1986, ribotyping has become an established method for systematics, epidemiological, ecological and population studies of microorganisms. In the last 25 years, several modifications have improved the convenience, reproducibility and turn-around time of this technique. The technological development culminated in the automation of ribotyping, which allowed for high-throughput applications, e.g. in the quality control of food production, in the pharmaceutical industry and in culture collections. The capability of the fully automated RiboPrinter(®) System for the differentiation of bacteria below the species level is compared with the discriminatory power of traditional ribotyping, of molecular fingerprint techniques like PFGE, MLST and MLVA as well as of MALDI-TOF mass spectrometry. While automated RiboPrinting is advantageous with respect to standardization, ease and speed, PCR ribotyping has proved to be a highly discriminatory, flexible, robust and cost-efficient routine technique which also makes inter-laboratory comparison and the building of ribotype databases possible. Copyright © 2013 Elsevier GmbH. All rights reserved.

  17. Real-time inspection by submarine images

    NASA Astrophysics Data System (ADS)

    Tascini, Guido; Zingaretti, Primo; Conte, Giuseppe

    1996-10-01

    A real-time application of computer vision concerning tracking and inspection of a submarine pipeline is described. The objective is to develop automatic procedures for supporting human operators in the real-time analysis of images acquired by means of cameras mounted on underwater remotely operated vehicles (ROVs). Implementation of such procedures gives rise to a human-machine system for underwater pipeline inspection that can automatically detect and signal the presence of the pipe, of its structural or accessory elements, and of dangerous or alien objects in its neighborhood. The possibility of modifying the image acquisition rate in the simulations performed on video-recorded images is used to prove that the system performs all necessary processing with acceptable robustness, working in real time up to a speed of about 2.5 kn, well above the speed that current ROVs and safety constraints allow.

  18. Overnight non-contact continuous vital signs monitoring using an intelligent automatic beam-steering Doppler sensor at 2.4 GHz.

    PubMed

    Batchu, S; Narasimhachar, H; Mayeda, J C; Hall, T; Lopez, J; Nguyen, T; Banister, R E; Lie, D Y C

    2017-07-01

    Doppler-based non-contact vital signs (NCVS) sensors can monitor heart rates, respiration rates, and motions of patients without physically touching them. We have developed a novel single-board Doppler-based phased-array antenna NCVS biosensor system that can perform robust overnight continuous NCVS monitoring with intelligent automatic subject tracking and optimal beam steering algorithms. Our NCVS sensor achieved overnight continuous vital signs monitoring with an impressive heart-rate monitoring accuracy of over 94% (i.e., within ±5 Beats-Per-Minute vs. a reference sensor), analyzed from over 400,000 data points collected during each overnight monitoring period of ~ 6 hours at a distance of 1.75 meters. The data suggests our intelligent phased-array NCVS sensor can be very attractive for continuous monitoring of low-acuity patients.
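
    The sketch below illustrates only the spectral heart-rate estimation step that such Doppler NCVS systems typically perform on the demodulated chest-displacement signal; the beam-steering, subject-tracking and hardware details of the sensor are not modelled, and the band limits and synthetic signal are assumptions.

      import numpy as np
      from scipy import signal

      def heart_rate_bpm(doppler_displacement, fs):
          """Estimate heart rate from a demodulated Doppler chest-displacement signal
          by locating the dominant spectral peak in the cardiac band (0.8-3 Hz)."""
          # Band-pass to suppress respiration (<0.5 Hz) and high-frequency noise.
          b, a = signal.butter(4, [0.8, 3.0], btype="bandpass", fs=fs)
          cardiac = signal.filtfilt(b, a, doppler_displacement)
          freqs = np.fft.rfftfreq(cardiac.size, 1 / fs)
          spectrum = np.abs(np.fft.rfft(cardiac * np.hanning(cardiac.size)))
          band = (freqs >= 0.8) & (freqs <= 3.0)
          return 60.0 * freqs[band][np.argmax(spectrum[band])]

      if __name__ == "__main__":
          fs, t = 100, np.arange(0, 30, 1 / 100)
          # Synthetic chest motion: respiration + heartbeat (72 BPM) + noise.
          sig = 5.0 * np.sin(2 * np.pi * 0.25 * t) + 0.3 * np.sin(2 * np.pi * 1.2 * t)
          sig += np.random.default_rng(0).normal(0, 0.1, t.size)
          print("estimated heart rate: %.1f BPM" % heart_rate_bpm(sig, fs))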

  19. Iterative refinement of implicit boundary models for improved geological feature reproduction

    NASA Astrophysics Data System (ADS)

    Martin, Ryan; Boisvert, Jeff B.

    2017-12-01

    Geological domains contain non-stationary features that cannot be described by a single direction of continuity. Non-stationary estimation frameworks generate more realistic curvilinear interpretations of subsurface geometries. A radial basis function (RBF) based implicit modeling framework using domain decomposition is developed that permits introduction of locally varying orientations and magnitudes of anisotropy for boundary models to better account for the local variability of complex geological deposits. The interpolation framework is paired with a method to automatically infer the locally predominant orientations, which results in a rapid and robust iterative non-stationary boundary modeling technique that can refine locally anisotropic geological shapes automatically from the sample data. The method also permits quantification of the volumetric uncertainty associated with the boundary modeling. The methodology is demonstrated on a porphyry dataset and shows improved local geological features.

  20. Recent advances in the Lesser Antilles observatories Part 1 : Seismic Data Acquisition Design based on EarthWorm and SeisComP

    NASA Astrophysics Data System (ADS)

    Saurel, Jean-Marie; Randriamora, Frédéric; Bosson, Alexis; Kitou, Thierry; Vidal, Cyril; Bouin, Marie-Paule; de Chabalier, Jean-Bernard; Clouard, Valérie

    2010-05-01

    Lesser Antilles observatories are in charge of monitoring the volcanoes and earthquakes in the Eastern Caribbean region. During the past two years, our seismic networks have evolved toward fully digital technology. These changes, which include modern three-component sensors, high-dynamic-range digitizers and high-speed terrestrial and satellite telemetry, improve data quality but also increase the data flows to process and store. Moreover, the generalization of data exchange to build a wide virtual seismic network around the Caribbean domain requires great flexibility to provide and receive data flows in various formats. Like many observatories, we have decided to use the most popular and robust open-source data acquisition systems in use in today's observatory community: EarthWorm and SeisComP. The former is renowned for its ability to process real-time seismic data flows, with a high number of tunable modules (filters, triggers, automatic pickers, locators). The latter is renowned for its ability to exchange seismic data using the international SEED standard (Standard for Exchange of Earthquake Data), either by producing archive files or by managing output and input SEEDLink flows. The French Antilles Seismological and Volcanological Observatories have chosen to take advantage of the best features of each package to design a new data flow scheme and to integrate it in our global observatory data management system, WebObs [Beauducel et al., 2004], see the companion paper (Part 2). We assigned the tasks to the different software packages according to their main abilities: EarthWorm first performs the integration of data from heterogeneous sources; SeisComP takes this homogeneous EarthWorm data flow, adds other sources and produces SEED archives and a SEED data flow; EarthWorm is then used again to process this clean and complete SEEDLink data flow, mainly producing triggers, automatic locations and alarms; WebObs provides a friendly human interface, both to the administrator for station management and to the regular user for real-time everyday analysis of the seismic data (event classification database, location scripts, automatic shakemaps and a regional catalog with associated hypocenter maps).

  1. Mitosis Counting in Breast Cancer: Object-Level Interobserver Agreement and Comparison to an Automatic Method

    PubMed Central

    Veta, Mitko; van Diest, Paul J.; Jiwa, Mehdi; Al-Janabi, Shaimaa; Pluim, Josien P. W.

    2016-01-01

    Background Tumor proliferation speed, most commonly assessed by counting of mitotic figures in histological slide preparations, is an important biomarker for breast cancer. Although mitosis counting is routinely performed by pathologists, it is a tedious and subjective task with poor reproducibility, particularly among non-experts. Inter- and intraobserver reproducibility of mitosis counting can be improved when a strict protocol is defined and followed. Previous studies have examined only the agreement in terms of the mitotic count or the mitotic activity score. Studies of the observer agreement at the level of individual objects, which can provide more insight into the procedure, have not been performed thus far. Methods The development of automatic mitosis detection methods has received considerable interest in recent years. Automatic image analysis is viewed as a solution for the problem of subjectivity of mitosis counting by pathologists. In this paper we describe the results from an interobserver agreement study between three human observers and an automatic method, and make two unique contributions. For the first time, we present an analysis of the object-level interobserver agreement on mitosis counting. Furthermore, we train an automatic mitosis detection method that is robust with respect to staining appearance variability and compare it with the performance of expert observers on an “external” dataset, i.e. on histopathology images that originate from pathology labs other than the pathology lab that provided the training data for the automatic method. Results The object-level interobserver study revealed that pathologists often do not agree on individual objects, even if this is not reflected in the mitotic count. The disagreement is larger for objects of smaller size, which suggests that adding a size constraint to the mitosis counting protocol can improve reproducibility. The automatic mitosis detection method can perform mitosis counting in an unbiased way, with substantial agreement with human experts. PMID:27529701

  2. Mitosis Counting in Breast Cancer: Object-Level Interobserver Agreement and Comparison to an Automatic Method.

    PubMed

    Veta, Mitko; van Diest, Paul J; Jiwa, Mehdi; Al-Janabi, Shaimaa; Pluim, Josien P W

    2016-01-01

    Tumor proliferation speed, most commonly assessed by counting of mitotic figures in histological slide preparations, is an important biomarker for breast cancer. Although mitosis counting is routinely performed by pathologists, it is a tedious and subjective task with poor reproducibility, particularly among non-experts. Inter- and intraobserver reproducibility of mitosis counting can be improved when a strict protocol is defined and followed. Previous studies have examined only the agreement in terms of the mitotic count or the mitotic activity score. Studies of the observer agreement at the level of individual objects, which can provide more insight into the procedure, have not been performed thus far. The development of automatic mitosis detection methods has received considerable interest in recent years. Automatic image analysis is viewed as a solution for the problem of subjectivity of mitosis counting by pathologists. In this paper we describe the results from an interobserver agreement study between three human observers and an automatic method, and make two unique contributions. For the first time, we present an analysis of the object-level interobserver agreement on mitosis counting. Furthermore, we train an automatic mitosis detection method that is robust with respect to staining appearance variability and compare it with the performance of expert observers on an "external" dataset, i.e. on histopathology images that originate from pathology labs other than the pathology lab that provided the training data for the automatic method. The object-level interobserver study revealed that pathologists often do not agree on individual objects, even if this is not reflected in the mitotic count. The disagreement is larger for objects of smaller size, which suggests that adding a size constraint to the mitosis counting protocol can improve reproducibility. The automatic mitosis detection method can perform mitosis counting in an unbiased way, with substantial agreement with human experts.

  3. Automatic speech recognition and training for severely dysarthric users of assistive technology: the STARDUST project.

    PubMed

    Parker, Mark; Cunningham, Stuart; Enderby, Pam; Hawley, Mark; Green, Phil

    2006-01-01

    The STARDUST project developed robust computer speech recognizers for use by eight people with severe dysarthria and concomitant physical disability to access assistive technologies. Independent computer speech recognizers trained with normal speech are of limited functional use to those with severe dysarthria due to limited and inconsistent proximity to "normal" articulatory patterns. Severe dysarthric output may also be characterized by a small set of distinguishable phonetic tokens, making the acoustic differentiation of target words difficult. Speaker-dependent computer speech recognition using Hidden Markov Models was achieved by the identification of robust phonetic elements within the individual speaker output patterns. A new system of speech training using computer-generated visual and auditory feedback reduced the inconsistent production of key phonetic tokens over time.

  4. Linking consistency with object/thread semantics - An approach to robust computation

    NASA Technical Reports Server (NTRS)

    Chen, Raymond C.; Dasgupta, Partha

    1989-01-01

    This paper presents an object/thread based paradigm that links data consistency with object/thread semantics. The paradigm can be used to achieve a wide range of consistency semantics from strict atomic transactions to standard process semantics. The paradigm supports three types of data consistency. Object programmers indicate the type of consistency desired on a per-operation basis and the system performs automatic concurrency control and recovery management to ensure that those consistency requirements are met. This allows programmers to customize consistency and recovery on a per-application basis without having to supply complicated, custom recovery management schemes. The paradigm allows robust and nonrobust computation to operate concurrently on the same data in a well defined manner. The operating system needs to support only one vehicle of computation - the thread.

  5. Multi-sensor image registration based on algebraic projective invariants.

    PubMed

    Li, Bin; Wang, Wei; Ye, Hao

    2013-04-22

    A new automatic feature-based registration algorithm is presented for multi-sensor images with projective deformation. Contours are firstly extracted from both reference and sensed images as basic features in the proposed method. Since it is difficult to design a projective-invariant descriptor from the contour information directly, a new feature named Five Sequential Corners (FSC) is constructed based on the corners detected from the extracted contours. By introducing algebraic projective invariants, we design a descriptor for each FSC that is ensured to be robust against projective deformation. Further, no gray scale related information is required in calculating the descriptor, thus it is also robust against the gray scale discrepancy between the multi-sensor image pairs. Experimental results utilizing real image pairs are presented to show the merits of the proposed registration method.
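
    The algebraic machinery behind such descriptors can be illustrated directly: for five points, ratios of products of 3x3 determinants of their homogeneous coordinates in which every point index appears equally often in the numerator and the denominator are unchanged by any projective transformation. The particular index combinations below are one illustrative choice and are not claimed to be the paper's exact FSC descriptor.

      import numpy as np

      def det3(pts, i, j, k):
          """Determinant of the homogeneous coordinates of points i, j, k (as columns)."""
          return np.linalg.det(np.stack([pts[i], pts[j], pts[k]], axis=1))

      def projective_invariants(points_2d):
          """Two invariants of five 2-D points (rows of a 5x2 array)."""
          pts = np.hstack([points_2d, np.ones((5, 1))])  # homogeneous coordinates
          i1 = (det3(pts, 0, 1, 2) * det3(pts, 0, 3, 4)) / \
               (det3(pts, 0, 1, 3) * det3(pts, 0, 2, 4))
          i2 = (det3(pts, 1, 2, 3) * det3(pts, 1, 0, 4)) / \
               (det3(pts, 1, 2, 0) * det3(pts, 1, 3, 4))
          return i1, i2

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          pts = rng.uniform(0, 100, size=(5, 2))
          H = np.array([[1.1, 0.2, 5.0], [-0.1, 0.9, 3.0], [1e-3, 2e-3, 1.0]])
          homog = np.hstack([pts, np.ones((5, 1))]) @ H.T
          warped = homog[:, :2] / homog[:, 2:3]
          print(projective_invariants(pts))
          print(projective_invariants(warped))   # same values up to numerical error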

  6. Automatic lumen segmentation in IVOCT images using binary morphological reconstruction

    PubMed Central

    2013-01-01

    Background Atherosclerosis causes millions of deaths annually and yields billions in expenses around the world. Intravascular Optical Coherence Tomography (IVOCT) is a medical imaging modality which displays high-resolution images of coronary cross-sections. Nonetheless, quantitative information can only be obtained with segmentation; consequently, more adequate diagnostics, therapies and interventions can be provided. Since it is a relatively new modality, many different segmentation methods, available in the literature for other modalities, could be successfully applied to IVOCT images, improving accuracies and uses. Method An automatic lumen segmentation approach, based on Wavelet Transform and Mathematical Morphology, is presented. The methodology is divided into three main parts. First, the preprocessing stage attenuates undesirable information and enhances important information. Second, in the feature extraction block, the wavelet transform is combined with an adapted version of the Otsu threshold; hence, tissue information is discriminated and binarized. Finally, binary morphological reconstruction improves the binary information and constructs the binary lumen object. Results The evaluation was carried out by segmenting 290 challenging images from human and pig coronaries, and rabbit iliac arteries; the outcomes were compared with gold standards produced by experts. The following accuracy was obtained: True Positive (%) = 99.29 ± 2.96, False Positive (%) = 3.69 ± 2.88, False Negative (%) = 0.71 ± 2.96, Max False Positive Distance (mm) = 0.1 ± 0.07, Max False Negative Distance (mm) = 0.06 ± 0.1. Conclusions In conclusion, by segmenting a number of IVOCT images with various features, the proposed technique proved to be robust and more accurate than published studies; in addition, the method is completely automatic, providing a new tool for IVOCT segmentation. PMID:23937790
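
    A compact sketch of the same pipeline shape on a synthetic frame, with a Gaussian filter standing in for the wavelet-based preprocessing; the Otsu binarization and erosion-based binary morphological reconstruction (hole filling) follow the stages named in the abstract. All parameters and the synthetic vessel are illustrative assumptions.

      import numpy as np
      from scipy import ndimage
      from skimage import filters, morphology

      def segment_lumen(frame):
          """Boolean lumen mask for a cross-sectional IVOCT frame (2-D array)."""
          # Preprocessing: light Gaussian smoothing attenuates speckle noise.
          smooth = filters.gaussian(frame.astype(float), sigma=2)
          # Otsu threshold separates the bright tissue wall from the dark background.
          tissue = smooth > filters.threshold_otsu(smooth)
          tissue = morphology.remove_small_objects(tissue, min_size=64)
          # Binary morphological reconstruction (erosion-based hole filling): regions
          # enclosed by the tissue ring, i.e. the lumen, appear in `filled` but not in `tissue`.
          seed = tissue.astype(float)
          seed[1:-1, 1:-1] = 1.0
          filled = morphology.reconstruction(seed, tissue.astype(float), method='erosion') > 0.5
          lumen = filled & ~tissue
          # Keep the connected component at the image centre (where the catheter sits).
          labels, _ = ndimage.label(lumen)
          centre = labels[labels.shape[0] // 2, labels.shape[1] // 2]
          return labels == centre if centre else lumen

      if __name__ == "__main__":
          # Synthetic frame: a bright vessel wall (ring) around a dark lumen.
          yy, xx = np.mgrid[:256, :256]
          radius = np.hypot(yy - 128, xx - 128)
          frame = np.where((radius > 60) & (radius < 90), 200.0, 20.0)
          frame += np.random.default_rng(0).normal(0, 10, frame.shape)
          print("lumen area (pixels):", int(segment_lumen(frame).sum()))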

  7. Automatic first-arrival picking based on extended super-virtual interferometry with quality control procedure

    NASA Astrophysics Data System (ADS)

    An, Shengpei; Hu, Tianyue; Liu, Yimou; Peng, Gengxin; Liang, Xianghao

    2017-12-01

    Static correction is a crucial step of seismic data processing for onshore plays, which frequently have complex near-surface conditions. The effectiveness of the static correction depends on an accurate determination of first-arrival traveltimes. However, it is difficult to accurately auto-pick the first arrivals for data with low signal-to-noise ratios (SNR), especially for data measured in areas with a complex near-surface. The technique of super-virtual interferometry (SVI) has the potential to enhance the SNR of first arrivals. In this paper, we develop the extended SVI with (1) the application of the reverse correlation to improve the capability of SNR enhancement at near offsets, and (2) the usage of the multi-domain method to partially overcome the limitation of the current method given insufficient available source-receiver combinations. Compared to the standard SVI, the SNR enhancement of the extended SVI can be up to 40%. In addition, we propose a quality control procedure, which is based on the statistical characteristics of multichannel recordings of first arrivals. It can auto-correct the mispicks, which might be spurious events generated by the SVI. This procedure is very robust, highly automatic and can accommodate large data volumes in batches. Finally, we develop an automatic first-arrival picking method that combines the extended SVI and the quality control procedure. Both the synthetic and the field data examples demonstrate that the proposed method is able to accurately auto-pick first arrivals in seismic traces with low SNR. The quality of the stacked seismic sections obtained from this method is much better than that of sections obtained from an auto-picking method commonly employed by commercial software.

  8. The Automatic Neuroscientist: A framework for optimizing experimental design with closed-loop real-time fMRI

    PubMed Central

    Lorenz, Romy; Monti, Ricardo Pio; Violante, Inês R.; Anagnostopoulos, Christoforos; Faisal, Aldo A.; Montana, Giovanni; Leech, Robert

    2016-01-01

    Functional neuroimaging typically explores how a particular task activates a set of brain regions. Importantly though, the same neural system can be activated by inherently different tasks. To date, there is no approach available that systematically explores whether and how distinct tasks probe the same neural system. Here, we propose and validate an alternative framework, the Automatic Neuroscientist, which turns the standard fMRI approach on its head. We use real-time fMRI in combination with modern machine-learning techniques to automatically design the optimal experiment to evoke a desired target brain state. In this work, we present two proof-of-principle studies involving perceptual stimuli. In both studies optimization algorithms of varying complexity were employed; the first involved a stochastic approximation method while the second incorporated a more sophisticated Bayesian optimization technique. In the first study, we achieved convergence for the hypothesized optimum in 11 out of 14 runs in less than 10 min. Results of the second study showed how our closed-loop framework accurately and with high efficiency estimated the underlying relationship between stimuli and neural responses for each subject in one to two runs: with each run lasting 6.3 min. Moreover, we demonstrate that using only the first run produced a reliable solution at a group-level. Supporting simulation analyses provided evidence on the robustness of the Bayesian optimization approach for scenarios with low contrast-to-noise ratio. This framework is generalizable to numerous applications, ranging from optimizing stimuli in neuroimaging pilot studies to tailoring clinical rehabilitation therapy to patients and can be used with multiple imaging modalities in humans and animals. PMID:26804778
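
    The closed-loop idea can be sketched as a Bayesian-optimization loop: after each trial, fit a Gaussian-process model of "stimulus parameter versus similarity to the target brain state" and choose the next stimulus with an acquisition rule. The simulated response, kernel and upper-confidence-bound acquisition below are illustrative assumptions, not the study's actual implementation.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import Matern

      def neural_response(stimulus):
          """Stand-in for the measured similarity to the target brain state."""
          return np.exp(-(stimulus - 0.63) ** 2 / 0.02) + np.random.normal(0, 0.05)

      candidates = np.linspace(0, 1, 201).reshape(-1, 1)
      tried, observed = [0.1, 0.9], [neural_response(0.1), neural_response(0.9)]
      gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=0.05 ** 2, normalize_y=True)

      for trial in range(15):
          gp.fit(np.array(tried).reshape(-1, 1), observed)
          mean, std = gp.predict(candidates, return_std=True)
          nxt = float(candidates[np.argmax(mean + 1.0 * std)])   # upper-confidence-bound acquisition
          tried.append(nxt)
          observed.append(neural_response(nxt))

      print("best stimulus found:", tried[int(np.argmax(observed))])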

  9. An efficient scheme for automatic web pages categorization using the support vector machine

    NASA Astrophysics Data System (ADS)

    Bhalla, Vinod Kumar; Kumar, Neeraj

    2016-07-01

    In the past few years, with the evolution of the Internet and related technologies, the number of Internet users has grown exponentially. These users demand access to relevant web pages from the Internet within a fraction of a second. To achieve this goal, an efficient categorization of web page contents is required. Manual categorization of these billions of web pages to achieve high accuracy is a challenging task. Most of the existing techniques reported in the literature are semi-automatic, and a high level of accuracy cannot be achieved using them. To achieve these goals, this paper proposes an automatic categorization of web pages into domain categories. The proposed scheme is based on the identification of specific and relevant features of the web pages. In the proposed scheme, extraction and evaluation of features are done first, followed by filtering of the feature set for categorization of domain web pages. A feature extraction tool based on the HTML document object model of the web page is developed in the proposed scheme. Feature extraction and weight assignment are based on a collection of domain-specific keywords developed by considering various domain pages. Moreover, the keyword list is reduced on the basis of the ids of the keywords in the keyword list. Also, stemming of keywords and tag text is performed to achieve higher accuracy. An extensive feature set is generated to develop a robust classification technique. The proposed scheme was evaluated using a machine learning method in combination with feature extraction and statistical analysis, using a support vector machine kernel as the classification tool. The results obtained confirm the effectiveness of the proposed scheme in terms of its accuracy in different categories of web pages.
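
    A minimal sketch of the classification stage of such a scheme: TF-IDF weighted keyword features fed to a support vector machine. The toy pages and labels are invented for illustration; the paper's own DOM-based feature extractor and keyword weighting are not reproduced here.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      # Toy training pages (tag text already extracted); real input would come from
      # an HTML DOM feature extractor as described in the abstract.
      pages = [
          "injury treatment clinic doctor symptoms medicine",
          "hospital surgery patient health care nurse",
          "football score league goal match player",
          "tournament champion team coach season win",
      ]
      domains = ["medical", "medical", "sports", "sports"]

      classifier = make_pipeline(TfidfVectorizer(stop_words="english"), SVC(kernel="linear"))
      classifier.fit(pages, domains)
      print(classifier.predict(["player injury during the match treated by the team doctor"]))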

  10. Challenges in automatic sorting of construction and demolition waste by hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Hollstein, Frank; Cacho, Íñigo; Arnaiz, Sixto; Wohllebe, Markus

    2016-05-01

    EU-28 countries currently generate 460 Mt/year of construction and demolition waste (C&DW) and the generation rate is expected to reach around 570 Mt/year between 2025 and 2030. There is great potential for recycling C&DW materials since they are massively produced and contain valuable resources. But newly generated C&DW is more complex than existing waste, and there is a need to shift from traditional recycling approaches to novel recycling solutions. One basic step to achieve this objective is an improvement in (automatic) sorting technology. Hyperspectral Imaging is a promising candidate to support the process. However, industrial adoption of Hyperspectral Imaging in the C&DW recycling sector is currently limited due to high investment costs, the still insufficient robustness of optical sensor hardware in harsh ambient conditions and, because of the need for sensor fusion, the lack of well-engineered software methods to perform the (on-line) sorting tasks. Frame rates of over 300 Hz are needed for a successful sorting result. Currently the biggest challenges with regard to C&DW detection concern the need to overlap VIS, NIR and SWIR hyperspectral images in time and space, in particular for the selective recognition of contaminated particles. In the present study, a new approach for hyperspectral imagers is presented that exploits SWIR hyperspectral information in real time (at 300 Hz). The contribution describes both laboratory results on the optical detection of the most important C&DW material composites and a development path for an industrial implementation in automatic sorting and separation lines. The main focus is placed on the closure of the two recycling circuits "grey to grey" and "red to red" because of their outstanding potential for sustainability in the conservation of construction resources.

  11. The Automatic Neuroscientist: A framework for optimizing experimental design with closed-loop real-time fMRI.

    PubMed

    Lorenz, Romy; Monti, Ricardo Pio; Violante, Inês R; Anagnostopoulos, Christoforos; Faisal, Aldo A; Montana, Giovanni; Leech, Robert

    2016-04-01

    Functional neuroimaging typically explores how a particular task activates a set of brain regions. Importantly though, the same neural system can be activated by inherently different tasks. To date, there is no approach available that systematically explores whether and how distinct tasks probe the same neural system. Here, we propose and validate an alternative framework, the Automatic Neuroscientist, which turns the standard fMRI approach on its head. We use real-time fMRI in combination with modern machine-learning techniques to automatically design the optimal experiment to evoke a desired target brain state. In this work, we present two proof-of-principle studies involving perceptual stimuli. In both studies optimization algorithms of varying complexity were employed; the first involved a stochastic approximation method while the second incorporated a more sophisticated Bayesian optimization technique. In the first study, we achieved convergence for the hypothesized optimum in 11 out of 14 runs in less than 10 min. Results of the second study showed how our closed-loop framework accurately and with high efficiency estimated the underlying relationship between stimuli and neural responses for each subject in one to two runs: with each run lasting 6.3 min. Moreover, we demonstrate that using only the first run produced a reliable solution at a group-level. Supporting simulation analyses provided evidence on the robustness of the Bayesian optimization approach for scenarios with low contrast-to-noise ratio. This framework is generalizable to numerous applications, ranging from optimizing stimuli in neuroimaging pilot studies to tailoring clinical rehabilitation therapy to patients and can be used with multiple imaging modalities in humans and animals. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  12. Decentralized adaptive robust control based on sliding mode and nonlinear compensator for the control of ankle movement using functional electrical stimulation of agonist-antagonist muscles

    NASA Astrophysics Data System (ADS)

    Kobravi, Hamid-Reza; Erfanian, Abbas

    2009-08-01

    A decentralized control methodology is designed for the control of ankle dorsiflexion and plantarflexion in paraplegic subjects with electrical stimulation of tibialis anterior and calf muscles. Each muscle joint is considered as a subsystem and individual controllers are designed for each subsystem. Each controller operates solely on its associated subsystem, with no exchange of information between the subsystems. The interactions between the subsystems are taken as external disturbances for each isolated subsystem. In order to achieve robustness with respect to external disturbances, unmodeled dynamics, model uncertainty and time-varying properties of muscle-joint dynamics, a robust control framework is proposed which is based on the synergistic combination of an adaptive nonlinear compensator with a sliding mode control and is referred to as an adaptive robust control. Extensive simulations and experiments on healthy and paraplegic subjects were performed to demonstrate the robustness against the time-varying properties of muscle-joint dynamics, day-to-day variations and subject-to-subject variations, as well as the fast convergence, stability and tracking accuracy of the proposed method. The results indicate that the decentralized robust control provides excellent tracking control for different reference trajectories and can generate control signals to compensate for muscle fatigue and reject external disturbances. Moreover, the controller is able to automatically regulate the interaction between agonist and antagonist muscles under different operating conditions without any preprogrammed antagonist activities.
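
    The sketch below shows, on a crude second-order muscle-joint model, how a sliding-mode term and an adaptive disturbance compensator can be combined as described above. The plant, gains, boundary-layer smoothing and disturbance are illustrative assumptions rather than the controller or the musculoskeletal model used in the study.

      import numpy as np

      # Crude second-order joint model:  J*th'' + b*th' = u + d(t)
      J, b = 0.05, 0.3
      dt = 0.0005
      lam, k, phi, gamma = 8.0, 6.0, 0.1, 20.0  # surface slope, switching gain, boundary layer, adaptation rate

      t = np.arange(0.0, 5.0, dt)
      w = 2 * np.pi * 0.5
      ref, ref_d, ref_dd = 0.3 * np.sin(w * t), 0.3 * w * np.cos(w * t), -0.3 * w**2 * np.sin(w * t)

      theta = omega = d_hat = 0.0
      abs_err = []
      for i in range(t.size):
          e, e_d = theta - ref[i], omega - ref_d[i]
          s = e_d + lam * e                                  # sliding variable
          d_hat += gamma * s * dt                            # adaptive disturbance estimate
          u = (J * ref_dd[i] - J * lam * e_d + b * omega     # equivalent control
               - d_hat - k * np.clip(s / phi, -1.0, 1.0))    # compensator + smoothed switching
          d = 0.5 + 0.2 * np.sin(3 * t[i])                   # unknown load (antagonist, fatigue, ...)
          omega += (u + d - b * omega) / J * dt
          theta += omega * dt
          abs_err.append(abs(e))

      print("mean |tracking error| over last half (rad): %.4f" % np.mean(abs_err[t.size // 2:]))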

  13. Decentralized adaptive robust control based on sliding mode and nonlinear compensator for the control of ankle movement using functional electrical stimulation of agonist-antagonist muscles.

    PubMed

    Kobravi, Hamid-Reza; Erfanian, Abbas

    2009-08-01

    A decentralized control methodology is designed for the control of ankle dorsiflexion and plantarflexion in paraplegic subjects with electrical stimulation of tibialis anterior and calf muscles. Each muscle joint is considered as a subsystem and individual controllers are designed for each subsystem. Each controller operates solely on its associated subsystem, with no exchange of information between the subsystems. The interactions between the subsystems are taken as external disturbances for each isolated subsystem. In order to achieve robustness with respect to external disturbances, unmodeled dynamics, model uncertainty and time-varying properties of muscle-joint dynamics, a robust control framework is proposed which is based on the synergistic combination of an adaptive nonlinear compensator with a sliding mode control and is referred to as an adaptive robust control. Extensive simulations and experiments on healthy and paraplegic subjects were performed to demonstrate the robustness against the time-varying properties of muscle-joint dynamics, day-to-day variations and subject-to-subject variations, as well as the fast convergence, stability and tracking accuracy of the proposed method. The results indicate that the decentralized robust control provides excellent tracking control for different reference trajectories and can generate control signals to compensate for muscle fatigue and reject external disturbances. Moreover, the controller is able to automatically regulate the interaction between agonist and antagonist muscles under different operating conditions without any preprogrammed antagonist activities.

  14. AssayR: A Simple Mass Spectrometry Software Tool for Targeted Metabolic and Stable Isotope Tracer Analyses.

    PubMed

    Wills, Jimi; Edwards-Hicks, Joy; Finch, Andrew J

    2017-09-19

    Metabolic analyses generally fall into two classes: unbiased metabolomic analyses and analyses that are targeted toward specific metabolites. Both techniques have been revolutionized by the advent of mass spectrometers with detectors that afford high mass accuracy and resolution, such as time-of-flights (TOFs) and Orbitraps. One particular area where this technology is key is in the field of metabolic flux analysis because the resolution of these spectrometers allows for discrimination between 13C-containing isotopologues and those containing 15N or other isotopes. While XCMS-based software is freely available for untargeted analysis of mass spectrometric data sets, it does not always identify metabolites of interest in a targeted assay. Furthermore, there is a paucity of vendor-independent software that deals with targeted analyses of metabolites and of isotopologues in particular. Here, we present AssayR, an R package that takes high resolution wide-scan liquid chromatography-mass spectrometry (LC-MS) data sets and tailors peak detection for each metabolite through a simple, iterative user interface. It automatically integrates peak areas for all isotopologues and outputs extracted ion chromatograms (EICs), absolute and relative stacked bar charts for all isotopologues, and a .csv data file. We demonstrate several examples where AssayR provides more accurate and robust quantitation than XCMS, and we propose that tailored peak detection should be the preferred approach for targeted assays. In summary, AssayR provides easy and robust targeted metabolite and stable isotope analyses on wide-scan data sets from high resolution mass spectrometers.

  15. AssayR: A Simple Mass Spectrometry Software Tool for Targeted Metabolic and Stable Isotope Tracer Analyses

    PubMed Central

    2017-01-01

    Metabolic analyses generally fall into two classes: unbiased metabolomic analyses and analyses that are targeted toward specific metabolites. Both techniques have been revolutionized by the advent of mass spectrometers with detectors that afford high mass accuracy and resolution, such as time-of-flights (TOFs) and Orbitraps. One particular area where this technology is key is in the field of metabolic flux analysis because the resolution of these spectrometers allows for discrimination between 13C-containing isotopologues and those containing 15N or other isotopes. While XCMS-based software is freely available for untargeted analysis of mass spectrometric data sets, it does not always identify metabolites of interest in a targeted assay. Furthermore, there is a paucity of vendor-independent software that deals with targeted analyses of metabolites and of isotopologues in particular. Here, we present AssayR, an R package that takes high resolution wide-scan liquid chromatography–mass spectrometry (LC-MS) data sets and tailors peak detection for each metabolite through a simple, iterative user interface. It automatically integrates peak areas for all isotopologues and outputs extracted ion chromatograms (EICs), absolute and relative stacked bar charts for all isotopologues, and a .csv data file. We demonstrate several examples where AssayR provides more accurate and robust quantitation than XCMS, and we propose that tailored peak detection should be the preferred approach for targeted assays. In summary, AssayR provides easy and robust targeted metabolite and stable isotope analyses on wide-scan data sets from high resolution mass spectrometers. PMID:28850215

  16. A method for fast automated microscope image stitching.

    PubMed

    Yang, Fan; Deng, Zhen-Sheng; Fan, Qiu-Hong

    2013-05-01

    Image stitching is an important technology to produce a panorama or larger image by combining several images with overlapped areas. In much biomedical research, image stitching is highly desirable to acquire a panoramic image which represents large areas of certain structures or whole sections, while retaining microscopic resolution. In this study, we develop a fast normal light microscope image stitching algorithm based on feature extraction. First, a scale-space reconstruction algorithm for speeded-up robust features (SURF) was proposed to extract features from the images to be stitched in a short time and with higher repeatability. Second, the histogram equalization (HE) method was employed to preprocess the images to enhance their contrast for extracting more features. Third, the rough overlapping zones of the preprocessed images were calculated by phase correlation, and the improved SURF was used to extract the image features in the rough overlapping areas. Fourth, the features were matched, the transformation parameters were estimated, and the images were blended seamlessly. Finally, this procedure was applied to stitch normal light microscope images to verify its validity. Our experimental results demonstrate that the improved SURF algorithm is very robust to viewpoint, illumination, blur, rotation and zoom of the images and our method is able to stitch microscope images automatically with high precision and high speed. Also, the method proposed in this paper is applicable to registration and stitching of common images as well as to stitching microscope images for virtual microscopy, for the purposes of observing, exchanging, saving, and establishing databases of microscope images. Copyright © 2013 Elsevier Ltd. All rights reserved.
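
    A condensed sketch of a feature-based stitching chain of this shape (contrast enhancement, feature detection and matching, robust transform estimation, compositing) using OpenCV. ORB stands in for the improved SURF of the paper (SURF sits in the non-free contrib module), the phase-correlation overlap restriction is omitted, and all thresholds are assumptions.

      import cv2
      import numpy as np

      def stitch_pair(img_left, img_right):
          """Stitch two overlapping 8-bit grayscale microscope tiles (img_right onto img_left)."""
          # Contrast enhancement so that more features survive in low-contrast tissue.
          left = cv2.equalizeHist(img_left)
          right = cv2.equalizeHist(img_right)
          # Feature detection/description and matching (ORB standing in for improved SURF).
          orb = cv2.ORB_create(2000)
          kp1, des1 = orb.detectAndCompute(left, None)
          kp2, des2 = orb.detectAndCompute(right, None)
          matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
          matches = sorted(matches, key=lambda m: m.distance)[:200]
          src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
          dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
          # Robust transform estimation (RANSAC), then composite onto a larger canvas.
          H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
          h, w = img_left.shape
          canvas = cv2.warpPerspective(img_right, H, (w * 2, h * 2))
          canvas[:h, :w] = img_left
          return canvas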

  17. Automatic and Robust Delineation of the Fiducial Points of the Seismocardiogram Signal for Non-invasive Estimation of Cardiac Time Intervals.

    PubMed

    Khosrow-Khavar, Farzad; Tavakolian, Kouhyar; Blaber, Andrew; Menon, Carlo

    2016-10-12

    The purpose of this research was to design a delineation algorithm that could detect specific fiducial points of the seismocardiogram (SCG) signal with or without using the electrocardiogram (ECG) R-wave as the reference point. The detected fiducial points were used to estimate cardiac time intervals. Due to the complexity and sensitivity of the SCG signal, the algorithm was designed to robustly discard low-quality cardiac cycles, which are the ones that contain unrecognizable fiducial points. The algorithm was trained on a dataset containing 48,318 manually annotated cardiac cycles. It was then applied to three test datasets: 65 young healthy individuals (dataset 1), 15 individuals above 44 years old (dataset 2), and 25 patients with previous heart conditions (dataset 3). The algorithm accomplished high prediction accuracy, with a root-mean-square error of less than 5 ms for all the test datasets. The algorithm's overall mean detection rates per individual recording (DRI) were 74, 68, and 42 percent for the three test datasets when concurrent ECG and SCG were used. For the standalone SCG case, the mean DRI values were 32, 14 and 21 percent. When the proposed algorithm was applied to concurrent ECG and SCG signals, the desired fiducial points of the SCG signal were successfully estimated with a high detection rate. For the standalone case, however, the algorithm achieved high prediction accuracy and detection rate only for the young individual dataset. The presented algorithm could be used for accurate and non-invasive estimation of cardiac time intervals.

  18. Working memory load modulates the neural response to other's pain: Evidence from an ERP study.

    PubMed

    Cui, Fang; Zhu, Xiangru; Luo, Yuejia; Cheng, Jiaping

    2017-03-22

    The present study investigated the time course of processing other's pain under different conditions of working memory (WM) load. Event-related potentials (ERPs) were recorded while the participants held two digits (low WM load) or six digits (high WM load) in WM and viewed pictures that showed others who were in painful or non-painful situations. Robust WM-load×Picture interactions were found for the N2 and LPP components. In the high WM-load condition, painful pictures elicited significantly larger amplitudes than non-painful pictures. In the low WM load condition, the difference between the painful and non-painful pictures was not significant. These ERP results indicate that WM load can influence both the early automatic N2 component and late cognitive LPP component. Compared with high WM load, low WM load reduced affective arousal and emotional sharing in response to other's pain and weakened the cognitive evaluation of task irrelevant stimuli. These findings are explained from the load theory perspective. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Detection and measurement of fetal anatomies from ultrasound images using a constrained probabilistic boosting tree.

    PubMed

    Carneiro, Gustavo; Georgescu, Bogdan; Good, Sara; Comaniciu, Dorin

    2008-09-01

    We propose a novel method for the automatic detection and measurement of fetal anatomical structures in ultrasound images. This problem offers a myriad of challenges, including: difficulty of modeling the appearance variations of the visual object of interest, robustness to speckle noise and signal dropout, and the large search space of the detection procedure. Previous solutions typically rely on the explicit encoding of prior knowledge and formulation of the problem as a perceptual grouping task solved through clustering or variational approaches. These methods are constrained by the validity of the underlying assumptions and are usually insufficient to capture the complex appearances of fetal anatomies. We propose a novel system for fast automatic detection and measurement of fetal anatomies that directly exploits a large database of expert-annotated fetal anatomical structures in ultrasound images. Our method learns automatically to distinguish between the appearance of the object of interest and background by training a constrained probabilistic boosting tree classifier. This system is able to produce the automatic segmentation of several fetal anatomies using the same basic detection algorithm. We show results on fully automatic measurement of biparietal diameter (BPD), head circumference (HC), abdominal circumference (AC), femur length (FL), humerus length (HL), and crown rump length (CRL). Notice that our approach is the first in the literature to deal with the HL and CRL measurements. Extensive experiments (with clinical validation) show that our system is, on average, close to the accuracy of experts in terms of segmentation and obstetric measurements. Finally, this system runs in under half a second on a standard dual-core PC.

  20. Automatic segmentation and supervised learning-based selection of nuclei in cancer tissue images.

    PubMed

    Nandy, Kaustav; Gudla, Prabhakar R; Amundsen, Ryan; Meaburn, Karen J; Misteli, Tom; Lockett, Stephen J

    2012-09-01

    Analysis of preferential localization of certain genes within the cell nuclei is emerging as a new technique for the diagnosis of breast cancer. Quantitation requires accurate segmentation of 100-200 cell nuclei in each tissue section to draw a statistically significant result. Thus, for large-scale analysis, manual processing is too time consuming and subjective. Fortuitously, acquired images generally contain many more nuclei than are needed for analysis. Therefore, we developed an integrated workflow that selects, following automatic segmentation, a subpopulation of accurately delineated nuclei for positioning of fluorescence in situ hybridization-labeled genes of interest. Segmentation was performed by a multistage watershed-based algorithm and screening by an artificial neural network-based pattern recognition engine. The performance of the workflow was quantified in terms of the fraction of automatically selected nuclei that were visually confirmed as well segmented and by the boundary accuracy of the well-segmented nuclei relative to a 2D dynamic programming-based reference segmentation method. Application of the method was demonstrated for discriminating normal and cancerous breast tissue sections based on the differential positioning of the HES5 gene. Automatic results agreed with manual analysis in 11 out of 14 cancers, all four normal cases, and all five noncancerous breast disease cases, thus showing the accuracy and robustness of the proposed approach. Published 2012 Wiley Periodicals, Inc.
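
    A brief sketch of the segmentation-then-screening workflow: marker-controlled watershed on the distance transform to delineate and split touching nuclei, followed by a screening step that keeps only well-formed objects. Simple shape criteria stand in for the artificial neural network screener of the published method, and all thresholds are illustrative.

      import numpy as np
      from scipy import ndimage
      from skimage import feature, filters, measure, segmentation

      def segment_and_screen(dapi, min_area=200, min_solidity=0.9):
          """Watershed segmentation of nuclei followed by a simple quality screen."""
          # Foreground via Otsu threshold on the nuclear stain.
          mask = dapi > filters.threshold_otsu(dapi)
          mask = ndimage.binary_fill_holes(mask)
          # Marker-controlled watershed on the distance transform splits touching nuclei.
          distance = ndimage.distance_transform_edt(mask)
          peaks = feature.peak_local_max(distance, min_distance=10, labels=mask)
          markers = np.zeros_like(mask, dtype=int)
          markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
          labels = segmentation.watershed(-distance, markers, mask=mask)
          # Screening stage: keep only well-formed nuclei (shape criteria stand in
          # for the neural-network-based pattern recognition engine).
          keep = [r.label for r in measure.regionprops(labels)
                  if r.area >= min_area and r.solidity >= min_solidity]
          return labels * np.isin(labels, keep)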

  1. Automatic physical inference with information maximizing neural networks

    NASA Astrophysics Data System (ADS)

    Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-04-01

    Compressing large data sets to a manageable number of summaries that are informative about the underlying parameters vastly simplifies both frequentist and Bayesian inference. When only simulations are available, these summaries are typically chosen heuristically, so they may inadvertently miss important information. We introduce a simulation-based machine learning technique that trains artificial neural networks to find nonlinear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). In test cases where the posterior can be derived exactly, likelihood-free inference based on automatically derived IMNN summaries produces nearly exact posteriors, showing that these summaries are good approximations to sufficient statistics. In a series of numerical examples of increasing complexity and astrophysical relevance we show that IMNNs are robustly capable of automatically finding optimal, nonlinear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima. We anticipate that the automatic physical inference method described in this paper will be essential to obtain both accurate and precise cosmological parameter estimates from complex and large astronomical data sets, including those from LSST and Euclid.
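
    The quantity such networks maximize can be illustrated without a network: estimate the Fisher information of a candidate summary from simulations, using matched-seed finite differences for the mean derivative and the fiducial simulations for the covariance. The sketch below applies this to the Gaussian-variance toy problem mentioned above; the summaries and settings are assumptions for illustration.

      import numpy as np

      def fisher_info(summary_fn, theta0, simulate, n_sims=2000, eps=0.05, seed=0):
          """Fisher information of a scalar summary at the fiducial theta0,
          F = (dmu/dtheta)^2 / C, with the derivative from matched-seed finite
          differences and C the summary variance at the fiducial point."""
          rng = np.random.default_rng(seed)
          seeds = rng.integers(0, 2**31, n_sims)
          s0 = np.array([summary_fn(simulate(theta0, s)) for s in seeds])
          s_plus = np.array([summary_fn(simulate(theta0 + eps, s)) for s in seeds])
          s_minus = np.array([summary_fn(simulate(theta0 - eps, s)) for s in seeds])
          dmu = (s_plus.mean() - s_minus.mean()) / (2 * eps)
          return dmu**2 / s0.var(ddof=1)

      # Toy problem from the abstract: infer the variance of a zero-mean Gaussian signal.
      simulate = lambda var, seed: np.random.default_rng(seed).normal(0.0, np.sqrt(var), 100)

      print("F for summary mean(x^2):", fisher_info(lambda d: np.mean(d**2), 1.0, simulate))
      print("F for summary mean(x)  :", fisher_info(np.mean, 1.0, simulate))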

  2. An Underwater Target Detection System for Electro-Optical Imagery Data

    DTIC Science & Technology

    2010-06-01

    detection and segmentation of underwater mine-like objects in the EO images captured with a CCD-based image sensor. The main focus of this research is to ... develop a robust detection algorithm that can be used to detect low contrast and partial underwater objects from the EO imagery with a low false alarm rate ... Automatic detection and recognition of underwater objects from EO imagery poses a serious challenge due to poor ...

  3. Fully automated motion correction in first-pass myocardial perfusion MR image sequences.

    PubMed

    Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2008-11-01

    This paper presents a novel method for registration of cardiac perfusion magnetic resonance imaging (MRI). The presented method is capable of automatically registering perfusion data, using independent component analysis (ICA) to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of that ICA. This reference image is used in a two-pass registration framework. Qualitative and quantitative validation of the method is carried out using 46 clinical-quality, short-axis perfusion MR datasets comprising 100 images each. Despite varying image quality and motion patterns in the evaluation set, validation of the method showed a reduction of the average left ventricle (LV) motion from 1.26+/-0.87 to 0.64+/-0.46 pixels. Time-intensity curves are also improved after registration, with the average error between registered data and the manual gold standard reduced from 2.65+/-7.89% to 0.87+/-3.88%. Comparison of clinically relevant parameters computed using registered data and the manual gold standard shows good agreement. Additional tests with a simulated free-breathing protocol showed robustness against considerable deviations from a standard breathing protocol. We conclude that this fully automatic ICA-based method shows accuracy, robustness and computation speed adequate for use in a clinical environment.
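
    A rough, single-pass sketch of the idea: build a time-varying reference from a low-rank ICA reconstruction of the frame series (so the reference mimics the contrast passage), then rigidly register each frame to its reference. The component count, translation-only registration and overall simplification are assumptions; the published method is a two-pass framework.

      import numpy as np
      from scipy import ndimage
      from sklearn.decomposition import FastICA
      from skimage.registration import phase_cross_correlation

      def register_perfusion_series(frames, n_components=3):
          """Align a perfusion series (T, H, W) to an ICA-derived, time-varying reference."""
          T, H, W = frames.shape
          X = frames.reshape(T, -1).astype(float)
          # ICA separates dominant spatio-temporal sources (e.g. RV, LV, myocardial
          # enhancement); their low-rank reconstruction serves as a per-frame reference
          # whose intensities follow the contrast passage but contain little motion noise.
          ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
          sources = ica.fit_transform(X)                      # temporal components, shape (T, k)
          reference = ica.inverse_transform(sources).reshape(T, H, W)
          aligned = np.empty_like(frames, dtype=float)
          for t in range(T):
              # Translation-only registration of each frame to its own reference image.
              shift, _, _ = phase_cross_correlation(reference[t], frames[t].astype(float),
                                                    upsample_factor=10)
              aligned[t] = ndimage.shift(frames[t].astype(float), shift)
          return aligned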

  4. Computer aided segmentation of kidneys using locally shape constrained deformable models on CT images

    NASA Astrophysics Data System (ADS)

    Erdt, Marius; Sakas, Georgios

    2010-03-01

    This work presents a novel approach for model-based segmentation of the kidney in images acquired by Computed Tomography (CT). The developed computer-aided segmentation system is expected to support computer-aided diagnosis and operation planning. We have developed a deformable-model approach with local shape constraints that prevent the model from deforming into neighboring structures while allowing the global shape to adapt freely to the data. Those local constraints are derived from the anatomical structure of the kidney and the presence and appearance of neighboring organs. The adaptation process is guided by a rule-based deformation logic in order to improve the robustness of the segmentation in areas of diffuse organ boundaries. Our workflow consists of two steps: (1) user-guided positioning and (2) automatic model adaptation using affine and free-form deformation in order to robustly extract the kidney. In cases which show pronounced pathologies, the system also offers real-time mesh editing tools for a quick refinement of the segmentation result. Evaluation results based on 30 clinical cases using CT data sets show an average Dice coefficient of 93% compared to the ground truth. The results are therefore in most cases comparable to manual delineation. Computation times of the automatic adaptation step are below 6 seconds, which makes the proposed system suitable for application in clinical practice.

  5. Effect of Extended State Observer and Automatic Voltage Regulator on Synchronous Machine Connected to Infinite Bus Power System

    NASA Astrophysics Data System (ADS)

    Angu, Rittu; Mehta, R. K.

    2018-04-01

    This paper presents a robust control scheme based on an Extended State Observer (ESO) to improve the stability and voltage regulation of a synchronous machine connected to an infinite bus power system through a transmission line. The ESO-based control scheme is implemented with an automatic voltage regulator in conjunction with an excitation system to enhance the damping of low-frequency power system oscillations, as a Power System Stabilizer (PSS) does. The implementation of PSS excitation control techniques, however, requires reliable information about all the states, which are not always directly measurable. To address this issue, the proposed ESO provides estimates of the system states together with the disturbance state, which not only improves the damping but also compensates the system efficiently in the presence of parameter uncertainties and external disturbances. The Closed-Loop Poles (CLPs) of the system have been assigned by the symmetric root locus technique, with the desired level of system damping provided by the dominant CLPs. The performance of the system is analyzed through simulation at different operating conditions. The control method is not only capable of providing zero estimation error in steady state, but also shows robustness in tracking the reference command under parametric variations and external disturbances. Illustrative examples have been provided to demonstrate the effectiveness of the developed methodology.
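
    A minimal linear extended state observer sketch for a generic second-order plant, with the lumped disturbance treated as an extra state and the gains set by the common bandwidth parameterisation. The plant, gains and disturbance are illustrative assumptions, not the synchronous-machine model of the paper.

      import numpy as np

      b0 = 2.0                      # assumed input gain
      wo = 40.0                     # observer bandwidth (rad/s)
      L = np.array([3 * wo, 3 * wo**2, wo**3])

      dt = 1e-4
      t = np.arange(0.0, 2.0, dt)
      u = np.where(t > 0.5, 1.0, 0.0)               # step input
      dist = 1.5 * np.sin(5 * t)                    # unknown external disturbance

      y = yd = 0.0
      z = np.zeros(3)                               # [y_hat, yd_hat, f_hat]
      for i in range(t.size):
          # True plant: y'' = -0.5*y' + dist + b0*u  (the -0.5*y' term is "unmodelled").
          ydd = -0.5 * yd + dist[i] + b0 * u[i]
          # ESO: copies the integrator chain and corrects every state with the output error.
          e = y - z[0]
          z_dot = np.array([z[1] + L[0] * e,
                            z[2] + b0 * u[i] + L[1] * e,
                            L[2] * e])
          z += z_dot * dt
          y += yd * dt
          yd += ydd * dt

      print("estimated lumped disturbance: %.3f, true value: %.3f"
            % (z[2], -0.5 * yd + dist[-1]))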

  6. Real-time segmentation of burst suppression patterns in critical care EEG monitoring

    PubMed Central

    Westover, M. Brandon; Shafi, Mouhsin M.; Ching, ShiNung; Chemali, Jessica J.; Purdon, Patrick L.; Cash, Sydney S.; Brown, Emery N.

    2014-01-01

    Objective Develop a real-time algorithm to automatically discriminate suppressions from non-suppressions (bursts) in electroencephalograms of critically ill adult patients. Methods A real-time method for segmenting adult ICU EEG data into bursts and suppressions is presented based on thresholding local voltage variance. Results are validated against manual segmentations by two experienced human electroencephalographers. We compare inter-rater agreement between manual EEG segmentations by experts with inter-rater agreement between human vs automatic segmentations, and investigate the robustness of segmentation quality to variations in algorithm parameter settings. We further compare the results of using these segmentations as input for calculating the burst suppression probability (BSP), a continuous measure of depth-of-suppression. Results Automated segmentation was comparable to manual segmentation, i.e. algorithm-vs-human agreement was comparable to human-vs-human agreement, as judged by comparing raw EEG segmentations or the derived BSP signals. Results were robust to modest variations in algorithm parameter settings. Conclusions Our automated method satisfactorily segments burst suppression data across a wide range of adult ICU EEG patterns. Performance is comparable to or exceeds that of manual segmentation by human electroencephalographers. Significance Automated segmentation of burst suppression EEG patterns is an essential component of quantitative brain activity monitoring in critically ill and anesthetized adults. The segmentations produced by our algorithm provide a basis for accurate tracking of suppression depth. PMID:23891828
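
    The core of such a segmenter can be sketched in a few lines: threshold the local voltage variance and enforce a minimum segment duration. The window length, variance threshold, minimum duration and synthetic EEG below are assumptions for illustration, not the study's tuned parameter values.

      import numpy as np

      def segment_suppressions(eeg, fs, win_s=0.5, var_thresh=25.0, min_dur_s=0.5):
          """Label each sample as suppression (True) or burst (False) by thresholding
          the local voltage variance, then removing runs shorter than min_dur_s."""
          win = int(win_s * fs)
          kernel = np.ones(win) / win
          local_mean = np.convolve(eeg, kernel, mode="same")
          local_var = np.convolve((eeg - local_mean) ** 2, kernel, mode="same")
          suppressed = local_var < var_thresh              # low variance = suppression
          # Enforce a minimum duration by merging short runs into the preceding label.
          min_len = int(min_dur_s * fs)
          edges = np.flatnonzero(np.diff(suppressed.astype(int))) + 1
          bounds = np.concatenate(([0], edges, [suppressed.size]))
          for a, b in zip(bounds[:-1], bounds[1:]):
              if b - a < min_len and a > 0:
                  suppressed[a:b] = suppressed[a - 1]
          return suppressed

      if __name__ == "__main__":
          fs = 200
          rng = np.random.default_rng(0)
          # Synthetic EEG: alternating 2 s bursts (high variance) and 2 s suppressions.
          eeg = np.concatenate([np.concatenate([rng.normal(0, 30, 2 * fs),
                                                rng.normal(0, 2, 2 * fs)]) for _ in range(5)])
          mask = segment_suppressions(eeg, fs)
          print("fraction of time suppressed (BSP-like):", round(mask.mean(), 2))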

  7. Real-time segmentation of burst suppression patterns in critical care EEG monitoring.

    PubMed

    Brandon Westover, M; Shafi, Mouhsin M; Ching, Shinung; Chemali, Jessica J; Purdon, Patrick L; Cash, Sydney S; Brown, Emery N

    2013-09-30

    Develop a real-time algorithm to automatically discriminate suppressions from non-suppressions (bursts) in electroencephalograms of critically ill adult patients. A real-time method for segmenting adult ICU EEG data into bursts and suppressions is presented based on thresholding local voltage variance. Results are validated against manual segmentations by two experienced human electroencephalographers. We compare inter-rater agreement between manual EEG segmentations by experts with inter-rater agreement between human vs automatic segmentations, and investigate the robustness of segmentation quality to variations in algorithm parameter settings. We further compare the results of using these segmentations as input for calculating the burst suppression probability (BSP), a continuous measure of depth-of-suppression. Automated segmentation was comparable to manual segmentation, i.e. algorithm-vs-human agreement was comparable to human-vs-human agreement, as judged by comparing raw EEG segmentations or the derived BSP signals. Results were robust to modest variations in algorithm parameter settings. Our automated method satisfactorily segments burst suppression data across a wide range of adult ICU EEG patterns. Performance is comparable to or exceeds that of manual segmentation by human electroencephalographers. Automated segmentation of burst suppression EEG patterns is an essential component of quantitative brain activity monitoring in critically ill and anesthetized adults. The segmentations produced by our algorithm provide a basis for accurate tracking of suppression depth. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Real time health monitoring and control system methodology for flexible space structures

    NASA Astrophysics Data System (ADS)

    Jayaram, Sanjay

    This dissertation is concerned with the Near Real-time Autonomous Health Monitoring of Flexible Space Structures. The dynamics of multi-body flexible systems are uncertain due to factors such as high non-linearity, consideration of higher modal frequencies, high dimensionality, multiple inputs and outputs, operational constraints, as well as unexpected failures of sensors and/or actuators. Hence a systematic framework for developing a high fidelity dynamic model of a flexible structural system needs to be understood. The fault detection mechanism that will be an integrated part of an autonomous health monitoring system comprises the detection of abnormalities in the sensors and/or actuators and the correction of these detected faults (if possible). Actuator faults are rectified by applying a robust control law and robust measures capable of detecting and recovering or replacing the faulty actuators. The fault tolerant concept applied to the sensors takes the form of an Extended Kalman Filter (EKF). The EKF weighs the information coming from multiple sensors (redundant sensors used to measure the same information), automatically identifies the faulty sensors, and forms the best estimate from the remaining sensors. The mechanization comprises instrumenting flexible deployable panels (solar arrays) with multiple angular position and rate sensors connected to the data acquisition system. The sensors give position and rate information of the solar panel in all three axes (i.e. roll, pitch and yaw). The position data corresponds to the steady-state response and the rate data gives better insight into the transient response of the system. This is a critical factor for real-time autonomous health monitoring. MATLAB (and/or C++) software is used for high fidelity modeling and the fault tolerant mechanism.

  9. Robust vehicle detection under various environmental conditions using an infrared thermal camera and its application to road traffic flow monitoring.

    PubMed

    Iwasaki, Yoichiro; Misumi, Masato; Nakamiya, Toshiyuki

    2013-06-17

    We have already proposed a method for detecting vehicle positions and their movements (henceforth referred to as "our previous method") using thermal images taken with an infrared thermal camera. Our experiments have shown that our previous method detects vehicles robustly under four different environmental conditions which involve poor visibility conditions in snow and thick fog. Our previous method uses the windshield and its surroundings as the target of the Viola-Jones detector. Some experiments in winter show that the vehicle detection accuracy decreases because the temperatures of many windshields approximate those of the exterior of the windshields. In this paper, we propose a new vehicle detection method (henceforth referred to as "our new method"). Our new method detects vehicles based on tires' thermal energy reflection. We have done experiments using three series of thermal images for which the vehicle detection accuracies of our previous method are low. Our new method detects 1,417 vehicles (92.8%) out of 1,527 vehicles, and the number of false detections is 52 in total. Therefore, by combining our two methods, high vehicle detection accuracies are maintained under various environmental conditions. Finally, we apply the traffic information obtained by our two methods to automatic traffic flow monitoring, and show the effectiveness of our proposal.
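
    The Viola-Jones detector used by both methods is available in OpenCV as a cascade classifier. The sketch below shows how such a detector would be applied to a thermal frame; the cascade file name is hypothetical, since the authors train their own cascade on windshield and tire signatures.

    ```python
    import cv2

    # Hypothetical cascade trained on thermal vehicle signatures (e.g. windshields or
    # tire regions); a custom XML produced with opencv_traincascade is assumed.
    cascade = cv2.CascadeClassifier("thermal_vehicle_cascade.xml")

    frame = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE)
    frame = cv2.equalizeHist(frame)          # normalize thermal contrast before detection

    vehicles = cascade.detectMultiScale(
        frame, scaleFactor=1.1, minNeighbors=4, minSize=(24, 24))

    for (x, y, w, h) in vehicles:
        cv2.rectangle(frame, (x, y), (x + w, y + h), 255, 2)
    cv2.imwrite("detections.png", frame)
    ```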

  10. Automatic detection of retina disease: robustness to image quality and localization of anatomy structure.

    PubMed

    Karnowski, T P; Aykac, D; Giancardo, L; Li, Y; Nichols, T; Tobin, K W; Chaum, E

    2011-01-01

    The automated detection of diabetic retinopathy and other eye diseases in images of the retina has great promise as a low-cost method for broad-based screening. Many systems in the literature which perform automated detection include a quality estimation step and physiological feature detection, including the vascular tree and the optic nerve / macula location. In this work, we study the robustness of an automated disease detection method with respect to the accuracy of the optic nerve location and the quality of the images obtained as judged by a quality estimation algorithm. The detection algorithm features microaneurysm and exudate detection followed by feature extraction on the detected population to describe the overall retina image. Labeled images of retinas ground-truthed to disease states are used to train a supervised learning algorithm to identify the disease state of the retina image and exam set. Under the restrictions of high confidence optic nerve detections and good quality imagery, the system achieves a sensitivity and specificity of 94.8% and 78.7% with area-under-curve of 95.3%. Analysis of the effect of constraining quality and the distinction between mild non-proliferative diabetic retinopathy, normal retina images, and more severe disease states is included.

  11. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion

    PubMed Central

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-01-01

    In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time. PMID:28475145

  12. Continuous robust sound event classification using time-frequency features and deep learning

    PubMed Central

    Song, Yan; Xiao, Wei; Phan, Huy

    2017-01-01

    The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-world conditions is the classification of continuous, occluded or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high performing isolated sound classifiers to operate with continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system using the new task, to provide the first analysis of their performance for continuous sound event detection. In addition it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification. PMID:28892478
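
    A minimal sketch of an energy-based event detection front end of the kind mentioned above is given below; frame length, hop, threshold, and minimum duration are illustrative assumptions, not the paper's parameters.

    ```python
    import numpy as np

    def detect_events(signal, fs, frame_s=0.025, hop_s=0.010, thresh_db=-30.0, min_frames=5):
        """Energy-based front end: split a continuous recording into candidate sound
        events by thresholding short-time frame energy (in dB relative to the peak)."""
        frame, hop = int(frame_s * fs), int(hop_s * fs)
        n = 1 + max(0, (len(signal) - frame) // hop)
        energy = np.array([np.sum(signal[i * hop:i * hop + frame] ** 2) for i in range(n)])
        energy_db = 10.0 * np.log10(energy / (energy.max() + 1e-12) + 1e-12)
        active = energy_db > thresh_db

        events, start = [], None
        for i, a in enumerate(active):
            if a and start is None:
                start = i
            elif not a and start is not None:
                if i - start >= min_frames:          # drop very short blips
                    events.append((start * hop, i * hop + frame))
                start = None
        if start is not None:
            events.append((start * hop, n * hop + frame))
        return events   # (start_sample, end_sample) pairs handed to the classifier
    ```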

  13. Continuous robust sound event classification using time-frequency features and deep learning.

    PubMed

    McLoughlin, Ian; Zhang, Haomin; Xie, Zhipeng; Song, Yan; Xiao, Wei; Phan, Huy

    2017-01-01

    The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-world conditions is the classification of continuous, occluded or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high performing isolated sound classifiers to operate with continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system using the new task, to provide the first analysis of their performance for continuous sound event detection. In addition it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification.

  14. Monitoring of pipeline ruptures by means of a Robust Satellite Technique (RST)

    NASA Astrophysics Data System (ADS)

    Filizzola, C.; Baldassarre, G.; Corrado, R.; Mazzeo, G.; Marchese, F.; Paciello, R.; Pergola, N.; Tramutoli, V.

    2009-04-01

    Pipeline ruptures have deep economic and ecologic consequences, so that pipeline networks represent critical infrastructures to be carefully monitored, particularly in areas which are frequently affected by natural disasters like earthquakes, hurricanes, landslides, etc. In order to minimize damages, the detection of harmful events along pipelines should be as rapid as possible and, at the same time, what is detected should be an actual incident and not a false alarm. In this work, a Robust Satellite Technique (RST), already applied to the prediction and NRT (Near Real Time) monitoring of major natural and environmental hazards (such as seismically active areas, volcanic activity, hydrological risk, forest fires and oil spills), has been employed to automatically identify, from satellite, anomalous Thermal Infrared (TIR) transients related to explosions of oil/gas pipelines. In this context, the combination of the RST approach with the high temporal resolution offered by geostationary satellites seems to assure both a reliable and timely detection of such events. The potential of the technique (applied to MSG-SEVIRI data) was tested over Iraq, a region which is sadly known for the numerous (mainly manmade) accidents to pipelines, in order to have a simulation of the effects (such as fires or explosions near or directly involving a pipeline facility) due to natural disasters.

  15. Extreme Sparse Multinomial Logistic Regression: A Fast and Robust Framework for Hyperspectral Image Classification

    NASA Astrophysics Data System (ADS)

    Cao, Faxian; Yang, Zhijing; Ren, Jinchang; Ling, Wing-Kuen; Zhao, Huimin; Marshall, Stephen

    2017-12-01

    Although the sparse multinomial logistic regression (SMLR) has provided a useful tool for sparse classification, it suffers from inefficacy in dealing with high dimensional features and manually set initial regressor values. This has significantly constrained its applications for hyperspectral image (HSI) classification. In order to tackle these two drawbacks, an extreme sparse multinomial logistic regression (ESMLR) is proposed for effective classification of HSI. First, the HSI dataset is projected to a new feature space with randomly generated weight and bias. Second, an optimization model is established by the Lagrange multiplier method and the dual principle to automatically determine a good initial regressor for SMLR via minimizing the training error and the regressor value. Furthermore, the extended multi-attribute profiles (EMAPs) are utilized for extracting both the spectral and spatial features. A combinational linear multiple features learning (MFL) method is proposed to further enhance the features extracted by ESMLR and EMAPs. Finally, the logistic regression via the variable splitting and the augmented Lagrangian (LORSAL) is adopted in the proposed framework for reducing the computational time. Experiments are conducted on two well-known HSI datasets, namely the Indian Pines dataset and the Pavia University dataset, which have shown the fast and robust performance of the proposed ESMLR framework.

  16. Differentiating closed-loop cortical intention from rest: building an asynchronous electrocorticographic BCI.

    PubMed

    Williams, Jordan J; Rouse, Adam G; Thongpang, Sanitta; Williams, Justin C; Moran, Daniel W

    2013-08-01

    Recent experiments have shown that electrocorticography (ECoG) can provide robust control signals for a brain-computer interface (BCI). Strategies that attempt to adapt a BCI control algorithm by learning from past trials often assume that the subject is attending to each training trial. Likewise, automatic disabling of movement control would be desirable during resting periods when random brain fluctuations might cause unintended movements of a device. To this end, our goal was to identify ECoG differences that arise between periods of active BCI use and rest. We examined spectral differences in multi-channel, epidural micro-ECoG signals recorded from non-human primates when rest periods were interleaved between blocks of an active BCI control task. Post-hoc analyses demonstrated that these states can be decoded accurately on both a trial-by-trial and real-time basis, and this discriminability remains robust over a period of weeks. In addition, high gamma frequencies showed greater modulation with desired movement direction, while lower frequency components demonstrated greater amplitude differences between task and rest periods, suggesting possible specialized BCI roles for these frequencies. The results presented here provide valuable insight into the neurophysiology of BCI control as well as important considerations toward the design of an asynchronous BCI system.

  17. Probabilistic n/γ discrimination with robustness against outliers for use in neutron profile monitors

    NASA Astrophysics Data System (ADS)

    Uchida, Y.; Takada, E.; Fujisaki, A.; Kikuchi, T.; Ogawa, K.; Isobe, M.

    2017-08-01

    A method to stochastically discriminate neutron and γ-ray signals measured with a stilbene organic scintillator is proposed. Each pulse signal was stochastically categorized into two groups: neutron and γ-ray. In previous work, the Expectation Maximization (EM) algorithm was used with the assumption that the measured data followed a Gaussian mixture distribution. It was shown that probabilistic discrimination between these groups is possible. Moreover, by setting the initial parameters for the Gaussian mixture distribution with a k-means algorithm, the possibility of automatic discrimination was demonstrated. In this study, the Student's t-mixture distribution was used as a probabilistic distribution with the EM algorithm to improve the robustness against the effect of outliers caused by pileup of the signals. To validate the proposed method, the figures of merit (FOMs) were compared for the EM algorithm assuming a t-mixture distribution and a Gaussian mixture distribution. The t-mixture distribution resulted in an improvement of the FOMs compared with the Gaussian mixture distribution. The proposed data processing technique is a promising tool not only for neutron and γ-ray discrimination in fusion experiments but also in other fields, for example, homeland security, cancer therapy with high energy particles, nuclear reactor decommissioning, pattern recognition, and so on.
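
    The baseline configuration described (EM on a two-component Gaussian mixture initialized by k-means) can be sketched with scikit-learn as below. The Student's t-mixture used for robustness against pile-up outliers is not available in scikit-learn and would require a custom EM loop; the feature file and columns are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Pulse-shape features per event, e.g. columns = [total charge, tail/total ratio].
    # The feature choice is illustrative; the paper works on stilbene scintillator pulses.
    features = np.loadtxt("pulse_features.csv", delimiter=",")

    # EM on a two-component Gaussian mixture, initialized with k-means so the
    # neutron/gamma groups are found without manual seeding.
    gmm = GaussianMixture(n_components=2, covariance_type="full",
                          init_params="kmeans", random_state=0).fit(features)

    posteriors = gmm.predict_proba(features)   # probabilistic n/γ assignment per pulse
    labels = posteriors.argmax(axis=1)         # hard labels if needed
    ```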

  18. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion.

    PubMed

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-05-05

    In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time.

  19. Differentiating closed-loop cortical intention from rest: building an asynchronous electrocorticographic BCI

    NASA Astrophysics Data System (ADS)

    Williams, Jordan J.; Rouse, Adam G.; Thongpang, Sanitta; Williams, Justin C.; Moran, Daniel W.

    2013-08-01

    Objective. Recent experiments have shown that electrocorticography (ECoG) can provide robust control signals for a brain-computer interface (BCI). Strategies that attempt to adapt a BCI control algorithm by learning from past trials often assume that the subject is attending to each training trial. Likewise, automatic disabling of movement control would be desirable during resting periods when random brain fluctuations might cause unintended movements of a device. To this end, our goal was to identify ECoG differences that arise between periods of active BCI use and rest. Approach. We examined spectral differences in multi-channel, epidural micro-ECoG signals recorded from non-human primates when rest periods were interleaved between blocks of an active BCI control task. Main Results. Post-hoc analyses demonstrated that these states can be decoded accurately on both a trial-by-trial and real-time basis, and this discriminability remains robust over a period of weeks. In addition, high gamma frequencies showed greater modulation with desired movement direction, while lower frequency components demonstrated greater amplitude differences between task and rest periods, suggesting possible specialized BCI roles for these frequencies. Significance. The results presented here provide valuable insight into the neurophysiology of BCI control as well as important considerations toward the design of an asynchronous BCI system.

  20. An efficient adaptive sampling strategy for global surrogate modeling with applications in multiphase flow simulation

    NASA Astrophysics Data System (ADS)

    Mo, S.; Lu, D.; Shi, X.; Zhang, G.; Ye, M.; Wu, J.

    2016-12-01

    Surrogate models have shown remarkable computational efficiency in hydrological simulations involving design space exploration, sensitivity analysis, uncertainty quantification, etc. The central task in constructing a global surrogate model is to achieve a prescribed approximation accuracy with as few original model executions as possible, which requires a good design strategy to optimize the distribution of data points in the parameter domains and an effective stopping criterion to automatically terminate the design process when the desired approximation accuracy is achieved. This study proposes a novel adaptive sampling strategy, which starts from a small number of initial samples and adaptively selects additional samples by balancing collection in unexplored regions and refinement in interesting areas. We define an efficient and effective evaluation metric based on a Taylor expansion to select the most promising potential samples from candidate points, and propose a robust stopping criterion based on the approximation accuracy at new points to guarantee the achievement of the desired accuracy. The numerical results of several benchmark analytical functions indicate that the proposed approach is more computationally efficient and robust than the widely used maximin distance design and two other well-known adaptive sampling strategies. The application to two complicated multiphase flow problems further demonstrates the efficiency and effectiveness of our method in constructing global surrogate models for high-dimensional and highly nonlinear problems. Acknowledgements: This work was financially supported by the National Nature Science Foundation of China grants No. 41030746 and 41172206.

  1. Fault Diagnosis for Rotating Machinery: A Method based on Image Processing

    PubMed Central

    Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie

    2016-01-01

    Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery have gained wide attention for their significance in preventing catastrophic accidents and guaranteeing sufficient maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline method based on image processing for fault diagnosis of rotating machinery. Different from traditional analysis methods in one-dimensional space, this study employs computing methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the following image-based feature extraction. Then, an emerging image-processing approach for feature extraction, speeded-up robust features (SURF), is employed to automatically extract fault features from the transformed bi-spectrum contour map and form a high-dimensional feature vector. To highlight the main fault features and reduce subsequent computational cost, t-Distributed Stochastic Neighbor Embedding is adopted to reduce the dimensionality of the feature vector. At last, a probabilistic neural network is introduced for fault identification. Two typical types of rotating machinery, an axial piston hydraulic pump and self-priming centrifugal pumps, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed image-processing-based method achieves a high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery. PMID:27711246
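
    A rough sketch of the image-based pipeline is given below. ORB keypoints stand in for the patented SURF descriptors and a k-nearest-neighbour classifier stands in for the probabilistic neural network, so this is an approximation of the workflow rather than the authors' implementation; file names and labels are placeholders. Note also that t-SNE has no out-of-sample transform, so it is fitted on all maps at once here.

    ```python
    import cv2
    import numpy as np
    from sklearn.manifold import TSNE
    from sklearn.neighbors import KNeighborsClassifier

    def image_feature_vector(bispectrum_map_path, n_keypoints=64):
        """Detect keypoints on a bi-spectrum contour map and stack their descriptors
        into one high-dimensional vector (ORB used as a stand-in for SURF)."""
        img = cv2.imread(bispectrum_map_path, cv2.IMREAD_GRAYSCALE)
        orb = cv2.ORB_create(nfeatures=n_keypoints)
        _, desc = orb.detectAndCompute(img, None)
        desc = np.zeros((n_keypoints, 32), dtype=np.float32) if desc is None else desc
        pad = np.zeros((max(0, n_keypoints - len(desc)), desc.shape[1]))
        return np.vstack([desc, pad])[:n_keypoints].ravel()

    # Paths and fault labels are illustrative placeholders.
    paths = ["map_000.png", "map_001.png", "map_002.png", "map_003.png"]
    labels = [0, 0, 1, 1]
    X = np.array([image_feature_vector(p) for p in paths])

    # Reduce dimensionality with t-SNE, then classify (k-NN standing in for the PNN).
    X_low = TSNE(n_components=2, perplexity=min(30, len(X) - 1),
                 random_state=0).fit_transform(X)
    clf = KNeighborsClassifier(n_neighbors=1).fit(X_low, labels)
    ```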

  2. Fault Diagnosis for Rotating Machinery: A Method based on Image Processing.

    PubMed

    Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie

    2016-01-01

    Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery have gained wide attention for their significance in preventing catastrophic accidents and guaranteeing sufficient maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline method based on image processing for fault diagnosis of rotating machinery. Different from traditional analysis methods in one-dimensional space, this study employs computing methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the following image-based feature extraction. Then, an emerging image-processing approach for feature extraction, speeded-up robust features (SURF), is employed to automatically extract fault features from the transformed bi-spectrum contour map and form a high-dimensional feature vector. To highlight the main fault features and reduce subsequent computational cost, t-Distributed Stochastic Neighbor Embedding is adopted to reduce the dimensionality of the feature vector. At last, a probabilistic neural network is introduced for fault identification. Two typical types of rotating machinery, an axial piston hydraulic pump and self-priming centrifugal pumps, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed image-processing-based method achieves a high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery.

  3. Automatic camera to laser calibration for high accuracy mobile mapping systems using INS

    NASA Astrophysics Data System (ADS)

    Goeman, Werner; Douterloigne, Koen; Gautama, Sidharta

    2013-09-01

    A mobile mapping system (MMS) is a mobile multi-sensor platform developed by the geoinformation community to support the acquisition of huge amounts of geodata in the form of georeferenced high resolution images and dense laser clouds. Since data fusion and data integration techniques are increasingly able to combine the complementary strengths of different sensor types, the external calibration of a camera to a laser scanner is a common pre-requisite on today's mobile platforms. The methods of calibration, nevertheless, are often relatively poorly documented, are almost always time-consuming, demand expert knowledge and often require a carefully constructed calibration environment. A new methodology is studied and explored to provide a high quality external calibration for a pinhole camera to a laser scanner which is automatic, easy to perform, robust and foolproof. The method presented here, uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well studied absolute orientation problem needs to be solved. In many cases, the camera and laser sensor are calibrated in relation to the INS system. Therefore, the transformation from camera to laser contains the cumulated error of each sensor in relation to the INS. Here, the calibration of the camera is performed in relation to the laser frame using the time synchronization between the sensors for data association. In this study, the use of the inertial relative movement will be explored to collect more useful calibration data. This results in a better intersensor calibration allowing better coloring of the clouds and a more accurate depth mask for images, especially on the edges of objects in the scene.
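
    At the heart of the method is the absolute orientation problem mentioned above: recovering the rigid transform between matched 3-D points expressed in the camera and laser frames. A minimal SVD-based (Kabsch/Horn-style) solution is sketched below; it omits the ranging-pole detection, time synchronization, and INS handling of the full pipeline.

    ```python
    import numpy as np

    def absolute_orientation(P, Q):
        """Least-squares rigid transform (R, t) with Q ≈ R @ P + t, for matched
        3xN point sets P (e.g. camera frame) and Q (e.g. laser frame), via SVD."""
        p_mean, q_mean = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
        H = (P - p_mean) @ (Q - q_mean).T           # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        t = q_mean - R @ p_mean
        return R, t
    ```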

  4. A robust, high-throughput method for computing maize ear, cob, and kernel attributes automatically from images.

    PubMed

    Miller, Nathan D; Haase, Nicholas J; Lee, Jonghyun; Kaeppler, Shawn M; de Leon, Natalia; Spalding, Edgar P

    2017-01-01

    Grain yield of the maize plant depends on the sizes, shapes, and numbers of ears and the kernels they bear. An automated pipeline that can measure these components of yield from easily-obtained digital images is needed to advance our understanding of this globally important crop. Here we present three custom algorithms designed to compute such yield components automatically from digital images acquired by a low-cost platform. One algorithm determines the average space each kernel occupies along the cob axis using a sliding-window Fourier transform analysis of image intensity features. A second counts individual kernels removed from ears, including those in clusters. A third measures each kernel's major and minor axis after a Bayesian analysis of contour points identifies the kernel tip. Dimensionless ear and kernel shape traits that may interrelate yield components are measured by principal components analysis of contour point sets. Increased objectivity and speed compared to typical manual methods are achieved without loss of accuracy as evidenced by high correlations with ground truth measurements and simulated data. Millimeter-scale differences among ear, cob, and kernel traits that ranged more than 2.5-fold across a diverse group of inbred maize lines were resolved. This system for measuring maize ear, cob, and kernel attributes is being used by multiple research groups as an automated Web service running on community high-throughput computing and distributed data storage infrastructure. Users may create their own workflow using the source code that is staged for download on a public repository. © 2016 The Authors. The Plant Journal published by Society for Experimental Biology and John Wiley & Sons Ltd.
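
    The first algorithm (sliding-window Fourier analysis of image intensity along the cob axis) can be sketched as follows; window size, hop, and the use of the dominant non-DC frequency as the kernel period are illustrative assumptions, not the authors' exact implementation.

    ```python
    import numpy as np

    def kernel_spacing(intensity_profile, px_per_mm, win=256, hop=64):
        """Estimate average kernel spacing along the cob axis from a 1-D image
        intensity profile using a sliding-window Fourier analysis: in each window,
        the dominant non-DC spatial frequency corresponds to the kernel period."""
        spacings = []
        for start in range(0, len(intensity_profile) - win + 1, hop):
            seg = intensity_profile[start:start + win]
            seg = (seg - seg.mean()) * np.hanning(win)      # remove DC, taper edges
            spectrum = np.abs(np.fft.rfft(seg))
            freqs = np.fft.rfftfreq(win, d=1.0)             # cycles per pixel
            k = 1 + np.argmax(spectrum[1:])                 # skip the DC bin
            if freqs[k] > 0:
                spacings.append(1.0 / freqs[k] / px_per_mm) # period in millimetres
        return float(np.median(spacings)) if spacings else float("nan")
    ```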

  5. MO-FG-CAMPUS-TeP3-01: A Model of Baseline Shift to Improve Robustness of Proton Therapy Treatments of Moving Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Souris, K; Barragan Montero, A; Di Perri, D

    Purpose: The shift in mean position of a moving tumor, also known as "baseline shift", has been modeled in order to automatically generate uncertainty scenarios for the assessment and robust optimization of proton therapy treatments in lung cancer. Methods: An average CT scan and a Mid-Position CT scan (MidPCT) of the patient at the planning time are first generated from 4D-CT data. The mean position of the tumor along the breathing cycle is represented by the GTV contour in the MidPCT. Several studies reported both systematic and random variations of the mean tumor position from fraction to fraction. Our model can simulate this baseline shift by generating a local deformation field that moves the tumor on all phases of the 4D-CT, without creating any non-physical artifact. The deformation field is comprised of normal and tangential components with respect to the lung wall in order to allow the tumor to slip within the lung instead of deforming the lung surface. The deformation field is eventually smoothed in order to enforce its continuity. Two 4D-CT series acquired at 1 week of interval were used to validate the model. Results: Based on the first 4D-CT set, the model was able to generate a third 4D-CT that reproduced the 5.8 mm baseline shift measured in the second 4D-CT. The water equivalent thickness (WET) of the voxels has been computed for the 3 average CTs. The root mean square deviation of the WET in the GTV is 0.34 mm between week 1 and week 2, and 0.08 mm between the simulated data and week 2. Conclusion: Our model can be used to automatically generate uncertainty scenarios for robustness analysis of a proton therapy plan. The generated scenarios can also feed a TPS equipped with a robust optimizer. Kevin Souris, Ana Barragan, and Dario Di Perri are financially supported by Televie Grants from F.R.S.-FNRS.

  6. Variability and robustness of scatterers in HRR/ISAR ground target data and its influence on the ATR performance

    NASA Astrophysics Data System (ADS)

    Schumacher, R.; Schimpf, H.; Schiller, J.

    2011-06-01

    The most challenging problem of Automatic Target Recognition (ATR) is the extraction of robust and independent target features which describe the target unambiguously. These features have to be robust and invariant in different senses: in time, between aspect views (azimuth and elevation angle), between target motions (translation and rotation) and between different target variants. Especially for ground moving targets in military applications an irregular target motion is typical, so that a strong variation of the backscattered radar signal with azimuth and elevation angle makes the extraction of stable and robust features most difficult. For ATR based on High Range Resolution (HRR) profiles and/or Inverse Synthetic Aperture Radar (ISAR) images it is crucial that the reference dataset consists of stable and robust features, which will depend, among other factors, on the target aspect and depression angle. Here it is important to find an adequate data grid for efficient data coverage in the reference dataset for ATR. In this paper the variability of the backscattered radar signals of target scattering centers is analyzed for different HRR profiles and ISAR images from measured turntable datasets of ground targets under controlled conditions. In particular, the dependency of the features on the elevation angle is analyzed with regard to the ATR of large strip SAR data with a large range of depression angles, using available (I)SAR datasets as reference. In this work the robustness of these scattering centers is analyzed by extracting their amplitude, phase and position. To this end, turntable measurements under controlled conditions were performed on an artificial military reference object called STANDCAM. Measures referring to the variability, similarity, robustness and separability of the scattering centers are defined. The dependency of the scattering behaviour with respect to azimuth and elevation variations is analyzed. Additionally, generic types of features (geometrical, statistical), which can be derived especially from (I)SAR images, are applied to the ATR task. Subsequently, the dependence of individual feature values as well as the feature statistics on aspect (i.e. azimuth and elevation) is presented. The Kolmogorov-Smirnov distance is used to show how the feature statistics are influenced by varying elevation angles. Finally, confusion matrices are computed for the STANDCAM target at all eleven elevation angles. This helps to assess the robustness of ATR performance under the influence of aspect angle deviations between training set and test set.

  7. Invited review: technical solutions for analysis of milk constituents and abnormal milk.

    PubMed

    Brandt, M; Haeussermann, A; Hartung, E

    2010-02-01

    Information about constituents of milk and visual alterations can be used for management support in improving mastitis detection, monitoring fertility and reproduction, and adapting individual diets. Numerous sensors that gather this information are either currently available or in development. Nevertheless, there is still a need to adapt these sensors to special requirements of on-farm utilization such as robustness, calibration and maintenance, costs, operating cycle duration, and high sensitivity and specificity. This paper provides an overview of available sensors, ongoing research, and areas of application for analysis of milk constituents. Currently, the recognition of abnormal milk and the control of udder health is achieved mainly by recording electrical conductivity and changes in milk color. Further indicators of inflammation were recently investigated either to satisfy the high specificity necessary for automatic separation of milk or to create reliable alarm lists. Likewise, milk composition, especially fat:protein ratio, milk urea nitrogen content, and concentration of ketone bodies, provides suitable information about energy and protein supply, roughage fraction in the diet, and metabolic imbalances in dairy cows. In this regard, future prospects are to use frequent on-farm measurements of milk constituents for short-term automatic nutritional management. Finally, measuring progesterone concentration in milk helps farmers detect ovulation, pregnancy, and infertility. Monitoring systems for on-farm or on-line analysis of milk composition are mostly based on infrared spectroscopy, optical methods, biosensors, or sensor arrays. Their calibration and maintenance requirements have to be checked thoroughly before they can be regularly implemented on dairy farms. Copyright 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  8. Automatic PSO-Based Deformable Structures Markerless Tracking in Laparoscopic Cholecystectomy

    NASA Astrophysics Data System (ADS)

    Djaghloul, Haroun; Batouche, Mohammed; Jessel, Jean-Pierre

    An automatic and markerless tracking method for deformable structures (digestive organs) during laparoscopic cholecystectomy interventions is presented, which uses Particle Swarm Optimization (PSO) behaviour and preoperative a priori knowledge. The shape associated with the global best particles of the population determines a coarse representation of the targeted organ (the gallbladder) in monocular laparoscopic colored images. The swarm behaviour is directed by a new fitness function to be optimized to improve the detection and tracking performance. The function is defined by a linear combination of two terms, namely, the human a priori knowledge term (H) and the particle density term (D). Within the limits of standard PSO characteristics, experimental results on both synthetic and real data show the effectiveness and robustness of our method. Indeed, without the need for explicit initialization, it outperforms existing methods (such as active contours, deformable models and Gradient Vector Flow) in accuracy and convergence rate.
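
    A minimal global-best PSO loop of the kind relied on above is sketched below; the combined objective is a placeholder standing in for the paper's H (a priori knowledge) and D (particle density) terms, and the weights are illustrative.

    ```python
    import numpy as np

    def pso_minimize(fitness, dim, n_particles=30, iters=100,
                     bounds=(0.0, 1.0), w=0.7, c1=1.5, c2=1.5, seed=0):
        """Plain global-best particle swarm optimization. `fitness` maps a
        position vector to a cost; here it would combine the a priori knowledge
        term H and the particle density term D described in the abstract."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        x = rng.uniform(lo, hi, size=(n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
        g = pbest[np.argmin(pbest_f)].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([fitness(p) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            g = pbest[np.argmin(pbest_f)].copy()
        return g, float(pbest_f.min())

    # Hypothetical combined objective: alpha weights the a priori term against density.
    best, cost = pso_minimize(lambda p: 0.6 * np.sum((p - 0.5) ** 2) + 0.4 * np.var(p), dim=4)
    ```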

  9. An Automatic Registration Algorithm for 3D Maxillofacial Model

    NASA Astrophysics Data System (ADS)

    Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng

    2016-09-01

    3D image registration aims at aligning two 3D data sets in a common coordinate system, and has been widely used in computer vision, pattern recognition and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for the registration of 3D maxillofacial models, including the facial surface model and the skull model. Our proposed registration algorithm can achieve a good alignment between a partial and a whole maxillofacial model in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.

  10. [Affine transformation-based automatic registration for peripheral digital subtraction angiography (DSA)].

    PubMed

    Kong, Gang; Dai, Dao-Qing; Zou, Lu-Min

    2008-07-01

    In order to remove the artifacts of peripheral digital subtraction angiography (DSA), an affine transformation-based automatic image registration algorithm is introduced here. The whole process is described as follows: First, rectangular feature templates are constructed, centered on Harris corners extracted from the mask image, and motion vectors of the central feature points are estimated using template matching with maximum histogram energy as the similarity measure. Then the optimal parameters of the affine transformation are calculated with the matrix singular value decomposition (SVD) method. Finally, bilinear intensity interpolation is applied to the mask according to the estimated affine transformation. More than 30 peripheral DSA registrations were performed with the presented algorithm; as a result, motion artifacts of the images are removed with sub-pixel precision, and the computation time is low enough to satisfy clinical requirements. Experimental results show the efficiency and robustness of the algorithm.
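
    Once the Harris-corner templates have been matched, the affine parameters can be obtained by linear least squares over the point correspondences (NumPy's lstsq is SVD-based). The sketch below is a generic version of that step; the point values are illustrative.

    ```python
    import numpy as np

    def estimate_affine(src, dst):
        """Least-squares affine transform mapping src -> dst, where src and dst are
        Nx2 arrays of matched points (e.g. Harris-corner template matches between
        mask and contrast image). Returns a 2x3 matrix [A | t]."""
        n = len(src)
        X = np.hstack([src, np.ones((n, 1))])      # homogeneous source coordinates
        # Solve X @ M ≈ dst for M (3x2); lstsq uses an SVD-based solver internally.
        M, *_ = np.linalg.lstsq(X, dst, rcond=None)
        return M.T

    # Illustrative usage with three or more correspondences:
    src = np.array([[10.0, 12.0], [85.0, 14.0], [40.0, 90.0], [70.0, 66.0]])
    dst = np.array([[11.5, 13.0], [86.2, 15.1], [41.0, 91.4], [71.3, 67.0]])
    A = estimate_affine(src, dst)                   # apply with: A @ [x, y, 1]
    ```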

  11. An effective non-rigid registration approach for ultrasound image based on "demons" algorithm.

    PubMed

    Liu, Yan; Cheng, H D; Huang, Jianhua; Zhang, Yingtao; Tang, Xianglong; Tian, Jiawei

    2013-06-01

    Medical image registration is an important component of computer-aided diagnosis systems in diagnostics, therapy planning, and guidance of surgery. Because of its low signal-to-noise ratio (SNR), ultrasound (US) image registration is a difficult task. In this paper, a fully automatic non-rigid image registration algorithm based on the demons algorithm is proposed for registration of ultrasound images. In the proposed method, an "inertia force" derived from the local motion trend of pixels in a Moore neighborhood system is produced and integrated into the optical flow equation to estimate the demons force, which helps handle the speckle noise and preserve the geometric continuity of US images. In the experiments, a series of US images and several similarity metrics are utilized for evaluating the performance. The experimental results demonstrate that the proposed method can register ultrasound images efficiently, quickly, and automatically, and is robust to noise.

  12. An automatic approach to exclude interlopers from asteroid families

    NASA Astrophysics Data System (ADS)

    Radović, Viktor; Novaković, Bojan; Carruba, Valerio; Marčeta, Dušan

    2017-09-01

    Asteroid families are a valuable source of information for many asteroid-related researches, assuming a reliable list of their members can be obtained. However, as the number of known asteroids increases rapidly, it becomes more and more difficult to obtain a robust list of members of an asteroid family. Here, we propose a new approach to deal with the problem, based on the well-known hierarchical clustering method. An additional step is introduced into the whole procedure in order to reduce the so-called chaining effect. The main idea is to prevent chaining through an already identified interloper. We show that in this way the number of potential interlopers among family members is significantly reduced. Moreover, we developed an automatic online portal to apply this procedure, i.e. to generate a list of family members as well as a list of potential interlopers. The Asteroid Families Portal is freely available to all interested researchers.
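
    The underlying hierarchical clustering method is essentially single-linkage clustering in proper-element space with a distance cutoff, which is also what produces the chaining effect the paper mitigates. The sketch below shows only that baseline step with a simplified metric (scaled Euclidean rather than the standard Zappalà velocity metric); the file name and cutoff are illustrative, and the paper's interloper-exclusion step is not shown.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Proper elements per asteroid: columns = (a, e, sin i); values are illustrative.
    elements = np.loadtxt("proper_elements.csv", delimiter=",")

    # Simplified distance: scaled Euclidean in (a, e, sin i); the operational HCM uses
    # a velocity metric that weights the three terms differently.
    scaled = elements / elements.std(axis=0)

    Z = linkage(scaled, method="single")                   # single linkage reproduces HCM chaining
    members = fcluster(Z, t=0.05, criterion="distance")    # cutoff plays the role of the velocity cutoff
    ```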

  13. A fully-automatic fast segmentation of the sub-basal layer nerves in corneal images.

    PubMed

    Guimarães, Pedro; Wigdahl, Jeff; Poletti, Enea; Ruggeri, Alfredo

    2014-01-01

    Changes in corneal nerves have been linked to damage caused by surgical interventions or prolonged contact lens wear. Furthermore, nerve tortuosity has been shown to correlate with the severity of diabetic neuropathy. For these reasons there has been increasing interest in the analysis of these structures. In this work we propose a novel, robust, and fast, fully automatic algorithm capable of tracing the sub-basal plexus nerves in human corneal confocal images. We resort to log-Gabor filters and support vector machines to trace the corneal nerves. The proposed algorithm traced most of the corneal nerves correctly (sensitivity of 0.88 ± 0.06 and false discovery rate of 0.08 ± 0.06). The displayed performance is comparable to that of a human grader. We believe that the achieved processing time (0.661 ± 0.07 s) and tracing quality are major advantages for daily clinical practice.

  14. A vibration-based health monitoring program for a large and seismically vulnerable masonry dome

    NASA Astrophysics Data System (ADS)

    Pecorelli, M. L.; Ceravolo, R.; De Lucia, G.; Epicoco, R.

    2017-05-01

    Vibration-based health monitoring of monumental structures must rely on efficient and, as far as possible, automatic modal analysis procedures. Relatively low excitation energy provided by traffic, wind and other sources is usually sufficient to detect structural changes, as those produced by earthquakes and extreme events. Above all, in-operation modal analysis is a non-invasive diagnostic technique that can support optimal strategies for the preservation of architectural heritage, especially if complemented by model-driven procedures. In this paper, the preliminary steps towards a fully automated vibration-based monitoring of the world’s largest masonry oval dome (internal axes of 37.23 by 24.89 m) are presented. More specifically, the paper reports on signal treatment operations conducted to set up the permanent dynamic monitoring system of the dome and to realise a robust automatic identification procedure. Preliminary considerations on the effects of temperature on dynamic parameters are finally reported.

  15. Automatic detection of spermatozoa for laser capture microdissection.

    PubMed

    Vandewoestyne, Mado; Van Hoofstat, David; Van Nieuwerburgh, Filip; Deforce, Dieter

    2009-03-01

    In sexual assault crimes, differential extraction of spermatozoa from vaginal swab smears is often ineffective, especially when only a few spermatozoa are present in an overwhelming amount of epithelial cells. Laser capture microdissection (LCM) enables the precise separation of spermatozoa and epithelial cells. However, standard sperm-staining techniques are non-specific and rely on sperm morphology for identification. Moreover, manual screening of the microscope slides is time-consuming and labor-intensive. Here, we describe an automated screening method to detect spermatozoa stained with Sperm HY-LITER. Different ratios of spermatozoa and epithelial cells were used to assess the automatic detection method. In addition, real postcoital samples were also screened. Detected spermatozoa were isolated using LCM and DNA analysis was performed. Robust DNA profiles without allelic dropout could be obtained from as little as 30 spermatozoa recovered from postcoital samples, showing that the staining had no significant influence on DNA recovery.

  16. A new blood vessel extraction technique using edge enhancement and object classification.

    PubMed

    Badsha, Shahriar; Reza, Ahmed Wasif; Tan, Kim Geok; Dimyati, Kaharudin

    2013-12-01

    Diabetic retinopathy (DR) is increasing progressively, pushing the demand for automatic extraction and classification of disease severity. Blood vessel extraction from the fundus image is a vital and challenging task. Therefore, this paper presents a new, computationally simple, and automatic method to extract the retinal blood vessels. The proposed method comprises several basic image processing techniques, namely edge enhancement by a standard template, noise removal, thresholding, morphological operations, and object classification. The proposed method has been tested on a set of retinal images. The retinal images were collected from the DRIVE database and we have employed a robust performance analysis to evaluate the accuracy. The results obtained from this study reveal that the proposed method offers an average accuracy of about 97 %, sensitivity of 99 %, specificity of 86 %, and predictive value of 98 %, which is superior to various well-known techniques.

  17. The NavTrax fleet management system

    NASA Astrophysics Data System (ADS)

    McLellan, James F.; Krakiwsky, Edward J.; Schleppe, John B.; Knapp, Paul L.

    The NavTrax System, a dispatch-type automatic vehicle location and navigation system, is discussed. Attention is given to its positioning, communication, digital mapping, and dispatch center components. The positioning module is a robust GPS (Global Positioning System)-based system integrated with dead reckoning devices by a decentralized-federated filter, making the module fault tolerant. The error behavior and characteristics of GPS, rate gyro, compass, and odometer sensors are discussed. The communications module, as presently configured, utilizes UHF radio technology, and plans are being made to employ a digital cellular telephone system. Polling and automatic smart vehicle reporting are also discussed. The digital mapping component is an intelligent digital single line road network database stored in vector form with full connectivity and address ranges. A limited form of map matching is performed for the purposes of positioning, but its main purpose is to define location once position is determined.

  18. Geometrical pose and structural estimation from a single image for automatic inspection of filter components

    NASA Astrophysics Data System (ADS)

    Liu, Yonghuai; Rodrigues, Marcos A.

    2000-03-01

    This paper describes research on the application of machine vision techniques to a real time automatic inspection task of air filter components in a manufacturing line. A novel calibration algorithm is proposed based on a special camera setup where defective items would show a large calibration error. The algorithm makes full use of rigid constraints derived from the analysis of geometrical properties of reflected correspondence vectors which have been synthesized into a single coordinate frame and provides a closed form solution to the estimation of all parameters. For a comparative study of performance, we also developed another algorithm based on this special camera setup using epipolar geometry. A number of experiments using synthetic data have shown that the proposed algorithm is generally more accurate and robust than the epipolar geometry based algorithm and that the geometric properties of reflected correspondence vectors provide effective constraints to the calibration of rigid body transformations.

  19. Truncated feature representation for automatic target detection using transformed data-based decomposition

    NASA Astrophysics Data System (ADS)

    Riasati, Vahid R.

    2016-05-01

    In this work, the data covariance matrix is diagonalized to provide an orthogonal basis set using the eigenvectors of the data. The eigenvector decomposition of the data is transformed and filtered in the transform domain to truncate the data to robust features related to a specified set of targets. These truncated eigenfeatures are then combined and reconstructed for use in a composite filter, and consequently utilized for the automatic target detection of the same class of targets. The results associated with the testing of the current technique are evaluated using the peak-correlation and peak-correlation energy metrics and are presented in this work. The inverse-transformed eigenbases of the current technique may be thought of as an injected sparsity to minimize the data representing the skeletal structural information associated with the set of targets under consideration.
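
    A minimal sketch of the covariance diagonalization and eigenvector truncation described above is given below; the choice of k and the use of flattened image chips as rows are illustrative assumptions.

    ```python
    import numpy as np

    def truncated_eigen_features(X, k):
        """Diagonalize the data covariance, keep the k leading eigenvectors, and
        project/reconstruct the data on that truncated basis.
        X: (n_samples, n_features) target image chips flattened into rows."""
        mean = X.mean(axis=0)
        Xc = X - mean
        cov = np.cov(Xc, rowvar=False)
        evals, evecs = np.linalg.eigh(cov)          # ascending eigenvalues
        order = np.argsort(evals)[::-1][:k]         # indices of the k largest
        basis = evecs[:, order]                     # truncated eigenbasis
        coeffs = Xc @ basis                         # features in the reduced space
        recon = coeffs @ basis.T + mean             # reconstruction for the composite filter
        return coeffs, recon, basis
    ```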

  20. The Researching on Evaluation of Automatic Voltage Control Based on Improved Zoning Methodology

    NASA Astrophysics Data System (ADS)

    Xiao-jun, ZHU; Ang, FU; Guang-de, DONG; Rui-miao, WANG; De-fen, ZHU

    2018-03-01

    In view of the increasing size and structural complexity of modern power systems, hierarchically structured automatic voltage control (AVC) has become a research focus. In this paper, the reduced control model is built and the adaptive reduced control model is studied to improve the voltage control effect. The theories of HCSD, HCVS, SKC and FCM are introduced, and the effect of different zoning methodologies on coordinated voltage regulation is also investigated. A generic framework for evaluating the performance of coordinated voltage regulation is built. Finally, the IEEE-96 system is used to divide the network. The 2383-bus Polish system is used to verify that the selection of a zoning methodology affects not only the coordinated voltage regulation operation but also its robustness to erroneous data, within the proposed comprehensive generic evaluation framework. The New England 39-bus network is used to verify the performance of the adaptive reduced control models.

  1. Comprehensive machine learning analysis of Hydra behavior reveals a stable basal behavioral repertoire

    PubMed Central

    Taralova, Ekaterina; Dupre, Christophe; Yuste, Rafael

    2018-01-01

    Animal behavior has been studied for centuries, but few efficient methods are available to automatically identify and classify it. Quantitative behavioral studies have been hindered by the subjective and imprecise nature of human observation, and the slow speed of annotating behavioral data. Here, we developed an automatic behavior analysis pipeline for the cnidarian Hydra vulgaris using machine learning. We imaged freely behaving Hydra, extracted motion and shape features from the videos, and constructed a dictionary of visual features to classify pre-defined behaviors. We also identified unannotated behaviors with unsupervised methods. Using this analysis pipeline, we quantified 6 basic behaviors and found surprisingly similar behavior statistics across animals within the same species, regardless of experimental conditions. Our analysis indicates that the fundamental behavioral repertoire of Hydra is stable. This robustness could reflect a homeostatic neural control of "housekeeping" behaviors which could have been already present in the earliest nervous systems. PMID:29589829

  2. Advances in Domain Connectivity for Overset Grids Using the X-Rays Approach

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Kim, Noah; Pandya, Shishir A.

    2012-01-01

    Advances in automation and robustness of the X-rays approach to domain connectivity for overset grids are presented. Given the surface definition for each component that makes up a complex configuration, the determination of hole points with appropriate hole boundaries is automatically and efficiently performed. Improvements made to the original X-rays approach for identifying the minimum hole include an automated closure scheme for hole-cutters with open boundaries, automatic determination of grid points to be considered for blanking by each hole-cutter, and an adaptive X-ray map to economically handle components in close proximity. Furthermore, an automated spatially varying offset of the hole boundary from the minimum hole is achieved using a dual wall-distance function and an orphan point removal iteration process. Results using the new scheme are presented for a number of static and relative motion test cases on a variety of aerospace applications.

  3. Automatic detection of slight parameter changes associated to complex biomedical signals using multiresolution q-entropy1.

    PubMed

    Torres, M E; Añino, M M; Schlotthauer, G

    2003-12-01

    It is well known that, from a dynamical point of view, sudden variations in the physiological parameters that govern certain diseases can cause qualitative changes in the dynamics of the corresponding physiological process. The purpose of this paper is to introduce a technique that allows the automated temporal localization of slight changes in a parameter of the law governing the nonlinear dynamics of a given signal. The tool inherits from the multiresolution entropies the ability to reveal such changes as statistical variations at each scale, and these variations are captured in the corresponding principal component. Appropriately combining these techniques with a statistical change detector yields a complexity change detection algorithm. The relevance of the approach, together with its robustness in the presence of moderate noise, is discussed in numerical simulations, and the automatic detector is applied to real and simulated biological signals.
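
    A hedged sketch of the multiresolution-entropy ingredient only (windowed Shannon entropy of wavelet detail coefficients per scale), assuming PyWavelets is available; it omits the paper's q-entropy, principal-component step and statistical change detector, and the signal is synthetic:

      # Sliding-window entropy of wavelet detail coefficients as a crude multiresolution proxy.
      import numpy as np
      import pywt

      def window_entropy(x, bins=16):
          p, _ = np.histogram(x, bins=bins)
          p = p[p > 0] / p.sum()
          return -(p * np.log(p)).sum()

      rng = np.random.default_rng(1)
      t = np.arange(4096)
      signal = np.sin(0.05 * t) + 0.2 * rng.standard_normal(t.size)
      signal[2048:] += 0.1 * rng.standard_normal(2048)          # slight parameter change

      coeffs = pywt.wavedec(signal, "db4", level=4)              # multiresolution analysis
      for scale, d in enumerate(coeffs[1:], start=1):
          win = max(len(d) // 16, 8)
          ent = [window_entropy(d[i:i + win]) for i in range(0, len(d) - win, win)]
          print(f"scale {scale}: entropy profile {np.round(ent, 2)}")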

  4. Multi-Sensor Registration of Earth Remotely Sensed Imagery

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Cole-Rhodes, Arlene; Eastman, Roger; Johnson, Kisha; Morisette, Jeffrey; Netanyahu, Nathan S.; Stone, Harold S.; Zavorin, Ilya; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    Assuming that approximate registration is given within a few pixels by a systematic correction system, we develop automatic image registration methods for multi-sensor data with the goal of achieving sub-pixel accuracy. Automatic image registration is usually defined by three steps: feature extraction, feature matching, and data resampling or fusion. Our previous work focused on image correlation methods based on the use of different features. In this paper, we study different feature matching techniques and present five algorithms where the features are either original gray levels or wavelet-like features, and the feature matching is based on gradient descent optimization, statistical robust matching, and mutual information. These algorithms are tested and compared on several multi-sensor datasets covering one of the EOS Core Sites, the Konza Prairie in Kansas, from four different sensors: IKONOS (4m), Landsat-7/ETM+ (30m), MODIS (500m), and SeaWIFS (1000m).
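
    A small illustration of one of the matching criteria listed above, mutual information computed from a joint histogram and maximized over small integer shifts; the images are synthetic and the exhaustive search is only for demonstration:

      # Mutual information between two bands, maximized over small integer translations.
      import numpy as np

      def mutual_info(a, b, bins=32):
          hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
          pxy = hist / hist.sum()
          px, py = pxy.sum(axis=1), pxy.sum(axis=0)
          nz = pxy > 0
          return (pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum()

      rng = np.random.default_rng(0)
      ref = rng.random((128, 128))
      mov = np.roll(ref, shift=(2, -3), axis=(0, 1)) + 0.05 * rng.standard_normal((128, 128))

      best = max(((mutual_info(ref[8:-8, 8:-8],
                               np.roll(mov, (dy, dx), axis=(0, 1))[8:-8, 8:-8]), dy, dx)
                  for dy in range(-5, 6) for dx in range(-5, 6)))
      print("estimated shift:", best[1:])   # should recover approximately (-2, 3)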

  5. Automatic segmentation of mandible in panoramic x-ray.

    PubMed

    Abdi, Amir Hossein; Kasaei, Shohreh; Mehdizadeh, Mojdeh

    2015-10-01

    As the panoramic x-ray is the most common extraoral radiography in dentistry, segmentation of its anatomical structures facilitates diagnosis and registration of dental records. This study presents a fast and accurate method for automatic segmentation of mandible in panoramic x-rays. In the proposed four-step algorithm, a superior border is extracted through horizontal integral projections. A modified Canny edge detector accompanied by morphological operators extracts the inferior border of the mandible body. The exterior borders of ramuses are extracted through a contour tracing method based on the average model of mandible. The best-matched template is fetched from the atlas of mandibles to complete the contour of left and right processes. The algorithm was tested on a set of 95 panoramic x-rays. Evaluating the results against manual segmentations of three expert dentists showed that the method is robust. It achieved an average performance of [Formula: see text] in Dice similarity, specificity, and sensitivity.
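
    A sketch of the first two steps only (horizontal integral projection and a Canny edge map), assuming scikit-image is available; the x-ray image below is a synthetic stand-in and the remaining contour-tracing and atlas steps are not shown:

      # Horizontal integral projection to locate a horizontal border, plus a Canny edge map.
      import numpy as np
      from skimage.feature import canny

      rng = np.random.default_rng(0)
      xray = rng.random((200, 400)) * 0.2
      xray[60:150, 50:350] += 0.6                      # bright band standing in for bone

      row_projection = xray.sum(axis=1)                # horizontal integral projection
      superior_row = int(np.argmax(np.gradient(row_projection)))   # strongest upward jump
      print("estimated superior border row:", superior_row)

      edges = canny(xray, sigma=2.0)                   # stand-in for the modified Canny step
      print("edge pixels:", int(edges.sum()))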

  6. Parametric Human Body Reconstruction Based on Sparse Key Points.

    PubMed

    Cheng, Ke-Li; Tong, Ruo-Feng; Tang, Min; Qian, Jing-Ye; Sarkis, Michel

    2016-11-01

    We propose an automatic parametric human body reconstruction algorithm which can efficiently construct a model using a single Kinect sensor. A user needs to stand still in front of the sensor for a couple of seconds to measure the range data. The user's body shape and pose are then automatically constructed within several seconds. Traditional methods optimize dense correspondences between range data and meshes. In contrast, our proposed scheme relies on sparse key points for the reconstruction. It employs regression to find the corresponding key points between the scanned range data and some annotated training data. We design two kinds of feature descriptors, together with corresponding regression stages, to make the regression robust and accurate. Our scheme concludes with a dense refinement stage in which a pre-factorization method is applied to improve computational efficiency. Compared with other methods, our scheme achieves similar reconstruction accuracy but significantly reduces runtime.

  7. Global quasi-linearization (GQL) versus QSSA for a hydrogen-air auto-ignition problem.

    PubMed

    Yu, Chunkan; Bykov, Viatcheslav; Maas, Ulrich

    2018-04-25

    A recently developed automatic reduction method for chemical kinetic systems, the so-called Global Quasi-Linearization (GQL) method, has been implemented to study and reduce the dimension of a homogeneous combustion system. The results of applying the GQL and the Quasi-Steady State Assumption (QSSA) are compared. A number of drawbacks of the QSSA are discussed, e.g. the selection criteria for QSS species and their sensitivity to system parameters, initial conditions, etc. To overcome these drawbacks, the GQL approach has been developed as a robust, automatic and scaling-invariant method for a global analysis of the system timescale hierarchy and subsequent model reduction. In this work the auto-ignition problem of the hydrogen-air system is considered over a wide range of system parameters and initial conditions. The potential of the suggested approach to overcome most of the drawbacks of the standard approaches is illustrated.
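
    A small numerical illustration of the QSSA idea discussed above (not of the GQL method): for the classic mechanism A -> I -> P with k2 >> k1, setting d[I]/dt = 0 gives [I] approximately k1[A]/k2, which can be checked against the full ODE solution:

      # Compare the full ODE intermediate concentration with its QSSA estimate.
      import numpy as np
      from scipy.integrate import solve_ivp

      k1, k2 = 1.0, 100.0

      def full(t, y):
          a, i, p = y
          return [-k1 * a, k1 * a - k2 * i, k2 * i]

      sol = solve_ivp(full, (0, 5), [1.0, 0.0, 0.0], dense_output=True, rtol=1e-8)
      t = np.linspace(0.5, 5, 5)          # after the short induction period
      a, i, _ = sol.sol(t)
      print("intermediate from full model :", np.round(i, 6))
      print("QSSA estimate k1*A/k2        :", np.round(k1 * a / k2, 6))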

  8. Sequential visibility-graph motifs

    NASA Astrophysics Data System (ADS)

    Iacovacci, Jacopo; Lacasa, Lucas

    2016-04-01

    Visibility algorithms transform time series into graphs and encode dynamical information in their topology, paving the way for graph-theoretical time series analysis as well as building a bridge between nonlinear dynamics and network science. In this work we introduce and study the concept of sequential visibility-graph motifs, smaller substructures of n consecutive nodes that appear with characteristic frequencies. We develop a theory to compute in an exact way the motif profiles associated with general classes of deterministic and stochastic dynamics. We find that this simple property is indeed a highly informative and computationally efficient feature capable of distinguishing among different dynamics and robust against noise contamination. We finally confirm that it can be used in practice to perform unsupervised learning, by extracting motif profiles from experimental heart-rate series and being able, accordingly, to disentangle meditative from other relaxation states. Applications of this general theory include the automatic classification and description of physical, biological, and financial time series.
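
    A hedged sketch of an empirical sequential visibility-graph motif profile for windows of four consecutive samples, using the natural visibility criterion; the window size, test series and normalization are illustrative choices:

      # Empirical profile of sequential natural-visibility motifs of size 4.
      import numpy as np
      from collections import Counter

      def visible(y, i, j):
          """Natural visibility between samples i < j of series y."""
          return all(y[k] < y[j] + (y[i] - y[j]) * (j - k) / (j - i)
                     for k in range(i + 1, j))

      def motif_profile(y, size=4):
          counts = Counter()
          for s in range(len(y) - size + 1):
              w = y[s:s + size]
              # consecutive nodes always see each other; the motif class is set by
              # which non-consecutive pairs are mutually visible
              key = tuple(visible(w, i, j)
                          for i in range(size) for j in range(i + 2, size))
              counts[key] += 1
          total = sum(counts.values())
          return {k: round(v / total, 3) for k, v in counts.items()}

      rng = np.random.default_rng(0)
      print("white noise :", motif_profile(rng.standard_normal(5000)))
      print("random walk :", motif_profile(np.cumsum(rng.standard_normal(5000))))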

  9. An integrative approach for measuring semantic similarities using gene ontology.

    PubMed

    Peng, Jiajie; Li, Hongxiang; Jiang, Qinghua; Wang, Yadong; Chen, Jin

    2014-01-01

    Gene Ontology (GO) provides rich information and a convenient way to study gene functional similarity, which has been successfully used in various applications. However, the existing GO-based similarity measurements have limited power because each measure considers only a subset of GO information. An appropriate integration of the existing measures that takes more of the information in GO into account is needed. We propose a novel integrative measure called InteGO2 to automatically select appropriate seed measures and then integrate them using a metaheuristic search method. The experimental results show that InteGO2 significantly improves the performance of gene similarity in human, Arabidopsis and yeast on both the molecular function and biological process GO categories. InteGO2 computes gene-to-gene similarities more accurately than the tested existing measures and has high robustness. The supplementary document and software are available at http://mlg.hit.edu.cn:8082/.

  10. Integrated tools for control-system analysis

    NASA Technical Reports Server (NTRS)

    Ostroff, Aaron J.; Proffitt, Melissa S.; Clark, David R.

    1989-01-01

    The basic functions embedded within a user-friendly software package (MATRIXx) are used to provide a high-level systems approach to the analysis of linear control systems. Various control system analysis configurations are assembled automatically to minimize the amount of work by the user. Interactive decision making is incorporated via menu options and, at selected points such as the plotting section, by inputting data. Five evaluations are provided: the singular value robustness test, the singular value loop transfer frequency response, the Bode frequency response, steady-state covariance analysis, and closed-loop eigenvalues. Another section describes time response simulations; a time response for a random white noise disturbance is available. The configurations and key equations used for each type of analysis, the restrictions that apply, the type of data required, and an example problem are described. One approach for integrating the design and analysis tools is also presented.
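
    As a plain-numpy illustration of one of the listed evaluations (singular values of the loop transfer function over frequency), not of MATRIXx itself, with an arbitrary two-input, two-output state-space example:

      # Singular values of L(jw) = C (jwI - A)^{-1} B + D over a frequency grid.
      import numpy as np

      A = np.array([[0.0, 1.0], [-2.0, -3.0]])
      B = np.eye(2)
      C = np.eye(2)
      D = np.zeros((2, 2))

      freqs = np.logspace(-2, 2, 5)                       # rad/s
      for w in freqs:
          L = C @ np.linalg.solve(1j * w * np.eye(2) - A, B) + D
          sv = np.linalg.svd(L, compute_uv=False)
          print(f"w = {w:8.3f}  sigma_max = {sv[0]:.4f}  sigma_min = {sv[-1]:.4f}")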

  11. Illumina GA IIx & HiSeq 2000 Production Sequencing and QC Analysis Pipelines at the DOE Joint Genome Institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daum, Christopher; Zane, Matthew; Han, James

    2011-01-31

    The U.S. Department of Energy (DOE) Joint Genome Institute's (JGI) Production Sequencing group is committed to the generation of high-quality genomic DNA sequence to support the mission areas of renewable energy generation, global carbon management, and environmental characterization and clean-up. Within the JGI's Production Sequencing group, a robust Illumina Genome Analyzer and HiSeq pipeline has been established. Optimization of the sequencer pipelines has been ongoing with the aim of continual process improvement of the laboratory workflow, reducing operational costs and project cycle times, increasing sample throughput, and improving the overall quality of the sequence generated. A sequence QC analysis pipeline has been implemented to automatically generate read- and assembly-level quality metrics. The foremost of these optimization projects, along with sequencing and operational strategies, throughput numbers, and sequencing quality results, will be presented.

  12. Longitudinal Analysis of New Information Types in Clinical Notes

    PubMed Central

    Zhang, Rui; Pakhomov, Serguei; Melton, Genevieve B.

    2014-01-01

    It is increasingly recognized that redundant information in clinical notes within electronic health record (EHR) systems is ubiquitous, significant, and may negatively impact the secondary use of these notes for research and patient care. We investigated several automated methods to identify redundant versus relevant new information in clinical reports. These methods may provide a valuable approach to extract clinically pertinent information and further improve the accuracy of clinical information extraction systems. In this study, we used UMLS semantic types to extract several types of new information, including problems, medications, and laboratory information. Automatically identified new information highly correlated with manual reference standard annotations. Methods to identify different types of new information can potentially help to build up more robust information extraction systems for clinical researchers as well as aid clinicians and researchers in navigating clinical notes more effectively and quickly identify information pertaining to changes in health states. PMID:25717418
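
    A hedged sketch of the redundancy idea: concepts (grouped by semantic type) already seen in earlier notes are treated as redundant and the remainder as new information; the extract() stub below stands in for a real UMLS-based concept extractor:

      # New-information detection over a sequence of notes, using a toy concept extractor.
      from collections import defaultdict

      def extract(note_text):
          """Toy stand-in for a UMLS-based extractor: returns (semantic_type, concept) pairs."""
          vocab = {"metformin": "Medication", "hba1c": "Lab", "neuropathy": "Problem"}
          return {(vocab[w], w) for w in note_text.lower().split() if w in vocab}

      def new_information(notes):
          seen, new_by_note = set(), []
          for note in notes:
              concepts = extract(note)
              fresh = concepts - seen                  # not mentioned in any earlier note
              grouped = defaultdict(list)
              for sem_type, concept in fresh:
                  grouped[sem_type].append(concept)
              new_by_note.append(dict(grouped))
              seen |= concepts
          return new_by_note

      notes = ["started metformin", "metformin continued hba1c checked",
               "hba1c stable neuropathy noted"]
      print(new_information(notes))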

  13. Numerical optimization in Hilbert space using inexact function and gradient evaluations

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    Trust region algorithms provide a robust iterative technique for solving nonconvex unconstrained optimization problems, but in many instances it is prohibitively expensive to compute high-accuracy function and gradient values for the method. Of particular interest are inverse and parameter estimation problems, since function and gradient evaluations involve numerically solving large systems of differential equations. A global convergence theory is presented for trust region algorithms in which neither function nor gradient values are known exactly. The theory is formulated in a Hilbert space setting so that it can be applied to variational problems as well as the finite-dimensional problems normally seen in the trust region literature. The conditions concerning allowable error are remarkably relaxed: relative errors in the gradient are permitted, and the gradient error condition is automatically satisfied if the error is orthogonal to the gradient approximation. A technique for estimating gradient error and improving the approximation is also presented.
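
    A finite-dimensional toy version of the setting described above: a basic trust-region iteration driven by a deliberately perturbed ("inexact") gradient and the usual ratio test, not the paper's Hilbert-space theory or error conditions:

      # Trust-region loop with a noisy gradient; steps are accepted via the ratio test.
      import numpy as np

      def f(x):    return 0.5 * x @ x + np.sin(x).sum()
      def grad(x): return x + np.cos(x)

      rng = np.random.default_rng(0)
      x, radius = np.full(4, 2.0), 1.0
      for it in range(50):
          g_exact = grad(x)
          g = g_exact + 0.2 * np.linalg.norm(g_exact) * rng.standard_normal(4)  # relative error
          step = -radius * g / np.linalg.norm(g)               # Cauchy-like step on the ball
          predicted = -(g @ step)                               # linear model decrease
          actual = f(x) - f(x + step)
          rho = actual / predicted if predicted > 0 else -1.0
          if rho > 0.1:
              x = x + step                                      # accept the step
          radius = radius * 2.0 if rho > 0.75 else radius * 0.5 if rho < 0.25 else radius
      print("final |grad|:", round(float(np.linalg.norm(grad(x))), 4))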

  14. Responsible healthcare innovation: anticipatory governance of nanodiagnostics for theranostics medicine.

    PubMed

    Fisher, Erik; Boenink, Marianne; van der Burg, Simone; Woodbury, Neal

    2012-11-01

    Theranostics signals the integrated application of molecular diagnostics, therapeutic treatment and patient response monitoring. Such integration has hitherto neglected another crucial dimension: coproduction of theranostic scientific knowledge, novel technological development and broader sociopolitical systems whose boundaries are highly porous. Nanodiagnostics applications to theranostics are one of the most contested and potentially volatile postgenomics innovation trajectories as they build on past and current tensions and promises surrounding both nanotechnology and personalized medicine. Recent science policy research suggests that beneficial outcomes of innovations do not simply flow from the generation of scientific knowledge and technological capability in a linear or automatic fashion. Thus, attempts to offset public concerns about controversial emerging technologies by expert risk assurances can be unproductive. Anticipation provides a more robust basis for governance that supports genuine healthcare progress. This article presents a synthesis of novel policy approaches that directly inform theranostics medicine and the future(s) of postgenomics healthcare.

  15. Ship detection in optical remote sensing images based on deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Yao, Yuan; Jiang, Zhiguo; Zhang, Haopeng; Zhao, Danpei; Cai, Bowen

    2017-10-01

    Automatic ship detection in optical remote sensing images has attracted wide attention for its broad applications. Major challenges for this task include interference from cloud, waves, and wakes, as well as high computational expense. We propose a fast and robust ship detection algorithm to address these issues. The framework for ship detection is designed based on deep convolutional neural networks (CNNs), which provide the accurate locations of ship targets in an efficient way. First, a deep CNN is designed to extract features. Then, a region proposal network (RPN) is applied to discriminate ship targets and regress the detection bounding boxes, with anchors designed according to the intrinsic shape of ship targets. Experimental results on numerous panchromatic images demonstrate that, in comparison with other state-of-the-art ship detection methods, our method is more efficient and achieves higher detection accuracy and more precise bounding boxes against different complex backgrounds.
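
    A hedged sketch of the anchor-design idea: anchors whose aspect ratios reflect the elongated shape of ships, generated per feature-map cell; the strides, scales and ratios below are placeholders rather than the authors' settings:

      # Elongated "ship-shaped" anchor generation over a feature map.
      import numpy as np

      def ship_anchors(feat_h, feat_w, stride=16, scales=(64, 128), ratios=(3.0, 5.0, 8.0)):
          """Returns (N, 4) anchors as (x1, y1, x2, y2) in image coordinates."""
          anchors = []
          for fy in range(feat_h):
              for fx in range(feat_w):
                  cx, cy = (fx + 0.5) * stride, (fy + 0.5) * stride
                  for s in scales:
                      for r in ratios:                      # r = length / width
                          w, h = s * np.sqrt(r), s / np.sqrt(r)
                          for bw, bh in ((w, h), (h, w)):   # horizontal and vertical ships
                              anchors.append([cx - bw / 2, cy - bh / 2,
                                              cx + bw / 2, cy + bh / 2])
          return np.array(anchors)

      a = ship_anchors(4, 4)
      print(a.shape, "anchors; example:", np.round(a[0], 1))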

  16. Offline Signature Verification Using the Discrete Radon Transform and a Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Coetzer, J.; Herbst, B. M.; du Preez, J. A.

    2004-12-01

    We developed a system that automatically authenticates offline handwritten signatures using the discrete Radon transform (DRT) and a hidden Markov model (HMM). Given the robustness of our algorithm and the fact that only global features are considered, satisfactory results are obtained. Using a database of 924 signatures from 22 writers, our system achieves an equal error rate (EER) of 18% when only high-quality forgeries (skilled forgeries) are considered and an EER of 4.5% in the case of only casual forgeries. These signatures were originally captured offline. Using another database of 4800 signatures from 51 writers, our system achieves an EER of 12.2% when only skilled forgeries are considered. These signatures were originally captured online and then digitally converted into static signature images. These results compare well with the results of other algorithms that consider only global features.
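
    A hedged sketch of the DRT-plus-HMM pairing: sinogram columns of a signature image form the observation sequence, and a writer-specific Gaussian HMM trained on genuine samples scores a questioned one; scikit-image and hmmlearn are assumed available, and the "signatures" below are synthetic images:

      # Radon-transform features per projection angle, modeled with a Gaussian HMM.
      import numpy as np
      from skimage.transform import radon
      from hmmlearn.hmm import GaussianHMM

      def drt_features(img, n_angles=30):
          theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
          sino = radon(img, theta=theta, circle=False)   # discrete Radon transform
          sino = sino / (np.abs(sino).max() + 1e-12)     # crude intensity normalization
          return sino.T                                   # one observation per angle

      rng = np.random.default_rng(0)
      genuine = [rng.random((64, 64)) for _ in range(5)]  # synthetic "genuine" signatures
      obs = [drt_features(g) for g in genuine]
      X = np.vstack(obs)
      lengths = [len(o) for o in obs]

      model = GaussianHMM(n_components=4, covariance_type="diag", n_iter=20)
      model.fit(X, lengths)
      questioned = drt_features(rng.random((64, 64)))
      print("log-likelihood of questioned signature:", round(model.score(questioned), 1))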

  17. A neural-based remote eye gaze tracker under natural head motion.

    PubMed

    Torricelli, Diego; Conforto, Silvia; Schmid, Maurizio; D'Alessio, Tommaso

    2008-10-01

    A novel approach to view-based eye gaze tracking for human computer interface (HCI) is presented. The proposed method combines different techniques to address the problems of head motion, illumination and usability in the framework of low-cost applications. Feature detection and tracking algorithms have been designed to obtain an automatic setup and strengthen robustness to lighting conditions. An extensive analysis of neural solutions has been performed to deal with the non-linearity associated with gaze mapping under free-head conditions. No specific hardware, such as infrared illumination or high-resolution cameras, is needed; rather, a simple commercial webcam working in the visible light spectrum suffices. The system is able to classify the gaze direction of the user over a 15-zone graphical interface, with a success rate of 95% and a global accuracy of around 2 degrees, comparable with the vast majority of existing remote gaze trackers.
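
    A hedged sketch of the non-linear gaze mapping: a small MLP maps eye-feature vectors (e.g., pupil-to-corner offsets) to one of 15 screen zones, assuming scikit-learn; the features and labels below are synthetic:

      # MLP-based mapping from eye features to one of 15 gaze zones.
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      zones = rng.integers(0, 15, size=600)                       # 15-zone interface
      centers = rng.uniform(-1, 1, size=(15, 4))                   # nominal feature per zone
      X = centers[zones] + 0.05 * rng.standard_normal((600, 4))    # noisy eye features

      clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
      clf.fit(X[:500], zones[:500])
      print("held-out zone accuracy:", round(clf.score(X[500:], zones[500:]), 2))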

  18. Simulation of axisymmetric jets with a finite element Navier-Stokes solver and a multilevel VOF approach

    NASA Astrophysics Data System (ADS)

    Cervone, A.; Manservisi, S.; Scardovelli, R.

    2010-09-01

    A multilevel VOF approach has been coupled to an accurate finite element Navier-Stokes solver in axisymmetric geometry for the simulation of incompressible liquid jets with high density ratios. The representation of the color function over a fine grid has been introduced to reduce the discontinuity of the interface at the cell boundary. In the refined grid the automatic breakup and coalescence occur at a spatial scale much smaller than the coarse grid spacing. To reduce memory requirements, we have implemented on the fine grid a compact storage scheme which memorizes the color function data only in the mixed cells. The capillary force is computed by using the Laplace-Beltrami operator and a volumetric approach for the two principal curvatures. Several simulations of axisymmetric jets have been performed to show the accuracy and robustness of the proposed scheme.

  19. Fault-tolerant Control of a Cyber-physical System

    NASA Astrophysics Data System (ADS)

    Roxana, Rusu-Both; Eva-Henrietta, Dulf

    2017-10-01

    Cyber-physical systems represent a new emerging field in automatic control. Fault handling is a key component, because modern, large-scale processes must meet high standards of performance, reliability and safety. Fault propagation in large-scale chemical processes can lead to loss of production, energy and raw materials, and even to environmental hazard. The present paper develops a multi-agent fault-tolerant control architecture using robust fractional-order controllers for a (13C) cryogenic separation column cascade. The JADE (Java Agent DEvelopment Framework) platform was used to implement the multi-agent fault-tolerant control system, while the operational model of the process was implemented in the Matlab/SIMULINK environment. The MACSimJX (Multiagent Control Using Simulink with Jade Extension) toolbox was used to link the control system and the process model. In order to verify the performance and to prove the feasibility of the proposed control architecture, several fault simulation scenarios were performed.

  20. An extended Lagrangian method

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing

    1992-01-01

    A unique formulation for describing fluid motion is presented. The method, referred to as the 'extended Lagrangian method', is interesting from both theoretical and numerical points of view. The formulation offers accuracy in the numerical solution by avoiding the numerical diffusion that results from the mixing of fluxes in the Eulerian description. At the same time, it avoids the inaccuracy incurred by the geometry and variable interpolations used in previous Lagrangian methods. Unlike previously proposed Lagrangian methods, which are valid only for supersonic flows, the present method is general and capable of treating subsonic flows as well as supersonic flows. The method proposed in this paper is robust and stable. It automatically adapts to flow features without resorting to clustering, thereby maintaining fairly uniform grid spacing and a large time step throughout. Moreover, the method is shown to resolve multi-dimensional discontinuities with a high level of accuracy, similar to that found in one-dimensional problems.
