Sample records for fully automatic approach

  1. A fully automatic three-step liver segmentation method on LDA-based probability maps for multiple contrast MR images.

    PubMed

    Gloger, Oliver; Kühn, Jens; Stanski, Adam; Völzke, Henry; Puls, Ralf

    2010-07-01

    Automatic 3D liver segmentation in magnetic resonance (MR) data sets has proven to be a very challenging task in the domain of medical image analysis. Numerous approaches for automatic 3D liver segmentation on computed tomography data sets exist and have influenced the segmentation of MR images. In contrast to previous approaches to liver segmentation in MR data sets, we use all available MR channel information of different weightings and formulate liver tissue and position probabilities in a probabilistic framework. We apply multiclass linear discriminant analysis as a fast and efficient dimensionality reduction technique and generate probability maps that are then used for segmentation. We develop a fully automatic three-step 3D segmentation approach based on a modified region-growing approach and an additional thresholding technique. Finally, we incorporate characteristic prior knowledge to improve the segmentation results. This novel 3D segmentation approach is modularized and can be applied to normal and fat-accumulated liver tissue properties. Copyright 2010 Elsevier Inc. All rights reserved.
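
    A minimal sketch of the probability-map idea: multiclass LDA is fit on labeled per-voxel feature vectors assembled from the different MR weightings, and its posteriors are reshaped into a liver-probability volume. The feature construction and labels below are toy assumptions, not the authors' exact setup.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Illustrative: three co-registered MR weightings, each a (Z, Y, X) volume.
    rng = np.random.default_rng(0)
    t1, t2, pd_w = (rng.normal(size=(8, 32, 32)) for _ in range(3))

    # One feature vector per voxel: its intensity in every channel.
    X = np.stack([t1, t2, pd_w], axis=-1).reshape(-1, 3)

    # Hypothetical per-voxel training labels (0 = background, 1 = liver, 2 = other).
    y = rng.integers(0, 3, size=X.shape[0])

    lda = LinearDiscriminantAnalysis()   # fast linear dimensionality reduction + classifier
    lda.fit(X, y)

    # Posterior P(liver | voxel features), reshaped back into a probability map.
    prob_map = lda.predict_proba(X)[:, 1].reshape(t1.shape)
    ```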

  2. Formal Specification and Automatic Analysis of Business Processes under Authorization Constraints: An Action-Based Approach

    NASA Astrophysics Data System (ADS)

    Armando, Alessandro; Giunchiglia, Enrico; Ponta, Serena Elisa

    We present an approach to the formal specification and automatic analysis of business processes under authorization constraints based on the action language C. The use of C allows for a natural and concise modeling of the business process and the associated security policy and for the automatic analysis of the resulting specification by using the Causal Calculator (CCALC). Our approach improves upon previous work by greatly simplifying the specification step while retaining the ability to perform a fully automatic analysis. To illustrate the effectiveness of the approach we describe its application to a version of a business process taken from the banking domain and use CCALC to determine resource allocation plans complying with the security policy.

  3. Fully automatic detection and visualization of patient specific coronary supply regions

    NASA Astrophysics Data System (ADS)

    Fritz, Dominik; Wiedemann, Alexander; Dillmann, Ruediger; Scheuering, Michael

    2008-03-01

    Coronary territory maps, which associate myocardial regions with the coronary arteries that supply them, are a common visualization technique to assist the physician in the diagnosis of coronary artery disease. However, the commonly used visualization is based on the AHA 17-segment model, which is an empirical, population-based model. Therefore, it does not necessarily reflect the often highly individual coronary anatomy of a specific patient. In this paper we introduce a novel fully automatic approach to compute patient-individual coronary supply regions in CTA datasets. This approach is divided into three consecutive steps. First, the aorta is fully automatically located in the dataset with a combination of a Hough transform and a cylindrical model matching approach. Once the aorta is located, segmentation and skeletonization of the coronary tree are triggered. In the next step, the three main branches (LAD, LCX and RCX) are automatically labeled, based on knowledge of the pose of the aorta and the left ventricle. In the last step the labeled coronary tree is projected onto the left ventricular surface, which can afterward be subdivided into the coronary supply regions based on a Voronoi transform. The resulting supply regions can be shown either in 3D on the epicardial surface of the left ventricle or as a subdivision of a polar map.
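
    The Voronoi step can be pictured as a nearest-neighbor assignment: every surface vertex is attributed to the closest labeled point of the projected coronary tree. A minimal sketch under that reading, with toy stand-in data (names and sizes are assumptions, not the paper's implementation):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    # Illustrative stand-ins: labeled coronary centerline points projected onto
    # the LV surface, and the surface vertices to be partitioned.
    rng = np.random.default_rng(1)
    tree_points = rng.normal(size=(300, 3))        # skeleton points of the coronary tree
    tree_labels = rng.integers(0, 3, size=300)     # 0 = LAD, 1 = LCX, 2 = RCX
    surface_vertices = rng.normal(size=(5000, 3))  # left-ventricular surface mesh vertices

    # Discrete Voronoi transform: each vertex inherits the label of its nearest
    # coronary point, yielding the per-branch supply regions.
    _, nearest = cKDTree(tree_points).query(surface_vertices)
    supply_region = tree_labels[nearest]
    ```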

  4. Quantification of regional fat volume in rat MRI

    NASA Astrophysics Data System (ADS)

    Sacha, Jaroslaw P.; Cockman, Michael D.; Dufresne, Thomas E.; Trokhan, Darren

    2003-05-01

    Multiple initiatives in the pharmaceutical and beauty care industries are directed at identifying therapies for weight management. Body composition measurements are critical for such initiatives. Imaging technologies that can be used to measure body composition noninvasively include DXA (dual energy x-ray absorptiometry) and MRI (magnetic resonance imaging). Unlike other approaches, MRI provides the ability to perform localized measurements of fat distribution. Several factors complicate the automatic delineation of fat regions and quantification of fat volumes. These include motion artifacts, field non-uniformity, brightness and contrast variations, chemical shift misregistration, and ambiguity in delineating anatomical structures. We have developed an approach to deal practically with those challenges. The approach is implemented in a package, the Fat Volume Tool, for automatic detection of fat tissue in MR images of the rat abdomen, including automatic discrimination between abdominal and subcutaneous regions. We suppress motion artifacts using masking based on detection of implicit landmarks in the images. Adaptive object extraction is used to compensate for intensity variations. This approach enables us to perform fat tissue detection and quantification in a fully automated manner. The package can also operate in manual mode, which can be used for verification of the automatic analysis or for performing supervised segmentation. In supervised segmentation, the operator has the ability to interact with the automatic segmentation procedures to touch-up or completely overwrite intermediate segmentation steps. The operator's interventions steer the automatic segmentation steps that follow. This improves the efficiency and quality of the final segmentation. Semi-automatic segmentation tools (interactive region growing, live-wire, etc.) improve both the accuracy and throughput of the operator when working in manual mode. The quality of automatic segmentation has been evaluated by comparing the results of fully automated analysis to manual analysis of the same images. The comparison shows a high degree of correlation that validates the quality of the automatic segmentation approach.

  5. Fully automatic region of interest selection in glomerular filtration rate estimation from 99mTc-DTPA renogram.

    PubMed

    Lin, Kun-Ju; Huang, Jia-Yann; Chen, Yung-Sheng

    2011-12-01

    Glomerular filtration rate (GFR) is a commonly accepted standard estimate of renal function. Gamma camera-based methods for estimating renal uptake of (99m)Tc-diethylenetriaminepentaacetic acid (DTPA) without blood or urine sampling have been widely used; of these, the method introduced by Gates has been the most common. Currently, most gamma cameras are equipped with a commercial program for GFR determination, a semi-quantitative analysis in which a region of interest (ROI) is drawn manually over each kidney. The GFR value is then computed automatically from the scintigraphic determination of (99m)Tc-DTPA uptake within the kidney. Delineating the kidney area is difficult when applying a fixed threshold value. Moreover, hand-drawn ROIs are tedious, time consuming, and highly dependent on operator skill. Thus, we developed a fully automatic renal ROI estimation system based on the temporal changes in intensity counts, the intensity-pair distribution image contrast enhancement method, adaptive thresholding, and morphological operations that can locate the kidney area and obtain the GFR value from a (99m)Tc-DTPA renogram. To evaluate the performance of the proposed approach, 30 clinical dynamic renograms were analyzed. The fully automatic approach failed in one patient with very poor renal function. Four patients had a unilateral kidney, and the others had bilateral kidneys. The automatic contours of the remaining 54 kidneys were compared with manually drawn contours and included in the area-error and boundary-error analyses. There was high correlation between the two physicians' manual contours and the contours obtained by our approach. For the area-error analysis, the mean true-positive area overlap is 91%, the mean false negative is 13.4%, and the mean false positive is 9.3%. The boundary error is 1.6 pixels. The GFR calculated using this automatic computer-aided approach is reproducible and may help nuclear medicine physicians in clinical practice.
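
    A minimal sketch of the adaptive-thresholding and morphology stages on a toy image; skimage is used here as an illustrative toolkit, and the window and object sizes are assumptions rather than the paper's parameters.

    ```python
    import numpy as np
    from skimage.filters import threshold_local
    from skimage.morphology import binary_opening, disk, remove_small_objects

    # Illustrative stand-in for a summed early-phase renogram frame.
    rng = np.random.default_rng(2)
    frame = rng.random((128, 128))

    # Adaptive (local) thresholding copes with the non-uniform background
    # that defeats a single fixed threshold.
    mask = frame > threshold_local(frame, block_size=31)

    # Morphological clean-up: remove speckle and small non-renal blobs.
    mask = binary_opening(mask, footprint=disk(2))
    mask = remove_small_objects(mask, min_size=64)
    ```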

  6. Automatic testing and assessment of neuroanatomy using a digital brain atlas: method and development of computer- and mobile-based applications.

    PubMed

    Nowinski, Wieslaw L; Thirunavuukarasuu, Arumugam; Ananthasubramaniam, Anand; Chua, Beng Choon; Qian, Guoyu; Nowinska, Natalia G; Marchenko, Yevgen; Volkau, Ihar

    2009-10-01

    Preparation of tests and assessment of students by the instructor are time consuming. We address these two tasks in neuroanatomy education by employing a digital media application with a three-dimensional (3D), interactive, fully segmented, and labeled brain atlas. The anatomical and vascular models in the atlas are linked to Terminologia Anatomica. Because the cerebral models are fully segmented and labeled, our approach enables automatic and random atlas-derived generation of questions to test location and naming of cerebral structures. This is done in four steps: test individualization by the instructor, test taking by the students at their convenience, automatic student assessment by the application, and communication of the individual assessment to the instructor. A computer-based application with an interactive 3D atlas and a preliminary mobile-based application were developed to realize this approach. The application works in two test modes: instructor and student. In the instructor mode, the instructor customizes the test by setting the scope of testing and student performance criteria, which takes a few seconds. In the student mode, the student is tested and automatically assessed. Self-testing is also feasible at any time and pace. Our approach is automatic both with respect to test generation and student assessment. It is also objective, rapid, and customizable. We believe that this approach is novel from computer-based, mobile-based, and atlas-assisted standpoints.

  7. Fully automatic detection and segmentation of abdominal aortic thrombus in post-operative CTA images using Deep Convolutional Neural Networks.

    PubMed

    López-Linares, Karen; Aranjuelo, Nerea; Kabongo, Luis; Maclair, Gregory; Lete, Nerea; Ceresa, Mario; García-Familiar, Ainhoa; Macía, Iván; González Ballester, Miguel A

    2018-05-01

    Computerized Tomography Angiography (CTA) based follow-up of Abdominal Aortic Aneurysms (AAA) treated with Endovascular Aneurysm Repair (EVAR) is essential to evaluate the progress of the patient and detect complications. In this context, accurate quantification of post-operative thrombus volume is required. However, a proper evaluation is hindered by the lack of automatic, robust and reproducible thrombus segmentation algorithms. We propose a new fully automatic approach based on Deep Convolutional Neural Networks (DCNN) for robust and reproducible thrombus region of interest detection and subsequent fine thrombus segmentation. The DetectNet detection network is adapted to perform region of interest extraction from a complete CTA, and a new segmentation network architecture, based on Fully Convolutional Networks and a Holistically-Nested Edge Detection Network, is presented. These networks are trained, validated and tested on 13 post-operative CTA volumes of different patients using a 4-fold cross-validation approach to make the results more robust. Our pipeline achieves a Dice score of more than 82% for post-operative thrombus segmentation and provides a mean relative volume difference between ground truth and automatic segmentation that lies within the experienced human observer variance, without the need for human intervention in most common cases. Copyright © 2018 Elsevier B.V. All rights reserved.
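
    A minimal sketch of the 4-fold cross-validation protocol over 13 volumes; the training and scoring functions below are placeholder stubs (assumptions), not the paper's networks.

    ```python
    import numpy as np
    from sklearn.model_selection import KFold

    # Stub stand-ins for the detection + segmentation pipeline (assumptions).
    def train_networks(train_ids):
        return {"trained_on": tuple(train_ids)}     # placeholder "model"

    def evaluate_dice(model, test_ids):
        return 0.82 + 0.01 * len(test_ids)          # placeholder score

    volume_ids = np.arange(13)                      # 13 post-operative CTA volumes
    kf = KFold(n_splits=4, shuffle=True, random_state=0)

    # Each volume appears in exactly one held-out test fold.
    dice_scores = [
        evaluate_dice(train_networks(volume_ids[tr]), volume_ids[te])
        for tr, te in kf.split(volume_ids)
    ]
    print(f"mean Dice over folds: {np.mean(dice_scores):.3f}")
    ```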

  8. Machine-Aided Translation: From Terminology Banks to Interactive Translation Systems.

    ERIC Educational Resources Information Center

    Greenfield, Concetta C.; Serain, Daniel

    The rapid growth of the need for technical translations in recent years has led specialists to utilize computer technology to improve the efficiency and quality of translation. The two approaches considered were automatic translation and terminology banks. Since the results of fully automatic translation were considered unsatisfactory by various…

  9. A Flexible and Configurable Architecture for Automatic Control Remote Laboratories

    ERIC Educational Resources Information Center

    Kalúz, Martin; García-Zubía, Javier; Fikar, Miroslav; Cirka, Luboš

    2015-01-01

    In this paper, we propose a novel approach to hardware and software architecture design for the implementation of remote laboratories for automatic control. In our contribution, we show a solution with flexible connectivity at the back-end, providing features of multipurpose usage with different types of experimental devices, and fully configurable…

  10. ARES v2: new features and improved performance

    NASA Astrophysics Data System (ADS)

    Sousa, S. G.; Santos, N. C.; Adibekyan, V.; Delgado-Mena, E.; Israelian, G.

    2015-05-01

    Aims: We present a new upgraded version of ARES. The new version includes a series of interesting new features such as automatic radial velocity correction, a fully automatic continuum determination, and an estimation of the errors for the equivalent widths. Methods: The automatic correction of the radial velocity is achieved with a simple cross-correlation function, and the automatic continuum determination, as well as the estimation of the errors, relies on a new approach to evaluating the spectral noise at the continuum level. Results: ARES v2 is totally compatible with its predecessor. We show that the fully automatic continuum determination is consistent with the previous methods applied for this task. It also presents a significant improvement in performance thanks to the implementation of parallel computation using the OpenMP library. Automatic Routine for line Equivalent widths in stellar Spectra (ARES) webpage: http://www.astro.up.pt/~sousasag/ares/ Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 075.D-0800(A).
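
    The radial-velocity correction rests on a standard cross-correlation idea; a toy sketch follows (pixel shifts on synthetic data, not ARES code, which works in wavelength space and converts the recovered shift to a velocity).

    ```python
    import numpy as np

    # Synthetic rest-frame template with one absorption-line-like feature.
    rng = np.random.default_rng(3)
    template = np.exp(-0.5 * ((np.arange(500) - 250) / 3.0) ** 2)
    true_shift = 7
    observed = np.roll(template, true_shift) + 0.01 * rng.normal(size=500)

    # Cross-correlation function; its peak lag is the spectral shift.
    ccf = np.correlate(observed - observed.mean(),
                       template - template.mean(), mode="full")
    shift = np.argmax(ccf) - (len(template) - 1)    # recovers 7
    ```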

  11. Automatic and semi-automatic approaches for arteriolar-to-venular computation in retinal photographs

    NASA Astrophysics Data System (ADS)

    Mendonça, Ana Maria; Remeseiro, Beatriz; Dashtbozorg, Behdad; Campilho, Aurélio

    2017-03-01

    The Arteriolar-to-Venular Ratio (AVR) is a popular dimensionless measure which allows the assessment of patients' condition for the early diagnosis of different diseases, including hypertension and diabetic retinopathy. This paper presents two new approaches for AVR computation in retinal photographs which include a sequence of automated processing steps: vessel segmentation, caliber measurement, optic disc segmentation, artery/vein classification, region of interest delineation, and AVR calculation. Both approaches have been tested on the INSPIRE-AVR dataset and compared with a ground truth provided by two medical specialists. The obtained results demonstrate the reliability of the fully automatic approach, which provides AVR ratios very similar to those of at least one of the observers. Furthermore, the semi-automatic approach, which includes manual modification of the artery/vein classification if needed, makes it possible to reduce the error significantly, to a level below the human error.

  12. Automatic Exposure Control Device for Digital Mammography

    DTIC Science & Technology

    2001-08-01

    developing innovative approaches for controlling DM exposures. These approaches entail using the digital detector and an artificial neural network to...of interest that determine the exposure parameters for the fully exposed image; and (2) to use an artificial neural network to select exposure

  13. Automatic Exposure Control Device for Digital Mammography

    DTIC Science & Technology

    2004-08-01

    developing innovative approaches for controlling DM exposures. These approaches entail using the digital detector and an artificial neural network to...of interest that determine the exposure parameters for the fully exposed image; and (2) to use an artificial neural network to select exposure

  14. Fully automated tumor segmentation based on improved fuzzy connectedness algorithm in brain MR images.

    PubMed

    Harati, Vida; Khayati, Rasoul; Farzan, Abdolreza

    2011-07-01

    Uncontrollable and unlimited cell growth leads to tumorigenesis in the brain. If brain tumors are not diagnosed early and treated properly, they can cause permanent brain damage or even death. As in all treatment methods, information about tumor position and size is important for successful treatment; hence, an accurate and fully automated method for providing this information to physicians is necessary. A fully automatic and accurate method for tumor region detection and segmentation in brain magnetic resonance (MR) images is suggested. The presented approach is an improved, scale-based fuzzy connectedness (FC) algorithm in which the seed point is selected automatically. This algorithm is independent of the tumor type in terms of its pixel intensity. Tumor segmentation evaluation results based on similarity criteria (similarity index (SI) of 92.89%, overlap fraction (OF) of 91.75%, and extra fraction (EF) of 3.95%) indicate a higher performance of the proposed approach compared to conventional methods, especially in MR images with low-contrast tumor regions. Thus, the suggested method is useful for increasing the ability of automatic estimation of tumor size and position in brain tissues, which provides more accurate investigation of the required surgery, chemotherapy, and radiotherapy procedures. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. Fully automatic segmentation of femurs with medullary canal definition in high and in low resolution CT scans.

    PubMed

    Almeida, Diogo F; Ruben, Rui B; Folgado, João; Fernandes, Paulo R; Audenaert, Emmanuel; Verhegghe, Benedict; De Beule, Matthieu

    2016-12-01

    Femur segmentation can be an important tool in orthopedic surgical planning. However, to eliminate the need for an experienced user with extensive knowledge of the techniques, segmentation should be fully automatic. In this paper a new fully automatic femur segmentation method for CT images is presented. This method is also able to define the medullary canal automatically and performs well even in low-resolution CT scans. Fully automatic femoral segmentation was performed by adapting a template mesh of the femoral volume to the medical images. To achieve this, an adaptation of the active shape model (ASM) technique based on the statistical shape model (SSM) and local appearance model (LAM) of the femur, with a novel initialization method, was used to drive the template mesh deformation to fit the in-image femoral shape in a time-effective approach. With the proposed method a 98% convergence rate was achieved. For the high-resolution CT image group the average error is less than 1 mm. For the low-resolution image group the results are also accurate, and the average error is less than 1.5 mm. The proposed segmentation pipeline is accurate, robust and completely user-free. The method is robust to patient orientation, image artifacts and poorly defined edges. The results excelled even in CT images with a significant slice thickness, i.e., above 5 mm. Medullary canal segmentation increases the geometric information that can be used in orthopedic surgical planning or in finite element analysis. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.

  16. Automatic classification of seismic events within a regional seismograph network

    NASA Astrophysics Data System (ADS)

    Tiira, Timo; Kortström, Jari; Uski, Marja

    2015-04-01

    A fully automatic method for seismic event classification within a sparse regional seismograph network is presented. The tool is based on a supervised pattern recognition technique, the Support Vector Machine (SVM), trained here to distinguish weak local earthquakes from a bulk of human-made or spurious seismic events. The classification rules rely on differences in signal energy distribution between natural and artificial seismic sources. Seismic records are divided into four windows: P, P coda, S, and S coda. For each signal window, the short-term average (STA) is computed in 20 narrow frequency bands between 1 and 41 Hz. The 80 discrimination parameters are used as training data for the SVM. The SVM models are calculated for 19 on-line seismic stations in Finland. The event data are compiled mainly from fully automatic event solutions that are manually classified after the automatic location process. The station-specific SVM training events include 11-302 positive (earthquake) and 227-1048 negative (non-earthquake) examples. The best voting rules for combining results from different stations are determined during an independent testing period. Finally, the network processing rules are applied to an independent evaluation period comprising 4681 fully automatic event determinations, of which 98% have been manually identified as explosions or noise and 2% as earthquakes. The SVM method correctly identifies 94% of the non-earthquakes and all of the earthquakes. The results imply that the SVM tool can identify and filter out blasts and spurious events from fully automatic event solutions with a high level of confidence. The tool helps to reduce the workload in manual seismic analysis by leaving only ~5% of the automatic event determinations, i.e. the probable earthquakes, for more detailed seismological analysis. The approach presented is easy to adjust to the requirements of a denser or wider high-frequency network, once enough training examples for building a station-specific data set are available.
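
    A minimal sketch of the station-level classifier under the stated feature layout (4 windows × 20 bands = 80 features per event); the toy data and RBF-kernel choice are assumptions, not the paper's exact configuration.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Toy band-wise short-term averages for 500 labeled events.
    rng = np.random.default_rng(4)
    X = rng.random((500, 80))
    y = rng.integers(0, 2, size=500)     # 1 = earthquake, 0 = blast/noise

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    clf.fit(X, y)

    # Station-level class probabilities, later combined by network voting rules.
    p_eq = clf.predict_proba(X[:5])[:, 1]
    ```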

  17. Automated contour detection in X-ray left ventricular angiograms using multiview active appearance models and dynamic programming.

    PubMed

    Oost, Elco; Koning, Gerhard; Sonka, Milan; Oemrawsingh, Pranobe V; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2006-09-01

    This paper describes a new approach to the automated segmentation of X-ray left ventricular (LV) angiograms, based on active appearance models (AAMs) and dynamic programming. A coupling of shape and texture information between the end-diastolic (ED) and end-systolic (ES) frame was achieved by constructing a multiview AAM. Over-constraining of the model was compensated for by employing dynamic programming, integrating both intensity and motion features in the cost function. Two applications are compared: a semi-automatic method with manual model initialization, and a fully automatic algorithm. The first proved to be highly robust and accurate, demonstrating high clinical relevance. Based on experiments involving 70 patient data sets, the algorithm's success rate was 100% for ED and 99% for ES, with average unsigned border positioning errors of 0.68 mm for ED and 1.45 mm for ES. Calculated volumes were accurate and unbiased. The fully automatic algorithm, with intrinsically less user interaction, was less robust but showed high potential, mostly owing to a controlled gradient descent in updating the model parameters. The success rate of the fully automatic method was 91% for ED and 83% for ES, with average unsigned border positioning errors of 0.79 mm for ED and 1.55 mm for ES.
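
    Dynamic programming over a cost image is the classical way to extract such a boundary; a self-contained toy sketch follows (the paper's intensity-plus-motion cost is replaced here by random costs).

    ```python
    import numpy as np

    # Cost image: rows = candidate boundary positions, columns = samples along
    # the contour; the path may move -1/0/+1 rows between adjacent columns.
    rng = np.random.default_rng(5)
    cost = rng.random((40, 60))

    # Forward pass: accumulate the minimal cost to reach each cell.
    acc = cost.copy()
    for j in range(1, cost.shape[1]):
        prev = acc[:, j - 1]
        best_prev = np.minimum(np.minimum(np.roll(prev, 1), prev), np.roll(prev, -1))
        best_prev[0] = min(prev[0], prev[1])       # undo wrap-around at the borders
        best_prev[-1] = min(prev[-1], prev[-2])
        acc[:, j] += best_prev

    # Backtrack from the cheapest end point to recover the boundary path.
    path = [int(np.argmin(acc[:, -1]))]
    for j in range(cost.shape[1] - 1, 0, -1):
        i = path[-1]
        lo, hi = max(i - 1, 0), min(i + 2, cost.shape[0])
        path.append(lo + int(np.argmin(acc[lo:hi, j - 1])))
    path.reverse()
    ```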

  18. Fully automatic left ventricular myocardial strain estimation in 2D short-axis tagged magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Morais, Pedro; Queirós, Sandro; Heyde, Brecht; Engvall, Jan; D'hooge, Jan; Vilaça, João L.

    2017-09-01

    Cardiovascular diseases are among the leading causes of death and frequently result in local myocardial dysfunction. Among the numerous imaging modalities available to detect these dysfunctional regions, cardiac deformation imaging through tagged magnetic resonance imaging (t-MRI) has been an attractive approach. Nevertheless, fully automatic analysis of these data sets is still challenging. In this work, we present a fully automatic framework to estimate left ventricular myocardial deformation from t-MRI. This strategy performs automatic myocardial segmentation based on B-spline explicit active surfaces, which are initialized using an annular model. A non-rigid image-registration technique is then used to assess myocardial deformation. Three experiments were set up to validate the proposed framework using a clinical database of 75 patients. First, automatic segmentation accuracy was evaluated by comparing against manual delineations at one specific cardiac phase. The proposed solution showed an average perpendicular distance error of 2.35 ± 1.21 mm and 2.27 ± 1.02 mm for the endo- and epicardium, respectively. Second, starting from either manual or automatic segmentation, myocardial tracking was performed and the resulting strain curves were compared. It is shown that the automatic segmentation adds negligible differences during the strain-estimation stage, corroborating its accuracy. Finally, segmental strain was compared with scar tissue extent determined by delay-enhanced MRI. The results proved that both strain components were able to distinguish between normal and infarct regions. Overall, the proposed framework was shown to be accurate, robust, and attractive for clinical practice, as it overcomes several limitations of a manual analysis.
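
    As a toy illustration of the strain-estimation step (not the paper's registration pipeline): once registration yields a dense displacement field, Lagrangian strain follows from the deformation gradient. The synthetic field below encodes a uniform 5% horizontal stretch.

    ```python
    import numpy as np

    # Toy 2D displacement field u(x, y) on a regular grid: du_x/dx = 0.05.
    y, x = np.mgrid[0:32, 0:32].astype(float)
    ux, uy = 0.05 * x, np.zeros_like(y)

    # Deformation gradient F = I + grad(u), from finite differences.
    dux_dy, dux_dx = np.gradient(ux)
    duy_dy, duy_dx = np.gradient(uy)
    F = np.stack([[1 + dux_dx, dux_dy],
                  [duy_dx, 1 + duy_dy]])            # shape (2, 2, 32, 32)

    # Green-Lagrange strain E = (F^T F - I) / 2; Exx = 0.05 + 0.5 * 0.05**2.
    Ft_F = np.einsum('ji...,jk...->ik...', F, F)
    E = 0.5 * (Ft_F - np.eye(2)[..., None, None])
    print(E[0, 0].mean())                            # ~0.05125
    ```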

  19. Fully automatic multi-atlas segmentation of CTA for partial volume correction in cardiac SPECT/CT

    NASA Astrophysics Data System (ADS)

    Liu, Qingyi; Mohy-ud-Din, Hassan; Boutagy, Nabil E.; Jiang, Mingyan; Ren, Silin; Stendahl, John C.; Sinusas, Albert J.; Liu, Chi

    2017-05-01

    Anatomical-based partial volume correction (PVC) has been shown to improve image quality and quantitative accuracy in cardiac SPECT/CT. However, this method requires manual segmentation of various organs from contrast-enhanced computed tomography angiography (CTA) data. In order to achieve fully automatic CTA segmentation for clinical translation, we investigated the most common multi-atlas segmentation methods. We also modified the multi-atlas segmentation method by introducing a novel label fusion algorithm for multiple organ segmentation to eliminate overlap and gap voxels. To evaluate our proposed automatic segmentation, eight canine 99mTc-labeled red blood cell SPECT/CT datasets that incorporated PVC were analyzed, using the leave-one-out approach. The Dice similarity coefficient of each organ was computed. Compared to the conventional label fusion method, our proposed label fusion method effectively eliminated gaps and overlaps and improved the CTA segmentation accuracy. The anatomical-based PVC of cardiac SPECT images with automatic multi-atlas segmentation provided consistent image quality and quantitative estimation of intramyocardial blood volume, as compared to those derived using manual segmentation. In conclusion, our proposed automatic multi-atlas segmentation method of CTAs is feasible, practical, and facilitates anatomical-based PVC of cardiac SPECT/CT images.
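
    A minimal sketch of label fusion by per-voxel majority vote, assuming each registered atlas contributes one candidate label per voxel; taking a single argmax label per voxel guarantees organ masks cannot overlap (the paper's fuller gap/overlap handling is not reproduced here).

    ```python
    import numpy as np

    # Labels propagated from 5 registered atlases onto a toy volume:
    # 0 = background, 1..3 = organs.
    rng = np.random.default_rng(6)
    atlas_labels = rng.integers(0, 4, size=(5, 16, 16, 16))

    # Per-voxel vote counts for each label, then argmax: every voxel receives
    # exactly one label, so fused organ masks can neither overlap nor leave
    # unlabeled gap voxels between adjacent structures.
    votes = np.stack([(atlas_labels == k).sum(axis=0) for k in range(4)])
    fused = votes.argmax(axis=0)
    ```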

  20. Automatic bladder segmentation from CT images using deep CNN and 3D fully connected CRF-RNN.

    PubMed

    Xu, Xuanang; Zhou, Fugen; Liu, Bo

    2018-03-19

    An automatic approach to bladder segmentation from computed tomography (CT) images is highly desirable in clinical practice. It is a challenging task, since the bladder usually exhibits large variations in appearance and low soft-tissue contrast in CT images. In this study, we present a deep learning-based approach which involves a convolutional neural network (CNN) and a 3D fully connected conditional random field recurrent neural network (CRF-RNN) to perform accurate bladder segmentation. We also propose a novel preprocessing method, called dual-channel preprocessing, to further advance the segmentation performance of our approach. The presented approach works as follows: first, we apply our proposed preprocessing method to the input CT image and obtain a dual-channel image which consists of the CT image and an enhanced bladder density map. Second, we exploit a CNN to predict a coarse voxel-wise bladder score map on this dual-channel image. Finally, a 3D fully connected CRF-RNN refines the coarse bladder score map and produces the final fine-localized segmentation result. We compare our approach to the state-of-the-art V-net on a clinical dataset. Results show that our approach achieves superior segmentation accuracy, outperforming the V-net by a significant margin. The Dice Similarity Coefficient of our approach (92.24%) is 8.12% higher than that of the V-net. Moreover, the bladder probability maps produced by our approach present sharper boundaries and more accurate localizations compared with those of the V-net. Our approach achieves higher segmentation accuracy than the state-of-the-art method on clinical data. Both the dual-channel preprocessing and the 3D fully connected CRF-RNN contribute to this improvement. The united deep network composed of the CNN and 3D CRF-RNN also outperforms a system where the CRF model acts as a post-processing method disconnected from the CNN.
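
    A minimal sketch of what a dual-channel input could look like; the simple windowing used to build the enhanced bladder density map here is an assumption, standing in for the paper's more elaborate enhancement.

    ```python
    import numpy as np

    # Toy CT slice in Hounsfield units.
    rng = np.random.default_rng(7)
    ct = rng.uniform(-1000, 1000, size=(512, 512))

    # Assumed window roughly covering the urine/soft-tissue range; clipping and
    # normalizing yields a crude "bladder density map".
    lo, hi = -50.0, 100.0
    density_map = np.clip((ct - lo) / (hi - lo), 0.0, 1.0)

    # Stack into the 2-channel image fed to the CNN: (channels, H, W).
    dual_channel = np.stack([ct, density_map])
    ```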

  1. Comparison between manual and semi-automatic segmentation of nasal cavity and paranasal sinuses from CT images.

    PubMed

    Tingelhoff, K; Moral, A I; Kunkel, M E; Rilk, M; Wagner, I; Eichhorn, K G; Wahl, F M; Bootz, F

    2007-01-01

    Segmentation of medical image data has become increasingly important in recent years. The results are used for diagnosis, surgical planning or workspace definition of robot-assisted systems. The purpose of this paper is to find out whether manual or semi-automatic segmentation is adequate for the ENT surgical workflow or whether fully automatic segmentation of the paranasal sinuses and nasal cavity is needed. We present a comparison of manual and semi-automatic segmentation of the paranasal sinuses and the nasal cavity. Manual segmentation is performed with custom software, whereas semi-automatic segmentation is realized with a commercial product (Amira). For this study we used a CT dataset of the paranasal sinuses which consists of 98 transversal slices, each 1.0 mm thick, with a resolution of 512 x 512 pixels. For the analysis of both segmentation procedures we used volume, extension (width, length and height), segmentation time and 3D reconstruction. The segmentation time was reduced from 960 minutes with manual to 215 minutes with semi-automatic segmentation. We found the highest variances when segmenting the nasal cavity. For the paranasal sinuses, the volume differences between manual and semi-automatic segmentation are not significant. Depending on the required segmentation accuracy, both approaches deliver useful results and could be used, e.g., for robot-assisted systems. Nevertheless, neither procedure is practical in the everyday surgical workflow, because both take too much time. Fully automatic and reproducible segmentation algorithms are needed for segmentation of the paranasal sinuses and nasal cavity.

  2. Ultramap v3 - a Revolution in Aerial Photogrammetry

    NASA Astrophysics Data System (ADS)

    Reitinger, B.; Sormann, M.; Zebedin, L.; Schachinger, B.; Hoefler, M.; Tomasi, R.; Lamperter, M.; Gruber, B.; Schiester, G.; Kobald, M.; Unger, M.; Klaus, A.; Bernoegger, S.; Karner, K.; Wiechert, A.; Ponticelli, M.; Gruber, M.

    2012-07-01

    In recent years, Microsoft has driven innovation in the aerial photogrammetry community. Besides the market-leading camera technology, UltraMap has grown into an outstanding photogrammetric workflow system which enables users to work effectively with large digital aerial image blocks in a highly automated way. The best example is the project-based color balancing approach, which automatically balances images to a homogeneous block. UltraMap v3 continues this innovation and offers a revolution in ortho processing. A fully automated dense matching module produces high-precision digital surface models (DSMs), calculated on either CPUs or GPUs using a distributed processing framework. By applying constrained filtering algorithms, a digital terrain model can be derived, which in turn can be used for fully automated traditional ortho texturing. With knowledge of the underlying geometry, seamlines can be generated automatically by applying cost functions that minimize visually disturbing artifacts. By exploiting the generated DSM information, a DSMOrtho is created using the balanced input images. Again, seamlines are detected automatically, resulting in an automatically balanced ortho mosaic. Interactive block-based radiometric adjustments lead to a high-quality ortho product based on UltraCam imagery. UltraMap v3 is the first fully integrated and interactive solution for making the best use of UltraCam imagery to deliver DSM and ortho products.

  3. A Modular Hierarchical Approach to 3D Electron Microscopy Image Segmentation

    PubMed Central

    Liu, Ting; Jones, Cory; Seyedhosseini, Mojtaba; Tasdizen, Tolga

    2014-01-01

    The study of neural circuit reconstruction, i.e., connectomics, is a challenging problem in neuroscience. Automated and semi-automated electron microscopy (EM) image analysis can be tremendously helpful for connectomics research. In this paper, we propose a fully automatic approach for intra-section segmentation and inter-section reconstruction of neurons using EM images. A hierarchical merge tree structure is built to represent multiple region hypotheses and supervised classification techniques are used to evaluate their potentials, based on which we resolve the merge tree with consistency constraints to acquire final intra-section segmentation. Then, we use a supervised learning based linking procedure for the inter-section neuron reconstruction. Also, we develop a semi-automatic method that utilizes the intermediate outputs of our automatic algorithm and achieves intra-segmentation with minimal user intervention. The experimental results show that our automatic method can achieve close-to-human intra-segmentation accuracy and state-of-the-art inter-section reconstruction accuracy. We also show that our semi-automatic method can further improve the intra-segmentation accuracy. PMID:24491638

  4. A hybrid 3D region growing and 4D curvature analysis-based automatic abdominal blood vessel segmentation through contrast enhanced CT

    NASA Astrophysics Data System (ADS)

    Maklad, Ahmed S.; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Shimada, Mitsuo; Iinuma, Gen

    2017-03-01

    In abdominal disease diagnosis and the planning of various abdominal surgeries, segmentation of abdominal blood vessels (ABVs) is an imperative task. Automatic segmentation enables fast and accurate processing of ABVs. We propose a fully automatic approach for segmenting ABVs in contrast-enhanced CT images using a hybrid of 3D region growing and 4D curvature analysis. The proposed method comprises three stages. First, candidates for bone, kidneys, ABVs and heart are segmented by an auto-adapted threshold. Second, bone is automatically segmented and classified into spine, ribs and pelvis. Third, ABVs are automatically segmented in two sub-steps: (1) the kidneys and the abdominal part of the heart are segmented; (2) ABVs are segmented by a hybrid approach that integrates 3D region growing and 4D curvature analysis. Results are compared with two conventional methods. They show that the proposed method is very promising for segmenting and classifying bone and segmenting whole ABVs, and that it may have utility in clinical use.
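
    A self-contained toy sketch of the region-growing ingredient (fixed tolerance, 26-connectivity); the paper's method additionally adapts its thresholds and couples the growing with 4D curvature analysis.

    ```python
    import numpy as np
    from collections import deque

    def region_grow_3d(volume, seed, tol=100.0):
        """Toy 26-connected 3D region growing: accept neighbors whose intensity
        stays within `tol` of the seed intensity."""
        mask = np.zeros(volume.shape, dtype=bool)
        seed_val = volume[seed]
        queue = deque([seed])
        mask[seed] = True
        offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
                   for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in offsets:
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                        and not mask[n] and abs(volume[n] - seed_val) <= tol:
                    mask[n] = True
                    queue.append(n)
        return mask

    # Toy CT-like volume with a bright "vessel" running along the z-axis.
    vol = np.zeros((32, 32, 32))
    vol[:, 15:17, 15:17] = 300.0
    vessel = region_grow_3d(vol, seed=(0, 15, 15))
    ```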

  5. Advances of FishNet towards a fully automatic monitoring system for fish migration

    NASA Astrophysics Data System (ADS)

    Kratzert, Frederik; Mader, Helmut

    2017-04-01

    Restoring the continuum of river networks affected by anthropogenic constructions is one of the main objectives of the Water Framework Directive. Regarding fish migration, fish passes are a widely used measure, and the functionality of these fish passes often needs to be assessed by monitoring. Over the last years, we developed a new semi-automatic monitoring system (FishCam) which allows the contact-free observation of fish migration in fish passes through videos. The system consists of a detection tunnel, equipped with a camera, a motion sensor and artificial light sources, as well as software (FishNet) that helps to analyze the video data. In its latest version, the software is capable of detecting and tracking objects in the videos as well as classifying them into "fish" and "no-fish" objects. This allows filtering out the videos containing at least one fish (approx. 5% of all grabbed videos) and reduces the manual labor to the analysis of these videos. In this state, the system has already been used in over 20 different fish passes across Austria, for a total of over 140 months of monitoring, resulting in more than 1.4 million analyzed videos. As a next step towards a fully automatic monitoring system, a key feature is the automatized classification of the detected fish into their species, which is still an unsolved task in a fully automatic monitoring environment. Recent advances in the field of machine learning, especially image classification with deep convolutional neural networks, seem promising for solving this problem. In this study, different approaches for fish species classification are tested. Besides an image-only classification approach using deep convolutional neural networks, various methods that combine the power of convolutional neural networks as image descriptors with additional features, such as the fish length and the time of appearance, are explored. To facilitate the development and testing phase of this approach, a subset of six fish species of Austrian rivers and streams is considered in this study. All scripts and the data to reproduce the results of this study will be made publicly available on GitHub* at the beginning of the EGU2017 General Assembly. * https://github.com/kratzert/EGU2017_public/

  6. Model-based position correlation between breast images

    NASA Astrophysics Data System (ADS)

    Georgii, J.; Zöhrer, F.; Hahn, H. K.

    2013-02-01

    Nowadays, breast diagnosis is based on images of different projections and modalities, such that the sensitivity and specificity of the diagnosis can be improved. However, this burdens radiologists with finding corresponding locations in these data sets, a time-consuming task, especially as image resolution increases and thus more and more data have to be considered in the diagnosis. Therefore, we aim to support radiologists by automatically synchronizing cursor positions between different views of the breast. Specifically, we present an automatic approach to compute the spatial correlation between MLO and CC mammogram or tomosynthesis projections of the breast. It is based on pre-computed finite element simulations of generic breast models, which are adapted to the patient-specific breast using a contour mapping approach. Our approach is designed to be fully automatic and efficient, such that it can be implemented directly into existing multimodal breast workstations. Additionally, it is extendable to support other breast modalities in the future.

  7. Using Machine Learning to Increase Research Efficiency: A New Approach in Environmental Sciences

    USDA-ARS?s Scientific Manuscript database

    Data collection has evolved from tedious in-person fieldwork to automatic, remote data gathering from multiple sensors. Scientists in environmental sciences have not fully exploited this data deluge, including legacy and new data, because the traditional scientific method is focused on small, high qu...

  8. Automatic multi-organ segmentation using learning-based segmentation and level set optimization.

    PubMed

    Kohlberger, Timo; Sofka, Michal; Zhang, Jingdan; Birkbeck, Neil; Wetzl, Jens; Kaftan, Jens; Declerck, Jérôme; Zhou, S Kevin

    2011-01-01

    We present a novel generic segmentation system for fully automatic multi-organ segmentation from CT medical images. We thereby combine the advantages of learning-based approaches built on point cloud-based shape representations, such as speed, robustness, and point correspondences, with those of PDE-optimization-based level set approaches, such as high accuracy and the straightforward prevention of segment overlaps. In a benchmark on 10-100 annotated datasets for the liver, the lungs, and the kidneys, we show that the proposed system yields segmentation accuracies of 1.17-2.89 mm average surface error. Thereby the level set segmentation (which is initialized by the learning-based segmentations) contributes a 20%-40% increase in accuracy.

  9. Efficient content-based low-altitude images correlated network and strips reconstruction

    NASA Astrophysics Data System (ADS)

    He, Haiqing; You, Qi; Chen, Xiaoyong

    2017-01-01

    Manual intervention is widely used to reconstruct strips for further aerial triangulation in low-altitude photogrammetry, but it is clearly not what one expects of fully automatic photogrammetric data processing. In this paper, we explore a content-based approach for strip reconstruction without manual intervention or external information. Feature descriptors in local spatial patterns are extracted by SIFT to construct a vocabulary tree, in which these features are encoded with the TF-IDF numerical statistic to generate a new representation for each low-altitude image. The image-correlation network is then reconstructed by similarity measures, image matching and geometric graph theory. Finally, strips are reconstructed automatically by tracing straight lines and growing adjacent images gradually. Experimental results show that the proposed approach is highly effective in automatically rearranging strips of low-altitude images and can provide a rough relative orientation for further aerial triangulation.
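
    A minimal sketch of the TF-IDF representation and similarity step, assuming descriptors have already been quantized into visual-word counts by the vocabulary tree (the counts, vocabulary size, and similarity threshold below are toy assumptions).

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import TfidfTransformer
    from sklearn.metrics.pairwise import cosine_similarity

    # Each image is a histogram of visual-word counts (toy data, 1000-word vocabulary).
    rng = np.random.default_rng(8)
    word_counts = rng.poisson(0.05, size=(40, 1000))     # 40 images

    # TF-IDF re-weighting down-weights visual words common to many images.
    tfidf = TfidfTransformer().fit_transform(word_counts)

    # Pairwise cosine similarity gives the image-correlation network; thresholding
    # it yields candidate overlapping image pairs for strip reconstruction.
    sim = cosine_similarity(tfidf)
    pairs = np.argwhere(np.triu(sim, k=1) > 0.2)
    ```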

  10. Research and Development of Fully Automatic Alien Smoke Stack and Packaging System

    NASA Astrophysics Data System (ADS)

    Yang, Xudong; Ge, Qingkuan; Peng, Tao; Zuo, Ping; Dong, Weifu

    2017-12-01

    To address the low efficiency of manual sorting and packaging in current tobacco distribution centers, a safe, efficient and fully automatic irregular ("alien") cigarette stacking and packaging system was developed. The system adopts PLC control technology, servo control technology, robot technology, image recognition technology and human-computer interaction technology. The characteristics, principles, control process and key technology of the system are discussed in detail. After installation and commissioning, the fully automatic stacking and packaging system performed well and met the requirements for handling shaped cigarettes.

  11. Fully automatic oil spill detection from COSMO-SkyMed imagery using a neural network approach

    NASA Astrophysics Data System (ADS)

    Avezzano, Ruggero G.; Del Frate, Fabio; Latini, Daniele

    2012-09-01

    The increased amount of available Synthetic Aperture Radar (SAR) images acquired over the ocean represents an extraordinary potential for improving oil spill detection activities. On the other hand, this involves a growing workload for the operators at analysis centers. In addition, even if the operators go through extensive training to learn manual oil spill detection, they can provide different and subjective responses. Hence, upgrades and improvements of algorithms for automatic detection that can help in screening the images and prioritizing the alarms are of great benefit. In the framework of an ASI Announcement of Opportunity for the exploitation of COSMO-SkyMed data, a research activity (ASI contract L/020/09/0) studying the possibility of using neural network architectures to set up fully automatic processing chains with COSMO-SkyMed imagery has been carried out, and its results are presented in this paper. The automatic identification of an oil spill is seen as a three-step process based on segmentation, feature extraction and classification. We observed that a PCNN (Pulse Coupled Neural Network) was capable of providing satisfactory performance in extracting the different dark spots, close to what would be produced by manual editing. For the classification task a Multi-Layer Perceptron (MLP) Neural Network was employed.

  12. Comparison of Landsat-8, ASTER and Sentinel 1 satellite remote sensing data in automatic lineaments extraction: A case study of Sidi Flah-Bouskour inlier, Moroccan Anti Atlas

    NASA Astrophysics Data System (ADS)

    Adiri, Zakaria; El Harti, Abderrazak; Jellouli, Amine; Lhissou, Rachid; Maacha, Lhou; Azmi, Mohamed; Zouhair, Mohamed; Bachaoui, El Mostafa

    2017-12-01

    Certainly, lineament mapping occupies an important place in several fields, including geology, hydrogeology and topography. With the help of remote sensing techniques, lineaments can be better identified thanks to strong advances in the data and methods used, which have made it possible to go beyond the usual classical procedures and achieve more precise results. The aim of this work is to compare ASTER, Landsat-8 and Sentinel 1 sensor data for automatic lineament extraction. In addition to the image data, the approach includes the use of the pre-existing geological map, the Digital Elevation Model (DEM), and the ground truth. Through a fully automatic approach consisting of a combination of an edge detection algorithm and a line-linking algorithm, we have found the optimal parameters for automatic lineament extraction in the study area. Thereafter, the comparison and validation of the obtained results showed that the Sentinel 1 data are the most effective for the restitution of lineaments, indicating the superior performance of radar data over optical data in this kind of study.
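
    A minimal sketch of an edge-detection plus line-linking chain of the kind described, on a toy band image; skimage stands in for the study's software, and all parameter values are assumptions (the study tunes them per sensor).

    ```python
    import numpy as np
    from skimage.feature import canny
    from skimage.transform import probabilistic_hough_line

    # Toy satellite band with one synthetic linear feature.
    rng = np.random.default_rng(9)
    band = rng.random((256, 256))
    band[:, 100:102] += 2.0

    edges = canny(band, sigma=2.0)              # step 1: edge detection
    lineaments = probabilistic_hough_line(      # step 2: link edges into segments
        edges, threshold=10, line_length=30, line_gap=3)
    # Each entry is ((x0, y0), (x1, y1)) -- a candidate lineament segment.
    ```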

  13. Algorithms for the automatic generation of 2-D structured multi-block grids

    NASA Technical Reports Server (NTRS)

    Schoenfeld, Thilo; Weinerfelt, Per; Jenssen, Carl B.

    1995-01-01

    Two different approaches to the fully automatic generation of structured multi-block grids in two dimensions are presented. The work aims to simplify the user interactivity necessary for the definition of a multiple block grid topology. The first approach is based on an advancing front method commonly used for the generation of unstructured grids. The original algorithm has been modified toward the generation of large quadrilateral elements. The second method is based on the divide-and-conquer paradigm with the global domain recursively partitioned into sub-domains. For either method each of the resulting blocks is then meshed using transfinite interpolation and elliptic smoothing. The applicability of these methods to practical problems is demonstrated for typical geometries of fluid dynamics.
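
    For reference, a minimal numpy sketch of transfinite interpolation (a Coons patch) meshing one block from its four boundary curves; the curved top boundary is a toy assumption.

    ```python
    import numpy as np

    n = m = 21
    u = np.linspace(0, 1, n)          # parameter along bottom/top boundaries
    v = np.linspace(0, 1, m)          # parameter along left/right boundaries

    bottom = np.stack([u, np.zeros(n)], axis=-1)                   # y = 0
    top    = np.stack([u, 1.0 + 0.1 * np.sin(np.pi * u)], axis=-1) # curved top
    left   = np.stack([np.zeros(m), v], axis=-1)                   # x = 0
    right  = np.stack([np.ones(m), v], axis=-1)                    # x = 1

    # Coons patch: blend opposite boundaries, subtract the doubly counted corners.
    U, V = u[:, None, None], v[None, :, None]
    grid = ((1 - V) * bottom[:, None] + V * top[:, None]
            + (1 - U) * left[None, :] + U * right[None, :]
            - ((1 - U) * (1 - V) * bottom[0] + U * (1 - V) * bottom[-1]
               + (1 - U) * V * top[0] + U * V * top[-1]))
    # grid[i, j] is the (x, y) position of interior mesh node (i, j).
    ```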

  14. Computer-aided endovascular aortic repair using fully automated two- and three-dimensional fusion imaging.

    PubMed

    Panuccio, Giuseppe; Torsello, Giovanni Federico; Pfister, Markus; Bisdas, Theodosios; Bosiers, Michel J; Torsello, Giovanni; Austermann, Martin

    2016-12-01

    To assess the usability of a fully automated fusion imaging engine prototype, matching preinterventional computed tomography with intraoperative fluoroscopic angiography during endovascular aortic repair. From June 2014 to February 2015, all patients treated electively for abdominal and thoracoabdominal aneurysms were enrolled prospectively. Before each procedure, preoperative planning was performed with a fully automated fusion engine prototype based on computed tomography angiography, creating a mesh model of the aorta. In a second step, this three-dimensional dataset was registered with the two-dimensional intraoperative fluoroscopy. The main outcome measure was the applicability of the fully automated fusion engine. Secondary outcomes were freedom from failure of the automatic segmentation or of the automatic registration, as well as the accuracy of the mesh model, measured as deviations from intraoperative angiography in millimeters, where applicable. Twenty-five patients were enrolled in this study. The fusion imaging engine could be used successfully in 92% of the cases (n = 23). Freedom from failure of the automatic segmentation was 44% (n = 11). Freedom from failure of the automatic registration was 76% (n = 19); the median error of the automatic registration process was 0 mm (interquartile range, 0-5 mm). The fully automated fusion imaging engine was found to be applicable in most cases, albeit in several cases fully automated data processing was not possible and manual intervention was required. The accuracy of the automatic registration yielded excellent results and promises a useful and simple-to-use technology. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  15. Automatic inference of multicellular regulatory networks using informative priors.

    PubMed

    Sun, Xiaoyun; Hong, Pengyu

    2009-01-01

    To fully understand the mechanisms governing animal development, computational models and algorithms are needed to enable quantitative studies of the underlying regulatory networks. We developed a mathematical model based on dynamic Bayesian networks to model multicellular regulatory networks that govern cell differentiation processes. A machine-learning method was developed to automatically infer such a model from heterogeneous data. We show that the model inference procedure can be greatly improved by incorporating interaction data across species. The proposed approach was applied to C. elegans vulval induction to reconstruct a model capable of simulating C. elegans vulval induction under 73 different genetic conditions.

  16. The ACODEA Framework: Developing Segmentation and Classification Schemes for Fully Automatic Analysis of Online Discussions

    ERIC Educational Resources Information Center

    Mu, Jin; Stegmann, Karsten; Mayfield, Elijah; Rose, Carolyn; Fischer, Frank

    2012-01-01

    Research related to online discussions frequently faces the problem of analyzing huge corpora. Natural Language Processing (NLP) technologies may allow automating this analysis. However, the state-of-the-art in machine learning and text mining approaches yields models that do not transfer well between corpora related to different topics. Also,…

  17. Pipeline Reduction of Binary Light Curves from Large-Scale Surveys

    NASA Astrophysics Data System (ADS)

    Prša, Andrej; Zwitter, Tomaž

    2007-08-01

    One of the most important changes in observational astronomy of the 21st century is the rapid shift from classical object-by-object observations to extensive automatic surveys. As CCD detectors get better and their prices get lower, more and more small and medium-size observatories are refocusing their attention on the detection of stellar variability through systematic sky-scanning missions. This trend is additionally powered by the success of pioneering surveys such as ASAS, DENIS, OGLE, TASS, their space counterpart Hipparcos, and others. Such surveys produce massive amounts of data, and it is not at all clear how these data are to be reduced and analysed. This is especially striking in the eclipsing binary (EB) field, where the most frequently used tools are optimized for object-by-object analysis. A clear need for thorough, reliable and fully automated approaches to the modeling and analysis of EB data is thus obvious. This task is very difficult because of limited data quality, non-uniform phase coverage and parameter degeneracy. The talk will review recent advancements in putting together semi-automatic and fully automatic pipelines for EB data processing. Automatic procedures have already been used to process the Hipparcos data, LMC/SMC observations, OGLE and ASAS catalogs, etc. We shall discuss the advantages and shortcomings of these procedures and overview the current status of automatic EB modeling pipelines for upcoming missions such as CoRoT, Kepler, Gaia and others.

  18. A novel fully automatic scheme for fiducial marker-based alignment in electron tomography.

    PubMed

    Han, Renmin; Wang, Liansan; Liu, Zhiyong; Sun, Fei; Zhang, Fa

    2015-12-01

    Although the topic of fiducial marker-based alignment in electron tomography (ET) has been widely discussed for decades, alignment without human intervention remains a difficult problem. Specifically, the emergence of subtomogram averaging has increased the demand for batch processing during tomographic reconstruction; fully automatic fiducial marker-based alignment is the main technique in this process. However, the lack of an accurate method for detecting and tracking fiducial markers precludes fully automatic alignment. In this paper, we present a novel, fully automatic alignment scheme for ET. Our scheme makes two main contributions: First, we present a series of algorithms to ensure a high recognition rate and precise localization during the detection of fiducial markers. Our proposed solution reduces fiducial marker detection to a sampling and classification problem and further introduces an algorithm to solve the parameter dependence of marker diameter and marker number. Second, we propose a novel algorithm that solves the tracking of fiducial markers by reducing it to an incomplete point set registration problem. Because the point set registration is optimized globally, the result of our tracking is independent of the initial image position in the tilt series, allowing for the robust tracking of fiducial markers without pre-alignment. The experimental results indicate that our method achieves accurate tracking, almost identical to the current best in IMOD, which uses a semi-automatic scheme. Furthermore, our scheme is fully automatic, depends on fewer parameters (requiring only a rough value of the marker diameter) and does not require any manual interaction, opening the possibility of automatic batch processing of electron tomographic reconstruction. Copyright © 2015 Elsevier Inc. All rights reserved.
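
    As a simplified, hedged illustration of the registration idea: with known one-to-one correspondences, the optimal rigid alignment between two marker point sets has a closed form (the Kabsch algorithm). The paper's actual problem is harder, since correspondences are unknown and the point sets are incomplete.

    ```python
    import numpy as np

    def kabsch(P, Q):
        """Optimal rotation R and translation t minimizing ||R @ P_i + t - Q_i||."""
        Pc, Qc = P - P.mean(0), Q - Q.mean(0)
        U, _, Vt = np.linalg.svd(Pc.T @ Qc)
        d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, Q.mean(0) - R @ P.mean(0)

    # Toy marker sets: Q is a rotated + shifted copy of P.
    rng = np.random.default_rng(10)
    P = rng.normal(size=(20, 3))
    theta = 0.3
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
    R, t = kabsch(P, Q)                              # recovers R_true and the shift
    ```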

  19. Automatic detection of spiculation of pulmonary nodules in computed tomography images

    NASA Astrophysics Data System (ADS)

    Ciompi, F.; Jacobs, C.; Scholten, E. T.; van Riel, S. J.; Wille, M. M. W.; Prokop, M.; van Ginneken, B.

    2015-03-01

    We present a fully automatic method for the assessment of spiculation of pulmonary nodules in low-dose Computed Tomography (CT) images. Spiculation is considered as one of the indicators of nodule malignancy and an important feature to assess in order to decide on a patient-tailored follow-up procedure. For this reason, lung cancer screening scenario would benefit from the presence of a fully automatic system for the assessment of spiculation. The presented framework relies on the fact that spiculated nodules mainly differ from non-spiculated ones in their morphology. In order to discriminate the two categories, information on morphology is captured by sampling intensity profiles along circular patterns on spherical surfaces centered on the nodule, in a multi-scale fashion. Each intensity profile is interpreted as a periodic signal, where the Fourier transform is applied, obtaining a spectrum. A library of spectra is created by clustering data via unsupervised learning. The centroids of the clusters are used to label back each spectrum in the sampling pattern. A compact descriptor encoding the nodule morphology is obtained as the histogram of labels along all the spherical surfaces and used to classify spiculated nodules via supervised learning. We tested our approach on a set of nodules from the Danish Lung Cancer Screening Trial (DLCST) dataset. Our results show that the proposed method outperforms other 3-D descriptors of morphology in the automatic assessment of spiculation.
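
    A minimal sketch of the descriptor pipeline on synthetic signals: FFT spectra of periodic intensity profiles, an unsupervised codebook of spectra, and a label histogram as the compact feature vector. The cluster count, profile length, and toy profiles are assumptions, and sampling from a real CT volume is omitted.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Toy periodic intensity profiles, as if sampled along circular patterns.
    rng = np.random.default_rng(11)
    angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    profiles = np.array([np.cos(k * angles) + 0.1 * rng.normal(size=64)
                         for k in rng.integers(1, 6, size=200)])

    spectra = np.abs(np.fft.rfft(profiles, axis=1))   # one spectrum per profile

    # Build the spectrum codebook by unsupervised clustering, then label back.
    kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(spectra)
    labels = kmeans.predict(spectra)

    # Compact morphology descriptor: histogram of spectrum-cluster labels.
    descriptor = np.bincount(labels, minlength=8) / labels.size
    ```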

  20. An Approach for Automatic Generation of Adaptive Hypermedia in Education with Multilingual Knowledge Discovery Techniques

    ERIC Educational Resources Information Center

    Alfonseca, Enrique; Rodriguez, Pilar; Perez, Diana

    2007-01-01

    This work describes a framework that combines techniques from Adaptive Hypermedia and Natural Language Processing in order to create, in a fully automated way, on-line information systems from linear texts in electronic format, such as textbooks. The process is divided into two steps: an "off-line" processing step, which analyses the source text,…

  1. Fully automatized renal parenchyma volumetry using a support vector machine based recognition system for subject-specific probability map generation in native MR volume data

    NASA Astrophysics Data System (ADS)

    Gloger, Oliver; Tönnies, Klaus; Mensel, Birger; Völzke, Henry

    2015-11-01

    In epidemiological studies as well as in clinical practice, the amount of medical image data produced has increased strongly in the last decade. In this context, organ segmentation in MR volume data has gained increasing attention for medical applications. Especially in large-scale population-based studies, organ volumetry is highly relevant and requires exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automated methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework of a two-step probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are subsequently refined using several extended segmentation strategies. We present a three-class support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high-quality subject-specific parenchyma probability maps. Several refinement strategies, including a final shape-based 3D level set segmentation technique, are used in subsequent processing modules to segment renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from the parenchymal volume, which is important for analyzing renal function. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches.

  2. Fully automatized renal parenchyma volumetry using a support vector machine based recognition system for subject-specific probability map generation in native MR volume data.

    PubMed

    Gloger, Oliver; Tönnies, Klaus; Mensel, Birger; Völzke, Henry

    2015-11-21

    In epidemiological studies as well as in clinical practice, the amount of medical image data produced has increased strongly in the last decade. In this context, organ segmentation in MR volume data has gained increasing attention for medical applications. Especially in large-scale population-based studies, organ volumetry is highly relevant and requires exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automated methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework of a two-step probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are subsequently refined using several extended segmentation strategies. We present a three-class support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high-quality subject-specific parenchyma probability maps. Several refinement strategies, including a final shape-based 3D level set segmentation technique, are used in subsequent processing modules to segment renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from the parenchymal volume, which is important for analyzing renal function. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches.
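
    Fourier descriptors of a closed boundary, used here as shape features, can be sketched in a few lines of NumPy; the normalization below (dropping the DC term, dividing by the first harmonic) is one common convention and only an assumption about the exact variant used:

        import numpy as np

        def fourier_descriptors(contour, n_coeffs=16):
            """contour: (N, 2) ordered boundary points; returns invariant descriptors."""
            z = contour[:, 0] + 1j * contour[:, 1]   # complex encoding of the boundary
            coeffs = np.fft.fft(z)
            coeffs[0] = 0                            # drop DC term: translation invariance
            mags = np.abs(coeffs[1:n_coeffs + 1])    # magnitudes: rotation invariance
            return mags / (mags[0] + 1e-12)          # divide by first harmonic: scale invariance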

  3. Automatic segmentation and quantification of the cardiac structures from non-contrast-enhanced cardiac CT scans

    NASA Astrophysics Data System (ADS)

    Shahzad, Rahil; Bos, Daniel; Budde, Ricardo P. J.; Pellikaan, Karlijn; Niessen, Wiro J.; van der Lugt, Aad; van Walsum, Theo

    2017-05-01

    Early structural changes to the heart, including the chambers and the coronary arteries, provide important information on pre-clinical heart disease such as cardiac failure. Currently, contrast-enhanced cardiac computed tomography angiography (CCTA) is the preferred modality for the visualization of the cardiac chambers and the coronaries. In clinical practice not every patient undergoes a CCTA scan; many patients receive only a non-contrast-enhanced calcium scoring CT scan (CTCS), which involves a lower radiation dose and does not require the administration of contrast agent. Quantifying cardiac structures in such images is challenging, as they lack the contrast present in CCTA scans. Such quantification would nevertheless be valuable, as it enables population-based studies using only a CTCS scan. The purpose of this work is therefore to investigate the feasibility of automatic segmentation and quantification of cardiac structures, viz. the whole heart, left atrium, left ventricle, right atrium, right ventricle and aortic root, from CTCS scans. A fully automatic multi-atlas-based segmentation approach is used to segment the cardiac structures. Results show that the segmentation overlap between the automatic method and the reference standard has a Dice similarity coefficient of 0.91 on average for the cardiac chambers. The mean surface-to-surface distance error over all the cardiac structures is 1.4 ± 1.7 mm. The automatically obtained cardiac chamber volumes from the CTCS scans correlate excellently with the volumes in corresponding CCTA scans; a Pearson correlation coefficient (R) of 0.95 is obtained. Our fully automatic method enables large-scale assessment of cardiac structures on non-contrast-enhanced CT scans.
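
    The Dice similarity coefficient reported above is straightforward to compute from two binary masks; a minimal NumPy sketch:

        import numpy as np

        def dice(a, b):
            """Dice overlap of two binary masks of equal shape."""
            a, b = a.astype(bool), b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0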

  4. Automatic extraction of disease-specific features from Doppler images

    NASA Astrophysics Data System (ADS)

    Negahdar, Mohammadreza; Moradi, Mehdi; Parajuli, Nripesh; Syeda-Mahmood, Tanveer

    2017-03-01

    Flow Doppler imaging is widely used by clinicians to detect diseases of the valves. In particular, a continuous wave (CW) Doppler mode scan is routinely done during echocardiography and shows Doppler signal traces over multiple heart cycles. Traditionally, echocardiographers have manually traced such velocity envelopes to extract measurements such as decay time and pressure gradient, which are then matched to normal and abnormal values based on clinical guidelines. In this paper, we present a fully automatic approach to deriving these measurements for aortic stenosis retrospectively from echocardiography videos. Comparison of our method with measurements made by echocardiographers shows strong agreement, as well as identification of new cases missed by the echocardiographers.

  5. Two Methods of Automatic Evaluation of Speech Signal Enhancement Recorded in the Open-Air MRI Environment

    NASA Astrophysics Data System (ADS)

    Přibil, Jiří; Přibilová, Anna; Frollo, Ivan

    2017-12-01

    The paper focuses on two methods for evaluating the success of speech signal enhancement recorded in an open-air magnetic resonance imager during phonation for 3D human vocal tract modeling. The first approach enables a comparison based on statistical analysis using ANOVA and hypothesis tests. The second method is based on classification by Gaussian mixture models (GMMs). The experiments confirmed that the proposed ANOVA and GMM classifiers for automatic evaluation of speech quality are functional and produce results fully comparable with the standard evaluation based on the listening test method.
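
    A per-class GMM classifier of the kind described can be sketched with scikit-learn: fit one mixture per class on training feature vectors and assign a test vector to the class with the highest log-likelihood (the component count is an illustrative assumption):

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def fit_gmms(feats_by_class, n_components=8):
            """feats_by_class: list of (n_samples, n_features) arrays, one per class."""
            return [GaussianMixture(n_components).fit(f) for f in feats_by_class]

        def classify(gmms, x):
            """x: a single feature vector (1-D NumPy array)."""
            scores = [g.score_samples(x.reshape(1, -1))[0] for g in gmms]
            return int(np.argmax(scores))   # class with highest log-likelihood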

  6. Fully automatic detection of salient features in 3-d transesophageal images.

    PubMed

    Curiale, Ariel H; Haak, Alexander; Vegas-Sánchez-Ferrero, Gonzalo; Ren, Ben; Aja-Fernández, Santiago; Bosch, Johan G

    2014-12-01

    Most automated segmentation approaches to the mitral valve and left ventricle in 3-D echocardiography require a manual initialization. In this article, we propose a fully automatic scheme to initialize a multicavity segmentation approach in 3-D transesophageal echocardiography by detecting the left ventricle long axis, the mitral valve and the aortic valve location. Our approach uses a probabilistic and structural tissue classification to find structures such as the mitral and aortic valves; the Hough transform for circles to find the center of the left ventricle; and multidimensional dynamic programming to find the best position for the left ventricle long axis. For accuracy and agreement assessment, the proposed method was evaluated in 19 patients with respect to manual landmarks and as initialization of a multicavity segmentation approach for the left ventricle, the right ventricle, the left atrium, the right atrium and the aorta. The segmentation results revealed no statistically significant differences between manual and automated initialization in a paired t-test (p > 0.05). Additionally, small biases between manual and automated initialization were detected in the Bland-Altman analysis (bias, variance) for the left ventricle (-0.04, 0.10); right ventricle (-0.07, 0.18); left atrium (-0.01, 0.03); right atrium (-0.04, 0.13); and aorta (-0.05, 0.14). These results indicate that the proposed approach provides robust and accurate detection to initialize a multicavity segmentation approach without any user interaction. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
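
    The circle-detection step can be illustrated with OpenCV's Hough transform for circles; this is only indicative of the idea (the file name and all parameter values are hypothetical, not the authors' settings):

        import cv2
        import numpy as np

        img = cv2.imread("short_axis_slice.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
        img = cv2.medianBlur(img, 5)                                    # suppress speckle
        circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1.5, minDist=60,
                                   param1=100, param2=30, minRadius=15, maxRadius=80)
        if circles is not None:
            x, y, r = np.round(circles[0, 0]).astype(int)  # strongest circle ~ LV centre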

  7. Feasibility Study on Fully Automatic High Quality Translation: Volume II. Final Technical Report.

    ERIC Educational Resources Information Center

    Lehmann, Winifred P.; Stachowitz, Rolf

    This second volume of a two-volume report on a fully automatic high quality translation (FAHQT) contains relevant papers contributed by specialists on the topic of machine translation. The papers presented here cover such topics as syntactical analysis in transformational grammar and in machine translation, lexical features in translation and…

  8. Simultaneous detection of landmarks and key-frame in cardiac perfusion MRI using a joint spatial-temporal context model

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoguang; Xue, Hui; Jolly, Marie-Pierre; Guetter, Christoph; Kellman, Peter; Hsu, Li-Yueh; Arai, Andrew; Zuehlsdorff, Sven; Littmann, Arne; Georgescu, Bogdan; Guehring, Jens

    2011-03-01

    Cardiac perfusion magnetic resonance imaging (MRI) has proven clinical significance in the diagnosis of heart disease. However, analysis of perfusion data is time-consuming; automatic detection of anatomic landmarks and key-frames from perfusion MR sequences is helpful for anchoring structures and for functional analysis of the heart, leading toward fully automated perfusion analysis. Learning-based object detection methods have demonstrated their ability to handle large variations of the object by exploring a local region, i.e., context. Conventional 2D approaches take into account spatial context only. Temporal signals in perfusion data present a strong cue for anchoring. We propose a joint context model that encodes both spatial and temporal evidence. In addition, our spatial context is constructed based not only on the landmark of interest, but also on correlated landmarks in the neighboring anatomies. A discriminative model is learned through a probabilistic boosting tree. A marginal space learning strategy is applied to efficiently learn and search in a high-dimensional parameter space. A fully automatic system is developed to simultaneously detect anatomic landmarks and key-frames in both the RV and LV from perfusion sequences. The proposed approach was evaluated on a database of 373 cardiac perfusion MRI sequences from 77 patients. Experimental results of a 4-fold cross validation show superior landmark detection accuracy for the proposed joint spatial-temporal approach over the 2D approach based on spatial context only. The key-frame identification results are promising.

  9. A two-stage approach for fully automatic segmentation of venous vascular structures in liver CT images

    NASA Astrophysics Data System (ADS)

    Kaftan, Jens N.; Tek, Hüseyin; Aach, Til

    2009-02-01

    The segmentation of the hepatic vascular tree in computed tomography (CT) images is important for many applications such as surgical planning of oncological resections and living liver donations. In surgical planning, vessel segmentation is often used as a basis to support the surgeon in deciding on the location of the cut to be performed and the extent of the liver to be removed. We present a novel approach to hepatic vessel segmentation that can be divided into two stages. First, we detect and delineate the core vessel components efficiently with high specificity. Second, smaller vessel branches are segmented by a robust vessel tracking technique based on a medialness filter response, which starts from the terminal points of the previously segmented vessels. Specifically, in the first phase major vessels are segmented using the globally optimal graph-cuts algorithm in combination with foreground and background seed detection, while the computationally more demanding tracking approach needs to be applied only locally in areas of smaller vessels within the second stage. The method has been evaluated on contrast-enhanced liver CT scans from clinical routine, showing promising results. In addition to the fully automatic instance of this method, the vessel tracking technique can also be used to easily add missing branches/sub-trees to an already existing segmentation result by adding single seed points.

  10. Automatic phase aberration compensation for digital holographic microscopy based on deep learning background detection.

    PubMed

    Nguyen, Thanh; Bui, Vy; Lam, Van; Raub, Christopher B; Chang, Lin-Ching; Nehmetallah, George

    2017-06-26

    We propose a fully automatic technique to obtain aberration-free quantitative phase imaging in digital holographic microscopy (DHM) based on deep learning. Traditional DHM solves the phase aberration compensation problem by manually detecting the background for quantitative measurement. This is a drawback for real-time implementation and for dynamic processes such as cell migration phenomena. A recent automatic aberration compensation approach using principal component analysis (PCA) in DHM avoids human intervention regardless of the cells' motion. However, it corrects spherical/elliptical aberration only and disregards higher-order aberrations. Traditional image segmentation techniques can be employed to spatially detect cell locations, and ideally, automatic image segmentation techniques make real-time measurement possible. However, existing automatic unsupervised segmentation techniques perform poorly when applied to DHM phase images because of aberrations and speckle noise. In this paper, we propose a novel method that combines a supervised deep learning technique based on a convolutional neural network (CNN) with Zernike polynomial fitting (ZPF). The deep learning CNN performs automatic background region detection, which allows ZPF to compute the self-conjugated phase that compensates for most aberrations.
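
    Once a background mask is available, aberration compensation by polynomial fitting reduces to linear least squares. A hedged NumPy sketch that fits a handful of low-order Zernike-like terms over the image (the paper's exact basis, order and coordinate normalization are not specified here):

        import numpy as np

        def basis(x, y):
            r2 = x**2 + y**2
            return np.stack([np.ones_like(x),          # piston
                             x, y,                     # tilts
                             2 * r2 - 1,               # defocus
                             x**2 - y**2, 2 * x * y],  # astigmatism terms
                            axis=-1)

        def compensate(phase, bg_mask):
            """Subtract the background-fitted aberration surface from a phase map."""
            h, w = phase.shape
            yy, xx = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
            A = basis(xx, yy).reshape(-1, 6)
            m = bg_mask.ravel()                        # CNN-detected background pixels
            coef, *_ = np.linalg.lstsq(A[m], phase.ravel()[m], rcond=None)
            return phase - (A @ coef).reshape(h, w)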

  11. SU-C-201-04: Quantification of Perfusion Heterogeneity Based On Texture Analysis for Fully Automatic Detection of Ischemic Deficits From Myocardial Perfusion Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Y; Huang, H; Su, T

    Purpose: Texture-based quantification of image heterogeneity has been a popular topic in imaging studies in recent years. As previous studies mainly focus on oncological applications, we report our recent efforts to apply such techniques to cardiac perfusion imaging. A fully automated procedure has been developed to perform texture analysis for measuring image heterogeneity. Clinical data were used to evaluate the preliminary performance of such methods. Methods: Myocardial perfusion images of Thallium-201 scans were collected from 293 patients with suspected coronary artery disease. Each subject underwent a Tl-201 scan and a percutaneous coronary intervention (PCI) within three months. The PCI result was used as the gold standard of coronary ischemia of more than 70% stenosis. Each Tl-201 scan was spatially normalized to an image template for fully automatic segmentation of the LV. The segmented voxel intensities were then carried into the texture analysis with our open-source software Chang Gung Image Texture Analysis toolbox (CGITA). To evaluate the clinical performance of the image heterogeneity for detecting coronary stenosis, receiver operating characteristic (ROC) analysis was used to compute the overall accuracy, sensitivity and specificity as well as the area under the curve (AUC). Those indices were compared to those obtained from the commercially available semi-automatic software QPS. Results: With the fully automatic procedure to quantify heterogeneity from Tl-201 scans, we were able to achieve good discrimination with good accuracy (74%), sensitivity (73%), specificity (77%) and an AUC of 0.82. Such performance is similar to that obtained from the semi-automatic QPS software, which gives a sensitivity of 71% and specificity of 77%. Conclusion: Based on fully automatic procedures of data processing, our preliminary data indicate that the image heterogeneity of myocardial perfusion imaging can provide useful information for automatic determination of myocardial ischemia.
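
    The ROC analysis described can be reproduced in outline with scikit-learn; the Youden-index cut-off below is one common way to read off sensitivity and specificity and is an assumption, not necessarily the study's choice:

        import numpy as np
        from sklearn.metrics import roc_auc_score, roc_curve

        def evaluate(heterogeneity_index, stenosis_labels):
            """Labels: 1 if PCI confirmed >70% stenosis, else 0."""
            auc = roc_auc_score(stenosis_labels, heterogeneity_index)
            fpr, tpr, thr = roc_curve(stenosis_labels, heterogeneity_index)
            best = np.argmax(tpr - fpr)                        # Youden's J statistic
            return auc, thr[best], tpr[best], 1.0 - fpr[best]  # AUC, cut-off, sens., spec.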

  12. Automatic segmentation of left ventricle in cardiac cine MRI images based on deep learning

    NASA Astrophysics Data System (ADS)

    Zhou, Tian; Icke, Ilknur; Dogdas, Belma; Parimal, Sarayu; Sampath, Smita; Forbes, Joseph; Bagchi, Ansuman; Chin, Chih-Liang; Chen, Antong

    2017-02-01

    In developing treatment of cardiovascular diseases, short-axis cine MRI has been used as a standard technique for understanding the global structural and functional characteristics of the heart, e.g., ventricle dimensions, stroke volume and ejection fraction. To conduct an accurate assessment, heart structures need to be segmented from the cine MRI images with high precision, which can be a laborious task when performed manually. Herein a fully automatic framework is proposed for the segmentation of the left ventricle from the slices of short-axis cine MRI scans of porcine subjects using a deep learning approach. For training the deep learning models, which generally require a large set of data, a public database of human cine MRI scans is used. Experiments on the 3150 cine slices of 7 porcine subjects have shown that, when comparing the automatic and manual segmentations, the mean slice-wise Dice coefficient is about 0.930, the point-to-curve error is 1.07 mm, and the mean slice-wise Hausdorff distance is around 3.70 mm, which demonstrates the accuracy and robustness of the proposed inter-species translational approach.
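
    The slice-wise Hausdorff distance quoted above can be computed symmetrically from two contour point sets with SciPy:

        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        def hausdorff(a, b):
            """a, b: (N, 2) and (M, 2) arrays of contour points (e.g., in mm)."""
            return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])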

  13. A New Semi-Automatic Approach to Find Suitable Virtual Electrodes in Arrays Using an Interpolation Strategy.

    PubMed

    Salchow, Christina; Valtin, Markus; Seel, Thomas; Schauer, Thomas

    2016-06-13

    Functional Electrical Stimulation via electrode arrays enables the user to form virtual electrodes (VEs) of dynamic shape, size, and position. We developed a feedback-control-assisted manual search strategy which allows the therapist to conveniently and continuously modify VEs to find a good stimulation area. This works for applications in which the desired movement consists of at least two degrees of freedom. The virtual electrode can be moved to arbitrary locations within the array, and each involved element is stimulated with an individual intensity. Meanwhile, the applied global stimulation intensity is controlled automatically to meet a predefined angle for one degree of freedom. This enables the therapist to concentrate on the remaining degree(s) of freedom while changing the VE position. This feedback-control-assisted approach aims to integrate the user's opinion and the patient's sensation. Therefore, our method bridges the gap between manual search and fully automatic identification procedures for array electrodes. Measurements in four healthy volunteers were performed to demonstrate the usefulness of our concept, using a 24-element array to generate wrist and hand extension.

  14. Feasibility Study on Fully Automatic High Quality Translation: Volume I. Final Technical Report.

    ERIC Educational Resources Information Center

    Lehmann, Winifred P.; Stachowitz, Rolf

    The object of this theoretical inquiry is to examine the controversial issue of a fully automatic high quality translation (FAHQT) in the light of past and projected advances in linguistic theory and hardware/software capability. This first volume of a two-volume report discusses the requirements of translation and aspects of human and machine…

  15. A fast and fully automatic registration approach based on point features for multi-source remote-sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Le; Zhang, Dengrong; Holden, Eun-Jung

    2008-07-01

    Automatic registration of multi-source remote-sensing images is a difficult task, as it must deal with varying illuminations and resolutions, different perspectives and local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses those issues. The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting matching points using the scale invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration process based on a piecewise linear transformation technique using feature points detected by the Harris corner detector. The registration process first finds tie point pairs between the input and the reference image by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for fast search. Tie point pairs with large errors are pruned by an error-checking step. The input image is then rectified by using triangulated irregular networks (TINs) to deal with irregular local deformations caused by the fluctuation of the terrain. For each triangular facet of the TIN, an affine transformation is estimated and applied for rectification. Experiments with Quickbird, SPOT5, SPOT4 and TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and accuracy of the proposed technique for multi-source remote-sensing image registration.
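
    The coarse pre-registration stage (SIFT matching plus an affine model) maps naturally onto OpenCV; a compact sketch, with the Lowe ratio threshold and RANSAC settings as illustrative assumptions:

        import cv2
        import numpy as np

        def coarse_align(input_img, ref_img):
            """Coarsely warp input_img onto ref_img (both 8-bit grayscale)."""
            sift = cv2.SIFT_create()
            k1, d1 = sift.detectAndCompute(input_img, None)
            k2, d2 = sift.detectAndCompute(ref_img, None)
            matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
            good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
            src = np.float32([k1[m.queryIdx].pt for m in good])
            dst = np.float32([k2[m.trainIdx].pt for m in good])
            A, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
            return cv2.warpAffine(input_img, A, ref_img.shape[::-1])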

  16. A combined deep-learning and deformable-model approach to fully automatic segmentation of the left ventricle in cardiac MRI.

    PubMed

    Avendi, M R; Kheradvar, Arash; Jafarkhani, Hamid

    2016-05-01

    Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for the calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool for short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from the ground truth data. Convolutional networks are employed to automatically detect the LV chamber in the MRI datasets. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the-art methods. Excellent agreement with the ground truth was achieved. The validation metrics, percentage of good contours, Dice metric, average perpendicular distance and conformity, were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78, respectively, obtained by other methods. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Automatic segmentation method of pelvic floor levator hiatus in ultrasound using a self-normalizing neural network

    PubMed Central

    Bonmati, Ester; Hu, Yipeng; Sindhwani, Nikhil; Dietz, Hans Peter; D'hooge, Jan; Barratt, Dean; Deprest, Jan; Vercauteren, Tom

    2018-01-01

    Segmentation of the levator hiatus in ultrasound allows the extraction of biometrics, which are of importance for pelvic floor disorder assessment. We present a fully automatic method using a convolutional neural network (CNN) to outline the levator hiatus in a two-dimensional image extracted from a three-dimensional ultrasound volume. In particular, our method uses a recently developed scaled exponential linear unit (SELU) as a nonlinear self-normalizing activation function, which for the first time has been applied in medical imaging with CNN. SELU has important advantages such as being parameter-free and mini-batch independent, which may help to overcome memory constraints during training. A dataset with 91 images from 35 patients during Valsalva, contraction, and rest, all labeled by three operators, is used for training and evaluation in a leave-one-patient-out cross validation. Results show a median Dice similarity coefficient of 0.90 with an interquartile range of 0.08, with equivalent performance to the three operators (with a Williams' index of 1.03), and outperforming a U-Net architecture without the need for batch normalization. We conclude that the proposed fully automatic method achieved equivalent accuracy in segmenting the pelvic floor levator hiatus compared to a previous semiautomatic approach. PMID:29340289

  18. Automatic segmentation method of pelvic floor levator hiatus in ultrasound using a self-normalizing neural network.

    PubMed

    Bonmati, Ester; Hu, Yipeng; Sindhwani, Nikhil; Dietz, Hans Peter; D'hooge, Jan; Barratt, Dean; Deprest, Jan; Vercauteren, Tom

    2018-04-01

    Segmentation of the levator hiatus in ultrasound allows the extraction of biometrics, which are of importance for pelvic floor disorder assessment. We present a fully automatic method using a convolutional neural network (CNN) to outline the levator hiatus in a two-dimensional image extracted from a three-dimensional ultrasound volume. In particular, our method uses a recently developed scaled exponential linear unit (SELU) as a nonlinear self-normalizing activation function, which for the first time has been applied in medical imaging with CNN. SELU has important advantages such as being parameter-free and mini-batch independent, which may help to overcome memory constraints during training. A dataset with 91 images from 35 patients during Valsalva, contraction, and rest, all labeled by three operators, is used for training and evaluation in a leave-one-patient-out cross validation. Results show a median Dice similarity coefficient of 0.90 with an interquartile range of 0.08, with equivalent performance to the three operators (with a Williams' index of 1.03), and outperforming a U-Net architecture without the need for batch normalization. We conclude that the proposed fully automatic method achieved equivalent accuracy in segmenting the pelvic floor levator hiatus compared to a previous semiautomatic approach.
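
    The SELU activation at the heart of self-normalizing networks has a closed form with fixed constants (Klambauer et al., 2017); a minimal NumPy sketch:

        import numpy as np

        ALPHA, SCALE = 1.6732632423543772, 1.0507009873554805

        def selu(x):
            """Scaled exponential linear unit; drives activations toward zero mean, unit variance."""
            return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))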

  19. Automated Verification of Specifications with Typestates and Access Permissions

    NASA Technical Reports Server (NTRS)

    Siminiceanu, Radu I.; Catano, Nestor

    2011-01-01

    We propose an approach to formally verify Plural specifications based on access permissions and typestates, by model-checking automatically generated abstract state-machines. Our exhaustive approach captures all the possible behaviors of abstract concurrent programs implementing the specification. We describe the formal methodology employed by our technique and provide an example as proof of concept for the state-machine construction rules. The implementation of a fully automated algorithm to generate and verify models, currently underway, provides model checking support for the Plural tool, which currently supports only program verification via data flow analysis (DFA).

  20. Film grain synthesis and its application to re-graining

    NASA Astrophysics Data System (ADS)

    Schallauer, Peter; Mörzinger, Roland

    2006-01-01

    Digital film restoration and special effects compositing increasingly require automatic procedures for movie re-graining. Missing or inhomogeneous grain decreases perceived quality. For the purpose of grain synthesis, an existing texture synthesis algorithm has been evaluated and optimized. We show that this algorithm can produce synthetic grain that is perceptually similar to a given grain template, has high spatial and temporal variation and can be applied to multi-spectral images. Furthermore, a re-graining application framework is proposed, which synthesises artificial grain based on an input grain template and composites it with the original image content. Due to its modular approach, this framework supports manual as well as automatic re-graining applications. Two example applications are presented, one for re-graining an entire movie and one for fully automatic re-graining of image regions produced by restoration algorithms. The low computational cost of the proposed algorithms allows application in industrial-grade software.

  1. Fully automatic cervical vertebrae segmentation framework for X-ray images.

    PubMed

    Al Arif, S M Masudur Rahman; Knapp, Karen; Slabaugh, Greg

    2018-04-01

    The cervical spine is a highly flexible anatomy and therefore vulnerable to injuries. Unfortunately, a large number of injuries in lateral cervical X-ray images remain undiagnosed due to human errors. Computer-aided injury detection has the potential to reduce the risk of misdiagnosis. Towards building an automatic injury detection system, in this paper, we propose a deep learning-based fully automatic framework for segmentation of cervical vertebrae in X-ray images. The framework first localizes the spinal region in the image using a deep fully convolutional neural network. Then vertebra centers are localized using a novel deep probabilistic spatial regression network. Finally, a novel shape-aware deep segmentation network is used to segment the vertebrae in the image. The framework can take an X-ray image and produce a vertebrae segmentation result without any manual intervention. Each block of the fully automatic framework has been trained on a set of 124 X-ray images and tested on another 172 images, all collected from real-life hospital emergency rooms. A Dice similarity coefficient of 0.84 and a shape error of 1.69 mm have been achieved. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Automatic iterative segmentation of multiple sclerosis lesions using Student's t mixture models and probabilistic anatomical atlases in FLAIR images.

    PubMed

    Freire, Paulo G L; Ferrari, Ricardo J

    2016-06-01

    Multiple sclerosis (MS) is a demyelinating autoimmune disease that attacks the central nervous system (CNS) and affects more than 2 million people worldwide. The segmentation of MS lesions in magnetic resonance imaging (MRI) is a very important task to assess how a patient is responding to treatment and how the disease is progressing. Computational approaches have been proposed over the years to segment MS lesions and reduce the amount of time spent on manual delineation and inter- and intra-rater variability and bias. However, fully-automatic segmentation of MS lesions still remains an open problem. In this work, we propose an iterative approach using Student's t mixture models and probabilistic anatomical atlases to automatically segment MS lesions in Fluid Attenuated Inversion Recovery (FLAIR) images. Our technique resembles a refinement approach by iteratively segmenting brain tissues into smaller classes until MS lesions are grouped as the most hyperintense one. To validate our technique we used 21 clinical images from the 2015 Longitudinal Multiple Sclerosis Lesion Segmentation Challenge dataset. Evaluation using Dice Similarity Coefficient (DSC), True Positive Ratio (TPR), False Positive Ratio (FPR), Volume Difference (VD) and Pearson's r coefficient shows that our technique has a good spatial and volumetric agreement with raters' manual delineations. Also, a comparison between our proposal and the state-of-the-art shows that our technique is comparable and, in some cases, better than some approaches, thus being a viable alternative for automatic MS lesion segmentation in MRI. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Automatic Implementation of TTEthernet-Based Time-Triggered Avionics Applications

    NASA Astrophysics Data System (ADS)

    Gorcitz, Raul Adrian; Carle, Thomas; Lesens, David; Monchaux, David; Potop-Butucaruy, Dumitru; Sorel, Yves

    2015-09-01

    The design of safety-critical embedded systems such as those used in avionics still involves largely manual phases. But in avionics the definition of standard interfaces embodied in standards such as ARINC 653 or TTEthernet should allow the definition of fully automatic code generation flows that reduce the costs while improving the quality of the generated code, much like compilers have done when replacing manual assembly coding. In this paper, we briefly present such a fully automatic implementation tool, called Lopht, for ARINC653-based time-triggered systems, and then explain how it is currently extended to include support for TTEthernet networks.

  4. TU-H-CAMPUS-JeP1-02: Fully Automatic Verification of Automatically Contoured Normal Tissues in the Head and Neck

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCarroll, R; UT Health Science Center, Graduate School of Biomedical Sciences, Houston, TX; Beadle, B

    Purpose: To investigate and validate the use of an independent deformable-based contouring algorithm for automatic verification of auto-contoured structures in the head and neck, towards fully automated treatment planning. Methods: Two independent automatic contouring algorithms [(1) Eclipse's Smart Segmentation followed by pixel-wise majority voting, (2) an in-house multi-atlas based method] were used to create contours of 6 normal structures of 10 head-and-neck patients. After rating by a radiation oncologist, the higher-performing algorithm was selected as the primary contouring method, the other used for automatic verification of the primary. To determine the ability of the verification algorithm to detect incorrect contours, contours from the primary method were shifted from 0.5 to 2 cm. Using a logit model, the structure-specific minimum detectable shift was identified. The models were then applied to a set of twenty different patients and the sensitivity and specificity of the models verified. Results: Per physician rating, the multi-atlas method (4.8/5 point scale, with 3 rated as generally acceptable for planning purposes) was selected as primary and the Eclipse-based method (3.5/5) for verification. Mean distance to agreement and true positive rate were selected as covariates in an optimized logit model. These models, when applied to a group of twenty different patients, indicated that shifts could be detected at 0.5 cm (brain), 0.75 cm (mandible, cord), 1 cm (brainstem, cochlea), or 1.25 cm (parotid), with sensitivity and specificity greater than 0.95. If the sensitivity and specificity constraints are reduced to 0.9, the detectable shifts of mandible and brainstem were reduced by 0.25 cm. These shifts represent additional safety margins which might be considered if auto-contours are used for automatic treatment planning without physician review. Conclusion: Automatically contoured structures can be automatically verified. This fully automated process could be used to flag auto-contours for special review or used with safety margins in a fully automatic treatment planning system.
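
    A logit model over the two covariates named above (mean distance to agreement, true positive rate) can be sketched with scikit-learn; the synthetic numbers here are purely illustrative stand-ins for the study's measurements:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        # columns: mean distance to agreement (mm), true positive rate
        X_ok = np.c_[rng.normal(1.0, 0.3, 100), rng.normal(0.92, 0.03, 100)]
        X_bad = np.c_[rng.normal(6.0, 2.0, 100), rng.normal(0.70, 0.10, 100)]
        X = np.vstack([X_ok, X_bad])
        y = np.r_[np.zeros(100), np.ones(100)]           # 1 = shifted/incorrect contour
        clf = LogisticRegression().fit(X, y)
        p_bad = clf.predict_proba([[3.0, 0.85]])[0, 1]   # probability a contour is wrong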

  5. Differential GPS/inertial navigation approach/landing flight test results

    NASA Technical Reports Server (NTRS)

    Snyder, Scott; Schipper, Brian; Vallot, Larry; Parker, Nigel; Spitzer, Cary

    1992-01-01

    In November of 1990 a joint Honeywell/NASA-Langley differential GPS/inertial flight test was conducted at Wallops Island, Virginia. The test objective was to acquire a system performance database and demonstrate automatic landing using an integrated differential GPS/INS (Global Positioning System/inertial navigation system) with barometric and radar altimeters. The flight test effort exceeded program objectives with over 120 landings, 36 of which were fully automatic differential GPS/inertial landings. Flight test results obtained from post-flight data analysis are discussed. These results include characteristics of differential GPS/inertial error, using the Wallops Island Laser Tracker as a reference. Data on the magnitude of the differential corrections and vertical channel performance with and without radar altimeter augmentation are provided.

  6. Fully distributed absolute blood flow velocity measurement for middle cerebral arteries using Doppler optical coherence tomography

    PubMed Central

    Qi, Li; Zhu, Jiang; Hancock, Aneeka M.; Dai, Cuixia; Zhang, Xuping; Frostig, Ron D.; Chen, Zhongping

    2016-01-01

    Doppler optical coherence tomography (DOCT) is considered one of the most promising functional imaging modalities for neurobiology research and has demonstrated the ability to quantify cerebral blood flow velocity with high accuracy. However, the measurement of total absolute blood flow velocity (BFV) of major cerebral arteries is still a difficult problem since it is related to vessel geometry. In this paper, we present a volumetric vessel reconstruction approach that is capable of measuring the absolute BFV distributed along the entire middle cerebral artery (MCA) within a large field-of-view. The Doppler angle at each point of the MCA, representing the vessel geometry, is derived analytically by localizing the artery from pure DOCT images through vessel segmentation and skeletonization. Our approach can achieve automatic quantification of the fully distributed absolute BFV across different vessel branches. Experiments on rodents using swept-source optical coherence tomography showed that our approach was able to reveal the consequences of permanent MCA occlusion with absolute BFV measurement. PMID:26977365

  7. Fully distributed absolute blood flow velocity measurement for middle cerebral arteries using Doppler optical coherence tomography.

    PubMed

    Qi, Li; Zhu, Jiang; Hancock, Aneeka M; Dai, Cuixia; Zhang, Xuping; Frostig, Ron D; Chen, Zhongping

    2016-02-01

    Doppler optical coherence tomography (DOCT) is considered one of the most promising functional imaging modalities for neurobiology research and has demonstrated the ability to quantify cerebral blood flow velocity with high accuracy. However, the measurement of total absolute blood flow velocity (BFV) of major cerebral arteries is still a difficult problem since it is related to vessel geometry. In this paper, we present a volumetric vessel reconstruction approach that is capable of measuring the absolute BFV distributed along the entire middle cerebral artery (MCA) within a large field-of-view. The Doppler angle at each point of the MCA, representing the vessel geometry, is derived analytically by localizing the artery from pure DOCT images through vessel segmentation and skeletonization. Our approach can achieve automatic quantification of the fully distributed absolute BFV across different vessel branches. Experiments on rodents using swept-source optical coherence tomography showed that our approach was able to reveal the consequences of permanent MCA occlusion with absolute BFV measurement.
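
    The angle correction underlying absolute BFV is simple once the vessel tangent is known from the skeleton: divide the measured axial (Doppler) velocity by the cosine of the Doppler angle. A minimal sketch (the beam direction and guard value are assumptions):

        import numpy as np

        def absolute_bfv(v_axial, tangent, beam=np.array([0.0, 0.0, 1.0])):
            """v_axial: Doppler (axial) velocity; tangent: unit vessel direction."""
            cos_theta = abs(np.dot(tangent, beam))   # Doppler angle from the skeleton
            return v_axial / max(cos_theta, 1e-6)    # avoid blow-up near 90 degrees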

  8. Automatic analysis of dividing cells in live cell movies to detect mitotic delays and correlate phenotypes in time.

    PubMed

    Harder, Nathalie; Mora-Bermúdez, Felipe; Godinez, William J; Wünsche, Annelie; Eils, Roland; Ellenberg, Jan; Rohr, Karl

    2009-11-01

    Live-cell imaging allows detailed dynamic cellular phenotyping for cell biology and, in combination with small molecule or drug libraries, for high-content screening. Fully automated analysis of live cell movies has been hampered by the lack of computational approaches that allow tracking and recognition of individual cell fates over time in a precise manner. Here, we present a fully automated approach to analyze time-lapse movies of dividing cells. Our method dynamically categorizes cells into seven phases of the cell cycle and five aberrant morphological phenotypes over time. It reliably tracks cells and their progeny and can thus measure the length of mitotic phases and detect cause and effect if mitosis goes awry. We applied our computational scheme to annotate mitotic phenotypes induced by RNAi gene knockdown of CKAP5 (also known as ch-TOG) or by treatment with the drug nocodazole. Our approach can be readily applied to comparable assays aiming at uncovering the dynamic cause of cell division phenotypes.

  9. Automatic segmentation of cerebral white matter hyperintensities using only 3D FLAIR images.

    PubMed

    Simões, Rita; Mönninghoff, Christoph; Dlugaj, Martha; Weimar, Christian; Wanke, Isabel; van Cappellen van Walsum, Anne-Marie; Slump, Cornelis

    2013-09-01

    Magnetic Resonance (MR) white matter hyperintensities have been shown to predict an increased risk of developing cognitive decline. However, their actual role in the conversion to dementia is still not fully understood. Automatic segmentation methods can help in the screening and monitoring of Mild Cognitive Impairment patients who take part in large population-based studies. Most existing segmentation approaches use multimodal MR images. However, multiple acquisitions represent a limitation in terms of both patient comfort and computational complexity of the algorithms. In this work, we propose an automatic lesion segmentation method that uses only three-dimensional fluid-attenuation inversion recovery (FLAIR) images. We use a modified context-sensitive Gaussian mixture model to determine voxel class probabilities, followed by correction of FLAIR artifacts. We evaluate the method against the manual segmentation performed by an experienced neuroradiologist and compare the results with other unimodal segmentation approaches. Finally, we apply our method to the segmentation of multiple sclerosis lesions by using a publicly available benchmark dataset. Results show a similar performance to other state-of-the-art multimodal methods, as well as to the human rater. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Design and flight test of a differential GPS/inertial navigation system for approach/landing guidance

    NASA Technical Reports Server (NTRS)

    Vallot, Lawrence; Snyder, Scott; Schipper, Brian; Parker, Nigel; Spitzer, Cary

    1991-01-01

    NASA-Langley has conducted a flight test program evaluating a differential GPS/inertial navigation system's (DGPS/INS) utility as an approach/landing aid. The DGPS/INS airborne and ground components are based on off-the-shelf transport aircraft avionics, namely a global positioning/inertial reference unit (GPIRU) and two GPS sensor units (GPSSUs). Systematic GPS errors are measured by the ground GPSSU and transmitted to the aircraft GPIRU, allowing the errors to be eliminated or greatly reduced in the airborne equipment. Over 120 landings were flown; 36 of these were fully automatic DGPS/INS landings.

  11. Image segmentation evaluation for very-large datasets

    NASA Astrophysics Data System (ADS)

    Reeves, Anthony P.; Liu, Shuang; Xie, Yiting

    2016-03-01

    With the advent of modern machine learning methods and fully automated image analysis there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. Current approaches of visual inspection and manual markings do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes are achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automatically facilitate the measurement of a number of very important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.

  12. Sentry: An Automated Close Approach Monitoring System for Near-Earth Objects

    NASA Astrophysics Data System (ADS)

    Chamberlin, A. B.; Chesley, S. R.; Chodas, P. W.; Giorgini, J. D.; Keesey, M. S.; Wimberly, R. N.; Yeomans, D. K.

    2001-11-01

    In response to international concern about potential asteroid impacts on Earth, NASA's Near-Earth Object (NEO) Program Office has implemented a new system called "Sentry" to automatically update the orbits of all NEOs on a daily basis and compute Earth close approaches up to 100 years into the future. Results are published on our web site (http://neo.jpl.nasa.gov/) and updated orbits and ephemerides made available via the JPL Horizons ephemeris service (http://ssd.jpl.nasa.gov/horizons.html). Sentry collects new and revised astrometric observations from the Minor Planet Center (MPC) via their electronic circulars (MPECs) in near real time, as well as radar and optical astrometry sent directly from observers. NEO discoveries and identifications are detected in MPECs and processed appropriately. In addition to these daily updates, Sentry synchronizes with each monthly batch of MPC astrometry and automatically updates all NEO observation files. Daily and monthly processing of NEO astrometry is managed using a queuing system which allows for manual intervention on selected NEOs without interfering with the automatic system. At the heart of Sentry is a fully automatic orbit determination program which handles outlier rejection and ensures convergence in the new solution. Updated orbital elements and their covariances are published via Horizons and our NEO web site, typically within 24 hours. A new version of Horizons, in development, will allow computation of ephemeris uncertainties using covariance data. The positions of NEOs with updated orbits are numerically integrated up to 100 years into the future and each close approach to any perturbing body in our dynamic model (all planets, Moon, Ceres, Pallas, Vesta) is recorded. Significant approaches are flagged for extended analysis, including Monte Carlo studies. Results, such as minimum encounter distances and future Earth impact probabilities, are published on our NEO web site.

  13. Automatic measurement; Mesures automatiques (in French)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ringeard, C.

    1974-11-28

    By its ability to link up operations sequentially and to store the collected data, the computer can introduce a statistical approach into the evaluation of a result. To benefit fully from the advantages of automation, a special effort was made to reduce the programming time to a minimum and to simplify links between the existing system and instruments from different sources. The practical solution adopted by the test laboratory of the C.E.A. Centralized Administration Group (GEC) is given.

  14. Volumetric vessel reconstruction method for absolute blood flow velocity measurement in Doppler OCT images

    NASA Astrophysics Data System (ADS)

    Qi, Li; Zhu, Jiang; Hancock, Aneeka M.; Dai, Cuixia; Zhang, Xuping; Frostig, Ron D.; Chen, Zhongping

    2017-02-01

    Doppler optical coherence tomography (DOCT) is considered one of the most promising functional imaging modalities for neurobiology research and has demonstrated the ability to quantify cerebral blood flow velocity with high accuracy. However, the measurement of total absolute blood flow velocity (BFV) of major cerebral arteries is still a difficult problem, since it relates not only to the properties of the laser and the scattering particles, but also to the geometry between the laser beam direction and the flow direction. In this paper, focusing on the analysis of cerebral hemodynamics, we present a method to quantify the total absolute blood flow velocity in the middle cerebral artery (MCA) based on volumetric vessel reconstruction from pure DOCT images. A modified region growing segmentation method is first used to localize the MCA on successive DOCT B-scan images. Vessel skeletonization, followed by an averaging gradient angle calculation method, is then carried out to obtain Doppler angles along the entire MCA. Once the Doppler angles are determined, the absolute blood flow velocity at each position on the MCA is easily found. Given a seed point position on the MCA, our approach can achieve automatic quantification of the fully distributed absolute BFV. In experiments conducted using a swept-source optical coherence tomography system, our approach achieved automatic quantification of the fully distributed absolute BFV across different vessel branches in the rodent brain.

  15. Generalized Self-Organizing Maps for Automatic Determination of the Number of Clusters and Their Multiprototypes in Cluster Analysis.

    PubMed

    Gorzalczany, Marian B; Rudzinski, Filip

    2017-06-07

    This paper presents a generalization of self-organizing maps with 1-D neighborhoods (neuron chains) that can be effectively applied to complex cluster analysis problems. The essence of the generalization consists in introducing mechanisms that allow the neuron chain--during learning--to disconnect into subchains, to reconnect some of the subchains again, and to dynamically regulate the overall number of neurons in the system. These features enable the network--working in a fully unsupervised way (i.e., using unlabeled data without a predefined number of clusters)--to automatically generate collections of multiprototypes that are able to represent a broad range of clusters in data sets. First, the operation of the proposed approach is illustrated on some synthetic data sets. Then, this technique is tested using several real-life, complex, and multidimensional benchmark data sets available from the University of California at Irvine (UCI) Machine Learning repository and the Knowledge Extraction based on Evolutionary Learning data set repository. A sensitivity analysis of our approach to changes in control parameters and a comparative analysis with an alternative approach are also performed.

  16. Automatic segmentation of 4D cardiac MR images for extraction of ventricular chambers using a spatio-temporal approach

    NASA Astrophysics Data System (ADS)

    Atehortúa, Angélica; Zuluaga, Maria A.; Ourselin, Sébastien; Giraldo, Diana; Romero, Eduardo

    2016-03-01

    An accurate ventricular function quantification is important to support the evaluation, diagnosis and prognosis of several cardiac pathologies. However, expert heart delineation, specifically for the right ventricle, is a time-consuming task with high inter- and intra-observer variability. A fully automatic 3D+time heart segmentation framework is herein proposed for short-axis cardiac MRI sequences. This approach estimates the heart using exclusively information from the sequence itself, without tuning any parameters. The proposed framework uses a coarse-to-fine approach, which starts by localizing the heart via spatio-temporal analysis, followed by a segmentation of the basal heart that is then propagated to the apex by using a non-rigid registration strategy. The obtained volume is then refined by estimating the ventricular muscle by locally searching a prior endocardium-pericardium intensity pattern. The proposed framework was applied to 48 patient datasets supplied by the organizers of the MICCAI 2012 Right Ventricle segmentation challenge. Results show the robustness, efficiency and competitiveness of the proposed method both in terms of accuracy and computational load.

  17. Automated Analysis of siRNA Screens of Virus Infected Cells Based on Immunofluorescence Microscopy

    NASA Astrophysics Data System (ADS)

    Matula, Petr; Kumar, Anil; Wörz, Ilka; Harder, Nathalie; Erfle, Holger; Bartenschlager, Ralf; Eils, Roland; Rohr, Karl

    We present an image analysis approach as part of a high-throughput microscopy screening system based on cell arrays for the identification of genes involved in Hepatitis C and Dengue virus replication. Our approach comprises: cell nucleus segmentation, quantification of virus replication level in cells, localization of regions with transfected cells, cell classification by infection status, and quality assessment of an experiment. The approach is fully automatic and has been successfully applied to a large number of cell array images from screening experiments. The experimental results show a good agreement with the expected behavior of positive as well as negative controls and encourage the application to screens from further high-throughput experiments.

  18. Detection and measurement of fetal anatomies from ultrasound images using a constrained probabilistic boosting tree.

    PubMed

    Carneiro, Gustavo; Georgescu, Bogdan; Good, Sara; Comaniciu, Dorin

    2008-09-01

    We propose a novel method for the automatic detection and measurement of fetal anatomical structures in ultrasound images. This problem offers a myriad of challenges, including: difficulty of modeling the appearance variations of the visual object of interest, robustness to speckle noise and signal dropout, and the large search space of the detection procedure. Previous solutions typically rely on the explicit encoding of prior knowledge and the formulation of the problem as a perceptual grouping task solved through clustering or variational approaches. These methods are constrained by the validity of the underlying assumptions and usually are not sufficient to capture the complex appearance of fetal anatomies. We propose a novel system for fast automatic detection and measurement of fetal anatomies that directly exploits a large database of expert-annotated fetal anatomical structures in ultrasound images. Our method learns automatically to distinguish between the appearance of the object of interest and the background by training a constrained probabilistic boosting tree classifier. This system is able to produce the automatic segmentation of several fetal anatomies using the same basic detection algorithm. We show results on fully automatic measurement of biparietal diameter (BPD), head circumference (HC), abdominal circumference (AC), femur length (FL), humerus length (HL), and crown rump length (CRL). Notice that our approach is the first in the literature to deal with the HL and CRL measurements. Extensive experiments (with clinical validation) show that our system is, on average, close to the accuracy of experts in terms of segmentation and obstetric measurements. Finally, this system runs in under half a second on a standard dual-core PC.

  19. Automatic Detection and Vulnerability Analysis of Areas Endangered by Heavy Rain

    NASA Astrophysics Data System (ADS)

    Krauß, Thomas; Fischer, Peter

    2016-08-01

    In this paper we present a new method for the fully automatic detection and derivation of areas endangered by heavy rainfall, based only on digital elevation models. News coverage shows that the majority of occurring natural hazards are flood events, and many flood prediction systems have consequently been developed. However, most existing systems for deriving areas endangered by flooding events are based only on horizontal and vertical distances to existing rivers and lakes. Typically, such systems do not take into account dangers arising directly from heavy rain events. In a study we conducted together with a German insurance company, a new approach for the detection of areas endangered by heavy rain was shown to give a high correlation between the derived endangered areas and the losses claimed with the insurance company. Here we describe three methods for the classification of digital terrain models, analyze their usability for automatic detection and vulnerability analysis of areas endangered by heavy rainfall, and evaluate the results using the available insurance data.
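
    One generic way to flag such accumulation zones from a digital elevation model alone is depression filling via morphological reconstruction: cells whose filled elevation exceeds the original elevation can pond rain water. This is a standard technique offered as a hedged sketch, not necessarily one of the three classification methods used in the paper; the threshold value is an arbitrary placeholder.

    ```python
    import numpy as np
    from skimage.morphology import reconstruction

    def ponding_depth(dem):
        # Seed: raise the interior to the maximum elevation and keep the
        # border fixed, so water can only drain off the edge of the tile.
        seed = dem.copy()
        seed[1:-1, 1:-1] = dem.max()
        filled = reconstruction(seed, dem, method='erosion')
        return filled - dem  # > 0 where rain water would accumulate

    dem = np.random.rand(100, 100)            # stand-in terrain tile
    endangered = ponding_depth(dem) > 0.05    # threshold in elevation units
    ```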

  20. Semi-automatic forensic approach using mandibular midline lingual structures as fingerprint: a pilot study.

    PubMed

    Shaheen, E; Mowafy, B; Politis, C; Jacobs, R

    2017-12-01

    Previous research proposed the use of the mandibular midline neurovascular canal structures as a forensic fingerprint. In that observer study, an average correct identification rate of 95% was reached, which triggered the present study. Our aim was to present a semi-automatic computer recognition approach to replace the observers and to validate the accuracy of this newly proposed method. Imaging data from computed tomography (CT) and cone beam computed tomography (CBCT) of mandibles scanned at two different moments were collected to simulate an ante-mortem (AM) and post-mortem (PM) situation, where the first scan represented AM and the second scan was used to simulate PM. Ten cases with 20 scans were used to build a classifier which relies on voxel-based matching and yields a classification into one of two groups: "Unmatched" and "Matched". This protocol was then tested using five other scans from the database. Unpaired t-testing was applied and the accuracy of the computerized approach was determined. A significant difference was found between the "Unmatched" and "Matched" classes, with means of 0.41 and 0.86, respectively. Furthermore, the testing phase showed an accuracy of 100%. The validation of this method pushes the protocol further toward a fully automatic identification procedure for victim identification based on the mandibular midline canal structures alone, in cases with available AM and PM CBCT/CT data.
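
    The abstract does not specify the exact matching score, so the following is a hedged sketch under the assumption that the voxel-based matching reduces to a normalized cross-correlation between pre-aligned AM and PM volumes, with a decision threshold placed between the two reported class means (0.41 and 0.86).

    ```python
    import numpy as np

    def matching_score(vol_am, vol_pm):
        # Normalized cross-correlation of two pre-aligned volumes, in [-1, 1].
        a = (vol_am - vol_am.mean()) / vol_am.std()
        b = (vol_pm - vol_pm.mean()) / vol_pm.std()
        return float(np.mean(a * b))

    def classify(score, threshold=0.6):
        # Threshold chosen between the reported class means (illustrative).
        return "Matched" if score > threshold else "Unmatched"
    ```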

  1. A Patch-Based Approach for the Segmentation of Pathologies: Application to Glioma Labelling.

    PubMed

    Cordier, Nicolas; Delingette, Herve; Ayache, Nicholas

    2016-04-01

    In this paper, we describe a novel and generic approach to address fully-automatic segmentation of brain tumors by using multi-atlas patch-based voting techniques. In addition to avoiding the local search window assumption, the conventional patch-based framework is enhanced through several simple procedures: an improvement of the training dataset in terms of both label purity and intensity statistics, augmented features to implicitly guide the nearest-neighbor search, multi-scale patches, invariance to cube isometries, and stratification of the votes with respect to cases and labels. A probabilistic model automatically delineates regions of interest enclosing high-probability tumor volumes, which allows the algorithm to achieve highly competitive running time despite minimal processing power and resources. This method was evaluated on Multimodal Brain Tumor Image Segmentation challenge datasets. State-of-the-art results are achieved, with a limited learning stage that restricts the risk of overfitting. Moreover, segmentation smoothness is achieved without any post-processing.
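
    A hedged sketch of the core voting idea: each test patch retrieves its k nearest training patches and the associated integer labels vote. The flat feature layout, k, and the two-label setup are illustrative assumptions; the paper's augmented features, multi-scale patches, and vote stratification are omitted here.

    ```python
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def patch_vote(train_patches, train_labels, test_patches, k=5, n_labels=2):
        # train_labels: integer label per training patch (e.g. 0=healthy, 1=tumor).
        nn = NearestNeighbors(n_neighbors=k).fit(train_patches)
        _, idx = nn.kneighbors(test_patches)        # (n_test, k) neighbor indices
        votes = np.zeros((len(test_patches), n_labels))
        rows = np.arange(len(test_patches))
        for j in range(k):
            np.add.at(votes, (rows, train_labels[idx[:, j]]), 1)
        return votes.argmax(axis=1)                 # majority label per test patch
    ```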

  2. Automatic systems and the low-level wind hazard

    NASA Technical Reports Server (NTRS)

    Schaeffer, Dwight R.

    1987-01-01

    Automatic flight control systems provide means for significantly enhancing survivability in severe wind hazards. The technology required to produce the necessary control algorithms is available and has been made technically feasible by the advent of digital flight control systems and accurate, low-noise sensors, especially strap-down inertial sensors. The application of this technology has not generally been pursued except for automatic landing systems, and even there the potential has not been fully exploited. Fully exploiting the potential of automatic systems for enhancing safety in wind hazards requires providing incentives, creating demand, inspiring competition, providing education, and eliminating prejudicial disincentives to overcome the economic penalties associated with the extensive and risky development and certification of these systems. If these changes come about at all, it will likely be through changes in the regulations provided by the certifying agencies.

  3. Automatic brain tumor segmentation with a fast Mumford-Shah algorithm

    NASA Astrophysics Data System (ADS)

    Müller, Sabine; Weickert, Joachim; Graf, Norbert

    2016-03-01

    We propose a fully-automatic method for brain tumor segmentation that does not require any training phase. Our approach is based on a sequence of segmentations using the Mumford-Shah cartoon model with varying parameters. In order to come up with a very fast implementation, we extend the recent primal-dual algorithm of Strekalovskiy et al. (2014) from the 2D to the medically relevant 3D setting. Moreover, we suggest a new confidence refinement and show that it can increase the precision of our segmentations substantially. Our method is evaluated on 188 data sets with high-grade gliomas and 25 with low-grade gliomas from the BraTS14 database. Within a computation time of only three minutes, we achieve Dice scores that are comparable to state-of-the-art methods.

  4. Markov random field based automatic image alignment for electron tomography.

    PubMed

    Amat, Fernando; Moussavi, Farshid; Comolli, Luis R; Elidan, Gal; Downing, Kenneth H; Horowitz, Mark

    2008-03-01

    We present a method for automatic full-precision alignment of the images in a tomographic tilt series. Full-precision automatic alignment of cryo electron microscopy images has remained a difficult challenge to date, due to the limited electron dose and low image contrast. These facts lead to poor signal to noise ratio (SNR) in the images, which causes automatic feature trackers to generate errors, even with high contrast gold particles as fiducial features. To enable fully automatic alignment for full-precision reconstructions, we frame the problem probabilistically as finding the most likely particle tracks given a set of noisy images, using contextual information to make the solution more robust to the noise in each image. To solve this maximum likelihood problem, we use Markov Random Fields (MRF) to establish the correspondence of features in alignment and robust optimization for projection model estimation. The resulting algorithm, called Robust Alignment and Projection Estimation for Tomographic Reconstruction, or RAPTOR, has not needed any manual intervention for the difficult datasets we have tried, and has provided sub-pixel alignment that is as good as the manual approach by an expert user. We are able to automatically map complete and partial marker trajectories and thus obtain highly accurate image alignment. Our method has been applied to challenging cryo electron tomographic datasets with low SNR from intact bacterial cells, as well as several plastic section and X-ray datasets.

  5. Fully Automatic Segmentation of Fluorescein Leakage in Subjects With Diabetic Macular Edema

    PubMed Central

    Rabbani, Hossein; Allingham, Michael J.; Mettu, Priyatham S.; Cousins, Scott W.; Farsiu, Sina

    2015-01-01

    Purpose. To create and validate software to automatically segment leakage area in real-world clinical fluorescein angiography (FA) images of subjects with diabetic macular edema (DME). Methods. Fluorescein angiography images obtained from 24 eyes of 24 subjects with DME were retrospectively analyzed. Both video and still-frame images were obtained using a Heidelberg Spectralis 6-mode HRA/OCT unit. We aligned early and late FA frames in the video by a two-step nonrigid registration method. To remove background artifacts, we subtracted early and late FA frames. Finally, after postprocessing steps, including detection and inpainting of the vessels, a robust active contour method was utilized to obtain leakage area in a 1500-μm-radius circular region centered at the fovea. Images were captured at different fields of view (FOVs) and were often contaminated with outliers, as is the case in real-world clinical imaging. Our algorithm was applied to these images with no manual input. Separately, all images were manually segmented by two retina specialists. The sensitivity, specificity, and accuracy of manual interobserver, manual intraobserver, and automatic methods were calculated. Results. The mean accuracy was 0.86 ± 0.08 for automatic versus manual, 0.83 ± 0.16 for manual interobserver, and 0.90 ± 0.08 for manual intraobserver segmentation methods. Conclusions. Our fully automated algorithm can reproducibly and accurately quantify the area of leakage of clinical-grade FA video and is congruent with expert manual segmentation. The performance was reliable for different DME subtypes. This approach has the potential to reduce time and labor costs and may yield objective and reproducible quantitative measurements of DME imaging biomarkers. PMID:25634978

  6. Fully automatic segmentation of fluorescein leakage in subjects with diabetic macular edema.

    PubMed

    Rabbani, Hossein; Allingham, Michael J; Mettu, Priyatham S; Cousins, Scott W; Farsiu, Sina

    2015-01-29

    To create and validate software to automatically segment leakage area in real-world clinical fluorescein angiography (FA) images of subjects with diabetic macular edema (DME). Fluorescein angiography images obtained from 24 eyes of 24 subjects with DME were retrospectively analyzed. Both video and still-frame images were obtained using a Heidelberg Spectralis 6-mode HRA/OCT unit. We aligned early and late FA frames in the video by a two-step nonrigid registration method. To remove background artifacts, we subtracted early and late FA frames. Finally, after postprocessing steps, including detection and inpainting of the vessels, a robust active contour method was utilized to obtain leakage area in a 1500-μm-radius circular region centered at the fovea. Images were captured at different fields of view (FOVs) and were often contaminated with outliers, as is the case in real-world clinical imaging. Our algorithm was applied to these images with no manual input. Separately, all images were manually segmented by two retina specialists. The sensitivity, specificity, and accuracy of manual interobserver, manual intraobserver, and automatic methods were calculated. The mean accuracy was 0.86 ± 0.08 for automatic versus manual, 0.83 ± 0.16 for manual interobserver, and 0.90 ± 0.08 for manual intraobserver segmentation methods. Our fully automated algorithm can reproducibly and accurately quantify the area of leakage of clinical-grade FA video and is congruent with expert manual segmentation. The performance was reliable for different DME subtypes. This approach has the potential to reduce time and labor costs and may yield objective and reproducible quantitative measurements of DME imaging biomarkers. Copyright 2015 The Association for Research in Vision and Ophthalmology, Inc.
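
    A hedged sketch of the frame-subtraction and active-contour stages described in both records above, assuming the early and late frames are already registered; skimage's morphological Chan-Vese is used as a generic stand-in for the robust active contour in the paper.

    ```python
    import numpy as np
    from skimage.segmentation import morphological_chan_vese

    def leakage_mask(early_frame, late_frame, n_iter=100):
        # Subtract the registered early frame from the late frame to
        # suppress background structures, leaving accumulated leakage.
        diff = late_frame.astype(float) - early_frame.astype(float)
        diff -= diff.min()
        if diff.max() > 0:
            diff /= diff.max()                      # normalize to [0, 1]
        # Evolve an active contour on the difference image.
        return morphological_chan_vese(diff, n_iter,
                                       init_level_set='checkerboard')
    ```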

  7. [The mediating role of anger in the relationship between automatic thoughts and physical aggression in adolescents].

    PubMed

    Yavuzer, Yasemin; Karataş, Zeynep

    2013-01-01

    This study aimed to examine the mediating role of anger in the relationship between automatic thoughts and physical aggression in adolescents. The study included 224 adolescents in the 9th grade of 3 different high schools in central Burdur during the 2011-2012 academic year. Participants completed the Aggression Questionnaire and Automatic Thoughts Scale in their classrooms during counseling sessions. Data were analyzed using simple and multiple linear regression analysis. There were positive correlations between the adolescents' automatic thoughts, physical aggression, and anger. According to the regression analysis, automatic thoughts effectively predicted the level of physical aggression (b = 0.233, P < 0.001) and anger (b = 0.325, P < 0.001). Analysis of the mediating role of anger showed that anger fully mediated the relationship between automatic thoughts and physical aggression (Sobel z = 5.646, P < 0.001). Providing adolescents with anger management skills training is therefore very important for the prevention of physical aggression. Such training programs should include components related to developing an awareness of dysfunctional and anger-triggering automatic thoughts, and how to change them. As the study group included adolescents from Burdur, the findings can only be generalized to groups with similar characteristics.
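
    A worked sketch of the regression-based mediation test reported above (paths a and b plus the Sobel z); the data below are synthetic placeholders generated to mimic the reported effect directions, not the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 224
    thoughts = rng.normal(size=n)                                     # X
    anger = 0.33 * thoughts + rng.normal(size=n)                      # M, path a
    aggression = 0.40 * anger + 0.05 * thoughts + rng.normal(size=n)  # Y

    def coef_se(X, y, j):
        """OLS coefficient j of design matrix X and its standard error."""
        beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        sigma2 = res[0] / (len(y) - X.shape[1])
        cov = sigma2 * np.linalg.inv(X.T @ X)
        return beta[j], np.sqrt(cov[j, j])

    # Path a: M ~ X; path b: Y ~ M + X (controlling for X).
    a, se_a = coef_se(np.column_stack([np.ones(n), thoughts]), anger, 1)
    b, se_b = coef_se(np.column_stack([np.ones(n), anger, thoughts]), aggression, 1)
    sobel_z = (a * b) / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    print(f"Sobel z = {sobel_z:.2f}")  # |z| > 1.96 indicates significant mediation
    ```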

  8. Drag and drop simulation: from pictures to full three-dimensional simulations

    NASA Astrophysics Data System (ADS)

    Bergmann, Michel; Iollo, Angelo

    2014-11-01

    We present a suite of methods to achieve "drag and drop" simulation, i.e., to fully automate the process of performing three-dimensional flow simulations around bodies defined by actual images of moving objects. The overall approach requires skeleton-graph generation to obtain a level-set function from the pictures, optimal transportation to obtain the body velocity on the surface, and flow simulation using a Cartesian method based on penalization. We illustrate this paradigm by simulating the swimming of a mackerel fish.

  9. A fully automatic end-to-end method for content-based image retrieval of CT scans with similar liver lesion annotations.

    PubMed

    Spanier, A B; Caplan, N; Sosna, J; Acar, B; Joskowicz, L

    2018-01-01

    The goal of medical content-based image retrieval (M-CBIR) is to assist radiologists in the decision-making process by retrieving medical cases similar to a given image. One of the key interests of radiologists is lesions and their annotations, since the patient treatment depends on the lesion diagnosis. Therefore, a key feature of M-CBIR systems is the retrieval of scans with the most similar lesion annotations. To be of value, M-CBIR systems should be fully automatic to handle large case databases. We present a fully automatic end-to-end method for the retrieval of CT scans with similar liver lesion annotations. The input is a database of abdominal CT scans labeled with liver lesions, a query CT scan, and optionally one radiologist-specified lesion annotation of interest. The output is an ordered list of the database CT scans with the most similar liver lesion annotations. The method starts by automatically segmenting the liver in the scan. It then extracts a histogram-based features vector from the segmented region, learns the features' relative importance, and ranks the database scans according to the relative importance measure. The main advantages of our method are that it fully automates the end-to-end querying process, that it uses simple and efficient techniques that are scalable to large datasets, and that it produces quality retrieval results using an unannotated CT scan. Our experimental results on 9 CT queries on a dataset of 41 volumetric CT scans from the 2014 Image CLEF Liver Annotation Task yield an average retrieval accuracy (Normalized Discounted Cumulative Gain index) of 0.77 and 0.84 without/with annotation, respectively. Fully automatic end-to-end retrieval of similar cases based on image information alone, rather than on disease diagnosis, may help radiologists to better diagnose liver lesions.
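
    For reference, the Normalized Discounted Cumulative Gain (NDCG) index used above to report retrieval accuracy can be computed as follows; the graded-relevance list is an illustrative example, not data from the paper.

    ```python
    import numpy as np

    def ndcg(relevances):
        """NDCG of a ranked list of graded relevances (higher grade = better)."""
        rel = np.asarray(relevances, dtype=float)
        discounts = 1.0 / np.log2(np.arange(2, len(rel) + 2))
        dcg = np.sum(rel * discounts)
        ideal = np.sum(np.sort(rel)[::-1] * discounts)   # best possible ordering
        return dcg / ideal if ideal > 0 else 0.0

    print(round(ndcg([3, 2, 3, 0, 1, 2]), 2))   # 0.96 for this near-ideal ranking
    ```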

  10. Fully convolutional neural network for removing background in noisy images of uranium bearing particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarolli, Jay G.; Naes, Benjamin E.; Butler, Lamar

    A fully convolutional neural network (FCN) was developed to supersede automatic or manual thresholding algorithms used for tabulating SIMS particle search data. The FCN was designed to perform a binary classification of pixels in each image belonging to a particle or not, thereby effectively removing background signal without manually or automatically determining an intensity threshold. Using 8,000 images from 28 different particle screening analyses, the FCN was trained to accurately predict pixels belonging to a particle with near 99% accuracy. Background-eliminated images were then segmented using a watershed technique in order to determine isotopic ratios of particles. A comparison of the isotopic distributions of an independent data set segmented using the neural network with those from a commercially available automated particle measurement (APM) program developed by CAMECA highlighted the necessity for effective background removal, to ensure that the resulting particle identification is not only accurate but also preserves valuable signal that could be lost due to improper segmentation. The FCN approach improves the robustness of current state-of-the-art particle searching algorithms by reducing user input biases, resulting in an improved absolute signal per particle and decreased uncertainty of the determined isotope ratios.
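
    A hedged sketch of the downstream segmentation step: once background pixels are removed (here simulated by a plain boolean mask rather than an FCN output), a distance-transform watershed splits touching particles. The marker spacing is an illustrative parameter.

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def split_particles(binary_mask):
        # Distance to background peaks at particle centers.
        distance = ndi.distance_transform_edt(binary_mask)
        peaks = peak_local_max(distance, labels=binary_mask.astype(int),
                               min_distance=3)
        markers = np.zeros(distance.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        # Flood from the markers, constrained to the foreground mask.
        return watershed(-distance, markers, mask=binary_mask)
    ```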

  11. Agile Multi-Scale Decompositions for Automatic Image Registration

    NASA Technical Reports Server (NTRS)

    Murphy, James M.; Leija, Omar Navarro; Le Moigne, Jacqueline

    2016-01-01

    In recent works, the first and third authors developed an automatic image registration algorithm based on a multiscale hybrid image decomposition with anisotropic shearlets and isotropic wavelets. This prototype showed strong performance, improving robustness over registration with wavelets alone. However, this method imposed a strict hierarchy on the order in which shearlet and wavelet features were used in the registration process, and also involved an unintegrated mixture of MATLAB and C code. In this paper, we introduce a more agile model for generating features, in which a flexible and user-guided mix of shearlet and wavelet features are computed. Compared to the previous prototype, this method introduces a flexibility to the order in which shearlet and wavelet features are used in the registration process. Moreover, the present algorithm is now fully coded in C, making it more efficient and portable than the MATLAB and C prototype. We demonstrate the versatility and computational efficiency of this approach by performing registration experiments with the fully-integrated C algorithm. In particular, meaningful timing studies can now be performed, to give a concrete analysis of the computational costs of the flexible feature extraction. Examples of synthetically warped and real multi-modal images are analyzed.

  12. Automatic soldering machine

    NASA Technical Reports Server (NTRS)

    Stein, J. A.

    1974-01-01

    A fully automatic tube-joint soldering machine can be used to make leakproof joints in aluminum tubes of 3/16 to 2 in. in diameter. The machine consists of a temperature-control unit, heater transformer and heater head, vibrator, and associated circuitry, controls, and indicators.

  13. Real-time piloted simulation of fully automatic guidance and control for rotorcraft nap-of-the-earth (NOE) flight following planned profiles

    NASA Technical Reports Server (NTRS)

    Clement, Warren F.; Gorder, Peter J.; Jewell, Wayne F.; Coppenbarger, Richard

    1990-01-01

    Developing a single-pilot all-weather NOE capability requires fully automatic NOE navigation and flight control. Innovative guidance and control concepts are being investigated to (1) organize the onboard computer-based storage and real-time updating of NOE terrain profiles and obstacles; (2) define a class of automatic anticipative pursuit guidance algorithms to follow the vertical, lateral, and longitudinal guidance commands; (3) automate a decision-making process for unexpected obstacle avoidance; and (4) provide several rapid response maneuvers. Acquired knowledge from the sensed environment is correlated with the recorded environment, which is then used to determine an appropriate evasive maneuver if a nonconformity is observed. This research effort has been evaluated in both fixed-base and moving-base real-time piloted simulations, thereby evaluating pilot acceptance of the automated concepts, supervisory override, manual operation, and reengagement of the automatic system.

  14. Surface smoothness: cartilage biomarkers for knee OA beyond the radiologist

    NASA Astrophysics Data System (ADS)

    Tummala, Sudhakar; Dam, Erik B.

    2010-03-01

    Fully automatic imaging biomarkers may allow quantification of patho-physiological processes that a radiologist would not be able to assess reliably. This can introduce new insight but is problematic to validate due to the lack of meaningful ground-truth expert measurements. Rather than quantification accuracy, such novel markers must therefore be validated against clinically meaningful end-goals, such as the ability to allow correct diagnosis. We present a method for automatic cartilage surface smoothness quantification in the knee joint. The quantification is based on a curvature flow method applied to tibial and femoral cartilage compartments resulting from an automatic segmentation scheme. These smoothness estimates are validated for their ability to diagnose osteoarthritis and compared to smoothness estimates based on manual expert segmentations and to conventional cartilage volume quantification. We demonstrate that the fully automatic markers eliminate the time required for radiologist annotations and, in addition, provide a diagnostic marker superior to the evaluated semi-manual markers.

  15. Learning Biological Networks via Bootstrapping with Optimized GO-based Gene Similarity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Ronald C.; Sanfilippo, Antonio P.; McDermott, Jason E.

    2010-08-02

    Microarray gene expression data provide a unique information resource for learning biological networks using "reverse engineering" methods. However, there are a variety of cases in which we know which genes are involved in a given pathology of interest, but we do not have enough experimental evidence to support the use of fully-supervised/reverse-engineering learning methods. In this paper, we explore a novel semi-supervised approach in which biological networks are learned from a reference list of genes and a partial set of links for these genes extracted automatically from PubMed abstracts, using a knowledge-driven bootstrapping algorithm. We show how new relevant links across genes can be iteratively derived using a gene similarity measure based on the Gene Ontology that is optimized on the input network at each iteration. We describe an application of this approach to the TGFB pathway as a case study and show how the ensuing results prove the feasibility of the approach as an alternate or complementary technique to fully supervised methods.

  16. Esophagus segmentation in CT via 3D fully convolutional neural network and random walk.

    PubMed

    Fechter, Tobias; Adebahr, Sonja; Baltas, Dimos; Ben Ayed, Ismail; Desrosiers, Christian; Dolz, Jose

    2017-12-01

    Precise delineation of organs at risk is a crucial task in radiotherapy treatment planning for delivering high doses to the tumor while sparing healthy tissues. In recent years, automated segmentation methods have shown an increasingly high performance for the delineation of various anatomical structures. However, this task remains challenging for organs like the esophagus, which have a versatile shape and poor contrast to neighboring tissues. For human experts, segmenting the esophagus from CT images is a time-consuming and error-prone process. To tackle these issues, we propose a random walker approach driven by a 3D fully convolutional neural network (CNN) to automatically segment the esophagus from CT images. First, a soft probability map is generated by the CNN. Then, an active contour model (ACM) is fitted to the CNN soft probability map to get a first estimation of the esophagus location. The outputs of the CNN and ACM are then used in conjunction with a probability model based on CT Hounsfield (HU) values to drive the random walker. Training and evaluation were done on 50 CTs from two different datasets, with clinically used peer-reviewed esophagus contours. Results were assessed regarding spatial overlap and shape similarity. The esophagus contours generated by the proposed algorithm showed a mean Dice coefficient of 0.76 ± 0.11, an average symmetric square distance of 1.36 ± 0.90 mm, and an average Hausdorff distance of 11.68 ± 6.80 mm, compared to the reference contours. These results translate to a very good agreement with reference contours and an increase in accuracy compared to existing methods. Furthermore, when considering the results reported in the literature for the publicly available Synapse dataset, our method outperformed all existing approaches, which suggests that the proposed method represents the current state-of-the-art for automatic esophagus segmentation. We show that a CNN can yield accurate estimations of esophagus location, and that the results of this model can be refined by a random walk step taking pixel intensities and neighborhood relationships into account. One of the main advantages of our network over previous methods is that it performs 3D convolutions, thus fully exploiting the 3D spatial context and performing an efficient volume-wise prediction. The whole segmentation process is fully automatic and yields esophagus delineations in very good agreement with the gold standard, showing that it can compete with previously published methods. © 2017 American Association of Physicists in Medicine.
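
    A hedged sketch of the final refinement idea using skimage's random walker: voxels where the CNN probability map is confidently foreground or background become seeds, and the walker labels the uncertain band from the CT intensities. The probability cutoffs and beta are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np
    from skimage.segmentation import random_walker

    def refine(ct_volume, prob_map, lo=0.2, hi=0.8):
        seeds = np.zeros(prob_map.shape, dtype=int)
        seeds[prob_map > hi] = 1          # confident esophagus voxels
        seeds[prob_map < lo] = 2          # confident background voxels
        labels = random_walker(ct_volume, seeds, beta=130)
        return labels == 1                # boolean esophagus mask
    ```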

  17. Markerless 3D motion capture for animal locomotion studies

    PubMed Central

    Sellers, William Irvin; Hirasaki, Eishi

    2014-01-01

    Obtaining quantitative data describing the movements of animals is an essential step in understanding their locomotor biology. Outside the laboratory, measuring animal locomotion often relies on video-based approaches and analysis is hampered because of difficulties in calibration and often the limited availability of possible camera positions. It is also usually restricted to two dimensions, which is often an undesirable over-simplification given the essentially three-dimensional nature of many locomotor performances. In this paper we demonstrate a fully three-dimensional approach based on 3D photogrammetric reconstruction using multiple, synchronised video cameras. This approach allows full calibration based on the separation of the individual cameras and will work fully automatically with completely unmarked and undisturbed animals. As such it has the potential to revolutionise work carried out on free-ranging animals in sanctuaries and zoological gardens where ad hoc approaches are essential and access within enclosures often severely restricted. The paper demonstrates the effectiveness of video-based 3D photogrammetry with examples from primates and birds, as well as discussing the current limitations of this technique and illustrating the accuracies that can be obtained. All the software required is open source so this can be a very cost effective approach and provides a methodology of obtaining data in situations where other approaches would be completely ineffective. PMID:24972869

  18. Automatic detection of articulation disorders in children with cleft lip and palate.

    PubMed

    Maier, Andreas; Hönig, Florian; Bocklet, Tobias; Nöth, Elmar; Stelzle, Florian; Nkenke, Emeka; Schuster, Maria

    2009-11-01

    Speech of children with cleft lip and palate (CLP) is sometimes still disordered even after adequate surgical and nonsurgical therapies. Such speech shows complex articulation disorders, which are usually assessed perceptually, consuming time and manpower. Hence, there is a need for an easy-to-apply and reliable automatic method. To create a reference for an automatic system, speech data of 58 children with CLP were assessed perceptually by experienced speech therapists for characteristic phonetic disorders at the phoneme level. The first part of the article aims to detect such characteristics by a semiautomatic procedure, and the second to evaluate a fully automatic, thus simple, procedure. The methods are based on a combination of speech processing algorithms. The semiautomatic method achieves moderate to good agreement (κ ≈ 0.6) for the detection of all phonetic disorders. On a speaker level, significant correlations of 0.89 between the perceptual evaluation and the automatic system are obtained. The fully automatic system yields a correlation of 0.81 to the perceptual evaluation on the speaker level. This correlation is in the range of the inter-rater correlation of the listeners. The automatic speech evaluation is thus able to detect phonetic disorders at an expert's level without any additional human postprocessing.

  19. Automatic short axis orientation of the left ventricle in 3D ultrasound recordings

    NASA Astrophysics Data System (ADS)

    Pedrosa, João.; Heyde, Brecht; Heeren, Laurens; Engvall, Jan; Zamorano, Jose; Papachristidis, Alexandros; Edvardsen, Thor; Claus, Piet; D'hooge, Jan

    2016-04-01

    The recent advent of three-dimensional echocardiography has led to an increased interest from the scientific community in left ventricle segmentation frameworks for cardiac volume and function assessment. An automatic orientation of the segmented left ventricular mesh is an important step to obtain a point-to-point correspondence between the mesh and the cardiac anatomy. Furthermore, this would allow for an automatic division of the left ventricle into the standard 17 segments and, thus, fully automatic per-segment analysis, e.g. regional strain assessment. In this work, a method for fully automatic short axis orientation of the segmented left ventricle is presented. The proposed framework aims at detecting the inferior right ventricular insertion point. 211 three-dimensional echocardiographic images were used to validate this framework by comparison to manual annotation of the inferior right ventricular insertion point. A mean unsigned error of 8.05° ± 18.50° was found, whereas the mean signed error was 1.09°. Large deviations between the manual and automatic annotations (> 30°) only occurred in 3.79% of cases. The average computation time was 666 ms in a non-optimized MATLAB environment, which makes real-time application feasible. In conclusion, a successful automatic real-time method for orientation of the segmented left ventricle is proposed.

  20. FAMA: An automatic code for stellar parameter and abundance determination

    NASA Astrophysics Data System (ADS)

    Magrini, Laura; Randich, Sofia; Friel, Eileen; Spina, Lorenzo; Jacobson, Heather; Cantat-Gaudin, Tristan; Donati, Paolo; Baglioni, Roberto; Maiorca, Enrico; Bragaglia, Angela; Sordo, Rosanna; Vallenari, Antonella

    2013-10-01

    Context. The large amount of spectra obtained during the epoch of extensive spectroscopic surveys of Galactic stars requires the development of automatic procedures to derive their atmospheric parameters and individual element abundances. Aims: Starting from the widely used code MOOG by C. Sneden, we have developed a new procedure to determine atmospheric parameters and abundances in a fully automatic way. The code FAMA (Fast Automatic MOOG Analysis) is presented, describing its approach to deriving atmospheric stellar parameters and element abundances. The code, freely distributed, is written in Perl and can be used on different platforms. Methods: The aim of FAMA is to render the computation of the atmospheric parameters and abundances of a large number of stars, using measurements of equivalent widths (EWs), as automatic and as independent of any subjective approach as possible. It is based on the simultaneous search for three equilibria: excitation equilibrium, ionization balance, and the relationship between log n(Fe i) and the reduced EWs. FAMA also evaluates the statistical errors on individual element abundances and the errors due to the uncertainties in the stellar parameters. The convergence criteria are not fixed "a priori" but are based on the quality of the spectra. Results: In this paper we present tests performed on solar spectrum EWs that assess the method's dependency on the initial parameters, and we analyze a sample of stars observed in Galactic open and globular clusters. The current version of FAMA is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/558/A38

  1. Fully Automated Single-Zone Elliptic Grid Generation for Mars Science Laboratory (MSL) Aeroshell and Canopy Geometries

    NASA Technical Reports Server (NTRS)

    Kaul, Upender K.

    2008-01-01

    A procedure for generating smooth uniformly clustered single-zone grids using enhanced elliptic grid generation has been demonstrated here for the Mars Science Laboratory (MSL) geometries such as aeroshell and canopy. The procedure obviates the need for generating multizone grids for such geometries, as reported in the literature. This has been possible because the enhanced elliptic grid generator automatically generates clustered grids without manual prescription of decay parameters needed with the conventional approach. In fact, these decay parameters are calculated as decay functions as part of the solution, and they are not constant over a given boundary. Since these decay functions vary over a given boundary, orthogonal grids near any arbitrary boundary can be clustered automatically without having to break up the boundaries and the corresponding interior domains into various zones for grid generation.

  2. Zebra Crossing Spotter: Automatic Population of Spatial Databases for Increased Safety of Blind Travelers

    PubMed Central

    Ahmetovic, Dragan; Manduchi, Roberto; Coughlan, James M.; Mascetti, Sergio

    2016-01-01

    In this paper we propose a computer vision-based technique that mines existing spatial image databases for discovery of zebra crosswalks in urban settings. Knowing the location of crosswalks is critical for a blind person planning a trip that includes street crossing. By augmenting existing spatial databases (such as Google Maps or OpenStreetMap) with this information, a blind traveler may make more informed routing decisions, resulting in greater safety during independent travel. Our algorithm first searches for zebra crosswalks in satellite images; all candidates thus found are validated against spatially registered Google Street View images. This cascaded approach enables fast and reliable discovery and localization of zebra crosswalks in large image datasets. While fully automatic, our algorithm could also be complemented by a final crowdsourcing validation stage for increased accuracy. PMID:26824080

  3. Intra-operative adjustment of standard planes in C-arm CT image data.

    PubMed

    Brehler, Michael; Görres, Joseph; Franke, Jochen; Barth, Karl; Vetter, Sven Y; Grützner, Paul A; Meinzer, Hans-Peter; Wolf, Ivo; Nabers, Diana

    2016-03-01

    With the help of an intra-operative mobile C-arm CT, medical interventions can be verified and corrected, avoiding the need for a post-operative CT and a second intervention. An exact adjustment of standard plane positions is necessary for the best possible assessment of the anatomical regions of interest, but the mobility of the C-arm makes a time-consuming manual adjustment necessary. In this article, we present an automatic plane adjustment using the example of calcaneal fractures. We developed two feature detection methods (2D and pseudo-3D) based on SURF key points and also transferred the SURF approach to 3D. Combined with an atlas-based registration, our algorithm adjusts the standard planes of the calcaneal C-arm images automatically. The robustness of the algorithms is evaluated using a clinical data set. Additionally, we tested the algorithm's performance for two registration approaches, two resolutions of C-arm images and two methods for metal artifact reduction. For the feature extraction, the novel 3D-SURF approach performs best. As expected, a higher resolution ([Formula: see text] voxel) also leads to more robust feature points and is therefore slightly better than the [Formula: see text] voxel images (the standard setting of the device). Our comparison of two different artifact reduction methods and the complete removal of metal in the images shows that our approach is highly robust against artifacts and against the number and position of metal implants. By introducing our fast algorithmic processing pipeline, we have developed the first steps toward a fully automatic assistance system for the assessment of C-arm CT images.

  4. Automatic Cell Segmentation in Fluorescence Images of Confluent Cell Monolayers Using Multi-object Geometric Deformable Model.

    PubMed

    Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L

    2013-03-13

    With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve the accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and a principal-curvature-based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results, and guarantees no overlaps or gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.

  5. Automatic Semantic Segmentation of Brain Gliomas from MRI Images Using a Deep Cascaded Neural Network.

    PubMed

    Cui, Shaoguo; Mao, Lei; Jiang, Jingfeng; Liu, Chang; Xiong, Shuyu

    2018-01-01

    Brain tumors can appear anywhere in the brain and have vastly different sizes and morphology. Additionally, these tumors are often diffused and poorly contrasted. Consequently, the segmentation of brain tumor and intratumor subregions using magnetic resonance imaging (MRI) data with minimal human intervention remains a challenging task. In this paper, we present a novel fully automatic segmentation method from MRI data containing in vivo brain gliomas. This approach can not only localize the entire tumor region but can also accurately segment the intratumor structure. The proposed work was based on a cascaded deep learning convolutional neural network consisting of two subnetworks: (1) a tumor localization network (TLN) and (2) an intratumor classification network (ITCN). The TLN, a fully convolutional network (FCN) in conjunction with transfer learning technology, was used to first process the MRI data. The goal of the first subnetwork was to define the tumor region from an MRI slice. Then, the ITCN was used to label the defined tumor region into multiple subregions. In particular, ITCN exploited a convolutional neural network (CNN) with a deeper architecture and smaller kernel. The proposed approach was validated on multimodal brain tumor segmentation (BRATS 2015) datasets, which contain 220 high-grade glioma (HGG) and 54 low-grade glioma (LGG) cases. Dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity were used as evaluation metrics. Our experimental results indicated that our method could obtain promising segmentation results with a faster segmentation speed. More specifically, the proposed method obtained comparable and overall better DSC values (0.89, 0.77, and 0.80) on the combined (HGG + LGG) testing set, as compared to other methods reported in the literature. Additionally, the proposed approach was able to complete a segmentation task at a rate of 1.54 seconds per slice.

  6. Fully automatic guidance and control for rotorcraft nap-of-the-Earth flight following planned profiles. Volume 1: Real-time piloted simulation

    NASA Technical Reports Server (NTRS)

    Clement, Warren F.; Gorder, Peter J.; Jewell, Wayne F.

    1991-01-01

    Developing a single-pilot, all-weather nap-of-the-earth (NOE) capability requires fully automatic NOE (ANOE) navigation and flight control. Innovative guidance and control concepts are investigated in a four-fold research effort that: (1) organizes the on-board computer-based storage and real-time updating of NOE terrain profiles and obstacles in course-oriented coordinates indexed to the mission flight plan; (2) defines a class of automatic anticipative pursuit guidance algorithms and necessary data preview requirements to follow the vertical, lateral, and longitudinal guidance commands dictated by the updated flight profiles; (3) automates a decision-making process for unexpected obstacle avoidance; and (4) provides several rapid response maneuvers. Acquired knowledge from the sensed environment is correlated with the forehand knowledge of the recorded environment (terrain, cultural features, threats, and targets), which is then used to determine an appropriate evasive maneuver if a nonconformity of the sensed and recorded environments is observed. This four-fold research effort was evaluated in both fixed-base and moving-base real-time piloted simulations, thereby providing a practical demonstration for evaluating pilot acceptance of the automated concepts, supervisory override, manual operation, and re-engagement of the automatic system. Volume one describes the major components of the guidance and control laws as well as the results of the piloted simulations. Volume two describes the complete mathematical model of the fully automatic guidance system for rotorcraft NOE flight following planned flight profiles.

  7. Fully Automatic Guidance and Control for Rotorcraft Nap-of-the-earth Flight Following Planned Profiles. Volume 2: Mathematical Model

    NASA Technical Reports Server (NTRS)

    Clement, Warren F.; Gorder, Peter J.; Jewell, Wayne F.

    1991-01-01

    Developing a single-pilot, all-weather nap-of-the-earth (NOE) capability requires fully automatic NOE (ANOE) navigation and flight control. Innovative guidance and control concepts are investigated in a four-fold research effort that: (1) organizes the on-board computer-based storage and real-time updating of NOE terrain profiles and obstacles in course-oriented coordinates indexed to the mission flight plan; (2) defines a class of automatic anticipative pursuit guidance algorithms and necessary data preview requirements to follow the vertical, lateral, and longitudinal guidance commands dictated by the updated flight profiles; (3) automates a decision-making process for unexpected obstacle avoidance; and (4) provides several rapid response maneuvers. Acquired knowledge from the sensed environment is correlated with the forehand knowledge of the recorded environment (terrain, cultural features, threats, and targets), which is then used to determine an appropriate evasive maneuver if a nonconformity of the sensed and recorded environments is observed. This four-fold research effort was evaluated in both fixed-base and moving-base real-time piloted simulations, thereby providing a practical demonstration for evaluating pilot acceptance of the automated concepts, supervisory override, manual operation, and re-engagement of the automatic system. Volume one describes the major components of the guidance and control laws as well as the results of the piloted simulations. Volume two describes the complete mathematical model of the fully automatic guidance system for rotorcraft NOE flight following planned flight profiles.

  8. Automatic switching matrix

    DOEpatents

    Schlecht, Martin F.; Kassakian, John G.; Caloggero, Anthony J.; Rhodes, Bruce; Otten, David; Rasmussen, Neil

    1982-01-01

    An automatic switching matrix that includes an apertured matrix board containing a matrix of wires that can be interconnected at each aperture. Each aperture has associated therewith a conductive pin which, when fully inserted into the associated aperture, effects electrical connection between the wires within that particular aperture. Means is provided for automatically inserting the pins in a determined pattern and for removing all the pins to permit other interconnecting patterns.

  9. Automatic evidence quality prediction to support evidence-based decision making.

    PubMed

    Sarker, Abeed; Mollá, Diego; Paris, Cécile

    2015-06-01

    Evidence-based medicine practice requires practitioners to obtain the best available medical evidence, and to appraise the quality of the evidence when making clinical decisions. Primarily due to the plethora of electronically available data from the medical literature, the manual appraisal of the quality of evidence is a time-consuming process. We present a fully automatic approach for predicting the quality of medical evidence in order to aid practitioners at point-of-care. Our approach extracts relevant information from medical article abstracts and utilises data from a specialised corpus to apply supervised machine learning for the prediction of the quality grades. Following an in-depth analysis of the usefulness of features (e.g., publication types of articles), features are extracted from the text via rule-based approaches and from the meta-data associated with the articles, and then applied in the supervised classification model. We propose the use of a highly scalable and portable approach using a sequence of high precision classifiers, and introduce a simple evaluation metric called average error distance (AED) that simplifies the comparison of systems. We also perform elaborate human evaluations to compare the performance of our system against human judgments. We test and evaluate our approaches on a publicly available, specialised, annotated corpus containing 1132 evidence-based recommendations. Our rule-based approach performs exceptionally well at the automatic extraction of publication types of articles, with F-scores of up to 0.99 for high-quality publication types. For evidence quality classification, our approach obtains an accuracy of 63.84% and an AED of 0.271. The human evaluations show that the performance of our system, in terms of AED and accuracy, is comparable to the performance of humans on the same data. The experiments suggest that our structured text classification framework achieves evaluation results comparable to those of human performance. Our overall classification approach and evaluation technique are also highly portable and can be used for various evidence grading scales. Copyright © 2015 Elsevier B.V. All rights reserved.
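
    The abstract does not give the exact formula for the average error distance (AED), so the following is one plausible reading offered as a hedged sketch: evidence grades are mapped to ordinal integers and AED is the mean absolute distance between predicted and true grades.

    ```python
    import numpy as np

    def average_error_distance(y_true, y_pred):
        # Grades encoded as ordinals, e.g. A=0, B=1, C=2: predicting C for a
        # B costs 1, while predicting C for an A costs 2.
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        return float(np.mean(np.abs(y_true - y_pred)))

    print(average_error_distance([0, 1, 2, 0], [0, 2, 2, 1]))   # 0.5
    ```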

  10. Automatic nipple detection on 3D images of an automated breast ultrasound system (ABUS)

    NASA Astrophysics Data System (ADS)

    Javanshir Moghaddam, Mandana; Tan, Tao; Karssemeijer, Nico; Platel, Bram

    2014-03-01

    Recent studies have demonstrated that applying automated breast ultrasound in addition to mammography in women with dense breasts can lead to additional detection of small, early stage breast cancers which are occult in corresponding mammograms. In this paper, we propose a fully automatic method for detecting the nipple location in 3D ultrasound breast images acquired from automated breast ultrasound systems. The nipple location is a valuable landmark to report the position of possible abnormalities in a breast or to guide image registration. To detect the nipple location, all images were normalized. Subsequently, features were extracted in a multi-scale approach and classification experiments were performed using a gentle boost classifier to identify the nipple location. The method was applied on a dataset of 100 patients with 294 different 3D ultrasound views from Siemens and U-systems acquisition systems. Our database is a representative sample of cases obtained in clinical practice by four medical centers. The automatic method could accurately locate the nipple in 90% of AP (anterior-posterior) views and in 79% of the other views.

  11. Automatic coronary artery segmentation based on multi-domains remapping and quantile regression in angiographies.

    PubMed

    Li, Zhixun; Zhang, Yingtao; Gong, Huiling; Li, Weimin; Tang, Xianglong

    2016-12-01

    Coronary artery disease has become one of the most dangerous threats to human life, and coronary artery segmentation is the basis of computer-aided diagnosis and analysis. Existing segmentation methods struggle to handle the complex vascular texture that results from the projective nature of conventional coronary angiography. Given the large amount of data and the complexity of vascular shapes, manual annotation has become increasingly unrealistic, so a fully automatic segmentation method is necessary in clinical practice. In this work, we study a method for automatic coronary artery segmentation of angiography images based on reliable boundaries, via multi-domain remapping, and robust discrepancy correction, via distance balance and quantile regression. The proposed method can not only segment overlapping vascular structures robustly, but also achieves good performance in low-contrast regions. The effectiveness of our approach is demonstrated on a variety of coronary blood vessels in comparison with existing methods. The overall segmentation performances (si, fnvf, fpvf and tpvf) were 95.135%, 3.733%, 6.113% and 96.268%, respectively. Copyright © 2016 Elsevier Ltd. All rights reserved.
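
    Quantile regression is the robustness ingredient named above; the minimal sketch below shows why it helps, fitting a conditional median that stays close to the true trend despite heavy-tailed outliers. The data and model are synthetic placeholders, not the paper's discrepancy-correction model.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    data = pd.DataFrame({"x": np.linspace(0, 1, 200)})
    # Heavy-tailed noise mimics bright outliers along a vessel profile.
    data["y"] = 2.0 * data["x"] + 0.1 * rng.standard_t(df=2, size=200)

    median_fit = smf.quantreg("y ~ x", data).fit(q=0.5)
    print(median_fit.params)   # slope stays near 2 despite the outliers
    ```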

  12. Proving refinement transformations using extended denotational semantics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winter, V.L.; Boyle, J.M.

    1996-04-01

    TAMPR is a fully automatic transformation system based on syntactic rewrites. Our approach in a correctness proof is to map the transformation into an axiomatized mathematical domain where formal (and automated) reasoning can be performed. This mapping is accomplished via an extended denotational semantic paradigm. In this approach, the abstract notion of a program state is distributed between an environment function and a store function. Such a distribution introduces properties that go beyond the abstract state that is being modeled. The reasoning framework needs to be aware of these properties in order to successfully complete a correctness proof. This paper discusses some of our experiences in proving the correctness of TAMPR transformations.

  13. An efficient and accurate framework for calculating lattice thermal conductivity of solids: AFLOW—AAPL Automatic Anharmonic Phonon Library

    NASA Astrophysics Data System (ADS)

    Plata, Jose J.; Nath, Pinku; Usanmaz, Demet; Carrete, Jesús; Toher, Cormac; de Jong, Maarten; Asta, Mark; Fornari, Marco; Nardelli, Marco Buongiorno; Curtarolo, Stefano

    2017-10-01

    One of the most accurate approaches for calculating the lattice thermal conductivity, κ, is solving the Boltzmann transport equation starting from third-order anharmonic force constants. In addition to the underlying approximations of ab-initio parameterization, two main challenges are associated with this path: high computational costs and a lack of automation in the frameworks using this methodology, both of which affect the discovery rate of novel materials with ad-hoc properties. Here, the Automatic Anharmonic Phonon Library (AAPL) is presented. It efficiently computes interatomic force constants by making effective use of crystal symmetry analysis, solves the Boltzmann transport equation to obtain κ, and allows a fully integrated operation with minimum user intervention, a rational addition to the current high-throughput accelerated materials development framework AFLOW. An "experiment vs. theory" study of the approach is shown, comparing accuracy and speed with respect to other available packages, including for materials characterized by strong electron localization and correlation. By combining AAPL with the pseudo-hybrid functional ACBN0, it is possible to improve accuracy without increasing computational requirements.

  14. Phantom study and accuracy evaluation of an image-to-world registration approach used with electro-magnetic tracking system for neurosurgery

    NASA Astrophysics Data System (ADS)

    Li, Senhu; Sarment, David

    2015-12-01

    Minimally invasive neurosurgery needs intraoperative imaging updates and a highly efficient image guidance system to facilitate the procedure. An automatic image-guided system used with a compact and mobile intraoperative CT imager is introduced in this work. A tracking frame that can be easily attached onto the commercially available skull clamp was designed. With the known geometry of the fiducials and tracking sensor arranged on this rigid frame, fabricated through high-precision 3D printing, an accurate, fully automatic registration method was developed in a simple and low-cost manner; the frame also helped in estimating the errors from fiducial localization, in image space through image processing and in patient space through the calibration of the tracking frame. Our phantom study shows a fiducial registration error of 0.348 ± 0.028 mm, compared with a manual registration error of 1.976 ± 0.778 mm. The system in this study provides robust and accurate image-to-patient registration without interrupting the routine surgical workflow or requiring any user interaction during neurosurgery.
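
    A hedged sketch of the point-based rigid registration underlying such a tracking frame, using the standard SVD (Kabsch) solution and reporting the fiducial registration error (FRE); the six fiducial coordinates below are synthetic placeholders, not the study's geometry.

    ```python
    import numpy as np

    def rigid_register(src, dst):
        """Least-squares rotation R and translation t with dst ≈ src @ R.T + t."""
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        H = (src - cs).T @ (dst - cd)               # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
        R = Vt.T @ D @ U.T
        return R, cd - R @ cs

    img = np.random.rand(6, 3) * 100.0                        # fiducials, image space
    t_true = np.array([5.0, -2.0, 1.0])
    pat = img + t_true + np.random.normal(0, 0.3, img.shape)  # noisy patient space

    R, t = rigid_register(img, pat)
    fre = np.sqrt(np.mean(np.sum((img @ R.T + t - pat) ** 2, axis=1)))
    print(f"FRE = {fre:.3f} mm")
    ```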

  15. Phase editing as a signal pre-processing step for automated bearing fault detection

    NASA Astrophysics Data System (ADS)

    Barbini, L.; Ompusunggu, A. P.; Hillis, A. J.; du Bois, J. L.; Bartic, A.

    2017-07-01

    Scheduled maintenance and inspection of bearing elements in industrial machinery contributes significantly to the operating costs. Savings can be made through automatic vibration-based damage detection and prognostics, to permit condition-based maintenance. However, automation of the detection process is difficult due to the complexity of vibration signals in realistic operating environments. The sensitivity of existing methods to the choice of parameters imposes a requirement for oversight from a skilled operator. This paper presents a novel approach to the removal of unwanted vibrational components from the signal: phase editing. The approach uses a computationally efficient full-band demodulation and requires very little oversight. Its effectiveness is tested on experimental data sets from three different test-rigs, and comparisons are made with two state-of-the-art processing techniques: spectral kurtosis and cepstral pre-whitening. The results from the phase editing technique show a 10% improvement in damage detection rates compared to the state-of-the-art while simultaneously improving on the degree of automation. This outcome represents a significant contribution in the pursuit of fully automatic fault detection.

  16. Controlling state explosion during automatic verification of delay-insensitive and delay-constrained VLSI systems using the POM verifier

    NASA Technical Reports Server (NTRS)

    Probst, D.; Jensen, L.

    1991-01-01

    Delay-insensitive VLSI systems have a certain appeal on the ground due to difficulties with clocks; they are even more attractive in space. We answer the question, is it possible to control state explosion arising from various sources during automatic verification (model checking) of delay-insensitive systems? State explosion due to concurrency is handled by introducing a partial-order representation for systems, and defining system correctness as a simple relation between two partial orders on the same set of system events (a graph problem). State explosion due to nondeterminism (chiefly arbitration) is handled when the system to be verified has a clean, finite recurrence structure. Backwards branching is a further optimization. The heart of this approach is the ability, during model checking, to discover a compact finite presentation of the verified system without prior composition of system components. The fully-implemented POM verification system has polynomial space and time performance on traditional asynchronous-circuit benchmarks that are exponential in space and time for other verification systems. We also sketch the generalization of this approach to handle delay-constrained VLSI systems.

  17. Fully automatic lesion segmentation in breast MRI using mean-shift and graph-cuts on a region adjacency graph.

    PubMed

    McClymont, Darryl; Mehnert, Andrew; Trakic, Adnan; Kennedy, Dominic; Crozier, Stuart

    2014-04-01

    To present and evaluate a fully automatic method for segmentation (i.e., detection and delineation) of suspicious tissue in breast MRI. The method, based on mean-shift clustering and graph-cuts on a region adjacency graph, was developed and its parameters tuned using multimodal (T1, T2, DCE-MRI) clinical breast MRI data from 35 subjects (training data). It was then tested using two data sets. Test set 1 comprises data for 85 subjects (93 lesions) acquired using the same protocol and scanner system used to acquire the training data. Test set 2 comprises data for eight subjects (nine lesions) acquired using a similar protocol but a different vendor's scanner system. Each lesion was manually delineated in three dimensions by an experienced breast radiographer to establish the segmentation ground truth. The regions of interest identified by the method were compared with the ground truth, and the detection and delineation accuracies were quantitatively evaluated. One hundred percent of the lesions were detected, with a mean of 4.5 ± 1.2 false positives per subject. This false-positive rate is nearly 50% better than previously reported for a fully automatic breast lesion detection system. The median Dice coefficient for Test set 1 was 0.76 (interquartile range, 0.17), and 0.75 (interquartile range, 0.16) for Test set 2. The results demonstrate the efficacy and accuracy of the proposed method as well as its potential for direct application across different MRI systems. It is (to the authors' knowledge) the first fully automatic method for breast lesion detection and delineation in breast MRI.
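
    The clustering-plus-graph-cut pipeline can be approximated with off-the-shelf tools. The fragment below, assuming scikit-image 0.20 or later, oversegments an image, builds a region adjacency graph, and partitions it with a normalized cut; the paper's mean-shift clustering, multimodal MR features, and tuned parameters are replaced here by library defaults:

    ```python
    from skimage import data, segmentation, graph

    img = data.coffee()                      # stand-in for an MR slice
    # oversegmentation (the paper uses mean-shift; SLIC is used here)
    labels = segmentation.slic(img, n_segments=400, compactness=20)
    # region adjacency graph weighted by mean-colour similarity
    rag = graph.rag_mean_color(img, labels, mode='similarity')
    # graph-cut partition of the RAG
    cut = graph.cut_normalized(labels, rag)
    ```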

  18. Automatic 3D liver location and segmentation via convolutional neural network and graph cut.

    PubMed

    Lu, Fang; Wu, Fa; Hu, Peijun; Peng, Zhiyi; Kong, Dexing

    2017-02-01

    Segmentation of the liver from abdominal computed tomography (CT) images is an essential step in some computer-assisted clinical interventions, such as surgery planning for living donor liver transplant, radiotherapy and volume measurement. In this work, we develop a deep learning algorithm with graph cut refinement to automatically segment the liver in CT scans. The proposed method consists of two main steps: (i) simultaneous liver detection and probabilistic segmentation using a 3D convolutional neural network; (ii) accuracy refinement of the initial segmentation with a graph cut and the previously learned probability map. The proposed approach was validated on forty CT volumes taken from two public databases, MICCAI-Sliver07 and 3Dircadb1. For the MICCAI-Sliver07 test dataset, the calculated mean values of volumetric overlap error (VOE), relative volume difference (RVD), average symmetric surface distance (ASD), root-mean-square symmetric surface distance (RMSD) and maximum symmetric surface distance (MSD) are 5.9%, 2.7%, 0.91 mm, 1.88 mm and 18.94 mm, respectively. For the 3Dircadb1 dataset, the calculated mean values of VOE, RVD, ASD, RMSD and MSD are 9.36%, 0.97%, 1.89 mm, 4.15 mm and 33.14 mm, respectively. The proposed method is fully automatic, without any user interaction. Quantitative results reveal that the proposed approach is efficient and accurate for hepatic volume estimation in a clinical setup. The high correlation between the automatic and manual references shows that the proposed method may be good enough to replace the time-consuming and nonreproducible manual segmentation method.
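
    Step (ii), refining a learned probability map with a graph cut, corresponds to a standard binary Markov random field: unary costs come from the network's probabilities and a Potts pairwise term enforces smoothness. A 2D sketch using the PyMaxflow package (an assumption; the paper operates on 3D volumes with its own energy formulation):

    ```python
    import numpy as np
    import maxflow  # PyMaxflow

    def graph_cut_refine(prob, smoothness=2.0, eps=1e-6):
        """Refine a CNN probability map with a binary graph cut.

        prob: 2D array of foreground probabilities in [0, 1].
        """
        g = maxflow.Graph[float]()
        nodes = g.add_grid_nodes(prob.shape)
        g.add_grid_edges(nodes, smoothness)           # Potts smoothness
        g.add_grid_tedges(nodes,
                          -np.log(1.0 - prob + eps),  # background cost
                          -np.log(prob + eps))        # foreground cost
        g.maxflow()
        return g.get_grid_segments(nodes)             # boolean mask
    ```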

  19. A Review on Automatic Mammographic Density and Parenchymal Segmentation

    PubMed Central

    He, Wenda; Juette, Arne; Denton, Erika R. E.; Oliver, Arnau

    2015-01-01

    Breast cancer is the most frequently diagnosed cancer in women. However, the exact cause(s) of breast cancer remains unknown. Early detection, precise identification of women at risk, and application of appropriate disease prevention measures are by far the most effective way to tackle breast cancer. There are more than 70 common genetic susceptibility factors included in the current non-image-based risk prediction models (e.g., the Gail and the Tyrer-Cuzick models). Image-based risk factors, such as mammographic densities and parenchymal patterns, have been established as biomarkers but have not been fully incorporated in the risk prediction models used for risk stratification in screening and/or measuring responsiveness to preventive approaches. Within computer-aided mammography, automatic mammographic tissue segmentation methods have been developed for estimation of breast tissue composition to facilitate mammographic risk assessment. This paper presents a comprehensive review of automatic mammographic tissue segmentation methodologies developed over the past two decades and the evidence for risk assessment/density classification using segmentation. The aim of this review is to analyse how the engineering has advanced, to assess the impact of automatic mammographic tissue segmentation in a clinical environment, and to identify the current research gaps with respect to the incorporation of image-based risk factors into non-image-based risk prediction models. PMID:26171249

  20. Shared control on lunar spacecraft teleoperation rendezvous operations with large time delay

    NASA Astrophysics Data System (ADS)

    Ya-kun, Zhang; Hai-yang, Li; Rui-xue, Huang; Jiang-hui, Liu

    2017-08-01

    Teleoperation could be used in on-orbit servicing missions, such as object deorbiting, spacecraft approach, and as a back-up to automatic rendezvous and docking systems. Teleoperated rendezvous and docking in lunar orbit faces bottlenecks due to the inherent time delay in the communication link and the limited measurement accuracy of sensors. Moreover, direct human intervention is unsuitable in view of the partial communication coverage. To address these problems, this paper details a shared control strategy for teleoperated rendezvous and docking, and discusses control authority in lunar orbital maneuvers involving two spacecraft in the final rendezvous-and-docking phase. A predictive display model based on the relative dynamics equations is established to overcome the influence of the large time delay in the communication link. Through consistent ground-based simulations, we compare the relative merits of a fully autonomous control mode (i.e., onboard computer-based), a fully manual control mode (i.e., human-driven at the ground station) and a shared control mode. The simulation experiments were conducted on a nine-degrees-of-freedom teleoperated rendezvous and docking simulation platform. The results indicate that the shared control method can overcome the influence of time-delay effects, and its docking success probability was higher than that of the automatic and manual modes.
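
    A predictive display compensates for the round-trip delay by propagating the last received relative state forward with the relative dynamics. The sketch below assumes Clohessy-Wiltshire (Hill) dynamics for the final-phase relative motion, which is a common choice but not necessarily the paper's exact model:

    ```python
    import numpy as np

    def cw_propagate(state, t, n):
        """Closed-form Clohessy-Wiltshire propagation of the relative
        state [x, y, z, vx, vy, vz] over t seconds; n is the target's
        mean motion in rad/s."""
        s, c = np.sin(n * t), np.cos(n * t)
        Phi = np.array([
            [4 - 3*c,      0, 0,    s/n,         2*(1 - c)/n,     0],
            [6*(s - n*t),  1, 0,    2*(c - 1)/n, (4*s - 3*n*t)/n, 0],
            [0,            0, c,    0,           0,               s/n],
            [3*n*s,        0, 0,    c,           2*s,             0],
            [6*n*(c - 1),  0, 0,   -2*s,         4*c - 3,         0],
            [0,            0, -n*s, 0,           0,               c],
        ])
        return Phi @ state

    # preview the chaser 3 s ahead, roughly an Earth-Moon round-trip delay
    ahead = cw_propagate(np.array([10.0, 0, 0, 0, 0.05, 0]), 3.0, 9e-4)
    ```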

  1. Automatic cortical segmentation in the developing brain.

    PubMed

    Xue, Hui; Srinivasan, Latha; Jiang, Shuzhou; Rutherford, Mary; Edwards, A David; Rueckert, Daniel; Hajnal, Jo V

    2007-01-01

    The segmentation of neonatal cortex from magnetic resonance (MR) images is much more challenging than the segmentation of cortex in adults. The main reason is the inverted contrast between grey matter (GM) and white matter (WM) that occurs when myelination is incomplete, which causes mislabeled partial volume voxels, especially at the interface between GM and cerebrospinal fluid (CSF). We propose a fully automatic cortical segmentation algorithm that detects these mislabeled voxels using a knowledge-based approach and corrects errors by adjusting local priors to favor the correct classification. Our results show that the proposed algorithm corrects errors in the segmentation of both GM and WM compared to the classic EM scheme. The segmentation algorithm has been tested on 25 neonates with gestational ages ranging from approximately 27 to 45 weeks. Quantitative comparison to manual segmentation demonstrates good performance of the method (mean Dice similarity: 0.758 +/- 0.037 for GM and 0.794 +/- 0.078 for WM).

  2. An effective non-rigid registration approach for ultrasound image based on "demons" algorithm.

    PubMed

    Liu, Yan; Cheng, H D; Huang, Jianhua; Zhang, Yingtao; Tang, Xianglong; Tian, Jiawei

    2013-06-01

    Medical image registration is an important component of computer-aided diagnosis systems in diagnostics, therapy planning, and guidance of surgery. Because of its low signal-to-noise ratio (SNR), ultrasound (US) image registration is a difficult task. In this paper, a fully automatic non-rigid image registration algorithm based on the demons algorithm is proposed for registration of ultrasound images. In the proposed method, an "inertia force" derived from the local motion trend of pixels in a Moore neighborhood system is produced and integrated into the optical flow equation to estimate the demons force, which helps to handle the speckle noise and preserve the geometric continuity of US images. In the experiment, a series of US images and several similarity metrics are utilized for evaluating the performance. The experimental results demonstrate that the proposed method can register ultrasound images efficiently, quickly and automatically, and is robust to noise.
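
    For orientation, the classic Thirion demons update that the method extends can be written in a few lines: each iteration computes the demons force from the intensity difference and the fixed-image gradient, then smooths the displacement field. The paper's "inertia force" from the Moore neighbourhood is deliberately omitted; this is a baseline sketch, not the proposed algorithm:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def demons_iteration(fixed, moving, flow, sigma=1.5, eps=1e-9):
        """One Thirion demons iteration for 2D images.
        flow: (2, H, W) displacement field; initialise with zeros."""
        grid = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
        warped = map_coordinates(moving, grid + flow, order=1)
        diff = warped - fixed
        gy, gx = np.gradient(fixed)
        denom = gx**2 + gy**2 + diff**2 + eps
        flow = flow - np.stack([diff * gy, diff * gx]) / denom  # demons force
        return np.stack([gaussian_filter(f, sigma) for f in flow])
    ```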

  3. Convolutional neural networks with balanced batches for facial expressions recognition

    NASA Astrophysics Data System (ADS)

    Battini Sönmez, Elena; Cangelosi, Angelo

    2017-03-01

    This paper considers the issue of fully automatic emotion classification on 2D faces. In spite of the great effort made in recent years, traditional machine learning approaches based on hand-crafted feature extraction followed by a classification stage have failed to deliver a real-time automatic facial expression recognition system. The proposed architecture uses Convolutional Neural Networks (CNN), built as collections of interconnected processing elements loosely modeled on the human brain. The basic idea of CNNs is to learn a hierarchical representation of the input data, which results in better classification performance. In this work we present a block-based CNN algorithm, which uses noise as a data augmentation technique and builds batches with a balanced number of samples per class. The proposed architecture is a very simple yet powerful CNN, which yields state-of-the-art accuracy on the very competitive Extended Cohn-Kanade benchmark database.
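
    The batch-construction idea is straightforward to sketch: draw an equal number of samples from every expression class and perturb them with noise as augmentation. A hypothetical generator under those assumptions (the noise scale and names are illustrative):

    ```python
    import numpy as np

    def balanced_noisy_batches(X, y, per_class=8, noise_std=0.01, seed=0):
        """Yield batches with equally many samples per class, using
        additive Gaussian noise as data augmentation."""
        rng = np.random.default_rng(seed)
        index = {c: np.flatnonzero(y == c) for c in np.unique(y)}
        while True:
            pick = np.concatenate(
                [rng.choice(idx, per_class) for idx in index.values()])
            rng.shuffle(pick)
            yield X[pick] + rng.normal(0.0, noise_std, X[pick].shape), y[pick]
    ```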

  4. Primal/dual linear programming and statistical atlases for cartilage segmentation.

    PubMed

    Glocker, Ben; Komodakis, Nikos; Paragios, Nikos; Glaser, Christian; Tziritas, Georgios; Navab, Nassir

    2007-01-01

    In this paper we propose a novel approach for automatic segmentation of cartilage using a statistical atlas and efficient primal/dual linear programming. To this end, a novel statistical atlas is constructed from registered training examples. Segmentation is then solved through registration, which aims at deforming the atlas such that the conditional posterior of the learned (atlas) density is maximized with respect to the image. This task is reformulated using a discrete set of deformations, and segmentation becomes equivalent to finding the set of local deformations which optimally match the model to the image. We evaluate our method on 56 MRI data sets (28 used for the model and 28 used for evaluation) and obtain a fully automatic segmentation of patella cartilage volume with an overlap ratio of 0.84 and a sensitivity and specificity of 94.06% and 99.92%, respectively.

  5. Adaptive road crack detection system by pavement classification.

    PubMed

    Gavilán, Miguel; Balcones, David; Marcos, Oscar; Llorca, David F; Sotelo, Miguel A; Parra, Ignacio; Ocaña, Manuel; Aliseda, Pedro; Yarza, Pedro; Amírola, Alejandro

    2011-01-01

    This paper presents a road distress detection system involving the phases needed to properly deal with fully automatic road distress assessment. A vehicle equipped with line-scan cameras, laser illumination and acquisition hardware and software is used to store the digital images that will be further processed to identify road cracks. Pre-processing is first carried out to both smooth the texture and enhance the linear features. Non-crack feature detection is then applied to mask areas of the images with joints, sealed cracks and white paint, which usually generate false positive cracks. A seed-based approach is proposed to deal with road crack detection, combining Multiple Directional Non-Minimum Suppression (MDNMS) with a symmetry check. Seeds are linked by computing the paths with the lowest cost that meet the symmetry restrictions. The whole detection process involves several parameters, and a correct setting becomes essential to get optimal results without manual intervention. A fully automatic approach by means of a linear SVM-based classifier ensemble, able to distinguish between up to 10 different types of pavement appearing on Spanish roads, is proposed. The optimal feature vector includes different texture-based features. The parameters are then tuned depending on the output provided by the classifier. Regarding non-crack feature detection, results show that the introduction of this module reduces the impact of false positives due to non-crack features by up to a factor of 2. In addition, the observed performance of the crack detection system is significantly boosted by adapting the parameters to the type of pavement.
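
    The parameter-adaptation stage amounts to a texture classifier that routes each image to a per-pavement parameter set. A hedged scikit-learn sketch (the actual ensemble, texture features, and parameter tables are specific to the paper; the data here is synthetic):

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 16))       # stand-in texture feature vectors
    y = rng.integers(0, 10, size=200)    # 10 pavement types, as in the paper

    clf = make_pipeline(StandardScaler(), LinearSVC()).fit(X, y)

    # each pavement type indexes its own crack-detection parameter set
    params_by_type = {t: {"seed_threshold": 0.3 + 0.02 * t} for t in range(10)}
    params = params_by_type[int(clf.predict(X[:1])[0])]
    ```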

  6. Adaptive Road Crack Detection System by Pavement Classification

    PubMed Central

    Gavilán, Miguel; Balcones, David; Marcos, Oscar; Llorca, David F.; Sotelo, Miguel A.; Parra, Ignacio; Ocaña, Manuel; Aliseda, Pedro; Yarza, Pedro; Amírola, Alejandro

    2011-01-01

    This paper presents a road distress detection system involving the phases needed to properly deal with fully automatic road distress assessment. A vehicle equipped with line-scan cameras, laser illumination and acquisition hardware and software is used to store the digital images that will be further processed to identify road cracks. Pre-processing is first carried out to both smooth the texture and enhance the linear features. Non-crack feature detection is then applied to mask areas of the images with joints, sealed cracks and white paint, which usually generate false positive cracks. A seed-based approach is proposed to deal with road crack detection, combining Multiple Directional Non-Minimum Suppression (MDNMS) with a symmetry check. Seeds are linked by computing the paths with the lowest cost that meet the symmetry restrictions. The whole detection process involves several parameters, and a correct setting becomes essential to get optimal results without manual intervention. A fully automatic approach by means of a linear SVM-based classifier ensemble, able to distinguish between up to 10 different types of pavement appearing on Spanish roads, is proposed. The optimal feature vector includes different texture-based features. The parameters are then tuned depending on the output provided by the classifier. Regarding non-crack feature detection, results show that the introduction of this module reduces the impact of false positives due to non-crack features by up to a factor of 2. In addition, the observed performance of the crack detection system is significantly boosted by adapting the parameters to the type of pavement. PMID:22163717

  7. Operation of the Australian Store.Synchrotron for macromolecular crystallography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Grischa R.; Aragão, David; Mudie, Nathan J.

    2014-10-01

    The Store.Synchrotron service, a fully functional, cloud computing-based solution to raw X-ray data archiving and dissemination at the Australian Synchrotron, is described. The service automatically receives and archives raw diffraction data, related metadata and preliminary results of automated data-processing workflows. Data are able to be shared with collaborators and opened to the public. In the nine months since its deployment in August 2013, the service has handled over 22.4 TB of raw data (∼1.7 million diffraction images). Several real examples from the Australian crystallographic community are described that illustrate the advantages of the approach, which include real-time online data access and fully redundant, secure storage. Discoveries in biological sciences increasingly require multidisciplinary approaches. With this in mind, Store.Synchrotron has been developed as a component within a greater service that can combine data from other instruments at the Australian Synchrotron, as well as instruments at the Australian neutron source ANSTO. It is therefore envisaged that this will serve as a model implementation of raw data archiving and dissemination within the structural biology research community.

  8. TU-C-BRE-11: 3D EPID-Based in Vivo Dosimetry: A Major Step Forward Towards Optimal Quality and Safety in Radiation Oncology Practice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mijnheer, B; Mans, A; Olaciregui-Ruiz, I

    Purpose: To develop a 3D in vivo dosimetry method that is able to substitute pre-treatment verification in an efficient way, and to terminate treatment delivery if the online measured 3D dose distribution deviates too much from the predicted dose distribution. Methods: A back-projection algorithm has been further developed and implemented to enable automatic 3D in vivo dose verification of IMRT/VMAT treatments using a-Si EPIDs. New software tools were clinically introduced to allow automated image acquisition, to periodically inspect the record-and-verify database, and to automatically run the EPID dosimetry software. The comparison of the EPID-reconstructed and planned dose distribution is donemore » offline to raise automatically alerts and to schedule actions when deviations are detected. Furthermore, a software package for online dose reconstruction was also developed. The RMS of the difference between the cumulative planned and reconstructed 3D dose distributions was used for triggering a halt of a linac. Results: The implementation of fully automated 3D EPID-based in vivo dosimetry was able to replace pre-treatment verification for more than 90% of the patient treatments. The process has been fully automated and integrated in our clinical workflow where over 3,500 IMRT/VMAT treatments are verified each year. By optimizing the dose reconstruction algorithm and the I/O performance, the delivered 3D dose distribution is verified in less than 200 ms per portal image, which includes the comparison between the reconstructed and planned dose distribution. In this way it was possible to generate a trigger that can stop the irradiation at less than 20 cGy after introducing large delivery errors. Conclusion: The automatic offline solution facilitated the large scale clinical implementation of 3D EPID-based in vivo dose verification of IMRT/VMAT treatments; the online approach has been successfully tested for various severe delivery errors.« less

  9. Fully automatic measurements of axial vertebral rotation for assessment of spinal deformity in idiopathic scoliosis

    NASA Astrophysics Data System (ADS)

    Forsberg, Daniel; Lundström, Claes; Andersson, Mats; Vavruch, Ludvig; Tropp, Hans; Knutsson, Hans

    2013-03-01

    Reliable measurements of spinal deformities in idiopathic scoliosis are vital, since they are used for assessing the degree of scoliosis, deciding upon treatment and monitoring the progression of the disease. However, commonly used two-dimensional methods (e.g. the Cobb angle) do not fully capture the three-dimensional deformity at hand in scoliosis, of which axial vertebral rotation (AVR) is considered to be of great importance. There are manual methods for measuring AVR, but they are often time-consuming and associated with high intra- and inter-observer variability. In this paper, we present a fully automatic method for estimating AVR in images from computed tomography. The proposed method is evaluated on four scoliotic patients with 17 vertebrae each and compared with manual measurements performed by three observers using the standard method by Aaro-Dahlborn. The comparison shows that the difference in measured AVR between automatic and manual measurements is on the same level as the inter-observer difference. This is further supported by a high intraclass correlation coefficient (0.971-0.979), obtained when comparing the automatic measurements with the manual measurements of each observer. Hence, the provided results and the computational performance, requiring only approximately 10 to 15 s for processing an entire volume, demonstrate the potential clinical value of the proposed method.

  10. Nondestructive Vibratory Testing and Evaluation Procedure for Military Roads and Streets.

    DTIC Science & Technology

    1984-07-01

    the addition of an automatic data acquisition system to the instrumentation control panel. This system, presently available, would automatically ...the data used to further develop and define the basic correlations. c. Consideration be given to installing an automatic data acquisition system to...glows red any time the force generator is not fully elevated. Depressing this switch will stop the automatic cycle at any point and clear all system

  11. Psoriasis skin biopsy image segmentation using Deep Convolutional Neural Network.

    PubMed

    Pal, Anabik; Garain, Utpal; Chandra, Aditi; Chatterjee, Raghunath; Senapati, Swapan

    2018-06-01

    Development of machine-assisted tools for automatic analysis of psoriasis skin biopsy images plays an important role in clinical assistance. Development of an automatic approach for accurate segmentation of psoriasis skin biopsy images is the initial prerequisite for such a system. However, the complex cellular structure, the presence of imaging artifacts, and uneven staining variation make the task challenging. This paper presents a pioneering attempt at automatic segmentation of psoriasis skin biopsy images. Several deep neural architectures are tried for segmenting psoriasis skin biopsy images. Deep models are used for classifying the superpixels generated by Simple Linear Iterative Clustering (SLIC), and the segmentation performance of these architectures is compared with traditional hand-crafted-feature-based classifiers built on popular classifiers such as K-Nearest Neighbor (KNN), Support Vector Machine (SVM) and Random Forest (RF). A U-shaped Fully Convolutional Neural Network (FCN) is also used in an end-to-end learning fashion, where the input is the original color image and the output is the segmentation class map for the skin layers. An annotated real psoriasis skin biopsy image data set of ninety (90) images was developed and used for this research. The segmentation performance is evaluated with two metrics, namely Jaccard's Coefficient (JC) and the Ratio of Correct Pixel Classification (RCPC) accuracy. The experimental results show that the CNN-based approaches outperform the traditional hand-crafted-feature-based classification approaches. The present research shows that a practical system can be developed for machine-assisted analysis of psoriasis disease. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Prediction of biomechanical parameters of the proximal femur using statistical appearance models and support vector regression.

    PubMed

    Fritscher, Karl; Schuler, Benedikt; Link, Thomas; Eckstein, Felix; Suhm, Norbert; Hänni, Markus; Hengg, Clemens; Schubert, Rainer

    2008-01-01

    Fractures of the proximal femur are one of the principal causes of mortality among elderly persons. Traditional methods for the determination of femoral fracture risk rely on measuring bone mineral density (BMD). However, BMD alone is not sufficient to predict bone failure load for an individual patient, and additional parameters have to be determined for this purpose. This work presents an approach that uses statistical models of appearance to identify relevant regions and parameters for the prediction of biomechanical properties of the proximal femur. By using Support Vector Regression, the proposed model-based approach is capable of predicting two different biomechanical parameters accurately and fully automatically in two different testing scenarios.

  13. Initial clinical trial of a closed loop, fully automatic intra-aortic balloon pump.

    PubMed

    Kantrowitz, A; Freed, P S; Cardona, R R; Gage, K; Marinescu, G N; Westveld, A H; Litch, B; Suzuki, A; Hayakawa, H; Takano, T

    1992-01-01

    A new generation, closed loop, fully automatic intraaortic balloon pump (CL-IABP) system continuously optimizes diastolic augmentation by adjusting balloon pump parameters beat by beat without operator intervention. In dogs in sinus rhythm and with experimentally induced arrhythmias, the new CL-IABP system provided safe, effective augmentation. To investigate the system's suitability for clinical use, 10 patients meeting standard indications for IABP were studied. The patients were pumped by the fully automatic IABP system for an average of 20 hr (range, 1-48 hr). At start-up, the system optimized pumping parameters within 7-20 sec. Evaluation of 186 recordings made at hourly intervals showed that inflation began within 20 msec of the dicrotic notch 99% of the time. In 100% of the recordings, deflation straddled the first half of ventricular ejection. Peak pressure across the balloon membrane averaged 55 mmHg and, in no case, exceeded 100 mmHg. Examination of the data showed that as soon as the system was actuated it provided consistently beneficial diastolic augmentation without any further operator intervention. Eight patients improved and two died (one of irreversible cardiogenic shock and one of ischemic cardiomyopathy). No complications were attributable to the investigational aspects of the system. A fully automated IABP is feasible in the clinical setting, and it may have advantages relative to current generation IABP systems.

  14. A Customized Attention-Based Long Short-Term Memory Network for Distant Supervised Relation Extraction.

    PubMed

    He, Dengchao; Zhang, Hongjun; Hao, Wenning; Zhang, Rui; Cheng, Kai

    2017-07-01

    Distant supervision, a widely applied approach in the field of relation extraction, can automatically generate large amounts of labeled training corpora with minimal manual effort. However, the labeled training corpus may contain much false-positive data, which would hurt the performance of relation extraction. Moreover, in traditional feature-based distantly supervised approaches, extraction models adopt human-designed features obtained with natural language processing tools, which may also cause poor performance. To address these two shortcomings, we propose a customized attention-based long short-term memory network. Our approach adopts word-level attention to achieve better data representations for distantly supervised relation extraction without manually designed features, and it utilizes instance-level attention to tackle the problem of false-positive data. Experimental results demonstrate that our proposed approach is effective and achieves better performance than traditional methods.
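
    Word-level attention over recurrent states can be sketched compactly: a learned scoring vector yields a softmax weighting over time steps, and the sentence representation is the weighted sum of the hidden states. A minimal PyTorch sketch; the paper's customized network adds instance-level attention on top, which is not shown:

    ```python
    import torch
    import torch.nn as nn

    class WordAttention(nn.Module):
        """Attention pooling over a sequence of hidden states."""
        def __init__(self, hidden_size):
            super().__init__()
            self.score = nn.Linear(hidden_size, 1, bias=False)

        def forward(self, h):                 # h: (batch, seq, hidden)
            weights = torch.softmax(self.score(h).squeeze(-1), dim=1)
            return torch.einsum('bs,bsh->bh', weights, h)

    pooled = WordAttention(128)(torch.randn(4, 20, 128))   # -> (4, 128)
    ```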

  15. Operation of the Australian Store.Synchrotron for macromolecular crystallography

    PubMed Central

    Meyer, Grischa R.; Aragão, David; Mudie, Nathan J.; Caradoc-Davies, Tom T.; McGowan, Sheena; Bertling, Philip J.; Groenewegen, David; Quenette, Stevan M.; Bond, Charles S.; Buckle, Ashley M.; Androulakis, Steve

    2014-01-01

    The Store.Synchrotron service, a fully functional, cloud computing-based solution to raw X-ray data archiving and dissemination at the Australian Synchrotron, is described. The service automatically receives and archives raw diffraction data, related metadata and preliminary results of automated data-processing workflows. Data are able to be shared with collaborators and opened to the public. In the nine months since its deployment in August 2013, the service has handled over 22.4 TB of raw data (∼1.7 million diffraction images). Several real examples from the Australian crystallographic community are described that illustrate the advantages of the approach, which include real-time online data access and fully redundant, secure storage. Discoveries in biological sciences increasingly require multidisciplinary approaches. With this in mind, Store.Synchrotron has been developed as a component within a greater service that can combine data from other instruments at the Australian Synchrotron, as well as instruments at the Australian neutron source ANSTO. It is therefore envisaged that this will serve as a model implementation of raw data archiving and dissemination within the structural biology research community. PMID:25286837

  16. Superpixel Cut for Figure-Ground Image Segmentation

    NASA Astrophysics Data System (ADS)

    Yang, Michael Ying; Rosenhahn, Bodo

    2016-06-01

    Figure-ground image segmentation has been a challenging problem in computer vision. Apart from the difficulty of establishing an effective framework to divide the image pixels into meaningful groups, the notions of figure and ground often need to be properly defined by providing either user inputs or object models. In this paper, we propose a novel graph-based segmentation framework, called superpixel cut. The key idea is to formulate foreground segmentation as finding a subset of superpixels that partitions a graph over superpixels. The problem is formulated as a min-cut with a novel cost function that simultaneously minimizes the inter-class similarity while maximizing the intra-class similarity. This cost function is optimized using parametric programming. After a small learning step, our approach is fully automatic and fully bottom-up, requiring no high-level knowledge such as shape priors or scene content. It recovers coherent components of images, providing a set of multiscale hypotheses for high-level reasoning. We evaluate the proposed framework by comparing it to other generic figure-ground segmentation approaches, and our method achieves improved performance on state-of-the-art benchmark databases.

  17. Operation of the Australian Store.Synchrotron for macromolecular crystallography.

    PubMed

    Meyer, Grischa R; Aragão, David; Mudie, Nathan J; Caradoc-Davies, Tom T; McGowan, Sheena; Bertling, Philip J; Groenewegen, David; Quenette, Stevan M; Bond, Charles S; Buckle, Ashley M; Androulakis, Steve

    2014-10-01

    The Store.Synchrotron service, a fully functional, cloud computing-based solution to raw X-ray data archiving and dissemination at the Australian Synchrotron, is described. The service automatically receives and archives raw diffraction data, related metadata and preliminary results of automated data-processing workflows. Data are able to be shared with collaborators and opened to the public. In the nine months since its deployment in August 2013, the service has handled over 22.4 TB of raw data (∼1.7 million diffraction images). Several real examples from the Australian crystallographic community are described that illustrate the advantages of the approach, which include real-time online data access and fully redundant, secure storage. Discoveries in biological sciences increasingly require multidisciplinary approaches. With this in mind, Store.Synchrotron has been developed as a component within a greater service that can combine data from other instruments at the Australian Synchrotron, as well as instruments at the Australian neutron source ANSTO. It is therefore envisaged that this will serve as a model implementation of raw data archiving and dissemination within the structural biology research community.

  18. Recent Research on the Automated Mass Measuring System

    NASA Astrophysics Data System (ADS)

    Yao, Hong; Ren, Xiao-Ping; Wang, Jian; Zhong, Rui-Lin; Ding, Jing-An

    This paper introduces the research and development of robotic mass-measurement systems, together with representative automatic systems, and then discusses a sub-multiple calibration scheme adopted on a fully automatic CCR10 system. An automatic robot system can perform dissemination of the mass scale without any manual intervention, as well as fast calibration of weight samples against a reference weight. Finally, an evaluation of the expanded uncertainty is given.

  19. Modeling and simulation of soft sensor design for real-time speed estimation, measurement and control of induction motor.

    PubMed

    Etien, Erik

    2013-05-01

    This paper deals with the design of a speed soft sensor for an induction motor. The sensor is based on the physical model of the motor. Because the validation step highlights that the sensor cannot be validated for all operating points, the model is modified in order to obtain a fully validated sensor over the whole speed range. An original feature of the proposed approach is that the modified model is derived from a stability analysis using automatic control theory. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Automatic Assessment of 3D Modeling Exams

    ERIC Educational Resources Information Center

    Sanna, A.; Lamberti, F.; Paravati, G.; Demartini, C.

    2012-01-01

    Computer-based assessment of exams provides teachers and students with two main benefits: fairness and effectiveness in the evaluation process. This paper proposes a fully automatic evaluation tool for the Graphic and Virtual Design (GVD) curriculum at the First School of Architecture of the Politecnico di Torino, Italy. In particular, the tool is…

  1. The Use of Opto-Electronics in Viscometry.

    ERIC Educational Resources Information Center

    Mazza, R. J.; Washbourn, D. H.

    1982-01-01

    Describes a semi-automatic viscometer which incorporates a microprocessor system and uses optoelectronics to detect the flow of liquid through the capillary, the flow time being displayed on a timer with an accuracy of 0.01 second. The system could be made fully automatic with an additional microprocessor circuit and the inclusion of a pump. (Author/JN)

  2. INFORMATION STORAGE AND RETRIEVAL, REPORTS ON EVALUATION PROCEDURES AND RESULTS 1965-1967.

    ERIC Educational Resources Information Center

    SALTON, GERALD

    A detailed analysis of the retrieval evaluation results obtained with the automatic SMART document retrieval system for document collections in the fields of aerodynamics, computer science, and documentation is given in this report. The various components of fully automatic document retrieval systems are discussed in detail, including the forms of…

  3. Atlas-based fuzzy connectedness segmentation and intensity nonuniformity correction applied to brain MRI.

    PubMed

    Zhou, Yongxin; Bai, Jing

    2007-01-01

    A framework that combines atlas registration, fuzzy connectedness (FC) segmentation, and parametric bias field correction (PABIC) is proposed for the automatic segmentation of brain magnetic resonance imaging (MRI). First, the atlas is registered onto the MRI to initialize the subsequent FC segmentation. Original techniques are proposed to estimate the necessary initial parameters of the FC segmentation. The result of the FC segmentation is then used to initialize a subsequent PABIC algorithm. Finally, we re-apply the FC technique on the PABIC-corrected MRI to obtain the final segmentation. Thus, we avoid expert human intervention and provide a fully automatic method for brain MRI segmentation. Experiments on both simulated and real MRI images demonstrate the validity of the method, as well as its limitations. Being a fully automatic method, it is expected to find wide application, such as in three-dimensional visualization, radiation therapy planning, and medical database construction.

  4. Fully integrated low-noise readout circuit with automatic offset cancellation loop for capacitive microsensors.

    PubMed

    Song, Haryong; Park, Yunjong; Kim, Hyungseup; Cho, Dong-Il Dan; Ko, Hyoungho

    2015-10-14

    Capacitive sensing schemes are widely used for various microsensors; however, such microsensors suffer from severe parasitic capacitance problems. This paper presents a fully integrated low-noise readout circuit with an automatic offset cancellation loop (AOCL) for capacitive microsensors. The output offsets of the capacitive sensing chain due to parasitic capacitances and process variations are automatically removed using the AOCL. The AOCL generates an electrically equivalent offset capacitance and enables charge-domain fine calibration using a 10-bit R-2R digital-to-analog converter, charge-transfer switches, and a charge-storing capacitor. The AOCL cancels the unwanted offset with a binary-search algorithm based on 10-bit successive approximation register (SAR) logic. The chip is implemented in a 0.18 μm complementary metal-oxide-semiconductor (CMOS) process with an active area of 1.76 mm². The power consumption is 220 μW with a 3.3 V supply. Input parasitic capacitances within the range of -250 fF to 250 fF can be cancelled out automatically, and the required calibration time is below 10 ms.
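
    The binary-search calibration can be modelled behaviourally in a dozen lines: each SAR step tentatively sets one bit of the 10-bit DAC code and keeps it only while the measured residual offset remains positive. A sketch with `read_offset` standing in for the actual analog measurement (a hypothetical name):

    ```python
    def sar_calibrate(read_offset, n_bits=10):
        """Successive-approximation search for the DAC code that nulls
        the output offset; read_offset(code) returns the residual
        offset, positive while the compensation is still too small."""
        code = 0
        for bit in range(n_bits - 1, -1, -1):
            trial = code | (1 << bit)
            if read_offset(trial) > 0:   # still under-compensated: keep bit
                code = trial
        return code

    # toy model: offset falls linearly with the code, nulled near 616.5
    print(sar_calibrate(lambda c: 616.5 - c))   # -> 616
    ```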

  5. Fully Integrated Low-Noise Readout Circuit with Automatic Offset Cancellation Loop for Capacitive Microsensors

    PubMed Central

    Song, Haryong; Park, Yunjong; Kim, Hyungseup; Cho, Dong-il Dan; Ko, Hyoungho

    2015-01-01

    Capacitive sensing schemes are widely used for various microsensors; however, such microsensors suffer from severe parasitic capacitance problems. This paper presents a fully integrated low-noise readout circuit with an automatic offset cancellation loop (AOCL) for capacitive microsensors. The output offsets of the capacitive sensing chain due to parasitic capacitances and process variations are automatically removed using the AOCL. The AOCL generates an electrically equivalent offset capacitance and enables charge-domain fine calibration using a 10-bit R-2R digital-to-analog converter, charge-transfer switches, and a charge-storing capacitor. The AOCL cancels the unwanted offset with a binary-search algorithm based on 10-bit successive approximation register (SAR) logic. The chip is implemented in a 0.18 μm complementary metal-oxide-semiconductor (CMOS) process with an active area of 1.76 mm2. The power consumption is 220 μW with a 3.3 V supply. Input parasitic capacitances within the range of −250 fF to 250 fF can be cancelled out automatically, and the required calibration time is below 10 ms. PMID:26473877

  6. Automatic bone detection and soft tissue aware ultrasound-CT registration for computer-aided orthopedic surgery.

    PubMed

    Wein, Wolfgang; Karamalis, Athanasios; Baumgartner, Adrian; Navab, Nassir

    2015-06-01

    The transfer of preoperative CT data into the tracking-system coordinates within an operating room is of high interest for computer-aided orthopedic surgery. In this work, we introduce a solution for intra-operative ultrasound-CT registration of bones. We have developed methods for fully automatic real-time bone detection in ultrasound images and global automatic registration to CT. The bone detection algorithm uses a novel bone-specific feature descriptor and was thoroughly evaluated on both in-vivo and ex-vivo data. A global optimization strategy aligns the bone surface, followed by a soft-tissue-aware intensity-based registration to provide higher local registration accuracy. We evaluated the system on femur, tibia and fibula anatomy in a cadaver study with human legs, where magnetically tracked bone markers were implanted to yield ground-truth information. An overall median system error of 3.7 mm was achieved on 11 datasets. Global and fully automatic registration of bones acquired with ultrasound to CT is feasible, with bone detection and tracking operating in real time for immediate feedback to the surgeon.

  7. Application of Artificial Intelligence to Improve Aircraft Survivability.

    DTIC Science & Technology

    1985-12-01

    may be as smooth and effective as possible. 3. Fully Automatic Digital Engine Control (FADEC) Under development at the Naval Weapons Center, a major...goal of the FADEC program is to significantly reduce engine vulnerability by fully automating the regulation of engine controls. Given a thrust

  8. Electrical treatment of atrial arrhythmias in heart failure patients implanted with a dual defibrillator CRT device. Results from the TRADE-HF study.

    PubMed

    Botto, Giovanni Luca; Padeletti, Luigi; Covino, Gregorio; Pieragnoli, Paolo; Liccardo, Mattia; Mariconti, Barbara; Favale, Stefano; Molon, Giulio; De Filippo, Paolo; Bolognese, Leonardo; Landolina, Maurizio; Raciti, Giovanni; Boriani, Giuseppe

    2017-06-01

    Ventricular and atrial arrhythmias commonly occur in heart failure patients and are a significant source of symptoms, morbidity and mortality. Some specific generators, referred to as dual defibrillators (Dual CRT-Ds), have the ability to treat atrial and ventricular arrhythmias. TRADE-HF is a prospective two-arm randomized study aimed at assessing the benefits of complete automatic management of atrial arrhythmias in patients implanted with a dual CRT-D. The primary objective of the TRADE-HF study was to document a reduction in unplanned hospital admissions for cardiac reasons, death for cardiovascular causes or progression to permanent AF, comparing fully automatic device-driven therapy for atrial tachycardia or fibrillation (AT/AF) with an in-hospital approach for treatment of symptomatic AT/AF. Randomized patients were followed every 6 months for 3 years to assess the primary objective. Four hundred twenty patients were enrolled in the study. At the end of the study, 30 subjects had died of cardiovascular causes, 60 had at least one hospitalization for cardiovascular causes and 14 had developed permanent AF. Eighty-seven patients experienced a composite event. The hazard ratio for the device-managed automatic therapy arm compared with the traditional arm was 0.987 (95% CI: 0.684-1.503; p=0.951). The primary endpoint analysis showed no difference between the device-managed and in-hospital treatment arms. The TRADE-HF study failed to demonstrate a reduction in the composite of unplanned hospitalizations for cardiovascular causes, death for cardiovascular causes or progression to permanent AF using automatic atrial therapy compared with a traditional approach including hospitalization for symptomatic episodes and/or in-hospital treatment of AT/AF. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Ultrasound image-based thyroid nodule automatic segmentation using convolutional neural networks.

    PubMed

    Ma, Jinlian; Wu, Fa; Jiang, Tian'an; Zhao, Qiyu; Kong, Dexing

    2017-11-01

    Delineation of thyroid nodule boundaries from ultrasound images plays an important role in the calculation of clinical indices and the diagnosis of thyroid diseases. However, accurate and automatic segmentation of thyroid nodules is challenging because of their heterogeneous appearance and components similar to the background. In this study, we employ a deep convolutional neural network (CNN) to automatically segment thyroid nodules from ultrasound images. Our CNN-based method formulates thyroid nodule segmentation as a patch classification task, where the relationship among patches is ignored. Specifically, the CNN takes image patches from images of normal thyroids and thyroid nodules as inputs and generates segmentation probability maps as outputs. A multi-view strategy is used to improve the performance of the CNN-based model. Additionally, we compared the performance of our approach with that of commonly used segmentation methods on the same dataset. The experimental results suggest that our proposed method outperforms prior methods on thyroid nodule segmentation, and show that the CNN-based model is able to delineate multiple nodules in thyroid ultrasound images accurately and effectively. In detail, our CNN-based model achieves averages of the overlap metric, Dice ratio, true positive rate, false positive rate, and modified Hausdorff distance of [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text] over all folds, respectively. Our proposed method is fully automatic, without any user interaction. Quantitative results also indicate that our method is efficient and accurate enough to replace the time-consuming and tedious manual segmentation approach, demonstrating its potential for clinical application.

  10. MatchGUI: A Graphical MATLAB-Based Tool for Automatic Image Co-Registration

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.

    2011-01-01

    MatchGUI software, based on MATLAB, automatically matches two images and displays the match result by superimposing one image on the other. A slider bar allows focus to shift between the two images. There are tools for zoom, auto-crop to the overlap region, and basic image markup. Given a pair of ortho-rectified images (focused primarily on Mars orbital imagery for now), this software automatically co-registers the imagery so that corresponding image pixels are aligned. MatchGUI requires minimal user input, and performs a registration over scale and in-plane rotation fully automatically.

  11. Integrated Evaluation of Reliability and Power Consumption of Wireless Sensor Networks.

    PubMed

    Dâmaso, Antônio; Rosa, Nelson; Maciel, Paulo

    2017-11-05

    Power consumption is a primary concern in Wireless Sensor Networks (WSNs), and a large number of strategies have been proposed to evaluate it. However, those approaches usually consider neither reliability issues nor the power consumption of applications executing in the network. A central concern is the lack of consolidated solutions that enable us to evaluate the power consumption of applications and the network stack while also considering their reliabilities. To solve this problem, we introduce a fully automatic solution for designing power-consumption-aware WSN applications and communication protocols. The solution presented in this paper comprises a methodology to evaluate power consumption based on the integration of formal models, a set of power consumption and reliability models, a sensitivity-analysis strategy for selecting WSN configurations, and a toolbox named EDEN that fully supports the proposed methodology. This solution allows accurately estimating the power consumption of WSN applications and the network stack in an automated way.

  12. Convolutional neural networks for an automatic classification of prostate tissue slides with high-grade Gleason score

    NASA Astrophysics Data System (ADS)

    Jiménez del Toro, Oscar; Atzori, Manfredo; Otálora, Sebastian; Andersson, Mats; Eurén, Kristian; Hedlund, Martin; Rönnquist, Peter; Müller, Henning

    2017-03-01

    The Gleason grading system was developed for assessing prostate histopathology slides and is correlated with the outcome and incidence of relapse in prostate cancer. Although this grading is part of a standard protocol performed by pathologists, visual inspection of whole slide images (WSIs) has an inherent subjectivity when evaluated by different pathologists. Computer-aided pathology has been proposed to generate an objective and reproducible assessment that can help pathologists in their evaluation of new tissue samples. Deep convolutional neural networks are a promising approach for the automatic classification of histopathology images and can hierarchically learn subtle visual features from the data. However, a large number of manual annotations from pathologists are commonly required to obtain sufficient statistical generalization when training new models that can evaluate the large amounts of pathology data generated daily. A fully automatic approach that detects prostatectomy WSIs with high-grade Gleason score is proposed. We evaluate the performance of various deep learning architectures, training them with patches extracted from automatically generated regions of interest rather than from manually segmented ones. Relevant training parameters, such as the size and number of patches and the inclusion or not of data augmentation, are compared between the tested deep learning architectures. 235 prostate tissue WSIs with their pathology reports from the publicly available TCGA data set were used. An accuracy of 78% was obtained on a balanced set of 46 unseen test images with different Gleason grades in a 2-class decision: high vs. low Gleason grade. Grades 7-8, which represent the boundary decision of the proposed task, were particularly well classified. The method is scalable to larger data sets, with straightforward re-training of the model to include data from multiple sources, scanners and acquisition techniques. Automatically generated heatmaps for the WSIs could be useful for improving the selection of patches when training networks on big data sets and for guiding the visual inspection of these images.

  13. An Automatic Method for Geometric Segmentation of Masonry Arch Bridges for Structural Engineering Purposes

    NASA Astrophysics Data System (ADS)

    Riveiro, B.; DeJong, M.; Conde, B.

    2016-06-01

    Despite the tremendous advantages of laser scanning technology for the geometric characterization of built constructions, important limitations prevent more widespread implementation in the structural engineering domain. Even though the technology provides extensive and accurate information for structural assessment and health monitoring, many people are resistant to it due to the processing times involved. Thus, new methods are required that can automatically process LiDAR data and subsequently provide an automatic and organized interpretation. This paper presents a new method for fully automated point cloud segmentation of masonry arch bridges. The method efficiently creates segmented, spatially related and organized point clouds, each containing the relevant geometric data for a particular component (pier, arch, spandrel wall, etc.) of the structure. The segmentation procedure comprises a heuristic approach for the separation of the different vertical walls; image processing tools adapted to voxel structures then allow the efficient segmentation of the main structural elements of the bridge. The proposed methodology provides the essential processed data required for structural assessment of masonry arch bridges based on geometric anomalies. The method is validated using a representative sample of masonry arch bridges in Spain.

  14. An artificial intelligence approach to classify and analyse EEG traces.

    PubMed

    Castellaro, C; Favaro, G; Castellaro, A; Casagrande, A; Castellaro, S; Puthenparampil, D V; Salimbeni, C Fattorello

    2002-06-01

    We present a fully automatic system for the classification and analysis of adult electroencephalograms (EEGs). The system is based on an artificial neural network, which classifies the single epochs of the trace, and on an Expert System (ES), which studies the time and space correlation among the outputs of the neural network and compiles a final report. On the last 2000 EEGs, representing different kinds of alterations according to clinical occurrence, the system produced 80% good or very good final comments and 18% sufficient comments, which represent the documents delivered to the patient. In the remaining 2%, the automatic comment needed some modification before being presented to the patient. No clinical false-negative classifications arose, i.e. no altered traces were classified as 'normal' by the neural network. The analysis method we describe is based on the interpretation of objective measures performed on the trace. It can improve the quality and reliability of the EEG exam and appears useful for EEG medical reports, although it cannot totally substitute for the medical doctor, who should read the automatic EEG analysis in light of the patient's history and age.

  15. Phase II modification of the Water Availability Tool for Environmental Resources (WATER) for Kentucky: The sinkhole-drainage process, point-and-click basin delineation, and results of karst test-basin simulations

    USGS Publications Warehouse

    Taylor, Charles J.; Williamson, Tanja N.; Newson, Jeremy K.; Ulery, Randy L.; Nelson, Hugh L.; Cinotto, Peter J.

    2012-01-01

    This report describes Phase II modifications made to the Water Availability Tool for Environmental Resources (WATER), which applies the process-based TOPMODEL approach to simulate or predict stream discharge in surface basins in the Commonwealth of Kentucky. The previous (Phase I) version of WATER did not provide a means of identifying sinkhole catchments or accounting for the effects of karst (internal) drainage in a TOPMODEL-simulated basin. In the Phase II version of WATER, sinkhole catchments are automatically identified and delineated as internally drained subbasins, and a modified TOPMODEL approach (called the sinkhole-drainage process, or SDP-TOPMODEL) is applied that calculates mean daily discharges for the basin based on summed area-weighted contributions from sinkhole drainage (SD) areas and non-karstic topographically drained (TD) areas. Results obtained using the SDP-TOPMODEL approach were evaluated for 12 karst test basins located in each of the major karst terrains in Kentucky. Visual comparison of simulated hydrographs and flow-duration curves, along with statistical measures applied to the simulated discharge data (bias, correlation, root mean square error, and Nash-Sutcliffe efficiency coefficients), indicates that the SDP-TOPMODEL approach provides acceptably accurate estimates of discharge for most flow conditions and typically provides more accurate simulation of stream discharge in karstic basins compared to the standard TOPMODEL approach. Additional programming modifications made to the Phase II version of WATER included implementation of a point-and-click graphical user interface (GUI), which fully automates the delineation of simulation-basin boundaries and improves the speed of input-data processing. The Phase II version of WATER enables the user to select a pour point anywhere on a stream reach of interest, and the program will automatically delineate all upstream areas that contribute drainage to that point. This capability enables automatic delineation of a simulation basin of any size (area) and any level of stream-network complexity. WATER then automatically identifies the presence of sinkhole catchments within the simulation-basin boundaries; extracts and compiles the necessary climatic, topographic, and basin-characteristics datasets; and runs the SDP-TOPMODEL approach to estimate daily mean discharges (streamflow).
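
    The SDP-TOPMODEL combination step reduces to an area-weighted sum of the two drainage contributions. A one-function sketch of that bookkeeping (variable names are illustrative):

    ```python
    def sdp_discharge(q_sd, area_sd, q_td, area_td):
        """Basin mean daily discharge as the area-weighted sum of
        sinkhole-drained (SD) and topographically drained (TD) parts."""
        return (q_sd * area_sd + q_td * area_td) / (area_sd + area_td)

    # e.g. 60% of the basin drains internally through sinkholes
    print(sdp_discharge(q_sd=0.8, area_sd=60.0, q_td=1.4, area_td=40.0))  # 1.04
    ```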

  16. Automatic segmentation for detecting uterine fibroid regions treated with MR-guided high intensity focused ultrasound (MR-HIFU).

    PubMed

    Antila, Kari; Nieminen, Heikki J; Sequeiros, Roberto Blanco; Ehnholm, Gösta

    2014-07-01

    Up to 25% of women suffer from uterine fibroids (UF) that cause infertility, pain, and discomfort. MR-guided high intensity focused ultrasound (MR-HIFU) is an emerging technique for noninvasive, computer-guided thermal ablation of UFs. The volume of induced necrosis is a predictor of the success of the treatment. However, accurate volume assessment by hand can be time consuming, and quick tools produce biased results. Therefore, fast and reliable tools are required to estimate the technical treatment outcome during the therapy event so as to predict symptom relief. A novel technique has been developed for the segmentation and volume assessment of the treated region. Conventional algorithms typically require user interaction or a priori knowledge of the target. The developed algorithm exploits the treatment plan, i.e. the coordinates of the intended ablation, for fully automatic segmentation with no user input. A good similarity to an expert-segmented manual reference was achieved (Dice similarity coefficient = 0.880 ± 0.074). The average automatic segmentation time was 1.6 ± 0.7 min per patient, against an order of tens of minutes when done manually. The results suggest that the segmentation algorithm developed, requiring no user input, provides a feasible and practical approach for the automatic evaluation of the boundary and volume of the HIFU-treated region.

  17. Automatic differential analysis of NMR experiments in complex samples.

    PubMed

    Margueritte, Laure; Markov, Petar; Chiron, Lionel; Starck, Jean-Philippe; Vonthron-Sénécheau, Catherine; Bourjot, Mélanie; Delsuc, Marc-André

    2018-06-01

    Liquid state nuclear magnetic resonance (NMR) is a powerful tool for the analysis of complex mixtures of unknown molecules. This capacity has been used in many analytical approaches: metabolomics, identification of active compounds in natural extracts, and characterization of species; such studies require the acquisition of many diverse NMR measurements on series of samples. Although acquisition can easily be performed automatically, the number of NMR experiments involved in these studies increases very rapidly, and this data avalanche requires resorting to automatic processing and analysis. We present here a program that allows the autonomous, unsupervised processing of a large corpus of 1D, 2D, and diffusion-ordered spectroscopy experiments from a series of samples acquired in different conditions. The program provides all the signal processing steps, as well as peak-picking and bucketing of 1D and 2D spectra; the program and its components are fully available. In an experiment mimicking the search for a bioactive species in a natural extract, we use it for the automatic detection of small amounts of artemisinin added to a series of plant extracts and for the generation of the spectral fingerprint of this molecule. This program, called Plasmodesma, is a novel tool that should be useful for deciphering complex mixtures, particularly in the discovery of biologically active natural products from plant extracts, but also in drug discovery or metabolomics studies. Copyright © 2017 John Wiley & Sons, Ltd.
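
    Bucketing reduces each spectrum to a fixed-length fingerprint vector that can be compared across samples. A minimal sketch of 1D bucketing follows; the bucket width and the synthetic ppm axis are illustrative choices, not Plasmodesma's actual parameters.

```python
# Minimal sketch of 1D spectral bucketing: integrate the spectrum over
# fixed-width ppm buckets to build a fingerprint vector.
import numpy as np

def bucket_spectrum(ppm, intensity, width=0.04):
    """Sum intensities over contiguous ppm buckets of the given width."""
    edges = np.arange(ppm.min(), ppm.max() + width, width)
    idx = np.digitize(ppm, edges) - 1
    buckets = np.zeros(len(edges) - 1)
    np.add.at(buckets, idx.clip(0, len(buckets) - 1), intensity)
    return edges[:-1], buckets

ppm = np.linspace(0.5, 9.5, 16384)             # synthetic chemical-shift axis
spec = np.exp(-((ppm - 2.5) / 0.01) ** 2)      # one synthetic peak at 2.5 ppm
centers, fingerprint = bucket_spectrum(ppm, spec)
print(fingerprint.shape, centers[fingerprint.argmax()])
```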

  18. Manifold learning for automatically predicting articular cartilage morphology in the knee with data from the osteoarthritis initiative (OAI)

    NASA Astrophysics Data System (ADS)

    Donoghue, C.; Rao, A.; Bull, A. M. J.; Rueckert, D.

    2011-03-01

    Osteoarthritis (OA) is a degenerative, debilitating disease with a large socio-economic impact. This study looks to manifold learning as an automatic approach to harness the plethora of data provided by the Osteoarthritis Initiative (OAI). We construct several Laplacian eigenmap embeddings of articular cartilage appearance from MR images of the knee using multiple MR sequences. A region of interest (ROI), defined as the weight-bearing medial femur, is automatically located in all images through non-rigid registration. A pairwise intensity-based similarity measure is computed between all images, resulting in a fully connected graph, where each vertex represents an image and the weight of each edge is the similarity measure. Spectral analysis is then applied to these pairwise similarities, which reduces the dimensionality non-linearly and embeds the images in a manifold representation. In the manifold space, images that are close to each other are considered to be more "similar" than those far away. In the experiment presented here, we use manifold learning to automatically predict morphological changes in the articular cartilage by using the coordinates of the images in the manifold as independent variables for multiple linear regression. Five manifolds are generated from five sequences of 390 distinct knees. We find statistically significant correlations (up to R^2 = 0.75) between our predictors and the results presented in the literature.
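
    The pipeline's skeleton can be reproduced with scikit-learn, as in the hedged sketch below: a precomputed affinity matrix feeds a Laplacian eigenmap (spectral embedding), and the embedding coordinates serve as regressors. The Gaussian similarity kernel and the synthetic target score are assumptions standing in for the paper's intensity-based measure and cartilage morphology values.

```python
# Sketch: pairwise similarities -> Laplacian eigenmap -> linear regression.
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
images = rng.normal(size=(100, 32 * 32))        # 100 flattened ROI "images"

# Pairwise intensity-based similarity -> fully connected affinity graph.
d2 = ((images[:, None, :] - images[None, :, :]) ** 2).sum(-1)
affinity = np.exp(-d2 / d2.mean())

coords = SpectralEmbedding(n_components=5,
                           affinity="precomputed").fit_transform(affinity)

target = rng.normal(size=100)                   # stand-in morphology score
model = LinearRegression().fit(coords, target)
print("R^2 on training data:", model.score(coords, target))
```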

  19. Automatic 3D reconstruction of electrophysiology catheters from two-view monoplane C-arm image sequences.

    PubMed

    Baur, Christoph; Milletari, Fausto; Belagiannis, Vasileios; Navab, Nassir; Fallavollita, Pascal

    2016-07-01

    Catheter guidance is a vital task for the success of electrophysiology interventions. It is usually provided through fluoroscopic images that are taken intra-operatively. The cardiologists, who are typically equipped with C-arm systems, scan the patient from multiple views rotating the fluoroscope around one of its axes. The resulting sequences allow the cardiologists to build a mental model of the 3D position of the catheters and interest points from the multiple views. We describe and compare different 3D catheter reconstruction strategies and ultimately propose a novel and robust method for the automatic reconstruction of 3D catheters in non-synchronized fluoroscopic sequences. This approach does not purely rely on triangulation but incorporates prior knowledge about the catheters. In conjunction with an automatic detection method, we demonstrate the performance of our method compared to ground truth annotations. In our experiments that include 20 biplane datasets, we achieve an average reprojection error of 0.43 mm and an average reconstruction error of 0.67 mm compared to gold standard annotation. In clinical practice, catheters suffer from complex motion due to the combined effect of heartbeat and respiratory motion. As a result, any 3D reconstruction algorithm via triangulation is imprecise. We have proposed a new method that is fully automatic and highly accurate to reconstruct catheters in three dimensions.

  20. Automatic segmentation of abdominal organs and adipose tissue compartments in water-fat MRI: Application to weight-loss in obesity.

    PubMed

    Shen, Jun; Baum, Thomas; Cordes, Christian; Ott, Beate; Skurk, Thomas; Kooijman, Hendrik; Rummeny, Ernst J; Hauner, Hans; Menze, Bjoern H; Karampinos, Dimitrios C

    2016-09-01

    To develop a fully automatic algorithm for the segmentation of abdominal organs and adipose tissue compartments, and to assess organ and adipose tissue volume changes in longitudinal water-fat magnetic resonance imaging (MRI) data. Axial two-point Dixon images were acquired in 20 obese women (age range 24-65, BMI 34.9 ± 3.8 kg/m²) before and after a four-week calorie restriction. Abdominal organs, subcutaneous adipose tissue (SAT) compartments (abdominal, anterior, posterior), SAT regions along the feet-head direction and regional visceral adipose tissue (VAT) were assessed by a fully automatic algorithm using morphological operations and a multi-atlas-based segmentation method. The accuracy of organ segmentation, represented by Dice coefficients, ranged from 0.672 ± 0.155 for the pancreas to 0.943 ± 0.023 for the liver. Abdominal SAT changes were significantly greater in the posterior than the anterior SAT compartment (-11.4% ± 5.1% versus -9.5% ± 6.3%, p < 0.001). The loss of VAT that was not located around any organ (-16.1% ± 8.9%) was significantly greater than the loss of VAT within 5 cm of the liver, left and right kidney, spleen, and pancreas (p < 0.05). The presented fully automatic algorithm showed good performance in abdominal adipose tissue and organ segmentation, and allowed the detection of changes in SAT and VAT subcompartments during weight loss. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
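
    The Dice coefficient quoted above (and throughout these records) is straightforward to compute; a minimal sketch for two binary masks:

```python
# Dice similarity coefficient between an automatic and a manual mask.
import numpy as np

def dice(a, b):
    """Dice coefficient between two boolean masks (1.0 if both empty)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((64, 64), bool); auto[10:40, 10:40] = True
manual = np.zeros((64, 64), bool); manual[12:42, 12:42] = True
print(f"Dice = {dice(auto, manual):.3f}")
```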

  1. Shape based segmentation of MRIs of the bones in the knee using phase and intensity information

    NASA Astrophysics Data System (ADS)

    Fripp, Jurgen; Bourgeat, Pierrick; Crozier, Stuart; Ourselin, Sébastien

    2007-03-01

    The segmentation of the bones from MR images is useful for performing subsequent segmentation and quantitative measurements of cartilage tissue. In this paper, we present a shape based segmentation scheme for the bones that uses texture features derived from the phase and intensity information in the complex MR image. The phase can provide additional information about the tissue interfaces, but due to the phase unwrapping problem, this information is usually discarded. By using a Gabor filter bank on the complex MR image, texture features (including phase) can be extracted without requiring phase unwrapping. These texture features are then analyzed using a support vector machine classifier to obtain tissue probability estimates. The segmentation of the bone is fully automatic and performed using a 3D active shape model based approach driven by gradient and texture information. The 3D active shape model is automatically initialized using a robust affine registration. The approach is validated using a database of 18 FLASH MR images that were manually segmented, with an average segmentation overlap (Dice similarity coefficient) of 0.92, compared to 0.9 obtained using the classifier only.
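
    A hedged sketch of the texture-extraction step: a small Gabor filter bank applied with scikit-image, keeping the complex response magnitude so that phase information contributes without unwrapping. The frequencies and orientation count are illustrative, not the paper's settings.

```python
# Per-pixel Gabor texture features from a (real-valued stand-in) image.
import numpy as np
from skimage.filters import gabor

def gabor_features(image, frequencies=(0.1, 0.2, 0.4), n_orientations=4):
    """Stack per-pixel magnitude responses of a small Gabor filter bank."""
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            real, imag = gabor(image, frequency=f,
                               theta=k * np.pi / n_orientations)
            feats.append(np.hypot(real, imag))   # magnitude of complex response
    return np.stack(feats, axis=-1)              # (H, W, n_features)

img = np.random.rand(64, 64)
print(gabor_features(img).shape)                 # (64, 64, 12)
```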

  2. Representation and Reconstruction of Triangular Irregular Networks with Vertical Walls

    NASA Astrophysics Data System (ADS)

    Gorte, B.; Lesparre, J.

    2012-06-01

    Point clouds obtained by aerial laser scanning are a convenient input source for high resolution 2.5d elevation models, such as the Dutch AHN-2. More challenging is the fully automatic reconstruction of 3d city models. An actual demand for a combined 2.5d terrain and 3d city model for an urban hydrology application led to the design of an extension to the well-known Delaunay triangulated irregular networks (TINs) to accommodate vertical walls. In addition we introduce methods to generate and refine models adhering to our data structure. These are based on combining two approaches: a representation of the TIN using stars of vertices and triangles, together with segmenting the TIN on the basis of coplanarity of adjacent triangles. The approach is intended to deliver the complete model, including walls at the correct locations, without relying on additional map data, as such data often lack completeness, currency and accuracy, and moreover usually do not account for parts of facades that do not go down to street level. However, automatic detection of height discontinuities to obtain the exact location of the walls is currently still under implementation.

  3. A Scalable Framework For Segmenting Magnetic Resonance Images

    PubMed Central

    Hore, Prodip; Goldgof, Dmitry B.; Gu, Yuhua; Maudsley, Andrew A.; Darkazanli, Ammar

    2009-01-01

    A fast, accurate and fully automatic method of segmenting magnetic resonance images of the human brain is introduced. The approach scales well allowing fast segmentations of fine resolution images. The approach is based on modifications of the soft clustering algorithm, fuzzy c-means, that enable it to scale to large data sets. Two types of modifications to create incremental versions of fuzzy c-means are discussed. They are much faster when compared to fuzzy c-means for medium to extremely large data sets because they work on successive subsets of the data. They are comparable in quality to application of fuzzy c-means to all of the data. The clustering algorithms coupled with inhomogeneity correction and smoothing are used to create a framework for automatically segmenting magnetic resonance images of the human brain. The framework is applied to a set of normal human brain volumes acquired from different magnetic resonance scanners using different head coils, acquisition parameters and field strengths. Results are compared to those from two widely used magnetic resonance image segmentation programs, Statistical Parametric Mapping and the FMRIB Software Library (FSL). The results are comparable to FSL while providing significant speed-up and better scalability to larger volumes of data. PMID:20046893
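
    A simplified sketch of the single-pass idea behind the incremental variants: each chunk of data updates weighted centroid accumulators, so the full volume never has to sit in memory at once. The published algorithms additionally weight condensed summaries of earlier chunks; this toy version omits that.

```python
# Toy single-pass fuzzy c-means: sweep over data chunks, accumulating
# membership-weighted sums, then update centroids once per sweep.
import numpy as np

def memberships(X, C, m=2.0):
    """Fuzzy memberships of points X (n,d) w.r.t. centroids C (c,d)."""
    d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=-1) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

def single_pass_fcm(chunks, C, m=2.0):
    num = np.zeros_like(C)
    den = np.zeros(len(C))
    for X in chunks:                    # one sweep over chunked data
        u = memberships(X, C, m) ** m
        num += u.T @ X
        den += u.sum(axis=0)
    return num / den[:, None]

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 0.3, (500, 1)),
                       rng.normal(3, 0.3, (500, 1))])
rng.shuffle(data)
C = np.array([[data.min()], [data.max()]])   # spread-out initialization
for _ in range(10):                          # repeat sweeps until stable
    C = single_pass_fcm(np.array_split(data, 5), C)
print("centroids:", C.ravel())               # expected near 0 and 3
```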

  4. Automatic Near-Real-Time Image Processing Chain for Very High Resolution Optical Satellite Data

    NASA Astrophysics Data System (ADS)

    Ostir, K.; Cotar, K.; Marsetic, A.; Pehani, P.; Perse, M.; Zaksek, K.; Zaletelj, J.; Rodic, T.

    2015-04-01

    In response to the increasing need for automatic and fast satellite image processing, SPACE-SI has developed and implemented a fully automatic image processing chain STORM that performs all processing steps from sensor-corrected optical images (level 1) to web-delivered map-ready images and products without operator intervention. Initial development was tailored to high resolution RapidEye images, and all crucial and most challenging parts of the planned full processing chain were developed: a module for automatic image orthorectification based on a physical sensor model and supported by an algorithm for automatic detection of ground control points (GCPs); an atmospheric correction module; a topographic corrections module that combines a physical approach with the Minnaert method and utilizes an anisotropic illumination model; and modules for generation of high level products. Various parts of the chain were also implemented for WorldView-2, THEOS, Pleiades, SPOT 6, Landsat 5-8, and PROBA-V. Support for a full-frame sensor currently under development by SPACE-SI is planned. The proposed paper focuses on the adaptation of the STORM processing chain to very high resolution multispectral images. The development concentrated on the sub-module for automatic detection of GCPs. The initially implemented two-step algorithm, which worked only with rasterized vector roads and delivered GCPs with sub-pixel accuracy for the RapidEye images, was improved with the introduction of a third step: super-fine positioning of each GCP based on a reference raster chip. The added step exploits the high spatial resolution of the reference raster to improve the final matching results and to achieve pixel accuracy also on very high resolution optical satellite data.
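
    The super-fine positioning step is essentially template matching of a reference chip inside a small search window. A hedged sketch with scikit-image's normalized cross-correlation (window and chip sizes are illustrative):

```python
# Refine an approximate GCP by matching a reference raster chip.
import numpy as np
from skimage.feature import match_template

def refine_gcp(image, chip, approx_rc, search=20):
    """Return the refined (row, col) of the chip centre near approx_rc."""
    r, c = approx_rc
    h, w = chip.shape
    win = image[r - search - h // 2: r + search + h // 2 + h % 2,
                c - search - w // 2: c + search + w // 2 + w % 2]
    score = match_template(win, chip)      # normalized cross-correlation
    dr, dc = np.unravel_index(np.argmax(score), score.shape)
    return (r - search + dr, c - search + dc)

img = np.random.rand(256, 256)
chip = img[100:115, 120:135].copy()        # true chip centre (107, 127)
print(refine_gcp(img, chip, approx_rc=(110, 130)))
```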

  5. Automatic localization of the da Vinci surgical instrument tips in 3-D transrectal ultrasound.

    PubMed

    Mohareri, Omid; Ramezani, Mahdi; Adebar, Troy K; Abolmaesumi, Purang; Salcudean, Septimiu E

    2013-09-01

    Robot-assisted laparoscopic radical prostatectomy (RALRP) using the da Vinci surgical system is the current state-of-the-art treatment option for clinically confined prostate cancer. Given the limited field of view of the surgical site in RALRP, several groups have proposed the integration of transrectal ultrasound (TRUS) imaging in the surgical workflow to assist with accurate resection of the prostate and the sparing of the neurovascular bundles (NVBs). We previously introduced a robotic TRUS manipulator and a method for automatically tracking da Vinci surgical instruments with the TRUS imaging plane, in order to facilitate the integration of intraoperative TRUS in RALRP. Rapid and automatic registration of the kinematic frames of the da Vinci surgical system and the robotic TRUS probe manipulator is a critical component of the instrument tracking system. In this paper, we propose a fully automatic registration technique based on automatic 3-D TRUS localization of robot instrument tips pressed against the air-tissue boundary anterior to the prostate. The detection approach uses a multiscale filtering technique to identify and localize surgical instrument tips in the TRUS volume, and could also be used to detect other surface fiducials in 3-D ultrasound. Experiments have been performed using a tissue phantom and two ex vivo tissue samples to show the feasibility of the proposed methods. Also, an initial in vivo evaluation of the system has been carried out on a live anaesthetized dog with a da Vinci Si surgical system and a target registration error (defined as the root mean square distance of corresponding points after registration) of 2.68 mm has been achieved. Results show this method's accuracy and consistency for automatic registration of TRUS images to the da Vinci surgical system.
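
    The paper's multiscale filter is not spelled out in this abstract; as a generic stand-in, the sketch below detects a point-like tip with scale-normalized Laplacian-of-Gaussian responses over several scales, a standard multiscale blob filter.

```python
# Generic multiscale blob filtering (stand-in, not the authors' filter).
import numpy as np
from scipy.ndimage import gaussian_laplace

def multiscale_log(volume, sigmas=(1.0, 2.0, 3.0)):
    """Max scale-normalized -LoG response; bright blobs give large values."""
    responses = [-(s ** 2) * gaussian_laplace(volume.astype(float), s)
                 for s in sigmas]
    return np.max(responses, axis=0)

vol = np.zeros((32, 32, 32))
vol[16, 16, 16] = 1.0                        # point-like "instrument tip"
resp = multiscale_log(vol)
print("peak at:", np.unravel_index(np.argmax(resp), resp.shape))
```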

  6. Fully automatic segmentation of white matter hyperintensities in MR images of the elderly.

    PubMed

    Admiraal-Behloul, F; van den Heuvel, D M J; Olofsen, H; van Osch, M J P; van der Grond, J; van Buchem, M A; Reiber, J H C

    2005-11-15

    The role of quantitative image analysis in large clinical trials is continuously increasing. Several methods are available for performing white matter hyperintensity (WMH) volume quantification. They vary in the amount of human interaction involved. In this paper, we describe a fully automatic segmentation that was used to quantify WMHs in a large clinical trial on elderly subjects. Our segmentation method combines information from 3 different MR images: proton density (PD), T2-weighted and fluid-attenuated inversion recovery (FLAIR) images; our method uses an established artificial intelligence technique (fuzzy inference system) and does not require extensive computations. The reproducibility of the segmentation was evaluated in 9 patients who underwent scan-rescan with repositioning; an intraclass correlation coefficient (ICC) of 0.91 was obtained. The effect of differences in image resolution was tested in 44 patients, scanned with 6- and 3-mm slice thickness FLAIR images; we obtained an ICC value of 0.99. The accuracy of the segmentation was evaluated on 100 patients for whom manual delineation of WMHs was available; the obtained ICC was 0.98 and the similarity index was 0.75. Besides the fact that the approach demonstrated very high volumetric and spatial agreement with expert delineation, the software did not require more than 2 min per patient (from loading the images to saving the results) on a Pentium-4 processor (512 MB RAM).

  7. Automated red blood cells extraction from holographic images using fully convolutional neural networks.

    PubMed

    Yi, Faliu; Moon, Inkyu; Javidi, Bahram

    2017-10-01

    In this paper, we present two models for automatically extracting red blood cells (RBCs) from RBCs holographic images based on a deep learning fully convolutional neural network (FCN) algorithm. The first model, called FCN-1, only uses the FCN algorithm to carry out RBCs prediction, whereas the second model, called FCN-2, combines the FCN approach with the marker-controlled watershed transform segmentation scheme to achieve RBCs extraction. Both models achieve good segmentation accuracy. In addition, the second model has much better performance in terms of cell separation than traditional segmentation methods. In the proposed methods, the RBCs phase images are first numerically reconstructed from RBCs holograms recorded with off-axis digital holographic microscopy. Then, some RBCs phase images are manually segmented and used as training data to fine-tune the FCN. Finally, each pixel in new input RBCs phase images is predicted into either foreground or background using the trained FCN models. The RBCs prediction result from the first model is the final segmentation result, whereas the result from the second model is used as the internal markers of the marker-controlled transform algorithm for further segmentation. Experimental results show that the given schemes can automatically extract RBCs from RBCs phase images and much better RBCs separation results are obtained when the FCN technique is combined with the marker-controlled watershed segmentation algorithm.

  9. Automatic chemical vapor deposition

    NASA Technical Reports Server (NTRS)

    Kennedy, B. W.

    1981-01-01

    Report reviews chemical vapor deposition (CVD) for processing integrated circuits and describes fully automatic machine for CVD. CVD proceeds at relatively low temperature, allows wide choice of film compositions (including graded or abruptly changing compositions), and deposits uniform films of controllable thickness at fairly high growth rate. Report gives overview of hardware, reactants, and temperature ranges used with CVD machine.

  10. The Study of Fingerprint Characteristics of Dayi Pu-Erh Tea Using a Fully Automatic HS-SPME/GC–MS and Combined Chemometrics Method

    PubMed Central

    Lv, Shidong; Wu, Yuanshuang; Zhou, Jiangsheng; Lian, Ming; Li, Changwen; Xu, Yongquan; Liu, Shunhang; Wang, Chao; Meng, Qingxiong

    2014-01-01

    The quality of tea is presently evaluated by the sensory assessment of professional tea tasters; however, this approach is both inconsistent and inaccurate. A more standardized and efficient method is urgently needed to objectively evaluate tea quality. In this study, the chemical fingerprints of 7 different Dayi Pu-erh tea brands and 3 different Ya'an tea brands on the market were analyzed using fully automatic headspace solid-phase microextraction (HS-SPME) combined with gas chromatography-mass spectrometry (GC–MS). A total of 78 volatiles were separated, of which 75 were identified by GC–MS in the seven Dayi Pu-erh teas; the major chemical components included methoxyphenolic compounds, hydrocarbons, and alcohol compounds, such as 1,2,3-trimethoxybenzene, 1,2,4-trimethoxybenzene, 2,6,10,14-tetramethyl-pentadecane, linalool and its oxides, α-terpineol, and phytol. The overlapping ratio of peaks (ORP) of the chromatogram in the seven Dayi Pu-erh tea samples was greater than 89.55%, whereas the ORP of the Ya'an tea samples was less than 79.10%. The similarity and differences of the Dayi Pu-erh tea samples were also characterized using correlation-coefficient similarity and principal component analysis (PCA). The results showed that the correlation coefficient of similarity of the seven Dayi Pu-erh tea samples was greater than 0.820 and that the samples gathered in a specific area, indicating that samples from different brands were basically the same, despite slight differences in some chemical indexes. These results showed that the GC–MS fingerprint combined with the PCA approach can be used as an effective tool for the quality assessment and control of Pu-erh tea. PMID:25551231
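
    A minimal sketch of the chemometrics step: correlation similarity and PCA applied to a samples-by-volatiles fingerprint matrix. The data are synthetic stand-ins for the 78-peak tea fingerprints.

```python
# Correlation similarity and PCA on fingerprint vectors (samples x peaks).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
brand_profile = rng.random(78)                           # 78 shared volatiles
samples = brand_profile + rng.normal(0, 0.02, (7, 78))   # 7 similar brands

corr = np.corrcoef(samples)                              # similarity matrix
print("min pairwise correlation:", corr.min().round(3))

scores = PCA(n_components=2).fit_transform(samples)
print("PC1/PC2 scores:\n", scores.round(3))              # tight cluster expected
```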

  11. Automatic Semantic Segmentation of Brain Gliomas from MRI Images Using a Deep Cascaded Neural Network

    PubMed Central

    Mao, Lei; Liu, Chang; Xiong, Shuyu

    2018-01-01

    Brain tumors can appear anywhere in the brain and have vastly different sizes and morphology. Additionally, these tumors are often diffused and poorly contrasted. Consequently, the segmentation of brain tumor and intratumor subregions using magnetic resonance imaging (MRI) data with minimal human intervention remains a challenging task. In this paper, we present a novel fully automatic segmentation method for MRI data containing in vivo brain gliomas. This approach can not only localize the entire tumor region but can also accurately segment the intratumor structure. The proposed work was based on a cascaded deep learning convolutional neural network consisting of two subnetworks: (1) a tumor localization network (TLN) and (2) an intratumor classification network (ITCN). The TLN, a fully convolutional network (FCN) in conjunction with transfer learning technology, was used to first process the MRI data. The goal of the first subnetwork was to define the tumor region from an MRI slice. Then, the ITCN was used to label the defined tumor region into multiple subregions. In particular, the ITCN exploited a convolutional neural network (CNN) with a deeper architecture and a smaller kernel. The proposed approach was validated on the multimodal brain tumor segmentation (BRATS 2015) datasets, which contain 220 high-grade glioma (HGG) and 54 low-grade glioma (LGG) cases. The Dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity were used as evaluation metrics. Our experimental results indicated that our method obtained promising segmentation results with a faster segmentation speed. More specifically, the proposed method obtained comparable and overall better DSC values (0.89, 0.77, and 0.80) on the combined (HGG + LGG) testing set, as compared to other methods reported in the literature. Additionally, the proposed approach was able to complete a segmentation task at a rate of 1.54 seconds per slice. PMID:29755716

  12. A fully automatic approach to register mobile mapping and airborne imagery to support the correction of platform trajectories in GNSS-denied urban areas

    NASA Astrophysics Data System (ADS)

    Jende, Phillipp; Nex, Francesco; Gerke, Markus; Vosselman, George

    2018-07-01

    Mobile Mapping (MM) solutions have become a significant extension to traditional data acquisition methods over the last years. Independently of the sensor carried by a platform, be it laser scanners or cameras, high-resolution data postings are confronted with poor absolute localisation accuracy in urban areas due to GNSS occlusions and multipath effects. Potentially inaccurate position estimates are propagated by IMUs, which are furthermore prone to drift effects. Thus, reliable and accurate absolute positioning on a par with MM's high-quality data remains an open issue. Multiple and diverse approaches have shown promising potential to mitigate GNSS errors in urban areas, but they cannot achieve decimetre accuracy, require manual effort, or have limitations with respect to costs and availability. This paper presents a fully automatic approach to support the correction of MM imaging data based on correspondences with airborne nadir images. These correspondences can be employed to correct the MM platform's orientation by an adjustment solution. Unlike MM as such, aerial images do not suffer from GNSS occlusions, and their accuracy is usually verified by employing well-established methods using ground control points. However, registration between MM and aerial images is a non-standard matching scenario and requires several strategies to yield reliable and accurate correspondences. Scale, perspective, and content vary strongly between both image sources, so traditional feature matching methods may fail. To this end, the registration process is designed to focus on common and clearly distinguishable elements, such as road markings, manholes, or kerbstones. With a registration accuracy of about 98%, reliable tie information between MM and aerial data can be derived. Although the adjustment strategy is not covered in its entirety in this paper, accuracy results after adjustment are presented. It is shown that decimetre accuracy is well achievable in a real-data test scenario.

  13. Motor automaticity in Parkinson’s disease

    PubMed Central

    Wu, Tao; Hallett, Mark; Chan, Piu

    2017-01-01

    Bradykinesia is the most important feature contributing to motor difficulties in Parkinson's disease (PD). However, the pathophysiology underlying bradykinesia is not fully understood. One important aspect is that PD patients have difficulty in performing learned motor skills automatically, but this problem has been generally overlooked. Here we review motor deficits in PD associated with impaired motor automaticity, such as reduced arm swing, decreased stride length, freezing of gait, micrographia and reduced facial expression. Recent neuroimaging studies have revealed some neural mechanisms underlying impaired motor automaticity in PD, including less efficient neural coding of movement, failure to shift automated motor skills to the sensorimotor striatum, instability of the automatic mode within the striatum, and use of attentional control and/or compensatory efforts to execute movements usually performed automatically in healthy people. PD patients lose previously acquired automatic skills due to their impaired sensorimotor striatum, and have difficulty in acquiring new automatic skills or restoring lost motor skills. More investigation of the pathophysiology of motor automaticity, the effect of L-dopa or surgical treatments on automaticity, and the potential role of measures of automaticity in the early diagnosis of PD would be valuable. PMID:26102020

  14. Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.; Clouse, Daniel S.; McHenry, Michael C.; Zarzhitsky, Dimitri V.; Pagdett, Curtis W.

    2013-01-01

    This software automatically calibrates a camera or an imaging array to an inertial navigation system (INS) that is rigidly mounted to the array or imager. In effect, it recovers the coordinate frame transformation between the reference frame of the imager and the reference frame of the INS. This innovation can automatically derive the camera-to-INS alignment using image data only. The assumption is that the camera fixates on an area while the aircraft flies an orbit. The system then, fully automatically, solves for the camera orientation in the INS frame. No manual intervention or ground tie point data is required.

  15. An Open-Source Automated Peptide Synthesizer Based on Arduino and Python.

    PubMed

    Gali, Hariprasad

    2017-10-01

    The development of the first open-source automated peptide synthesizer, PepSy, using Arduino UNO and readily available components is reported. PepSy was primarily designed to synthesize small peptides in a relatively small scale (<100 µmol). Scripts to operate PepSy in a fully automatic or manual mode were written in Python. Fully automatic script includes functions to carry out resin swelling, resin washing, single coupling, double coupling, Fmoc deprotection, ivDde deprotection, on-resin oxidation, end capping, and amino acid/reagent line cleaning. Several small peptides and peptide conjugates were successfully synthesized on PepSy with reasonably good yields and purity depending on the complexity of the peptide.
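
    For flavor, a purely hypothetical sketch of how such a Python script might drive one washing step over a serial link; the port name and the VALVE/PUMP command strings are invented for illustration and are not PepSy's real protocol.

```python
# Hypothetical serial control of an Arduino-based synthesizer step.
import time
import serial  # pyserial

def wash_resin(port="/dev/ttyACM0", cycles=3):
    """Run a hypothetical DMF resin-wash cycle over a serial link."""
    with serial.Serial(port, 9600, timeout=2) as link:
        time.sleep(2)                         # let the Arduino reset
        for _ in range(cycles):
            link.write(b"VALVE DMF OPEN\n")   # hypothetical command
            link.write(b"PUMP 30\n")          # hypothetical: pump for 30 s
            link.write(b"VALVE DMF CLOSE\n")
            time.sleep(35)

# wash_resin()  # requires the hardware; shown for structure only
```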

  16. Closed-loop mechanical ventilation for lung injury: a novel physiological-feedback mode following the principles of the open lung concept.

    PubMed

    Schwaiberger, David; Pickerodt, Philipp A; Pomprapa, Anake; Tjarks, Onno; Kork, Felix; Boemke, Willehad; Francis, Roland C E; Leonhardt, Steffen; Lachmann, Burkhard

    2018-06-01

    Adherence to low tidal volume (VT) ventilation and selected positive end-expiratory pressures is low during mechanical ventilation for treatment of the acute respiratory distress syndrome. Using a pig model of severe lung injury, we tested the feasibility of and physiological responses to a novel fully closed-loop mechanical ventilation algorithm based on the "open lung" concept. Lung injury was induced by surfactant washout in pigs (n = 8). Animals were ventilated following the principles of the "open lung approach" (OLA) using a fully closed-loop physiological feedback algorithm for mechanical ventilation. Standard gas-exchange, respiratory, and hemodynamic parameters were measured. Electrical impedance tomography was used to quantify the regional ventilation distribution during mechanical ventilation. Automated mechanical ventilation provided strict adherence to low-VT ventilation for 6 h in severely lung-injured pigs. Using the "open lung" approach, tidal volume delivery required low lung distending pressures, increased recruitment and ventilation of dorsal lung regions, and improved arterial blood oxygenation. Physiological feedback closed-loop mechanical ventilation according to the principles of the open lung concept is feasible and provides low tidal volume ventilation without human intervention. Of importance, the "open lung approach"-ventilation improved gas exchange and reduced lung driving pressures by opening atelectasis and shifting ventilation to dorsal lung regions.

  17. A semi-automatic method for extracting thin line structures in images as rooted tree network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brazzini, Jacopo; Dillard, Scott; Soille, Pierre

    2010-01-01

    This paper addresses the problem of semi-automatic extraction of line networks in digital images - e.g., road or hydrographic networks in satellite images, or blood vessels in medical images. For that purpose, we improve a generic method derived from morphological and hydrological concepts consisting of minimum-cost path estimation and flow simulation. While this approach fully exploits the local contrast and shape of the network, as well as its arborescent nature, we further incorporate local directional information about the structures in the image. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Then, the geodesic propagation from a given seed with this metric is combined with hydrological operators for overland flow simulation to extract the line network. The algorithm is demonstrated for the extraction of blood vessels in a retina image and of a river network in a satellite image.

  18. Automatic Synthetic Aperture Radar based oil spill detection and performance estimation via a semi-automatic operational service benchmark.

    PubMed

    Singha, Suman; Vespe, Michele; Trieschmann, Olaf

    2013-08-15

    Today the health of the ocean is in greater danger than ever before, mainly due to man-made pollution. Operational activities show regular occurrence of accidental and deliberate oil spills in European waters. Since the areas covered by oil spills are usually large, satellite remote sensing, particularly Synthetic Aperture Radar, represents an effective option for operational oil spill detection. This paper describes the development of a fully automated approach for oil spill detection from SAR. A total of 41 feature parameters were extracted from each segmented dark spot for oil spill and 'look-alike' classification and ranked according to their importance. The classification algorithm is based on a two-stage processing that combines classification tree analysis and fuzzy logic. An initial evaluation of this methodology on a large dataset has been carried out, and the degree of agreement between results from the proposed algorithm and a human analyst was estimated at 85% and 93% for ENVISAT and RADARSAT, respectively. Copyright © 2013 Elsevier Ltd. All rights reserved.
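
    As a hedged stand-in for the paper's two-stage tree/fuzzy-logic classifier, the sketch below ranks synthetic dark-spot features by importance with a single decision tree; the 41-feature layout matches the abstract, everything else is illustrative.

```python
# Rank dark-spot features by importance with a decision-tree classifier.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 41))               # 41 features per dark spot
# Synthetic labels: spill vs. look-alike driven by features 0 and 7.
y = (X[:, 0] + 0.5 * X[:, 7] + rng.normal(0, 0.5, 400) > 0).astype(int)

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
ranking = np.argsort(clf.feature_importances_)[::-1]
print("most informative features:", ranking[:5])
```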

  19. Fully automatic adjoints: a robust and efficient mechanism for generating adjoint ocean models

    NASA Astrophysics Data System (ADS)

    Ham, D. A.; Farrell, P. E.; Funke, S. W.; Rognes, M. E.

    2012-04-01

    The problem of generating and maintaining adjoint models is sufficiently difficult that typically only the most advanced and well-resourced community ocean models achieve it. There are two current technologies which each suffer from their own limitations. Algorithmic differentiation, also called automatic differentiation, is employed by models such as the MITGCM [2] and the Alfred Wegener Institute model FESOM [3]. This technique is very difficult to apply to existing code, and requires a major initial investment to prepare the code for automatic adjoint generation. AD tools may also have difficulty with code employing modern software constructs such as derived data types. An alternative is to formulate the adjoint differential equation and to discretise this separately. This approach, known as the continuous adjoint and employed in ROMS [4], has the disadvantage that two different model code bases must be maintained and manually kept synchronised as the model develops. The discretisation of the continuous adjoint is not automatically consistent with that of the forward model, producing an additional source of error. The alternative presented here is to formulate the flow model in the high level language UFL (Unified Form Language) and to automatically generate the model using the software of the FEniCS project. In this approach it is the high level code specification which is differentiated, a task very similar to the formulation of the continuous adjoint [5]. However since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy is managed by libadjoint [1]. In contrast to the conventional algorithmic differentiation description of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction of the simulation process as a sequence of discrete equations which are assembled and solved. It is the coupling of the respective abstractions employed by libadjoint and the FEniCS project which produces the adjoint model automatically, without further intervention from the model developer. This presentation will demonstrate this new technology through linear and non-linear shallow water test cases. The exceptionally simple model syntax will be highlighted and the correctness of the resulting adjoint simulations will be demonstrated using rigorous convergence tests.
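
    The payoff of an adjoint model can be seen already in a discrete linear toy problem: one adjoint solve yields the full gradient that would otherwise cost one forward solve per parameter. The sketch below illustrates only this principle, not the UFL/libadjoint machinery described above.

```python
# Discrete adjoint for the forward model A u = m with J(m) = 1/2 ||u - d||^2:
# the gradient dJ/dm = A^{-T}(u - d) comes from a single adjoint solve.
import numpy as np

rng = np.random.default_rng(4)
n = 50
A = np.eye(n) * 4 + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
d = rng.normal(size=n)                     # "observations"
m = rng.normal(size=n)                     # model parameters (source term)

u = np.linalg.solve(A, m)                  # forward solve
lam = np.linalg.solve(A.T, u - d)          # single adjoint solve
grad_adjoint = lam                         # full gradient, all n components

# Finite-difference check on a few components (doing all n needs n solves).
eps = 1e-6
J0 = 0.5 * np.sum((np.linalg.solve(A, m) - d) ** 2)
for i in (0, 10, 25):
    dm = np.zeros(n); dm[i] = eps
    J1 = 0.5 * np.sum((np.linalg.solve(A, m + dm) - d) ** 2)
    print(i, (J1 - J0) / eps, grad_adjoint[i])   # the two should agree
```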

  20. Neurodegenerative changes in Alzheimer's disease: a comparative study of manual, semi-automated, and fully automated assessment using MRI

    NASA Astrophysics Data System (ADS)

    Fritzsche, Klaus H.; Giesel, Frederik L.; Heimann, Tobias; Thomann, Philipp A.; Hahn, Horst K.; Pantel, Johannes; Schröder, Johannes; Essig, Marco; Meinzer, Hans-Peter

    2008-03-01

    Objective quantification of disease-specific neurodegenerative changes can facilitate diagnosis and therapeutic monitoring in several neuropsychiatric disorders. Reproducibility and easy-to-perform assessment are essential to ensure applicability in clinical environments. The aim of this comparative study is the evaluation of a fully automated approach that assesses atrophic changes in Alzheimer's disease (AD) and Mild Cognitive Impairment (MCI). 21 healthy volunteers (mean age 66.2), 21 patients with MCI (66.6), and 10 patients with AD (65.1) were enrolled. Subjects underwent extensive neuropsychological testing and MRI was conducted on a 1.5 Tesla clinical scanner. Atrophic changes were measured automatically by a series of image processing steps including state of the art brain mapping techniques. Results were compared with two reference approaches: a manual segmentation of the hippocampal formation and a semi-automated estimation of temporal horn volume, which is based upon interactive selection of two to six landmarks in the ventricular system. All approaches separated controls and AD patients significantly (10^-5 < p < 10^-4) and showed a slight but not significant increase of neurodegeneration for subjects with MCI compared to volunteers. The automated approach correlated significantly with the manual (r = -0.65, p < 10^-6) and semi-automated (r = -0.83, p < 10^-13) measurements. It demonstrated high accuracy while maximizing observer independence and time savings, and thus its usefulness for clinical routine.

  1. A modified approach combining FNEA and watershed algorithms for segmenting remotely-sensed optical images

    NASA Astrophysics Data System (ADS)

    Liu, Likun

    2018-01-01

    In the field of remote sensing image processing, image segmentation is a preliminary step for later analysis, semi-automatic human interpretation, and fully automatic machine recognition and learning. Since 2000, object-oriented remote sensing image processing methods have prevailed, with the Fractal Net Evolution Approach (FNEA) multi-scale segmentation algorithm at their core. This paper studies and improves that algorithm: existing segmentation algorithms are analyzed, and the watershed algorithm is selected as the optimal initialization. The algorithm is then modified by adjusting an area parameter, and further by combining the area parameter with a heterogeneity parameter. Several experiments are carried out to show that the modified FNEA algorithm achieves a better segmentation result than a traditional pixel-based method (an FCM algorithm based on neighborhood information) and than the plain combination of FNEA and watershed.
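
    The core of FNEA-style region merging is a heterogeneity-increase cost. The sketch below computes a spectral-only version of that cost (area-weighted change in standard deviation) and accepts a merge when the cost stays below a scale parameter; the weights and shape term of the full criterion are omitted for brevity.

```python
# Spectral-only FNEA-style merge cost between two adjacent segments.
import numpy as np

def color_merge_cost(seg1, seg2):
    """seg1, seg2: 1-D arrays of pixel values of two adjacent segments."""
    merged = np.concatenate([seg1, seg2])
    return (merged.size * merged.std()
            - seg1.size * seg1.std()
            - seg2.size * seg2.std())

a = np.random.normal(0.20, 0.01, 200)   # two spectrally similar segments
b = np.random.normal(0.21, 0.01, 150)
c = np.random.normal(0.80, 0.01, 150)   # a spectrally different one
scale = 2.0                             # scale parameter controls segment size
print("merge a+b?", color_merge_cost(a, b) < scale)   # expected: True
print("merge a+c?", color_merge_cost(a, c) < scale)   # expected: False
```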

  2. Oil Spill Detection and Tracking Using Lipschitz Regularity and Multiscale Techniques in Synthetic Aperture Radar Imagery

    NASA Astrophysics Data System (ADS)

    Ajadi, O. A.; Meyer, F. J.

    2014-12-01

    Automatic oil spill detection and tracking from Synthetic Aperture Radar (SAR) images is a difficult task, due in large part to the inhomogeneous properties of the sea surface, the high level of speckle inherent in SAR data, the complexity and highly non-Gaussian nature of amplitude information, and the low temporal sampling that is often achieved with SAR systems. This research presents a promising new oil spill detection and tracking method that is based on time series of SAR images. Through the combination of a number of advanced image processing techniques, the developed approach is able to mitigate some of these previously mentioned limitations of SAR-based oil-spill detection and enables fully automatic spill detection and tracking across a wide range of spatial scales. The method combines an initial automatic texture analysis with a consecutive change detection approach based on multi-scale image decomposition. The first step of the approach, a texture transformation of the original SAR images, is performed in order to normalize the ocean background and enhance the contrast between oil-covered and oil-free ocean surfaces. The Lipschitz regularity (LR), a local texture parameter, is used here due to its proven ability to normalize the reflectivity properties of ocean water and maximize the visibility of oil in water. To calculate LR, the images are decomposed using a two-dimensional continuous wavelet transform (2D-CWT) and transformed into Hölder space to measure LR. After texture transformation, the now normalized images are inserted into our multi-temporal change detection algorithm. The multi-temporal change detection approach is a two-step procedure including (1) data enhancement and filtering and (2) multi-scale automatic change detection. The performance of the developed approach is demonstrated by an application to oil spill areas in the Gulf of Mexico. In this example, areas affected by oil spills were identified from a series of ALOS PALSAR images acquired in 2010. The comparison showed exceptional performance of our method. This method can be applied to emergency management and decision support systems with a need for real-time data, and it shows great potential for rapid data analysis in other areas, including volcano detection, flood boundaries, forest health, and wildfires.

  3. On Building a Search Interface Discovery System

    NASA Astrophysics Data System (ADS)

    Shestakov, Denis

    A huge portion of the Web known as the deep Web is accessible via search interfaces to myriads of databases on the Web. While relatively good approaches for querying the contents of web databases have been recently proposed, one cannot fully utilize them while most search interfaces remain unlocated. Thus, the automatic recognition of search interfaces to online databases is crucial for any application accessing the deep Web. This paper describes the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep web characterization surveys and for constructing directories of deep web resources.

  4. Automatic Aircraft Structural Topology Generation for Multidisciplinary Optimization and Weight Estimation

    NASA Technical Reports Server (NTRS)

    Sensmeier, Mark D.; Samareh, Jamshid A.

    2005-01-01

    An approach is proposed for the application of rapid generation of moderate-fidelity structural finite element models of air vehicle structures to allow more accurate weight estimation earlier in the vehicle design process. This should help to rapidly assess many structural layouts before the start of the preliminary design phase and eliminate weight penalties imposed when actual structure weights exceed those estimated during conceptual design. By defining the structural topology in a fully parametric manner, the structure can be mapped to arbitrary vehicle configurations being considered during conceptual design optimization. A demonstration of this process is shown for two sample aircraft wing designs.

  5. Micro-Raman spectroscopy: The analysis of micrometer and submicrometer atmospheric aerosols

    NASA Technical Reports Server (NTRS)

    Klainer, S. M.; Milanovich, F. P.

    1985-01-01

    A nondestructive method of molecular analysis is required to fully utilize the information contained within a collected particle. To assess upper-atmosphere reaction mechanisms from the chemical compounds in such particles, the use of micro-Raman spectrometric techniques to perform micron and submicron particle analysis was evaluated. The results are favorable, and it is concluded that micron and submicron particles can be analyzed by the micro-Raman approach. Completely automatic analysis should be possible down to 0.3 µm. No problems are anticipated with photo- or thermal decomposition. Sample and impurity fluorescence are the key sources of background, as they cannot be completely eliminated.

  6. A Fully-Automatic Method to Segment the Carotid Artery Layers in Ultrasound Imaging: Application to Quantify the Compression-Decompression Pattern of the Intima-Media Complex During the Cardiac Cycle.

    PubMed

    Zahnd, Guillaume; Kapellas, Kostas; van Hattem, Martijn; van Dijk, Anouk; Sérusclat, André; Moulin, Philippe; van der Lugt, Aad; Skilton, Michael; Orkisz, Maciej

    2017-01-01

    The aim of this study was to introduce and evaluate a contour segmentation method to extract the interfaces of the intima-media complex in carotid B-mode ultrasound images. The method was applied to assess the temporal variation of intima-media thickness during the cardiac cycle. The main methodological contribution of the proposed approach is the introduction of an augmented dimension to process 2-D images in a 3-D space. The third dimension, which is added to the two spatial dimensions of the image, corresponds to the tentative local thickness of the intima-media complex. The method is based on a dynamic programming scheme that runs in a 3-D space generated with a shape-adapted filter bank. The optimal solution corresponds to a single medial-axis representation that fully describes the two anatomical interfaces of the arterial wall. The method is fully automatic and does not require any input from the user. The method was trained on 60 subjects and validated on 184 other subjects from six different cohorts and four different medical centers. The arterial wall was successfully segmented in all analyzed images (average pixel size = 57 ± 20 µm), with average segmentation errors of 47 ± 70 µm for the lumen-intima interface, 55 ± 68 µm for the media-adventitia interface and 66 ± 90 µm for the intima-media thickness. The amplitude of the temporal variations in intima-media thickness during the cardiac cycle was significantly higher in the diseased population than in healthy volunteers (106 ± 48 vs. 86 ± 34 µm, p = 0.001). The introduced framework is a promising approach to investigate an emerging functional parameter of the arterial wall by assessing the cyclic compression-decompression pattern of the tissues. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
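
    The dynamic-programming core can be illustrated in 2-D without the augmented thickness dimension: find the minimum-cost left-to-right path through a cost image, with smoothness enforced by bounding the row jump between columns. A minimal sketch:

```python
# Minimum-cost smooth path through a cost image via dynamic programming.
import numpy as np

def dp_contour(cost, max_jump=1):
    """Return one row index per column tracing the cheapest smooth path."""
    H, W = cost.shape
    acc = cost.copy()                     # accumulated cost table
    back = np.zeros((H, W), int)          # backpointers to the previous row
    for x in range(1, W):
        for y in range(H):
            lo, hi = max(0, y - max_jump), min(H, y + max_jump + 1)
            prev = np.argmin(acc[lo:hi, x - 1]) + lo
            back[y, x] = prev
            acc[y, x] += acc[prev, x - 1]
    path = [int(np.argmin(acc[:, -1]))]   # cheapest endpoint, then backtrack
    for x in range(W - 1, 0, -1):
        path.append(back[path[-1], x])
    return np.array(path[::-1])

img = np.random.rand(40, 60)
img[20, :] = 0.0                          # a dark "interface" to follow
print(np.unique(dp_contour(img)))         # expected: mostly row 20
```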

  7. Automatic zebrafish heartbeat detection and analysis for zebrafish embryos.

    PubMed

    Pylatiuk, Christian; Sanchez, Daniela; Mikut, Ralf; Alshut, Rüdiger; Reischl, Markus; Hirth, Sofia; Rottbauer, Wolfgang; Just, Steffen

    2014-08-01

    A fully automatic detection and analysis method of heartbeats in videos of nonfixed and nonanesthetized zebrafish embryos is presented. This method reduces the manual workload and time needed for preparation and imaging of the zebrafish embryos, as well as for evaluating heartbeat parameters such as frequency, beat-to-beat intervals, and arrhythmicity. The method is validated by a comparison of the results from automatic and manual detection of the heart rates of wild-type zebrafish embryos 36-120 h postfertilization and of embryonic hearts with bradycardia and pauses in the cardiac contraction.
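
    Assuming the mean intensity over the heart region varies periodically with each contraction, the heart rate falls out of the dominant FFT peak of that trace; a minimal sketch with a synthetic 2.5 Hz signal:

```python
# Estimate heartbeat frequency from a per-frame mean-intensity trace.
import numpy as np

fps = 30.0
t = np.arange(0, 20, 1 / fps)                        # 20 s video clip
signal = np.sin(2 * np.pi * 2.5 * t) + 0.3 * np.random.randn(t.size)

sig = signal - signal.mean()
freqs = np.fft.rfftfreq(sig.size, d=1 / fps)
power = np.abs(np.fft.rfft(sig)) ** 2
band = (freqs > 0.5) & (freqs < 10)                  # plausible embryo range
rate = freqs[band][np.argmax(power[band])]
print(f"estimated heartbeat: {rate:.2f} Hz = {rate * 60:.0f} bpm")
```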

  8. Fully automatic segmentation of the femur from 3D-CT images using primitive shape recognition and statistical shape models.

    PubMed

    Ben Younes, Lassad; Nakajima, Yoshikazu; Saito, Toki

    2014-03-01

    Femur segmentation is well established and widely used in computer-assisted orthopedic surgery. However, most robust segmentation methods, such as statistical shape models (SSM), require human intervention to provide an initial position for the SSM. In this paper, we propose to overcome this problem and provide a fully automatic femur segmentation method for CT images based on primitive shape recognition and SSM. Femur segmentation in CT scans was performed using primitive shape recognition based on robust algorithms such as the Hough transform and RANdom SAmple Consensus (RANSAC). The proposed method is divided into 3 steps: (1) detection of the femoral head as a sphere and the femoral shaft as a cylinder in the SSM and the CT images, (2) rigid registration between the primitives of the SSM and the CT image to initialize the SSM into the CT image, and (3) fitting of the SSM to the CT image edge using an affine transformation followed by a nonlinear fitting. The automated method provided good results even with a high number of outliers. The difference in segmentation error between the proposed automatic initialization method and a manual initialization method is less than 1 mm. The proposed method detects primitive shape positions to initialize the SSM into the target image. Based on primitive shapes, this method overcomes the problem of inter-patient variability. Moreover, the results demonstrate that our method of primitive shape recognition can be used for 3D SSM initialization to achieve fully automatic segmentation of the femur.
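
    Step (1) hinges on fitting primitives to candidate points. The sphere equation is linear in a transformed parameter set, so centre and radius follow from a single least-squares solve, which a RANSAC loop would wrap on random point subsets to reject outliers; a hedged sketch:

```python
# Least-squares sphere fit: |p - c|^2 = r^2 rearranges to
# 2 c.p + (r^2 - |c|^2) = p.p, which is linear in (c, r^2 - |c|^2).
import numpy as np

def fit_sphere(pts):
    """pts: (N, 3) surface points; returns (centre, radius)."""
    A = np.c_[2 * pts, np.ones(len(pts))]
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = sol[:3]
    radius = np.sqrt(sol[3] + centre @ centre)
    return centre, radius

rng = np.random.default_rng(5)
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([10.0, -5.0, 30.0]) + 24.0 * dirs      # femoral-head-like data
print(fit_sphere(pts))                                # ~centre, radius ~24.0
```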

  9. Fully automatic characterization and data collection from crystals of biological macromolecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Svensson, Olof; Malbet-Monaco, Stéphanie; Popov, Alexander

    A fully automatic system has been developed that performs X-ray centring and characterization of, and data collection from, large numbers of cryocooled crystals without human intervention. Considerable effort is dedicated to evaluating macromolecular crystals at synchrotron sources, even for well established and robust systems. Much of this work is repetitive, and the time spent could be better invested in the interpretation of the results. In order to decrease the need for manual intervention in the most repetitive steps of structural biology projects, initial screening and data collection, a fully automatic system has been developed to mount, locate, centre to the optimal diffraction volume, characterize and, if possible, collect data from multiple cryocooled crystals. Using the capabilities of pixel-array detectors, the system is as fast as a human operator, taking an average of 6 min per sample depending on the sample size and the level of characterization required. Using a fast X-ray-based routine, samples are located and centred systematically at the position of highest diffraction signal and important parameters for sample characterization, such as flux, beam size and crystal volume, are automatically taken into account, ensuring the calculation of optimal data-collection strategies. The system is now in operation at the new ESRF beamline MASSIF-1 and has been used by both industrial and academic users for many different sample types, including crystals of less than 20 µm in the smallest dimension. To date, over 8000 samples have been evaluated on MASSIF-1 without any human intervention.

  10. Bayesian CP Factorization of Incomplete Tensors with Automatic Rank Determination.

    PubMed

    Zhao, Qibin; Zhang, Liqing; Cichocki, Andrzej

    2015-09-01

    CANDECOMP/PARAFAC (CP) tensor factorization of incomplete data is a powerful technique for tensor completion through explicitly capturing the multilinear latent factors. The existing CP algorithms require the tensor rank to be manually specified; however, the determination of tensor rank remains a challenging problem, especially for CP rank. In addition, existing approaches do not take into account uncertainty information of latent factors, as well as missing entries. To address these issues, we formulate CP factorization using a hierarchical probabilistic model and employ a fully Bayesian treatment by incorporating a sparsity-inducing prior over multiple latent factors and the appropriate hyperpriors over all hyperparameters, resulting in automatic rank determination. To learn the model, we develop an efficient deterministic Bayesian inference algorithm, which scales linearly with data size. Our method is characterized as a tuning parameter-free approach, which can effectively infer underlying multilinear factors with a low-rank constraint, while also providing predictive distributions over missing entries. Extensive simulations on synthetic data illustrate the intrinsic capability of our method to recover the ground-truth CP rank and prevent the overfitting problem, even when a large amount of entries are missing. Moreover, the results from real-world applications, including image inpainting and facial image synthesis, demonstrate that our method outperforms state-of-the-art approaches for both tensor factorization and tensor completion in terms of predictive performance.
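
    For contrast with the automatic-rank Bayesian treatment, the sketch below shows plain alternating-least-squares CP factorization with a fixed, user-chosen rank, i.e., exactly the manual rank specification the paper removes.

```python
# Alternating least squares for 3-way CP factorization with a fixed rank.
import numpy as np

def kr(U, V):                        # column-wise Khatri-Rao product
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

def cp_als(X, rank, n_iter=50, seed=0):
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.normal(size=(n, rank)) for n in (I, J, K))
    X1 = X.reshape(I, J * K)                        # mode-1 unfolding
    X2 = X.transpose(1, 0, 2).reshape(J, I * K)     # mode-2 unfolding
    X3 = X.transpose(2, 0, 1).reshape(K, I * J)     # mode-3 unfolding
    for _ in range(n_iter):
        A = X1 @ kr(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X2 @ kr(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X3 @ kr(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

rng = np.random.default_rng(1)
A0, B0, C0 = (rng.normal(size=(s, 3)) for s in (10, 12, 14))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)          # exact rank-3 tensor
A, B, C = cp_als(X, rank=3)
Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative error:", np.linalg.norm(X - Xhat) / np.linalg.norm(X))
```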

  11. Automatic segmentation of vessels in in-vivo ultrasound scans

    NASA Astrophysics Data System (ADS)

    Tamimi-Sarnikowski, Philip; Brink-Kjær, Andreas; Moshavegh, Ramin; Arendt Jensen, Jørgen

    2017-03-01

    Ultrasound has become highly popular for monitoring atherosclerosis by scanning the carotid artery. The screening involves measuring the thickness of the vessel wall and the diameter of the lumen. An automatic segmentation of the vessel lumen can enable the determination of lumen diameter. This paper presents a fully automatic segmentation algorithm for robustly segmenting the vessel lumen in longitudinal B-mode ultrasound images. The automatic segmentation is performed using a combination of B-mode and power Doppler images. The proposed algorithm includes a series of preprocessing steps and performs vessel segmentation by use of the marker-controlled watershed transform. The ultrasound images used in the study were acquired using the bk3000 ultrasound scanner (BK Ultrasound, Herlev, Denmark) with two transducers, "8L2 Linear" and "10L2w Wide Linear" (BK Ultrasound, Herlev, Denmark). The algorithm was evaluated empirically and applied to a dataset of 1770 in-vivo images recorded from 8 healthy subjects. The segmentation results were compared to manual delineation performed by two experienced users. The results showed a sensitivity and specificity of 90.41 ± 11.2% and 97.93 ± 5.7% (mean ± standard deviation), respectively. The overlap between automatic and manual segmentation was measured by the Dice similarity coefficient, which was 91.25 ± 11.6%. The empirical results demonstrated the feasibility of segmenting the vessel lumen in ultrasound scans using a fully automatic algorithm.
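
    A hedged sketch of marker-controlled watershed segmentation of a dark lumen in a synthetic B-mode-like image; the intensity thresholds used to seed the markers stand in for the power-Doppler cue described above.

```python
# Marker-controlled watershed: flood a gradient landscape from seeded markers.
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

img = np.full((80, 120), 0.7)
img[30:50, :] = 0.1                                 # dark "vessel lumen"
img += np.random.default_rng(6).normal(0, 0.05, img.shape)

edges = sobel(img)                                  # gradient landscape
markers = np.zeros(img.shape, int)
markers[img < 0.2] = 2                              # confident lumen seeds
markers[img > 0.55] = 1                             # confident tissue seeds
labels = watershed(edges, markers)
print("lumen pixels:", (labels == 2).sum())         # roughly 20 * 120
```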

  12. WE-AB-BRA-05: Fully Automatic Segmentation of Male Pelvic Organs On CT Without Manual Intervention

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Y; Lian, J; Chen, R

    Purpose: We aim to develop a fully automatic tool for accurate contouring of major male pelvic organs in CT images for radiotherapy without any manual initialization, yet still achieving superior performance to existing tools. Methods: A learning-based 3D deformable shape model was developed for automatic contouring. Specifically, we utilized a recent machine learning method, random forest, to jointly learn both an image regressor and a classifier for each organ. In particular, the image regressor is trained to predict the 3D displacement from each vertex of the 3D shape model towards the organ boundary based on the local image appearance around the location of this vertex. The predicted 3D displacements are then used to drive the 3D shape model towards the target organ. Once the shape model is deformed close to the target organ, it is further refined by an organ likelihood map estimated by the learned classifier. As the organ likelihood map provides a good guideline for the organ boundary, a precise contouring result can be achieved by deforming the 3D shape model locally to fit boundaries in the organ likelihood map. Results: We applied our method to 29 previously-treated prostate cancer patients, each with one planning CT scan. Compared with manually delineated pelvic organs, our method obtains overlap ratios of 85.2% ± 3.74% for the prostate, 94.9% ± 1.62% for the bladder, and 84.7% ± 1.97% for the rectum, respectively. Conclusion: This work demonstrated the feasibility of a novel machine-learning based approach for accurate and automatic contouring of major male pelvic organs. It shows the potential to replace time-consuming and inconsistent manual contouring in the clinic. Also, compared with existing works, our method is more accurate and also efficient since it does not require any manual intervention, such as manual landmark placement. Moreover, our method obtained very similar contouring results to those of clinical experts. This project is partially supported by NCI grant 1R01CA140413.

  13. Development of a microcontroller-based automatic control system for the electrohydraulic total artificial heart.

    PubMed

    Kim, H C; Khanwilkar, P S; Bearnson, G B; Olsen, D B

    1997-01-01

    An automatic physiological control system for the actively filled, alternately pumped ventricles of the volumetrically coupled, electrohydraulic total artificial heart (EHTAH) was developed for long-term use. The automatic control system must ensure that the device: 1) maintains a physiological cardiac output response, 2) compensates for nonphysiological conditions, and 3) is stable, reliable, and operates at high power efficiency. The developed automatic control system met these requirements both in vitro, in week-long continuous mock circulation tests, and in vivo, in acute open-chested animals (calves). Satisfactory results were also obtained in a series of chronic animal experiments, including 21 days of continuous operation in the fully automatic control mode, and 138 days of operation in a manual mode, in a 159-day calf implant.

  14. A fully automatic approach for multimodal PET and MR image segmentation in gamma knife treatment planning.

    PubMed

    Rundo, Leonardo; Stefano, Alessandro; Militello, Carmelo; Russo, Giorgio; Sabini, Maria Gabriella; D'Arrigo, Corrado; Marletta, Francesco; Ippolito, Massimo; Mauri, Giancarlo; Vitabile, Salvatore; Gilardi, Maria Carla

    2017-06-01

    Nowadays, clinical practice in Gamma Knife treatments is generally based on MRI anatomical information alone. However, the joint use of MRI and PET images can be useful for considering both anatomical and metabolic information about the lesion to be treated. In this paper we present a co-segmentation method to integrate the segmented Biological Target Volume (BTV), using [11C]-Methionine-PET (MET-PET) images, and the segmented Gross Target Volume (GTV), on the respective co-registered MR images. The resulting volume gives enhanced brain tumor information to be used in stereotactic neuro-radiosurgery treatment planning. The GTV often does not match the BTV entirely, as the latter provides metabolic information about brain lesions. For this reason, PET imaging is valuable and could be used to provide complementary information useful for treatment planning. In this way, the BTV can be used to modify the GTV, enhancing Clinical Target Volume (CTV) delineation. A novel fully automatic multimodal PET/MRI segmentation method for Leksell Gamma Knife® treatments is proposed. This approach improves and combines two computer-assisted and operator-independent single-modality methods, previously developed and validated, to segment the BTV and GTV from PET and MR images, respectively. In addition, the GTV is utilized to combine the superior contrast of PET images with the higher spatial resolution of MRI, obtaining a new BTV, called BTV_MRI. A total of 19 brain metastatic tumors treated with stereotactic neuro-radiosurgery were retrospectively analyzed. A framework for the evaluation of multimodal PET/MRI segmentation is also presented. Overlap-based and spatial distance-based metrics were considered to quantify the similarity between the PET and MRI segmentation approaches. Statistical analysis was also included to measure the correlation among the different segmentation processes. Since it is not possible to define a gold-standard CTV according to both MRI and PET images without treatment response assessment, the feasibility and the clinical value of BTV integration in Gamma Knife treatment planning were considered. Therefore, a qualitative evaluation was carried out by three experienced clinicians. The achieved experimental results showed that the GTV and BTV segmentations are statistically correlated (Spearman's rank correlation coefficient: 0.898) but have a low degree of similarity (average Dice Similarity Coefficient: 61.87 ± 14.64). Therefore, volume measurements as well as evaluation metric values demonstrated that MRI and PET convey different but complementary imaging information. The GTV and BTV could be combined to enhance treatment planning. In more than 50% of cases the CTV was strongly or moderately conditioned by metabolic imaging. In particular, BTV_MRI enhanced the CTV more accurately than the BTV in 25% of cases. The proposed fully automatic multimodal PET/MRI segmentation method is a valid operator-independent methodology that helps clinicians define a CTV including both metabolic and morphologic information. BTV_MRI and GTV should be considered for comprehensive treatment planning. Copyright © 2017 Elsevier B.V. All rights reserved.
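
    The Dice Similarity Coefficient reported above (and in several other records on this page) measures the overlap between two binary volumes; a minimal, self-contained implementation:

    ```python
    # Sketch: Dice Similarity Coefficient between two binary segmentation masks.
    import numpy as np

    def dice(a: np.ndarray, b: np.ndarray) -> float:
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
    ```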

  15. A 3D THz image processing methodology for a fully integrated, semi-automatic and near real-time operational system

    NASA Astrophysics Data System (ADS)

    Brook, A.; Cristofani, E.; Vandewal, M.; Matheis, C.; Jonuscheit, J.; Beigang, R.

    2012-05-01

    The present study proposes a fully integrated, semi-automatic and near real-time image processing methodology developed for Frequency-Modulated Continuous-Wave (FMCW) THz images with center frequencies around 100 GHz and 300 GHz. The main focus of this work is the quality control of aeronautic multi-layered composite materials and structures using Non-Destructive Testing. Image processing is applied to the 3-D images to extract useful information. The data is processed by extracting areas of interest, which are then subjected to image analysis for more detailed investigation managed by a spatial model. Finally, the post-processing stage examines and evaluates the spatial accuracy of the extracted information.

  16. Automatic detection and counting of cattle in UAV imagery based on machine vision technology (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Rahnemoonfar, Maryam; Foster, Jamie; Starek, Michael J.

    2017-05-01

    Beef production is the main agricultural industry in Texas, and livestock are managed on pasture and rangeland that are usually vast and not easily accessible by vehicles. The current method for identifying and counting livestock is visual observation, which is very time-consuming and costly. For animals on large tracts of land, manned aircraft may be necessary to count animals; this is noisy, disturbs the animals, and may introduce a source of error into the counts. Such manual approaches are expensive, slow and labor-intensive. In this paper we study the combination of a small unmanned aerial vehicle (sUAV) and machine vision technology as a valuable alternative to manual animal surveying. A fixed-wing UAV fitted with GPS and a digital RGB camera for photogrammetry was flown at the Welder Wildlife Foundation in Sinton, TX. Over 600 acres were covered in four UAS flights, and the individual photographs were used to develop orthomosaic imagery. To detect animals in the UAV imagery, a fully automatic technique was developed based on the spatial and spectral characteristics of objects. This automatic technique can even detect small animals that are partially occluded by bushes. Experimental results compared against ground truth show the effectiveness of our algorithm.
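
    A toy sketch of detection by combined spectral and spatial cues, in the spirit of the technique described above; the brightness threshold and animal-sized blob limits are assumptions, not values from the presentation:

    ```python
    # Sketch: spectral threshold plus connected components to count animal-sized blobs.
    import numpy as np
    from scipy import ndimage as ndi

    def count_animals(rgb, min_pixels=20, max_pixels=400):
        brightness = rgb.mean(axis=2)                               # spectral cue
        candidates = brightness > np.percentile(brightness, 99)
        labels, n = ndi.label(candidates)                           # group pixels into blobs
        sizes = ndi.sum(candidates, labels, index=range(1, n + 1))  # spatial (size) cue
        return int(np.sum((sizes >= min_pixels) & (sizes <= max_pixels)))
    ```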

  17. Computer-aided diagnosis system: a Bayesian hybrid classification method.

    PubMed

    Calle-Alonso, F; Pérez, C J; Arias-Nicolás, J P; Martín, J

    2013-10-01

    A novel method to classify multi-class biomedical objects is presented. The method is based on a hybrid approach which combines pairwise comparison, Bayesian regression and the k-nearest neighbor technique. It can be applied in a fully automatic way or in a relevance feedback framework. In the latter case, the information obtained from both an expert and the automatic classification is iteratively used to improve the results until a certain accuracy level is achieved; then the learning process is finished and new classifications can be performed automatically. The method has been applied in two biomedical contexts, following the same cross-validation schemes as the original studies. The first refers to cancer diagnosis, leading to an accuracy of 77.35% versus the 66.37% obtained originally. The second considers the diagnosis of pathologies of the vertebral column, where the original method achieves accuracies ranging from 76.5% to 96.7%, and from 82.3% to 97.1%, in two different cross-validation schemes. Even with no supervision, the proposed method reaches 96.71% and 97.32% in these two cases; using the supervised framework, the achieved accuracy is 97.74%. Furthermore, all abnormal cases were correctly classified. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  18. Towards automatic patient positioning and scan planning using continuously moving table MR imaging.

    PubMed

    Koken, Peter; Dries, Sebastian P M; Keupp, Jochen; Bystrov, Daniel; Pekar, Vladimir; Börnert, Peter

    2009-10-01

    A concept is proposed to simplify patient positioning and scan planning in order to improve ease of use and workflow in MR. After patient preparation in front of the scanner, the operator selects the anatomy of interest by a single push-button action. Subsequently, the patient table is moved automatically into the scanner, while real-time 3D isotropic low-resolution continuously moving table scout scanning is performed using patient-independent MR system settings. With a real-time organ identification process running in parallel and steering the scanner, the target anatomy can be positioned fully automatically in the scanner's sensitive volume. The desired diagnostic examination of the anatomy of interest can be planned and continued immediately using the geometric information derived from the acquired 3D data. The concept was implemented and successfully tested in vivo in 12 healthy volunteers, focusing on the liver as the target anatomy. The positioning accuracy achieved was on the order of several millimeters, which proved sufficient for initial planning purposes. Furthermore, the impact of nonoptimal system settings on the positioning performance, the signal-to-noise ratio (SNR), and the contrast-to-noise ratio (CNR) was investigated. The present work proved the basic concept of the proposed approach as an element of future scan automation. (c) 2009 Wiley-Liss, Inc.

  19. Patient-specific semi-supervised learning for postoperative brain tumor segmentation.

    PubMed

    Meier, Raphael; Bauer, Stefan; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio

    2014-01-01

    In contrast to preoperative brain tumor segmentation, the problem of postoperative brain tumor segmentation has rarely been approached so far. We present a fully-automatic segmentation method using multimodal magnetic resonance image data and patient-specific semi-supervised learning. The idea behind our semi-supervised approach is to effectively fuse information from both pre- and postoperative image data of the same patient to improve segmentation of the postoperative image. We pose image segmentation as a classification problem and solve it by adopting a semi-supervised decision forest. The method is evaluated on a cohort of 10 high-grade glioma patients, with segmentation performance and computation time comparable or superior to a state-of-the-art brain tumor segmentation method. Moreover, our results confirm that the inclusion of preoperative MR images leads to better performance in postoperative brain tumor segmentation.
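
    The paper's semi-supervised decision forest is custom, but the underlying idea of propagating labels from the labeled preoperative scan to the unlabeled postoperative scan can be sketched with scikit-learn's generic self-training wrapper (all data below are synthetic placeholders):

    ```python
    # Sketch: patient-specific semi-supervised learning via self-training a random forest.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.semi_supervised import SelfTrainingClassifier

    rng = np.random.default_rng(0)
    X_pre = rng.normal(size=(500, 8))        # voxel features from the preoperative scan
    y_pre = (X_pre[:, 0] > 0).astype(int)    # tumor / non-tumor labels (known pre-op)
    X_post = rng.normal(size=(500, 8))       # voxel features from the postoperative scan

    X = np.vstack([X_pre, X_post])
    y = np.concatenate([y_pre, -np.ones(len(X_post), dtype=int)])  # -1 marks unlabeled

    model = SelfTrainingClassifier(RandomForestClassifier(n_estimators=50, random_state=0))
    model.fit(X, y)    # confident predictions on post-op voxels are folded back in as labels
    ```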

  20. Clustering of color map pixels: an interactive approach

    NASA Astrophysics Data System (ADS)

    Moon, Yiu Sang; Luk, Franklin T.; Yuen, K. N.; Yeung, Hoi Wo

    2003-12-01

    The demand for digital maps continues to rise as mobile electronic devices become more popular. Instead of creating an entire map from scratch, we may convert a scanned paper map into a digital one. Color clustering is the very first step of the conversion process. Currently, most existing clustering algorithms are fully automatic. They are fast and efficient but may not work well in map conversion because of the numerous ambiguities associated with printed maps. Here we introduce two interactive approaches for color clustering on the map: color clustering with pre-calculated index colors (PCIC) and color clustering with pre-calculated color ranges (PCCR). We also introduce a memory model that can enhance and integrate different image processing techniques for fine-tuning the clustering results. Problems and examples of the algorithms are discussed in the paper.

  1. Analysis of navigation and guidance requirements for commercial VTOL operations

    NASA Technical Reports Server (NTRS)

    Hoffman, W. C.; Zvara, J.; Hollister, W. M.

    1975-01-01

    The paper presents results of a program undertaken to define navigation and guidance requirements for commercial VTOL operations in the takeoff, cruise, terminal and landing phases of flight in weather conditions up to and including Category III. Quantitative navigation requirements are given for range, coverage, operation near obstacles, horizontal accuracy, multiple landing aircraft, multiple pad requirements, inertial/radio-inertial requirements, reliability/redundancy, update rate, and data link requirements in all flight phases. A multi-configuration straw-man navigation and guidance system for commercial VTOL operations is presented. Operation of the system is keyed to a fully automatic approach to navigation, guidance and control, with the pilot as monitor-manager. The system is a hybrid navigator using a relatively low-cost inertial sensor with DME updates and MLS in the approach/departure phases.

  2. User-customized brain computer interfaces using Bayesian optimization

    NASA Astrophysics Data System (ADS)

    Bashashati, Hossein; Ward, Rabab K.; Bashashati, Ali

    2016-04-01

    Objective. The brain characteristics of different people are not the same. Brain computer interfaces (BCIs) should thus be customized for each person. In motor-imagery based synchronous BCIs, a number of parameters (referred to as hyper-parameters), including the EEG frequency bands, the channels and the time intervals from which the features are extracted, should be pre-determined based on each subject's brain characteristics. Approach. To determine the hyper-parameter values, previous work has relied on manual or semi-automatic methods that are not applicable to high-dimensional search spaces. In this paper, we propose a fully automatic, scalable and computationally inexpensive algorithm that uses Bayesian optimization to tune these hyper-parameters. We then build different classifiers trained on the sets of hyper-parameter values proposed by the Bayesian optimization. A final classifier aggregates the results of the different classifiers. Main Results. We have applied our method to 21 subjects from three BCI competition datasets. We have conducted rigorous statistical tests and have shown the positive impact of hyper-parameter optimization on the accuracy of BCIs. Furthermore, we have compared our results to those reported in the literature. Significance. Unlike the best reported results in the literature, which are based on more sophisticated feature extraction and classification methods and rely on pre-studies to determine the hyper-parameter values, our method has the advantage of being fully automated, uses less sophisticated feature extraction and classification methods, and yields similar or superior results compared to the best performing designs in the literature.
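
    The tuning loop can be sketched with an off-the-shelf Bayesian optimizer such as scikit-optimize; the search space and the synthetic objective below are illustrative assumptions, not the paper's setup:

    ```python
    # Sketch: Bayesian optimization of BCI hyper-parameters with scikit-optimize.
    from skopt import gp_minimize
    from skopt.space import Integer, Real

    space = [
        Real(4.0, 40.0, name="band_low_hz"),    # lower edge of the EEG frequency band
        Real(8.0, 48.0, name="band_high_hz"),   # upper edge of the EEG frequency band
        Integer(0, 3, name="time_window"),      # index of the feature time interval
    ]

    def objective(params):
        low, high, window = params
        # Stand-in for training a classifier and cross-validating it; a smooth
        # synthetic bowl is used here so the sketch runs end to end.
        accuracy = 1.0 / (1.0 + 0.01 * (low - 8) ** 2 + 0.01 * (high - 30) ** 2 + 0.1 * window)
        return 1.0 - accuracy                   # gp_minimize minimizes, so invert accuracy

    result = gp_minimize(objective, space, n_calls=50, random_state=0)
    print("best hyper-parameters:", result.x)
    ```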

  3. Integrated Evaluation of Reliability and Power Consumption of Wireless Sensor Networks

    PubMed Central

    Dâmaso, Antônio; Maciel, Paulo

    2017-01-01

    Power consumption is a primary concern in Wireless Sensor Networks (WSNs), and a large number of strategies have been proposed to evaluate it. However, those approaches usually consider neither reliability issues nor the power consumption of the applications executing in the network. A central concern is the lack of consolidated solutions that enable us to evaluate the power consumption of applications and the network stack while also considering their reliabilities. To solve this problem, we introduce a fully automatic solution for designing power-consumption-aware WSN applications and communication protocols. The solution presented in this paper comprises a methodology to evaluate power consumption based on the integration of formal models, a set of power consumption and reliability models, a sensitivity analysis strategy to select WSN configurations, and a toolbox named EDEN to fully support the proposed methodology. This solution allows the power consumption of WSN applications and the network stack to be accurately estimated in an automated way. PMID:29113078

  4. Automatic high throughput empty ISO container verification

    NASA Astrophysics Data System (ADS)

    Chalmers, Alex

    2007-04-01

    Encouraging results are presented for the automatic analysis of radiographic images of a continuous stream of ISO containers to confirm they are truly empty. A series of image processing algorithms is described that processes real-time data acquired during the inspection of each container and assigns each to one of the classes "empty", "not empty" or "suspect threat". This research is one step towards achieving fully automated analysis of cargo containers.

  5. Flexible Manufacturing System Handbook. Volume IV. Appendices

    DTIC Science & Technology

    1983-02-01

    and Acceptance Test(s)" on page 26 of this Proposal Request. 1.1.10 Options 1. Centralized Automatic Chip/Coolant Recovery System a. Scope The...viable, from manual- ly moving the pallet/fixture/part combinations from machine to machine to fully automatic , unmanned material handling systems , such...English. Where dimensions are shown in metric units, the English system (inch) equivalent will also be shown. Hydraulic, pneumatic , and electrical

  6. Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets.

    PubMed

    Hu, Peijun; Wu, Fa; Peng, Jialin; Bao, Yuanyuan; Chen, Feng; Kong, Dexing

    2017-03-01

    Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. Therefore, we propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, which is further refined by a time-implicit multi-phase evolution method. Firstly, a 3D CNN is trained to automatically localize and delineate the organs of interest with a probability prediction map. The learned probability map provides both subject-specific spatial priors and an initialization for the subsequent fine segmentation. Then, to refine the multi-organ segmentation, image intensity models, probability priors and a disjoint region constraint are incorporated into a unified energy functional. Finally, a novel time-implicit multi-phase level-set algorithm is utilized to efficiently optimize the proposed energy functional model. Our method has been evaluated on 140 abdominal CT scans for the segmentation of four organs (liver, spleen and both kidneys). With respect to the ground truth, average Dice overlap ratios for the liver, spleen and both kidneys are 96.0, 94.2 and 95.4%, respectively, and the average symmetric surface distance is less than 1.3 mm for all segmented organs. The computation time for a CT volume is 125 s on average. The achieved accuracy compares well to state-of-the-art methods, with much higher efficiency. A fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated. The results demonstrated its potential in clinical usage with high effectiveness, robustness and efficiency.
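
    The paper's time-implicit multi-phase level-set solver is custom; as a rough single-organ stand-in, a CNN probability map can seed a morphological geodesic active contour from scikit-image (the threshold and iteration count are assumptions):

    ```python
    # Sketch: refining a CNN probability map with a level-set evolution step.
    import numpy as np
    from skimage.segmentation import (inverse_gaussian_gradient,
                                      morphological_geodesic_active_contour)

    def refine(prob_map: np.ndarray, image: np.ndarray) -> np.ndarray:
        init = prob_map > 0.5                    # CNN output initializes the contour
        gimg = inverse_gaussian_gradient(image)  # edge-stopping function from intensities
        return morphological_geodesic_active_contour(gimg, 50, init_level_set=init)
    ```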

  7. Fully Automated Quantification of the Striatal Uptake Ratio of [99mTc]-TRODAT with SPECT Imaging: Evaluation of the Diagnostic Performance in Parkinson's Disease and the Temporal Regression of Striatal Tracer Uptake

    PubMed Central

    Fang, Yu-Hua Dean; Chiu, Shao-Chieh; Lu, Chin-Song; Weng, Yi-Hsin

    2015-01-01

    Purpose. We aimed at improving the existing methods for the fully automatic quantification of striatal uptake of [99mTc]-TRODAT with SPECT imaging. Procedures. A normal [99mTc]-TRODAT template was first formed based on 28 healthy controls. Images from PD patients (n = 365) and nPD subjects (28 healthy controls and 33 essential tremor patients) were spatially normalized to the normal template. We performed an inverse transform on the predefined striatal and reference volumes of interest (VOIs) and applied the transformed VOIs to the original image data to calculate the striatal-to-reference ratio (SRR). The diagnostic performance of the SRR was determined through receiver operating characteristic (ROC) analysis. Results. The SRR measured with our new and automatic method demonstrated excellent diagnostic performance with 92% sensitivity, 90% specificity, 92% accuracy, and an area under the curve (AUC) of 0.94. For the evaluation of the mean SRR and the clinical duration, a quadratic function fit the data with R² = 0.84. Conclusions. We developed and validated a fully automatic method for the quantification of the SRR in a large study sample. This method has an excellent diagnostic performance and exhibits a strong correlation between the mean SRR and the clinical duration in PD patients. PMID:26366413
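
    A minimal sketch of the ratio computation, assuming the striatal and reference VOI masks have already been mapped onto the subject's SPECT volume; one common convention subtracts the reference before dividing, which is used here:

    ```python
    # Sketch: striatal-to-reference ratio (SRR) from a SPECT volume and two VOI masks.
    import numpy as np

    def striatal_reference_ratio(volume, striatal_mask, reference_mask):
        striatal_mean = volume[striatal_mask].mean()     # mean counts in the striatal VOI
        reference_mean = volume[reference_mask].mean()   # mean counts in the reference VOI
        return (striatal_mean - reference_mean) / reference_mean
    ```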

  8. Fully Automated Quantification of the Striatal Uptake Ratio of [(99m)Tc]-TRODAT with SPECT Imaging: Evaluation of the Diagnostic Performance in Parkinson's Disease and the Temporal Regression of Striatal Tracer Uptake.

    PubMed

    Fang, Yu-Hua Dean; Chiu, Shao-Chieh; Lu, Chin-Song; Yen, Tzu-Chen; Weng, Yi-Hsin

    2015-01-01

    We aimed at improving the existing methods for the fully automatic quantification of striatal uptake of [(99m)Tc]-TRODAT with SPECT imaging. A normal [(99m)Tc]-TRODAT template was first formed based on 28 healthy controls. Images from PD patients (n = 365) and nPD subjects (28 healthy controls and 33 essential tremor patients) were spatially normalized to the normal template. We performed an inverse transform on the predefined striatal and reference volumes of interest (VOIs) and applied the transformed VOIs to the original image data to calculate the striatal-to-reference ratio (SRR). The diagnostic performance of the SRR was determined through receiver operating characteristic (ROC) analysis. The SRR measured with our new and automatic method demonstrated excellent diagnostic performance with 92% sensitivity, 90% specificity, 92% accuracy, and an area under the curve (AUC) of 0.94. For the evaluation of the mean SRR and the clinical duration, a quadratic function fit the data with R² = 0.84. We developed and validated a fully automatic method for the quantification of the SRR in a large study sample. This method has an excellent diagnostic performance and exhibits a strong correlation between the mean SRR and the clinical duration in PD patients.

  9. A simulator evaluation of an automatic terminal approach system

    NASA Technical Reports Server (NTRS)

    Hinton, D. A.

    1983-01-01

    The automatic terminal approach system (ATAS) is a concept for improving the pilot/machine interface with cockpit automation. The ATAS can automatically fly a published instrument approach by using stored instrument approach data to automatically tune airplane avionics, control the airplane's autopilot, and display status information to the pilot. A piloted simulation study was conducted to determine the feasibility of an ATAS, determine pilot acceptance, and examine pilot/ATAS interaction. Seven instrument-rated pilots each flew four instrument approaches with a baseline heading-select autopilot mode. The ATAS runs resulted in lower flight technical error, lower pilot workload, and fewer blunders than the baseline autopilot. The ATAS status display enabled the pilots to maintain situational awareness during the automatic approaches. The system was well accepted by the pilots.

  10. Fully-integrated framework for the segmentation and registration of the spinal cord white and gray matter.

    PubMed

    Dupont, Sara M; De Leener, Benjamin; Taso, Manuel; Le Troter, Arnaud; Nadeau, Sylvie; Stikov, Nikola; Callot, Virginie; Cohen-Adad, Julien

    2017-04-15

    The spinal cord white and gray matter can be affected by various pathologies such as multiple sclerosis, amyotrophic lateral sclerosis or trauma. Being able to precisely segment the white and gray matter could help with MR image analysis and hence be useful in further understanding these pathologies, helping with diagnosis/prognosis and drug development. To date, white/gray matter segmentation has mostly been done manually, which is time-consuming, induces a rater-related bias and prevents large-scale multi-center studies. Recently, a few methods have been proposed to automatically segment the spinal cord white and gray matter. However, no single method exists that combines the following criteria: (i) fully automatic, (ii) works on various MRI contrasts, (iii) robust towards pathology and (iv) freely available and open source. In this study we propose a multi-atlas based method for the segmentation of the spinal cord white and gray matter that addresses the previous limitations. Moreover, to study the spinal cord morphology, atlas-based approaches are increasingly used. These approaches rely on the registration of a spinal cord template to an MR image; however, the registration usually does not take into account the spinal cord internal structure and thus lacks accuracy. In this study, we propose a new template registration framework that integrates the white and gray matter segmentation to account for the specific gray matter shape of each individual subject. Validation of the segmentation was performed in 24 healthy subjects using T2*-weighted images, in 8 healthy subjects using diffusion-weighted images (exhibiting inverted white-to-gray matter contrast compared to T2*-weighted), and in 5 patients with spinal cord injury. The template registration was validated in 24 subjects using T2*-weighted data. Results of automatic segmentation on T2*-weighted images were in close correspondence with the manual segmentation (Dice coefficient in the white/gray matter of 0.91/0.71, respectively). Similarly, good results were obtained in data with inverted contrast (diffusion-weighted images) and in patients. When compared to the classical template registration framework, the proposed framework that accounts for gray matter shape significantly improved the quality of the registration (comparing Dice coefficient in gray matter: p = 9.5×10⁻⁶). While further validation is needed to show the benefits of the new registration framework in large cohorts and in a variety of patients, this study provides a fully-integrated tool for quantitative assessment of white/gray matter morphometry and template-based analysis. All the proposed methods are implemented in the Spinal Cord Toolbox (SCT), an open-source software package for processing spinal cord multi-parametric MRI data. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. 3D model assisted fully automated scanning laser Doppler vibrometer measurements

    NASA Astrophysics Data System (ADS)

    Sels, Seppe; Ribbens, Bart; Bogaerts, Boris; Peeters, Jeroen; Vanlanduit, Steve

    2017-12-01

    In this paper, a new fully automated scanning laser Doppler vibrometer (LDV) measurement technique is presented. In contrast to existing scanning LDV techniques, which use a 2D camera for the manual selection of sample points, we use a 3D Time-of-Flight camera in combination with a CAD file of the test object to automatically obtain measurements at pre-defined locations. The proposed procedure allows users to test prototypes in a shorter time because physical measurement locations are determined without user interaction. Another benefit of this methodology is that it incorporates automatic mapping between a CAD model and the vibration measurements. This mapping can be used to visualize measurements directly on a 3D CAD model. The proposed method is illustrated with vibration measurements of an unmanned aerial vehicle.

  12. Digital controllers for VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Stengel, R. F.; Broussard, J. R.; Berry, P. W.

    1976-01-01

    Using linear-optimal estimation and control techniques, digital-adaptive control laws have been designed for a tandem-rotor helicopter which is equipped for fully automatic flight in terminal area operations. Two distinct discrete-time control laws are designed to interface with velocity-command and attitude-command guidance logic, and each incorporates proportional-integral compensation for non-zero-set-point regulation, as well as reduced-order Kalman filters for sensor blending and noise rejection. Adaptation to flight condition is achieved with a novel gain-scheduling method based on correlation and regression analysis. The linear-optimal design approach is found to be a valuable tool in the development of practical multivariable control laws for vehicles which evidence significant coupling and insufficient natural stability.

  13. Rapid, Optimized Interactomic Screening

    PubMed Central

    Hakhverdyan, Zhanna; Domanski, Michal; Hough, Loren; Oroskar, Asha A.; Oroskar, Anil R.; Keegan, Sarah; Dilworth, David J.; Molloy, Kelly R.; Sherman, Vadim; Aitchison, John D.; Fenyö, David; Chait, Brian T.; Jensen, Torben Heick; Rout, Michael P.; LaCava, John

    2015-01-01

    We must reliably map the interactomes of cellular macromolecular complexes in order to fully explore and understand biological systems. However, there are no methods to accurately predict how to capture a given macromolecular complex with its physiological binding partners. Here, we present a screen that comprehensively explores the parameters affecting the stability of interactions in affinity-captured complexes, enabling the discovery of physiological binding partners and the elucidation of their functional interactions in unparalleled detail. We have implemented this screen on several macromolecular complexes from a variety of organisms, revealing novel profiles even for well-studied proteins. Our approach is robust, economical and automatable, providing an inroad to the rigorous, systematic dissection of cellular interactomes. PMID:25938370

  14. Toward fully automated processing of dynamic susceptibility contrast perfusion MRI for acute ischemic cerebral stroke.

    PubMed

    Kim, Jinsuh; Leira, Enrique C; Callison, Richard C; Ludwig, Bryan; Moritani, Toshio; Magnotta, Vincent A; Madsen, Mark T

    2010-05-01

    We developed fully automated software for dynamic susceptibility contrast (DSC) MR perfusion-weighted imaging (PWI) to efficiently and reliably derive critical hemodynamic information for acute stroke treatment decisions. Brain MR PWI was performed in 80 consecutive patients with acute nonlacunar ischemic stroke within 24 h after symptom onset from January 2008 to August 2009. These studies were automatically processed to generate hemodynamic parameters that included cerebral blood flow, cerebral blood volume, and the mean transit time (MTT). To develop reliable software for PWI analysis, we used computationally robust algorithms including a piecewise continuous regression method to determine the bolus arrival time (BAT), log-linear curve fitting, an arrival-time-independent deconvolution method and sophisticated motion correction methods. An optimal arterial input function (AIF) search algorithm using a new artery-likelihood metric was also developed. The anatomical locations of the automatically determined AIFs were reviewed and validated. The automatically computed BAT values were statistically compared with BAT estimated by a single observer. In addition, gamma-variate curve-fitting errors of the AIF and inter-subject variability of AIFs were analyzed. Lastly, two observers independently assessed the quality and area of hypoperfusion mismatched with the restricted diffusion area on motion-corrected MTT maps and compared them with time-to-peak (TTP) maps obtained using the standard approach. The AIF was identified within an arterial branch and enhanced areas of perfusion deficit were visualized in all evaluated cases. The total processing time was 10.9+/-2.5 s (mean+/-s.d.) without motion correction and 267+/-80 s (mean+/-s.d.) with motion correction on a standard personal computer. The MTT map produced with our software adequately estimated brain areas with a perfusion deficit and was significantly less affected by random noise in the PWI than the TTP map. The image quality assessment by the two observers revealed that the MTT maps exhibited superior quality over the TTP maps (88% good ratings for MTT compared to 68% for TTP). Our software allows fully automated deconvolution analysis of DSC PWI using proven, efficient algorithms that can be applied to acute stroke treatment decisions. Our streamlined method also offers promise for further development of automated quantitative analysis of the ischemic penumbra. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.
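
    The gamma-variate model mentioned above for AIF fitting is standard in DSC analysis; a minimal sketch with synthetic data and assumed parameter bounds:

    ```python
    # Sketch: gamma-variate fit to a first-pass bolus concentration curve.
    import numpy as np
    from scipy.optimize import curve_fit

    def gamma_variate(t, t0, k, alpha, beta):
        dt = np.clip(t - t0, 0.0, None)        # curve is zero before bolus arrival t0
        return k * dt ** alpha * np.exp(-dt / beta)

    t = np.linspace(0, 60, 120)                 # seconds
    clean = gamma_variate(t, 8.0, 5.0, 2.0, 3.0)
    noisy = clean + np.random.default_rng(0).normal(scale=0.5, size=t.size)

    params, _ = curve_fit(gamma_variate, t, noisy, p0=(5.0, 1.0, 1.5, 2.0),
                          bounds=([0, 0, 0.1, 0.1], [20, 50, 10, 10]))
    ```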

  15. Automatic Coregistration and orthorectification (ACRO) and subsequent mosaicing of NASA high-resolution imagery over the Mars MC11 quadrangle, using HRSC as a baseline

    NASA Astrophysics Data System (ADS)

    Sidiropoulos, Panagiotis; Muller, Jan-Peter; Watson, Gillian; Michael, Gregory; Walter, Sebastian

    2018-02-01

    This work presents the coregistered, orthorectified and mosaiced high-resolution products of the MC11 quadrangle of Mars, which have been processed using novel, fully automatic techniques. We discuss the development of a pipeline that achieves fully automatic and parameter-independent geometric alignment of high-resolution planetary images, starting from raw input images in NASA PDS format and following all required steps to produce a coregistered GeoTIFF image, a corresponding footprint and useful metadata. Additionally, we describe the development of a radiometric calibration technique that post-processes coregistered images to make them radiometrically consistent. Finally, we present a batch-mode application of the developed techniques over the MC11 quadrangle to validate their potential, as well as to generate end products, which are released to the planetary science community, thus assisting in the analysis of Mars' static and dynamic features. This case study is a step towards the full automation of signal processing tasks that are essential to increase the usability of planetary data but currently require extensive human resources.

  16. Brain tumor segmentation in MR slices using improved GrowCut algorithm

    NASA Astrophysics Data System (ADS)

    Ji, Chunhong; Yu, Jinhua; Wang, Yuanyuan; Chen, Liang; Shi, Zhifeng; Mao, Ying

    2015-12-01

    The detection of brain tumors in MR images is very significant for medical diagnosis and treatment. However, existing methods are mostly based on manual or semiautomatic segmentation, which is awkward when dealing with a large number of MR slices. In this paper, a new fully automatic method for the segmentation of brain tumors in MR slices is presented. Based on the hypothesis of a symmetric brain structure, the method improves the interactive GrowCut algorithm by further using the bounding box algorithm in the pre-processing step. More importantly, local reflectional symmetry is used to make up for the deficiency of the bounding box method. After segmentation, the 3D tumor image is reconstructed. We evaluate the accuracy of the proposed method on MR slices with synthetic tumors and on actual clinical MR images. Results of the proposed method are compared with the actual position of the simulated 3D tumor qualitatively and quantitatively. In addition, our automatic method produces performance equivalent to manual segmentation and to the interactive GrowCut with manual interference, while providing fully automatic segmentation.

  17. Determinants of wood dust exposure in the Danish furniture industry.

    PubMed

    Mikkelsen, Anders B; Schlunssen, Vivi; Sigsgaard, Torben; Schaumburg, Inger

    2002-11-01

    This paper investigates the relation between wood dust exposure in the furniture industry and occupational hygiene variables. During the winter of 1997-98, 54 factories were visited and 2362 personal, passive inhalable dust samples were obtained; the geometric mean was 0.95 mg/m(3) and the geometric standard deviation was 2.08. In the first measuring round, 1685 dust concentrations were obtained. For some of the workers, repeated measurements were carried out 1 week (351) and 2 weeks (326) after the first measurement. Hygiene variables such as job, exhaust ventilation and cleaning procedures were documented. A multivariate analysis based on mixed effects models was used, with hygiene variables as fixed effects and worker, machine, department and factory as random effects. A modified stepwise model-building strategy was adopted, taking into account the hierarchically structured variables and allowing the exclusion of non-influential random as well as fixed effects. For woodworking, the following determinants of exposure increase the dust concentration: manual and automatic sanding, and use of compressed air with fully automatic and semi-automatic machines and for cleaning of work pieces. Decreased dust exposure resulted from the use of compressed air with manual machines, working at fully automatic or semi-automatic machines, functioning exhaust ventilation, work on the night shift, daily cleaning of rooms, cleaning of work pieces with a brush, vacuum cleaning of machines, supplementary fresh air intake, and a safety representative elected within the last 2 yr. For handling and assembling, increased exposure resulted from work at automatic machines and the presence of wood dust on the workpieces. Work on the evening shift, supplementary fresh air intake, work in a chair factory and special cleaning staff produced decreased exposure to wood dust. The implications of the results for the prevention of wood dust exposure are discussed.

  18. A Graph-Based Recovery and Decomposition of Swanson’s Hypothesis using Semantic Predications

    PubMed Central

    Cameron, Delroy; Bodenreider, Olivier; Yalamanchili, Hima; Danh, Tu; Vallabhaneni, Sreeram; Thirunarayan, Krishnaprasad; Sheth, Amit P.; Rindflesch, Thomas C.

    2014-01-01

    Objectives This paper presents a methodology for semi-automatically recovering and decomposing Swanson's Raynaud Syndrome–Fish Oil Hypothesis. The methodology leverages the semantics of assertions extracted from biomedical literature (called semantic predications), along with structured background knowledge and graph-based algorithms, to semi-automatically capture the informative associations originally discovered manually by Swanson. Demonstrating that Swanson's manually intensive techniques can be undertaken semi-automatically paves the way for fully automatic semantics-based hypothesis generation from scientific literature. Methods Semantic predications obtained from biomedical literature allow the construction of labeled directed graphs which contain various associations among concepts from the literature. By aggregating such associations into informative subgraphs, some of the relevant details originally articulated by Swanson have been uncovered. Moreover, by leveraging background knowledge to bridge important knowledge gaps in the literature, a methodology for semi-automatically capturing the detailed associations originally explicated in natural language by Swanson has been developed. Results Our methodology not only recovered the 3 associations commonly recognized as Swanson's Hypothesis, but also decomposed them into an additional 16 detailed associations, formulated as chains of semantic predications. Altogether, 14 out of the 19 associations that can be attributed to Swanson were retrieved using our approach. To the best of our knowledge, such an in-depth recovery and decomposition of Swanson's Hypothesis has never been attempted. Conclusion In this work, therefore, we presented a methodology for semi-automatically recovering and decomposing Swanson's RS-DFO Hypothesis using semantic representations and graph algorithms. Our methodology provides new insights into potential prerequisites for semantics-driven Literature-Based Discovery (LBD). These suggest that three critical aspects of LBD include: 1) the need for more expressive representations beyond Swanson's ABC model; 2) the ability to accurately extract semantic information from text; and 3) the semantic integration of scientific literature with structured background knowledge. PMID:23026233
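
    The chains of semantic predications described above are naturally modeled as paths in a labeled directed graph; a toy sketch with illustrative triples (not Swanson's actual predications):

    ```python
    # Sketch: association chains as paths in a labeled directed predication graph.
    import networkx as nx

    G = nx.DiGraph()
    G.add_edge("fish oil", "blood viscosity", predicate="REDUCES")
    G.add_edge("blood viscosity", "Raynaud syndrome", predicate="ASSOCIATED_WITH")
    G.add_edge("fish oil", "platelet aggregation", predicate="INHIBITS")
    G.add_edge("platelet aggregation", "Raynaud syndrome", predicate="ASSOCIATED_WITH")

    # Every simple path between the two concepts is a candidate association chain.
    for path in nx.all_simple_paths(G, "fish oil", "Raynaud syndrome"):
        hops = (f"{u} -[{G[u][v]['predicate']}]-> {v}" for u, v in zip(path, path[1:]))
        print(" ; ".join(hops))
    ```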

  19. Watershed-based segmentation of the corpus callosum in diffusion MRI

    NASA Astrophysics Data System (ADS)

    Freitas, Pedro; Rittner, Leticia; Appenzeller, Simone; Lapa, Aline; Lotufo, Roberto

    2012-02-01

    The corpus callosum (CC) is one of the most important white matter structures of the brain, interconnecting the two cerebral hemispheres, and is related to several neurodegenerative diseases. Since segmentation is usually the first step in studies of this structure, and manual volumetric segmentation is a very time-consuming task, it is important to have a robust automatic method for CC segmentation. We propose an approach for fully automatic 3D segmentation of the CC in magnetic resonance diffusion tensor images. The method uses the watershed transform and is performed on the fractional anisotropy (FA) map weighted by the projection of the principal eigenvector in the left-right direction. The section of the CC in the midsagittal slice is used as the seed for the volumetric segmentation. Experiments with real diffusion MRI data showed that the proposed method is able to quickly segment the CC without any user intervention, with results closely matching manual segmentation. Since it is simple, fast and does not require parameter settings, the proposed method is well suited for clinical applications.

  20. Modeling and segmentation of intra-cochlear anatomy in conventional CT

    NASA Astrophysics Data System (ADS)

    Noble, Jack H.; Rutherford, Robert B.; Labadie, Robert F.; Majdani, Omid; Dawant, Benoit M.

    2010-03-01

    Cochlear implant surgery is a procedure performed to treat profound hearing loss. Since the cochlea is not visible in surgery, the physician uses anatomical landmarks to estimate the pose of the cochlea. Research has indicated that implanting the electrode in a particular cavity of the cochlea, the scala tympani, results in better hearing restoration. The success of the scala tympani implantation is largely dependent on the point of entry and angle of electrode insertion. Errors can occur due to the imprecise nature of landmark-based, manual navigation as well as inter-patient variations between scala tympani and the anatomical landmarks. In this work, we use point distribution models of the intra-cochlear anatomy to study the inter-patient variations between the cochlea and the typical anatomic landmarks, and we implement an active shape model technique to automatically localize intra-cochlear anatomy in conventional CT images, where intra-cochlear structures are not visible. This fully automatic segmentation could aid the surgeon to choose the point of entry and angle of approach to maximize the likelihood of scala tympani insertion, resulting in more substantial hearing restoration.

  1. iGeoT v1.0: Automatic Parameter Estimation for Multicomponent Geothermometry, User's Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spycher, Nicolas; Finsterle, Stefan

    GeoT implements the multicomponent geothermometry method developed by Reed and Spycher [1984] in a stand-alone computer program to ease the application of this method and to improve the prediction of geothermal reservoir temperatures using full and integrated chemical analyses of geothermal fluids. Reservoir temperatures are estimated from statistical analyses of mineral saturation indices computed as a function of temperature. The reconstruction of the deep geothermal fluid compositions and the geothermometry computations are all implemented in the same computer program, allowing unknown or poorly constrained input parameters to be estimated by numerical optimization. This integrated geothermometry approach presents advantages over classical geothermometers for fluids that have not fully equilibrated with reservoir minerals and/or that have been subject to processes such as dilution and gas loss. This manual contains installation instructions for iGeoT and briefly describes the input formats needed to run iGeoT in Automatic or Expert Mode. An example is also provided to demonstrate the use of iGeoT.
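
    The statistical core of the method, choosing the temperature at which the mineral saturation indices cluster around zero, can be sketched as a one-dimensional optimization; the saturation-index function below is a synthetic stand-in for the full chemical speciation GeoT performs:

    ```python
    # Sketch: reservoir temperature as the T where saturation indices cluster at zero.
    import numpy as np
    from scipy.optimize import minimize_scalar

    def saturation_indices(T_celsius):
        # Stand-in: each mineral's log(Q/K) crosses zero near the true reservoir T.
        offsets = np.array([-0.02, 0.01, 0.03, -0.01])
        slopes = 0.01 * np.array([1.0, 0.8, 1.2, 0.9])
        return offsets + slopes * (T_celsius - 220.0)

    def spread(T):
        return np.median(np.abs(saturation_indices(T)))  # robust clustering statistic

    best = minimize_scalar(spread, bounds=(50, 350), method="bounded")
    print(f"estimated reservoir temperature: {best.x:.0f} C")
    ```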

  2. Morphological Feature Extraction for Automatic Registration of Multispectral Images

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.

  3. ProbFold: a probabilistic method for integration of probing data in RNA secondary structure prediction.

    PubMed

    Sahoo, Sudhakar; Świtnicki, Michał P; Pedersen, Jakob Skou

    2016-09-01

    Recently, new RNA secondary structure probing techniques have been developed, including Next Generation Sequencing based methods capable of probing transcriptome-wide. These techniques hold great promise for improving structure prediction accuracy. However, each new data type comes with its own signal properties and biases, which may even be experiment specific. There is therefore a growing need for RNA structure prediction methods that can be automatically trained on new data types and readily extended to integrate and fully exploit multiple types of data. Here, we develop and explore a modular probabilistic approach for integrating probing data in RNA structure prediction. It can be automatically trained given a set of known structures with probing data. The approach is demonstrated on SHAPE datasets, where we evaluate and selectively model specific correlations. The approach often makes superior use of the probing data signal compared to other methods. We illustrate the use of ProbFold on multiple data types using both simulations and a small set of structures with SHAPE, DMS and CMCT data. Technically, the approach combines stochastic context-free grammars (SCFGs) with probabilistic graphical models. This allows rapid adaptation and integration of new probing data types. ProbFold is implemented in C++. Models are specified using simple textual formats. Data reformatting is done using separate C++ programs. Source code, statically compiled binaries for x86 Linux machines, C++ programs, example datasets and a tutorial are available from http://moma.ki.au.dk/prj/probfold/. Contact: jakob.skou@clin.au.dk. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  4. AISLE: an automatic volumetric segmentation method for the study of lung allometry.

    PubMed

    Ren, Hongliang; Kazanzides, Peter

    2011-01-01

    We developed a fully automatic segmentation method for volumetric CT (computed tomography) datasets to support construction of a statistical atlas for the study of allometric laws of the lung. The proposed segmentation method, AISLE (Automated ITK-Snap based on Level-set), is based on the level-set implementation from an existing semi-automatic segmentation program, ITK-Snap. AISLE can segment the lung field without human interaction and provide intermediate graphical results as desired. The preliminary experimental results show that the proposed method can achieve accurate segmentation, in terms of the volumetric overlap metric, when compared with the ground-truth segmentation performed by a radiologist.

  5. Document cards: a top trumps visualization for documents.

    PubMed

    Strobelt, Hendrik; Oelke, Daniela; Rohrdantz, Christian; Stoffel, Andreas; Keim, Daniel A; Deussen, Oliver

    2009-01-01

    Finding suitable, less space-consuming views for a document's main content is crucial to providing convenient access to large document collections on display devices of different sizes. We present a novel compact visualization which represents the document's key semantics as a mixture of images and important key terms, similar to cards in a top trumps game. The key terms are extracted using an advanced text mining approach based on a fully automatic document structure extraction. The images and their captions are extracted using a graphical heuristic, and the captions are used for a semi-semantic image weighting. Furthermore, we use the image color histogram for classification and show at least one representative from each non-empty image class. The approach is demonstrated on the IEEE InfoVis publications of a complete year. The method can easily be applied to other publication collections and sets of documents which contain images.

  6. Toward Automatic Georeferencing of Archival Aerial Photogrammetric Surveys

    NASA Astrophysics Data System (ADS)

    Giordano, S.; Le Bris, A.; Mallet, C.

    2018-05-01

    Images from archival aerial photogrammetric surveys are a unique and relatively unexplored means of chronicling 3D land-cover changes over the past 100 years. They provide a relatively dense temporal sampling of territories at very high spatial resolution. Such time series image analysis is a mandatory baseline for a large variety of long-term environmental monitoring studies. The current bottleneck for accurate comparison between epochs is the fine georeferencing step. No fully automatic method has been proposed yet, and existing studies are rather limited in terms of area and number of dates. The state of the art shows that the major challenge is the identification of ground references: cartographic coordinates and their positions in the archival images. This task is performed manually and is extremely time-consuming. This paper proposes to use a photogrammetric approach, and argues that the 3D information that can be computed is the key to full automation. Its original idea lies in a 2-step approach: (i) computation of a coarse absolute image orientation; (ii) use of the coarse Digital Surface Model (DSM) information for automatic absolute image orientation. It relies only on a recent orthoimage+DSM, used as the master reference for all epochs. The coarse orthoimage, compared with such a reference, allows the identification of dense ground references, and the coarse DSM provides their positions in the archival images. Results on two areas and 5 dates show that this method is compatible with long and dense archival aerial image series. Satisfactory planimetric and altimetric accuracies are reported, with variations depending on the ground sampling distance of the images and the location of the Ground Control Points.

  7. Information Pre-Processing using Domain Meta-Ontology and Rule Learning System

    NASA Astrophysics Data System (ADS)

    Ranganathan, Girish R.; Biletskiy, Yevgen

    Around the globe, extraordinary amounts of documents are being created by Enterprises and by users outside these Enterprises. The documents created in the Enterprises constitute the main focus of the present chapter. These documents are used for numerous kinds of machine processing. When using these documents for machine processing, a lack of semantics in the information they contain may cause misinterpretation, thereby inhibiting the productiveness of computer-assisted analytical work. Hence, it would be profitable for Enterprises to use well-defined domain ontologies which serve as rich sources of semantics for the information in the documents. These domain ontologies can be created manually, semi-automatically or fully automatically. The focus of this chapter is to propose an intermediate solution that enables relatively easy creation of these domain ontologies. The process of extracting and capturing domain ontologies from these voluminous documents requires extensive involvement of domain experts and the application of ontology learning methods that are substantially labor-intensive; therefore, intermediate solutions that assist in capturing domain ontologies must be developed. This chapter proposes such a solution: building a meta-ontology that serves as an intermediate information source for the main domain ontology and as a rapid approach to conceptualizing a domain of interest from a huge amount of source documents. This meta-ontology can be populated with ontological concepts, attributes and relations from documents, and then refined into a better domain ontology, either through automatic ontology learning methods or some other relevant ontology building approach.

  8. Mono and multi-objective optimization techniques applied to a large range of industrial test cases using Metamodel assisted Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Fourment, Lionel; Ducloux, Richard; Marie, Stéphane; Ejday, Mohsen; Monnereau, Dominique; Massé, Thomas; Montmitonnet, Pierre

    2010-06-01

    The use of numerical simulation for material processing allows a strategy of trial and error to improve virtual processes without incurring material costs or interrupting production, and can therefore save a lot of money, but it requires user time to analyze the results, adjust the operating conditions and restart the simulation. Automatic optimization is the perfect complement to simulation. An Evolutionary Algorithm coupled with metamodelling makes it possible to obtain industrially relevant results on a very large range of applications within a few tens of simulations and without any specific knowledge of automatic optimization techniques. Ten industrial partners have been selected to cover the different areas of the mechanical forging industry and provide different examples of forming simulation tools. The large computational time is handled by a metamodel approach, which interpolates the objective function over the entire parameter space while only knowing the exact function values at a reduced number of "master points". Two algorithms are used: an evolution strategy combined with a Kriging metamodel, and a genetic algorithm combined with a Meshless Finite Difference Method. The latter approach is extended to multi-objective optimization. The set of solutions corresponding to the best possible compromises between the different objectives is then computed in the same way. The population-based approach exploits the parallel capabilities of the computer with high efficiency. An optimization module, fully embedded within the Forge2009 user interface, makes it possible to cover all the defined examples, and the use of new multi-core hardware to run several simulations at the same time reduces the required time dramatically. The presented examples demonstrate the method's versatility. They include billet shape optimization of a common rail, the cogging of a bar, and a wire drawing problem.
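
    The Kriging metamodel idea, fitting a cheap interpolant through a handful of exact "master point" evaluations and letting the evolutionary search run on it, can be sketched with a Gaussian-process regressor (the objective below is a synthetic stand-in for a forming simulation):

    ```python
    # Sketch: a Kriging (Gaussian-process) metamodel over a few exact evaluations.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_simulation(x):                 # stand-in for one forming simulation
        return np.sin(3 * x) + 0.5 * x ** 2

    X_master = np.linspace(-2, 2, 8).reshape(-1, 1)      # the "master points"
    y_master = expensive_simulation(X_master).ravel()

    surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
    surrogate.fit(X_master, y_master)

    # The evolutionary algorithm then searches this cheap surrogate instead of
    # launching a new simulation for every candidate design.
    X_query = np.linspace(-2, 2, 200).reshape(-1, 1)
    y_pred, y_std = surrogate.predict(X_query, return_std=True)
    ```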

  9. A fully automated calibration method for an optical see-through head-mounted operating microscope with variable zoom and focus.

    PubMed

    Figl, Michael; Ede, Christopher; Hummel, Johann; Wanschitz, Felix; Ewers, Rolf; Bergmann, Helmar; Birkfellner, Wolfgang

    2005-11-01

    Ever since the development of the first applications in image-guided therapy (IGT), the use of head-mounted displays (HMDs) has been considered an important extension of existing IGT technologies. Several approaches to utilizing HMDs and modified medical devices for augmented reality (AR) visualization have been implemented, including video see-through systems, semitransparent mirrors, modified endoscopes, and modified operating microscopes. Common to all these devices is the fact that a precise calibration between the display and three-dimensional coordinates in the patient's frame of reference is compulsory. In optical see-through devices based on complex optical systems such as operating microscopes or operating binoculars, as in the case of the system presented in this paper, this procedure can become increasingly difficult since precise camera calibration for every focus and zoom position is required. We present a method for fully automatic calibration of the operating binocular Varioscope M5 AR for the full range of zoom and focus settings available. Our method uses a special calibration pattern, a linear guide driven by a stepping motor, and special calibration software. The overlay error in the calibration plane was found to be 0.14-0.91 mm, which is less than 1% of the field of view. Using the motorized calibration rig presented in the paper, we were also able to assess the dynamic latency when viewing augmentation graphics on a mobile target; the spatial displacement due to latency was found to be 1.1-2.8 mm at maximum, and the disparity between the true object and its computed overlay corresponded to a latency of 0.1 s. We conclude that the automatic calibration method presented in this paper is sufficient in terms of accuracy and time requirements for standard uses of optical see-through systems in a clinical environment.

  10. Towards quantitative PET/MRI: a review of MR-based attenuation correction techniques.

    PubMed

    Hofmann, Matthias; Pichler, Bernd; Schölkopf, Bernhard; Beyer, Thomas

    2009-03-01

    Positron emission tomography (PET) is a fully quantitative technology for imaging metabolic pathways and dynamic processes in vivo. Attenuation correction of raw PET data is a prerequisite for quantification and is typically based on separate transmission measurements. In PET/CT, however, attenuation correction is performed routinely based on the available CT transmission data. Recently, combined PET/magnetic resonance (MR) has been proposed as a viable alternative to PET/CT. Current concepts of PET/MRI do not include CT-like transmission sources and, therefore, alternative methods of PET attenuation correction must be found. This article reviews existing approaches to MR-based attenuation correction (MR-AC). Most groups have proposed MR-AC algorithms for brain PET studies and, more recently, also for torso PET/MR imaging. Most MR-AC strategies require the use of complementary MR and transmission images, or morphology templates generated from transmission images. We review and discuss these algorithms and point out challenges for using MR-AC in clinical routine. MR-AC is work in progress, with potentially promising results from a template-based approach applicable to both brain and torso imaging. While efforts are ongoing in making clinically viable MR-AC fully automatic, further studies are required to realize the potential benefits of MR-based motion compensation and partial volume correction of the PET data.

  11. Fully automated, real-time 3D ultrasound segmentation to estimate first trimester placental volume using deep learning.

    PubMed

    Looney, Pádraig; Stevenson, Gordon N; Nicolaides, Kypros H; Plasencia, Walter; Molloholli, Malid; Natsis, Stavros; Collins, Sally L

    2018-06-07

    We present a new technique to fully automate the segmentation of an organ from 3D ultrasound (3D-US) volumes, using the placenta as the target organ. Image analysis tools to estimate organ volume do exist but are too time consuming and operator dependent. Fully automating the segmentation process would potentially allow the use of placental volume to screen for increased risk of pregnancy complications. The placenta was segmented from 2,393 first trimester 3D-US volumes using a semiautomated technique. This was quality controlled by three operators to produce the "ground-truth" data set. A fully convolutional neural network (OxNNet) was trained using this ground-truth data set to automatically segment the placenta. OxNNet delivered state-of-the-art automatic segmentation. The effect of training set size on the performance of OxNNet demonstrated the need for large data sets. The clinical utility of placental volume was tested by looking at predictions of small-for-gestational-age (SGA) babies at term. The receiver-operating characteristic curves demonstrated almost identical results between OxNNet and the ground truth. Our results demonstrated good similarity to the ground truth and almost identical clinical results for the prediction of SGA.
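
    A much-reduced sketch of the general idea behind a fully convolutional segmenter such as OxNNet (whose exact architecture is not reproduced here), written in PyTorch; the layer sizes and the 3D input shape are illustrative assumptions.

    ```python
    import torch
    import torch.nn as nn

    class TinyFCN(nn.Module):
        """Hypothetical, much-reduced stand-in for an OxNNet-style fully
        convolutional segmenter: encode, decode, per-voxel probability."""
        def __init__(self):
            super().__init__()
            self.encode = nn.Sequential(
                nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
            )
            self.decode = nn.Sequential(
                nn.ConvTranspose3d(16, 8, 2, stride=2), nn.ReLU(),
                nn.Conv3d(8, 1, 1),      # 1-channel logit map, same size as input
            )
        def forward(self, x):
            return self.decode(self.encode(x))

    net = TinyFCN()
    volume = torch.randn(1, 1, 32, 32, 32)   # one 3D-US volume (batch, ch, D, H, W)
    prob = torch.sigmoid(net(volume))        # voxel-wise placenta probability
    print(prob.shape)                        # torch.Size([1, 1, 32, 32, 32])
    ```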

  12. Automatic Molar Extraction from Dental Panoramic Radiographs for Forensic Personal Identification

    NASA Astrophysics Data System (ADS)

    Samopa, Febriliyan; Asano, Akira; Taguchi, Akira

    Measurement of an individual molar provides rich information for forensic personal identification. We propose a computer-based system for extracting an individual molar from dental panoramic radiographs. A molar is obtained by extracting the region-of-interest, separating the maxilla and mandible, and extracting the boundaries between teeth. The proposed system is almost fully automatic; all the user has to do is click three points on the boundary between the maxilla and the mandible.

  13. Approaches to the automatic generation and control of finite element meshes

    NASA Technical Reports Server (NTRS)

    Shephard, Mark S.

    1987-01-01

    The algorithmic approaches being taken to the development of finite element mesh generators capable of automatically discretizing general domains without the need for user intervention are discussed. It is demonstrated that, because of the modeling demands placed on an automatic mesh generator, all the approaches taken to date produce unstructured meshes. Consideration is also given to both a priori and a posteriori mesh control devices for automatic mesh generators, as well as to their integration with geometric modeling and adaptive analysis procedures.

  14. Fully automated segmentation of the pectoralis muscle boundary in breast MR images

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Filippatos, Konstantinos; Friman, Ola; Hahn, Horst K.

    2011-03-01

    Dynamic Contrast Enhanced MRI (DCE-MRI) of the breast is emerging as a novel tool for early tumor detection and diagnosis. The segmentation of structures in breast DCE-MR images, such as the nipple, the breast-air boundary and the pectoralis muscle, serves as a fundamental step for further computer assisted diagnosis (CAD) applications, e.g. breast density analysis. Moreover, previous clinical studies have shown that the distance between posterior breast lesions and the pectoralis muscle can be used to assess the extent of the disease. To enable automatic quantification of the distance from a breast tumor to the pectoralis muscle, a precise delineation of the pectoralis muscle boundary is required. We present a fully automatic segmentation method based on the second derivative information represented by the Hessian matrix. The voxels proximal to the pectoralis muscle boundary exhibit roughly the same eigenvalue patterns as a sheet-like object in 3D, which can be enhanced and segmented by a Hessian-based sheetness filter. A vector-based connected component filter is then utilized such that only the pectoralis muscle is preserved, by extracting the largest connected component. The proposed method was evaluated quantitatively on a test data set of 30 breast MR images by measuring the average distances between the segmented boundary and the annotated surfaces in two ground truth sets; the mean distance was 1.434 mm with a standard deviation of 0.4661 mm, which shows great potential for integration of the approach into the clinical routine.
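
    The sheetness idea can be illustrated in a few lines: build the Hessian from Gaussian-derivative filters, sort its eigenvalues by magnitude, and score voxels where one strongly negative eigenvalue dominates. This is a toy version with assumed parameters and scoring function, not the authors' filter.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def sheetness(volume, sigma=1.5, alpha=0.5):
        """Toy Hessian-based sheetness: bright sheets have one strongly negative
        eigenvalue (l3) while the remaining two (l1, l2) stay near zero."""
        H = np.empty(volume.shape + (3, 3))
        for i in range(3):
            for j in range(i, 3):
                order = [0, 0, 0]
                order[i] += 1
                order[j] += 1                      # second Gaussian derivative
                H[..., i, j] = H[..., j, i] = gaussian_filter(volume, sigma, order=order)
        lam = np.linalg.eigvalsh(H)                # eigenvalues, ascending
        idx = np.argsort(np.abs(lam), axis=-1)     # re-sort by magnitude
        lam = np.take_along_axis(lam, idx, axis=-1)
        l2, l3 = lam[..., 1], lam[..., 2]
        with np.errstate(divide="ignore", invalid="ignore", over="ignore"):
            score = np.exp(-(l2 / (alpha * l3 + 1e-12)) ** 2)
        return np.where(l3 < 0, np.nan_to_num(score), 0.0)

    vol = np.zeros((32, 32, 32))
    vol[:, 16, :] = 1.0                            # synthetic bright sheet
    s = sheetness(vol)
    print(round(s[:, 16, :].mean(), 3), round(s[:, 4, :].mean(), 3))  # high on sheet, ~0 off
    ```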

  15. Experimental new automatic tools for robotic stereotactic neurosurgery: towards "no hands" procedure of leads implantation into a brain target.

    PubMed

    Mazzone, P; Arena, P; Cantelli, L; Spampinato, G; Sposato, S; Cozzolino, S; Demarinis, P; Muscato, G

    2016-07-01

    The use of robotics in neurosurgery and, particularly, in stereotactic neurosurgery, is increasingly being adopted because of the great advantages that it offers. Robotic manipulators make it possible to achieve great precision, reliability, and rapidity in the positioning of surgical instruments or devices in the brain. The aim of this work was to experimentally verify a fully automatic "no hands" surgical procedure. The integration of neuroimaging data for surgical planning, followed by the application of new specific surgical tools, permitted the realization of a fully automated robotic implantation of leads in brain targets. An anthropomorphic commercial manipulator was utilized. In a preliminary phase, software to plan the surgery was developed, and the surgical tools were tested first in a simulation and then on a skull mock-up. In this way, several tools were developed and tested, and the basis for an innovative surgical procedure arose. The final experimentation was carried out on anesthetized "large white" pigs. The determination of stereotactic parameters for the correct planning to reach the intended target was performed with the same technique currently employed in human stereotactic neurosurgery, and the robotic system proved to be reliable and precise in reaching the target. The results of this work strengthen the possibility that a neurosurgeon may be substituted by a machine, and may represent the beginning of a new approach in current clinical practice. Moreover, this possibility may have a great impact not only on stereotactic functional procedures but also on the entire domain of neurosurgery.

  16. Segmentation of 3D ultrasound computer tomography reflection images using edge detection and surface fitting

    NASA Astrophysics Data System (ADS)

    Hopp, T.; Zapf, M.; Ruiter, N. V.

    2014-03-01

    An essential processing step for comparison of Ultrasound Computer Tomography images to other modalities, as well as for use in further image processing, is to segment the breast from the background. In this work we present a (semi-) automated 3D segmentation method which is based on the detection of the breast boundary in coronal slice images and a subsequent surface fitting. The method was evaluated using a software phantom and in-vivo data. The fully automatically processed phantom results showed that segmenting approx. 10% of the slices of a dataset is sufficient to recover the overall breast shape. Application to 16 in-vivo datasets was performed successfully using semi-automated processing, i.e. using a graphical user interface for manual corrections of the automated breast boundary detection. The processing time for segmenting an in-vivo dataset was reduced by a factor of four compared to a fully manual segmentation. Comparison to manually segmented images identified a smoother surface for the semi-automated segmentation, with an average of 11% differing voxels and an average surface deviation of 2 mm. Limitations of the edge detection may be overcome by future updates of the KIT USCT system, allowing fully automated use of our segmentation approach.

  17. Automated analysis of siRNA screens of cells infected by hepatitis C and dengue viruses based on immunofluorescence microscopy images

    NASA Astrophysics Data System (ADS)

    Matula, Petr; Kumar, Anil; Wörz, Ilka; Harder, Nathalie; Erfle, Holger; Bartenschlager, Ralf; Eils, Roland; Rohr, Karl

    2008-03-01

    We present an image analysis approach as part of a high-throughput microscopy siRNA-based screening system using cell arrays for the identification of cellular genes involved in hepatitis C and dengue virus replication. Our approach comprises: cell nucleus segmentation, quantification of the virus replication level in the neighborhood of segmented cell nuclei, localization of regions with transfected cells, cell classification by infection status, and quality assessment of an experiment and of single images. In particular, we propose a novel approach for the localization of regions of transfected cells within cell array images, which combines model-based circle fitting and grid fitting. By this scheme we integrate information from single cell array images and knowledge from the complete cell arrays. The approach is fully automatic and has been successfully applied to a large number of cell array images from screening experiments. The experimental results show good agreement with the expected behaviour of positive as well as negative controls and encourage the application to screens from further high-throughput experiments.
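
    As an illustration of the model-based circle fitting step (the grid-fitting stage is omitted here), a standard algebraic least-squares circle fit can be written in a few lines; the data below are synthetic.

    ```python
    import numpy as np

    def fit_circle(x, y):
        """Algebraic (Kasa) least-squares circle fit: x^2 + y^2 = 2ax + 2by + c,
        where c = r^2 - a^2 - b^2."""
        A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
        (a, b, c), *_ = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)
        return a, b, np.sqrt(c + a ** 2 + b ** 2)

    rng = np.random.default_rng(0)
    t = np.linspace(0, 2 * np.pi, 100)
    x = 2 + 5 * np.cos(t) + rng.normal(0, 0.1, t.size)   # noisy ring centred at (2, -1)
    y = -1 + 5 * np.sin(t) + rng.normal(0, 0.1, t.size)
    print([round(v, 2) for v in fit_circle(x, y)])       # ~[2.0, -1.0, 5.0]
    ```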

  18. Invited review: The impact of automatic milking systems on dairy cow management, behavior, health, and welfare.

    PubMed

    Jacobs, J A; Siegford, J M

    2012-05-01

    Over the last 100 yr, the dairy industry has incorporated technology to maximize yield and profit. Pressure to maximize efficiency and lower inputs has resulted in novel approaches to managing and milking dairy herds, including implementation of automatic milking systems (AMS) to reduce labor associated with milking. Although AMS have been used for almost 20 yr in Europe, they have only recently become more popular in North America. Automatic milking systems have the potential to increase milk production by up to 12%, decrease labor by as much as 18%, and simultaneously improve dairy cow welfare by allowing cows to choose when to be milked. However, producers using AMS may not fully realize these anticipated benefits for a variety of reasons. For example, producers may not see a reduction in labor because some cows do not milk voluntarily or because they have not fully or efficiently incorporated the AMS into their management routines. Following the introduction of AMS on the market in the 1990s, research has been conducted examining AMS systems versus conventional parlors focusing primarily on cow health, milk yield, and milk quality, as well as on some of the economic and social factors related to AMS adoption. Additionally, because AMS rely on cows milking themselves voluntarily, research has also been conducted on the behavior of cows in AMS facilities, with particular attention paid to cow traffic around AMS, cow use of AMS, and cows' motivation to enter the milking stall. However, the sometimes contradictory findings resulting from different studies on the same aspect of AMS suggest that differences in management and farm-level variables may be more important to AMS efficiency and milk production than features of the milking system itself. Furthermore, some of the recommendations that have been made regarding AMS facility design and management should be scientifically tested to demonstrate their validity, as not all may work as intended. As updated AMS designs, such as the automatic rotary milking parlor, continue to be introduced to the dairy industry, research must continue to be conducted on AMS to understand the causes and consequences of differences between milking systems as well as the impacts of the different facilities and management systems that surround them on dairy cow behavior, health, and welfare. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  19. Fully automated MR liver volumetry using watershed segmentation coupled with active contouring.

    PubMed

    Huynh, Hieu Trung; Le-Trong, Ngoc; Bao, Pham The; Oto, Aytek; Suzuki, Kenji

    2017-02-01

    Our purpose is to develop a fully automated scheme for liver volume measurement in abdominal MR images, without requiring any user input or interaction. The proposed scheme is fully automatic for liver volumetry from 3D abdominal MR images, and it consists of three main stages: preprocessing, rough liver shape generation, and liver extraction. The preprocessing stage reduced noise and enhanced the liver boundaries in 3D abdominal MR images. The rough liver shape was revealed fully automatically by using watershed segmentation, a thresholding transform, morphological operations, and statistical properties of the liver. An active contour model was applied to refine the rough liver shape and precisely obtain the liver boundaries. The liver volumes calculated by the proposed scheme were compared to "gold standard" references estimated by an expert abdominal radiologist. The liver volumes computed by our scheme agreed excellently with the "gold standard" manual volumes (intra-class correlation coefficient of 0.94) in an evaluation with 27 cases from multiple medical centers. The running time was 8.4 min per case on average. We developed a fully automated liver volumetry scheme in MR which does not require any interaction by users, and it was evaluated with cases from multiple medical centers. The liver volumetry performance of the developed system was comparable to that of gold standard manual volumetry, and it saved radiologists 24.7 min of manual volumetry time per case.
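
    A compact sketch of the watershed-then-active-contour pipeline using scikit-image on a synthetic image; the marker thresholds and iteration count are assumptions, not the published parameters.

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import sobel
    from skimage.segmentation import watershed, morphological_chan_vese

    # Synthetic stand-in for one preprocessed abdominal MR slice: a bright blob.
    rng = np.random.default_rng(0)
    img = np.zeros((128, 128))
    img[30:90, 40:110] = 1.0
    img = ndi.gaussian_filter(img, 4) + rng.normal(0, 0.02, img.shape)

    # Rough shape: watershed on the gradient image with threshold-derived
    # markers (1 = background seed, 2 = object seed); thresholds are assumed.
    markers = np.zeros_like(img, dtype=int)
    markers[img < 0.1] = 1
    markers[img > 0.7] = 2
    rough = watershed(sobel(img), markers) == 2

    # Refinement: a morphological active contour initialised from the rough shape.
    refined = morphological_chan_vese(img, 25, init_level_set=rough)
    print("rough px:", int(rough.sum()), "refined px:", int(refined.sum()))
    ```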

  20. Laser Balancing

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Mechanical Technology, Incorporated developed a fully automatic laser machining process that allows more precise balancing, removes metal faster, eliminates excess metal removal and other operator-induced inaccuracies, and provides a significant reduction in balancing time. Manufacturing costs are reduced as a result.

  1. Recent Progress on the Stretched Lens Array (SLA)

    NASA Technical Reports Server (NTRS)

    O'Neill, Mark; McDanal, A. J.; Piszczor, Michael; George, Patrick; Eskenazi, Michael; Botke, Matthew; Edwards, David; Hoppe, David; Brandhorst, Henry

    2005-01-01

    At the last Space Photovoltaic Research and Technology Conference, SPRAT XVII, held during the fateful week of 9/11/01, our team presented a paper on the early developments related to the new Stretched Lens Array (SLA), including its evolution from the successful SCARLET array on the NASA/JPL Deep Space 1 spacecraft. Within the past two years, the SLA team has made significant progress in the SLA technology, including the successful fabrication and testing of a complete four-panel prototype solar array wing (Fig. 1). The prototype wing verified the mechanical and structural design of the rigid-panel SLA approach, including multiple successful demonstrations of automatic wing deployment. One panel in the prototype wing included four fully functional photovoltaic receivers, employing triple-junction solar cells.

  2. CleAir Monitoring System for Particulate Matter: A Case in the Napoleonic Museum in Rome

    PubMed Central

    Bonacquisti, Valerio; Di Michele, Marta; Frasca, Francesca; Chianese, Angelo; Siani, Anna Maria

    2017-01-01

    Monitoring the air particulate concentration both outdoors and indoors has become an increasingly relevant issue over the past few decades. An innovative, fully automatic monitoring system called CleAir is presented. The system goes beyond the traditional technique (gravimetric analysis) by allowing a double monitoring approach: traditional gravimetric analysis as well as optical spectroscopic analysis of the scattering on the same filters in steady-state conditions. The experimental data are interpreted in terms of light percolation through highly scattering matter by means of the stretched exponential evolution. CleAir has been applied to investigate the daily distribution of particulate matter within the Napoleonic Museum in Rome as a test case. PMID:28892016

  3. Geometric aspects in digital analysis of Multi-Spectral Scanner (MSS) data

    NASA Technical Reports Server (NTRS)

    Mikhail, E. M.; Baker, J. R.

    1973-01-01

    Present automated systems of interpretation which apply pattern recognition techniques to MSS data do not fully consider the geometry of the acquisition system. In an effort to improve the usefulness of MSS data when digitally treated, geometric aspects are analyzed and discussed. Attempts to correct for scanner instabilities in position and orientation by affine and polynomial transformations, as well as by modified collinearity equations, are described. Methods of accounting for panoramic and relief effects are also discussed. It is anticipated that reliable area as well as position determinations can be accomplished during the process of automatic interpretation. A concept for a unified approach to the treatment of remote sensing data, both metric and nonmetric, is presented.

  4. The Potential of Automatic Word Comparison for Historical Linguistics.

    PubMed

    List, Johann-Mattis; Greenhill, Simon J; Gray, Russell D

    2017-01-01

    The amount of data from languages spoken all over the world is rapidly increasing. Traditional manual methods in historical linguistics need to face the challenges brought by this influx of data. Automatic approaches to word comparison could provide invaluable help to pre-analyze data which can be later enhanced by experts. In this way, computational approaches can take care of the repetitive and schematic tasks leaving experts to concentrate on answering interesting questions. Here we test the potential of automatic methods to detect etymologically related words (cognates) in cross-linguistic data. Using a newly compiled database of expert cognate judgments across five different language families, we compare how well different automatic approaches distinguish related from unrelated words. Our results show that automatic methods can identify cognates with a very high degree of accuracy, reaching 89% for the best-performing method Infomap. We identify the specific strengths and weaknesses of these different methods and point to major challenges for future approaches. Current automatic approaches for cognate detection-although not perfect-could become an important component of future research in historical linguistics.
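
    To give a flavour of automatic cognate detection (the paper's best performer, Infomap, is a graph-clustering method; the threshold clustering below is only a toy proxy), one can cluster word forms by normalised edit distance:

    ```python
    def edit_distance(a, b):
        """Classic dynamic-programming Levenshtein distance."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    def cognate_clusters(words, threshold=0.45):
        """Single-linkage clustering on normalised edit distance via union-find."""
        parent = list(range(len(words)))
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i
        for i in range(len(words)):
            for j in range(i + 1, len(words)):
                d = edit_distance(words[i], words[j]) / max(len(words[i]), len(words[j]))
                if d <= threshold:
                    parent[find(i)] = find(j)
        groups = {}
        for i, w in enumerate(words):
            groups.setdefault(find(i), []).append(w)
        return list(groups.values())

    # Germanic 'hand' forms cluster together; Romance forms build their own group.
    print(cognate_clusters(["hand", "hant", "hond", "mano", "manus"]))
    ```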

  5. The Potential of Automatic Word Comparison for Historical Linguistics

    PubMed Central

    Greenhill, Simon J.; Gray, Russell D.

    2017-01-01

    The amount of data from languages spoken all over the world is rapidly increasing. Traditional manual methods in historical linguistics need to face the challenges brought by this influx of data. Automatic approaches to word comparison could provide invaluable help to pre-analyze data which can be later enhanced by experts. In this way, computational approaches can take care of the repetitive and schematic tasks leaving experts to concentrate on answering interesting questions. Here we test the potential of automatic methods to detect etymologically related words (cognates) in cross-linguistic data. Using a newly compiled database of expert cognate judgments across five different language families, we compare how well different automatic approaches distinguish related from unrelated words. Our results show that automatic methods can identify cognates with a very high degree of accuracy, reaching 89% for the best-performing method Infomap. We identify the specific strengths and weaknesses of these different methods and point to major challenges for future approaches. Current automatic approaches for cognate detection—although not perfect—could become an important component of future research in historical linguistics. PMID:28129337

  6. On the dependence of information display quality requirements upon human characteristics and pilot/automatics relations

    NASA Technical Reports Server (NTRS)

    Wilckens, V.

    1972-01-01

    Present information display concepts for pilot landing guidance are outlined, considering manual control as well as the substitution of the pilot by fully competent automatics. Display improvements are achieved by compressing the distributed indicators into an accumulative display, thus reducing information scanning. Complete integration of quantitative indications, outer loop information, and real world display in a pictorial information channel geometry constitutes an interface with the human ability to differentiate and integrate, for optimal manual control of the aircraft.

  7. Fully automatic assignment of small molecules' NMR spectra without relying on chemical shift predictions.

    PubMed

    Castillo, Andrés M; Bernal, Andrés; Patiny, Luc; Wist, Julien

    2015-08-01

    We present a method for the automatic assignment of small molecules' NMR spectra. The method includes an automatic and novel self-consistent peak-picking routine that validates NMR peaks in each spectrum against peaks in the same or other spectra that are due to the same resonances. The auto-assignment routine is based on branch-and-bound optimization and relies predominantly on integration and correlation data; chemical shift information may be included when available to speed up the search and shorten the list of viable assignments, but in most cases tested it is not required in order to find the correct assignment. This automatic assignment method is implemented as a web-based tool that runs without any user input other than the acquired spectra. Copyright © 2015 John Wiley & Sons, Ltd.
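
    A tiny sketch of the branch-and-bound flavour of such an assignment search, using only integration data as the abstract describes; the peak and group values are invented for illustration, and the real routine's correlation constraints are omitted.

    ```python
    def assign(peaks, groups, tol=0.3, partial=None):
        """Enumerate peak-to-group assignments, pruning (branch-and-bound style)
        any branch whose integral mismatch already exceeds `tol`. Chemical
        shifts are deliberately unused, mirroring the paper's shift-free mode."""
        if partial is None:
            partial = []
        if len(partial) == len(peaks):
            yield list(partial)
            return
        i = len(partial)
        for g, expected in enumerate(groups):
            if g in partial:
                continue                              # one group per peak
            if abs(peaks[i] - expected) > tol:
                continue                              # bound: prune bad branches early
            partial.append(g)
            yield from assign(peaks, groups, tol, partial)
            partial.pop()

    peaks = [3.1, 2.0, 0.9]      # observed integrals (in protons), invented values
    groups = [2.0, 3.0, 1.0]     # expected proton counts of candidate groups
    for solution in assign(peaks, groups):
        print(["peak%d -> group%d" % (i, g) for i, g in enumerate(solution)])
    ```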

  8. When wanting to change is not enough: automatic appetitive processes moderate the effects of a brief alcohol intervention in hazardous-drinking college students.

    PubMed

    Ostafin, Brian D; Palfai, Tibor P

    2012-12-07

    Research indicates that brief motivational interventions are efficacious treatments for hazardous drinking. Little is known, however, about the psychological processes that may moderate intervention success. Based on growing evidence that drinking behavior may be influenced by automatic (nonvolitional) mental processes, the current study examined whether automatic alcohol-approach associations moderated the effect of a brief motivational intervention. Specifically, we examined whether the efficacy of a single-session intervention designed to increase motivation to reduce alcohol consumption would be moderated by the strength of participants' automatic alcohol-approach associations. Eighty-seven undergraduate hazardous drinkers participated for course credit. Participants completed an Implicit Association Test to measure automatic alcohol-approach associations, a baseline measure of readiness to change drinking behavior, and measures of alcohol involvement. Participants were then randomly assigned to either a brief (15-minute) motivational intervention or a control condition. Participants completed a measure of readiness to change drinking at the end of the first session and returned for a follow-up session six weeks later in which they reported on their drinking over the previous month. Compared with the control group, those in the intervention condition showed higher readiness to change drinking at the end of the baseline session but did not show decreased drinking quantity at follow-up. Automatic alcohol-approach associations moderated the effects of the intervention on change in drinking quantity. Among participants in the intervention group, those with weak automatic alcohol-approach associations showed greater reductions in the amount of alcohol consumed per occasion at follow-up compared with those with strong automatic alcohol-approach associations. Automatic appetitive associations with alcohol were not related with change in amount of alcohol consumed per occasion in control participants. Furthermore, among participants who showed higher readiness to change, those who exhibited weaker alcohol-approach associations showed greater reductions in drinking quantity compared with those who exhibited stronger alcohol-approach associations. The results support the idea that automatic mental processes may moderate the influence of brief motivational interventions on quantity of alcohol consumed per drinking occasion. The findings suggest that intervention efficacy may be improved by utilizing implicit measures to identify those who may be responsive to brief interventions and by developing intervention elements to address the influence of automatic processes on drinking behavior.

  9. 24 CFR 1710.506 - State/Federal filing requirements.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... fully explaining the purpose and significance of the amendment and referring to that section and page of... automatically suspended as a result of the state action. No action need be taken by the Secretary to effect the...

  10. 24 CFR 1710.506 - State/Federal filing requirements.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... fully explaining the purpose and significance of the amendment and referring to that section and page of... automatically suspended as a result of the state action. No action need be taken by the Secretary to effect the...

  11. Xenon International Automated Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-08-05

    The Xenon International Automated Control software monitors, displays status, and allows for manual operator control as well as fully automatic control of multiple commercial and PNNL designed hardware components to generate and transmit atmospheric radioxenon concentration measurements every six hours.

  12. A fully-automatic fast segmentation of the sub-basal layer nerves in corneal images.

    PubMed

    Guimarães, Pedro; Wigdahl, Jeff; Poletti, Enea; Ruggeri, Alfredo

    2014-01-01

    Corneal nerve changes have been linked to damage caused by surgical interventions or prolonged contact lens wear. Furthermore, nerve tortuosity has been shown to correlate with the severity of diabetic neuropathy. For these reasons there has been increasing interest in the analysis of these structures. In this work we propose a novel, robust, and fast fully automatic algorithm capable of tracing the sub-basal plexus nerves from human corneal confocal images. We resort to log-Gabor filters and support vector machines to trace the corneal nerves. The proposed algorithm traced most of the corneal nerves correctly (sensitivity of 0.88 ± 0.06 and false discovery rate of 0.08 ± 0.06). This performance is comparable to that of a human grader. We believe that the achieved processing time (0.661 ± 0.07 s) and tracing quality are major advantages for daily clinical practice.
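
    For illustration, a radial log-Gabor transfer function (one ingredient of such a tracing pipeline; the SVM stage and the published parameters are not reproduced) can be built directly in the frequency domain:

    ```python
    import numpy as np

    def log_gabor(shape, f0=0.1, sigma_ratio=0.55):
        """Radial log-Gabor transfer function
        G(f) = exp(-log(f/f0)^2 / (2 log(sigma_ratio)^2)); zero DC by construction."""
        fy = np.fft.fftfreq(shape[0])[:, None]
        fx = np.fft.fftfreq(shape[1])[None, :]
        f = np.hypot(fx, fy)
        f[0, 0] = 1.0                                  # placeholder to avoid log(0)
        g = np.exp(-np.log(f / f0) ** 2 / (2 * np.log(sigma_ratio) ** 2))
        g[0, 0] = 0.0                                  # remove the DC component
        return g

    img = np.random.default_rng(0).random((128, 128))  # stand-in for a confocal frame
    response = np.fft.ifft2(np.fft.fft2(img) * log_gabor(img.shape))
    print(response.shape, float(np.abs(response).max()))
    ```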

  13. Shaping electromagnetic waves using software-automatically-designed metasurfaces.

    PubMed

    Zhang, Qian; Wan, Xiang; Liu, Shuo; Yuan Yin, Jia; Zhang, Lei; Jun Cui, Tie

    2017-06-15

    We present a fully digital procedure for designing reflective coding metasurfaces to shape reflected electromagnetic waves. The design procedure is completely automatic and controlled by a personal computer. In detail, the macro coding units of the metasurface are automatically divided into several types (e.g. two types for 1-bit coding, four types for 2-bit coding, etc.), and each type of macro coding unit is formed by a discretely random arrangement of micro coding units. By combining an optimization algorithm and commercial electromagnetic software, the digital patterns of the macro coding units are optimized to possess constant phase difference for the reflected waves. The apertures of the designed reflective metasurfaces are formed by arranging the macro coding units in a certain coding sequence. To experimentally verify the performance, a coding metasurface is fabricated by automatically designing two digital 1-bit unit cells, which are arranged in an array to constitute a periodic coding metasurface generating the required four-beam radiation in specific directions. Two complicated functional metasurfaces with circularly and elliptically shaped radiation beams are realized by automatically designing 4-bit macro coding units, demonstrating the excellent performance of the automatic software designs. The proposed method provides a smart tool to realize various functional devices and systems automatically.
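
    The link between a 1-bit coding sequence and the reflected beams can be sketched numerically: up to element factors, the far-field pattern of a coded aperture is the 2D Fourier transform of its ±1 reflection coefficients. The coding pattern and sizes below are illustrative, not the fabricated design; a 2×2-block checkerboard produces four symmetric beams of the kind the abstract mentions.

    ```python
    import numpy as np

    # 1-bit coding: '0' cells reflect with phase 0, '1' cells with phase pi.
    N, d = 16, 0.5                                   # cells per side, spacing (wavelengths)
    ix, iy = np.indices((N, N))
    code = ((ix // 2) + (iy // 2)) % 2               # 2x2-block checkerboard coding
    reflection = np.exp(1j * np.pi * code)           # 0 -> +1, 1 -> -1

    # Zero-padded 2D FFT approximates the continuous array factor.
    pattern = np.abs(np.fft.fftshift(np.fft.fft2(reflection, s=(256, 256))))
    freq = np.fft.fftshift(np.fft.fftfreq(256))      # cycles per cell
    u = freq / d                                     # direction cosines
    i, j = np.unravel_index(pattern.argmax(), pattern.shape)
    print("strongest beam near (u, v) =", (round(u[i], 2), round(u[j], 2)))  # ~(+-0.5, +-0.5)
    ```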

  14. Automation of Endmember Pixel Selection in SEBAL/METRIC Model

    NASA Astrophysics Data System (ADS)

    Bhattarai, N.; Quackenbush, L. J.; Im, J.; Shaw, S. B.

    2015-12-01

    The commonly applied surface energy balance for land (SEBAL) and its variant, mapping evapotranspiration (ET) at high resolution with internalized calibration (METRIC), require manual selection of endmember (i.e. hot and cold) pixels to calibrate sensible heat flux. Current approaches for automating this process are based on statistical methods and do not appear to be robust under varying climate conditions and seasons. In this paper, we introduce a new approach based on simple machine learning tools and search algorithms that provides an automatic and time-efficient way of identifying endmember pixels for use in these models. The fully automated models were applied to over 100 cloud-free Landsat images, with each image covering several eddy covariance flux sites in Florida and Oklahoma. Observed land surface temperatures at automatically identified hot and cold pixels were within 0.5% of those from pixels manually identified by an experienced operator (coefficient of determination, R2, ≥ 0.92, Nash-Sutcliffe efficiency, NSE, ≥ 0.92, and root mean squared error, RMSE, ≤ 1.67 K). Daily ET estimates derived from the automated SEBAL and METRIC models were in good agreement with their manual counterparts (e.g., NSE ≥ 0.91 and RMSE ≤ 0.35 mm day-1). Automated and manual pixel selection resulted in similar estimates of observed ET across all sites. The proposed approach should reduce the time demands of applying SEBAL/METRIC models and allow for their more widespread and frequent use. This automation can also reduce potential bias introduced by an inexperienced operator and extend the domain of the models to new users.
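
    As a toy stand-in for the proposed machine-learning selection (which this sketch does not reproduce), a percentile-based ranking of candidate hot and cold pixels from LST and NDVI arrays illustrates the endmember idea; all thresholds and the synthetic scene are assumptions.

    ```python
    import numpy as np

    def endmember_pixels(lst, ndvi, frac=0.01):
        """Percentile proxy: cold = coolest well-vegetated pixels,
        hot = warmest bare-soil pixels."""
        cold_pool = ndvi > np.quantile(ndvi, 0.95)      # dense vegetation
        hot_pool = ndvi < np.quantile(ndvi, 0.05)       # bare ground
        cold = lst[cold_pool] <= np.quantile(lst[cold_pool], frac)
        hot = lst[hot_pool] >= np.quantile(lst[hot_pool], 1 - frac)
        return lst[cold_pool][cold].mean(), lst[hot_pool][hot].mean()

    rng = np.random.default_rng(1)
    ndvi = rng.uniform(0.05, 0.9, (512, 512))
    lst = 320 - 25 * ndvi + rng.normal(0, 1.5, ndvi.shape)   # warmer where bare
    t_cold, t_hot = endmember_pixels(lst, ndvi)
    print(f"cold pixel LST ~{t_cold:.1f} K, hot pixel LST ~{t_hot:.1f} K")
    ```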

  15. Deep residual networks for automatic segmentation of laparoscopic videos of the liver

    NASA Astrophysics Data System (ADS)

    Gibson, Eli; Robu, Maria R.; Thompson, Stephen; Edwards, P. Eddie; Schneider, Crispin; Gurusamy, Kurinchi; Davidson, Brian; Hawkes, David J.; Barratt, Dean C.; Clarkson, Matthew J.

    2017-03-01

    Motivation: For primary and metastatic liver cancer patients undergoing liver resection, a laparoscopic approach can reduce recovery times and morbidity while offering equivalent curative results; however, only about 10% of tumours reside in anatomical locations that are currently accessible for laparoscopic resection. Augmenting laparoscopic video with registered vascular anatomical models from pre-procedure imaging could support using laparoscopy in a wider population. Segmentation of liver tissue on laparoscopic video supports the robust registration of anatomical liver models by filtering out false anatomical correspondences between pre-procedure and intra-procedure images. In this paper, we present a convolutional neural network (CNN) approach to liver segmentation in laparoscopic liver procedure videos. Method: We defined a CNN architecture comprising fully-convolutional deep residual networks with multi-resolution loss functions. The CNN was trained in a leave-one-patient-out cross-validation on 2050 video frames from 6 liver resections and 7 laparoscopic staging procedures, and evaluated using the Dice score. Results: The CNN yielded segmentations with Dice scores ≥ 0.95 for the majority of images; however, the inter-patient variability in median Dice score was substantial. Four failure modes were identified from low scoring segmentations: minimal visible liver tissue, inter-patient variability in liver appearance, automatic exposure correction, and pathological liver tissue that mimics non-liver tissue appearance. Conclusion: CNNs offer a feasible approach for accurately segmenting liver from other anatomy on laparoscopic video, but additional data or computational advances are necessary to address challenges due to the high inter-patient variability in liver appearance.
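
    The Dice score used for evaluation is simple to state and compute: Dice(A, B) = 2|A ∩ B| / (|A| + |B|), equal to 1.0 for a perfect match. A minimal implementation on synthetic masks:

    ```python
    import numpy as np

    def dice(pred, truth):
        """Dice similarity: 2|A n B| / (|A| + |B|)."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        denom = pred.sum() + truth.sum()
        return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

    a = np.zeros((64, 64), bool); a[10:40, 10:40] = True
    b = np.zeros((64, 64), bool); b[12:42, 12:42] = True
    print(round(dice(a, b), 3))   # overlap of two shifted squares
    ```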

  16. 3D mapping of airway wall thickening in asthma with MSCT: a level set approach

    NASA Astrophysics Data System (ADS)

    Fetita, Catalin; Brillet, Pierre-Yves; Hartley, Ruth; Grenier, Philippe A.; Brightling, Christopher

    2014-03-01

    Assessing the airway wall thickness in multi slice computed tomography (MSCT) as an image marker for airway disease phenotyping, such as asthma and COPD, is a current trend and challenge for the scientific community working in lung imaging. This paper addresses the problem from a different point of view: considering the expected wall thickness-to-lumen-radius ratio for a normal subject as known and constant throughout the whole airway tree, the aim is to build up a 3D map of airway wall regions of larger thickness and to define an overall score able to highlight a pathological status. In this respect, the local dimension (caliber) of the previously segmented airway lumen is obtained at each point by exploiting the granulometry morphological operator. A level set function is defined based on this caliber information and on the expected wall thickness ratio, which yields a good estimate of the airway wall throughout all segmented lumen generations. Next, the vascular (or mediastinal dense tissue) contact regions are automatically detected and excluded from analysis. For the remaining airway wall border points, the real wall thickness is estimated based on tissue density analysis in the airway radial direction; thick wall points are highlighted on a 3D representation of the airways and several quantification scores are defined. The proposed approach is fully automatic and was evaluated (proof of concept) on a patient selection coming from different databases including mild and severe asthmatics and normal cases. This preliminary evaluation confirms the discriminative power of the proposed approach regarding different phenotypes, and the evaluation is currently being extended to larger cohorts.

  17. Automatic spatiotemporal matching of detected pleural thickenings

    NASA Astrophysics Data System (ADS)

    Chaisaowong, Kraisorn; Keller, Simon Kai; Kraus, Thomas

    2014-01-01

    Pleural thickenings can be found in the lungs of asbestos-exposed patients. Non-invasive diagnosis, including CT imaging, can detect aggressive malignant pleural mesothelioma in its early stage. In order to create a quantitative documentation of automatically detected pleural thickenings over time, the differences in volume and thickness of the detected thickenings have to be calculated. Physicians usually estimate the change of each thickening via visual comparison, which provides neither quantitative nor qualitative measures. In this work, automatic spatiotemporal matching techniques for the detected pleural thickenings at two points in time, based on semi-automatic registration, have been developed, implemented, and tested so that the same thickening can be compared fully automatically. As a result, the mapping technique using principal components analysis turns out to be advantageous over the feature-based mapping using the centroid and mean Hounsfield units of each thickening, since the sensitivity improved from 42.19% to 98.46%, while the accuracy of the feature-based mapping is only slightly higher (84.38% vs. 76.19%).
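
    A sketch of the principal-components matching idea: summarise each thickening's voxel cloud by its centroid and principal-axis scales, then match baseline to follow-up thickenings one-to-one by minimising signature distance. The Hungarian solver and the synthetic data are illustrative choices, not the published implementation.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def pca_signature(points):
        """Centroid plus principal-axis scales of one thickening's voxel cloud."""
        c = points.mean(0)
        eigvals = np.linalg.eigvalsh(np.cov((points - c).T))
        return np.concatenate([c, np.sqrt(np.maximum(eigvals, 0))])

    def match_thickenings(clouds_t0, clouds_t1):
        """Optimal one-to-one matching of PCA signatures across the two scans."""
        f0 = np.array([pca_signature(c) for c in clouds_t0])
        f1 = np.array([pca_signature(c) for c in clouds_t1])
        cost = np.linalg.norm(f0[:, None, :] - f1[None, :, :], axis=-1)
        return linear_sum_assignment(cost)

    rng = np.random.default_rng(2)
    t0 = [rng.normal(center, 1.0, (50, 3)) for center in ([0, 0, 0], [20, 5, 3])]
    t1 = [cloud + 0.5 for cloud in reversed(t0)]      # same lesions, reordered
    rows, cols = match_thickenings(t0, t1)
    print(list(zip(rows, cols)))                      # [(0, 1), (1, 0)]
    ```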

  18. Automatic segmentation of the glenohumeral cartilages from magnetic resonance images.

    PubMed

    Neubert, A; Yang, Z; Engstrom, C; Xia, Y; Strudwick, M W; Chandra, S S; Fripp, J; Crozier, S

    2016-10-01

    Magnetic resonance (MR) imaging plays a key role in investigating early degenerative disorders and traumatic injuries of the glenohumeral cartilages. Subtle morphometric and biochemical changes of potential relevance to clinical diagnosis, treatment planning, and evaluation can be assessed from measurements derived from in vivo MR segmentation of the cartilages. However, segmentation of the glenohumeral cartilages, using approaches spanning manual to automated methods, is technically challenging, due to their thin, curved structure and overlapping intensities of surrounding tissues. Automatic segmentation of the glenohumeral cartilages from MR imaging is not at the same level compared to the weight-bearing knee and hip joint cartilages despite the potential applications with respect to clinical investigation of shoulder disorders. In this work, the authors present a fully automated segmentation method for the glenohumeral cartilages using MR images of healthy shoulders. The method involves automated segmentation of the humerus and scapula bones using 3D active shape models, the extraction of the expected bone-cartilage interface, and cartilage segmentation using a graph-based method. The cartilage segmentation uses localization, patient specific tissue estimation, and a model of the cartilage thickness variation. The accuracy of this method was experimentally validated using a leave-one-out scheme on a database of MR images acquired from 44 asymptomatic subjects with a true fast imaging with steady state precession sequence on a 3 T scanner (Siemens Trio) using a dedicated shoulder coil. The automated results were compared to manual segmentations from two experts (an experienced radiographer and an experienced musculoskeletal anatomist) using the Dice similarity coefficient (DSC) and mean absolute surface distance (MASD) metrics. Accurate and precise bone segmentations were achieved with mean DSC of 0.98 and 0.93 for the humeral head and glenoid fossa, respectively. Mean DSC scores of 0.74 and 0.72 were obtained for the humeral and glenoid cartilage volumes, respectively. The manual interobserver reliability evaluated by DSC was 0.80 ± 0.03 and 0.76 ± 0.04 for the two cartilages, implying that the automated results were within an acceptable 10% difference. The MASD between the automatic and the corresponding manual cartilage segmentations was less than 0.4 mm (previous studies reported mean cartilage thickness of 1.3 mm). This work shows the feasibility of volumetric segmentation and separation of the glenohumeral cartilages from MR images. To their knowledge, this is the first fully automated algorithm for volumetric segmentation of the individual glenohumeral cartilages from MR images. The approach was validated against manual segmentations from experienced analysts. In future work, the approach will be validated on imaging datasets acquired with various MR contrasts in patients.

  19. Automatic segmentation of the glenohumeral cartilages from magnetic resonance images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neubert, A., E-mail: ales.neubert@csiro.au

    Purpose: Magnetic resonance (MR) imaging plays a key role in investigating early degenerative disorders and traumatic injuries of the glenohumeral cartilages. Subtle morphometric and biochemical changes of potential relevance to clinical diagnosis, treatment planning, and evaluation can be assessed from measurements derived from in vivo MR segmentation of the cartilages. However, segmentation of the glenohumeral cartilages, using approaches spanning manual to automated methods, is technically challenging, due to their thin, curved structure and overlapping intensities of surrounding tissues. Automatic segmentation of the glenohumeral cartilages from MR imaging is not at the same level compared to the weight-bearing knee and hip joint cartilages despite the potential applications with respect to clinical investigation of shoulder disorders. In this work, the authors present a fully automated segmentation method for the glenohumeral cartilages using MR images of healthy shoulders. Methods: The method involves automated segmentation of the humerus and scapula bones using 3D active shape models, the extraction of the expected bone–cartilage interface, and cartilage segmentation using a graph-based method. The cartilage segmentation uses localization, patient specific tissue estimation, and a model of the cartilage thickness variation. The accuracy of this method was experimentally validated using a leave-one-out scheme on a database of MR images acquired from 44 asymptomatic subjects with a true fast imaging with steady state precession sequence on a 3 T scanner (Siemens Trio) using a dedicated shoulder coil. The automated results were compared to manual segmentations from two experts (an experienced radiographer and an experienced musculoskeletal anatomist) using the Dice similarity coefficient (DSC) and mean absolute surface distance (MASD) metrics. Results: Accurate and precise bone segmentations were achieved with mean DSC of 0.98 and 0.93 for the humeral head and glenoid fossa, respectively. Mean DSC scores of 0.74 and 0.72 were obtained for the humeral and glenoid cartilage volumes, respectively. The manual interobserver reliability evaluated by DSC was 0.80 ± 0.03 and 0.76 ± 0.04 for the two cartilages, implying that the automated results were within an acceptable 10% difference. The MASD between the automatic and the corresponding manual cartilage segmentations was less than 0.4 mm (previous studies reported mean cartilage thickness of 1.3 mm). Conclusions: This work shows the feasibility of volumetric segmentation and separation of the glenohumeral cartilages from MR images. To their knowledge, this is the first fully automated algorithm for volumetric segmentation of the individual glenohumeral cartilages from MR images. The approach was validated against manual segmentations from experienced analysts. In future work, the approach will be validated on imaging datasets acquired with various MR contrasts in patients.

  20. A repeated-measures analysis of the effects of soft tissues on wrist range of motion in the extant phylogenetic bracket of dinosaurs: Implications for the functional origins of an automatic wrist folding mechanism in Crocodilia.

    PubMed

    Hutson, Joel David; Hutson, Kelda Nadine

    2014-07-01

    A recent study hypothesized that avian-like wrist folding in quadrupedal dinosaurs could have aided their distinctive style of locomotion with semi-pronated and therefore medially facing palms. However, soft tissues that automatically guide avian wrist folding rarely fossilize, and automatic wrist folding of unknown function in extant crocodilians has not been used to test this hypothesis. Therefore, an investigation of the relative contributions of soft tissues to wrist range of motion (ROM) in the extant phylogenetic bracket of dinosaurs, and the quadrupedal function of crocodilian wrist folding, could inform these questions. Here, we repeatedly measured wrist ROM in degrees through fully fleshed, skinned, minus muscles/tendons, minus ligaments, and skeletonized stages in the American alligator Alligator mississippiensis and the ostrich Struthio camelus. The effects of dissection treatment and observer were statistically significant for alligator wrist folding and ostrich wrist flexion, but not ostrich wrist folding. Final skeletonized wrist folding ROM was higher than (ostrich) or equivalent to (alligator) initial fully fleshed ROM, while final ROM was lower than initial ROM for ostrich wrist flexion. These findings suggest that, unlike the hinge/ball and socket-type elbow and shoulder joints in these archosaurs, ROM within gliding/planar diarthrotic joints is more restricted to the extent of articular surfaces. The alligator data indicate that the crocodilian wrist mechanism functions to automatically lock their semi-pronated palms into a rigid column, which supports the hypothesis that this palmar orientation necessitated soft tissue stiffening mechanisms in certain dinosaurs, although ROM-restricted articulations argue against the presence of an extensive automatic mechanism. Anat Rec, 297:1228-1249, 2014. © 2014 Wiley Periodicals, Inc.

  1. Flight test results from the CV990 simulated space shuttle during unpowered automatic approaches and landings

    NASA Technical Reports Server (NTRS)

    Edwards, F. G.; Foster, J. D.

    1973-01-01

    Unpowered automatic approaches and landings with a CV990 aircraft were conducted to study navigation, guidance, and control problems associated with terminal area approach and landing for the space shuttle. The flight tests were designed to study from 11,300 m to touchdown the performance of a navigation and guidance concept which utilized blended radio/inertial navigation using VOR, DME, and ILS as the ground navigation aids. In excess of fifty automatic approaches and landings were conducted. Preliminary results indicate that this concept may provide sufficient accuracy to accomplish automatic landing of the shuttle orbiter without air-breathing engines on a conventional size runway.

  2. Equation-oriented specification of neural models for simulations

    PubMed Central

    Stimberg, Marcel; Goodman, Dan F. M.; Benichoux, Victor; Brette, Romain

    2013-01-01

    Simulating biological neuronal networks is a core method of research in computational neuroscience. A full specification of such a network model includes a description of the dynamics and state changes of neurons and synapses, as well as the synaptic connectivity patterns and the initial values of all parameters. A standard approach in neuronal modeling software is to build network models based on a library of pre-defined components and mechanisms; if a model component does not yet exist, it has to be defined in a special-purpose or general low-level language and potentially be compiled and linked with the simulator. Here we propose an alternative approach that allows flexible definition of models by writing textual descriptions based on mathematical notation. We demonstrate that this approach allows the definition of a wide range of models with minimal syntax. Furthermore, such explicit model descriptions allow the generation of executable code for various target languages and devices, since the description is not tied to an implementation. Finally, this approach also has advantages for readability and reproducibility, because the model description is fully explicit, and because it can be automatically parsed and transformed into formatted descriptions. The presented approach has been implemented in the Brian2 simulator. PMID:24550820
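
    Since the described approach is implemented in the Brian2 simulator, a short example in Brian2's actual syntax shows the equation-oriented style: the leaky integrate-and-fire population below is defined entirely by a textual equation string rather than a pre-built neuron component. The specific model and parameter values are our own illustrative choices.

    ```python
    from brian2 import NeuronGroup, SpikeMonitor, run, ms, mV

    # The model is the equation string itself; per-neuron parameters are
    # declared inline with their physical dimensions.
    eqs = """
    dv/dt = (v_rest - v + I) / tau : volt
    I : volt
    tau : second
    v_rest : volt
    """
    group = NeuronGroup(3, eqs, threshold="v > -50*mV", reset="v = v_rest",
                        method="exact")
    group.v_rest = -70 * mV
    group.v = -70 * mV
    group.tau = 10 * ms
    group.I = [15, 25, 35] * mV      # constant drive per neuron

    spikes = SpikeMonitor(group)
    run(200 * ms)
    print(spikes.count)              # spike counts rise with the drive
    ```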

  3. Acoustic emission source location in complex structures using full automatic delta T mapping technique

    NASA Astrophysics Data System (ADS)

    Al-Jumaili, Safaa Kh.; Pearson, Matthew R.; Holford, Karen M.; Eaton, Mark J.; Pullin, Rhys

    2016-05-01

    An easy to use, fast to apply, cost-effective, and very accurate non-destructive testing (NDT) technique for damage localisation in complex structures is key for the uptake of structural health monitoring (SHM) systems. Acoustic emission (AE) is a viable technique that can be used for SHM, and one of its most attractive features is the ability to locate AE sources. The time of arrival (TOA) technique is traditionally used to locate AE sources, and relies on the assumption of a constant wave speed within the material and an uninterrupted propagation path between the source and the sensor. In complex structural geometries and complex materials such as composites, this assumption is no longer valid. Delta T mapping was developed in Cardiff in order to overcome these limitations; this technique uses artificial sources on an area of interest to create training maps, which are used to locate subsequent AE sources. However, operator expertise is required to select the best data from the training maps and to choose the correct parameter to locate the sources, which can be a time consuming process. This paper presents a new and improved fully automatic delta T mapping technique where a clustering algorithm is used to automatically identify and select the highly correlated events at each grid point whilst the "Minimum Difference" approach is used to determine the source location. This removes the requirement for operator expertise, saving time and preventing human errors. A thorough assessment was conducted to evaluate the performance and the robustness of the new technique. In the initial test, the results showed an excellent reduction in running time as well as improved accuracy of locating AE sources, as a result of the automatic selection of the training data. Furthermore, because the process is performed automatically, the technique is now very simple and reliable, as it removes the potential sources of error related to manual manipulation.
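
    The "Minimum Difference" location step can be sketched compactly: each grid node of the training map stores expected arrival-time differences for the sensor pairs, and the estimated source is the node that minimises the difference to the observed values. The grid, sensor layout, and constant-speed forward model below are illustrative assumptions used only to exercise the search.

    ```python
    import numpy as np

    def locate(delta_t_maps, observed):
        """Minimum-difference search over a trained delta-T grid."""
        err = np.linalg.norm(delta_t_maps - observed[None, None, :], axis=-1)
        return np.unravel_index(err.argmin(), err.shape)

    # Hypothetical training map on a 50 x 50 grid with 4 sensors, built from a
    # plain constant-speed model (real maps come from artificial AE sources).
    xx, yy = np.meshgrid(np.arange(50), np.arange(50), indexing="ij")
    sensors = np.array([[0, 0], [49, 0], [0, 49], [49, 49]])
    tof = np.stack([np.hypot(xx - sx, yy - sy) for sx, sy in sensors], -1)
    maps = tof[..., 1:] - tof[..., :1]            # differences relative to sensor 0

    true_source = (30, 12)
    obs = maps[true_source]                       # noise-free observation
    print(locate(maps, obs))                      # -> (30, 12)
    ```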

  4. Fully automated urban traffic system

    NASA Technical Reports Server (NTRS)

    Dobrotin, B. M.; Hansen, G. R.; Peng, T. K. C.; Rennels, D. A.

    1977-01-01

    The replacement of the driver with an automatic system capable of guiding and routing a vehicle with a human's capability of responding to changing traffic demands was discussed. The problem was divided into four technological areas: guidance, routing, computing, and communications. It was determined that the latter three areas were being developed independently of any need for fully automated urban traffic. A guidance system that would meet system requirements was not being developed but was technically feasible.

  5. Building Extraction from Remote Sensing Data Using Fully Convolutional Networks

    NASA Astrophysics Data System (ADS)

    Bittner, K.; Cui, S.; Reinartz, P.

    2017-05-01

    Building detection and footprint extraction are highly demanded for many remote sensing applications. Though most previous works have shown promising results, the automatic extraction of building footprints still remains a nontrivial topic, especially in complex urban areas. Recently developed extensions of the CNN framework made it possible to perform dense pixel-wise classification of input images. Based on these abilities we propose a methodology which automatically generates a full resolution binary building mask out of a Digital Surface Model (DSM) using a Fully Convolutional Network (FCN) architecture. The advantage of using depth information is that it provides geometrical silhouettes and allows a better separation of buildings from the background, as well as invariance to illumination and color variations. The proposed framework has two main steps. Firstly, the FCN is trained on a large set of patches consisting of normalized DSM (nDSM) as inputs and available ground truth building masks as target outputs. Secondly, the generated predictions from the FCN are viewed as unary terms for a fully connected Conditional Random Field (FCRF), which enables us to create the final binary building mask. A series of experiments demonstrates that our methodology is able to extract accurate building footprints which are close to the buildings' original shapes to a high degree. The quantitative and qualitative analysis shows significant improvements of the results in contrast to the multi-layer fully connected network from our previous work.

  6. AWSCS-A System to Evaluate Different Approaches for the Automatic Composition and Execution of Web Services Flows

    PubMed Central

    Tardiole Kuehne, Bruno; Estrella, Julio Cezar; Nunes, Luiz Henrique; Martins de Oliveira, Edvard; Hideo Nakamura, Luis; Gomes Ferreira, Carlos Henrique; Carlucci Santana, Regina Helena; Reiff-Marganiec, Stephan; Santana, Marcos José

    2015-01-01

    This paper proposes a system named AWSCS (Automatic Web Service Composition System) to evaluate different approaches for the automatic composition of Web services, based on QoS parameters measured at execution time. The AWSCS is a system that implements different approaches for automatic composition of Web services and also executes the resulting flows from these approaches. To demonstrate the results of this paper, a scenario was developed in which empirical flows were built to show the operation of AWSCS, since algorithms for automatic composition are not readily available to test. The results allow us to study the behaviour of running composite Web services when flows with the same functionality but different problem-solving strategies are compared. Furthermore, we observed that both the amount of load applied to the running system and the type of load submitted are important factors in defining which approach to Web service composition can achieve the best performance in production. PMID:26068216

  7. AWSCS-A System to Evaluate Different Approaches for the Automatic Composition and Execution of Web Services Flows.

    PubMed

    Tardiole Kuehne, Bruno; Estrella, Julio Cezar; Nunes, Luiz Henrique; Martins de Oliveira, Edvard; Hideo Nakamura, Luis; Gomes Ferreira, Carlos Henrique; Carlucci Santana, Regina Helena; Reiff-Marganiec, Stephan; Santana, Marcos José

    2015-01-01

    This paper proposes a system named AWSCS (Automatic Web Service Composition System) to evaluate different approaches for automatic composition of Web services, based on QoS parameters that are measured at execution time. The AWSCS is a system to implement different approaches for automatic composition of Web services and also to execute the resulting flows from these approaches. To demonstrate the results of this paper, a scenario was developed in which empirical flows were built to demonstrate the operation of AWSCS, since algorithms for automatic composition are not readily available for testing. The results allow us to study the behaviour of running composite Web services when flows with the same functionality but different problem-solving strategies are compared. Furthermore, we observed that the load applied to the running system, as well as the type of load submitted, is an important factor in defining which approach to Web service composition can achieve the best performance in production.

  8. Using Psychometric Technology in Educational Assessment: The Case of a Schema-Based Isomorphic Approach to the Automatic Generation of Quantitative Reasoning Items

    ERIC Educational Resources Information Center

    Arendasy, Martin; Sommer, Markus

    2007-01-01

    This article deals with the investigation of the psychometric quality and construct validity of algebra word problems generated by means of a schema-based version of the automatic min-max approach. Based on a review of the research literature on algebra word problem solving and automatic item generation, this new approach is introduced as a…

  9. New York State Thruway Authority automatic vehicle classification (AVC) : research report.

    DOT National Transportation Integrated Search

    2008-03-31

    In December 2007, the N.Y.S. Thruway Authority (Thruway) concluded a Federally funded research effort to study technology and develop a design for retrofitting devices required in implementing a fully automated vehicle classification system i...

  10. Automatic Fastening Large Structures: a New Approach

    NASA Technical Reports Server (NTRS)

    Lumley, D. F.

    1985-01-01

    The external tank (ET) intertank structure for the space shuttle, a 27.5 ft diameter, 22.5 ft long, externally stiffened, mechanically fastened skin-stringer-frame structure, was a labor-intensive manual structure built on a modified Saturn tooling position. A new approach was developed based on half-section subassemblies. The heart of this manufacturing approach will be a 33 ft high vertical automatic riveting system with a 28 ft rotary positioner coming on-line in mid 1985. The Automatic Riveting System incorporates many of the latest automatic riveting technologies. Key features include: vertical columns with two sets of independently operating CNC drill-riveting heads; the capability to drill, insert, and upset any one-piece fastener up to 3/8 inch diameter, including slugs, without displacing the workpiece; an offset bucking ram with programmable rotation and deep retraction; a vision system for automatic parts-program re-synchronization and part edge margin control; and an automatic rivet selection/handling system.

  11. Real-time automatic registration in optical surgical navigation

    NASA Astrophysics Data System (ADS)

    Lin, Qinyong; Yang, Rongqian; Cai, Ken; Si, Xuan; Chen, Xiuwen; Wu, Xiaoming

    2016-05-01

    An image-guided surgical navigation system requires the improvement of the patient-to-image registration time to enhance the convenience of the registration procedure. A critical step in achieving this aim is performing a fully automatic patient-to-image registration. This study reports on a design of custom fiducial markers and the performance of a real-time automatic patient-to-image registration method using these markers on the basis of an optical tracking system for rigid anatomy. The custom fiducial markers are designed to be automatically localized in both patient and image spaces. An automatic localization method is performed by registering a point cloud sampled from the three dimensional (3D) pedestal model surface of a fiducial marker to each pedestal of fiducial markers searched in image space. A head phantom is constructed to estimate the performance of the real-time automatic registration method under four fiducial configurations. The head phantom experimental results demonstrate that the real-time automatic registration method is more convenient, rapid, and accurate than the manual method. The time required for each registration is approximately 0.1 s. The automatic localization method precisely localizes the fiducial markers in image space. The averaged target registration error for the four configurations is approximately 0.7 mm. The automatic registration performance is independent of the positions relative to the tracking system and the movement of the patient during the operation.
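
    The point-based core of such a patient-to-image registration can be written compactly. The sketch below is a hedged illustration, not the authors' implementation: given matched 3D fiducial centers in patient and image space, the closed-form Kabsch/SVD solution recovers the rigid transform. The marker coordinates are invented for the example.

        import numpy as np

        def rigid_register(src, dst):
            """Return R, t minimizing ||R @ src_i + t - dst_i|| over matched 3D points."""
            c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
            H = (src - c_src).T @ (dst - c_dst)           # 3x3 cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = c_dst - R @ c_src
            return R, t

        # Made-up fiducial centers (mm): image space is a rotated, shifted copy.
        patient = np.array([[0, 0, 0], [50, 0, 0], [0, 60, 0], [0, 0, 40.0]])
        rot = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]])
        image = patient @ rot.T + np.array([5.0, 2.0, 1.0])

        R, t = rigid_register(patient, image)
        fre = np.linalg.norm((patient @ R.T + t) - image, axis=1).mean()
        print(f"mean fiducial registration error: {fre:.2e} mm")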

  12. Automatic Testcase Generation for Flight Software

    NASA Technical Reports Server (NTRS)

    Bushnell, David Henry; Pasareanu, Corina; Mackey, Ryan M.

    2008-01-01

    The TacSat3 project is applying Integrated Systems Health Management (ISHM) technologies to an Air Force spacecraft for operational evaluation in space. The experiment will demonstrate the effectiveness and cost of ISHM and vehicle systems management (VSM) technologies through onboard operation for extended periods. We present two approaches to automatic testcase generation for ISHM: 1) A blackbox approach that views the system as a blackbox, and uses a grammar-based specification of the system's inputs to automatically generate *all* inputs that satisfy the specifications (up to prespecified limits); these inputs are then used to exercise the system. 2) A whitebox approach that performs analysis and testcase generation directly on a representation of the internal behaviour of the system under test. The enabling technologies for both these approaches are model checking and symbolic execution, as implemented in the Ames' Java PathFinder (JPF) tool suite. Model checking is an automated technique for software verification. Unlike simulation and testing which check only some of the system executions and therefore may miss errors, model checking exhaustively explores all possible executions. Symbolic execution evaluates programs with symbolic rather than concrete values and represents variable values as symbolic expressions. We are applying the blackbox approach to generating input scripts for the Spacecraft Command Language (SCL) from Interface and Control Systems. SCL is an embedded interpreter for controlling spacecraft systems. TacSat3 will be using SCL as the controller for its ISHM systems. We translated the SCL grammar into a program that outputs scripts conforming to the grammars. Running JPF on this program generates all legal input scripts up to a prespecified size. Script generation can also be targeted to specific parts of the grammar of interest to the developers. These scripts are then fed to the SCL Executive. ICS's in-house coverage tools will be run to measure code coverage. Because the scripts exercise all parts of the grammar, we expect them to provide high code coverage. This blackbox approach is suitable for systems for which we do not have access to the source code. We are applying whitebox test generation to the Spacecraft Health INference Engine (SHINE) that is part of the ISHM system. In TacSat3, SHINE will execute an on-board knowledge base for fault detection and diagnosis. SHINE converts its knowledge base into optimized C code which runs onboard TacSat3. SHINE can translate its rules into an intermediate representation (Java) suitable for analysis with JPF. JPF will analyze SHINE's Java output using symbolic execution, producing testcases that can provide either complete or directed coverage of the code. Automatically generated test suites can provide full code coverage and be quickly regenerated when code changes. Because our tools analyze executable code, they fully cover the delivered code, not just models of the code. This approach also provides a way to generate tests that exercise specific sections of code under specific preconditions. This capability gives us more focused testing of specific sections of code.
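
    The blackbox idea of enumerating every input a grammar admits, up to a size bound, can be illustrated in a few lines. The toy command grammar below is an assumption for illustration only, not the real SCL grammar, and the simple recursive enumeration stands in for what JPF's exhaustive exploration achieves.

        # Productions: nonterminal -> list of alternative symbol sequences.
        GRAMMAR = {
            "script": [["cmd"], ["cmd", "script"]],
            "cmd":    [["set", "name", "value"], ["get", "name"]],
            "name":   [["heater"], ["valve"]],
            "value":  [["on"], ["off"]],
        }

        def expand(symbols, depth):
            """Yield all terminal strings derivable within a depth bound."""
            if depth < 0:
                return
            if not symbols:
                yield []
                return
            head, rest = symbols[0], symbols[1:]
            if head not in GRAMMAR:                       # terminal token
                for tail in expand(rest, depth):
                    yield [head] + tail
            else:                                         # expand each alternative
                for production in GRAMMAR[head]:
                    for tail in expand(production + rest, depth - 1):
                        yield tail

        scripts = [" ".join(s) for s in expand(["script"], depth=4)]
        print(len(scripts), "legal scripts, e.g.:", scripts[0])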

  13. Automatic weld torch guidance control system

    NASA Technical Reports Server (NTRS)

    Smaith, H. E.; Wall, W. A.; Burns, M. R., Jr.

    1982-01-01

    A highly reliable, fully digital, closed-circuit television, optical-type automatic weld seam tracking control system was developed. This automatic tracking equipment is used to reduce weld tooling costs and increase overall automatic welding reliability. The system utilizes a charge injection device digital camera which has 60,512 individual pixels as the light sensing elements. Through conventional scanning means, each pixel in the focal plane is sequentially scanned, the light level signal digitized, and an 8-bit word transmitted to scratch pad memory. From memory, the microprocessor performs an analysis of the digital signal and computes the tracking error. Lastly, the corrective signal is transmitted to a cross-seam actuator digital drive motor controller to complete the closed-loop feedback tracking system. This weld seam tracking control system is capable of a tracking accuracy of ±0.2 mm or better. As configured, the system is applicable to square butt, V-groove, and lap joint weldments.

  14. Automatic Segmentation of the Eye in 3D Magnetic Resonance Imaging: A Novel Statistical Shape Model for Treatment Planning of Retinoblastoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciller, Carlos, E-mail: carlos.cillerruiz@unil.ch; Ophthalmic Technology Group, ARTORG Center of the University of Bern, Bern; Centre d’Imagerie BioMédicale, University of Lausanne, Lausanne

    Purpose: Proper delineation of ocular anatomy in 3-dimensional (3D) imaging is a big challenge, particularly when developing treatment plans for ocular diseases. Magnetic resonance imaging (MRI) is presently used in clinical practice for diagnosis confirmation and treatment planning for treatment of retinoblastoma in infants, where it serves as a source of information, complementary to the fundus or ultrasonographic imaging. Here we present a framework to fully automatically segment the eye anatomy for MRI based on 3D active shape models (ASM), and we validate the results and present a proof of concept to automatically segment pathological eyes. Methods and Materials: Manual and automatic segmentation were performed in 24 images of healthy children's eyes (3.29 ± 2.15 years of age). Imaging was performed using a 3-T MRI scanner. The ASM consists of the lens, the vitreous humor, the sclera, and the cornea. The model was fitted by first automatically detecting the position of the eye center, the lens, and the optic nerve, and then aligning the model and fitting it to the patient. We validated our segmentation method by using a leave-one-out cross-validation. The segmentation results were evaluated by measuring the overlap, using the Dice similarity coefficient (DSC) and the mean distance error. Results: We obtained a DSC of 94.90 ± 2.12% for the sclera and the cornea, 94.72 ± 1.89% for the vitreous humor, and 85.16 ± 4.91% for the lens. The mean distance error was 0.26 ± 0.09 mm. The entire process took 14 seconds on average per eye. Conclusion: We provide a reliable and accurate tool that enables clinicians to automatically segment the sclera, the cornea, the vitreous humor, and the lens, using MRI. We additionally present a proof of concept for fully automatically segmenting eye pathology. This tool reduces the time needed for eye shape delineation and thus can help clinicians when planning eye treatment and confirming the extent of the tumor.

  15. Automatic Segmentation of the Eye in 3D Magnetic Resonance Imaging: A Novel Statistical Shape Model for Treatment Planning of Retinoblastoma.

    PubMed

    Ciller, Carlos; De Zanet, Sandro I; Rüegsegger, Michael B; Pica, Alessia; Sznitman, Raphael; Thiran, Jean-Philippe; Maeder, Philippe; Munier, Francis L; Kowal, Jens H; Cuadra, Meritxell Bach

    2015-07-15

    Proper delineation of ocular anatomy in 3-dimensional (3D) imaging is a big challenge, particularly when developing treatment plans for ocular diseases. Magnetic resonance imaging (MRI) is presently used in clinical practice for diagnosis confirmation and treatment planning for treatment of retinoblastoma in infants, where it serves as a source of information, complementary to the fundus or ultrasonographic imaging. Here we present a framework to fully automatically segment the eye anatomy for MRI based on 3D active shape models (ASM), and we validate the results and present a proof of concept to automatically segment pathological eyes. Manual and automatic segmentation were performed in 24 images of healthy children's eyes (3.29 ± 2.15 years of age). Imaging was performed using a 3-T MRI scanner. The ASM consists of the lens, the vitreous humor, the sclera, and the cornea. The model was fitted by first automatically detecting the position of the eye center, the lens, and the optic nerve, and then aligning the model and fitting it to the patient. We validated our segmentation method by using a leave-one-out cross-validation. The segmentation results were evaluated by measuring the overlap, using the Dice similarity coefficient (DSC) and the mean distance error. We obtained a DSC of 94.90 ± 2.12% for the sclera and the cornea, 94.72 ± 1.89% for the vitreous humor, and 85.16 ± 4.91% for the lens. The mean distance error was 0.26 ± 0.09 mm. The entire process took 14 seconds on average per eye. We provide a reliable and accurate tool that enables clinicians to automatically segment the sclera, the cornea, the vitreous humor, and the lens, using MRI. We additionally present a proof of concept for fully automatically segmenting eye pathology. This tool reduces the time needed for eye shape delineation and thus can help clinicians when planning eye treatment and confirming the extent of the tumor. Copyright © 2015 Elsevier Inc. All rights reserved.
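
    For readers unfamiliar with the overlap metric quoted above, a minimal sketch of the Dice similarity coefficient on boolean volumes follows; the masks here are synthetic stand-ins.

        import numpy as np

        def dice(pred, ref):
            """DSC = 2|A n B| / (|A| + |B|) for boolean volumes."""
            pred, ref = pred.astype(bool), ref.astype(bool)
            inter = np.logical_and(pred, ref).sum()
            return 2.0 * inter / (pred.sum() + ref.sum())

        ref = np.zeros((32, 32, 32), dtype=bool)
        ref[8:24, 8:24, 8:24] = True                    # reference structure
        pred = np.zeros_like(ref)
        pred[9:24, 8:24, 8:24] = True                   # slightly shifted prediction
        print(f"DSC = {100 * dice(pred, ref):.2f}%")    # ~96.8%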

  16. Automated design of infrared digital metamaterials by genetic algorithm

    NASA Astrophysics Data System (ADS)

    Sugino, Yuya; Ishikawa, Atsushi; Hayashi, Yasuhiko; Tsuruta, Kenji

    2017-08-01

    We demonstrate automatic design of infrared (IR) metamaterials using a genetic algorithm (GA) and experimentally characterize their IR properties. To implement the automated design scheme of the metamaterial structures, we adopt a digital metamaterial consisting of 7 × 7 Au nano-pixels with an area of 200 nm × 200 nm, and their placements are coded as binary genes in the GA optimization process. The GA combined with three-dimensional (3D) finite element method (FEM) simulation is developed and applied to automatically construct a digital metamaterial exhibiting pronounced plasmonic resonances at the target IR frequencies. Based on the numerical results, the metamaterials are fabricated on a Si substrate over an area of 1 mm × 1 mm by using EB lithography, Cr/Au (2/20 nm) deposition, and a liftoff process. In the FT-IR measurement, pronounced plasmonic responses of each metamaterial are clearly observed near the targeted frequencies, although the synthesized pixel arrangements of the metamaterials are seemingly random. The corresponding numerical simulations reveal the important resonant behavior of each pixel and their hybridized systems. Our approach is fully computer-aided without artificial manipulation, thus paving the way toward novel device design for next-generation plasmonic device applications.
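
    A hedged sketch of the optimization loop described above: a plain generational genetic algorithm over a 7 x 7 binary pixel genome. The fitness function here is an invented stand-in; in the paper the GA is coupled to a 3D FEM simulation that scores the plasmonic resonance at the target IR frequency.

        import numpy as np

        rng = np.random.default_rng(0)
        POP, GENES, GENERATIONS, MUT = 40, 49, 60, 0.02

        def fitness(genome):
            # Placeholder objective (assumption): reward ~50% metal fill and
            # horizontally connected pixels; a real run would score the
            # simulated resonance strength at the target frequency.
            grid = genome.reshape(7, 7)
            runs = (grid[:, 1:] & grid[:, :-1]).sum()
            return runs - 10 * abs(genome.mean() - 0.5)

        pop = rng.integers(0, 2, size=(POP, GENES))
        for _ in range(GENERATIONS):
            scores = np.array([fitness(g) for g in pop])
            parents = pop[np.argsort(scores)][-POP // 2:]       # truncation selection
            cuts = rng.integers(1, GENES, size=POP // 2)
            kids = np.array([np.concatenate([parents[i % len(parents)][:c],
                                             parents[(i + 1) % len(parents)][c:]])
                             for i, c in enumerate(cuts)])      # one-point crossover
            kids ^= (rng.random(kids.shape) < MUT).astype(kids.dtype)  # mutation
            pop = np.vstack([parents, kids])

        best = pop[np.argmax([fitness(g) for g in pop])]
        print(best.reshape(7, 7))                               # candidate pixel layout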

  17. A simplified financial model for automatic meter reading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ward, S.M.

    1994-01-15

    The financial model proposed here (which can be easily adapted for electric, gas, or water) combines aspects of "life cycle," "consumer value" and "revenue based" approaches and addresses intangible benefits. A simple value tree of one-word descriptions clarifies the relationship between level of investment and level of value, visually relating increased value to increased cost. The model computes the numerical present values of capital costs, recurring costs, and revenue benefits over a 15-year period for the seven configurations: manual reading of existing or replacement standard meters (MMR), manual reading using electronic, hand-held retrievers (EMR), remote reading of inaccessible meters via hard-wired receptacles (RMR), remote reading of meters adapted with pulse generators (RMR-P), remote reading of meters adapted with absolute dial encoders (RMR-E), offsite reading over a few hundred feet with mobile radio (OMR), and fully automatic reading using telephone or an equivalent network (AMR). In the model, of course, the costs of installing the configurations are clearly listed under each column. The model requires only four annualized inputs and seven fixed-cost inputs that are rather easy to obtain.
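
    The arithmetic such a model rests on is ordinary discounting. A minimal sketch follows; all dollar figures and the discount rate are illustrative assumptions, not values from the article.

        def npv(rate, cashflows):
            """Present value of yearly cashflows; year 0 is undiscounted."""
            return sum(cf / (1.0 + rate) ** year for year, cf in enumerate(cashflows))

        capital = -120.0                       # assumed per-meter install cost, year 0
        yearly = 14.0 - 2.5                    # assumed reading savings minus recurring cost
        amr = npv(0.07, [capital] + [yearly] * 15)
        mmr = npv(0.07, [0.0] + [-9.0] * 15)   # keep manual reading instead
        print(f"AMR NPV: {amr:6.1f}   MMR NPV: {mmr:6.1f}")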

  18. 3D Convolutional Neural Network for Automatic Detection of Lung Nodules in Chest CT.

    PubMed

    Hamidian, Sardar; Sahiner, Berkman; Petrick, Nicholas; Pezeshk, Aria

    2017-01-01

    Deep convolutional neural networks (CNNs) form the backbone of many state-of-the-art computer vision systems for classification and segmentation of 2D images. The same principles and architectures can be extended to three dimensions to obtain 3D CNNs that are suitable for volumetric data such as CT scans. In this work, we train a 3D CNN for automatic detection of pulmonary nodules in chest CT images using volumes of interest extracted from the LIDC dataset. We then convert the 3D CNN which has a fixed field of view to a 3D fully convolutional network (FCN) which can generate the score map for the entire volume efficiently in a single pass. Compared to the sliding window approach for applying a CNN across the entire input volume, the FCN leads to a nearly 800-fold speed-up, and thereby fast generation of output scores for a single case. This screening FCN is used to generate difficult negative examples that are used to train a new discriminant CNN. The overall system consists of the screening FCN for fast generation of candidate regions of interest, followed by the discrimination CNN.

  19. 3D convolutional neural network for automatic detection of lung nodules in chest CT

    NASA Astrophysics Data System (ADS)

    Hamidian, Sardar; Sahiner, Berkman; Petrick, Nicholas; Pezeshk, Aria

    2017-03-01

    Deep convolutional neural networks (CNNs) form the backbone of many state-of-the-art computer vision systems for classification and segmentation of 2D images. The same principles and architectures can be extended to three dimensions to obtain 3D CNNs that are suitable for volumetric data such as CT scans. In this work, we train a 3D CNN for automatic detection of pulmonary nodules in chest CT images using volumes of interest extracted from the LIDC dataset. We then convert the 3D CNN which has a fixed field of view to a 3D fully convolutional network (FCN) which can generate the score map for the entire volume efficiently in a single pass. Compared to the sliding window approach for applying a CNN across the entire input volume, the FCN leads to a nearly 800-fold speed-up, and thereby fast generation of output scores for a single case. This screening FCN is used to generate difficult negative examples that are used to train a new discriminant CNN. The overall system consists of the screening FCN for fast generation of candidate regions of interest, followed by the discrimination CNN.
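
    The conversion the abstract describes, replacing a trained classifier's fully connected head with an equivalent convolution so a single pass scores a whole volume, can be sketched in PyTorch under assumed layer sizes (the real network's architecture differs):

        import torch
        import torch.nn as nn

        features = nn.Sequential(                 # fixed 16^3 receptive field
            nn.Conv3d(1, 8, 5), nn.ReLU(),        # 16 -> 12
            nn.MaxPool3d(2),                      # 12 -> 6
            nn.Conv3d(8, 16, 3), nn.ReLU(),       # 6  -> 4
        )
        fc = nn.Linear(16 * 4 * 4 * 4, 2)         # trained patch-classifier head

        # Convert: copy the Linear weights into an equivalent Conv3d kernel.
        conv_head = nn.Conv3d(16, 2, kernel_size=4)
        with torch.no_grad():
            conv_head.weight.copy_(fc.weight.view(2, 16, 4, 4, 4))
            conv_head.bias.copy_(fc.bias)

        fcn = nn.Sequential(features, conv_head)
        volume = torch.randn(1, 1, 64, 64, 64)    # whole subvolume, single pass
        print(fcn(volume).shape)                  # dense score map: [1, 2, 25, 25, 25]

    Sliding the original 16-cubed patch classifier over the same volume would recompute overlapping features at every position; the converted network shares them, which is where the reported speed-up comes from.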

  20. Automatic Approach for Lung Segmentation with Juxta-Pleural Nodules from Thoracic CT Based on Contour Tracing and Correction.

    PubMed

    Wang, Jinke; Guo, Haoyan

    2016-01-01

    This paper presents a fully automatic framework for lung segmentation, in which the juxta-pleural nodule problem is brought into strong focus. The proposed scheme consists of three phases: skin boundary detection, rough segmentation of the lung contour, and pulmonary parenchyma refinement. Firstly, the chest skin boundary is extracted through image aligning, morphology operations, and connective region analysis. Secondly, diagonal-based border tracing is implemented for lung contour segmentation, with a maximum cost path algorithm used for separating the left and right lungs. Finally, by arc-based border smoothing and concave-based border correction, the refined pulmonary parenchyma is obtained. The proposed scheme is evaluated on 45 volumes of chest scans, with volume difference (VD) 11.15 ± 69.63 cm³, volume overlap error (VOE) 3.5057 ± 1.3719%, average surface distance (ASD) 0.7917 ± 0.2741 mm, root mean square distance (RMSD) 1.6957 ± 0.6568 mm, maximum symmetric absolute surface distance (MSD) 21.3430 ± 8.1743 mm, and an average time cost of 2 seconds per image. The preliminary results on accuracy and complexity prove that our scheme is a promising tool for lung segmentation with juxta-pleural nodules.

  1. An FMS Dynamic Production Scheduling Algorithm Considering Cutting Tool Failure and Cutting Tool Life

    NASA Astrophysics Data System (ADS)

    Setiawan, A.; Wangsaputra, R.; Martawirya, Y. Y.; Halim, A. H.

    2016-02-01

    This paper deals with Flexible Manufacturing System (FMS) production rescheduling due to the unavailability of cutting tools, caused either by cutting tool failure or by reaching the tool life limit. The FMS consists of parallel identical machines integrated with an automatic material handling system, and it runs fully automatically. Each machine has the same cutting tool configuration, consisting of different geometrical cutting tool types in each tool magazine. A job usually takes two stages. Each stage has sequential operations allocated to machines considering the cutting tool life. In a real situation, a cutting tool can fail before its tool life is reached. The objective of this paper is to develop a dynamic scheduling algorithm for when a cutting tool breaks during unmanned operation and rescheduling is needed. The algorithm consists of four steps: the first step generates the initial schedule; the second step determines the cutting tool failure time; the third step determines the system status at the cutting tool failure time; and the fourth step reschedules the unfinished jobs. The approaches used to solve the problem are complete-reactive scheduling and robust-proactive scheduling. The new schedules result in different starting and completion times for each operation relative to the initial schedule.

  2. Exploiting range imagery: techniques and applications

    NASA Astrophysics Data System (ADS)

    Armbruster, Walter

    2009-07-01

    Practically no applications exist for which automatic processing of 2D intensity imagery can equal human visual perception. This is not the case for range imagery. The paper gives examples of 3D laser radar applications, for which automatic data processing can exceed human visual cognition capabilities and describes basic processing techniques for attaining these results. The examples are drawn from the fields of helicopter obstacle avoidance, object detection in surveillance applications, object recognition at high range, multi-object-tracking, and object re-identification in range image sequences. Processing times and recognition performances are summarized. The techniques used exploit the bijective continuity of the imaging process as well as its independence of object reflectivity, emissivity and illumination. This allows precise formulations of the probability distributions involved in figure-ground segmentation, feature-based object classification and model based object recognition. The probabilistic approach guarantees optimal solutions for single images and enables Bayesian learning in range image sequences. Finally, due to recent results in 3D-surface completion, no prior model libraries are required for recognizing and re-identifying objects of quite general object categories, opening the way to unsupervised learning and fully autonomous cognitive systems.

  3. Automatic Figure Ranking and User Interfacing for Intelligent Figure Search

    PubMed Central

    Yu, Hong; Liu, Feifan; Ramesh, Balaji Polepalli

    2010-01-01

    Background Figures are important experimental results that are typically reported in full-text bioscience articles. Bioscience researchers need to access figures to validate research facts and to formulate or test novel research hypotheses. On the other hand, the sheer volume of bioscience literature has made it difficult to access figures. Therefore, we are developing an intelligent figure search engine (http://figuresearch.askhermes.org). Existing research in figure search treats each figure equally, but we introduce a novel concept of “figure ranking”: figures appearing in a full-text biomedical article can be ranked by their contribution to the knowledge discovery. Methodology/Findings We empirically validated the hypothesis of figure ranking with over 100 bioscience researchers, and then developed unsupervised natural language processing (NLP) approaches to automatically rank figures. Evaluating on a collection of 202 full-text articles in which authors have ranked the figures based on importance, our best system achieved a weighted error rate of 0.2, which is significantly better than several other baseline systems we explored. We further explored a user interfacing application in which we built novel user interfaces (UIs) incorporating figure ranking, allowing bioscience researchers to efficiently access important figures. Our evaluation results show that 92% of the bioscience researchers preferred, as their top two choices, the user interfaces in which the most important figures are enlarged. With our automatic figure ranking NLP system, bioscience researchers preferred the UIs in which the most important figures were predicted by our NLP system over the UIs in which the most important figures were randomly assigned. In addition, our results show that there was no statistical difference in bioscience researchers' preference between the UIs generated by automatic figure ranking and those based on human ranking annotation. Conclusion/Significance The evaluation results indicate that automatic figure ranking and user interfacing as reported in this study can be fully implemented in online publishing. The novel user interface integrated with the automatic figure ranking system provides a more efficient and robust way to access scientific information in the biomedical domain, which will further enhance our existing figure search engine to better facilitate accessing figures of interest for bioscientists. PMID:20949102

  4. Learning-based automatic detection of severe coronary stenoses in CT angiographies

    NASA Astrophysics Data System (ADS)

    Melki, Imen; Cardon, Cyril; Gogin, Nicolas; Talbot, Hugues; Najman, Laurent

    2014-03-01

    3D cardiac computed tomography angiography (CCTA) is becoming a standard routine for non-invasive heart disease diagnosis. Thanks to its high negative predictive value, CCTA is increasingly used to decide whether or not the patient should be considered for invasive angiography. However, an accurate assessment of cardiac lesions using this modality is still a time-consuming task and needs a high degree of clinical expertise. Thus, providing an automatic tool to assist clinicians during the diagnosis task is highly desirable. In this work, we propose a fully automatic approach for accurate detection of severe cardiac stenoses. Our algorithm uses Random Forest classification to detect stenotic areas. First, the classifier is trained on 18 cardiac CT exams with a CTA reference standard. Then, the classification result is used to detect severe stenoses (with a narrowing degree higher than 50%) in a database of 30 cardiac CT exams. Features that best capture the different stenosis configurations are extracted along the vessel centerlines at different scales. To ensure robustness against vessel direction and scale changes, we extract features inside cylindrical patterns with variable directions and radii. Thus, we make sure that the ROIs contain only the vessel walls. The algorithm is evaluated using the Rotterdam Coronary Artery Stenoses Detection and Quantification Evaluation Framework. The evaluation is performed using reference standard quantifications obtained from quantitative coronary angiography (QCA) and consensus reading of CTA. The obtained results show that we can reliably detect severe stenoses with a sensitivity of 64%.
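
    A hedged sketch of the classification core: a random forest labels centerline points as stenotic from per-point feature vectors. The feature matrix and labels below are synthetic stand-ins; in the paper the features are extracted inside cylindrical patterns around the vessel centerline.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(1)
        n_points, n_features = 2000, 12                  # assumed dimensions
        X = rng.normal(size=(n_points, n_features))      # stand-in lumen features
        y = (X[:, 0] - 0.8 * X[:, 3] > 1.0).astype(int)  # stand-in ">50% narrowing" label

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X[:1500], y[:1500])                      # train on reference-standard points
        proba = clf.predict_proba(X[1500:])[:, 1]        # stenosis score per point
        print("flagged centerline points:", int((proba > 0.5).sum()))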

  5. Automatic localization of backscattering events due to particulate in urban areas

    NASA Astrophysics Data System (ADS)

    Gaudio, P.; Gelfusa, M.; Malizia, Andrea; Parracino, Stefano; Richetta, M.; Murari, A.; Vega, J.

    2014-10-01

    Particulate matter (PM), emitted by vehicles in urban traffic, can greatly affect ambient air quality and has direct implications for both human health and infrastructure integrity. The consequences for society are relevant and can also impact national health. Limits and thresholds for pollutants emitted by vehicles are typically regulated by government agencies. In the last few years, interest in PM emissions has grown substantially due to both air quality issues and global warming. Lidar-Dial techniques are widely recognized as a cost-effective alternative for monitoring large regions of the atmosphere. To maximize the effectiveness of the measurements and to guarantee reliable, automatic monitoring of large areas, new data analysis techniques are required. In this paper, an original tool, the Universal Multi-Event Locator (UMEL), is applied to the problem of automatically identifying the time location of peaks in Lidar measurements for the detection of particulate matter emitted by anthropogenic sources such as vehicles. The method developed is based on Support Vector Regression and presents various advantages with respect to more traditional techniques. In particular, UMEL relies on the morphological properties of the signals and is therefore insensitive to the details of the noise present in the detection system. The approach is also fully general and purely software-based, and can therefore be applied to a large variety of problems without any additional cost. The potential of the proposed technique is exemplified with the help of data acquired during an experimental field campaign in Rome.
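
    As an illustration of regression-based peak localization (not the published UMEL settings; the signal shape, kernel, and parameters below are assumptions), one can fit a smooth Support Vector Regression model to a noisy lidar return and read the event location off the fitted curve, which is far less sensitive to per-sample noise than picking the raw maximum.

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(2)
        t = np.linspace(0.0, 10.0, 500)                       # range / time axis
        signal = np.exp(-0.5 * ((t - 6.3) / 0.4) ** 2)        # particulate plume echo
        noisy = signal + 0.08 * rng.normal(size=t.size)

        model = SVR(kernel="rbf", C=10.0, gamma=2.0, epsilon=0.05)
        model.fit(t.reshape(-1, 1), noisy)
        fit = model.predict(t.reshape(-1, 1))                 # smooth morphological fit
        print(f"event located at t = {t[np.argmax(fit)]:.2f} (true 6.30)")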

  6. Distributed pheromone-based swarming control of unmanned air and ground vehicles for RSTA

    NASA Astrophysics Data System (ADS)

    Sauter, John A.; Mathews, Robert S.; Yinger, Andrew; Robinson, Joshua S.; Moody, John; Riddle, Stephanie

    2008-04-01

    The use of unmanned vehicles in Reconnaissance, Surveillance, and Target Acquisition (RSTA) applications has received considerable attention recently. Cooperating land and air vehicles can support multiple sensor modalities providing pervasive and ubiquitous broad area sensor coverage. However coordination of multiple air and land vehicles serving different mission objectives in a dynamic and complex environment is a challenging problem. Swarm intelligence algorithms, inspired by the mechanisms used in natural systems to coordinate the activities of many entities provide a promising alternative to traditional command and control approaches. This paper describes recent advances in a fully distributed digital pheromone algorithm that has demonstrated its effectiveness in managing the complexity of swarming unmanned systems. The results of a recent demonstration at NASA's Wallops Island of multiple Aerosonde Unmanned Air Vehicles (UAVs) and Pioneer Unmanned Ground Vehicles (UGVs) cooperating in a coordinated RSTA application are discussed. The vehicles were autonomously controlled by the onboard digital pheromone responding to the needs of the automatic target recognition algorithms. UAVs and UGVs controlled by the same pheromone algorithm self-organized to perform total area surveillance, automatic target detection, sensor cueing, and automatic target recognition with no central processing or control and minimal operator input. Complete autonomy adds several safety and fault tolerance requirements which were integrated into the basic pheromone framework. The adaptive algorithms demonstrated the ability to handle some unplanned hardware failures during the demonstration without any human intervention. The paper describes lessons learned and the next steps for this promising technology.
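
    A minimal sketch of a digital pheromone map update, assuming the usual deposit / evaporate / diffuse cycle; the grid size and coefficients are made up. Vehicles would then steer along the gradient of this shared field, with no central controller.

        import numpy as np

        EVAPORATE, DIFFUSE = 0.95, 0.1

        def step(pheromone, deposits):
            """One update: add deposits, evaporate, then share with 4-neighbors."""
            p = (pheromone + deposits) * EVAPORATE
            neighbors = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
                         np.roll(p, 1, 1) + np.roll(p, -1, 1))
            return (1 - DIFFUSE) * p + DIFFUSE * neighbors / 4.0

        field = np.zeros((64, 64))
        drop = np.zeros_like(field)
        drop[32, 32] = 1.0                      # a UAV marks a visited cell
        for _ in range(20):
            field = step(field, drop)
        print("peak strength after 20 steps:", round(field.max(), 4))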

  7. Automatic brain caudate nuclei segmentation and classification in diagnostic of Attention-Deficit/Hyperactivity Disorder.

    PubMed

    Igual, Laura; Soliva, Joan Carles; Escalera, Sergio; Gimeno, Roger; Vilarroya, Oscar; Radeva, Petia

    2012-12-01

    We present a fully automatic diagnostic imaging test for Attention-Deficit/Hyperactivity Disorder diagnosis assistance based on previously found evidence of caudate nucleus volumetric abnormalities. The proposed method consists of different steps: a new automatic method for external and internal segmentation of the caudate based on Machine Learning methodologies, and the definition of a set of new volume relation features, 3D Dissociated Dipoles, used for caudate representation and classification. We separately validate the contributions using real data from a pediatric population and show precise internal caudate segmentation and the discrimination power of the diagnostic test, with significant performance improvements in comparison to other state-of-the-art methods. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Automatic three-dimensional measurement of large-scale structure based on vision metrology.

    PubMed

    Zhu, Zhaokun; Guan, Banglei; Zhang, Xiaohu; Li, Daokui; Yu, Qifeng

    2014-01-01

    All relevant key techniques involved in photogrammetric vision metrology for fully automatic 3D measurement of large-scale structures are studied. A new kind of coded target consisting of circular retroreflective discs is designed, and corresponding detection and recognition algorithms based on blob detection and clustering are presented. Then a three-stage strategy starting with view clustering is proposed to achieve automatic network orientation. As for the matching of noncoded targets, the concept of a matching path is proposed, and matches for each noncoded target are found by determining the optimal matching path, based on a novel voting strategy, among all possible ones. Experiments on the fixed keel of an airship have been conducted to verify the effectiveness and measuring accuracy of the proposed methods.

  9. Machine for Automatic Bacteriological Pour Plate Preparation

    PubMed Central

    Sharpe, A. N.; Biggs, D. R.; Oliver, R. J.

    1972-01-01

    A fully automatic system for preparing poured plates for bacteriological analyses has been constructed and tested. The machine can make decimal dilutions of bacterial suspensions, dispense measured amounts into petri dishes, add molten agar, mix the dish contents, and label the dishes with sample and dilution numbers at the rate of 2,000 dishes per 8-hr day. In addition, the machine can be programmed to select different media so that plates for different types of bacteriological analysis may be made automatically from the same sample. The machine uses only the components of the media and sterile polystyrene petri dishes; requirements for all other materials, such as sterile pipettes and capped bottles of diluents and agar, are eliminated. Images PMID:4560475

  10. Automatic extraction of road features in urban environments using dense ALS data

    NASA Astrophysics Data System (ADS)

    Soilán, Mario; Truong-Hong, Linh; Riveiro, Belén; Laefer, Debra

    2018-02-01

    This paper describes a methodology that automatically extracts semantic information from urban ALS data for urban parameterization and road network definition. First, building façades are segmented from the ground surface by combining knowledge-based information with both voxel and raster data. Next, heuristic rules and unsupervised learning are applied to the ground surface data to distinguish sidewalk and pavement points as a means for curb detection. Then, radiometric information is employed for road marking extraction. Using high-density ALS data from Dublin, Ireland, this fully automatic workflow was able to generate an F-score close to 95% for pavement and sidewalk identification with a resolution of 20 cm, and better than 80% for road marking detection.

  11. Automatic detection of sleep macrostructure based on a sensorized T-shirt.

    PubMed

    Bianchi, Anna M; Mendez, Martin O

    2010-01-01

    In the present work we apply a fully automatic procedure to the analysis of signals coming from a sensorized T-shirt, worn during the night, for sleep evaluation. The quality and reliability of the signals recorded through the T-shirt were previously tested, while the employed algorithms for feature extraction and sleep classification were previously developed on standard ECG recordings, and the obtained classification was compared to the standard clinical practice based on polysomnography (PSG). In the present work we combined T-shirt recordings and automatic classification and could obtain reliable sleep profiles, i.e. the classification of sleep into WAKE, REM (rapid eye movement) and NREM stages, based on heart rate variability (HRV), respiration and movement signals.

  12. Automatic Approach Tendencies toward High and Low Caloric Food in Restrained Eaters: Influence of Task-Relevance and Mood

    PubMed Central

    Neimeijer, Renate A. M.; Roefs, Anne; Ostafin, Brian D.; de Jong, Peter J.

    2017-01-01

    Objective: Although restrained eaters are motivated to control their weight by dieting, they are often unsuccessful in these attempts. Dual process models emphasize the importance of differentiating between controlled and automatic tendencies to approach food. This study investigated the hypothesis that heightened automatic approach tendencies in restrained eaters would be especially prominent in contexts where food is irrelevant for their current tasks. Additionally, we examined the influence of mood on the automatic tendency to approach food as a function of dietary restraint. Methods: An Affective Simon Task-manikin was administered to measure automatic approach tendencies where food is task-irrelevant, and a Stimulus Response Compatibility task (SRC) to measure automatic approach in contexts where food is task-relevant, in 92 female participants varying in dietary restraint. Prior to the task, sad, stressed, neutral, or positive mood was induced. Food intake was measured during a bogus taste task after the computer tasks. Results: Consistent with their diet goals, participants with a strong tendency to restrain their food intake showed a relatively weak approach bias toward food when food was task-relevant (SRC) and this effect was independent of mood. Restrained eaters showed a relatively strong approach bias toward food when food was task-irrelevant in the positive condition and a relatively weak approach in the sad mood. Conclusion: The weak approach bias in contexts where food is task-relevant may help high-restrained eaters to comply with their diet goal. However, the strong approach bias in contexts where food is task-irrelevant and when being in a positive mood may interfere with restrained eaters’ goal of restricting food-intake. PMID:28443045

  13. Automatic image enhancement based on multi-scale image decomposition

    NASA Astrophysics Data System (ADS)

    Feng, Lu; Wu, Zhuangzhi; Pei, Luo; Long, Xiong

    2014-01-01

    In image processing and computational photography, automatic image enhancement is one of the long-standing objectives. Recent automatic image enhancement methods take into account not only global semantics, such as correcting color hue and brightness imbalances, but also the local content of the image, such as human faces or the sky in a landscape. In this paper we describe a new scheme for automatic image enhancement that considers both the global semantics and the local content of the image. Our automatic image enhancement method employs a multi-scale edge-aware image decomposition approach to detect underexposed regions and enhance the detail of the salient content. The experimental results demonstrate the effectiveness of our approach compared to existing automatic enhancement methods.
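
    A hedged single-scale sketch of the edge-aware decomposition idea (the paper uses a multi-scale decomposition plus content detection, which is not reproduced here): split the image into a bilateral-filtered base layer and a detail residual, brighten the base, and boost the detail. The input file name is an assumption.

        import cv2
        import numpy as np

        img = cv2.imread("photo.jpg").astype(np.float32) / 255.0   # assumed input file
        base = cv2.bilateralFilter(img, d=9, sigmaColor=0.1, sigmaSpace=7)
        detail = img - base                                        # edge-aware residual

        gamma = 0.8                                                # lift underexposed base
        enhanced = np.clip(np.power(base, gamma) + 1.5 * detail, 0.0, 1.0)
        cv2.imwrite("enhanced.jpg", (enhanced * 255).astype(np.uint8))

    Because the bilateral filter does not blur across strong edges, boosting the residual amplifies texture without producing halos at object boundaries.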

  14. Applying graph theory to protein structures: an atlas of coiled coils.

    PubMed

    Heal, Jack W; Bartlett, Gail J; Wood, Christopher W; Thomson, Andrew R; Woolfson, Derek N

    2018-05-02

    To understand protein structure, folding and function fully and to design proteins de novo reliably, we must learn from natural protein structures that have been characterised experimentally. The number of protein structures available is large and growing exponentially, which makes this task challenging. Indeed, computational resources are becoming increasingly important for classifying and analysing this resource. Here, we use tools from graph theory to define an atlas classification scheme for automatically categorising certain protein substructures. Focusing on the α-helical coiled coils, which are ubiquitous protein-structure and protein-protein interaction motifs, we present a suite of computational resources designed for analysing these assemblies. iSOCKET enables interactive analysis of side-chain packing within proteins to identify coiled coils automatically and with considerable user control. Applying a graph theory-based atlas classification scheme to structures identified by iSOCKET gives the Atlas of Coiled Coils, a fully automated, updated overview of extant coiled coils. The utility of this approach is illustrated with the first formal classification of an emerging subclass of coiled coils called α-helical barrels. Furthermore, in the Atlas, the known coiled-coil universe is presented alongside a partial enumeration of the 'dark matter' of coiled-coil structures; i.e., those coiled-coil architectures that are theoretically possible but have not been observed to date, and thus present defined targets for protein design. iSOCKET is available as part of the open-source GitHub repository associated with this work (https://github.com/woolfson-group/isocket). This repository also contains all the data generated when classifying the protein graphs. The Atlas of Coiled Coils is available at: http://coiledcoils.chm.bris.ac.uk/atlas/app.
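
    In the same spirit (though this is not iSOCKET itself), a coiled coil can be represented as a graph whose nodes are helices and whose edges are packing contacts, and classified by isomorphism against named reference graphs; the tiny atlas below is invented for illustration.

        import networkx as nx

        ATLAS = {
            "parallel dimer": nx.complete_graph(2),
            "trimer":         nx.complete_graph(3),
            "4-helix barrel": nx.cycle_graph(4),     # ring of pairwise contacts
        }

        def classify(helix_contacts):
            g = nx.Graph(helix_contacts)
            for name, ref in ATLAS.items():
                if nx.is_isomorphic(g, ref):
                    return name
            return "unclassified (possible 'dark matter')"

        # Four helices packing in a ring: a - b - c - d - a
        print(classify([("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]))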

  15. The calculation of aircraft collision probabilities

    DOT National Transportation Integrated Search

    1971-10-01

    The basic limitation of air traffic compression, from the safety point of view, is the increased risk of collision due to reduced separations. In order to evolve new procedures, and eventually a fully automatic system, it is desirable to have a mea...

  16. Grid generation and surface modeling for CFD

    NASA Technical Reports Server (NTRS)

    Connell, Stuart D.; Sober, Janet S.; Lamson, Scott H.

    1995-01-01

    When computing the flow around complex three-dimensional configurations, the generation of the mesh is the most time-consuming part of any calculation. With some meshing technologies this can take of the order of a man-month or more. The requirement for a number of design iterations, coupled with the ever-decreasing time allocated for design, leads to the need for a significant acceleration of this process. Of the two competing approaches, block-structured and unstructured, only the unstructured approach allows fully automatic mesh generation directly from a CAD model. Using this approach coupled with the techniques described in this paper, it is possible to reduce the mesh generation time from man-months to a few hours on a workstation. The desire to closely couple a CFD code with a design or optimization algorithm requires that changes to the geometry be performed quickly and in a smooth manner. This need for smoothness necessitates the use of Bezier polynomials in place of the more usual NURBS or cubic splines. A two-dimensional Bezier-polynomial-based design system is described.
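
    The smoothness argument can be made concrete: the de Casteljau recurrence evaluates a Bezier curve by repeated linear interpolation of control points, so moving one control point deforms the whole geometry smoothly. A minimal sketch with arbitrary, invented control points:

        import numpy as np

        def bezier(control, t):
            """Evaluate a Bezier curve at t in [0, 1] via de Casteljau."""
            pts = np.asarray(control, dtype=float)
            while len(pts) > 1:
                pts = (1.0 - t) * pts[:-1] + t * pts[1:]   # one interpolation level
            return pts[0]

        airfoil_upper = [(0.0, 0.0), (0.1, 0.06), (0.5, 0.09), (1.0, 0.0)]
        samples = np.array([bezier(airfoil_upper, t) for t in np.linspace(0, 1, 11)])
        print(samples[:3])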

  17. Real-time people and vehicle detection from UAV imagery

    NASA Astrophysics Data System (ADS)

    Gaszczak, Anna; Breckon, Toby P.; Han, Jiwan

    2011-01-01

    A generic and robust approach for the real-time detection of people and vehicles from an Unmanned Aerial Vehicle (UAV) is an important goal within the framework of fully autonomous UAV deployment for aerial reconnaissance and surveillance. Here we present an approach for the automatic detection of vehicles based on multiple trained cascaded Haar classifiers with secondary confirmation in thermal imagery. Additionally, we present a related approach for people detection in thermal imagery based on a similar cascaded classification technique combined with additional multivariate Gaussian shape matching. The results presented show the successful detection of vehicles and people under varying conditions in both isolated rural and cluttered urban environments with minimal false positive detection. Performance of the detector is optimized to reduce the overall false positive rate by aiming to detect each object of interest (vehicle/person) at least once in the environment (i.e., per search pattern flight path) rather than every object in each image frame. Currently the detection rate for people is ~70% and for cars ~80%, although the overall episodic object detection rate for each flight pattern exceeds 90%.
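
    The detection pattern is the standard cascaded-classifier sweep. The sketch below uses OpenCV's cascade API with a placeholder model file; the authors trained their own vehicle and person cascades on thermal UAV imagery, so the file name and detection parameters here are assumptions.

        import cv2

        cascade = cv2.CascadeClassifier("vehicle_thermal_cascade.xml")  # hypothetical model
        frame = cv2.imread("uav_frame.png", cv2.IMREAD_GRAYSCALE)       # assumed input

        hits = cascade.detectMultiScale(frame, scaleFactor=1.1,
                                        minNeighbors=4, minSize=(24, 24))
        for (x, y, w, h) in hits:                                       # candidate boxes,
            cv2.rectangle(frame, (x, y), (x + w, y + h), 255, 2)        # pre-confirmation
        cv2.imwrite("detections.png", frame)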

  18. A Fully Automated Drosophila Olfactory Classical Conditioning and Testing System for Behavioral Learning and Memory Assessment

    PubMed Central

    Jiang, Hui; Hanna, Eriny; Gatto, Cheryl L.; Page, Terry L.; Bhuva, Bharat; Broadie, Kendal

    2016-01-01

    Background Aversive olfactory classical conditioning has been the standard method to assess Drosophila learning and memory behavior for decades, yet training and testing are conducted manually under exceedingly labor-intensive conditions. To overcome this severe limitation, a fully automated, inexpensive system has been developed, which allows accurate and efficient Pavlovian associative learning/memory analyses for high-throughput pharmacological and genetic studies. New Method The automated system employs a linear actuator coupled to an odorant T-maze with airflow-mediated transfer of animals between training and testing stages. Odorant, airflow and electrical shock delivery are automatically administered and monitored during training trials. Control software allows operator-input variables to define parameters of Drosophila learning, short-term memory and long-term memory assays. Results The approach allows accurate learning/memory determinations with operational fail-safes. Automated learning indices (immediately post-training) and memory indices (after 24 hours) are comparable to traditional manual experiments, while minimizing experimenter involvement. Comparison with Existing Methods The automated system provides vast improvements over labor-intensive manual approaches with no experimenter involvement required during either training or testing phases. It provides quality control tracking of airflow rates, odorant delivery and electrical shock treatments, and an expanded platform for high-throughput studies of combinational drug tests and genetic screens. The design uses inexpensive hardware and software for a total cost of ~$500US, making it affordable to a wide range of investigators. Conclusions This study demonstrates the design, construction and testing of a fully automated Drosophila olfactory classical association apparatus to provide low-labor, high-fidelity, quality-monitored, high-throughput and inexpensive learning and memory behavioral assays. PMID:26703418

  19. A fully automated Drosophila olfactory classical conditioning and testing system for behavioral learning and memory assessment.

    PubMed

    Jiang, Hui; Hanna, Eriny; Gatto, Cheryl L; Page, Terry L; Bhuva, Bharat; Broadie, Kendal

    2016-03-01

    Aversive olfactory classical conditioning has been the standard method to assess Drosophila learning and memory behavior for decades, yet training and testing are conducted manually under exceedingly labor-intensive conditions. To overcome this severe limitation, a fully automated, inexpensive system has been developed, which allows accurate and efficient Pavlovian associative learning/memory analyses for high-throughput pharmacological and genetic studies. The automated system employs a linear actuator coupled to an odorant T-maze with airflow-mediated transfer of animals between training and testing stages. Odorant, airflow and electrical shock delivery are automatically administered and monitored during training trials. Control software allows operator-input variables to define parameters of Drosophila learning, short-term memory and long-term memory assays. The approach allows accurate learning/memory determinations with operational fail-safes. Automated learning indices (immediately post-training) and memory indices (after 24h) are comparable to traditional manual experiments, while minimizing experimenter involvement. The automated system provides vast improvements over labor-intensive manual approaches with no experimenter involvement required during either training or testing phases. It provides quality control tracking of airflow rates, odorant delivery and electrical shock treatments, and an expanded platform for high-throughput studies of combinational drug tests and genetic screens. The design uses inexpensive hardware and software for a total cost of ∼$500US, making it affordable to a wide range of investigators. This study demonstrates the design, construction and testing of a fully automated Drosophila olfactory classical association apparatus to provide low-labor, high-fidelity, quality-monitored, high-throughput and inexpensive learning and memory behavioral assays. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Tuned grid generation with ICEM CFD

    NASA Technical Reports Server (NTRS)

    Wulf, Armin; Akdag, Vedat

    1995-01-01

    ICEM CFD is a CAD-based grid generation package that supports multiblock structured, unstructured tetrahedral, and unstructured hexahedral grids. Major development efforts have been devoted to extending ICEM CFD's multiblock structured and hexahedral unstructured grid generation capabilities. The modules added are a parametric grid generation module and a semi-automatic hexahedral grid generation module. A fully automatic version of the hexahedral grid generation module for grids around a set of predefined objects in rectilinear enclosures has been developed. These modules are presented, the procedures used are described, and examples are discussed.

  1. Automatic laser beam alignment using blob detection for an environment monitoring spectroscopy

    NASA Astrophysics Data System (ADS)

    Khidir, Jarjees; Chen, Youhua; Anderson, Gary

    2013-05-01

    This paper describes a fully automated system to align an infra-red laser beam with a small retro-reflector over a wide range of distances. The components were developed and tested for an open-path spectrometer gas detection system. Using blob detection from the OpenCV library, an automatic alignment algorithm was designed to achieve fast and accurate target detection in a complex background environment. Test results are presented to show that the proposed algorithm has been successfully applied at various target distances and under various environmental conditions.
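
    A minimal sketch of the detection step, assuming the retro-reflector returns a bright, roughly circular spot in the camera image: OpenCV's blob detector finds it, and the offset from the image center drives the alignment loop. The input file is a placeholder, and the filter thresholds are assumptions.

        import cv2

        params = cv2.SimpleBlobDetector_Params()
        params.filterByColor = True
        params.blobColor = 255                     # bright retro-reflector return
        params.filterByCircularity = True
        params.minCircularity = 0.7

        detector = cv2.SimpleBlobDetector_create(params)
        image = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)  # assumed input
        blobs = detector.detect(image)

        if blobs:
            target = max(blobs, key=lambda b: b.size)
            dx = target.pt[0] - image.shape[1] / 2.0   # steer the beam to null these
            dy = target.pt[1] - image.shape[0] / 2.0
            print(f"alignment error: dx={dx:.1f}px dy={dy:.1f}px")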

  2. Computer-aided liver volumetry: performance of a fully-automated, prototype post-processing solution for whole-organ and lobar segmentation based on MDCT imaging.

    PubMed

    Fananapazir, Ghaneh; Bashir, Mustafa R; Marin, Daniele; Boll, Daniel T

    2015-06-01

    To evaluate the performance of a prototype, fully-automated post-processing solution for whole-liver and lobar segmentation based on MDCT datasets. A polymer liver phantom was used to assess the accuracy of the post-processing applications, comparing phantom volumes determined via Archimedes' principle with MDCT segmented datasets. For the IRB-approved, HIPAA-compliant study, 25 patients were enrolled. Volumetry performance compared the manual approach with the automated prototype, assessing intraobserver variability and interclass correlation for whole-organ and lobar segmentation using ANOVA comparison. Fidelity of segmentation was evaluated qualitatively. Phantom volume was 1581.0 ± 44.7 mL; manually segmented datasets estimated 1628.0 ± 47.8 mL, representing a mean overestimation of 3.0%; automatically segmented datasets estimated 1601.9 ± 0 mL, representing a mean overestimation of 1.3%. Whole-liver and segmental volumetry demonstrated no significant intraobserver variability for either manual or automated measurements. For whole-liver volumetry, automated measurement repetitions resulted in identical values; reproducible whole-organ volumetry was also achieved with manual segmentation, p(ANOVA) 0.98. For lobar volumetry, automated segmentation improved reproducibility over the manual approach, without significant measurement differences for either methodology, p(ANOVA) 0.95-0.99. Whole-organ and lobar segmentation results from manual and automated segmentation showed no significant differences, p(ANOVA) 0.96-1.00. Assessment of segmentation fidelity found that segments I-IV/VI showed greater segmentation inaccuracies compared to the remaining right hepatic lobe segments. Fully-automated whole-liver segmentation was non-inferior to the manual approach, with improved reproducibility and reduced post-processing duration; automated dual-seed lobar segmentation showed slight tendencies to underestimate the right hepatic lobe volume and greater variability in edge detection for the left hepatic lobe compared to manual segmentation.

  3. High-throughput characterization of film thickness in thin film materials libraries by digital holographic microscopy.

    PubMed

    Lai, Yiu Wai; Krause, Michael; Savan, Alan; Thienhaus, Sigurd; Koukourakis, Nektarios; Hofmann, Martin R; Ludwig, Alfred

    2011-10-01

    A high-throughput characterization technique based on digital holography for mapping film thickness in thin-film materials libraries was developed. Digital holographic microscopy is used for fully automatic measurements of the thickness of patterned films with nanometer resolution. The method has several significant advantages over conventional stylus profilometry: it is contactless and fast, substrate bending is compensated, and the experimental setup is simple. Patterned films prepared by different combinatorial thin-film approaches were characterized to investigate and demonstrate this method. The results show that this technique is valuable for the quick, reliable and high-throughput determination of the film thickness distribution in combinatorial materials research. Importantly, it can also be applied to thin films that have been structured by shadow masking.

  4. FreeSurfer-initiated fully-automated subcortical brain segmentation in MRI using Large Deformation Diffeomorphic Metric Mapping.

    PubMed

    Khan, Ali R; Wang, Lei; Beg, Mirza Faisal

    2008-07-01

    Fully-automated brain segmentation methods have not been widely adopted for clinical use because of issues related to reliability, accuracy, and limitations of delineation protocol. By combining the probabilistic-based FreeSurfer (FS) method with the Large Deformation Diffeomorphic Metric Mapping (LDDMM)-based label-propagation method, we are able to increase reliability and accuracy, and allow for flexibility in template choice. Our method uses the automated FreeSurfer subcortical labeling to provide a coarse-to-fine introduction of information in the LDDMM template-based segmentation resulting in a fully-automated subcortical brain segmentation method (FS+LDDMM). One major advantage of the FS+LDDMM-based approach is that the automatically generated segmentations generated are inherently smooth, thus subsequent steps in shape analysis can directly follow without manual post-processing or loss of detail. We have evaluated our new FS+LDDMM method on several databases containing a total of 50 subjects with different pathologies, scan sequences and manual delineation protocols for labeling the basal ganglia, thalamus, and hippocampus. In healthy controls we report Dice overlap measures of 0.81, 0.83, 0.74, 0.86 and 0.75 for the right caudate nucleus, putamen, pallidum, thalamus and hippocampus respectively. We also find statistically significant improvement of accuracy in FS+LDDMM over FreeSurfer for the caudate nucleus and putamen of Huntington's disease and Tourette's syndrome subjects, and the right hippocampus of Schizophrenia subjects.

  5. Automatic Generation of Algorithms for the Statistical Analysis of Planetary Nebulae Images

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd

    2004-01-01

    Analyzing data sets collected in experiments or by observations is a core scientific activity. Typically, experimental and observational data are fraught with uncertainty, and the analysis is based on a statistical model of the conjectured underlying processes. The large data volumes collected by modern instruments make computer support indispensable for this. Consequently, scientists spend significant amounts of their time on the development and refinement of the data analysis programs. AutoBayes [GF+02, FS03] is a fully automatic synthesis system for generating statistical data analysis programs. Externally, it looks like a compiler: it takes an abstract problem specification and translates it into executable code. Its input is a concise description of a data analysis problem in the form of a statistical model as shown in Figure 1; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Internally, however, it is quite different: AutoBayes derives a customized algorithm implementing the given model using a schema-based process, and then further refines and optimizes the algorithm into code. A schema is a parameterized code template with associated semantic constraints which define and restrict the template's applicability. The schema parameters are instantiated in a problem-specific way during synthesis as AutoBayes checks the constraints against the original model or, recursively, against emerging sub-problems. AutoBayes' schema library contains problem decomposition operators (which are justified by theorems in a formal logic in the domain of Bayesian networks) as well as machine learning algorithms (e.g., EM, k-means) and numeric optimization methods (e.g., Nelder-Mead simplex, conjugate gradient). AutoBayes augments this schema-based approach by symbolic computation to derive closed-form solutions whenever possible. This is a major advantage over other statistical data analysis systems which use numerical approximations even in cases where closed-form solutions exist. AutoBayes is implemented in Prolog and comprises approximately 75,000 lines of code. In this paper, we take one typical scientific data analysis problem, analyzing planetary nebulae images taken by the Hubble Space Telescope, and show how AutoBayes can be used to automate the implementation of the necessary analysis programs. We initially follow the analysis described by Knuth and Hajian [KH02] and use AutoBayes to derive code for the published models. We show the details of the code derivation process, including the symbolic computations and automatic integration of library procedures, and compare the results of the automatically generated and manually implemented code. We then go beyond the original analysis and use AutoBayes to derive code for a simple image segmentation procedure based on a mixture model which can be used to automate a manual preprocessing step. Finally, we combine the original approach with the simple segmentation, which yields a more detailed analysis. This also demonstrates that AutoBayes makes it easy to combine different aspects of data analysis.

  6. Automatic Avoidance Tendencies for Alcohol Cues Predict Drinking After Detoxification Treatment in Alcohol Dependence

    PubMed Central

    2016-01-01

    Alcohol dependence is characterized by conflict between approach and avoidance motivational orientations for alcohol that operate in automatic and controlled processes. This article describes the first study to investigate the predictive validity of these motivational orientations for relapse to drinking after discharge from alcohol detoxification treatment in alcohol-dependent patients. One hundred twenty alcohol-dependent patients who were nearing the end of inpatient detoxification treatment completed measures of self-reported (Approach and Avoidance of Alcohol Questionnaire; AAAQ) and automatic (modified Stimulus-Response Compatibility task) approach and avoidance motivational orientations for alcohol. Their drinking behavior was assessed via telephone follow-ups at 2, 4, and 6 months after discharge from treatment. Results indicated that, after controlling for the severity of alcohol dependence, strong automatic avoidance tendencies for alcohol cues were predictive of higher percentage of heavy drinking days (PHDD) at 4-month (β = 0.22, 95% CI [0.07, 0.43]) and 6-month (β = 0.22, 95% CI [0.01, 0.42]) follow-ups. We failed to replicate previous demonstrations of the predictive validity of approach subscales of the AAAQ for relapse to drinking, and there were no significant predictors of PHDD at 2-month follow-up. In conclusion, strong automatic avoidance tendencies predicted relapse to drinking after inpatient detoxification treatment, but automatic approach tendencies and self-reported approach and avoidance tendencies were not predictive in this study. Our results extend previous findings and help to resolve ambiguities with earlier studies that investigated the roles of automatic and controlled cognitive processes in recovery from alcohol dependence. PMID:27935726

  7. 3D image processing architecture for camera phones

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Goma, Sergio R.; Aleksic, Milivoje

    2011-03-01

    Putting high quality and easy-to-use 3D technology into the hands of regular consumers has become a recent challenge as interest in 3D technology has grown. Making 3D technology appealing to the average user requires that it be made fully automatic and foolproof. Designing a fully automatic 3D capture and display system requires: 1) identifying critical 3D technology issues like camera positioning, disparity control rationale, and screen geometry dependency, and 2) designing a methodology to automatically control them. Implementing 3D capture functionality on phone cameras necessitates designing algorithms to fit within the processing capabilities of the device. Various constraints like sensor position tolerances, sensor 3A tolerances, post-processing, 3D video resolution and frame rate should be carefully considered for their influence on the 3D experience. Issues with migrating functions such as zoom and pan from the 2D usage model (both during capture and display) to 3D need to be resolved to ensure the highest level of user experience. It is also very important that the 3D usage scenario (including interactions between the user and the capture/display device) is carefully considered. Finally, both the processing power of the device and the practicality of the scheme need to be taken into account while designing the calibration and processing methodology.

  8. Shingle 2.0: generalising self-consistent and automated domain discretisation for multi-scale geophysical models

    NASA Astrophysics Data System (ADS)

    Candy, Adam S.; Pietrzak, Julie D.

    2018-01-01

    The approaches taken to describe and develop spatial discretisations of the domains required for geophysical simulation models are commonly ad hoc, model- or application-specific, and under-documented. This is particularly acute for simulation models that are flexible in their use of multi-scale, anisotropic, fully unstructured meshes, where a relatively large number of heterogeneous parameters are required to constrain their full description. As a consequence, it can be difficult to reproduce simulations, to ensure provenance in model data handling and initialisation, and a challenge to conduct model intercomparisons rigorously. This paper takes a novel approach to spatial discretisation, considering it much like a numerical simulation model problem of its own. It introduces a generalised, extensible, self-documenting approach to describe carefully, and necessarily fully, the constraints over the heterogeneous parameter space that determine how a domain is spatially discretised. This additionally provides a method to accurately record these constraints, using high-level natural-language-based abstractions that enable full accounts of provenance, sharing, and distribution. Together with this description, a generalised consistent approach to unstructured mesh generation for geophysical models is developed that is automated, robust and repeatable, quick-to-draft, rigorously verified, and consistent with the source data throughout. This interprets the description above to execute a self-consistent spatial discretisation process, which is automatically validated against expected discrete characteristics and metrics. Library code, verification tests, and examples are available in the repository at https://github.com/shingleproject/Shingle. Further details of the project are presented at http://shingleproject.org.

  9. Auditing SNOMED CT hierarchical relations based on lexical features of concepts in non-lattice subgraphs.

    PubMed

    Cui, Licong; Bodenreider, Olivier; Shi, Jay; Zhang, Guo-Qiang

    2018-02-01

    We introduce a structural-lexical approach for auditing SNOMED CT using a combination of non-lattice subgraphs of the underlying hierarchical relations and enriched lexical attributes of fully specified concept names. Our goal is to develop a scalable and effective approach that automatically identifies missing hierarchical IS-A relations. Our approach involves 3 stages. In stage 1, all non-lattice subgraphs of SNOMED CT's IS-A hierarchical relations are extracted. In stage 2, lexical attributes of fully-specified concept names in such non-lattice subgraphs are extracted. For each concept in a non-lattice subgraph, we enrich its set of attributes with attributes from its ancestor concepts within the non-lattice subgraph. In stage 3, subset inclusion relations between the lexical attribute sets of each pair of concepts in each non-lattice subgraph are compared to existing IS-A relations in SNOMED CT. For concept pairs within each non-lattice subgraph, if a subset relation is identified but an IS-A relation is not present in SNOMED CT IS-A transitive closure, then a missing IS-A relation is reported. The September 2017 release of SNOMED CT (US edition) was used in this investigation. A total of 14,380 non-lattice subgraphs were extracted, from which we suggested a total of 41,357 missing IS-A relations. For evaluation purposes, 200 non-lattice subgraphs were randomly selected from 996 smaller subgraphs (of size 4, 5, or 6) within the "Clinical Finding" and "Procedure" sub-hierarchies. Two domain experts confirmed 185 (among 223) suggested missing IS-A relations, a precision of 82.96%. Our results demonstrate that analyzing the lexical features of concepts in non-lattice subgraphs is an effective approach for auditing SNOMED CT. Copyright © 2017 Elsevier Inc. All rights reserved.
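
    The stage-3 check reduces to plain set logic once the enriched attribute sets exist. Below is a minimal Python sketch of that step only; the concept names, attribute sets, and transitive-closure pairs are toy stand-ins, since the real inputs come from stages 1-2 and the SNOMED CT release.

        # Stage 3 only: flag missing IS-A relations inside one non-lattice
        # subgraph. `attrs` maps each concept to its enriched lexical
        # attribute set; `isa_closure` holds (descendant, ancestor) pairs
        # already in the IS-A transitive closure. All values are toy data.
        attrs = {
            "C1": {"finding", "lung"},
            "C2": {"finding", "lung", "chronic"},  # contains C1's attributes
            "C3": {"finding", "heart"},
        }
        isa_closure = {("C3", "C1")}

        def suggest_missing_isa(attrs, isa_closure):
            """Suggest concept pairs where attribute-set inclusion holds
            but the corresponding IS-A relation is absent."""
            suggestions = []
            for child, child_attrs in attrs.items():
                for parent, parent_attrs in attrs.items():
                    if child != parent and parent_attrs < child_attrs \
                            and (child, parent) not in isa_closure:
                        suggestions.append((child, parent))
            return suggestions

        print(suggest_missing_isa(attrs, isa_closure))  # [('C2', 'C1')]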

  10. A novel fully automatic multilevel thresholding technique based on optimized intuitionistic fuzzy sets and Tsallis entropy for MR brain tumor image segmentation.

    PubMed

    Kaur, Taranjit; Saini, Barjinder Singh; Gupta, Savita

    2018-03-01

    In the present paper, a hybrid multilevel thresholding technique that combines intuitionistic fuzzy sets and Tsallis entropy has been proposed for the automatic delineation of the tumor from magnetic resonance images having vague boundaries and poor contrast. This novel technique takes into account both the image histogram and the uncertainty information for the computation of multiple thresholds. The benefit of the methodology is that it provides fast and improved segmentation for complex tumorous images with imprecise gray levels. To further boost the computational speed, mutation-based particle swarm optimization is used to select the most optimal threshold combination. The accuracy of the proposed segmentation approach has been validated on simulated and real low-grade glioma tumor volumes taken from the MICCAI brain tumor segmentation (BRATS) challenge 2012 dataset and on clinical tumor images, so as to corroborate its generality and novelty. The designed technique achieves an average Dice overlap equal to 0.82010, 0.78610 and 0.94170 for the three datasets. Further, a comparative analysis has also been made with eight existing multilevel thresholding implementations so as to show the superiority of the designed technique. In comparison, the results indicate a mean improvement in Dice by an amount equal to 4.00% (p < 0.005), 9.60% (p < 0.005) and 3.58% (p < 0.005), respectively, in contrast to the fuzzy Tsallis approach.
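
    For orientation, the Tsallis entropy of a distribution is S_q = (1 - sum_i p_i^q)/(q - 1). The sketch below shows a bilevel (single-threshold) version of Tsallis thresholding in Python, with an exhaustive search standing in for the paper's mutation-based particle swarm step; the intuitionistic fuzzy preprocessing and the multilevel extension are omitted, and the histogram is toy data.

        import numpy as np

        def tsallis_entropy(p, q=0.8):
            """S_q = (1 - sum(p**q)) / (q - 1) for a probability vector."""
            p = p[p > 0]
            return (1.0 - np.sum(p ** q)) / (q - 1.0)

        def best_threshold(hist, q=0.8):
            """Maximise S_q(bg) + S_q(fg) + (1-q)*S_q(bg)*S_q(fg) over all
            thresholds (the pseudo-additive Tsallis criterion)."""
            p = hist / hist.sum()
            best_t, best_val = 0, -np.inf
            for t in range(1, len(p) - 1):
                w_bg, w_fg = p[:t].sum(), p[t:].sum()
                if w_bg == 0 or w_fg == 0:
                    continue
                s_bg = tsallis_entropy(p[:t] / w_bg, q)
                s_fg = tsallis_entropy(p[t:] / w_fg, q)
                val = s_bg + s_fg + (1 - q) * s_bg * s_fg
                if val > best_val:
                    best_t, best_val = t, val
            return best_t

        hist = np.array([30, 40, 25, 5, 4, 20, 45, 31], dtype=float)  # toy
        print(best_threshold(hist))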

  11. Automated structure solution, density modification and model building.

    PubMed

    Terwilliger, Thomas C

    2002-11-01

    The approaches that form the basis of automated structure solution in SOLVE and RESOLVE are described. The use of a scoring scheme to convert decision making in macromolecular structure solution to an optimization problem has proven very useful, and in many cases a single clear heavy-atom solution can be obtained and used for phasing. Statistical density modification is well suited to an automated approach to structure solution because the method is relatively insensitive to choices of numbers of cycles and solvent content. The detection of non-crystallographic symmetry (NCS) in heavy-atom sites and checking of potential NCS operations against the electron-density map has proven to be a reliable method for identification of NCS in most cases. An FFT-based search for helices and sheets has been successful in automated model building for maps with resolutions as low as 3 Å. The entire process can be carried out in a fully automatic fashion in many cases.

  12. Automatic intraaortic balloon pump timing using an intrabeat dicrotic notch prediction algorithm.

    PubMed

    Schreuder, Jan J; Castiglioni, Alessandro; Donelli, Andrea; Maisano, Francesco; Jansen, Jos R C; Hanania, Ramzi; Hanlon, Pat; Bovelander, Jan; Alfieri, Ottavio

    2005-03-01

    The efficacy of intraaortic balloon counterpulsation (IABP) during arrhythmic episodes is questionable. A novel algorithm for intrabeat prediction of the dicrotic notch was used for real-time IABP inflation timing control. A windkessel model algorithm was used to calculate real-time aortic flow from aortic pressure. The dicrotic notch was predicted using a percentage of calculated peak flow. Automatic inflation timing was set at the intrabeat predicted dicrotic notch and was combined with automatic IAB deflation. Prophylactic IABP was applied in 27 patients with low ejection fraction (< 35%) undergoing cardiac surgery. Analysis of IABP at a 1:4 ratio revealed that IAB inflation occurred at a mean of 0.6 +/- 5 ms from the dicrotic notch. In all patients accurate automatic timing at a 1:1 assist ratio was performed. Seventeen patients had episodes of severe arrhythmia; the novel IABP inflation algorithm accurately assisted 318 of 320 arrhythmic beats at a 1:1 ratio. The novel real-time intrabeat IABP inflation timing algorithm performed accurately in all patients during both regular rhythms and severe arrhythmia, allowing fully automatic intrabeat IABP timing.
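
    A minimal Python sketch of the intrabeat idea, using the two-element windkessel relation Q = C*dP/dt + P/R: compute a flow estimate from pressure and flag the sample where the falling flow reaches a set fraction of its peak. The waveform, compliance, resistance, and percentage criterion below are all synthetic assumptions, not the published parameters.

        import numpy as np

        C, R = 1.2, 1.0      # assumed compliance and resistance (arbitrary)
        FRACTION = 0.15      # assumed percentage-of-peak-flow criterion
        dt = 0.005           # 5 ms sampling

        # One synthetic beat: systolic bump, then exponential diastolic decay.
        t = np.arange(0.0, 0.8, dt)
        p_sys = 80 + 40 * np.sin(np.pi * t / 0.35) ** 2
        p_dia = 80 * np.exp(-(t - 0.35) / (R * C))
        pressure = np.where(t < 0.35, p_sys, p_dia)

        flow = C * np.gradient(pressure, dt) + pressure / R  # Q = C*dP/dt + P/R

        peak = 0.0
        for i in range(1, len(flow)):
            peak = max(peak, flow[i])
            if flow[i] < flow[i - 1] and flow[i] <= FRACTION * peak:
                print(f"predicted notch -> inflate at t = {t[i] * 1000:.0f} ms")
                break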

  13. Fully automatic hp-adaptivity for acoustic and electromagnetic scattering in three dimensions

    NASA Astrophysics Data System (ADS)

    Kurtz, Jason Patrick

    We present an algorithm for fully automatic hp-adaptivity for finite element approximations of elliptic and Maxwell boundary value problems in three dimensions. The algorithm automatically generates a sequence of coarse grids, and a corresponding sequence of fine grids, such that the energy norm of the error decreases exponentially with respect to the number of degrees of freedom in either sequence. At each step, we employ a discrete optimization algorithm to determine the refinements for the current coarse grid such that the projection-based interpolation error for the current fine grid solution decreases with an optimal rate with respect to the number of degrees of freedom added by the refinement. The refinements are restricted only by the requirement that the resulting mesh is at most 1-irregular, but they may be anisotropic in both element size h and order of approximation p. While we cannot prove that our method converges at all, we present numerical evidence of exponential convergence for a diverse suite of model problems from acoustic and electromagnetic scattering. In particular we show that our method is well suited to the automatic resolution of exterior problems truncated by the introduction of a perfectly matched layer. To enable and accelerate the solution of these problems on commodity hardware, we include a detailed account of three critical aspects of our implementation, namely an efficient implementation of sum factorization, several efficient interfaces to the direct multi-frontal solver MUMPS, and some fast direct solvers for the computation of a sequence of nested projections.

  14. Point-and-stare operation and high-speed image acquisition in real-time hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Driver, Richard D.; Bannon, David P.; Ciccone, Domenic; Hill, Sam L.

    2010-04-01

    The design and optical performance of a small-footprint, low-power, turnkey, Point-And-Stare hyperspectral analyzer, capable of fully automated field deployment in remote and harsh environments, is described. The unit is packaged for outdoor operation in an IP56 protected air-conditioned enclosure and includes a mechanically ruggedized fully reflective, aberration-corrected hyperspectral VNIR (400-1000 nm) spectrometer with a board-level detector optimized for point and stare operation, an on-board computer capable of full system data-acquisition and control, and a fully functioning internal hyperspectral calibration system for in-situ system spectral calibration and verification. Performance data on the unit under extremes of real-time survey operation and high spatial and high spectral resolution will be discussed. Hyperspectral acquisition including full parameter tracking is achieved by the addition of a fiber-optic based downwelling spectral channel for solar illumination tracking during hyperspectral acquisition and the use of other sensors for spatial and directional tracking to pinpoint view location. The system is mounted on a Pan-And-Tilt device, automatically controlled from the analyzer's on-board computer, making the Hyperspec™ particularly adaptable for base security, border protection and remote deployments. A hyperspectral macro library has been developed to control hyperspectral image acquisition, system calibration and scene location control. The software allows the system to be operated in a fully automatic mode or under direct operator control through a GigE interface.

  15. Semiautomatic and Automatic Cooperative Inversion of Seismic and Magnetotelluric Data

    NASA Astrophysics Data System (ADS)

    Le, Cuong V. A.; Harris, Brett D.; Pethick, Andrew M.; Takam Takougang, Eric M.; Howe, Brendan

    2016-09-01

    Natural source electromagnetic methods have the potential to recover rock property distributions from the surface to great depths. Unfortunately, results in complex 3D geo-electrical settings can be disappointing, especially where significant near-surface conductivity variations exist. In such settings, unconstrained inversion of magnetotelluric data is inexorably non-unique. We believe that: (1) correctly introduced information from seismic reflection can substantially improve MT inversion, (2) a cooperative inversion approach can be automated, and (3) massively parallel computing can make such a process viable. Nine inversion strategies including baseline unconstrained inversion and new automated/semiautomated cooperative inversion approaches are applied to industry-scale co-located 3D seismic and magnetotelluric data sets. These data sets were acquired in one of the Carlin gold deposit districts in north-central Nevada, USA. In our approach, seismic information feeds directly into the creation of sets of prior conductivity model and covariance coefficient distributions. We demonstrate how statistical analysis of the distribution of selected seismic attributes can be used to automatically extract subvolumes that form the framework for the prior model 3D conductivity distribution. Our cooperative inversion strategies result in detailed subsurface conductivity distributions that are consistent with seismic, electrical logs and geochemical analysis of cores. Such 3D conductivity distributions would be expected to provide clues to 3D velocity structures that could feed back into full seismic inversion for an iterative practical and truly cooperative inversion process. We anticipate that, with the aid of parallel computing, cooperative inversion of seismic and magnetotelluric data can be fully automated, and we are confident that significant and practical advances in this direction have been accomplished.

  16. 3D marker-controlled watershed for kidney segmentation in clinical CT exams.

    PubMed

    Wieclawek, Wojciech

    2018-02-27

    Image segmentation is an essential and nontrivial task in computer vision and medical image analysis. Computed tomography (CT) is one of the most accessible medical examination techniques to visualize the interior of a patient's body. Among different computer-aided diagnostic systems, the applications dedicated to kidney segmentation represent a relatively small group. In addition, literature solutions are verified on relatively small databases. The goal of this research is to develop a novel algorithm for fully automated kidney segmentation. This approach is designed for large database analysis including both physiological and pathological cases. This study presents a 3D marker-controlled watershed transform developed and employed for fully automated CT kidney segmentation. The original and most complex step in the current proposition is the automatic generation of 3D marker images. The final kidney segmentation step is an analysis of the labelled image obtained from the marker-controlled watershed transform. It consists of morphological operations and shape analysis. The implementation is conducted in a MATLAB environment, Version 2017a, using, among others, the Image Processing Toolbox. 170 clinical CT abdominal studies have been subjected to the analysis. The dataset includes normal as well as various pathological cases (agenesis, renal cysts, tumors, renal cell carcinoma, kidney cirrhosis, partial or radical nephrectomy, hematoma and nephrolithiasis). Manual and semi-automated delineations have been used as a gold standard. Among 67 delineated medical cases, 62 cases are 'Very good', whereas only 5 are 'Good' according to Cohen's Kappa interpretation. The segmentation results show that the mean values of Sensitivity, Specificity, Dice, Jaccard, Cohen's Kappa and Accuracy are 90.29, 99.96, 91.68, 85.04, 91.62 and 99.89% respectively. All 170 medical cases (with and without outlines) have been classified by three independent medical experts as 'Very good' in 143-148 cases, as 'Good' in 15-21 cases and as 'Moderate' in 6-8 cases. An automatic kidney segmentation approach for CT studies that competes with commonly known solutions was developed. The algorithm gives promising results, which were confirmed during a validation procedure performed on a relatively large database of 170 CTs including both physiological and pathological cases.
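
    The watershed stage itself is standard; a minimal Python sketch (scipy/scikit-image rather than MATLAB) is shown below. The paper's key contribution, automatic 3D marker generation from CT anatomy, is replaced here by simple intensity rules on a toy volume, purely for illustration.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.segmentation import watershed

        # Toy 3D volume with two bright blobs standing in for kidneys.
        rng = np.random.default_rng(0)
        volume = np.zeros((40, 40, 40))
        volume[5:18, 8:20, 10:22] = 1.0
        volume[22:35, 20:32, 12:24] = 1.0
        volume += 0.1 * rng.standard_normal(volume.shape)

        gradient = ndi.generic_gradient_magnitude(volume, ndi.sobel)

        # Markers: label 1 = background, labels 2.. = one per bright component
        # (an assumed stand-in for the paper's anatomical marker generation).
        fg, _ = ndi.label(volume > 0.8)
        markers = np.where(fg > 0, fg + 1, 0).astype(np.int32)
        markers[volume < -0.2] = 1

        labels = watershed(gradient, markers)  # flood the gradient from markers
        print(np.unique(labels))               # e.g. [1 2 3]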

  17. Automatic Abstraction in Planning

    NASA Technical Reports Server (NTRS)

    Christensen, J.

    1991-01-01

    Traditionally, abstraction in planning has been accomplished by either state abstraction or operator abstraction, neither of which has been fully automatic. We present a new method, predicate relaxation, for automatically performing state abstraction. PABLO, a nonlinear hierarchical planner, implements predicate relaxation. Theoretical as well as empirical results are presented which demonstrate the potential advantages of using predicate relaxation in planning. We also present a new definition of hierarchical operators that allows us to guarantee a limited form of completeness. This new definition is shown to be, in some ways, more flexible than previous definitions of hierarchical operators. Finally, a Classical Truth Criterion is presented that is proven to be sound and complete for a planning formalism that is general enough to include most classical planning formalisms that are based on the STRIPS assumption.

  18. A fast and automatic mosaic method for high-resolution satellite images

    NASA Astrophysics Data System (ADS)

    Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing

    2015-12-01

    We proposed a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlapped rectangle is computed according to the geographical locations of the reference and mosaic images, and feature points are extracted by a scale-invariant feature transform (SIFT) algorithm only from the overlapped region of both images. Then, the RANSAC method is used to match the feature points of the two images. Finally, the two images are fused into a seamless panoramic image by a simple linear weighted fusion method or another blending method. The proposed method is implemented in C++ based on OpenCV and GDAL, and tested on Worldview-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method can detect feature points efficiently and mosaic images automatically.
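
    A minimal Python/OpenCV sketch of the matching-and-warping core follows (the original is C++ with GDAL; the geolocation-based overlap computation and the final blending are omitted, and the file names are placeholders).

        import cv2
        import numpy as np

        # Placeholder file names; real inputs would be the co-georeferenced
        # overlap crops of the reference and mosaic images.
        ref = cv2.imread("reference_overlap.tif", cv2.IMREAD_GRAYSCALE)
        mos = cv2.imread("mosaic_overlap.tif", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(ref, None)
        kp2, des2 = sift.detectAndCompute(mos, None)

        # Cross-checked brute-force matching, then RANSAC outlier rejection
        # while estimating the transform between the two images.
        matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
        src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

        # Warp the mosaic image into the reference frame; a linear weighted
        # blend in the overlap region would follow here.
        warped = cv2.warpPerspective(mos, H, (ref.shape[1], ref.shape[0]))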

  19. Solar-Powered Water Distillation

    NASA Technical Reports Server (NTRS)

    Menninger, F. J.; Elder, R. J.

    1985-01-01

    Solar-powered still produces pure water at a rate of 6,000 gallons per year. The still is fully automatic and gravity-fed; the only outside electric power required is for a timer clock and a solenoid-operated valve. The still saves $5,000 yearly in energy costs and pays for itself in 3 1/2 years.

  20. AutoBayes Program Synthesis System Users Manual

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Jafari, Hamed; Pressburger, Tom; Denney, Ewen; Buntine, Wray; Fischer, Bernd

    2008-01-01

    Program synthesis is the systematic, automatic construction of efficient executable code from high-level declarative specifications. AutoBayes is a fully automatic program synthesis system for the statistical data analysis domain; in particular, it solves parameter estimation problems. It has seen many successful applications at NASA and is currently being used, for example, to analyze simulation results for Orion. The input to AutoBayes is a concise description of a data analysis problem composed of a parameterized statistical model and a goal that is a probability term involving parameters and input data. The output is optimized and fully documented C/C++ code computing the values for those parameters that maximize the probability term. AutoBayes can solve many subproblems symbolically rather than having to rely on numeric approximation algorithms, thus yielding effective, efficient, and compact code. Statistical analysis is faster and more reliable, because effort can be focused on model development and validation rather than manual development of solution algorithms and code.

  1. Segmentation of stereo terrain images

    NASA Astrophysics Data System (ADS)

    George, Debra A.; Privitera, Claudio M.; Blackmon, Theodore T.; Zbinden, Eric; Stark, Lawrence W.

    2000-06-01

    We have studied four approaches to segmentation of images: three automatic ones using image processing algorithms and a fourth approach, human manual segmentation. We were motivated toward helping with an important NASA Mars rover mission task -- replacing laborious manual path planning with automatic navigation of the rover on the Mars terrain. The goal of the automatic segmentations was to identify an obstacle map on the Mars terrain to enable automatic path planning for the rover. The automatic segmentation was first explored with two different segmentation methods: one based on pixel luminance, and the other based on pixel altitude generated through stereo image processing. The third automatic segmentation was achieved by combining these two types of image segmentation. Human manual segmentation of Martian terrain images was used for evaluating the effectiveness of the combined automatic segmentation as well as for determining how different humans segment the same images. Comparisons between two different segmentations, manual or automatic, were measured using a similarity metric, S_AB. Based on this metric, the combined automatic segmentation did fairly well in agreeing with the manual segmentation. This was a demonstration of a positive step towards automatically creating the accurate obstacle maps necessary for automatic path planning and rover navigation.

  2. Automatic avoidance tendencies for alcohol cues predict drinking after detoxification treatment in alcohol dependence.

    PubMed

    Field, Matt; Di Lemma, Lisa; Christiansen, Paul; Dickson, Joanne

    2017-03-01

    Alcohol dependence is characterized by conflict between approach and avoidance motivational orientations for alcohol that operate in automatic and controlled processes. This article describes the first study to investigate the predictive validity of these motivational orientations for relapse to drinking after discharge from alcohol detoxification treatment in alcohol-dependent patients. One hundred twenty alcohol-dependent patients who were nearing the end of inpatient detoxification treatment completed measures of self-reported (Approach and Avoidance of Alcohol Questionnaire; AAAQ) and automatic (modified Stimulus-Response Compatibility task) approach and avoidance motivational orientations for alcohol. Their drinking behavior was assessed via telephone follow-ups at 2, 4, and 6 months after discharge from treatment. Results indicated that, after controlling for the severity of alcohol dependence, strong automatic avoidance tendencies for alcohol cues were predictive of higher percentage of heavy drinking days (PHDD) at 4-month (β = 0.22, 95% CI [0.07, 0.43]) and 6-month (β = 0.22, 95% CI [0.01, 0.42]) follow-ups. We failed to replicate previous demonstrations of the predictive validity of approach subscales of the AAAQ for relapse to drinking, and there were no significant predictors of PHDD at 2-month follow-up. In conclusion, strong automatic avoidance tendencies predicted relapse to drinking after inpatient detoxification treatment, but automatic approach tendencies and self-reported approach and avoidance tendencies were not predictive in this study. Our results extend previous findings and help to resolve ambiguities with earlier studies that investigated the roles of automatic and controlled cognitive processes in recovery from alcohol dependence. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Fully automatic, multiorgan segmentation in normal whole body magnetic resonance imaging (MRI), using classification forests (CFs), convolutional neural networks (CNNs), and a multi-atlas (MA) approach.

    PubMed

    Lavdas, Ioannis; Glocker, Ben; Kamnitsas, Konstantinos; Rueckert, Daniel; Mair, Henrietta; Sandhu, Amandeep; Taylor, Stuart A; Aboagye, Eric O; Rockall, Andrea G

    2017-10-01

    As part of a program to implement automatic lesion detection methods for whole body magnetic resonance imaging (MRI) in oncology, we have developed, evaluated, and compared three algorithms for fully automatic, multiorgan segmentation in healthy volunteers. The first algorithm is based on classification forests (CFs), the second is based on 3D convolutional neural networks (CNNs) and the third algorithm is based on a multi-atlas (MA) approach. We examined data from 51 healthy volunteers, scanned prospectively with a standardized, multiparametric whole body MRI protocol at 1.5 T. The study was approved by the local ethics committee and written consent was obtained from the participants. MRI data were used as input data to the algorithms, while training was based on manual annotation of the anatomies of interest by clinical MRI experts. Fivefold cross-validation experiments were run on 34 artifact-free subjects. We report three overlap and three surface distance metrics to evaluate the agreement between the automatic and manual segmentations, namely the Dice similarity coefficient (DSC), recall (RE), precision (PR), average surface distance (ASD), root-mean-square surface distance (RMSSD), and Hausdorff distance (HD). Analysis of variance was used to compare pooled label metrics between the three algorithms and the DSC on a 'per-organ' basis. A Mann-Whitney U test was used to compare the pooled metrics between CFs and CNNs and the DSC on a 'per-organ' basis, when using different imaging combinations as input for training. All three algorithms resulted in robust segmenters that were effectively trained using a relatively small number of datasets, an important consideration in the clinical setting. Mean overlap metrics for all the segmented structures were: CFs: DSC = 0.70 ± 0.18, RE = 0.73 ± 0.18, PR = 0.71 ± 0.14, CNNs: DSC = 0.81 ± 0.13, RE = 0.83 ± 0.14, PR = 0.82 ± 0.10, MA: DSC = 0.71 ± 0.22, RE = 0.70 ± 0.34, PR = 0.77 ± 0.15. Mean surface distance metrics for all the segmented structures were: CFs: ASD = 13.5 ± 11.3 mm, RMSSD = 34.6 ± 37.6 mm and HD = 185.7 ± 194.0 mm, CNNs: ASD = 5.48 ± 4.84 mm, RMSSD = 17.0 ± 13.3 mm and HD = 199.0 ± 101.2 mm, MA: ASD = 4.22 ± 2.42 mm, RMSSD = 6.13 ± 2.55 mm, and HD = 38.9 ± 28.9 mm. The pooled performance of CFs improved when all imaging combinations (T2w + T1w + DWI) were used as input, while the performance of CNNs deteriorated, but in neither case significantly. CNNs with T2w images as input performed significantly better than CFs with all imaging combinations as input for all anatomical labels, except for the bladder. Three state-of-the-art algorithms were developed and used to automatically segment major organs and bones in whole body MRI; good agreement to manual segmentations performed by clinical MRI experts was observed. CNNs perform favorably when using T2w volumes as input. Using multimodal MRI data as input to CNNs did not improve the segmentation performance. © 2017 American Association of Physicists in Medicine.
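
    The overlap metrics used above are straightforward to compute from binary masks; a minimal Python sketch with toy data follows (the surface-distance metrics need mesh or distance-transform machinery and are omitted).

        import numpy as np

        def overlap_metrics(auto_mask, manual_mask):
            """Dice (DSC), recall (RE), and precision (PR) between two
            binary segmentation masks."""
            a, m = auto_mask.astype(bool), manual_mask.astype(bool)
            tp = np.logical_and(a, m).sum()
            return (2 * tp / (a.sum() + m.sum()),  # DSC
                    tp / m.sum(),                  # RE: manual voxels found
                    tp / a.sum())                  # PR: auto voxels correct

        auto = np.array([0, 1, 1, 1, 0, 0])    # toy 1D "volumes"
        manual = np.array([0, 0, 1, 1, 1, 0])
        print(overlap_metrics(auto, manual))   # approx (0.667, 0.667, 0.667)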

  4. Impact of translation on named-entity recognition in radiology texts

    PubMed Central

    Pedro, Vasco

    2017-01-01

    Radiology reports describe the results of radiography procedures and have the potential to be a useful source of information which can bring benefits to health care systems around the world. One way to automatically extract information from the reports is by using Text Mining tools. The problem is that these tools are mostly developed for English, while reports are usually written in the native language of the radiologist, which is not necessarily English. This creates an obstacle to the sharing of Radiology information between different communities. This work explores the solution of translating the reports to English before applying the Text Mining tools, probing the question of which translation approach should be used. We created MRRAD (Multilingual Radiology Research Articles Dataset), a parallel corpus of Portuguese research articles related to Radiology and a number of alternative translations (human, automatic and semi-automatic) to English. This is a novel corpus which can be used to advance research on this topic. Using MRRAD we studied which kind of automatic or semi-automatic translation approach is more effective on the named-entity recognition task of finding RadLex terms in the English versions of the articles. Considering the terms extracted from human translations as our gold standard, we calculated how similar the terms extracted using other translations were to this standard. We found that a completely automatic translation approach using Google leads to F-scores (between 0.861 and 0.868, depending on the extraction approach) similar to the ones obtained through a more expensive semi-automatic translation approach using Unbabel (between 0.862 and 0.870). To better understand the results we also performed a qualitative analysis of the types of errors found in the automatic and semi-automatic translations. Database URL: https://github.com/lasigeBioTM/MRRAD PMID:29220455
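
    The F-score comparison itself is simple set arithmetic over extracted term sets; a minimal Python sketch with hypothetical toy terms follows (real inputs would be RadLex terms extracted from the translated articles).

        def f_score(extracted, gold):
            """F1 between a set of terms extracted from a translation and
            the set extracted from the human translation."""
            tp = len(extracted & gold)
            if tp == 0:
                return 0.0
            precision, recall = tp / len(extracted), tp / len(gold)
            return 2 * precision * recall / (precision + recall)

        gold = {"thorax", "pleural effusion", "consolidation"}  # toy terms
        auto = {"thorax", "pleural effusion", "nodule"}
        print(round(f_score(auto, gold), 3))  # 0.667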

  5. Visual vs Fully Automatic Histogram-Based Assessment of Idiopathic Pulmonary Fibrosis (IPF) Progression Using Sequential Multidetector Computed Tomography (MDCT)

    PubMed Central

    Colombi, Davide; Dinkel, Julien; Weinheimer, Oliver; Obermayer, Berenike; Buzan, Teodora; Nabers, Diana; Bauer, Claudia; Oltmanns, Ute; Palmowski, Karin; Herth, Felix; Kauczor, Hans Ulrich; Sverzellati, Nicola

    2015-01-01

    Objectives To describe changes over time in the extent of idiopathic pulmonary fibrosis (IPF) at multidetector computed tomography (MDCT), assessed by semi-quantitative visual scores (VSs) and by fully automatic histogram-based quantitative evaluation, and to test the relationship between these two methods of quantification. Methods Forty IPF patients (median age: 70 y, interquartile: 62-75 years; M:F, 33:7) who underwent 2 MDCT at different time points with a median interval of 13 months (interquartile: 10-17 months) were retrospectively evaluated. The in-house software YACTA automatically quantified the lung density histogram (10th-90th percentile in 5th-percentile steps). Longitudinal changes in VSs and in the percentiles of the attenuation histogram were obtained in 20 untreated patients and 20 patients treated with pirfenidone. Pearson correlation analysis was used to test the relationship between VSs and selected percentiles. Results In follow-up MDCT, the visual overall extent of parenchymal abnormalities (OE) increased in median by 5%/year (interquartile: 0%/y; +11%/y). A substantial difference was found between treated and untreated patients in the HU changes of the 40th and 80th percentiles of the density histogram. Correlation analysis between VSs and selected percentiles showed a higher correlation between the changes (Δ) in OE and Δ 40th percentile (r=0.69; p<0.001) as compared to Δ 80th percentile (r=0.58; p<0.001); a closer correlation was found between Δ ground-glass extent and Δ 40th percentile (r=0.66, p<0.001) as compared to Δ 80th percentile (r=0.47, p=0.002), while Δ reticulations correlated better with Δ 80th percentile (r=0.56, p<0.001) than with Δ 40th percentile (r=0.43, p=0.003). Conclusions There is a relevant and fully automatically measurable difference at MDCT in VSs and in histogram analysis at one-year follow-up of IPF patients, whether treated or untreated: Δ 40th percentile might reflect the change in overall extent of lung abnormalities, notably of the ground-glass pattern; furthermore, Δ 80th percentile might reveal the course of reticular opacities. PMID:26110421
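
    The histogram readout reduces to percentile differences between scans; a minimal Python sketch with synthetic attenuation samples follows (YACTA's internal processing is not reproduced here).

        import numpy as np

        def percentile_change(hu_baseline, hu_followup, pct):
            """Change in a given percentile of the lung attenuation
            histogram between two scans."""
            return np.percentile(hu_followup, pct) - np.percentile(hu_baseline, pct)

        rng = np.random.default_rng(1)
        baseline = rng.normal(-800, 120, 10_000)  # synthetic lung HU sample
        followup = rng.normal(-770, 130, 10_000)  # denser lung at follow-up

        print("delta 40th:", percentile_change(baseline, followup, 40))
        print("delta 80th:", percentile_change(baseline, followup, 80))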

  6. Aerial applications dispersal systems control requirements study. [agriculture

    NASA Technical Reports Server (NTRS)

    Bauchspies, J. S.; Cleary, W. L.; Rogers, W. F.; Simpson, W.; Sanders, G. S.

    1980-01-01

    Performance deficiencies in aerial liquid and dry dispersal systems are identified. Five control system concepts are explored: (1) end of field on/off control; (2) manual control of particle size and application rate from the aircraft; (3) manual control of deposit rate on the field; (4) automatic alarm and shut-off control; and (5) fully automatic control. Operational aspects of the concepts and specifications for improved control configurations are discussed in detail. A research plan to provide the technology needed to develop the proposed improvements is presented along with a flight program to verify the benefits achieved.

  7. The design of digital-adaptive controllers for VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Stengel, R. F.; Broussard, J. R.; Berry, P. W.

    1976-01-01

    Design procedures for VTOL automatic control systems have been developed and are presented. Using linear-optimal estimation and control techniques as a starting point, digital-adaptive control laws have been designed for the VALT Research Aircraft, a tandem-rotor helicopter which is equipped for fully automatic flight in terminal area operations. These control laws are designed to interface with velocity-command and attitude-command guidance logic, which could be used in short-haul VTOL operations. Developments reported here include new algorithms for designing non-zero-set-point digital regulators, design procedures for rate-limited systems, and algorithms for dynamic control trim setting.

  8. Job expansion : an additional benefit of a computer aided dispatch/automatic vehicle locator (CAD/AVL) system

    DOT National Transportation Integrated Search

    2000-03-01

    The Denver Regional Transportation District (RTD) acquired a CAD/AVL system that became fully operational in 1996. The CAD/AVL system added radio channels and covert alarms in buses, located vehicles in real time, and monitored schedule adherence. Th...

  9. Semi-automatic brain tumor segmentation by constrained MRFs using structural trajectories.

    PubMed

    Zhao, Liang; Wu, Wei; Corso, Jason J

    2013-01-01

    Quantifying the volume and growth of a brain tumor is a primary prognostic measure and hence has received much attention in the medical imaging community. Most methods have sought a fully automatic segmentation, but the variability in shape and appearance of brain tumors has limited their success and further adoption in the clinic. In reaction, we present a semi-automatic brain tumor segmentation framework for multi-channel magnetic resonance (MR) images. This framework does not require prior model construction and only requires manual labels on one automatically selected slice. All other slices are labeled by an iterative multi-label Markov random field optimization with hard constraints. Structural trajectories (the medical image analog to optical flow) and 3D image over-segmentation are used to capture pixel correspondences between consecutive slices for pixel labeling. We show robustness and effectiveness through an evaluation on the 2012 MICCAI BRATS Challenge Dataset; our results indicate superior performance to baselines and demonstrate the utility of the constrained MRF formulation.

  10. A new method for the automatic interpretation of Schlumberger and Wenner sounding curves

    USGS Publications Warehouse

    Zohdy, A.A.R.

    1989-01-01

    A fast iterative method for the automatic interpretation of Schlumberger and Wenner sounding curves is based on obtaining interpreted depths and resistivities from shifted electrode spacings and adjusted apparent resistivities, respectively. The method is fully automatic. It does not require an initial guess of the number of layers, their thicknesses, or their resistivities; and it does not require extrapolation of incomplete sounding curves. The number of layers in the interpreted model equals the number of digitized points on the sounding curve. The resulting multilayer model is always well-behaved with no thin layers of unusually high or unusually low resistivities. For noisy data, interpretation is done in two sets of iterations (two passes). Anomalous layers, created because of noise in the first pass, are eliminated in the second pass. Such layers are eliminated by considering the best-fitting curve from the first pass to be a smoothed version of the observed curve and automatically reinterpreting it (second pass). The application of the method is illustrated by several examples.

  11. An automatic multi-atlas prostate segmentation in MRI using a multiscale representation and a label fusion strategy

    NASA Astrophysics Data System (ADS)

    Álvarez, Charlens; Martínez, Fabio; Romero, Eduardo

    2015-01-01

    Pelvic magnetic resonance (MR) images are used in prostate cancer radiotherapy (RT) as part of radiation planning. Modern protocols require a manual delineation, a tedious and variable activity that may take about 20 minutes per patient, even for trained experts. That considerable time is an important workflow burden in most radiological services. Automatic or semi-automatic methods might improve efficiency by decreasing measurement times while conserving the required accuracy. This work presents a fully automatic atlas-based segmentation strategy that selects the templates most similar to a new MRI using a robust multi-scale SURF analysis. A new segmentation is then achieved by a linear combination of the selected templates, which have previously been non-rigidly registered to the new image. The proposed method shows reliable segmentations, obtaining an average Dice coefficient of 79% when compared with expert manual segmentation, under a leave-one-out scheme with the training database.
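
    The final fusion step amounts to a weighted linear combination of registered label maps followed by a consensus threshold. A minimal Python sketch with toy label maps and assumed similarity weights follows; the SURF-based template selection and non-rigid registration are taken as already done.

        import numpy as np

        # Three toy registered label maps; in practice these would be the
        # selected templates after non-rigid registration to the new image.
        rng = np.random.default_rng(2)
        labels = (rng.random((3, 8, 8)) > 0.4).astype(float)

        weights = np.array([0.5, 0.3, 0.2])  # assumed similarity weights
        fused = np.tensordot(weights, labels, axes=1)  # linear combination
        segmentation = fused >= 0.5                    # consensus mask
        print(int(segmentation.sum()), "voxels labelled prostate")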

  12. Automatic digital image analysis for identification of mitotic cells in synchronous mammalian cell cultures.

    PubMed

    Eccles, B A; Klevecz, R R

    1986-06-01

    Mitotic frequency in a synchronous culture of mammalian cells was determined fully automatically and in real time using low-intensity phase-contrast microscopy and a Newvicon video camera connected to an EyeCom III image processor. Image samples, at a frequency of one per minute for 50 hours, were analyzed by first extracting the high-frequency picture components, then thresholding and probing for annular objects indicative of putative mitotic cells. Both the extraction of high-frequency components and the recognition of rings of varying radii and discontinuities employed novel algorithms. Spatial and temporal relationships between annuli were examined to discern the occurrences of mitoses, and such events were recorded in a computer data file. At present, the automatic analysis is suited for random cell proliferation rate measurements or cell cycle studies. The automatic identification of mitotic cells as described here provides a measure of the average proliferative activity of the cell population as a whole and eliminates more than eight hours of manual review per time-lapse video recording.

  13. Automatic estimation of extent of resection and residual tumor volume of patients with glioblastoma.

    PubMed

    Meier, Raphael; Porz, Nicole; Knecht, Urspeter; Loosli, Tina; Schucht, Philippe; Beck, Jürgen; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio

    2017-10-01

    OBJECTIVE In the treatment of glioblastoma, residual tumor burden is the only prognostic factor that can be actively influenced by therapy. Therefore, an accurate, reproducible, and objective measurement of residual tumor burden is necessary. This study aimed to evaluate the use of a fully automatic segmentation method-brain tumor image analysis (BraTumIA)-for estimating the extent of resection (EOR) and residual tumor volume (RTV) of contrast-enhancing tumor after surgery. METHODS The imaging data of 19 patients who underwent primary resection of histologically confirmed supratentorial glioblastoma were retrospectively reviewed. Contrast-enhancing tumors apparent on structural preoperative and immediate postoperative MR imaging in this patient cohort were segmented by 4 different raters and the automatic segmentation BraTumIA software. The manual and automatic results were quantitatively compared. RESULTS First, the interrater variabilities in the estimates of EOR and RTV were assessed for all human raters. Interrater agreement in terms of the coefficient of concordance (W) was higher for RTV (W = 0.812; p < 0.001) than for EOR (W = 0.775; p < 0.001). Second, the volumetric estimates of BraTumIA for all 19 patients were compared with the estimates of the human raters, which showed that for both EOR (W = 0.713; p < 0.001) and RTV (W = 0.693; p < 0.001) the estimates of BraTumIA were generally located close to or between the estimates of the human raters. No statistically significant differences were detected between the manual and automatic estimates. BraTumIA showed a tendency to overestimate contrast-enhancing tumors, leading to moderate agreement with expert raters with respect to the literature-based, survival-relevant threshold values for EOR. CONCLUSIONS BraTumIA can generate volumetric estimates of EOR and RTV, in a fully automatic fashion, which are comparable to the estimates of human experts. However, automated analysis showed a tendency to overestimate the volume of a contrast-enhancing tumor, whereas manual analysis is prone to subjectivity, thereby causing considerable interrater variability.
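
    For orientation, EOR is conventionally derived from the pre- and postoperative contrast-enhancing volumes as EOR = (pre - post)/pre. A one-line Python sketch with hypothetical volumes:

        def eor_rtv(pre_volume_ml, post_volume_ml):
            """Extent of resection (%) and residual tumor volume (mL)."""
            eor = 100.0 * (pre_volume_ml - post_volume_ml) / pre_volume_ml
            return eor, post_volume_ml

        print(eor_rtv(35.0, 1.4))  # hypothetical volumes -> (96.0, 1.4)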

  14. "Ask Ernö": a self-learning tool for assignment and prediction of nuclear magnetic resonance spectra.

    PubMed

    Castillo, Andrés M; Bernal, Andrés; Dieden, Reiner; Patiny, Luc; Wist, Julien

    2016-01-01

    We present "Ask Ernö", a self-learning system for the automatic analysis of NMR spectra, consisting of integrated chemical shift assignment and prediction tools. The output of the automatic assignment component initializes and improves a database of assigned protons that is used by the chemical shift predictor. In turn, the predictions provided by the latter facilitate improvement of the assignment process. Iteration on these steps allows Ask Ernö to improve its ability to assign and predict spectra without any prior knowledge or assistance from human experts. This concept was tested by training such a system with a dataset of 2341 molecules and their (1)H-NMR spectra, and evaluating the accuracy of chemical shift predictions on a test set of 298 partially assigned molecules (2007 assigned protons). After 10 iterations, Ask Ernö was able to decrease its prediction error by 17 %, reaching an average error of 0.265 ppm. Over 60 % of the test chemical shifts were predicted within 0.2 ppm, while only 5 % still presented a prediction error of more than 1 ppm. Ask Ernö introduces an innovative approach to automatic NMR analysis that constantly learns and improves when provided with new data. Furthermore, it completely avoids the need for manually assigned spectra. This system has the potential to be turned into a fully autonomous tool able to compete with the best alternatives currently available.Graphical abstractSelf-learning loop. Any progress in the prediction (forward problem) will improve the assignment ability (reverse problem) and vice versa.

  15. Reproducing the internal and external anatomy of fossil bones: Two new automatic digital tools.

    PubMed

    Profico, Antonio; Schlager, Stefan; Valoriani, Veronica; Buzi, Costantino; Melchionna, Marina; Veneziano, Alessio; Raia, Pasquale; Moggi-Cecchi, Jacopo; Manzi, Giorgio

    2018-04-21

    We present two new automatic tools, developed under the R environment, to reproduce the internal and external structures of bony elements. The first method, Computer-Aided Laser Scanner Emulator (CA-LSE), provides the reconstruction of the external portions of a 3D mesh by simulating the action of a laser scanner. The second method, Automatic Segmentation Tool for 3D objects (AST-3D), performs the digital reconstruction of anatomical cavities. We present the application of CA-LSE and AST-3D methods to different anatomical remains, highly variable in terms of shape, size and structure: a modern human skull, a malleus bone, and a Neanderthal deciduous tooth. Both methods are developed in the R environment and embedded in the packages "Arothron" and "Morpho," where both the codes and the data are fully available. The application of CA-LSE and AST-3D allows the isolation and manipulation of the internal and external components of the 3D virtual representation of complex bony elements. In particular, we present the output of the four case studies: a complete modern human endocast and the right maxillary sinus, the dental pulp of the Neanderthal tooth and the inner network of blood vessels of the malleus. Both methods demonstrated to be much faster, cheaper, and more accurate than other conventional approaches. The tools we presented are available as add-ons in existing software within the R platform. Because of ease of application, and unrestrained availability of the methods proposed, these tools can be widely used by paleoanthropologists, paleontologists and anatomists. © 2018 Wiley Periodicals, Inc.

  16. Computer-based planning of optimal donor sites for autologous osseous grafts

    NASA Astrophysics Data System (ADS)

    Krol, Zdzislaw; Chlebiej, Michal; Zerfass, Peter; Zeilhofer, Hans-Florian U.; Sader, Robert; Mikolajczak, Pawel; Keeve, Erwin

    2002-05-01

    Bone graft surgery is often necessary for the reconstruction of craniofacial defects after trauma, tumor, infection or congenital malformation. In this operative technique the defect left by the removed or missing bone segment is filled with a bone graft. The mainstay of craniofacial reconstruction rests with the replacement of the defective bone by autogenous bone grafts. To achieve sufficient incorporation of the autograft into the host bone, precise planning and simulation of the surgical intervention is required. The major problem is to determine as accurately as possible the donor site from which the graft should be dissected, and to define the shape of the desired transplant. A computer-aided method for semi-automatic selection of optimal donor sites for autografts in craniofacial reconstructive surgery has been developed. The non-automatic step of graft design and constraint setting is followed by a fully automatic procedure to find the best fitting position. In extension of preceding work, a new optimization approach based on the Levenberg-Marquardt method has been implemented and embedded into our computer-based surgical planning system. This new technique enables, once the pre-processing step has been performed, selection of the optimal donor site in less than one minute. The method has been applied during the surgery planning step in more than 20 cases. Postoperative observations have shown that functional results, such as speech and chewing ability as well as restoration of bony continuity, were clearly better compared to conventionally planned operations. Moreover, in most cases the duration of the surgical intervention was distinctly reduced.
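
    The fitting idea can be illustrated with a generic rigid-pose least-squares problem solved by Levenberg-Marquardt; the Python sketch below uses toy point sets and is not the authors' implementation.

        import numpy as np
        from scipy.optimize import least_squares

        def rigid(params, pts):
            """Apply a rigid transform (Euler angles rx, ry, rz plus a
            translation) to an (N, 3) point array."""
            rx, ry, rz, tx, ty, tz = params
            cx, sx = np.cos(rx), np.sin(rx)
            cy, sy = np.cos(ry), np.sin(ry)
            cz, sz = np.cos(rz), np.sin(rz)
            Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
            Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
            Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
            return pts @ (Rz @ Ry @ Rx).T + np.array([tx, ty, tz])

        rng = np.random.default_rng(3)
        defect = rng.random((50, 3)) * 10                         # toy target
        graft = rigid([0.1, -0.05, 0.2, 1.0, -2.0, 0.5], defect)  # displaced

        # Levenberg-Marquardt fit of the pose that maps the graft points
        # back onto the defect surface points.
        res = least_squares(lambda p: (rigid(p, graft) - defect).ravel(),
                            x0=np.zeros(6), method="lm")
        print(res.x)  # recovered pose (inverse of the applied displacement)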

  17. Multi-modal automatic montaging of adaptive optics retinal images

    PubMed Central

    Chen, Min; Cooper, Robert F.; Han, Grace K.; Gee, James; Brainard, David H.; Morgan, Jessica I. W.

    2016-01-01

    We present a fully automated adaptive optics (AO) retinal image montaging algorithm using classic scale invariant feature transform with random sample consensus for outlier removal. Our approach is capable of using information from multiple AO modalities (confocal, split detection, and dark field) and can accurately detect discontinuities in the montage. The algorithm output is compared to manual montaging by evaluating the similarity of the overlapping regions after montaging, and calculating the detection rate of discontinuities in the montage. Our results show that the proposed algorithm has high alignment accuracy and a discontinuity detection rate that is comparable (and often superior) to manual montaging. In addition, we analyze and show the benefits of using multiple modalities in the montaging process. We provide the algorithm presented in this paper as open-source and freely available to download. PMID:28018714

  18. Computer assisted detection and analysis of tall cell variant papillary thyroid carcinoma in histological images

    NASA Astrophysics Data System (ADS)

    Kim, Edward; Baloch, Zubair; Kim, Caroline

    2015-03-01

    The number of new cases of thyroid cancer is dramatically increasing, as incidences of this cancer have more than doubled since the early 1970s. The tall cell variant of papillary thyroid carcinoma (TCV-PTC) is one type of thyroid cancer that is more aggressive and usually associated with higher local recurrence and distant metastasis. This variant can be identified through visual characteristics of cells in histological images. Thus, we created a fully automatic algorithm that is able to segment cells using a multi-stage approach. Our method learns the statistical characteristics of nuclei and cells during the segmentation process and utilizes this information for a more accurate result. Furthermore, we are able to analyze the detected regions and extract characteristic cell data that can be used to assist in clinical diagnosis.

  19. Negative Life Events and Antenatal Depression among Pregnant Women in Rural China: The Role of Negative Automatic Thoughts.

    PubMed

    Wang, Yang; Wang, Xiaohua; Liu, Fangnan; Jiang, Xiaoning; Xiao, Yun; Dong, Xuehan; Kong, Xianglei; Yang, Xuemei; Tian, Donghua; Qu, Zhiyong

    2016-01-01

    Few studies have looked at the relationship between psychological factors and the mental health status of pregnant women in rural China. The current study aims to explore the potential mediating effect of negative automatic thoughts between negative life events and antenatal depression. Data were collected in June 2012 and October 2012; 495 rural pregnant women were interviewed. Depressive symptoms were measured by the Edinburgh postnatal depression scale, stresses of pregnancy were measured by the pregnancy pressure scale, negative automatic thoughts were measured by the automatic thoughts questionnaire, and negative life events were measured by the life events scale for pregnant women. We used logistic regression and path analysis to test the mediating effect. The prevalence of antenatal depression was 13.7%. In the logistic regression, the only socio-demographic and health behavior factor significantly related to antenatal depression was sleep quality. Negative life events were not associated with depression in the fully adjusted model. Path analysis showed that the direct and total effects of negative automatic thoughts were 0.39 and 0.51, respectively, which were larger than the effects of negative life events. This study suggested a potentially significant mediating effect of negative automatic thoughts: pregnant women with lower negative automatic thought scores suffered less from the negative life events that might otherwise lead to antenatal depression.

  20. A quality score for coronary artery tree extraction results

    NASA Astrophysics Data System (ADS)

    Cao, Qing; Broersen, Alexander; Kitslaar, Pieter H.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke

    2018-02-01

    Coronary artery trees (CATs) are often extracted to aid the fully automatic analysis of coronary artery disease on coronary computed tomography angiography (CCTA) images. Automatically extracted CATs often miss some arteries or include wrong extractions, which require manual correction before successive steps can be performed. For analyzing a large number of datasets, a manual quality check of the extraction results is time-consuming. This paper presents a method to automatically calculate quality scores for extracted CATs in terms of the clinical significance of the extracted arteries and the completeness of the extracted CAT. Both right dominant (RD) and left dominant (LD) anatomical statistical models are generated and exploited in developing the quality score. To automatically determine which model should be used, a dominance type detection method is also designed. Experiments are performed on the automatically extracted and manually refined CATs from 42 datasets to evaluate the proposed quality score. In 39 (92.9%) cases, the proposed method scores the manually refined CATs higher than the automatically extracted CATs. On a 100-point scale, the average scores for the automatically extracted and manually refined CATs are 82.0 (+/-15.8) and 88.9 (+/-5.4), respectively. The proposed quality score will assist the automatic processing of CAT extractions for large cohorts containing both RD and LD cases. To the best of our knowledge, this is the first time that a general quality score for an extracted CAT is presented.
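
    The scoring idea can be illustrated with a toy completeness-weighted function; the segment names, weights, and expected lengths below are invented for illustration, whereas the paper derives them from its RD/LD anatomical statistical models.

      # Toy clinical-significance-weighted quality score for an extracted CAT.
      SEGMENT_WEIGHTS = {"RCA": 0.25, "LAD": 0.30, "LCX": 0.25, "LM": 0.20}

      def quality_score(extracted_mm, expected_mm):
          """Weighted completeness of each clinically significant segment."""
          score = 0.0
          for seg, weight in SEGMENT_WEIGHTS.items():
              completeness = min(extracted_mm.get(seg, 0.0) / expected_mm[seg], 1.0)
              score += weight * completeness
          return 100.0 * score  # map onto a 100-point scale

      # A tree missing most of the LCX scores noticeably lower (78.5 here):
      print(quality_score({"RCA": 80, "LAD": 95, "LCX": 40, "LM": 10},
                          {"RCA": 100, "LAD": 100, "LCX": 100, "LM": 10}))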

  1. The automaticity of face perception is influenced by familiarity.

    PubMed

    Yan, Xiaoqian; Young, Andrew W; Andrews, Timothy J

    2017-10-01

    In this study, we explore the automaticity of encoding for different facial characteristics and ask whether it is influenced by face familiarity. We used a matching task in which participants had to report whether the gender, identity, race, or expression of two briefly presented faces was the same or different. The task was made challenging by allowing nonrelevant dimensions to vary across trials. To test for automaticity, we compared performance on trials in which the task instruction was given at the beginning of the trial, with trials in which the task instruction was given at the end of the trial. As a strong criterion for automatic processing, we reasoned that if perception of a given characteristic (gender, race, identity, or emotion) is fully automatic, the timing of the instruction should not influence performance. We compared automaticity for the perception of familiar and unfamiliar faces. Performance with unfamiliar faces was higher for all tasks when the instruction was given at the beginning of the trial. However, we found a significant interaction between instruction and task with familiar faces. Accuracy of gender and identity judgments to familiar faces was the same regardless of whether the instruction was given before or after the trial, suggesting automatic processing of these properties. In contrast, there was an effect of instruction for judgments of expression and race to familiar faces. These results show that familiarity enhances the automatic processing of some types of facial information more than others.

  2. Fully automatic characterization and data collection from crystals of biological macromolecules.

    PubMed

    Svensson, Olof; Malbet-Monaco, Stéphanie; Popov, Alexander; Nurizzo, Didier; Bowler, Matthew W

    2015-08-01

    Considerable effort is dedicated to evaluating macromolecular crystals at synchrotron sources, even for well established and robust systems. Much of this work is repetitive, and the time spent could be better invested in the interpretation of the results. In order to decrease the need for manual intervention in the most repetitive steps of structural biology projects, initial screening and data collection, a fully automatic system has been developed to mount, locate, centre to the optimal diffraction volume, characterize and, if possible, collect data from multiple cryocooled crystals. Using the capabilities of pixel-array detectors, the system is as fast as a human operator, taking an average of 6 min per sample depending on the sample size and the level of characterization required. Using a fast X-ray-based routine, samples are located and centred systematically at the position of highest diffraction signal and important parameters for sample characterization, such as flux, beam size and crystal volume, are automatically taken into account, ensuring the calculation of optimal data-collection strategies. The system is now in operation at the new ESRF beamline MASSIF-1 and has been used by both industrial and academic users for many different sample types, including crystals of less than 20 µm in the smallest dimension. To date, over 8000 samples have been evaluated on MASSIF-1 without any human intervention.

  3. Clinical Evaluation of a Fully-automatic Segmentation Method for Longitudinal Brain Tumor Volumetry

    NASA Astrophysics Data System (ADS)

    Meier, Raphael; Knecht, Urspeter; Loosli, Tina; Bauer, Stefan; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio

    2016-03-01

    Information about the size of a tumor and its temporal evolution is needed for diagnosis as well as treatment of brain tumor patients. The aim of the study was to investigate the potential of a fully-automatic segmentation method, called BraTumIA, for longitudinal brain tumor volumetry by comparing the automatically estimated volumes with ground truth data acquired via manual segmentation. Longitudinal Magnetic Resonance (MR) Imaging data of 14 patients with newly diagnosed glioblastoma encompassing 64 MR acquisitions, ranging from preoperative up to 12 month follow-up images, was analysed. Manual segmentation was performed by two human raters. Strong correlations (R = 0.83-0.96, p < 0.001) were observed between volumetric estimates of BraTumIA and of each of the human raters for the contrast-enhancing (CET) and non-enhancing T2-hyperintense tumor compartments (NCE-T2). A quantitative analysis of the inter-rater disagreement showed that the disagreement between BraTumIA and each of the human raters was comparable to the disagreement between the human raters. In summary, BraTumIA generated volumetric trend curves of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments comparable to estimates of human raters. These findings suggest the potential of automated longitudinal tumor segmentation to substitute manual volumetric follow-up of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments.

  4. Clinical Evaluation of a Fully-automatic Segmentation Method for Longitudinal Brain Tumor Volumetry.

    PubMed

    Meier, Raphael; Knecht, Urspeter; Loosli, Tina; Bauer, Stefan; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio

    2016-03-22

    Information about the size of a tumor and its temporal evolution is needed for diagnosis as well as treatment of brain tumor patients. The aim of the study was to investigate the potential of a fully-automatic segmentation method, called BraTumIA, for longitudinal brain tumor volumetry by comparing the automatically estimated volumes with ground truth data acquired via manual segmentation. Longitudinal Magnetic Resonance (MR) Imaging data of 14 patients with newly diagnosed glioblastoma encompassing 64 MR acquisitions, ranging from preoperative up to 12 month follow-up images, was analysed. Manual segmentation was performed by two human raters. Strong correlations (R = 0.83-0.96, p < 0.001) were observed between volumetric estimates of BraTumIA and of each of the human raters for the contrast-enhancing (CET) and non-enhancing T2-hyperintense tumor compartments (NCE-T2). A quantitative analysis of the inter-rater disagreement showed that the disagreement between BraTumIA and each of the human raters was comparable to the disagreement between the human raters. In summary, BraTumIA generated volumetric trend curves of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments comparable to estimates of human raters. These findings suggest the potential of automated longitudinal tumor segmentation to substitute manual volumetric follow-up of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments.

  5. Automatic Skin Lesion Segmentation Using Deep Fully Convolutional Networks With Jaccard Distance.

    PubMed

    Yuan, Yading; Chao, Ming; Lo, Yeh-Chi

    2017-09-01

    Automatic skin lesion segmentation in dermoscopic images is a challenging task due to the low contrast between the lesion and the surrounding skin, the irregular and fuzzy lesion borders, the existence of various artifacts, and various image acquisition conditions. In this paper, we present a fully automatic method for skin lesion segmentation by leveraging a 19-layer deep convolutional neural network that is trained end-to-end and does not rely on prior knowledge of the data. We propose a set of strategies to ensure effective and efficient learning with limited training data. Furthermore, we design a novel loss function based on the Jaccard distance to eliminate the need for sample re-weighting, a typical procedure when using cross entropy as the loss function for image segmentation due to the strong imbalance between the number of foreground and background pixels. We evaluated the effectiveness, efficiency, as well as the generalization capability of the proposed framework on two publicly available databases. One is from the ISBI 2016 skin lesion analysis towards melanoma detection challenge, and the other is the PH2 database. Experimental results showed that the proposed method outperformed other state-of-the-art algorithms on these two databases. Our method is general enough and needs only minimal pre- and post-processing, which allows its adoption in a variety of medical image segmentation tasks.
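
    The Jaccard-distance idea can be sketched in a few lines of NumPy; the exact loss in the paper may differ in detail, but the standard soft Jaccard distance it builds on looks like this, and the ratio form explains why no foreground/background re-weighting is needed.

      import numpy as np

      def soft_jaccard_loss(y_true, y_pred, eps=1e-7):
          """Jaccard-distance loss: 1 - |T∩P| / |T∪P| on soft predictions.
          Being a ratio over the whole image, the large background class
          cannot swamp the foreground, unlike plain cross entropy."""
          intersection = np.sum(y_true * y_pred)
          union = np.sum(y_true) + np.sum(y_pred) - intersection
          return 1.0 - intersection / (union + eps)

      # Tiny example: a 2x2 lesion, with the network slightly over-segmenting.
      t = np.zeros((10, 10)); t[4:6, 4:6] = 1.0    # ground-truth mask
      p = np.zeros((10, 10)); p[4:6, 4:7] = 0.9    # soft network output
      print(round(soft_jaccard_loss(t, p), 3))     # ~0.379: partial overlap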

  6. Accurate and Fully Automatic Hippocampus Segmentation Using Subject-Specific 3D Optimal Local Maps Into a Hybrid Active Contour Model

    PubMed Central

    Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos

    2014-01-01

    Assessing the structural integrity of the hippocampus (HC) is an essential step toward prevention, diagnosis, and follow-up of various brain disorders due to the implication of the structural changes of the HC in those disorders. In this respect, the development of automatic segmentation methods that can accurately, reliably, and reproducibly segment the HC has attracted considerable attention over the past decades. This paper presents an innovative 3-D fully automatic method to be used on top of the multiatlas concept for the HC segmentation. The method is based on a subject-specific set of 3-D optimal local maps (OLMs) that locally control the influence of each energy term of a hybrid active contour model (ACM). The complete set of the OLMs for a set of training images is defined simultaneously via an optimization scheme. At the same time, the optimal ACM parameters are also calculated. Therefore, heuristic parameter fine-tuning is not required. Training OLMs are subsequently combined, by applying an extended multiatlas concept, to produce the OLMs that are anatomically more suitable to the test image. The proposed algorithm was tested on three different and publicly available data sets. Its accuracy was compared with that of state-of-the-art methods demonstrating the efficacy and robustness of the proposed method. PMID:27170866

  7. Clinical Evaluation of a Fully-automatic Segmentation Method for Longitudinal Brain Tumor Volumetry

    PubMed Central

    Meier, Raphael; Knecht, Urspeter; Loosli, Tina; Bauer, Stefan; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio

    2016-01-01

    Information about the size of a tumor and its temporal evolution is needed for diagnosis as well as treatment of brain tumor patients. The aim of the study was to investigate the potential of a fully-automatic segmentation method, called BraTumIA, for longitudinal brain tumor volumetry by comparing the automatically estimated volumes with ground truth data acquired via manual segmentation. Longitudinal Magnetic Resonance (MR) Imaging data of 14 patients with newly diagnosed glioblastoma encompassing 64 MR acquisitions, ranging from preoperative up to 12 month follow-up images, was analysed. Manual segmentation was performed by two human raters. Strong correlations (R = 0.83–0.96, p < 0.001) were observed between volumetric estimates of BraTumIA and of each of the human raters for the contrast-enhancing (CET) and non-enhancing T2-hyperintense tumor compartments (NCE-T2). A quantitative analysis of the inter-rater disagreement showed that the disagreement between BraTumIA and each of the human raters was comparable to the disagreement between the human raters. In summary, BraTumIA generated volumetric trend curves of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments comparable to estimates of human raters. These findings suggest the potential of automated longitudinal tumor segmentation to substitute manual volumetric follow-up of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments. PMID:27001047

  8. Normalizing biomedical terms by minimizing ambiguity and variability

    PubMed Central

    Tsuruoka, Yoshimasa; McNaught, John; Ananiadou, Sophia

    2008-01-01

    Background: One of the difficulties in mapping biomedical named entities, e.g. genes, proteins, chemicals and diseases, to their concept identifiers stems from the potential variability of the terms. Soft string matching is a possible solution to the problem, but its inherent heavy computational cost discourages its use when the dictionaries are large or when real time processing is required. A less computationally demanding approach is to normalize the terms by using heuristic rules, which enables us to look up a dictionary in a constant time regardless of its size. The development of good heuristic rules, however, requires extensive knowledge of the terminology in question and thus is the bottleneck of the normalization approach. Results: We present a novel framework for discovering a list of normalization rules from a dictionary in a fully automated manner. The rules are discovered in such a way that they minimize the ambiguity and variability of the terms in the dictionary. We evaluated our algorithm using two large dictionaries: a human gene/protein name dictionary built from BioThesaurus and a disease name dictionary built from UMLS. Conclusions: The experimental results showed that automatically discovered rules can perform comparably to carefully crafted heuristic rules in term mapping tasks, and the computational overhead of rule application is small enough that a very fast implementation is possible. This work will help improve the performance of term-concept mapping tasks in biomedical information extraction especially when good normalization heuristics for the target terminology are not fully known. PMID:18426547
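
    For illustration, a handful of rules of the kind such a framework might discover can be chained as plain functions; the specific rules below (case folding, separator unification, dropping weak head nouns) are hypothetical examples, not the learned rule set from the paper.

      import re

      RULES = [
          lambda s: s.lower(),                        # case normalization
          lambda s: re.sub(r"[-_/]", " ", s),         # unify separators
          lambda s: re.sub(r"\s+", " ", s).strip(),   # collapse whitespace
          lambda s: re.sub(r"\b(protein|gene)\b", "", s).strip(),  # weak head nouns
      ]

      def normalize(term):
          for rule in RULES:
              term = rule(term)
          return term

      # Both variants now hit the same dictionary key in constant time:
      print(normalize("IL-2 gene"), "|", normalize("il 2"))   # "il 2 | il 2"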

  9. Automatic Generation of Tests from Domain and Multimedia Ontologies

    ERIC Educational Resources Information Center

    Papasalouros, Andreas; Kotis, Konstantinos; Kanaris, Konstantinos

    2011-01-01

    The aim of this article is to present an approach for generating tests in an automatic way. Although other methods have been already reported in the literature, the proposed approach is based on ontologies, representing both domain and multimedia knowledge. The article also reports on a prototype implementation of this approach, which…

  10. The TREC Interactive Track: An Annotated Bibliography.

    ERIC Educational Resources Information Center

    Over, Paul

    2001-01-01

    Discussion of the study of interactive information retrieval (IR) at the Text Retrieval Conferences (TREC) focuses on summaries of the Interactive Track at each conference. Describes evolution of the track, which has changed from comparing human-machine systems with fully automatic systems to comparing interactive systems that focus on the search…

  11. 46 CFR 112.39-1 - General.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Battery Operated Lanterns § 112.39-1 General. (a) Each battery-operated, relay-controlled lantern used in accordance with Table 112.05-5(a) must: (1) Have rechargeable batteries; (2) Have an automatic battery charger that maintains the battery in a fully charged condition; and (3) Not be readily portable. [CGD 74...

  12. 75 FR 35447 - Buy American Exception Under the American Recovery and Reinvestment Act of 2009; Nationwide...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-22

    ... Reinvestment and Recovery Act of 2009 (Recovery Act) to EERE-funded projects for non-residential programmable... non-residential programmable thermostats; commercial scale fully-automatic wood pellet boiler systems...) Programmable Thermostats--Includes devices that permit adjustment of heating or air-conditioning operations...

  13. 46 CFR 112.39-1 - General.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Battery Operated Lanterns § 112.39-1 General. (a) Each battery-operated, relay-controlled lantern used in accordance with Table 112.05-5(a) must: (1) Have rechargeable batteries; (2) Have an automatic battery charger that maintains the battery in a fully charged condition; and (3) Not be readily portable. [CGD 74...

  14. 46 CFR 112.39-1 - General.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Battery Operated Lanterns § 112.39-1 General. (a) Each battery-operated, relay-controlled lantern used in accordance with Table 112.05-5(a) must: (1) Have rechargeable batteries; (2) Have an automatic battery charger that maintains the battery in a fully charged condition; and (3) Not be readily portable. [CGD 74...

  15. 46 CFR 112.39-1 - General.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Battery Operated Lanterns § 112.39-1 General. (a) Each battery-operated, relay-controlled lantern used in accordance with Table 112.05-5(a) must: (1) Have rechargeable batteries; (2) Have an automatic battery charger that maintains the battery in a fully charged condition; and (3) Not be readily portable. [CGD 74...

  16. 46 CFR 112.39-1 - General.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Battery Operated Lanterns § 112.39-1 General. (a) Each battery-operated, relay-controlled lantern used in accordance with Table 112.05-5(a) must: (1) Have rechargeable batteries; (2) Have an automatic battery charger that maintains the battery in a fully charged condition; and (3) Not be readily portable. [CGD 74...

  17. Automatic co-segmentation of lung tumor based on random forest in PET-CT images

    NASA Astrophysics Data System (ADS)

    Jiang, Xueqing; Xiang, Dehui; Zhang, Bin; Zhu, Weifang; Shi, Fei; Chen, Xinjian

    2016-03-01

    In this paper, a fully automatic method is proposed to segment the lung tumor in clinical 3D PET-CT images. The proposed method effectively combines PET and CT information to make full use of the high contrast of PET images and the superior spatial resolution of CT images. Our approach consists of three main parts: (1) initial segmentation, in which spines are removed in CT images and initial connected regions are obtained by thresholding-based segmentation in PET images; (2) coarse segmentation, in which a monotonic downhill function is applied to rule out structures which have standardized uptake values (SUV) similar to the lung tumor but do not satisfy a monotonic property in PET images; (3) fine segmentation, in which the random forest method is applied to accurately segment the lung tumor by extracting effective features from PET and CT images simultaneously. We validated our algorithm on a dataset which consists of 24 3D PET-CT images from different patients with non-small cell lung cancer (NSCLC). The average TPVF, FPVF and accuracy rate (ACC) were 83.65%, 0.05% and 99.93%, respectively. The correlation analysis shows that our segmented lung tumor volumes have a strong correlation (average 0.985) with ground truth 1 and ground truth 2 labeled by a clinical expert.
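
    The fine-segmentation step can be pictured as voxel-wise classification; a minimal sketch with scikit-learn follows, in which the two features (PET SUV and CT Hounsfield value) and their synthetic distributions are assumptions standing in for the paper's actual PET/CT feature set.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(1)
      n = 1000  # synthetic voxels: first half tumor, second half background
      suv = np.concatenate([rng.normal(6, 1, n // 2), rng.normal(2, 1, n // 2)])
      hu = np.concatenate([rng.normal(40, 15, n // 2), rng.normal(-700, 80, n // 2)])
      X = np.column_stack([suv, hu])                  # one row per voxel: (SUV, HU)
      y = np.array([1] * (n // 2) + [0] * (n // 2))   # 1 = tumor voxel

      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
      # Probability that a bright, soft-tissue-density voxel belongs to the tumor:
      print(clf.predict_proba([[5.5, 35.0]])[0, 1])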

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    John Homer; Ashok Varikuti; Xinming Ou

    Various tools exist to analyze enterprise network systems and to produce attack graphs detailing how attackers might penetrate into the system. These attack graphs, however, are often complex and difficult to comprehend fully, and a human user may find it problematic to reach appropriate configuration decisions. This paper presents methodologies that can 1) automatically identify portions of an attack graph that do not help a user to understand the core security problems and so can be trimmed, and 2) automatically group similar attack steps as virtual nodes in a model of the network topology, to immediately increase the understandability of the data. We believe both methods are important steps toward improving visualization of attack graphs to make them more useful in configuration management for large enterprise networks. We implemented our methods using one of the existing attack-graph toolkits. Initial experimentation shows that the proposed approaches can 1) significantly reduce the complexity of attack graphs by trimming a large portion of the graph that is not needed for a user to understand the security problem, and 2) significantly increase the accessibility and understandability of the data presented in the attack graph by clearly showing, within a generated visualization of the network topology, the number and type of potential attacks to which each host is exposed.

  19. Automated Generation of Fault Management Artifacts from a Simple System Model

    NASA Technical Reports Server (NTRS)

    Kennedy, Andrew K.; Day, John C.

    2013-01-01

    Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA) through querying a representation of the system in a SysML model. This work builds off the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and it was restructured in an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that was developed to traverse the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as Fault Trees, to efficiently portray system behavior, and depend less on the intuition of fault management engineers to ensure complete examination of off-nominal behavior.
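
    A miniature version of the artifact-generation step might look like the following, where the component classes and failure-mode strings are invented placeholders for what would be queried from the SysML model.

      from dataclasses import dataclass, field

      @dataclass
      class Component:
          name: str
          failure_modes: list
          affects: list = field(default_factory=list)  # downstream components

      def generate_fmea(components):
          """Emit one FMEA row (component, failure mode, effect) per mode."""
          rows = []
          for comp in components:
              for mode in comp.failure_modes:
                  effect = ", ".join(c.name for c in comp.affects) or "local only"
                  rows.append((comp.name, mode, f"degrades: {effect}"))
          return rows

      radio = Component("Radio", ["loss of downlink"])
      battery = Component("Battery", ["cell short", "undervoltage"], affects=[radio])
      for row in generate_fmea([battery, radio]):
          print(row)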

  20. Automatized set-up procedure for transcranial magnetic stimulation protocols.

    PubMed

    Harquel, S; Diard, J; Raffin, E; Passera, B; Dall'Igna, G; Marendaz, C; David, O; Chauvin, A

    2017-06-01

    Transcranial Magnetic Stimulation (TMS) has established itself as a powerful technique for probing and treating the human brain. Major technological evolutions, such as neuronavigation and robotized systems, have continuously increased the spatial reliability and reproducibility of TMS by minimizing the influence of human and experimental factors. However, there is still a lack of an efficient set-up procedure, which prevents the automation of TMS protocols. For example, the set-up procedure for defining the stimulation intensity specific to each subject is classically done manually by experienced practitioners, by assessing the motor cortical excitability level over the motor hotspot (HS) of a targeted muscle. This is time-consuming and introduces experimental variability. Therefore, we developed a probabilistic Bayesian model (AutoHS) that automatically identifies the HS position. Using virtual and real experiments, we compared the efficacy of the manual and automated procedures. AutoHS appeared to be more reproducible, faster, and at least as reliable as classical manual procedures. By combining AutoHS with robotized TMS and automated motor threshold estimation methods, our approach constitutes the first fully automated set-up procedure for TMS protocols. The use of this procedure decreases inter-experimenter variability while facilitating the handling of TMS protocols used for research and clinical routine. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Pilot control through the TAFCOS automatic flight control system

    NASA Technical Reports Server (NTRS)

    Wehrend, W. R., Jr.

    1979-01-01

    The set of flight control logic used in a recently completed flight test program to evaluate the total automatic flight control system (TAFCOS), with the controller operating in a fully automatic mode, was used to perform an unmanned simulation on an IBM 360 computer in which the TAFCOS concept was extended to provide a multilevel pilot interface. A pilot TAFCOS interface for direct pilot control by use of a velocity-control-wheel-steering mode was defined, as well as a means for calling up conventional autopilot modes. It is concluded that the TAFCOS structure is easily adaptable to the addition of pilot control through a stick-wheel-throttle control similar to conventional airplane controls. Conventional autopilot modes, such as airspeed-hold, altitude-hold, heading-hold, and flight-path-angle-hold, can also be included.

  2. A real-time freehand ultrasound calibration system with automatic accuracy feedback and control.

    PubMed

    Chen, Thomas Kuiran; Thurston, Adrian D; Ellis, Randy E; Abolmaesumi, Purang

    2009-01-01

    This article describes a fully automatic, real-time, freehand ultrasound calibration system. The system was designed to be simple and sterilizable, intended for operating-room usage. The calibration system employed an automatic-error-retrieval and accuracy-control mechanism based on a set of ground-truth data. Extensive validations were conducted on a data set of 10,000 images in 50 independent calibration trials to thoroughly investigate the accuracy, robustness, and performance of the calibration system. On average, the calibration accuracy (measured in three-dimensional reconstruction error against a known ground truth) of all 50 trials was 0.66 mm. In addition, the calibration errors converged to submillimeter in 98% of all trials within 12.5 s on average. Overall, the calibration system was able to consistently, efficiently and robustly achieve high calibration accuracy with real-time performance.

  3. Automation methodologies and large-scale validation for GW: Towards high-throughput GW calculations

    NASA Astrophysics Data System (ADS)

    van Setten, M. J.; Giantomassi, M.; Gonze, X.; Rignanese, G.-M.; Hautier, G.

    2017-10-01

    The search for new materials based on computational screening relies on methods that accurately predict, in an automatic manner, total energy, atomic-scale geometries, and other fundamental characteristics of materials. Many technologically important material properties directly stem from the electronic structure of a material, but the usual workhorse for total energies, namely density-functional theory, is plagued by fundamental shortcomings and errors from approximate exchange-correlation functionals in its prediction of the electronic structure. By contrast, the GW method is currently the state-of-the-art ab initio approach for accurate electronic structure. It is mostly used to perturbatively correct density-functional theory results, but it is computationally demanding and also requires expert knowledge to give accurate results. Accordingly, it is not presently used in high-throughput screening: fully automatized algorithms for setting up the calculations and determining convergence are lacking. In this paper, we develop such a method and, as a first application, use it to validate the accuracy of G0W0 using the PBE starting point and the Godby-Needs plasmon-pole model (G0W0(GN)@PBE) on a set of about 80 solids. The results of the automatic convergence study provide valuable insights. Indeed, we find correlations between computational parameters that can be used to further improve the automatization of GW calculations. Moreover, we find that the correlation between the PBE and the G0W0(GN)@PBE gaps is much stronger than that between GW and experimental gaps. However, the G0W0(GN)@PBE gaps still describe the experimental gaps more accurately than a linear model based on the PBE gaps. With this paper, we hence show that GW can be made automatic and is more accurate than using an empirical correction of the PBE gap, but that, for accurate predictive results for a broad class of materials, an improved starting point or some type of self-consistency is necessary.

  4. An automatic quantification system for MS lesions with integrated DICOM structured reporting (DICOM-SR) for implementation within a clinical environment

    NASA Astrophysics Data System (ADS)

    Jacobs, Colin; Ma, Kevin; Moin, Paymann; Liu, Brent

    2010-03-01

    Multiple Sclerosis (MS) is a common neurological disease affecting the central nervous system, characterized by pathologic changes including demyelination and axonal injury. MR imaging has become the most important tool to evaluate the disease progression of MS, which is characterized by the occurrence of white matter lesions. Currently, radiologists evaluate and assess the multiple sclerosis lesions manually by estimating the lesion volume and the number of lesions. This process is extremely time-consuming and sensitive to intra- and inter-observer variability. Therefore, there is a need for automatic segmentation of the MS lesions followed by lesion quantification. We have developed a fully automatic segmentation algorithm to identify the MS lesions. The segmentation algorithm is accelerated by parallel computing using Graphics Processing Units (GPU) for practical implementation in a clinical environment. Subsequently, quantification of the lesions is performed. The quantification results, which include lesion volume and the number of lesions, are stored in a structured report together with the lesion locations in the brain to establish a standardized representation of the disease progression of the patient. The development of this structured report, in collaboration with radiologists, aims to facilitate outcome analysis and treatment assessment of the disease and will be standardized based on DICOM-SR. The results can be distributed to other DICOM-compliant clinical systems that support DICOM-SR, such as PACS. In addition, the implementation of a fully automatic segmentation and quantification system, together with a method for storing, distributing, and visualizing key imaging and informatics data in DICOM-SR, improves the clinical workflow of radiologists, improves visualization of the lesion segmentations, and provides 3-D insight into the distribution of lesions in the brain.

  5. Linked independent component analysis for multimodal data fusion.

    PubMed

    Groves, Adrian R; Beckmann, Christian F; Smith, Steve M; Woolrich, Mark W

    2011-02-01

    In recent years, neuroimaging studies have increasingly been acquiring multiple modalities of data and searching for task- or disease-related changes in each modality separately. A major challenge in analysis is to find systematic approaches for fusing these differing data types together to automatically find patterns of related changes across multiple modalities, when they exist. Independent Component Analysis (ICA) is a popular unsupervised learning method that can be used to find the modes of variation in neuroimaging data across a group of subjects. When multimodal data is acquired for the subjects, ICA is typically performed separately on each modality, leading to incompatible decompositions across modalities. Using a modular Bayesian framework, we develop a novel "Linked ICA" model for simultaneously modelling and discovering common features across multiple modalities, which can potentially have completely different units, signal- and contrast-to-noise ratios, voxel counts, spatial smoothnesses and intensity distributions. Furthermore, this general model can be configured to allow tensor ICA or spatially-concatenated ICA decompositions, or a combination of both at the same time. Linked ICA automatically determines the optimal weighting of each modality, and also can detect single-modality structured components when present. This is a fully probabilistic approach, implemented using Variational Bayes. We evaluate the method on simulated multimodal data sets, as well as on a real data set of Alzheimer's patients and age-matched controls that combines two very different types of structural MRI data: morphological data (grey matter density) and diffusion data (fractional anisotropy, mean diffusivity, and tensor mode). Copyright © 2010 Elsevier Inc. All rights reserved.

  6. Automatic motor task selection via a bandit algorithm for a brain-controlled button

    NASA Astrophysics Data System (ADS)

    Fruitet, Joan; Carpentier, Alexandra; Munos, Rémi; Clerc, Maureen

    2013-02-01

    Objective. Brain-computer interfaces (BCIs) based on sensorimotor rhythms use a variety of motor tasks, such as imagining moving the right or left hand, the feet or the tongue. Finding the tasks that yield best performance, specifically to each user, is a time-consuming preliminary phase to a BCI experiment. This study presents a new adaptive procedure to automatically select (online) the most promising motor task for an asynchronous brain-controlled button. Approach. We develop for this purpose an adaptive algorithm UCB-classif based on the stochastic bandit theory and design an EEG experiment to test our method. We compare (offline) the adaptive algorithm to a naïve selection strategy which uses uniformly distributed samples from each task. We also run the adaptive algorithm online to fully validate the approach. Main results. By not wasting time on inefficient tasks, and focusing on the most promising ones, this algorithm results in a faster task selection and a more efficient use of the BCI training session. More precisely, the offline analysis reveals that the use of this algorithm can reduce the time needed to select the most appropriate task by almost half without loss in precision, or alternatively, allow us to investigate twice the number of tasks within a similar time span. Online tests confirm that the method leads to an optimal task selection. Significance. This study is the first one to optimize the task selection phase by an adaptive procedure. By increasing the number of tasks that can be tested in a given time span, the proposed method could contribute to reducing ‘BCI illiteracy’.
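
    The selection rule itself is compact; as a rough sketch (generic UCB1 with simulated Bernoulli rewards, not the authors' UCB-classif code, and with invented per-task accuracies), the bandit keeps sampling whichever motor task currently has the highest upper confidence bound.

      import math, random

      def ucb_select(counts, means, t, c=2.0):
          """Index of the task with the highest upper confidence bound."""
          return max(range(len(counts)),
                     key=lambda k: means[k] + math.sqrt(c * math.log(t) / counts[k]))

      true_acc = [0.55, 0.70, 0.62]   # hidden per-task accuracies (assumed)
      counts, means = [1, 1, 1], [0.5, 0.5, 0.5]  # one warm-up trial per task

      random.seed(0)
      for t in range(2, 200):
          k = ucb_select(counts, means, t)
          reward = 1.0 if random.random() < true_acc[k] else 0.0  # one BCI trial
          counts[k] += 1
          means[k] += (reward - means[k]) / counts[k]  # running-mean update
      print("most-sampled task:", counts.index(max(counts)))  # typically task 1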

  7. Automated regional analysis of B-mode ultrasound images of skeletal muscle movement

    PubMed Central

    Darby, John; Costen, Nicholas; Loram, Ian D.

    2012-01-01

    To understand the functional significance of skeletal muscle anatomy, a method of quantifying local shape changes in different tissue structures during dynamic tasks is required. Taking advantage of the good spatial and temporal resolution of B-mode ultrasound imaging, we describe a method of automatically segmenting images into fascicle and aponeurosis regions and tracking movement of features, independently, in localized portions of each tissue. Ultrasound images (25 Hz) of the medial gastrocnemius muscle were collected from eight participants during ankle joint rotation (2° and 20°), isometric contractions (1, 5, and 50 Nm), and deep knee bends. A Kanade-Lucas-Tomasi feature tracker was used to identify and track any distinctive and persistent features within the image sequences. A velocity field representation of local movement was then found and subdivided between fascicle and aponeurosis regions using segmentations from a multiresolution active shape model (ASM). Movement in each region was quantified by interpolating the effect of the fields on a set of probes. ASM segmentation results were compared with hand-labeled data, while aponeurosis and fascicle movement were compared with results from a previously documented cross-correlation approach. ASM provided good image segmentations (<1 mm average error), with fully automatic initialization possible in sequences from seven participants. Feature tracking provided similar length change results to the cross-correlation approach for small movements, while outperforming it in larger movements. The proposed method provides the potential to distinguish between active and passive changes in muscle shape and model strain distributions during different movements/conditions and quantify nonhomogeneous strain along aponeuroses. PMID:22033532
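
    The tracking step pairs a corner detector with pyramidal Kanade-Lucas-Tomasi flow; a minimal OpenCV sketch is below, in which the file names, window size, and detector thresholds are placeholders rather than the authors' settings.

      import cv2
      import numpy as np

      # Hypothetical consecutive B-mode frames (grayscale).
      frame0 = cv2.imread("us_frame0.png", cv2.IMREAD_GRAYSCALE)
      frame1 = cv2.imread("us_frame1.png", cv2.IMREAD_GRAYSCALE)

      # Seed distinctive, persistent features (e.g. speckle texture).
      pts = cv2.goodFeaturesToTrack(frame0, maxCorners=200,
                                    qualityLevel=0.01, minDistance=7)

      # One KLT step: follow the features into the next frame.
      new_pts, status, _err = cv2.calcOpticalFlowPyrLK(
          frame0, frame1, pts, None, winSize=(21, 21), maxLevel=3)
      ok = status.ravel() == 1
      displacement = (new_pts[ok] - pts[ok]).reshape(-1, 2)  # pixels/frame
      print("mean feature displacement:", displacement.mean(axis=0))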

  8. Automatic thermographic image defect detection of composites

    NASA Astrophysics Data System (ADS)

    Luo, Bin; Liebenberg, Bjorn; Raymont, Jeff; Santospirito, SP

    2011-05-01

    Detecting defects, and especially reliably measuring defect sizes, are critical objectives in automatic NDT defect detection applications. In this work, the Sentence software is proposed for the analysis of pulsed thermography and near IR images of composite materials. Furthermore, the Sentence software delivers an end-to-end, user friendly platform for engineers to perform complete manual inspections, as well as tools that allow senior engineers to develop inspection templates and profiles, reducing the requisite thermographic skill level of the operating engineer. Finally, the Sentence software can also offer complete independence of operator decisions by the fully automated "Beep on Defect" detection functionality. The end-to-end automatic inspection system includes sub-systems for defining a panel profile, generating an inspection plan, controlling a robot-arm and capturing thermographic images to detect defects. A statistical model has been built to analyze the entire image, evaluate grey-scale ranges, import sentencing criteria and automatically detect impact damage defects. A full width half maximum algorithm has been used to quantify the flaw sizes. The identified defects are imported into the sentencing engine which then sentences (automatically compares analysis results against acceptance criteria) the inspection by comparing the most significant defect or group of defects against the inspection standards.
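
    Flaw sizing by full width at half maximum reduces to finding where an intensity profile crosses its half height; here is a sketch on a synthetic Gaussian hot spot, where the simple pixel-threshold crossing is a simplification of sub-pixel FWHM fitting.

      import numpy as np

      def fwhm(profile):
          """Full width at half maximum of a 1-D profile across a defect."""
          profile = np.asarray(profile, dtype=float)
          half = profile.min() + (profile.max() - profile.min()) / 2.0
          above = np.where(profile >= half)[0]
          return above[-1] - above[0]  # width in pixels at half maximum

      # Gaussian hot spot: theoretical FWHM = 2*sqrt(2*ln 2)*sigma ~ 2.355*sigma
      x = np.arange(200)
      sigma = 12.0
      profile = np.exp(-((x - 100) ** 2) / (2 * sigma ** 2))
      print(fwhm(profile), "px vs. theoretical", round(2.3548 * sigma, 1))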

  9. A Bayesian state-space approach for damage detection and classification

    NASA Astrophysics Data System (ADS)

    Dzunic, Zoran; Chen, Justin G.; Mobahi, Hossein; Büyüköztürk, Oral; Fisher, John W.

    2017-11-01

    The problem of automatic damage detection in civil structures is complex and requires a system that can interpret collected sensor data into meaningful information. We apply our recently developed switching Bayesian model for dependency analysis to the problems of damage detection and classification. The model relies on a state-space approach that accounts for noisy measurement processes and missing data, which also infers the statistical temporal dependency between measurement locations signifying the potential flow of information within the structure. A Gibbs sampling algorithm is used to simultaneously infer the latent states, parameters of the state dynamics, the dependence graph, and any changes in behavior. By employing a fully Bayesian approach, we are able to characterize uncertainty in these variables via their posterior distribution and provide probabilistic estimates of the occurrence of damage or a specific damage scenario. We also implement a single class classification method which is more realistic for most real world situations where training data for a damaged structure is not available. We demonstrate the methodology with experimental test data from a laboratory model structure and accelerometer data from a real world structure during different environmental and excitation conditions.

  10. Realizing parameterless automatic classification of remote sensing imagery using ontology engineering and cyberinfrastructure techniques

    NASA Astrophysics Data System (ADS)

    Sun, Ziheng; Fang, Hui; Di, Liping; Yue, Peng

    2016-09-01

    Fully automatic image classification, without inputting any parameter values, has long been an unattainable dream for remote sensing experts. Experts usually spend hours tuning the input parameters of classification algorithms in order to obtain the best results. With the rapid development of knowledge engineering and cyberinfrastructure, many data processing and knowledge reasoning capabilities have become accessible, shareable and interoperable online. Based on these recent improvements, this paper presents an idea of parameterless automatic classification which only requires an image and automatically outputs a labeled vector. No parameters and operations are needed from endpoint consumers. An approach is proposed to realize the idea. It adopts an ontology database to store the experiences of tuning values for classifiers. A sample database is used to record training samples of image segments. Geoprocessing Web services are used as functionality blocks to finish basic classification steps. Workflow technology is involved to turn the overall image classification into a fully automatic process. A Web-based prototypical system named PACS (Parameterless Automatic Classification System) is implemented. A number of images are fed into the system for evaluation purposes. The results show that the approach can automatically classify remote sensing images with fairly good average accuracy. The classified results will be more accurate if the two databases have higher quality. Once the databases accumulate as much experience and as many samples as a human expert possesses, the approach should be able to produce results of similar quality to those of an expert. Since the approach is fully automatic and parameterless, it can not only relieve remote sensing workers from heavy and time-consuming parameter tuning, but also significantly shorten the waiting time for consumers and make it easier for them to engage in image classification activities. Currently, the approach is used only on high-resolution optical three-band remote sensing imagery. The feasibility of using the approach on other kinds of remote sensing images, or of involving additional bands in classification, will be studied in future work.

  11. Incorporating Learning Characteristics into Automatic Essay Scoring Models: What Individual Differences and Linguistic Features Tell Us about Writing Quality

    ERIC Educational Resources Information Center

    Crossley, Scott A.; Allen, Laura K.; Snow, Erica L.; McNamara, Danielle S.

    2016-01-01

    This study investigates a novel approach to automatically assessing essay quality that combines natural language processing approaches that assess text features with approaches that assess individual differences in writers such as demographic information, standardized test scores, and survey results. The results demonstrate that combining text…

  12. 14 CFR 170.3 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... (i.e., ATCT) divided by the discounted life cycle costs. Ceiling means the vertical distance between... landing system (ILS) means an instrument landing system whereby the pilot guides his approach to a runway... ground can be fed into the automatic pilot for automatically controlled approaches. Instrument...

  13. Automatically Preparing Safe SQL Queries

    NASA Astrophysics Data System (ADS)

    Bisht, Prithvi; Sistla, A. Prasad; Venkatakrishnan, V. N.

    We present the first sound program source transformation approach for automatically transforming the code of a legacy web application to employ PREPARE statements in place of unsafe SQL queries. Our approach therefore opens the way for eradicating the SQL injection threat vector from legacy web applications.
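
    The transformation's before/after semantics can be illustrated in any language with parameterized queries; this Python sqlite3 sketch (the paper itself targets legacy web application code) shows why the PREPARE-style form defeats a classic injection payload.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
      conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

      user_input = "alice' OR '1'='1"  # classic injection payload

      # Unsafe: concatenation lets the payload rewrite the query logic.
      unsafe = "SELECT role FROM users WHERE name = '" + user_input + "'"
      print(conn.execute(unsafe).fetchall())  # [('admin',)] -- injection works

      # Safe: the parameterized form treats the payload as a literal value.
      safe = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,))
      print(safe.fetchall())                  # [] -- no user has that name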

  14. Gait analysis--precise, rapid, automatic, 3-D position and orientation kinematics and dynamics.

    PubMed

    Mann, R W; Antonsson, E K

    1983-01-01

    A fully automatic optoelectronic photogrammetric technique is presented for measuring the spatial kinematics of human motion (both position and orientation) and estimating the inertial (net) dynamics. Calibration and verification showed that in a two-meter cube viewing volume, the system achieves one millimeter of accuracy and resolution in translation and 20 milliradians in rotation. Since double differentiation of generalized position data to determine accelerations amplifies noise, the frequency domain characteristics of the system were investigated. It was found that the noise and all other errors in the kinematic data contribute less than five percent error to the resulting dynamics.
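
    The noise-amplification point is easy to demonstrate numerically: each differentiation multiplies a noise component of angular frequency w by w, so acceleration noise grows as w squared. A small sketch follows; the sampling rate and noise level are illustrative, not the paper's values.

      import numpy as np

      dt = 0.01                              # 100 Hz sampling (illustrative)
      t = np.arange(0, 2, dt)
      true_pos = np.sin(2 * np.pi * t)       # 1 Hz movement
      noise = np.random.default_rng(0).normal(0, 0.001, t.size)  # mm-scale noise

      acc_true = np.gradient(np.gradient(true_pos, dt), dt)
      acc_noisy = np.gradient(np.gradient(true_pos + noise, dt), dt)
      rms_err = np.sqrt(np.mean((acc_noisy - acc_true) ** 2))
      # The 0.001-sd position noise is amplified by roughly three orders of
      # magnitude, a substantial fraction of the true peak acceleration (~39.5),
      # which is why the system's frequency-domain behavior was characterized.
      print(round(rms_err, 1))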

  15. Online fully automated three-dimensional surface reconstruction of unknown objects

    NASA Astrophysics Data System (ADS)

    Khalfaoui, Souhaiel; Aigueperse, Antoine; Fougerolle, Yohan; Seulin, Ralph; Fofi, David

    2015-04-01

    This paper presents a novel scheme for automatic and intelligent 3D digitization using robotic cells. The advantage of our procedure is that it is generic, since it is not tied to a specific scanning technology. Moreover, it does not depend on the methods used to perform the tasks associated with each elementary process. The comparison of results between manual and automatic scanning of complex objects shows that our digitization strategy is very efficient and faster than trained experts. The 3D models of the different objects are obtained with a strongly reduced number of acquisitions while moving the ranging device efficiently.

  16. MICRA: an automatic pipeline for fast characterization of microbial genomes from high-throughput sequencing data.

    PubMed

    Caboche, Ségolène; Even, Gaël; Loywick, Alexandre; Audebert, Christophe; Hot, David

    2017-12-19

    The increase in available sequence data has advanced the field of microbiology; however, making sense of these data without bioinformatics skills is still problematic. We describe MICRA, an automatic pipeline, available as a web interface, for microbial identification and characterization through reads analysis. MICRA uses iterative mapping against reference genomes to identify genes and variations. Additional modules allow prediction of antibiotic susceptibility and resistance and comparing the results of several samples. MICRA is fast, producing few false-positive annotations and variant calls compared to current methods, making it a tool of great interest for fully exploiting sequencing data.

  17. A simulation evaluation of a pilot interface with an automatic terminal approach system

    NASA Technical Reports Server (NTRS)

    Hinton, David A.

    1987-01-01

    The pilot-machine interface with cockpit automation is a critical factor in achieving the benefits of automation and reducing pilot blunders. To improve this interface, an automatic terminal approach system (ATAS) was conceived that can automatically fly a published instrument approach by using stored instrument approach data to automatically tune airplane radios and control an airplane autopilot and autothrottle. The emphasis in the ATAS concept is a reduction in pilot blunders and work load by improving the pilot-automation interface. A research prototype of an ATAS was developed and installed in the Langley General Aviation Simulator. A piloted simulation study of the ATAS concept showed fewer pilot blunders, but no significant change in work load, when compared with a baseline heading-select autopilot mode. With the baseline autopilot, pilot blunders tended to involve loss of navigational situational awareness or instrument misinterpretation. With the ATAS, pilot blunders tended to involve a lack of awareness of the current ATAS mode state or deficiencies in the pilots' mental model of how the system operated. The ATAS display provided adequate approach status data to maintain situational awareness.

  18. Emotion and sex of facial stimuli modulate conditional automaticity in behavioral and neuronal interference in healthy men.

    PubMed

    Kohn, Nils; Fernández, Guillén

    2017-12-06

    Our surroundings provide a host of sensory input, which we cannot fully process without streamlining and automatic processing. Levels of automaticity differ for different cognitive and affective processes. Situational and contextual interactions between cognitive and affective processes in turn influence the level of automaticity. Automaticity can be measured by interference in Stroop tasks. We applied an emotional version of the Stroop task to investigate how stress, as a contextual factor, influences the affective valence-dependent level of automaticity. 120 young, healthy men were investigated for behavioral and brain interference following a stress induction or control procedure in a counter-balanced cross-over design. Although Stroop interference was always observed, the sex and emotion of the face strongly modulated interference, which was larger for fearful and male faces. These effects suggest higher automaticity when processing happy and also female faces. Supporting the behavioral patterns, brain data show lower interference-related brain activity in executive-control-related regions in response to happy and female faces. In the absence of behavioral stress effects, congruent compared to incongruent trials (reverse interference) showed little to no deactivation under stress in response to happy female and fearful male trials. These congruency effects are potentially based on altered stress-context-related facial processing that interacts with sex-emotion stereotypes. The results indicate that sex and facial emotion modulate Stroop interference in brain and behavior. These effects can be explained by altered response difficulty as a consequence of the contextual and stereotype-related modulation of automaticity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. The personal aircraft: Status and issues

    NASA Technical Reports Server (NTRS)

    Anders, Scott G.; Asbury, Scott C.; Brentner, Kenneth S.; Bushnell, Dennis M.; Glass, Christopher E.; Hodges, William T.; Morris, Shelby J., Jr.; Scott, Michael A.

    1994-01-01

    This paper summarizes the status of personal air transportation, with emphasis upon VTOL and converticar capability. The former obviates the need for airport operations for personal aircraft, whereas the latter provides both ground and air capability in the same vehicle. Fully automatic operation, ATC, and navigation are stressed, along with consideration of acoustic, environmental, and cost issues.

  20. Automation of a laboratory particleboard press

    Treesearch

    Robert L. Geimer; Gordon H. Stevens; Richard E. Kinney

    1982-01-01

    A manually operated particleboard press was converted to a fully automatic, programable system with updated data collection capabilities. Improved control has permitted observations of very small changes in pressing variables resulting in the development of a technique capable of reducing press times by 70 percent. Accurate control of the press is obtained through an...

  1. Avatars, Virtual Reality Technology, and the U.S. Military: Emerging Policy Issues

    DTIC Science & Technology

    2008-04-09

    called “Sentient Worldwide Simulation,” which will “mirror” real life and automatically follow real-world events in real time. Some virtual world...cities, with the final goal of creating a fully functioning virtual model of the entire world, which will be known as the Sentient Worldwide Simulation

  2. Development of German-English Machine Translation System. Final Technical Report.

    ERIC Educational Resources Information Center

    Lehmann, Winfred P.; Stachowitz, Rolf A.

    This report describes work on a pilot system for a fully automatic, high-quality translation of German scientific and technical text into English and gives the results of an experiment designed to show the system's capability to produce quality mechanical translation. The areas considered were: (1) grammar formalism, mainly involving the addition…

  3. Buried in the Warm, Warm Ground

    ERIC Educational Resources Information Center

    Ellis-Tipton, John

    2006-01-01

    Buntingsdale Infant School in Shropshire has installed an environmentally friendly heating system. The school's heating system is called a Ground Source Heat Pump (GSHP). Buntingsdale, a three-classroom infant school in a wooden demountable building, is one of the first schools in Britain to use this system. The system is fully automatic: it is…

  4. 76 FR 63635 - Extension of the Designation of Sudan for Temporary Protected Status and Automatic Extension of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-13

    ... Front. A number of issues have not been fully addressed, however, including growing poverty, economic... intercommunal violence caused civilian deaths, continued displacement of the population, and general instability... environmental and economic factors, have created one of the worst humanitarian crises in the world. Despite...

  5. 44 CFR 60.3 - Flood plain management criteria for flood-prone areas.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... improvements, that fully enclosed areas below the lowest floor that are usable solely for parking of vehicles... that they permit the automatic entry and exit of floodwaters. (6) Require that manufactured homes that... building standards. Such enclosed space shall be useable solely for parking of vehicles, building access...

  6. 44 CFR 60.3 - Flood plain management criteria for flood-prone areas.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... improvements, that fully enclosed areas below the lowest floor that are usable solely for parking of vehicles... that they permit the automatic entry and exit of floodwaters. (6) Require that manufactured homes that... building standards. Such enclosed space shall be useable solely for parking of vehicles, building access...

  7. 30 CFR 75.1909 - Nonpermissible diesel-powered equipment; design and performance requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... rail-mounted equipment, must be provided with a parking brake that holds the fully loaded equipment... work platforms must be provided with a means to ensure that the parking braking system is released... requirements of § 75.1908(a) must be provided with an automatic fire suppression system meeting the...

  8. 30 CFR 75.1909 - Nonpermissible diesel-powered equipment; design and performance requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... rail-mounted equipment, must be provided with a parking brake that holds the fully loaded equipment... work platforms must be provided with a means to ensure that the parking braking system is released... requirements of § 75.1908(a) must be provided with an automatic fire suppression system meeting the...

  9. Automatic detection of cone photoreceptors in split detector adaptive optics scanning light ophthalmoscope images.

    PubMed

    Cunefare, David; Cooper, Robert F; Higgins, Brian; Katz, David F; Dubra, Alfredo; Carroll, Joseph; Farsiu, Sina

    2016-05-01

    Quantitative analysis of the cone photoreceptor mosaic in the living retina is potentially useful for early diagnosis and prognosis of many ocular diseases. Non-confocal split detector based adaptive optics scanning light ophthalmoscope (AOSLO) imaging reveals the cone photoreceptor inner segment mosaics often not visualized on confocal AOSLO imaging. Despite recent advances in automated cone segmentation algorithms for confocal AOSLO imagery, quantitative analysis of split detector AOSLO images is currently a time-consuming manual process. In this paper, we present the fully automatic adaptive filtering and local detection (AFLD) method for detecting cones in split detector AOSLO images. We validated our algorithm on 80 images from 10 subjects, showing an overall mean Dice's coefficient of 0.95 (standard deviation 0.03), when comparing our AFLD algorithm to an expert grader. This is comparable to the inter-observer Dice's coefficient of 0.94 (standard deviation 0.04). To the best of our knowledge, this is the first validated, fully-automated segmentation method which has been applied to split detector AOSLO images.
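
    As a rough illustration of the local-detection idea (not the authors' AFLD algorithm), the sketch below smooths an image and keeps bright local maxima as cone candidates; the function name, neighborhood size, and threshold are assumptions.

        import numpy as np
        from scipy import ndimage

        def detect_cone_candidates(image, sigma=2.0, min_distance=3, rel_threshold=0.3):
            """Smooth, then keep pixels that are local maxima above a relative threshold."""
            smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
            footprint = np.ones((2 * min_distance + 1,) * 2)
            is_peak = smoothed == ndimage.maximum_filter(smoothed, footprint=footprint)
            is_peak &= smoothed > rel_threshold * smoothed.max()
            return np.argwhere(is_peak)  # (row, col) coordinates of candidate cones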

  10. Automatic cerebrospinal fluid segmentation in non-contrast CT images using a 3D convolutional network

    NASA Astrophysics Data System (ADS)

    Patel, Ajay; van de Leemput, Sil C.; Prokop, Mathias; van Ginneken, Bram; Manniesing, Rashindra

    2017-03-01

    Segmentation of anatomical structures is fundamental in the development of computer aided diagnosis systems for cerebral pathologies. Manual annotations are laborious, time consuming and subject to human error and observer variability. Accurate quantification of cerebrospinal fluid (CSF) can be employed as a morphometric measure for diagnosis and patient outcome prediction. However, segmenting CSF in non-contrast CT images is complicated by low soft tissue contrast and image noise. In this paper we propose a state-of-the-art method using a multi-scale three-dimensional (3D) fully convolutional neural network (CNN) to automatically segment all CSF within the cranial cavity. The method is trained on a small dataset comprised of four manually annotated cerebral CT images. Quantitative evaluation of a separate test dataset of four images shows a mean Dice similarity coefficient of 0.87 +/- 0.01 and mean absolute volume difference of 4.77 +/- 2.70 %. The average prediction time was 68 seconds. Our method allows for fast and fully automated 3D segmentation of cerebral CSF in non-contrast CT, and shows promising results despite a limited amount of training data.
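
    To make the multi-scale 3D architecture concrete, here is a minimal PyTorch sketch of a two-branch fully convolutional network producing voxel-wise CSF/background logits; the layer sizes are assumptions, not the authors' configuration.

        import torch
        import torch.nn as nn

        class TinyCSFNet(nn.Module):
            """Two parallel 3D branches at different receptive-field scales, fused by a 1x1x1 head."""
            def __init__(self):
                super().__init__()
                self.fine = nn.Sequential(
                    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv3d(16, 16, 3, padding=1), nn.ReLU())
                # Dilated convolutions widen the receptive field without pooling.
                self.coarse = nn.Sequential(
                    nn.Conv3d(1, 16, 3, padding=2, dilation=2), nn.ReLU())
                self.head = nn.Conv3d(32, 2, 1)  # background / CSF logits

            def forward(self, x):  # x: (batch, 1, depth, height, width)
                return self.head(torch.cat([self.fine(x), self.coarse(x)], dim=1))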

  11. Automatic approach to deriving fuzzy slope positions

    NASA Astrophysics Data System (ADS)

    Zhu, Liang-Jun; Zhu, A.-Xing; Qin, Cheng-Zhi; Liu, Jun-Zhi

    2018-03-01

    Fuzzy characterization of slope positions is important for geographic modeling. Most of the existing fuzzy classification-based methods for fuzzy characterization require extensive user intervention in data preparation and parameter setting, which is tedious and time-consuming. This paper presents an automatic approach to overcoming these limitations in the prototype-based inference method for deriving fuzzy membership value (or similarity) to slope positions. The key contribution is a procedure for finding the typical locations and setting the fuzzy inference parameters for each slope position type. Instead of being determined totally by users in the prototype-based inference method, in the proposed approach the typical locations and fuzzy inference parameters for each slope position type are automatically determined by a rule set based on prior domain knowledge and the frequency distributions of topographic attributes. Furthermore, the preparation of topographic attributes (e.g., slope gradient, curvature, and relative position index) is automated, so the proposed automatic approach has only one necessary input, i.e., the gridded digital elevation model of the study area. All compute-intensive algorithms in the proposed approach were speeded up by parallel computing. Two study cases were provided to demonstrate that this approach can properly, conveniently and quickly derive the fuzzy slope positions.
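
    As a minimal sketch of prototype-based fuzzy inference (the prototype values and widths below are illustrative, not the paper's rule set), membership can be computed as a Gaussian-style similarity to a typical location and combined across attributes with a minimum operator:

        import numpy as np

        def membership(attr, prototype, width):
            """Similarity of a terrain attribute to a slope-position prototype (1 at the prototype)."""
            return np.exp(-0.5 * ((attr - prototype) / width) ** 2)

        # Example: fuzzy membership to a "ridge" class from slope gradient (degrees)
        # and a relative position index in [0, 1]; all parameter values are assumptions.
        slope, rpi = 3.0, 0.95
        ridge = min(membership(slope, 2.0, 4.0), membership(rpi, 1.0, 0.15))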

  12. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper, a novel fully automated 3D reconstruction approach based on images from low-altitude unmanned aerial vehicle systems (UAVs) is presented, which requires neither prior camera calibration nor any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map significantly reduces the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangulated irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information in large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
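
    A hedged sketch of the topology idea: if flight-control logs give approximate camera positions, feature matching can be limited to images taken near each other. The radius parameter and array layout are assumptions.

        import numpy as np
        from scipy.spatial import cKDTree

        def candidate_pairs(camera_positions, radius):
            """Return (i, j) image index pairs whose logged positions lie within `radius`."""
            tree = cKDTree(np.asarray(camera_positions))  # shape (n_images, 2 or 3)
            return sorted(tree.query_pairs(r=radius))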

  13. Automated EEG artifact elimination by applying machine learning algorithms to ICA-based features

    NASA Astrophysics Data System (ADS)

    Radüntz, Thea; Scouten, Jon; Hochmuth, Olaf; Meffert, Beate

    2017-08-01

    Objective. Biological and non-biological artifacts cause severe problems when dealing with electroencephalogram (EEG) recordings. Independent component analysis (ICA) is a widely used method for eliminating various artifacts from recordings. However, evaluating and classifying the calculated independent components (ICs) as artifact or EEG is not fully automated at present. Approach. In this study, we propose a new approach for automated artifact elimination, which applies machine learning algorithms to ICA-based features. Main results. We compared the performance of our classifiers with the visual classification results given by experts. The best result, with an accuracy rate of 95%, was achieved using features obtained by range filtering of the topoplots and IC power spectra combined with an artificial neural network. Significance. Compared with existing automated solutions, our proposed method is not limited to specific types of artifacts, electrode configurations, or numbers of EEG channels. The main advantage of the proposed method is that it provides an automatic, reliable, real-time capable, and practical tool, which avoids the need for the time-consuming manual selection of ICs during artifact removal.
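
    A minimal sketch of the pipeline shape (ICA decomposition, per-component features, supervised classifier); the toy features below merely stand in for the paper's range-filtered topoplot and spectral features, and the component labels are assumed to come from expert annotation.

        import numpy as np
        from sklearn.decomposition import FastICA
        from sklearn.neural_network import MLPClassifier

        def ic_features(eeg, n_components=20):
            """eeg: (n_samples, n_channels). Returns one toy feature row per component."""
            sources = FastICA(n_components=n_components, random_state=0).fit_transform(eeg)
            value_range = sources.max(axis=0) - sources.min(axis=0)
            spectral_peak = np.abs(np.fft.rfft(sources, axis=0)).argmax(axis=0)
            return np.column_stack([value_range, spectral_peak])

        # clf = MLPClassifier(hidden_layer_sizes=(10,)).fit(ic_features(eeg), labels)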

  14. The Automatic Conservative: Ideology-Based Attentional Asymmetries in the Processing of Valenced Information

    PubMed Central

    Carraro, Luciana; Castelli, Luigi; Macchiella, Claudia

    2011-01-01

    Research has widely explored the differences between conservatives and liberals, and it has been also recently demonstrated that conservatives display different reactions toward valenced stimuli. However, previous studies have not yet fully illuminated the cognitive underpinnings of these differences. In the current work, we argued that political ideology is related to selective attention processes, so that negative stimuli are more likely to automatically grab the attention of conservatives as compared to liberals. In Experiment 1, we demonstrated that negative (vs. positive) information impaired the performance of conservatives, more than liberals, in an Emotional Stroop Task. This finding was confirmed in Experiment 2 and in Experiment 3 employing a Dot-Probe Task, demonstrating that threatening stimuli were more likely to attract the attention of conservatives. Overall, results support the conclusion that people embracing conservative views of the world display an automatic selective attention for negative stimuli. PMID:22096486

  15. Automated feature detection and identification in digital point-ordered signals

    DOEpatents

    Oppenlander, Jane E.; Loomis, Kent C.; Brudnoy, David M.; Levy, Arthur J.

    1998-01-01

    A computer-based automated method to detect and identify features in digital point-ordered signals. The method is used for processing of non-destructive test signals, such as eddy current signals obtained from calibration standards. The signals are first automatically processed to remove noise and to determine a baseline. Next, features are detected in the signals using mathematical morphology filters. Finally, the features are verified using an expert system of pattern recognition methods and geometric criteria. The method has the advantage that standard features can be located without prior knowledge of the number or sequence of the features. Further advantages are that standard features can be differentiated from irrelevant signal features such as noise, and detected features are automatically verified by parameters extracted from the signals. The method proceeds fully automatically, without initial operator set-up and without subjective operator feature judgement.
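
    A small sketch in the spirit of the morphology-based detection described (baseline removal, then feature detection); the structuring-element size and threshold are assumptions.

        import numpy as np
        from scipy.ndimage import grey_opening

        def detect_features(signal, structure_size=25, threshold=0.1):
            """Top-hat style detection: opening estimates the baseline, the residual keeps narrow peaks."""
            baseline = grey_opening(signal, size=structure_size)
            residual = np.asarray(signal, float) - baseline
            return np.where(residual > threshold)[0]  # indices of detected feature points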

  16. Optimization and automation of quantitative NMR data extraction.

    PubMed

    Bernstein, Michael A; Sýkora, Stan; Peng, Chen; Barba, Agustín; Cobas, Carlos

    2013-06-18

    NMR is routinely used to quantitate chemical species. The necessary experimental procedures to acquire quantitative data are well known, but relatively little attention has been applied to data processing and analysis. We describe here a robust expert system that can be used to automatically choose the best signals in a sample for overall concentration determination and determine analyte concentration using all accepted methods. The algorithm is based on the complete deconvolution of the spectrum, which makes it tolerant of cases where signals are very close to one another, and includes robust methods for the automatic classification of NMR resonances and molecule-to-spectrum multiplet assignments. With the functionality in place and optimized, it is then a relatively simple matter to apply the same workflow to data in a fully automatic way. The procedure is desirable for both its inherent performance and applicability to NMR data acquired for very large sample sets.

  17. An automatic rat brain extraction method based on a deformable surface model.

    PubMed

    Li, Jiehua; Liu, Xiaofeng; Zhuo, Jiachen; Gullapalli, Rao P; Zara, Jason M

    2013-08-15

    The extraction of the brain from the skull in medical images is a necessary first step before image registration or segmentation. While pre-clinical MR imaging studies on small animals, such as rats, are increasing, fully automatic image processing techniques specific to small animal studies remain lacking. In this paper, we present an automatic rat brain extraction method, the Rat Brain Deformable model method (RBD), which adapts the popular human Brain Extraction Tool (BET) through the incorporation of information on the brain geometry and MR image characteristics of the rat brain. The robustness of the method was demonstrated on T2-weighted MR images of 64 rats and compared with other brain extraction methods (BET, PCNN, PCNN-3D). The results demonstrate that RBD reliably extracts the rat brain with high accuracy (>92% volume overlap) and is robust against signal inhomogeneity in the images. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Real-time control of focused ultrasound heating based on rapid MR thermometry.

    PubMed

    Vimeux, F C; De Zwart, J A; Palussiére, J; Fawaz, R; Delalande, C; Canioni, P; Grenier, N; Moonen, C T

    1999-03-01

    Real-time control of the heating procedure is essential for hyperthermia applications of focused ultrasound (FUS). The objective of this study is to demonstrate the feasibility of MRI-controlled FUS. An automatic control system was developed using a dedicated interface between the MR system control computer and the FUS wave generator. Two algorithms were used to regulate FUS power to maintain the focal point temperature at a desired level. Automatic control of the FUS power level was demonstrated ex vivo at three target temperature levels (increases of 5 degrees C, 10 degrees C, and 30 degrees C above room temperature) during 30-minute hyperthermic periods. Preliminary in vivo results on rat leg muscle confirm that the necrosis estimate, calculated online during FUS sonication, allows prediction of tissue damage. In conclusion, the feasibility of fully automatic FUS control based on MRI thermometry has been demonstrated.
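
    As a generic illustration of such a regulation loop (not the paper's two specific algorithms), one proportional-control step might look as follows; read_temp and set_power are hypothetical callables to the MR thermometry readout and the wave generator.

        def regulate_step(target_temp, read_temp, set_power, kp=0.5, max_power=100.0):
            """Raise power in proportion to the temperature error, clipped to safe limits."""
            error = target_temp - read_temp()
            power = min(max(kp * error, 0.0), max_power)
            set_power(power)
            return error  # useful for logging and stopping criteria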

  19. A two-dimensional air-to-air combat game - Toward an air-combat advisory system

    NASA Technical Reports Server (NTRS)

    Neuman, Frank

    1987-01-01

    Air-to-air combat is modeled as a discrete differential game, and by constraining the game to searching for the best guidance laws from the sets of those considered for each opponent, feedback and outcome charts are obtained which can be used to turn one of the automatic opponents into an intelligent opponent against a human pilot. A one-on-one two-dimensional fully automatic, or manned versus automatic, air-to-air combat game has been designed which includes both attack and evasion alternatives for both aircraft. Guidance law selection occurs by flooding the initial-condition space with four simulated fights for each initial condition, depicting the various attack/evasion strategies for the two opponents, and recording the outcomes. For each initial condition, the minimax method from differential games is employed to determine the best choice from the available strategies.

  20. Automatization of hydrodynamic modelling in a Floreon+ system

    NASA Astrophysics Data System (ADS)

    Ronovsky, Ales; Kuchar, Stepan; Podhoranyi, Michal; Vojtek, David

    2017-07-01

    The paper describes fully automated hydrodynamic modelling as part of the Floreon+ system. The main purpose of hydrodynamic modelling in disaster management is to provide an accurate overview of the hydrological situation in a given river catchment. Automation of the process as a web service can provide immediate data based on extreme weather conditions, such as heavy rainfall, without the intervention of an expert. Such a service can be used by non-scientific users such as fire-fighter operators or representatives of a military service organizing evacuation during floods or river dam breaks. The paper describes the whole process, beginning with the definition of a schematization necessary for the hydrodynamic model, the gathering of necessary data and its processing for a simulation, the model itself, and the post-processing and visualization of results on a web service. The process is demonstrated on real data collected during the 2010 floods in the Moravian-Silesian region.

  1. PRESBYOPIA OPTOMETRY METHOD BASED ON DIOPTER REGULATION AND CHARGE COUPLE DEVICE IMAGING TECHNOLOGY.

    PubMed

    Zhao, Q; Wu, X X; Zhou, J; Wang, X; Liu, R F; Gao, J

    2015-01-01

    With the development of photoelectric technology and single-chip microcomputer technology, objective optometry, also known as automatic optometry, is becoming increasingly precise. This paper proposes a presbyopia optometry method based on diopter regulation and Charge Couple Device (CCD) imaging technology and designs a light path for the measurement system. The method projects a test figure onto the eye ground, and the image reflected from the eye ground is detected by the CCD. The image is then automatically identified by computer, and the far-point and near-point diopters are determined to calculate the lens parameter. This is a fully automatic objective optometry method which eliminates subjective factors of the tested subject. Furthermore, it can acquire the lens parameter of presbyopia accurately and quickly and can be used to measure the lens parameters of hyperopia, myopia and astigmatism.

  2. Vessel extraction in retinal images using automatic thresholding and Gabor Wavelet.

    PubMed

    Ali, Aziah; Hussain, Aini; Wan Zaki, Wan Mimi Diyana

    2017-07-01

    Retinal image analysis has been widely used for early detection and diagnosis of multiple systemic diseases. Accurate vessel extraction in retinal images is a crucial step towards a fully automated diagnosis system. This work offers an efficient unsupervised method for extracting blood vessels from retinal images by combining the existing Gabor Wavelet (GW) method with automatic thresholding. The green channel is extracted from the color retinal image and used to produce a Gabor feature image using the GW. Both the green channel image and the Gabor feature image undergo a vessel-enhancement step in order to highlight blood vessels. Next, the two vessel-enhanced images are transformed to binary images using automatic thresholding before being combined to produce the final vessel output. Combining the images results in a significant improvement in blood vessel extraction performance compared to using either image alone. The effectiveness of the proposed method was proven via comparative analysis with existing methods, validated on the publicly available DRIVE database.
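
    A compressed sketch of the combination described, for a single Gabor orientation (a full implementation would sweep orientations and fuse with the enhanced green channel); the frequency value is an assumption.

        import numpy as np
        from skimage.filters import gabor, threshold_otsu

        def vessel_mask(green_channel, frequency=0.1):
            """Gabor magnitude response followed by automatic (Otsu) thresholding."""
            real, imag = gabor(green_channel, frequency=frequency)
            magnitude = np.hypot(real, imag)
            return magnitude > threshold_otsu(magnitude)  # binary vessel candidates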

  3. Flight investigation of manual and automatic VTOL decelerating instrument approaches and landings

    NASA Technical Reports Server (NTRS)

    Kelly, J. R.; Niessen, F. R.; Thibodeaux, J. J.; Yenni, K. R.; Garren, J. F., Jr.

    1974-01-01

    A flight investigation was undertaken to study the problems associated with manual and automatic control of steep, decelerating instrument approaches and landings under simulated instrument conditions. The study was conducted with a research helicopter equipped with a three-cue flight-director indicator. The scope of the investigation included variations in the flight-director control laws, glide-path angle, deceleration profile, and control response characteristics. Investigation of the automatic-control problem resulted in the first automated approach and landing to a predetermined spot ever accomplished with a helicopter. Although well-controlled approaches and landings could be performed manually with the flight-director concept, pilot comments indicated the need for a better display which would more effectively integrate command and situation information.

  4. University of Glasgow at TREC 2008: Experiments in Blog, Enterprise, and Relevance Feedback Tracks with Terrier

    DTIC Science & Technology

    2008-11-01

    ...detecting opinionated documents. The first approach improves our TREC 2007 dictionary-based approach by automatically building an internal opinion dictionary from the collection itself. The second approach is based on the OpinionFinder tool, which identifies subjective sentences in text. In particular

  5. A semi-automatic annotation tool for cooking video

    NASA Astrophysics Data System (ADS)

    Bianco, Simone; Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo; Margherita, Roberto; Marini, Gianluca; Gianforme, Giorgio; Pantaleo, Giuseppe

    2013-03-01

    In order to create a cooking assistant application to guide users in the preparation of dishes relevant to their profile diets and food preferences, it is necessary to accurately annotate the video recipes, identifying and tracking the foods handled by the cook. These videos present particular annotation challenges, such as frequent occlusions and food appearance changes. Manually annotating the videos is a time-consuming, tedious and error-prone task. Fully automatic tools that integrate computer vision algorithms to extract and identify the elements of interest are not error free, and false positive and false negative detections need to be corrected in a post-processing stage. We present an interactive, semi-automatic tool for the annotation of cooking videos that integrates computer vision techniques under the supervision of the user. The annotation accuracy is increased with respect to completely automatic tools, and the human effort is reduced with respect to completely manual ones. The performance and usability of the proposed tool are evaluated on the basis of the time and effort required to annotate the same video sequences.

  6. Strategies for automatic processing of large aftershock sequences

    NASA Astrophysics Data System (ADS)

    Kvaerna, T.; Gibbons, S. J.

    2017-12-01

    Aftershock sequences following major earthquakes present great challenges to seismic bulletin generation. The analyst resources needed to locate events increase with increased event numbers as the quality of underlying, fully automatic, event lists deteriorates. While current pipelines, designed a generation ago, are usually limited to single passes over the raw data, modern systems also allow multiple passes. Processing the raw data from each station currently generates parametric data streams that are later subject to phase-association algorithms which form event hypotheses. We consider a major earthquake scenario and propose to define a region of likely aftershock activity in which we will detect and accurately locate events using a separate, specially targeted, semi-automatic process. This effort may use either pattern detectors or more general algorithms that cover wider source regions without requiring waveform similarity. An iterative procedure to generate automatic bulletins would incorporate all the aftershock event hypotheses generated by the auxiliary process, and filter all phases from these events from the original detection lists prior to a new iteration of the global phase-association algorithm.

  7. Analysis of Technique to Extract Data from the Web for Improved Performance

    NASA Astrophysics Data System (ADS)

    Gupta, Neena; Singh, Manish

    2010-11-01

    The World Wide Web is rapidly guiding the world into an amazing new electronic world, where everyone can publish anything in electronic form and extract almost all the information. Extraction of information from semi-structured or unstructured documents, such as web pages, is a useful yet complex task. Data extraction, which is important for many applications, extracts records from HTML files automatically. Ontologies can achieve a high degree of accuracy in data extraction. We analyze a method for data extraction, OBDE (Ontology-Based Data Extraction), which automatically extracts the query result records from the web with the help of agents. OBDE first constructs an ontology for a domain according to information matching between the query interfaces and query result pages from different web sites within the same domain. Then, the constructed domain ontology is used during data extraction to identify the query result section in a query result page and to align and label the data values in the extracted records. The ontology-assisted data extraction method is fully automatic and overcomes many of the deficiencies of current automatic data extraction methods.

  8. A characterization of Parkinson's disease by describing the visual field motion during gait

    NASA Astrophysics Data System (ADS)

    Trujillo, David; Martínez, Fabio; Atehortúa, Angélica; Alvarez, Charlens; Romero, Eduardo

    2015-12-01

    An early diagnosis of Parkinson's Disease (PD) is crucial for devising successful rehabilitation programs. Typically, the PD diagnosis is performed by characterizing typical symptoms, namely bradykinesia, rigidity, tremor, postural instability or freezing of gait. However, traditional examination tests are usually incapable of detecting slight motor changes, especially in early stages of the pathology. Recently, eye movement abnormalities have been correlated with the early onset of some neurodegenerative disorders. This work introduces a new characterization of Parkinson's disease by describing ocular motion during a common daily activity, gait. This paper proposes a fully automatic eye motion analysis using a dense optical flow that tracks the ocular direction. The eye motion is then summarized using orientation histograms constructed over a whole gait cycle. The proposed approach was evaluated by measuring the χ2 distance between the orientation histograms, showing substantial differences between control subjects and PD patients.
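
    The histogram comparison reduces to a few lines; this is the standard χ2 distance between orientation histograms, with a small epsilon added to guard against empty bins.

        import numpy as np

        def chi2_distance(h1, h2, eps=1e-10):
            """Chi-squared distance between two (ideally normalized) histograms."""
            h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
            return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))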

  9. Sampling-free Bayesian inversion with adaptive hierarchical tensor representations

    NASA Astrophysics Data System (ADS)

    Eigel, Martin; Marschall, Manuel; Schneider, Reinhold

    2018-03-01

    A sampling-free approach to Bayesian inversion with an explicit polynomial representation of the parameter densities is developed, based on an affine-parametric representation of a linear forward model. This becomes feasible due to the complete treatment in function spaces, which requires an efficient model reduction technique for numerical computations. The advocated perspective yields the crucial benefit that error bounds can be derived for all occurring approximations, leading to provable convergence subject to the discretization parameters. Moreover, it enables a fully adaptive a posteriori control with automatic problem-dependent adjustments of the employed discretizations. The method is discussed in the context of modern hierarchical tensor representations, which are used for the evaluation of a random PDE (the forward model) and the subsequent high-dimensional quadrature of the log-likelihood, alleviating the ‘curse of dimensionality’. Numerical experiments demonstrate the performance and confirm the theoretical results.

  10. Optimized graph-based mosaicking for virtual microscopy

    NASA Astrophysics Data System (ADS)

    Steckhan, Dirk G.; Wittenberg, Thomas

    2009-02-01

    Virtual microscopy has the potential to partially replace traditional microscopy. For virtualization, the slide is scanned once by a fully automated robotic microscope and saved digitally. Typically, such a scan results in several hundred to thousands of fields of view. Since robotic stages have positioning errors, these fields of view have to be registered locally and globally in an additional step. In this work we propose a new global mosaicking method for the creation of virtual slides, based on sub-pixel exact phase correlation for local alignment in combination with Prim's minimum spanning tree algorithm for global alignment. Our algorithm allows for a robust reproduction of the original slide even in the presence of views with little to no information content. This makes it especially suitable for the mosaicking of cervical smears. These smears often exhibit large empty areas, which do not contain enough information for common stitching approaches.
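
    A minimal sketch of the local-alignment step (integer-pixel phase correlation; the sub-pixel refinement and Prim's global step are omitted); overlapping, equally sized views are assumed.

        import numpy as np

        def phase_correlation_shift(a, b):
            """Estimate the translation of view b relative to view a via the cross-power spectrum."""
            cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
            corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
            dy, dx = np.unravel_index(corr.argmax(), corr.shape)
            # Peaks past the half-range correspond to negative shifts.
            if dy > a.shape[0] // 2: dy -= a.shape[0]
            if dx > a.shape[1] // 2: dx -= a.shape[1]
            return dy, dx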

  11. Localization of the lumbar discs using machine learning and exact probabilistic inference.

    PubMed

    Oktay, Ayse Betul; Akgul, Yusuf Sinan

    2011-01-01

    We propose a novel fully automatic approach to localize the lumbar intervertebral discs in MR images with a PHOG-based SVM and a probabilistic graphical model. At the local level, our method assigns a score to each pixel in the target image that indicates whether it is a disc center or not. At the global level, we define a chain-like graphical model that represents the lumbar intervertebral discs, and we use an exact inference algorithm to localize the discs. Our main contributions are the employment of the SVM with the PHOG-based descriptor, which is robust against variations of the discs, and a graphical model that reflects the linear nature of the vertebral column. Our inference algorithm runs in polynomial time and produces globally optimal results. The developed system is validated on a real spine MRI dataset, and the final localization results are favorable compared to the results reported in the literature.
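
    Exact inference on a chain model can be done with Viterbi-style dynamic programming in polynomial time; below is a generic sketch under assumed score arrays, not the authors' exact formulation.

        import numpy as np

        def best_chain(unary, pairwise):
            """unary[t][k]: score of candidate k for disc t;
            pairwise[t][j][k]: score of candidates j, k on consecutive discs t, t+1."""
            score, back = [np.asarray(unary[0], float)], []
            for t in range(1, len(unary)):
                total = (score[-1][:, None] + np.asarray(pairwise[t - 1], float)
                         + np.asarray(unary[t], float)[None, :])
                back.append(total.argmax(axis=0))
                score.append(total.max(axis=0))
            path = [int(score[-1].argmax())]
            for bp in reversed(back):
                path.append(int(bp[path[-1]]))
            return path[::-1]  # globally optimal candidate index per disc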

  12. Handwritten dynamics assessment through convolutional neural networks: An application to Parkinson's disease identification.

    PubMed

    Pereira, Clayton R; Pereira, Danilo R; Rosa, Gustavo H; Albuquerque, Victor H C; Weber, Silke A T; Hook, Christian; Papa, João P

    2018-05-01

    Parkinson's disease (PD) is considered a degenerative disorder that affects the motor system, which may cause tremors, micrographia, and the freezing of gait. Although PD is related to the lack of dopamine, the process that triggers its development is not fully understood yet. In this work, we introduce convolutional neural networks to learn features from images produced by handwritten dynamics, which capture different information during the individual's assessment. Additionally, we make available a dataset composed of images and signal-based data to foster research related to computer-aided PD diagnosis. The proposed approach was compared against raw data and texture-based descriptors, showing suitable results, mainly in the context of early stage detection, with accuracies approaching 95%. The analysis of handwritten dynamics using deep learning techniques proved useful for automatic Parkinson's disease identification and can outperform handcrafted features. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. SHOP: scaffold HOPping by GRID-based similarity searches.

    PubMed

    Bergmann, Rikke; Linusson, Anna; Zamora, Ismael

    2007-05-31

    A new GRID-based method for scaffold hopping (SHOP) is presented. In a fully automatic manner, scaffolds were identified in a database based on three types of 3D descriptors. SHOP's ability to recover scaffolds was assessed and validated by searching a database spiked with fragments of known ligands of three different protein targets relevant to drug discovery, using a rational approach based on statistical experimental design. Five out of eight and seven out of eight thrombin scaffolds and all seven HIV protease scaffolds were recovered within the top 10, and 31 out of 31 neuraminidase scaffolds were among the 31 top-ranked scaffolds. SHOP also identified new scaffolds with substantially different chemotypes from the queries. Docking analysis indicated that the new scaffolds would have binding modes similar to those of the respective query scaffolds observed in X-ray structures. The databases contained scaffolds from published combinatorial libraries to ensure that identified scaffolds could be feasibly synthesized.

  14. Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2004-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including sensor networks and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.

  15. Autonomous flight and remote site landing guidance research for helicopters

    NASA Technical Reports Server (NTRS)

    Denton, R. V.; Pecklesma, N. J.; Smith, F. W.

    1987-01-01

    Automated low-altitude flight and landing in remote areas within a civilian environment are investigated, where initial cost, ongoing maintenance costs, and system productivity are important considerations. An approach has been taken which has: (1) utilized those technologies developed for military applications which are directly transferable to a civilian mission; (2) exploited and developed technology areas where new methods or concepts are required; and (3) undertaken research with the potential to lead to innovative methods or concepts required to achieve a manual and fully automatic remote area low-altitude and landing capability. The project has resulted in a definition of system operational concept that includes a sensor subsystem, a sensor fusion/feature extraction capability, and a guidance and control law concept. These subsystem concepts have been developed to sufficient depth to enable further exploration within the NASA simulation environment, and to support programs leading to the flight test.

  16. Optimized color decomposition of localized whole slide images and convolutional neural network for intermediate prostate cancer classification

    NASA Astrophysics Data System (ADS)

    Zhou, Naiyun; Gao, Yi

    2017-03-01

    This paper presents a fully automatic approach to grade intermediate prostate malignancy with hematoxylin and eosin-stained whole slide images. Deep learning architectures such as convolutional neural networks have been utilized in the domain of histopathology for automated carcinoma detection and classification. However, few works show their power in discriminating intermediate Gleason patterns, due to the sporadic distribution of prostate glands on stained surgical section samples. We propose optimized hematoxylin decomposition on localized images, followed by a convolutional neural network, to classify Gleason patterns 3+4 and 4+3 without handcrafted features or gland segmentation. Crucial gland morphology and the structural relationships of nuclei are extracted twice in different color spaces by a multi-scale strategy to mimic pathologists' visual examination. Our novel classification scheme, evaluated on 169 whole slide images, yielded an accuracy of 70.41% and a corresponding area under the receiver operating characteristic curve of 0.7247.
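
    Hematoxylin decomposition is commonly done by color deconvolution; a minimal sketch using scikit-image's HED transform is shown below (the paper's optimized decomposition may differ).

        import numpy as np
        from skimage.color import rgb2hed

        def hematoxylin_channel(rgb_tile):
            """Color-deconvolve an H&E tile; channel 0 of HED space is hematoxylin."""
            h = rgb2hed(rgb_tile)[:, :, 0]  # rgb_tile: (H, W, 3)
            return (h - h.min()) / (h.max() - h.min() + 1e-12)  # rescaled for a downstream CNN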

  17. Image-Based Reverse Engineering and Visual Prototyping of Woven Cloth.

    PubMed

    Schroder, Kai; Zinke, Arno; Klein, Reinhard

    2015-02-01

    Realistic visualization of cloth has many applications in computer graphics. An ongoing research problem is how to best represent and capture cloth models, specifically when considering computer aided design of cloth. Previous methods produce highly realistic images, however, they are either difficult to edit or require the measurement of large databases to capture all variations of a cloth sample. We propose a pipeline to reverse engineer cloth and estimate a parametrized cloth model from a single image. We introduce a geometric yarn model, integrating state-of-the-art textile research. We present an automatic analysis approach to estimate yarn paths, yarn widths, their variation and a weave pattern. Several examples demonstrate that we are able to model the appearance of the original cloth sample. Properties derived from the input image give a physically plausible basis that is fully editable using a few intuitive parameters.

  18. Probabilistic graphlet transfer for photo cropping.

    PubMed

    Zhang, Luming; Song, Mingli; Zhao, Qi; Liu, Xiao; Bu, Jiajun; Chen, Chun

    2013-02-01

    As one of the most basic photo manipulation processes, photo cropping is widely used in the printing, graphic design, and photography industries. In this paper, we introduce graphlets (i.e., small connected subgraphs) to represent a photo's aesthetic features, and propose a probabilistic model to transfer aesthetic features from the training photo onto the cropped photo. In particular, by segmenting each photo into a set of regions, we construct a region adjacency graph (RAG) to represent the global aesthetic feature of each photo. Graphlets are then extracted from the RAGs, and these graphlets capture the local aesthetic features of the photos. Finally, we cast photo cropping as a candidate-searching procedure on the basis of a probabilistic model, and infer the parameters of the cropped photos using Gibbs sampling. The proposed method is fully automatic. Subjective evaluations have shown that it is preferred over a number of existing approaches.

  19. A layered modulation method for pixel matching in online phase measuring profilometry

    NASA Astrophysics Data System (ADS)

    Li, Hongru; Feng, Guoying; Bourgade, Thomas; Yang, Peng; Zhou, Shouhuan; Asundi, Anand

    2016-10-01

    An online phase measuring profilometry method with a new layered modulation scheme for pixel matching is presented. In contrast with previous modulation matching methods, the captured images are enhanced by Retinex theory for a better modulation distribution, and all the different layer modulation masks are fully used to determine the displacement of a rectilinearly moving object. High, medium and low modulation masks are obtained by performing binary segmentation with an iterative Otsu method. The final shifting pixels are calculated based on the centroid concept, and after that the aligned fringe patterns can be extracted from each frame. After performing the Stoilov algorithm and a series of subsequent operations, the profile of an object on a translation stage is reconstructed. All procedures are carried out automatically, without setting specific parameters in advance. Numerical simulations are detailed, and experimental results verify the validity and feasibility of the proposed approach.

  20. Trans-dimensional MCMC methods for fully automatic motion analysis in tagged MRI.

    PubMed

    Smal, Ihor; Carranza-Herrezuelo, Noemí; Klein, Stefan; Niessen, Wiro; Meijering, Erik

    2011-01-01

    Tagged magnetic resonance imaging (tMRI) is a well-known noninvasive method allowing quantitative analysis of regional heart dynamics. Its clinical use has so far been limited, in part due to the lack of robustness and accuracy of existing tag tracking algorithms in dealing with low (and intrinsically time-varying) image quality. In this paper, we propose a novel probabilistic method for tag tracking, implemented by means of Bayesian particle filtering and a trans-dimensional Markov chain Monte Carlo (MCMC) approach, which efficiently combines information about the imaging process and tag appearance with prior knowledge about the heart dynamics obtained by means of non-rigid image registration. Experiments using synthetic image data (with ground truth) and real data (with expert manual annotation) from preclinical (small animal) and clinical (human) studies confirm that the proposed method yields higher consistency, accuracy, and intrinsic tag reliability assessment in comparison with other frequently used tag tracking methods.

  1. Manual vs. automatic capture management in implantable cardioverter defibrillators and cardiac resynchronization therapy defibrillators.

    PubMed

    Murgatroyd, Francis D; Helmling, Erhard; Lemke, Bernd; Eber, Bernd; Mewis, Christian; van der Meer-Hensgens, Judith; Chang, Yanping; Khalameizer, Vladimir; Katz, Amos

    2010-06-01

    The Secura ICD and Consulta CRT-D are the first defibrillators to have automatic right atrial (RA), right ventricular (RV), and left ventricular (LV) capture management (CM). Complete CM was evaluated in an implantable cardioverter defibrillator (ICD) population. Two prospective clinical studies were conducted in 28 centres in Europe and Israel. Automatic CM data were compared with manual threshold measurements, the CM applicability was determined, and adjustments to pacing outputs were analysed. In total, 160 patients [age 64.6 +/- 10.4 years, 77% male, 80 ICD and 80 cardiac resynchronization therapy defibrillator (CRT-D)] were included. The differences between automatic and manual measurements were 2.5 V) due to raised RA threshold in seven (4.4%), high RV threshold in nine (5.6%), and high LV threshold in three patients (3.8%). All high threshold detections and all automatic modulations of pacing output were adjudicated appropriate. Complete CM adjusts pacing output appropriately, permitting a reduction in office visits while it may maximize device longevity. The study was registered at ClinicalTrials.gov identifiers: NCT00526227 and NCT00526162.

  2. Online automatic tuning and control for fed-batch cultivation

    PubMed Central

    van Straten, Gerrit; van der Pol, Leo A.; van Boxtel, Anton J. B.

    2007-01-01

    Performance of controllers applied in biotechnological production is often below expectation. Online automatic tuning has the capability to improve control performance by adjusting control parameters. This work presents automatic tuning approaches for model reference specific growth rate control during fed-batch cultivation. The approaches are direct methods that use the error between observed specific growth rate and its set point; systematic perturbations of the cultivation are not necessary. Two automatic tuning methods proved to be efficient, in which the adaptation rate is based on a combination of the error, squared error and integral error. These methods are relatively simple and robust against disturbances, parameter uncertainties, and initialization errors. Application of the specific growth rate controller yields a stable system. The controller and automatic tuning methods are qualified by simulations and laboratory experiments with Bordetella pertussis. PMID:18157554
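
    As a hedged sketch of a direct tuning law of the kind described (adaptation driven by the error, squared error, and integral error), one Euler integration step could look like this; the gain values are illustrative only.

        def tune_controller_gain(gain, error, integral_error, dt, g1=0.1, g2=0.05, g3=0.01):
            """One Euler step of dK/dt = g1*e + g2*e**2 + g3*integral(e); gains are assumptions."""
            integral_error += error * dt
            gain += (g1 * error + g2 * error ** 2 + g3 * integral_error) * dt
            return gain, integral_error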

  3. Automatic diagnosis of abnormal macula in retinal optical coherence tomography images using wavelet-based convolutional neural network features and random forests classifier

    NASA Astrophysics Data System (ADS)

    Rasti, Reza; Mehridehnavi, Alireza; Rabbani, Hossein; Hajizadeh, Fedra

    2018-03-01

    The present research intends to propose a fully automatic algorithm for the classification of three-dimensional (3-D) optical coherence tomography (OCT) scans of patients suffering from abnormal macula from normal candidates. The method proposed does not require any denoising, segmentation, retinal alignment processes to assess the intraretinal layers, as well as abnormalities or lesion structures. To classify abnormal cases from the control group, a two-stage scheme was utilized, which consists of automatic subsystems for adaptive feature learning and diagnostic scoring. In the first stage, a wavelet-based convolutional neural network (CNN) model was introduced and exploited to generate B-scan representative CNN codes in the spatial-frequency domain, and the cumulative features of 3-D volumes were extracted. In the second stage, the presence of abnormalities in 3-D OCTs was scored over the extracted features. Two different retinal SD-OCT datasets are used for evaluation of the algorithm based on the unbiased fivefold cross-validation (CV) approach. The first set constitutes 3-D OCT images of 30 normal subjects and 30 diabetic macular edema (DME) patients captured from the Topcon device. The second publicly available set consists of 45 subjects with a distribution of 15 patients in age-related macular degeneration, DME, and normal classes from the Heidelberg device. With the application of the algorithm on overall OCT volumes and 10 repetitions of the fivefold CV, the proposed scheme obtained an average precision of 99.33% on dataset1 as a two-class classification problem and 98.67% on dataset2 as a three-class classification task.

  4. Automated Volumetry and Regional Thickness Analysis of Hippocampal Subfields and Medial Temporal Cortical Structures in Mild Cognitive Impairment

    PubMed Central

    Yushkevich, Paul A.; Pluta, John B.; Wang, Hongzhi; Xie, Long; Ding, Song-Lin; Gertje, E. C.; Mancuso, Lauren; Kliot, Daria; Das, Sandhitsu R.; Wolk, David A.

    2014-01-01

    We evaluate a fully automatic technique for labeling hippocampal subfields and cortical subregions in the medial temporal lobe (MTL) in in vivo 3 Tesla MRI. The method performs segmentation on a T2-weighted MRI scan with 0.4 × 0.4 × 2.0 mm3 resolution, partial brain coverage, and oblique orientation. Hippocampal subfields, entorhinal cortex, and perirhinal cortex are labeled using a pipeline that combines multi-atlas label fusion and learning-based error correction. In contrast to earlier work on automatic subfield segmentation in T2-weighted MRI (Yushkevich et al., 2010), our approach requires no manual initialization, labels hippocampal subfields over a greater anterior-posterior extent, and labels the perirhinal cortex, which is further subdivided into Brodmann areas 35 and 36. The accuracy of the automatic segmentation relative to manual segmentation is measured using cross-validation in 29 subjects from a study of amnestic Mild Cognitive Impairment (aMCI), and is highest for the dentate gyrus (Dice coefficient is 0.823), CA1 (0.803), perirhinal cortex (0.797) and entorhinal cortex (0.786) labels. A larger cohort of 83 subjects is used to examine the effects of aMCI in the hippocampal region using both subfield volume and regional subfield thickness maps. Most significant differences between aMCI and healthy aging are observed bilaterally in the CA1 subfield and in the left Brodmann area 35. Thickness analysis results are consistent with volumetry, but provide additional regional specificity and suggest non-uniformity in the effects of aMCI on hippocampal subfields and MTL cortical subregions. PMID:25181316

  5. Automated segmentation of the atrial region and fossa ovalis towards computer-aided planning of inter-atrial wall interventions.

    PubMed

    Morais, Pedro; Vilaça, João L; Queirós, Sandro; Marchi, Alberto; Bourier, Felix; Deisenhofer, Isabel; D'hooge, Jan; Tavares, João Manuel R S

    2018-07-01

    Image-fusion strategies have been applied to improve inter-atrial septal (IAS) wall minimally-invasive interventions. Hereto, several landmarks are initially identified on richly-detailed datasets throughout the planning stage and then combined with intra-operative images, enhancing the relevant structures and easing the procedure. Nevertheless, such planning is still performed manually, which is time-consuming and not necessarily reproducible, hampering its regular application. In this article, we present a novel automatic strategy to segment the atrial region (left/right atrium and aortic tract) and the fossa ovalis (FO). The method starts by initializing multiple 3D contours based on an atlas-based approach with global transforms only and refining them to the desired anatomy using a competitive segmentation strategy. The obtained contours are then applied to estimate the FO by evaluating both IAS wall thickness and the expected FO spatial location. The proposed method was evaluated in 41 computed tomography datasets, by comparing the atrial region segmentation and FO estimation results against manually delineated contours. The automatic segmentation method presented a performance similar to the state-of-the-art techniques and a high feasibility, failing only in the segmentation of one aortic tract and of one right atrium. The FO estimation method presented an acceptable result in all the patients with a performance comparable to the inter-observer variability. Moreover, it was faster and fully user-interaction free. Hence, the proposed method proved to be feasible to automatically segment the anatomical models for the planning of IAS wall interventions, making it exceptionally attractive for use in the clinical practice. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Automatic diagnosis of abnormal macula in retinal optical coherence tomography images using wavelet-based convolutional neural network features and random forests classifier.

    PubMed

    Rasti, Reza; Mehridehnavi, Alireza; Rabbani, Hossein; Hajizadeh, Fedra

    2018-03-01

    The present research intends to propose a fully automatic algorithm for the classification of three-dimensional (3-D) optical coherence tomography (OCT) scans of patients suffering from abnormal macula from normal candidates. The method proposed does not require any denoising, segmentation, retinal alignment processes to assess the intraretinal layers, as well as abnormalities or lesion structures. To classify abnormal cases from the control group, a two-stage scheme was utilized, which consists of automatic subsystems for adaptive feature learning and diagnostic scoring. In the first stage, a wavelet-based convolutional neural network (CNN) model was introduced and exploited to generate B-scan representative CNN codes in the spatial-frequency domain, and the cumulative features of 3-D volumes were extracted. In the second stage, the presence of abnormalities in 3-D OCTs was scored over the extracted features. Two different retinal SD-OCT datasets are used for evaluation of the algorithm based on the unbiased fivefold cross-validation (CV) approach. The first set constitutes 3-D OCT images of 30 normal subjects and 30 diabetic macular edema (DME) patients captured from the Topcon device. The second publicly available set consists of 45 subjects with a distribution of 15 patients in age-related macular degeneration, DME, and normal classes from the Heidelberg device. With the application of the algorithm on overall OCT volumes and 10 repetitions of the fivefold CV, the proposed scheme obtained an average precision of 99.33% on dataset1 as a two-class classification problem and 98.67% on dataset2 as a three-class classification task. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  7. Application of the concept of dynamic trim control to automatic landing of carrier aircraft. [utilizing digital feedforeward control

    NASA Technical Reports Server (NTRS)

    Smith, G. A.; Meyer, G.

    1980-01-01

    The results of a simulation study of an alternative design concept for an automatic landing control system are presented. The design concept is the total aircraft flight control system (TAFCOS), an open-loop, feed-forward system that commands the proper instantaneous thrust, angle of attack, and roll angle to achieve the forces required to follow the desired trajectory. These dynamic trim conditions are determined by an inversion of the aircraft's nonlinear force characteristics. The concept was applied to an A-7E aircraft approaching an aircraft carrier. The implementation details with an airborne digital computer are discussed, and the automatic carrier landing situation is described. Simulation results are presented for a carrier approach with atmospheric disturbances, an approach with no disturbances, and for tailwind and headwind gusts.

  8. LAMBADA and InflateGRO2: efficient membrane alignment and insertion of membrane proteins for molecular dynamics simulations.

    PubMed

    Schmidt, Thomas H; Kandt, Christian

    2012-10-22

    At the beginning of each molecular dynamics membrane simulation stands the generation of a suitable starting structure which includes the working steps of aligning membrane and protein and seamlessly accommodating the protein in the membrane. Here we introduce two efficient and complementary methods based on pre-equilibrated membrane patches, automating these steps. Using a voxel-based cast of the coarse-grained protein, LAMBADA computes a hydrophilicity profile-derived scoring function based on which the optimal rotation and translation operations are determined to align protein and membrane. Employing an entirely geometrical approach, LAMBADA is independent from any precalculated data and aligns even large membrane proteins within minutes on a regular workstation. LAMBADA is the first tool performing the entire alignment process automatically while providing the user with the explicit 3D coordinates of the aligned protein and membrane. The second tool is an extension of the InflateGRO method addressing the shortcomings of its predecessor in a fully automated workflow. Determining the exact number of overlapping lipids based on the area occupied by the protein and restricting expansion, compression and energy minimization steps to a subset of relevant lipids through automatically calculated and system-optimized operation parameters, InflateGRO2 yields optimal lipid packing and reduces lipid vacuum exposure to a minimum preserving as much of the equilibrated membrane structure as possible. Applicable to atomistic and coarse grain structures in MARTINI format, InflateGRO2 offers high accuracy, fast performance, and increased application flexibility permitting the easy preparation of systems exhibiting heterogeneous lipid composition as well as embedding proteins into multiple membranes. Both tools can be used separately, in combination with other methods, or in tandem permitting a fully automated workflow while retaining a maximum level of usage control and flexibility. To assess the performance of both methods, we carried out test runs using 22 membrane proteins of different size and transmembrane structure.

  9. Convolution neural networks for real-time needle detection and localization in 2D ultrasound.

    PubMed

    Mwikirize, Cosmas; Nosher, John L; Hacihaliloglu, Ilker

    2018-05-01

    We propose a framework for automatic and accurate detection of steeply inserted needles in 2D ultrasound data using convolution neural networks. We demonstrate its application in needle trajectory estimation and tip localization. Our approach consists of a unified network, comprising a fully convolutional network (FCN) and a fast region-based convolutional neural network (R-CNN). The FCN proposes candidate regions, which are then fed to a fast R-CNN for finer needle detection. We leverage a transfer learning paradigm, where the network weights are initialized by training with non-medical images, and fine-tuned with ex vivo ultrasound scans collected during insertion of a 17G epidural needle into freshly excised porcine and bovine tissue at depth settings up to 9 cm and [Formula: see text]-[Formula: see text] insertion angles. Needle detection results are used to accurately estimate needle trajectory from intensity invariant needle features and perform needle tip localization from an intensity search along the needle trajectory. Our needle detection model was trained and validated on 2500 ex vivo ultrasound scans. The detection system has a frame rate of 25 fps on a GPU and achieves 99.6% precision, 99.78% recall rate and an [Formula: see text] score of 0.99. Validation for needle localization was performed on 400 scans collected using a different imaging platform, over a bovine/porcine lumbosacral spine phantom. Shaft localization error of [Formula: see text], tip localization error of [Formula: see text] mm, and a total processing time of 0.58 s were achieved. The proposed method is fully automatic and provides robust needle localization results in challenging scanning conditions. The accurate and robust results coupled with real-time detection and sub-second total processing make the proposed method promising in applications for needle detection and localization during challenging minimally invasive ultrasound-guided procedures.

  10. Dynamic gamma knife radiosurgery

    NASA Astrophysics Data System (ADS)

    Luan, Shuang; Swanson, Nathan; Chen, Zhe; Ma, Lijun

    2009-03-01

    Gamma knife has been the treatment of choice for various brain tumors and functional disorders. Current gamma knife radiosurgery is planned in a 'ball-packing' approach and delivered in a 'step-and-shoot' manner, i.e. it aims to 'pack' different-sized spherical high-dose volumes (called 'shots') into a tumor volume. We have developed a dynamic scheme for gamma knife radiosurgery based on the concept of 'dose-painting', to take advantage of the new robotic patient positioning system on the latest Gamma Knife C™ and Perfexion™ units. In our scheme, the spherical high-dose volume created by the gamma knife unit is viewed as a 3D spherical 'paintbrush', and treatment planning reduces to finding the best route of this 'paintbrush' to 'paint' a 3D tumor volume. Under our dose-painting concept, gamma knife radiosurgery becomes dynamic: the patient moves continuously under the robotic positioning system. We have implemented a fully automatic dynamic gamma knife radiosurgery treatment planning system, in which the inverse planning problem is solved as a traveling salesman problem combined with constrained least-squares optimizations. We have also carried out experimental studies of dynamic gamma knife radiosurgery and showed the following. (1) Dynamic gamma knife radiosurgery is ideally suited for fully automatic inverse planning, where high-quality radiosurgery plans can be obtained within minutes of computation. (2) Dynamic radiosurgery plans are more conformal than step-and-shoot plans and can maintain a steep dose gradient (around 13% per mm) between the target tumor volume and the surrounding critical structures. (3) It is possible to prescribe multiple isodose lines with dynamic gamma knife radiosurgery, so that the treatment can cover the periphery of the target volume while escalating the dose in high-tumor-burden regions. (4) With dynamic gamma knife radiosurgery, one can obtain a family of plans representing a tradeoff between delivery time and dose distribution, giving the clinician one more dimension of flexibility in choosing a plan based on the clinical situation.
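
    A minimal sketch of the dynamic 'dose-painting' planning idea described above, under simplifying assumptions (a fixed set of candidate shot positions, a Gaussian dose kernel, a 1D target, and a nearest-neighbour tour instead of a full traveling-salesman solver). Shot weights are fit by non-negative least squares; none of this code comes from the authors' planning system.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Candidate shot centers inside a 1D "tumor" target (toy example).
    target_x = np.linspace(0.0, 30.0, 61)          # mm
    prescription = np.ones_like(target_x)          # uniform target dose
    shots = np.linspace(2.0, 28.0, 14)             # candidate paintbrush positions

    # Gaussian dose kernel of the spherical "paintbrush" (sigma in mm, assumed).
    A = np.exp(-0.5 * ((target_x[:, None] - shots[None, :]) / 3.0) ** 2)

    # Constrained least squares: non-negative beam-on times per position.
    weights, residual = nnls(A, prescription)
    active = shots[weights > 1e-6]

    # A nearest-neighbour tour over active positions approximates the TSP route
    # the paintbrush should follow to minimize travel time.
    route, remaining = [active[0]], list(active[1:])
    while remaining:
        nxt = min(remaining, key=lambda p: abs(p - route[-1]))
        route.append(nxt)
        remaining.remove(nxt)
    print("delivery route (mm):", np.round(route, 1), "residual:", round(residual, 3))
    ```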

  11. Fully automatic GBM segmentation in the TCGA-GBM dataset: Prognosis and correlation with VASARI features.

    PubMed

    Rios Velazquez, Emmanuel; Meier, Raphael; Dunn, William D; Alexander, Brian; Wiest, Roland; Bauer, Stefan; Gutman, David A; Reyes, Mauricio; Aerts, Hugo J W L

    2015-11-18

    Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to sub-volumes manually defined by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. MRI sets of 109 GBM patients were downloaded from the Cancer Imaging Archive. GBM sub-compartments were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software. Spearman's correlation was used to evaluate agreement with VASARI features, and prognostic significance was assessed using the C-index. Auto-segmented sub-volumes showed moderate to high agreement with manually delineated volumes (r = 0.4-0.86). The automatic and manual volumes also showed similar correlations with VASARI features (auto r = 0.35, 0.43 and 0.36; manual r = 0.17, 0.67 and 0.41, for contrast-enhancing, necrosis and edema, respectively). The auto-segmented contrast-enhancing volume and post-contrast abnormal volume showed the highest AUC (0.66, CI: 0.55-0.77 and 0.65, CI: 0.54-0.76), comparable to the manually defined volumes (0.64, CI: 0.53-0.75 and 0.63, CI: 0.52-0.74, respectively). BraTumIA and manual tumor sub-compartments showed comparable performance in terms of prognosis and correlation with VASARI features. This method can enable more reproducible definition and quantification of imaging-based biomarkers and has potential in high-throughput medical imaging research.

  12. Fully automatic three-dimensional visualization of intravascular optical coherence tomography images: methods and feasibility in vivo

    PubMed Central

    Ughi, Giovanni J; Adriaenssens, Tom; Desmet, Walter; D’hooge, Jan

    2012-01-01

    Intravascular optical coherence tomography (IV-OCT) is an imaging modality that can be used for the assessment of intracoronary stents. Recent publications have pointed out that 3D visualizations have potential advantages over conventional 2D representations. However, 3D imaging still requires a time-consuming manual procedure that is not suitable for on-line application during coronary interventions. We propose an algorithm for rapid and fully automatic 3D visualization of IV-OCT pullbacks. IV-OCT images are first processed to segment the different structures, which also allows for automatic pullback calibration. Then, according to the segmentation results, the vessel wall, the stent and the guide-wire are depicted in different colors to visualize them in detail. Final 3D rendering results are obtained through a commercial 3D DICOM viewer. Manual analysis was used as the ground truth for validating the segmentation algorithms. A correlation value of 0.99 and good limits of agreement (Bland-Altman statistics) were found over 250 images randomly extracted from 25 in vivo pullbacks. Moreover, the 3D rendering was compared to angiography, to pictures of deployed stents made available by the manufacturers, and to conventional 2D imaging, corroborating the visualization results. The computational time for visualizing an entire data set was ~74 s. The proposed method allows for the on-line use of 3D IV-OCT during percutaneous coronary interventions, potentially allowing treatment optimization. PMID:23243578

  13. Fully automatic detection of deep white matter T1 hypointense lesions in multiple sclerosis

    NASA Astrophysics Data System (ADS)

    Spies, Lothar; Tewes, Anja; Suppa, Per; Opfer, Roland; Buchert, Ralph; Winkler, Gerhard; Raji, Alaleh

    2013-12-01

    A novel method is presented for fully automatic detection of candidate white matter (WM) T1 hypointense lesions in three-dimensional high-resolution T1-weighted magnetic resonance (MR) images. By definition, T1 hypointense lesions have similar intensity to gray matter (GM) and thus appear darker than the surrounding normal WM in T1-weighted images. The novel method uses a standard classification algorithm to partition T1-weighted images into GM, WM and cerebrospinal fluid (CSF). As a consequence, T1 hypointense lesions are assigned an increased GM probability by the standard classification algorithm. The GM component image of a patient is then tested voxel-by-voxel against the GM component images of a normative database of healthy individuals. Clusters (≥0.1 ml) of significantly increased GM density within a predefined mask of deep WM are defined as lesions. The performance of the algorithm was assessed at the voxel level by a simulation study. A maximum Dice similarity coefficient of 60% was found for a typical T1 lesion pattern with contrasts ranging from WM to cortical GM, indicating substantial agreement between ground truth and automatic detection. Retrospective application to 10 patients with multiple sclerosis demonstrated that 93 out of 96 T1 hypointense lesions were detected, with on average 3.6 false-positive T1 hypointense lesions per patient. The novel method shows promise for supporting the detection of hypointense lesions in T1-weighted images, which warrants further evaluation in larger patient samples.
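
    The core detection step, testing a patient's grey-matter map voxel-by-voxel against a normative database and keeping only sufficiently large clusters, can be sketched as follows (Python/NumPy with SciPy; synthetic arrays, and the z-score threshold and voxel size are illustrative assumptions, not the authors' settings).

    ```python
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(1)
    # Normative database: GM probability maps of 40 healthy controls (toy 3D grids).
    controls = rng.normal(0.3, 0.05, size=(40, 32, 32, 32))
    patient = rng.normal(0.3, 0.05, size=(32, 32, 32))
    patient[10:14, 10:14, 10:14] += 0.4          # simulated T1 hypointense lesion
    deep_wm_mask = np.zeros_like(patient, bool)
    deep_wm_mask[4:28, 4:28, 4:28] = True        # predefined deep-WM mask

    # Voxel-wise z-test of the patient's GM density against the controls.
    mu, sd = controls.mean(axis=0), controls.std(axis=0, ddof=1)
    z = (patient - mu) / sd
    suspicious = (z > 3.0) & deep_wm_mask        # significance threshold (assumed)

    # Keep only clusters above a minimum volume (here: >= 27 voxels, roughly
    # 0.1 ml if voxels are ~1.5 mm isotropic -- an assumption for this sketch).
    labels, n = ndimage.label(suspicious)
    sizes = ndimage.sum(suspicious, labels, index=range(1, n + 1))
    lesions = np.isin(labels, 1 + np.flatnonzero(np.asarray(sizes) >= 27))
    print("lesion voxels detected:", int(lesions.sum()))
    ```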

  14. Affective Evaluations of Exercising: The Role of Automatic-Reflective Evaluation Discrepancy.

    PubMed

    Brand, Ralf; Antoniewicz, Franziska

    2016-12-01

    Sometimes our automatic evaluations do not correspond well with those we can reflect on and articulate. We present a novel approach to the assessment of automatic and reflective affective evaluations of exercising. Based on the assumptions of the associative-propositional evaluation model, we measured participants' automatic evaluations of exercise, then shared this information with them, asked them to reflect on it, and had them rate any discrepancy between their reflective evaluation and the assessment of their automatic evaluation. We found that the mismatch between self-reported ideal exercise frequency and actual exercise frequency over the previous 14 weeks could be regressed on the discrepancy between a relatively negative automatic evaluation and a more positive reflective evaluation. This study illustrates the potential of a dual-process approach to the measurement of evaluative responses and suggests that mistrusting one's negative spontaneous reaction to exercise, and asserting a very positive reflective evaluation instead, leads to the adoption of inflated exercise goals.

  15. New DTM Extraction Approach from Airborne Images Derived Dsm

    NASA Astrophysics Data System (ADS)

    Mousa, Y. A.; Helmholz, P.; Belton, D.

    2017-05-01

    In this work, a new filtering approach is proposed for fully automatic Digital Terrain Model (DTM) extraction from Digital Surface Models (DSMs) derived from very high resolution airborne images. Our approach enhances the existing Multi-directional and Slope Dependent (MSD) DTM extraction algorithm by proposing more reliable parameters for the selection of ground pixels and for the pixelwise classification. To achieve this, four main steps are implemented. First, eight well-distributed scanlines are used to search for minima as ground points within a predefined filtering window. These selected ground points are stored with their positions on a 2D surface to create a network of ground points. Then, an initial DTM is created using an interpolation method to fill the gaps in the 2D surface. Afterwards, a pixel-to-pixel comparison between the initial DTM and the original DSM is performed, classifying ground and non-ground pixels by applying a vertical height threshold. Finally, the pixels classified as non-ground are removed and the remaining holes are filled. The approach is evaluated using the Vaihingen benchmark dataset provided by the ISPRS working group III/4. The evaluation compares our approach, denoted the Network of Ground Points (NGPs) algorithm, with the DTM created based on MSD as well as with a reference DTM generated from LiDAR data. The results show that our proposed approach outperforms the MSD approach.
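
    A compact sketch of the four-step filtering pipeline described above (NumPy/SciPy; a 1D DSM stands in for the 2D raster, and the window size and height threshold are illustrative, not the paper's tuned parameters).

    ```python
    import numpy as np
    from scipy.interpolate import interp1d

    rng = np.random.default_rng(2)
    x = np.arange(200)
    terrain = 5.0 + 0.02 * x                       # gently sloping ground
    dsm = terrain + rng.normal(0, 0.05, 200)
    dsm[60:80] += 8.0                              # a building on top of the terrain

    # Step 1: scanline search for local minima as ground-point candidates.
    win = 25                                       # filtering window size (assumed)
    ground_idx = [i + np.argmin(dsm[i:i + win]) for i in range(0, len(dsm), win)]

    # Step 2: interpolate the ground points into an initial DTM.
    initial_dtm = interp1d(x[ground_idx], dsm[ground_idx],
                           fill_value="extrapolate")(x)

    # Step 3: pixelwise classification with a vertical height threshold.
    threshold = 1.0                                # metres (assumed)
    is_ground = (dsm - initial_dtm) < threshold

    # Step 4: remove non-ground pixels and fill the holes by interpolation.
    dtm = interp1d(x[is_ground], dsm[is_ground], fill_value="extrapolate")(x)
    print("building removed:", dtm[60:80].max() < dsm[60:80].max())
    ```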

  16. Highly automatic quantification of myocardial oedema in patients with acute myocardial infarction using bright blood T2-weighted CMR

    PubMed Central

    2013-01-01

    Background: T2-weighted cardiovascular magnetic resonance (CMR) is clinically useful for imaging the ischemic area-at-risk and the amount of salvageable myocardium in patients with acute myocardial infarction (MI). However, to date, quantification of oedema is user-defined and potentially subjective. Methods: We describe a highly automatic framework for quantifying myocardial oedema from bright blood T2-weighted CMR in patients with acute MI. Our approach retains user input (i.e. clinical judgment) to confirm the presence of oedema on an image, which is then subjected to an automatic analysis. The new method was tested on 25 consecutive acute MI patients who had a CMR within 48 hours of hospital admission. Left ventricular wall boundaries were delineated automatically by variational level set methods, followed by automatic detection of myocardial oedema by fitting a Rayleigh-Gaussian mixture statistical model. These data were compared with results from manual segmentation of the left ventricular wall and oedema, the current standard approach. Results: The mean perpendicular distances between automatically detected left ventricular boundaries and the corresponding manually delineated boundaries were in the range of 1-2 mm. Dice similarity coefficients for agreement (0 = no agreement, 1 = perfect agreement) between manual delineation and automatic segmentation of the left ventricular wall boundaries and oedema regions were 0.86 and 0.74, respectively. Conclusion: Compared to standard manual approaches, the new highly automatic method for estimating myocardial oedema is accurate and straightforward. It has potential as a generic software tool for physicians to use in clinical practice. PMID:23548176
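
    The oedema-detection step fits a Rayleigh-Gaussian mixture to myocardial signal intensities; a minimal EM fit of such a mixture might look as follows (Python/NumPy, synthetic intensities; initialization and convergence handling are simplified relative to the paper).

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    # Synthetic T2-weighted intensities: Rayleigh-distributed normal myocardium
    # plus a brighter Gaussian oedema component.
    data = np.concatenate([rng.rayleigh(2.0, 800), rng.normal(9.0, 1.5, 200)])

    def rayleigh_pdf(x, s):
        return (x / s**2) * np.exp(-x**2 / (2 * s**2))

    def normal_pdf(x, mu, sd):
        return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

    # EM for the two-component mixture w*Rayleigh + (1-w)*Gaussian.
    w, s, mu, sd = 0.5, data.std(), data.mean() + data.std(), data.std()
    for _ in range(200):
        pr = w * rayleigh_pdf(data, s)
        pg = (1 - w) * normal_pdf(data, mu, sd)
        r = pr / (pr + pg)                          # responsibility of Rayleigh part
        w = r.mean()
        s = np.sqrt((r * data**2).sum() / (2 * r.sum()))
        mu = ((1 - r) * data).sum() / (1 - r).sum()
        sd = np.sqrt(((1 - r) * (data - mu) ** 2).sum() / (1 - r).sum())

    # Voxels more likely Gaussian (bright) than Rayleigh are labelled oedema.
    oedema = normal_pdf(data, mu, sd) * (1 - w) > rayleigh_pdf(data, s) * w
    print(f"w={w:.2f}, sigma={s:.2f}, mu={mu:.2f}, oedema fraction={oedema.mean():.2f}")
    ```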

  17. Practical Results from the Application of Model Checking and Test Generation from UML/SysML Models of On-Board Space Applications

    NASA Astrophysics Data System (ADS)

    Faria, J. M.; Mahomad, S.; Silva, N.

    2009-05-01

    The deployment of complex safety-critical applications requires rigorous techniques and powerful tools for both the development and V&V stages. Model-based technologies are increasingly being used to develop safety-critical software and, arguably, turning to them can bring significant benefits to such processes, along with new challenges. This paper presents the results of a research project in which we extended current V&V methodologies to UML/SysML models, aiming to answer the associated validation demands. Two quite different but complementary approaches were investigated: (i) model checking and (ii) the extraction of robustness test cases from the same models. These two approaches do not overlap and, when combined, provide a wider-reaching model/design validation ability than either one alone, thus offering improved safety assurance. The results are very encouraging, even though they fell short of the desired outcome in the case of model checking and are not yet fully matured in the case of robustness test-case extraction. For model checking, it was verified that the automatic model validation process can become fully operational, and can even be expanded in scope, once tool vendors (inevitably) help to improve the XMI standard interoperability situation. For the robustness test-case extraction methodology, the early approach produced interesting results but needs further systematisation and consolidation in order to produce results in a more predictable fashion and to reduce reliance on expert heuristics. Finally, further improvements and innovation research projects were immediately apparent for both investigated approaches: circumventing the current limitations in XMI interoperability on one hand, and bringing test-case specification onto the same graphical level as the models themselves and then automating the generation of executable test cases from standard UML notation on the other.

  18. A comparison of different methods to implement higher order derivatives of density functionals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Dam, Hubertus J.J.

    Density functional theory is the dominant approach in electronic structure methods today. To calculate properties, higher-order derivatives of the density functionals are required. These derivatives may be implemented manually, by automatic differentiation, or by symbolic algebra programs. Different authors have cited different reasons for using the particular method of their choice. This paper presents work in which all three approaches were used, and the strengths and weaknesses of each approach are considered. It is found that all three methods produce code that is sufficiently performant for practical applications, despite the fact that our symbolic-algebra-generated code and our automatic-differentiation code still have scope for significant optimization. The automatic differentiation approach is the best option for producing readable and maintainable code.
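
    To illustrate the automatic-differentiation option, here is a minimal forward-mode implementation using dual numbers (plain Python; a generic teaching sketch, not the code evaluated in the paper). Differentiating an exchange-functional-like expression such as f(ρ) = c·ρ^(4/3) then requires no hand-written derivative; the constant c and the functional form are placeholders.

    ```python
    class Dual:
        """Dual number a + b*eps with eps**2 == 0; b carries the derivative."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der

        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.der + o.der)
        __radd__ = __add__

        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val,
                        self.der * o.val + self.val * o.der)   # product rule
        __rmul__ = __mul__

        def __pow__(self, p):                                   # chain rule, p constant
            return Dual(self.val ** p, p * self.val ** (p - 1) * self.der)

    def d(f, x):
        """First derivative of f at x via one forward-mode sweep."""
        return f(Dual(x, 1.0)).der

    # Slater-exchange-like energy density: f(rho) = c * rho**(4/3)
    c = -0.7386
    f = lambda rho: c * rho ** (4.0 / 3.0)

    rho = 0.8
    print("f'(rho)  AD:", d(f, rho))
    print("f'(rho) ref:", c * (4.0 / 3.0) * rho ** (1.0 / 3.0))
    ```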

  19. Are the Literacy Difficulties That Characterize Developmental Dyslexia Associated with a Failure to Integrate Letters and Speech Sounds?

    ERIC Educational Resources Information Center

    Nash, Hannah M.; Gooch, Debbie; Hulme, Charles; Mahajan, Yatin; McArthur, Genevieve; Steinmetzger, Kurt; Snowling, Margaret J.

    2017-01-01

    The "automatic letter-sound integration hypothesis" (Blomert, [Blomert, L., 2011]) proposes that dyslexia results from a failure to fully integrate letters and speech sounds into automated audio-visual objects. We tested this hypothesis in a sample of English-speaking children with dyslexic difficulties (N = 13) and samples of…

  20. Fully automatic time-window selection using machine learning for global adjoint tomography

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Hill, J.; Lei, W.; Lefebvre, M. P.; Bozdag, E.; Komatitsch, D.; Tromp, J.

    2017-12-01

    Selecting time windows from seismograms such that the synthetic measurements (from simulations) and measured observations are sufficiently close is indispensable in a global adjoint tomography framework. The increasing amount of seismic data collected every day around the world demands "intelligent" algorithms for seismic window selection. While the traditional FLEXWIN algorithm can be "automatic" to some extent, it still requires both human input and human knowledge or experience, and thus is not deemed fully automatic. The goal of intelligent window selection is to select windows automatically based on a learnt engine built upon the huge number of existing windows generated through the adjoint tomography project. We have formulated automatic window selection as a classification problem: all possible misfit calculation windows are classified as either usable or unusable. Given a large number of windows with a known selection mode (select or not select), we train a neural network to predict the selection mode of an arbitrary input window. Currently, the five features we extract from each window are its cross-correlation value, cross-correlation time lag, amplitude ratio between observed and synthetic data, window length, and minimum STA/LTA value; more features can be included in the future. We use these features to characterize each window for training a multilayer perceptron neural network (MPNN). Training the MPNN is equivalent to solving a non-linear optimization problem. We use backpropagation to derive the gradient of the loss function with respect to the weighting matrices and bias vectors, and the mini-batch stochastic gradient method to iteratively optimize the MPNN. Numerical tests show that, with a careful selection of the training data and a sufficient amount of it, we are able to train a robust neural network capable of selecting windows in arbitrary earthquake data with negligible error relative to existing selection methods (e.g. FLEXWIN). We will introduce in detail the mathematical formulation of the window-selection-oriented MPNN and show very encouraging results from applying the new algorithm to real earthquake data.
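
    Framed as binary classification over the five window features listed above, the selection engine can be prototyped in a few lines with scikit-learn (synthetic feature vectors here; the feature names follow the abstract, while the labelling rule, network size and training settings are placeholders).

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(4)
    n = 5000
    # Five features per candidate window: cross-correlation value, time lag,
    # amplitude ratio (obs/syn), window length, minimum STA/LTA.
    X = np.column_stack([
        rng.uniform(0, 1, n),        # cross-correlation
        rng.uniform(-10, 10, n),     # time lag (s)
        rng.lognormal(0, 0.5, n),    # amplitude ratio
        rng.uniform(10, 200, n),     # window length (s)
        rng.uniform(0, 5, n),        # min STA/LTA
    ])
    # Toy labelling rule standing in for the human-curated window picks.
    y = ((X[:, 0] > 0.7) & (np.abs(X[:, 1]) < 5) & (X[:, 4] > 1.5)).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    # Multilayer perceptron trained with mini-batch stochastic gradient descent,
    # as in the abstract (layer sizes and learning rate are assumptions).
    clf = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(16, 8), solver="sgd", batch_size=64,
                      learning_rate_init=0.01, max_iter=500, random_state=0))
    clf.fit(X_tr, y_tr)
    print("hold-out accuracy:", round(clf.score(X_te, y_te), 3))
    ```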

  1. Conceptual design studies of the 5 m terahertz antenna for Dome A, Antarctica

    NASA Astrophysics Data System (ADS)

    Yang, Ji; Zuo, Ying-Xi; Lou, Zheng; Cheng, Jing-Quan; Zhang, Qi-Zhou; Shi, Sheng-Cai; Huang, Jia-Sheng; Yao, Qi-Jun; Wang, Zhong

    2013-12-01

    As the highest, coldest and driest place in Antarctica, Dome A provides exceptionally good conditions for ground-based observations in the terahertz wavebands. The 5 m Dome A Terahertz Explorer (DATE5) has been proposed to explore new terahertz windows, primarily at wavelengths between 350 and 200 μm. DATE5 will be an open-air, fully steerable telescope operating unmanned under remote control. The telescope must endure the harsh polar environment, including high altitude, very low temperature and very low air pressure. Its unique specifications, including high surface-shape and pointing accuracies and fully automatic year-round remote operation, along with a stringent limit on the periods of on-site assembly, testing and maintenance, bring a number of challenges to the design, construction, assembly and operation of the telescope. This paper introduces general concepts related to the design of the DATE5 antenna. Beginning with an overview of the environmental and operational limitations, the design specifications and requirements of the DATE5 antenna are listed. From these, major aspects of the conceptual design studies, including the antenna optics, the backup structure, the panels, the subreflector, the mounting and the antenna base structure, are explained. Some critical performance issues are justified through analyses using computational fluid dynamics, thermal analysis and de-icing studies, and approaches for test operation and on-site assembly are proposed. Based on these studies, we conclude that the specifications of the DATE5 antenna can generally be met by using enhanced technological approaches.

  2. Advanced signal processing based on support vector regression for lidar applications

    NASA Astrophysics Data System (ADS)

    Gelfusa, M.; Murari, A.; Malizia, A.; Lungaroni, M.; Peluso, E.; Parracino, S.; Talebzadeh, S.; Vega, J.; Gaudio, P.

    2015-10-01

    The LIDAR technique has recently found many applications in atmospheric physics and remote sensing. One of the main issues in the deployment of LIDAR-based systems is the filtering of the backscattered signal to alleviate the problems generated by noise. Improvement in the signal-to-noise ratio is typically achieved by averaging a quite large number (of the order of hundreds) of successive laser pulses. This approach can be effective but presents significant limitations. First of all, it places great stress on the laser source, particularly in systems for automatic monitoring of large areas over long periods. Secondly, it can become difficult to implement in applications characterised by rapid variations of the atmosphere, for example in the case of pollutant emissions, or by abrupt changes in the noise. In this contribution, a new method for the software filtering and denoising of LIDAR signals is presented. The technique is based on support vector regression. The proposed method is insensitive to the statistics of the noise and is therefore fully general and quite robust. The developed numerical tool has been systematically compared with the most powerful techniques available, using both synthetic and experimental data. Its performance has been tested for various statistical distributions of the noise and for other disturbances of the acquired signal, such as outliers. The competitive advantages of the proposed method are fully documented. The potential of the proposed approach to widen the capability of the LIDAR technique, particularly in the detection of widespread smoke, is discussed in detail.
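
    Denoising a single backscatter trace with support vector regression can be sketched as follows (scikit-learn; the synthetic exponential-decay signal and the SVR hyperparameters are illustrative only, not the paper's configuration).

    ```python
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(5)
    # Synthetic single-shot LIDAR return: exponential decay plus a "plume" bump,
    # corrupted with heavy-tailed noise whose statistics the method need not know.
    t = np.linspace(0, 10, 500)                     # range gate (arbitrary units)
    clean = np.exp(-0.4 * t) + 0.3 * np.exp(-0.5 * (t - 6.0) ** 2)
    noisy = clean + rng.standard_t(df=3, size=t.size) * 0.05

    # Epsilon-SVR with an RBF kernel regresses signal amplitude on range;
    # the epsilon-insensitive tube gives robustness to outliers.
    svr = SVR(kernel="rbf", C=10.0, gamma=2.0, epsilon=0.02)
    svr.fit(t.reshape(-1, 1), noisy)
    denoised = svr.predict(t.reshape(-1, 1))

    rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
    print(f"RMSE noisy: {rmse(noisy, clean):.4f}  denoised: {rmse(denoised, clean):.4f}")
    ```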

  3. Towards fully automated structure-based function prediction in structural genomics: a case study.

    PubMed

    Watson, James D; Sanderson, Steve; Ezersky, Alexandra; Savchenko, Alexei; Edwards, Aled; Orengo, Christine; Joachimiak, Andrzej; Laskowski, Roman A; Thornton, Janet M

    2007-04-13

    As the global Structural Genomics projects have picked up pace, the number of structures annotated in the Protein Data Bank as hypothetical protein or unknown function has grown significantly. A major challenge now involves the development of computational methods to assign functions to these proteins accurately and automatically. As part of the Midwest Center for Structural Genomics (MCSG) we have developed a fully automated functional analysis server, ProFunc, which performs a battery of analyses on a submitted structure. The analyses combine a number of sequence-based and structure-based methods to identify functional clues. After the first stage of the Protein Structure Initiative (PSI), we review the success of the pipeline and the importance of structure-based function prediction. As a dataset, we have chosen all structures solved by the MCSG during the 5 years of the first PSI. Our analysis suggests that two of the structure-based methods are particularly successful and provide examples of local similarity that is difficult to identify using current sequence-based methods. No one method is successful in all cases, so, through the use of a number of complementary sequence and structural approaches, the ProFunc server increases the chances that at least one method will find a significant hit that can help elucidate function. Manual assessment of the results is a time-consuming process and subject to individual interpretation and human error. We present a method based on the Gene Ontology (GO) schema using GO-slims that can allow the automated assessment of hits with a success rate approaching that of expert manual assessment.

  4. Automatic 3d Building Model Generations with Airborne LiDAR Data

    NASA Astrophysics Data System (ADS)

    Yastikli, N.; Cetin, Z.

    2017-11-01

    LiDAR systems have become more and more popular because of their potential for obtaining point clouds of vegetation and man-made objects on the earth's surface in an accurate and quick way. Nowadays, these airborne systems are frequently used in a wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, 3D modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. 3D building model generation is one of the most prominent applications of LiDAR systems, and it is of major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication, mobile navigation, etc. Manual or semi-automatic 3D building model generation is a costly and very time-consuming process for these applications. Thus, a simple and quick approach for automatic 3D building model generation is needed for the many studies that include building modelling. In this study, automatic generation of 3D building models from airborne LiDAR data is aimed at. An approach is proposed for automatic 3D building model generation that includes automatic point-based classification of the raw LiDAR point cloud. The proposed point-based classification uses hierarchical rules for the automatic production of 3D building models. Detailed analyses of the parameters used in the hierarchical rules were performed to improve classification results using different test areas identified in the study area. The proposed approach was tested in the study area, which partly contains open areas, forest areas and many types of buildings, in Zekeriyakoy, Istanbul, using the TerraScan module of TerraSolid. The 3D building model was generated automatically using the results of the automatic point-based classification. The results obtained in the study area verified that 3D building models can be generated automatically and successfully from raw LiDAR point cloud data.

  5. Automatic Parametrization of Somatosensory Evoked Potentials With Chirp Modeling.

    PubMed

    Vayrynen, Eero; Noponen, Kai; Vipin, Ashwati; Thow, X Y; Al-Nashash, Hasan; Kortelainen, Jukka; All, Angelo

    2016-09-01

    In this paper, an approach using polynomial-phase chirp signals to model somatosensory evoked potentials (SEPs) is proposed. SEP waveforms are assumed to be impulses undergoing group-velocity dispersion while propagating along a multipath neural connection. A mathematical analysis of pulse dispersion resulting in chirp signals is performed, and an automatic parameterization of SEPs using chirp models is proposed. A Particle Swarm Optimization algorithm is used to optimize the model parameters, and features describing the latencies and amplitudes of SEPs are automatically derived. A rat model is then used to evaluate the automatic parameterization of SEPs in two experimental cases, i.e., anesthesia level and spinal cord injury (SCI). Experimental results show that the chirp-based model parameters and the derived SEP features are significant in describing both anesthesia-level and SCI changes. The proposed automatic optimization-based approach for extracting chirp parameters offers potential for detailed SEP analysis in future studies. An implementation of the method in the Matlab technical computing language is provided online.
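
    The parameterization idea, fitting a chirp model to an evoked potential by particle swarm optimization, can be prototyped as below (Python/NumPy; a bare-bones global-best PSO and a Gaussian-windowed quadratic-phase chirp stand in for the paper's full model, and all bounds and swarm settings are assumptions).

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    t = np.linspace(0, 0.05, 500)                       # 50 ms epoch

    def chirp(params, t):
        """Gaussian-windowed chirp with polynomial (quadratic) phase."""
        amp, t0, width, f0, f1 = params
        phase = 2 * np.pi * (f0 * (t - t0) + 0.5 * f1 * (t - t0) ** 2)
        return amp * np.exp(-((t - t0) / width) ** 2) * np.cos(phase)

    true = np.array([1.0, 0.02, 0.005, 300.0, 4000.0])
    sep = chirp(true, t) + rng.normal(0, 0.05, t.size)  # noisy synthetic SEP

    def cost(p):
        return np.mean((chirp(p, t) - sep) ** 2)

    # Global-best particle swarm optimization over the 5 model parameters.
    lo = np.array([0.1, 0.00, 0.001, 50.0, 0.0])
    hi = np.array([2.0, 0.05, 0.020, 1000.0, 8000.0])
    n, dim = 60, 5
    x = rng.uniform(lo, hi, (n, dim)); v = np.zeros((n, dim))
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    for _ in range(300):
        g = pbest[np.argmin(pcost)]                     # swarm's global best
        v = 0.7 * v + 1.5 * rng.random((n, dim)) * (pbest - x) \
                   + 1.5 * rng.random((n, dim)) * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pcost
        pbest[improved], pcost[improved] = x[improved], c[improved]
    print("estimated latency t0 and amplitude:", pbest[np.argmin(pcost)][[1, 0]])
    ```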

  6. [Progress in the development of insulin pumps and their advanced automatic functions].

    PubMed

    Prázný, Martin

    2015-04-01

    Patients with type 1 diabetes are exposed to a permanent burden consisting of careful glucose self-monitoring and precise insulin dosing based on measured glucose values, the carbohydrate content of food, and both planned and unplanned physical activity. Erroneous insulin dosing frequently causes both hypoglycemia and hyperglycemia. Hypoglycemia is, however, the most clinically significant complication limiting optimal diabetes control. Automatic features for insulin dosing integrated in insulin pumps are thus very important. Low Glucose Suspend (LGS) and Predictive Low Glucose Management (PLGM) use glucose sensor values to prevent hypoglycemia and shorten the time spent in the hypoglycemic range, and they represent a further step toward a fully closed-loop insulin treatment system.

  7. Deep convolutional neural network for prostate MR segmentation

    NASA Astrophysics Data System (ADS)

    Tian, Zhiqiang; Liu, Lizhi; Fei, Baowei

    2017-03-01

    Automatic segmentation of the prostate in magnetic resonance imaging (MRI) has many applications in prostate cancer diagnosis and therapy. We propose a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage on prostate MR images and the corresponding ground truths, and learns to infer a pixel-wise segmentation. Experiments were performed on our in-house data set, which contains prostate MR images of 20 patients. The proposed CNN model obtained a mean Dice similarity coefficient of 85.3% ± 3.2% compared to manual segmentation. The experimental results show that our deep CNN model can yield satisfactory segmentation of the prostate.

  8. Advanced Transport Operating System (ATOPS) Flight Management/Flight Controls (FM/FC) software description

    NASA Technical Reports Server (NTRS)

    Wolverton, David A.; Dickson, Richard W.; Clinedinst, Winston C.; Slominski, Christopher J.

    1993-01-01

    The flight software developed for the Flight Management/Flight Controls (FM/FC) MicroVAX computer used on the Transport Systems Research Vehicle for Advanced Transport Operating Systems (ATOPS) research is described. The FM/FC software computes navigation position estimates, guidance commands, and those commands issued to the control surfaces to direct the aircraft in flight. Various modes of flight are provided for, ranging from computer assisted manual modes to fully automatic modes including automatic landing. A high-level system overview as well as a description of each software module comprising the system is provided. Digital systems diagrams are included for each major flight control component and selected flight management functions.

  9. Evaluation of new data processing algorithms for planar gated ventriculography (MUGA)

    PubMed Central

    Fair, Joanna R.; Telepak, Robert J.

    2009-01-01

    Before implementing one of two new LVEF radionuclide gated ventriculogram (MUGA) systems, the results from 312 consecutive parallel patient studies were evaluated. Each gamma-camera acquisition was simultaneously processed by the semi-automatic Medasys Pinnacle system and by fully automatic and semi-automatic Philips nuclear medicine computer systems. The Philips systems yielded LVEF results within ±5 LVEF percentage points of the Medasys system in fewer than half of the studies; the remaining values were higher or lower than those from the long-used Medasys system. These differences might have changed chemotherapy decisions for cancer patients. As a result, our institution elected not to implement either new system.

  10. Automatic Building Abstraction from Aerial Photogrammetry

    NASA Astrophysics Data System (ADS)

    Ley, A.; Hänsch, R.; Hellwich, O.

    2017-09-01

    Multi-view stereo has been shown to be a viable tool for the creation of realistic 3D city models. Nevertheless, it still poses significant challenges, since it results in dense but noisy and incomplete point clouds when applied to aerial images. 3D city modelling usually requires a different representation of the 3D scene than these point clouds. This paper applies a fully automatic pipeline to generate a simplified mesh from a given dense point cloud. The mesh provides a certain level of abstraction, as it consists only of relatively large planar and textured surfaces. Thus, it is possible to remove noise, outliers and clutter while maintaining a high level of accuracy.

  11. fgui: A Method for Automatically Creating Graphical User Interfaces for Command-Line R Packages

    PubMed Central

    Hoffmann, Thomas J.; Laird, Nan M.

    2009-01-01

    The fgui R package is designed for developers of R packages, to help rapidly, and sometimes fully automatically, create a graphical user interface for a command-line R package. The interface is built upon the Tcl/Tk graphical interface included in R. The package further assists the developer by loading the help files from the command-line functions to provide context-sensitive help to the user, with no additional effort from the developer. Passing a function as the argument to the routines in the fgui package creates a graphical interface for the function, and further options are available to tweak this interface for those who want more flexibility. PMID:21625291

  12. Fast automatic segmentation of anatomical structures in x-ray computed tomography images to improve fluorescence molecular tomography reconstruction.

    PubMed

    Freyer, Marcus; Ale, Angelique; Schulz, Ralf B; Zientkowska, Marta; Ntziachristos, Vasilis; Englmeier, Karl-Hans

    2010-01-01

    The recent development of hybrid imaging scanners that integrate fluorescence molecular tomography (FMT) and x-ray computed tomography (XCT) allows the utilization of x-ray information as image priors for improving optical tomography reconstruction. To fully capitalize on this capacity, we consider a framework for the automatic and fast detection of different anatomic structures in murine XCT images. To accurately differentiate between different structures such as bone, lung, and heart, a combination of image processing steps including thresholding, seed growing, and signal detection are found to offer optimal segmentation performance. The algorithm and its utilization in an inverse FMT scheme that uses priors is demonstrated on mouse images.

  13. Quantification of common carotid artery and descending aorta vessel wall thickness from MR vessel wall imaging using a fully automated processing pipeline.

    PubMed

    Gao, Shan; van 't Klooster, Ronald; Brandts, Anne; Roes, Stijntje D; Alizadeh Dehnavi, Reza; de Roos, Albert; Westenberg, Jos J M; van der Geest, Rob J

    2017-01-01

    To develop and evaluate a method that can fully automatically identify the vessel wall boundaries and quantify the wall thickness for both the common carotid artery (CCA) and the descending aorta (DAO) from axial magnetic resonance (MR) images. 3T MRI data acquired with a T1-weighted gradient-echo black-blood imaging sequence from carotid (39 subjects) and aorta (39 subjects) were used to develop and test the algorithm. Vessel wall segmentation was achieved by fitting a 3D cylindrical B-spline surface to the boundaries of the lumen and of the outer wall, respectively. The tube fitting was based on edge detection performed on the signal intensity (SI) profile along the surface normal. To achieve a fully automated process, a Hough transform (HT) was developed to estimate the lumen centerline and radii of the target vessel. Using the outputs of the HT, a tube model for lumen segmentation was initialized and deformed to fit the image data. Finally, the lumen segmentation was dilated to initialize the adaptation of the outer-wall tube. The algorithm was validated by determining: 1) its performance against manual tracing; 2) its interscan reproducibility in quantifying vessel wall thickness (VWT); 3) its capability to detect VWT differences in hypertensive patients compared with healthy controls. Statistical analyses, including Bland-Altman analysis, t-tests, and sample size calculations, were performed to evaluate the algorithm. The mean distance between the manual and automatically detected lumen/outer wall contours was 0.00 ± 0.23/0.09 ± 0.21 mm for the CCA and 0.12 ± 0.24/0.14 ± 0.35 mm for the DAO. No significant difference was observed between the interscan VWT assessments using automated segmentation for either the CCA (P = 0.19) or the DAO (P = 0.94). Both manual and automated segmentation detected significantly higher carotid (P = 0.016 and P = 0.005) and aortic (P < 0.001 and P = 0.021) wall thickness in the hypertensive patients. A reliable and reproducible pipeline for fully automatic vessel wall quantification was developed and validated on healthy volunteers as well as on patients with increased vessel wall thickness. This method holds promise for efficient image interpretation in large-scale cohort studies. Level of Evidence: 4. J. Magn. Reson. Imaging 2017;45:215-228. © 2016 International Society for Magnetic Resonance in Medicine.
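
    The fully automatic initialization, locating the lumen center and radius with a Hough transform, can be illustrated with scikit-image's circular Hough transform (synthetic cross-sectional image; the radii range, edge threshold and image size are assumptions, not the paper's settings).

    ```python
    import numpy as np
    from skimage.draw import circle_perimeter
    from skimage.transform import hough_circle, hough_circle_peaks

    # Synthetic axial slice: a circular lumen edge in a noisy image.
    img = np.zeros((128, 128))
    rr, cc = circle_perimeter(64, 60, 18)
    img[rr, cc] = 1.0
    img += np.random.default_rng(7).normal(0, 0.1, img.shape)

    # Circular Hough transform over a plausible range of lumen radii (pixels).
    radii = np.arange(10, 30)
    hspace = hough_circle(img > 0.5, radii)
    _, cx, cy, r = hough_circle_peaks(hspace, radii, total_num_peaks=1)
    print(f"lumen center ~({cx[0]}, {cy[0]}), radius ~{r[0]} px")
    ```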

  14. Field performance of a low-cost and fully-automated blood counting system operated by trained and untrained users (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Xie, Dengling; Xie, Yanjun; Liu, Peng; Tong, Lieshu; Chu, Kaiqin; Smith, Zachary J.

    2017-02-01

    Current flow-based blood counting devices require expensive and centralized medical infrastructure and are not appropriate for field use. In this paper we report a method to count red blood cells, white blood cells and platelets with a low-cost, fully automated blood counting system. The approach consists of using a compact, custom-built microscope with a large field of view to record bright-field and fluorescence images of samples that are diluted with a single, stable reagent mixture and counted using automatic algorithms. Sample collection is performed manually using a spring-loaded lancet and volume-metering capillary tubes. The capillaries are then dropped into a tube of pre-measured reagents and gently shaken for 10-30 seconds. The sample is loaded into a measurement chamber and placed on a custom 3D-printed platform. Sample translation and focusing are fully automated, and the user has only to press a button for the measurement and analysis to commence. The cost of the system is minimized through the use of custom-designed motorized components. We performed a series of comparative experiments by trained and untrained users on blood from adults and children. We compare the performance of our system, as operated by trained and untrained users, to the clinical gold standard using a Bland-Altman analysis, demonstrating good agreement of our system with the clinical standard. The system's low cost, complete automation, and good field performance indicate that it can be successfully translated for use in low-resource settings where central hematology laboratories are not accessible.
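
    The agreement analysis mentioned above can be reproduced with a short Bland-Altman computation (Python/NumPy with Matplotlib; the paired counts below are simulated, not the study's data).

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(8)
    # Simulated paired WBC counts (10^9/L): clinical analyzer vs. field device.
    reference = rng.uniform(4, 11, 60)
    device = reference + rng.normal(0.1, 0.4, 60)      # small bias, some scatter

    mean = (reference + device) / 2
    diff = device - reference
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)                      # 95% limits of agreement

    plt.scatter(mean, diff, s=12)
    for y, style in [(bias, "-"), (bias + loa, "--"), (bias - loa, "--")]:
        plt.axhline(y, linestyle=style, color="k")
    plt.xlabel("mean of methods (10$^9$/L)")
    plt.ylabel("device - reference (10$^9$/L)")
    plt.title(f"Bland-Altman: bias={bias:.2f}, LoA=±{loa:.2f}")
    plt.show()
    ```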

  15. Automatic Generation of Customized, Model Based Information Systems for Operations Management.

    DTIC Science & Technology

    The paper discusses the need for developing a customized, model-based system to support management decision making in the field of operations management. It provides a critique of the current approaches available, formulates a framework to classify logistics decisions, and suggests an approach for the automatic development of logistics systems. (Author)

  16. Automatic Recommendations for E-Learning Personalization Based on Web Usage Mining Techniques and Information Retrieval

    ERIC Educational Resources Information Center

    Khribi, Mohamed Koutheair; Jemni, Mohamed; Nasraoui, Olfa

    2009-01-01

    In this paper, we describe an automatic personalization approach aiming to provide online automatic recommendations for active learners without requiring their explicit feedback. Recommended learning resources are computed based on the current learner's recent navigation history, as well as exploiting similarities and dissimilarities among…

  17. Automatic Syllabification in English: A Comparison of Different Algorithms

    ERIC Educational Resources Information Center

    Marchand, Yannick; Adsett, Connie R.; Damper, Robert I.

    2009-01-01

    Automatic syllabification of words is challenging, not least because the syllable is not easy to define precisely. Consequently, no accepted standard algorithm for automatic syllabification exists. There are two broad approaches: rule-based and data-driven. The rule-based method effectively embodies some theoretical position regarding the…

  18. Automatic Thesaurus Generation for an Electronic Community System.

    ERIC Educational Resources Information Center

    Chen, Hsinchun; And Others

    1995-01-01

    This research reports an algorithmic approach to the automatic generation of thesauri for electronic community systems. The techniques used include term filtering, automatic indexing, and cluster analysis. The Worm Community System, used by molecular biologists studying the nematode worm C. elegans, was used as the testbed for this research.…

  19. Web-based computer-aided-diagnosis (CAD) system for bone age assessment (BAA) of children

    NASA Astrophysics Data System (ADS)

    Zhang, Aifeng; Uyeda, Joshua; Tsao, Sinchai; Ma, Kevin; Vachon, Linda A.; Liu, Brent J.; Huang, H. K.

    2008-03-01

    Bone age assessment (BAA) of children is a clinical procedure frequently performed in pediatric radiology to evaluate the stage of skeletal maturation based on a left hand and wrist radiograph. The most commonly used standard, the Greulich and Pyle (G&P) Hand Atlas, was developed 50 years ago and is based exclusively on a Caucasian population. Moreover, inter- and intra-observer discrepancies using this method create the need for an objective and automatic BAA method. A digital hand atlas (DHA) has been collected with 1,400 hand images of normal children of Asian, African American, Caucasian and Hispanic descent. Based on the DHA, a fully automatic, objective computer-aided-diagnosis (CAD) method was developed and adapted to specific populations. To bring the DHA and the CAD method into the clinical environment as a useful tool for assisting radiologists to achieve higher accuracy in BAA, a web-based system with a direct connection to a clinical site was designed as a novel implementation approach for online, real-time BAA. The core of the system, a CAD server, receives the image from the clinical site, processes it with the CAD method and, finally, generates a report. A web service publishes the results, and radiologists at the clinical site can review them online within minutes. This prototype can be easily extended to multiple clinical sites and will provide the foundation for broader use of the CAD system for BAA.

  20. Performance of wavelet analysis and neural networks for pathological voices identification

    NASA Astrophysics Data System (ADS)

    Salhi, Lotfi; Talbi, Mourad; Abid, Sabeur; Cherif, Adnane

    2011-09-01

    Within the medical environment, diverse techniques exist to assess the state of a patient's voice. The inspection technique is inconvenient for a number of reasons, such as its high cost, the duration of the inspection and, above all, the fact that it is an invasive technique. This study focuses on a robust, rapid and accurate system for automatic identification of pathological voices. The system employs a non-invasive, inexpensive and fully automated method based on a hybrid approach: wavelet transform analysis and a neural network classifier. First, we present the results obtained in our previous study using classic feature parameters; these results allow visual identification of pathological voices. Second, quantified parameters derived from the wavelet analysis are proposed to characterise the speech sample. In addition, a system of multilayer neural networks (MNNs) has been developed to carry out the automatic detection of pathological voices. The developed method was evaluated using a voice database composed of recorded voice samples (continuous speech) from normophonic and dysphonic speakers. The dysphonic speakers were patients of the National Hospital 'RABTA' of Tunis, Tunisia, and a University Hospital in Brussels, Belgium. Experimental results indicate a success rate ranging between 75% and 98.61% for discrimination of normal and pathological voices using the proposed parameters and the neural network classifier. We also compared the average classification rates based on the MNN, a Gaussian mixture model and support vector machines.

  1. A model of traffic signs recognition with convolutional neural network

    NASA Astrophysics Data System (ADS)

    Hu, Haihe; Li, Yujian; Zhang, Ting; Huo, Yi; Kuang, Wenqing

    2016-10-01

    In real traffic scenes, the quality of captured images is generally low due to factors such as lighting conditions and occlusion. All of these factors are challenging for automated traffic-sign recognition algorithms. Deep learning has recently provided a new way to solve this kind of problem: a deep network can automatically learn features from a large number of data samples and obtain excellent recognition performance. We therefore approach the task of traffic sign recognition as a general vision problem, with few assumptions specific to road signs. We propose a Convolutional Neural Network (CNN) model and apply it to the task of traffic sign recognition. The proposed model adopts a deep CNN as the supervised learning model, directly takes the collected traffic sign images as input, alternates convolutional and subsampling layers, and automatically extracts the features for recognizing the traffic sign images. The proposed model comprises an input layer, three convolutional layers, three subsampling layers, a fully connected layer, and an output layer. To validate the proposed model, experiments were carried out on the public dataset of the China competition on fuzzy image processing. Experimental results show that the proposed model achieves a recognition accuracy of 99.01% on the training dataset and a recognition rate of 92% in the preliminary contest, ranking within the top four.
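
    A minimal PyTorch definition matching the layer layout described above (three convolutional layers, each followed by a subsampling layer, then a fully connected layer and an output layer) might look like this; the channel counts, kernel sizes, input resolution and 43-class output are assumptions for illustration, not the paper's configuration.

    ```python
    import torch
    import torch.nn as nn

    class TrafficSignCNN(nn.Module):
        """Three conv + subsampling stages, one fully connected layer, one output."""
        def __init__(self, n_classes: int = 43):      # class count is an assumption
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),  # 48->24
                nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2), # 24->12
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 12->6
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 6 * 6, 128), nn.ReLU(),  # fully connected layer
                nn.Linear(128, n_classes),              # output layer
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x))

    model = TrafficSignCNN()
    logits = model(torch.randn(8, 3, 48, 48))          # batch of 48x48 RGB crops
    print(logits.shape)                                 # torch.Size([8, 43])
    ```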

  2. Diagnosis of Tempromandibular Disorders Using Local Binary Patterns.

    PubMed

    Haghnegahdar, A A; Kolahi, S; Khojastepour, L; Tajeripour, F

    2018-03-01

    Temporomandibular joint disorder (TMD) may be manifested as structural changes in bone through modification, adaptation or direct destruction. We propose using Local Binary Pattern (LBP) characteristics and histograms of oriented gradients computed on the recorded images as a diagnostic tool in TMD assessment. CBCT images of 66 patients (132 joints) with TMD and 66 normal cases (132 joints) were collected, and two coronal cuts were prepared from each condyle; the images were limited to the head of the mandibular condyle. To extract image features, we first apply LBP and then compute histograms of oriented gradients. To reduce dimensionality, Singular Value Decomposition (SVD) is applied to the feature-vector matrix of all images. For evaluation, we used K-nearest-neighbor (K-NN), Support Vector Machine, Naïve Bayes and Random Forest classifiers, with Receiver Operating Characteristic (ROC) analysis to evaluate the hypothesis. The K-NN classifier achieves very good accuracy (0.9242), along with desirable sensitivity (0.9470) and specificity (0.9015), whereas the other classifiers yield lower accuracy, sensitivity and specificity. In summary, we propose a fully automatic approach to detect TMD using image processing techniques based on local binary patterns and feature extraction. K-NN was the best classifier in our experiments for separating patients from healthy individuals, with 92.42% accuracy, 94.70% sensitivity and 90.15% specificity. The proposed method can help automatically diagnose TMD at its initial stages.
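
    The feature pipeline, LBP, then oriented-gradient histograms, then SVD-based dimensionality reduction, then a K-NN classifier, can be sketched with scikit-image and scikit-learn as follows (random images stand in for the CBCT cuts; all parameter values, including the LBP radius and SVD rank, are illustrative assumptions).

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern, hog
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.decomposition import TruncatedSVD
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(9)

    def features(img):
        # LBP texture map, then a histogram of oriented gradients on top of it.
        lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
        return hog(lbp, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))

    # Stand-in dataset: 64x64 "condyle" patches, two classes (TMD vs. normal),
    # with an artificial texture difference between the classes.
    X = np.array([features(rng.random((64, 64)) +
                           y * 0.8 * np.sin(np.arange(64) / 3.0)[None, :])
                  for y in (0, 1) for _ in range(40)])
    y = np.repeat([0, 1], 40)

    # SVD for dimensionality reduction, then K-NN, as in the abstract.
    clf = make_pipeline(TruncatedSVD(n_components=20, random_state=0),
                        KNeighborsClassifier(n_neighbors=3))
    print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    ```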

  3. A Flexible Statechart-to-Model-Checker Translator

    NASA Technical Reports Server (NTRS)

    Rouquette, Nicolas; Dunphy, Julia; Feather, Martin S.

    2000-01-01

    Many current-day software design tools offer some variant of statechart notation for system specification. We, like others, have built an automatic translator from (a subset of) statecharts to a model checker, for use in validating behavioral requirements. Our translator is designed to be flexible, which allows us to quickly adjust it to variants of statechart semantics, including problem-specific notational conventions that designers employ. Our system demonstration will be of interest to the following two communities: (1) Potential end-users: Our demonstration will show translation from statecharts created in a commercial UML tool (Rational Rose) to Promela, the input language of Holzmann's model checker SPIN. The translation is accomplished automatically. To accommodate the major variants of statechart semantics, our tool offers user-selectable choices among semantic alternatives; options for customized semantic variants are also made available. The net result is an easy-to-use tool that operates on a wide range of statechart diagrams to automate the pathway to model-checking input. (2) Other researchers: Our translator embodies, in one tool, ideas and approaches drawn from several sources. Solutions to the major challenges of statechart-to-model-checker translation (e.g., determining which transition(s) will fire, handling of concurrent activities) are realized in a uniform, fully mechanized setting. The way in which the underlying architecture of the translator itself facilitates flexible and customizable translation will also be evident.

  4. Automatically high accurate and efficient photomask defects management solution for advanced lithography manufacture

    NASA Astrophysics Data System (ADS)

    Zhu, Jun; Chen, Lijun; Ma, Lantao; Li, Dejian; Jiang, Wei; Pan, Lihong; Shen, Huiting; Jia, Hongmin; Hsiang, Chingyun; Cheng, Guojie; Ling, Li; Chen, Shijie; Wang, Jun; Liao, Wenkui; Zhang, Gary

    2014-04-01

    Defect review is a time-consuming job, and human error makes its results inconsistent. Defects located in don't-care areas, such as dark areas, do not hurt yield and need not be reviewed, whereas critical-area defects, such as defects in clear areas, can impact yield dramatically and require more attention during review. With decreasing integrated circuit dimensions, thousands of mask defects or more are routinely detected during an inspection, and traditional manual or simple classification approaches are unable to meet efficiency and accuracy requirements. This paper focuses on an automatic defect management and classification solution using the image output of Lasertec inspection equipment and anchor-pattern-centric image processing technology. The system can handle large numbers of defects with quick and accurate classification results. Our experiments include Die-to-Die and Single-Die modes, in which the classification accuracy reaches 87.4% and 93.3%, respectively. No critical or printable defects are missed in our test cases; the misclassification rates are 0.25% in Die-to-Die mode and 0.24% in Single-Die mode. This kind of miss rate is encouraging and acceptable for application on a production line. The results can be output and reloaded back into the inspection machine for further review. This step helps users validate uncertain defects with clear, magnified images when the captured images cannot provide enough information to make a judgment. The system effectively reduces expensive inline defect review time. As a fully inline automated defect management solution, the system is compatible with the current inspection approach and can be integrated with optical simulation, including scoring functions, to guide wafer-level defect inspection.

  5. Shape memory alloy resetable spring lift for pedestrian protection

    NASA Astrophysics Data System (ADS)

    Barnes, Brian M.; Brei, Diann E.; Luntz, Jonathan E.; Strom, Kenneth; Browne, Alan L.; Johnson, Nancy

    2008-03-01

    Pedestrian protection has become an increasingly important aspect of automotive safety, with new regulations taking effect around the world. Because it is increasingly difficult to meet these regulations with traditional passive approaches, active lifts are being explored that increase the "crush zone" between the hood and rigid under-hood components as a means of mitigating the consequences of an impact with a non-occupant. Active lifts, however, are technically challenging because of the simultaneous demands of high force, long stroke and quick timing, with the result that most current devices are single-use. This paper introduces the SMArt (Shape Memory Alloy ReseTable) Spring Lift, an automatically resetable and fully reusable device that couples conventional compression springs, which store the energy required for a hood lift, with shape memory alloy actuators to achieve both an ultra-high-speed release of the spring and automatic reset of the system for multiple uses. Each of the four SMArt device subsystems (lift, release, lower and reset/dissipate) is individually described. Two identical complete prototypes were fabricated and mounted at the rear corners of the hood of a full-scale vehicle testbed at the SMARTT (Smart Material Advanced Research and Technology Transfer) lab at the University of Michigan. Full operational-cycle testing of a stationary vehicle in a laboratory setting confirmed the ultrafast latch release, the controlled lift profile, the gravity lowering that repositions the hood, and the spring recompression via the ratchet engine that successfully rearms the device for repeated cycles. While this is only a laboratory demonstration, and extensive testing and development would be required for transition to a fielded product, this study does indicate that the SMArt Lift has promise as an alternative approach to pedestrian protection.

  6. Influence of metallic artifact filtering on MEG signals for source localization during interictal epileptiform activity

    NASA Astrophysics Data System (ADS)

    Migliorelli, Carolina; Alonso, Joan F.; Romero, Sergio; Mañanas, Miguel A.; Nowak, Rafał; Russi, Antonio

    2016-04-01

    Objective. Medically intractable epilepsy is a common condition affecting 40% of epileptic patients, who generally have to undergo resective surgery. Magnetoencephalography (MEG) has been increasingly used to identify the epileptogenic foci through equivalent current dipole (ECD) modeling, one of the most accepted methods for accurately localizing interictal epileptiform discharges (IEDs). Modeling requires that MEG signals be adequately preprocessed to reduce interference, a task that has been greatly improved by the use of blind source separation (BSS) methods. MEG recordings are highly sensitive to metallic interference originating inside the head from implanted intracranial electrodes, dental prostheses, etc., and also coming from external sources such as pacemakers or vagal stimulators. To reduce these artifacts, a BSS-based fully automatic procedure was recently developed and validated, showing an effective reduction of metallic artifacts in simulated and real signals (Migliorelli et al 2015 J. Neural Eng. 12 046001). The main objective of this study was to evaluate its effects on the detection of IEDs and ECD modeling in patients with focal epilepsy and metallic interference. Approach. A comparison was performed between the resulting positions of ECDs: without removing metallic interference; rejecting only channels with large metallic artifacts; and after BSS-based reduction. Measures of dispersion and distance of ECDs were defined to analyze the results. Main results. The relationship between the artifact-to-signal ratio and ECD fitting showed that higher levels of metallic interference produced highly scattered dipoles. The results revealed a significant reduction in dispersion using the BSS-based procedure, yielding feasible locations of ECDs in contrast to the other two approaches. Significance. The automatic BSS-based method can be applied to MEG datasets affected by metallic artifacts as a processing step to improve the localization of epileptic foci.
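
    The BSS preprocessing idea, unmixing multichannel recordings, dropping artifact components, and re-projecting, can be demonstrated with FastICA from scikit-learn (synthetic three-source data; the kurtosis-based component selection here is only a stand-in for the automatic criterion of the cited method).

    ```python
    import numpy as np
    from scipy.stats import kurtosis
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(10)
    t = np.linspace(0, 10, 2000)
    # Synthetic sources: two "brain" rhythms and one spiky metallic artifact.
    s_brain1 = np.sin(2 * np.pi * 10 * t)
    s_brain2 = np.sin(2 * np.pi * 6 * t + 1.0)
    s_artifact = (rng.random(t.size) < 0.01) * rng.normal(0, 20, t.size)
    S = np.c_[s_brain1, s_brain2, s_artifact]
    A = rng.normal(size=(8, 3))                 # mixing into 8 "MEG channels"
    X = S @ A.T + rng.normal(0, 0.05, (t.size, 8))

    # Unmix with FastICA, flag high-kurtosis (spiky) components as artifacts.
    ica = FastICA(n_components=3, random_state=0)
    sources = ica.fit_transform(X)
    bad = kurtosis(sources, axis=0) > 10        # threshold is an assumption
    sources[:, bad] = 0.0                       # zero out artifact components
    X_clean = ica.inverse_transform(sources)
    print("components removed:", int(bad.sum()), "of", sources.shape[1])
    ```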

  7. ERDA/NASA 100 kilowatt Mod-0 wind turbine operations and performance. [At the NASA Plum Brook Station, Ohio]

    NASA Technical Reports Server (NTRS)

    Thomas, R. L.; Richards, T. R.

    1977-01-01

    The ERDA/NASA 100 kW Mod-0 wind turbine is operating at the NASA Plum Brook Station near Sandusky, Ohio. The operation of the wind turbine has been fully demonstrated and includes start-up, synchronization to the utility network, blade pitch control for control of power and speed, and shut-down. Also, fully automatic operation has been demonstrated by use of a remote control panel, 50 miles from the site, similar to what a utility dispatcher might use. The operating systems are described, together with test experience concerning wind turbine loads, electrical power, and aerodynamic performance.

  8. Phase-amplitude imaging: its application to fully automated analysis of magnetic field measurements in laser-produced plasmas.

    PubMed

    Kalal, M; Nugent, K A; Luther-Davies, B

    1987-05-01

    An interferometric technique which enables simultaneous phase and amplitude imaging of optically transparent objects is discussed with respect to its application for the measurement of spontaneous toroidal magnetic fields generated in laser-produced plasmas. It is shown that this technique can replace the normal independent pair of optical systems (interferometry and shadowgraphy) by one system and use computer image processing to recover both the plasma density and magnetic field information with high accuracy. A fully automatic algorithm for the numerical analysis of the data has been developed and its performance demonstrated for the case of simulated as well as experimental data.
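
    One standard route to simultaneous phase and amplitude recovery from a single interferogram is Fourier fringe analysis; the sketch below is a generic illustration of that idea, not necessarily the authors' algorithm, and isolates the carrier sideband in the row-wise FFT to read phase and amplitude off the resulting analytic signal:

        # Fourier-transform fringe analysis sketch: for a fringe pattern
        # a + b*cos(2*pi*f0*x + phi), the positive carrier sideband carries
        # (b/2)*exp(i*phi), so its inverse FFT yields phase and amplitude.
        import numpy as np

        def phase_amplitude(fringes, carrier_band):
            """fringes: 2D interferogram; carrier_band: (lo, hi) column indices
            of the positive carrier sideband in the FFT along axis 1."""
            F = np.fft.fft(fringes, axis=1)
            lo, hi = carrier_band
            side = np.zeros_like(F)
            side[:, lo:hi] = F[:, lo:hi]          # keep only the +carrier sideband
            analytic = np.fft.ifft(side, axis=1)  # complex-valued analytic image
            phase = np.unwrap(np.angle(analytic), axis=1)
            amplitude = 2.0 * np.abs(analytic)    # factor 2 restores full modulation
            return phase, amplitude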

  9. Phase-amplitude imaging: its application to fully automated analysis of magnetic field measurements in laser-produced plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalal, M.; Nugent, K.A.; Luther-Davies, B.

    1987-05-01

    An interferometric technique which enables simultaneous phase and amplitude imaging of optically transparent objects is discussed with respect to its application for the measurement of spontaneous toroidal magnetic fields generated in laser-produced plasmas. It is shown that this technique can replace the normal independent pair of optical systems (interferometry and shadowgraphy) by one system and use computer image processing to recover both the plasma density and magnetic field information with high accuracy. A fully automatic algorithm for the numerical analysis of the data has been developed and its performance demonstrated for the case of simulated as well as experimental data.

  10. Association between fully automated MRI-based volumetry of different brain regions and neuropsychological test performance in patients with amnestic mild cognitive impairment and Alzheimer's disease.

    PubMed

    Arlt, Sönke; Buchert, Ralph; Spies, Lothar; Eichenlaub, Martin; Lehmbeck, Jan T; Jahn, Holger

    2013-06-01

    Fully automated magnetic resonance imaging (MRI)-based volumetry may serve as a biomarker for the diagnosis in patients with mild cognitive impairment (MCI) or dementia. We aimed to investigate the relation between fully automated MRI-based volumetric measures and neuropsychological test performance in amnestic MCI and patients with mild dementia due to Alzheimer's disease (AD) in a cross-sectional and longitudinal study. In order to assess a possible prognostic value of fully automated MRI-based volumetry for future cognitive performance, the rate of change of neuropsychological test performance over time was also tested for its correlation with fully automated MRI-based volumetry at baseline. In 50 subjects, 18 with amnestic MCI, 21 with mild AD, and 11 controls, neuropsychological testing and T1-weighted MRI were performed at baseline and at a mean follow-up interval of 2.1 ± 0.5 years (n = 19). Fully automated MRI volumetry of the grey matter volume (GMV) was performed using a combined stereotactic normalisation and segmentation approach as provided by SPM8 and a set of pre-defined binary lobe masks. Left and right hippocampus masks were derived from probabilistic cytoarchitectonic maps. Volumes of the inner and outer liquor space were also determined automatically from the MRI. Pearson's test was used for the correlation analyses. Left hippocampal GMV was significantly correlated with performance in memory tasks, and left temporal GMV was related to performance in language tasks. Bilateral frontal, parietal and occipital GMVs were correlated to performance in neuropsychological tests comprising multiple domains. The rate of GMV change in the left hippocampus was correlated with decline of performance in the Boston Naming Test (BNT), Mini-Mental Status Examination, and trail making test B (TMT-B). The decrease of BNT and TMT-A performance over time correlated with the loss of grey matter in multiple brain regions. We conclude that fully automated MRI-based volumetry allows detection of regional grey matter volume loss that correlates with neuropsychological performance in patients with amnestic MCI or mild AD. Because of the high level of automation, MRI-based volumetry may easily be integrated into clinical routine to complement the current diagnostic procedure.
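
    The reported statistics are plain Pearson correlations between regional volumes and test scores; a minimal sketch with hypothetical values:

        # Pearson correlation between a regional grey matter volume and a test
        # score, as in the study's analysis; all values below are hypothetical.
        import numpy as np
        from scipy.stats import pearsonr

        left_hippocampus_gmv = np.array([3.1, 2.8, 2.5, 3.4, 2.2])  # volumes (mL), toy data
        memory_score = np.array([24, 20, 15, 27, 12])                # test scores, toy data

        r, p = pearsonr(left_hippocampus_gmv, memory_score)
        print(f"r = {r:.2f}, p = {p:.3f}")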

  11. An overview of very high level software design methods

    NASA Technical Reports Server (NTRS)

    Asdjodi, Maryam; Hooper, James W.

    1988-01-01

    Very High Level design methods emphasize automatic transfer of requirements to formal design specifications, and/or may concentrate on automatic transformation of formal design specifications that include some semantic information of the system into machine executable form. Very high level design methods range from general domain-independent methods to approaches implementable for specific applications or domains. Different approaches to higher-level software design are being developed by applying AI techniques, abstract programming methods, domain heuristics, software engineering tools, library-based programming, and other methods. Although a given approach does not always fall exactly into any specific class, this paper provides a classification for very high level design methods, including examples for each class. These methods are analyzed and compared based on their basic approaches, strengths, and feasibility for future expansion toward automatic development of software systems.

  12. Localization accuracy from automatic and semi-automatic rigid registration of locally-advanced lung cancer targets during image-guided radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Scott P.; Weiss, Elisabeth; Hugo, Geoffrey D.

    2012-01-15

    Purpose: To evaluate localization accuracy resulting from rigid registration of locally-advanced lung cancer targets using fully automatic and semi-automatic protocols for image-guided radiation therapy. Methods: Seventeen lung cancer patients, fourteen also presenting with involved lymph nodes, received computed tomography (CT) scans once per week throughout treatment under active breathing control. A physician contoured both lung and lymph node targets for all weekly scans. Various automatic and semi-automatic rigid registration techniques were then performed for both individual and simultaneous alignments of the primary gross tumor volume (GTV_P) and involved lymph nodes (GTV_LN) to simulate the localization process in image-guided radiation therapy. Techniques included "standard" (direct registration of weekly images to a planning CT), "seeded" (manual prealignment of targets to guide standard registration), "transitive-based" (alignment of pretreatment and planning CTs through one or more intermediate images), and "rereferenced" (designation of a new reference image for registration). Localization error (LE) was assessed as the residual centroid and border distances between targets from planning and weekly CTs after registration. Results: Initial bony alignment resulted in centroid LE of 7.3 ± 5.4 mm and 5.4 ± 3.4 mm for the GTV_P and GTV_LN, respectively. Compared to bony alignment, transitive-based and seeded registrations significantly reduced GTV_P centroid LE to 4.7 ± 3.7 mm (p = 0.011) and 4.3 ± 2.5 mm (p < 1 × 10^-3), respectively, but the smallest GTV_P LE of 2.4 ± 2.1 mm was provided by rereferenced registration (p < 1 × 10^-6). Standard registration significantly reduced GTV_LN centroid LE to 3.2 ± 2.5 mm (p < 1 × 10^-3) compared to bony alignment, with little additional gain offered by the other registration techniques. For simultaneous target alignment, centroid LE as low as 3.9 ± 2.7 mm and 3.8 ± 2.3 mm were achieved for the GTV_P and GTV_LN, respectively, using rereferenced registration. Conclusions: Target shape, volume, and configuration changes during radiation therapy limited the accuracy of standard rigid registration for image-guided localization in locally-advanced lung cancer. Significant error reductions were possible using other rigid registration techniques, with LE approaching the lower limit imposed by interfraction target variability throughout treatment.
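
    The "standard" protocol amounts to a direct 3D rigid registration of each weekly CT to the planning CT; a minimal SimpleITK sketch of such a registration is given below, with file names, metric, and optimizer settings as illustrative assumptions rather than the study's actual configuration:

        # Minimal 3D rigid registration sketch with SimpleITK, illustrating the
        # "standard" protocol (weekly CT registered directly to the planning CT).
        # File names and all registration settings are illustrative assumptions.
        import SimpleITK as sitk

        fixed = sitk.ReadImage("planning_ct.nii.gz", sitk.sitkFloat32)   # hypothetical
        moving = sitk.ReadImage("weekly_ct.nii.gz", sitk.sitkFloat32)    # hypothetical

        # Roughly align the volumes first (analogous to initial bony alignment).
        initial = sitk.CenteredTransformInitializer(
            fixed, moving, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY)

        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMeanSquares()            # reasonable for same-modality CT-CT
        reg.SetOptimizerAsRegularStepGradientDescent(
            learningRate=1.0, minStep=1e-4, numberOfIterations=200)
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetInitialTransform(initial, inPlace=False)

        rigid = reg.Execute(fixed, moving)
        aligned = sitk.Resample(moving, fixed, rigid, sitk.sitkLinear, 0.0)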

  13. Tools for Rapid Understanding of Malware Code

    DTIC Science & Technology

    2015-05-07

    cloaking techniques. We used three malware detectors, covering a wide spectrum of detection technologies, for our experiments: VirusTotal, an online ... Source Code Analysis and Manipulation (SCAM), 2014. [9] Babak Yadegari, Brian Johannesmeyer, Benjamin Whitely, and Saumya Debray. A generic approach to automatic ...

  14. Study of Adaptive Mathematical Models for Deriving Automated Pilot Performance Measurement Techniques. Volume II. Appendices. Final Report.

    ERIC Educational Resources Information Center

    Connelly, E. M.; And Others

    A new approach to deriving human performance measures and criteria for use in automatically evaluating trainee performance is described. Ultimately, this approach will allow automatic measurement of pilot performance in a flight simulator or from recorded in-flight data. An efficient method of representing performance data within a computer is…

  15. Meta-Learning Approach for Automatic Parameter Tuning: A Case Study with Educational Datasets

    ERIC Educational Resources Information Center

    Molina, M. M.; Luna, J. M.; Romero, C.; Ventura, S.

    2012-01-01

    This paper proposes the use of a meta-learning approach for automatic parameter tuning of a well-known decision tree algorithm by using past information about algorithm executions. Fourteen educational datasets were analysed using various combinations of parameter values to examine the effects of the parameter values on classification accuracy.…
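
    The core meta-learning idea, recommending parameter values for a new dataset from stored executions on similar past datasets, can be sketched as a nearest-neighbour lookup over dataset meta-features; all values below are illustrative assumptions:

        # Meta-learning sketch: store simple meta-features of past datasets plus
        # the decision-tree parameters that worked best on them, then recommend
        # parameters for a new dataset from its nearest stored neighbour.
        import numpy as np

        # (n_instances, n_attributes, class_entropy) -> best (max_depth, min_samples_leaf)
        past_meta = np.array([[500, 10, 0.9], [2000, 40, 1.5], [150, 8, 0.4]])  # toy data
        past_best = [(5, 10), (12, 5), (3, 20)]                                 # toy data

        def recommend(meta_features):
            mu, sd = past_meta.mean(0), past_meta.std(0)
            scaled = (past_meta - mu) / sd                 # normalise meta-feature space
            query = (np.asarray(meta_features) - mu) / sd
            nearest = np.argmin(np.linalg.norm(scaled - query, axis=1))
            return past_best[nearest]

        print(recommend([800, 15, 1.1]))                   # e.g. (5, 10)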

  16. Model Checking Satellite Operational Procedures

    NASA Astrophysics Data System (ADS)

    Cavaliere, Federico; Mari, Federico; Melatti, Igor; Minei, Giovanni; Salvo, Ivano; Tronci, Enrico; Verzino, Giovanni; Yushtein, Yuri

    2011-08-01

    We present a model checking approach for the automatic verification of satellite operational procedures (OPs). Building a model for a system as complex as a satellite is a hard task. We overcome this obstacle by using a suitable simulator (SIMSAT) for the satellite. Our approach aims at improving OP quality assurance by automatic exhaustive exploration of all possible simulation scenarios. Moreover, our solution decreases OP verification costs by using a model checker (CMurphi) to automatically drive the simulator. We model OPs as user-executed programs observing the simulator telemetries and sending telecommands to the simulator. In order to assess the feasibility of our approach we present experimental results on a simple meaningful scenario. Our results show that we can save up to 90% of verification time.
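
    At its core, the approach exhaustively drives the simulator through all reachable scenarios and checks each resulting state; a schematic breadth-first driver loop is sketched below, with the Simulator interface as a hypothetical stand-in for SIMSAT (CMurphi performs this search with state hashing and reduction techniques in practice):

        # Schematic exhaustive-exploration driver: enumerate every reachable
        # simulator state and check a safety property in each one. States are
        # assumed hashable so visited states can be deduplicated.
        from collections import deque

        def explore(simulator, initial_state, telecommands, safe):
            seen, frontier = {initial_state}, deque([initial_state])
            while frontier:
                state = frontier.popleft()
                if not safe(state):                  # property violation found
                    return state
                for tc in telecommands:              # try every telecommand
                    nxt = simulator.step(state, tc)  # observe resulting telemetry
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
            return None                              # all reachable states are safe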

  17. Reading the leaves: A comparison of leaf rank and automated areole measurement for quantifying aspects of leaf venation

    PubMed Central

    Green, Walton A.; Little, Stefan A.; Price, Charles A.; Wing, Scott L.; Smith, Selena Y.; Kotrc, Benjamin; Doria, Gabriela

    2014-01-01

    The reticulate venation that is characteristic of a dicot leaf has excited interest from systematists for more than a century, and from physiological and developmental botanists for decades. The tools of digital image acquisition and computer image analysis, however, are only now approaching the sophistication needed to quantify aspects of the venation network found in real leaves quickly, easily, accurately, and reliably enough to produce biologically meaningful data. In this paper, we examine 120 leaves distributed across vascular plants (representing 118 genera and 80 families) using two approaches: a semiquantitative scoring system called “leaf ranking,” devised by the late Leo Hickey, and an automated image-analysis protocol. In the process of comparing these approaches, we review some methodological issues that arise in trying to quantify a vein network, and discuss the strengths and weaknesses of automatic data collection and human pattern recognition. We conclude that subjective leaf rank provides a relatively consistent, semiquantitative measure of areole size among other variables; that modal areole size is generally consistent across large sections of a leaf lamina; and that both approaches—semiquantitative, subjective scoring; and fully quantitative, automated measurement—have appropriate places in the study of leaf venation. PMID:25202646

  18. Acoustic window planning for ultrasound acquisition.

    PubMed

    Göbl, Rüdiger; Virga, Salvatore; Rackerseder, Julia; Frisch, Benjamin; Navab, Nassir; Hennersperger, Christoph

    2017-06-01

    Autonomous robotic ultrasound has recently gained considerable interest, especially for collaborative applications. Existing methods for acquisition trajectory planning are solely based on geometrical considerations, such as the pose of the transducer with respect to the patient surface. This work aims at establishing acoustic window planning to enable autonomous ultrasound acquisitions of anatomies with restricted acoustic windows, such as the liver or the heart. We propose a fully automatic approach for the planning of acquisition trajectories, which only requires information about the target region as well as existing tomographic imaging data, such as X-ray computed tomography. The framework integrates both geometrical and physics-based constraints to estimate the best ultrasound acquisition trajectories with respect to the available acoustic windows. We evaluate the developed method using virtual planning scenarios based on real patient data as well as for real robotic ultrasound acquisitions on a tissue-mimicking phantom. The proposed method yields superior image quality in comparison with a naive planning approach, while maintaining the necessary coverage of the target. We demonstrate that, by taking image formation properties into account, acquisition planning methods can outperform naive planning. Furthermore, we show the need for such planning techniques, since naive approaches are not sufficient as they do not take the expected image quality into account.

  19. Validation and detection of vessel landmarks by using anatomical knowledge

    NASA Astrophysics Data System (ADS)

    Beck, Thomas; Bernhardt, Dominik; Biermann, Christina; Dillmann, Rüdiger

    2010-03-01

    The detection of anatomical landmarks is an important prerequisite for analyzing medical images fully automatically. Several machine learning approaches have been proposed to parse 3D CT datasets and to determine the location of landmarks with associated uncertainty. However, it is a challenging task to incorporate high-level anatomical knowledge to improve these classification results. We propose a new approach to validate candidates for vessel bifurcation landmarks, which is also applied to search systematically for missed landmarks and to validate ambiguous ones. A knowledge base is trained providing human-readable geometric information of the vascular system, mainly vessel lengths, radii and curvature information, for validation of landmarks and to guide the search process. To analyze the bifurcation area surrounding a vessel landmark of interest, a new approach is proposed which is based on Fast Marching and incorporates anatomical information from the knowledge base. Using the proposed algorithms, an anatomical knowledge base has been generated based on 90 manually annotated CT images containing different parts of the body. To evaluate the landmark validation, a set of 50 carotid datasets has been tested in combination with a state-of-the-art landmark detector, with excellent results. Besides the carotid bifurcation, the algorithm is designed to handle a wide range of vascular landmarks, e.g. the celiac, superior mesenteric, renal, aortic, iliac and femoral bifurcations.

  20. Modifying Automatic Approach Action Tendencies in Individuals with Elevated Social Anxiety Symptoms

    PubMed Central

    Taylor, Charles T.; Amir, Nader

    2012-01-01

    Research suggests that social anxiety is associated with a reduced approach orientation for positive social cues. In the current study we examined the effect of experimentally manipulating automatic approach action tendencies on the social behavior of individuals with elevated social anxiety symptoms. The experimental paradigm comprised a computerized Approach Avoidance Task (AAT) in which participants responded to pictures of faces conveying positive or neutral emotional expressions by pulling a joystick toward themselves (approach) or by moving it to the right (sideways control). Participants were randomly assigned to complete an AAT designed to increase approach tendencies for positive social cues by pulling these cues toward themselves on the majority of trials, or to a control condition in which there was no contingency between the arm movement direction and picture type. Following the manipulation, participants took part in a relationship-building task with a trained confederate. Results revealed that participants trained to approach positive stimuli displayed greater social approach behaviors during the social interaction and elicited more positive reactions from their partner compared to participants in the control group. These findings suggest that modifying automatic approach tendencies may facilitate engagement in the types of social approach behaviors that are important for relationship development. PMID:22728645

  1. Automatic Testing and Assessment of Neuroanatomy Using a Digital Brain Atlas: Method and Development of Computer- and Mobile-Based Applications

    ERIC Educational Resources Information Center

    Nowinski, Wieslaw L.; Thirunavuukarasuu, Arumugam; Ananthasubramaniam, Anand; Chua, Beng Choon; Qian, Guoyu; Nowinska, Natalia G.; Marchenko, Yevgen; Volkau, Ihar

    2009-01-01

    Preparation of tests and assessment of students by the instructor are time consuming. We address these two tasks in neuroanatomy education by employing a digital media application with a three-dimensional (3D), interactive, fully segmented, and labeled brain atlas. The anatomical and vascular models in the atlas are linked to "Terminologia…

  2. 7 CFR 4290.1810 - Events of default and the Secretary's remedies for RBIC's noncompliance with terms of Debentures.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Events of default and the Secretary's remedies for... With Terms of Leverage § 4290.1810 Events of default and the Secretary's remedies for RBIC's... and as if fully set forth in the Debentures. (b) Automatic events of default. The occurrence of one or...

  3. Infrared-enhanced TV for fire detection

    NASA Technical Reports Server (NTRS)

    Hall, J. R.

    1978-01-01

    Closed-circuit television is superior to conventional smoke or heat sensors for detecting fires in large open spaces. Single TV camera scans entire area, whereas many conventional sensors and maze of interconnecting wiring might be required to get same coverage. Camera is monitored by person who would trip alarm if fire were detected, or electronic circuitry could process camera signal for fully-automatic alarm system.

  4. Developing and Evaluating SynctoLearn, a Fully Automatic Video and Transcript Synchronization Tool for EFL Learners

    ERIC Educational Resources Information Center

    Chen, Hao-Jan Howard

    2011-01-01

    Authentic videos are always motivational for foreign language learners. According to the findings of many empirical studies, subtitled L2 videos are particularly useful for foreign language learning. Although there are many authentic English videos available on the Internet, most of these videos do not have subtitles. If subtitles can be added to…

  5. The Laboratory for Individualized Breast Radiodensity Assessment (LIBRA) | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    LIBRA is a fully-automatic breast density estimation software solution based on a published algorithm that works on either raw (i.e., “FOR PROCESSING”) or vendor post-processed (i.e., “FOR PRESENTATION”) digital mammography images. LIBRA has been applied to over 30,000 screening exams and is being increasingly utilized in larger studies.

  6. Natural Language Processing: A Tutorial. Revision

    DTIC Science & Technology

    1990-01-01

    English in word-for-word language translations. An oft-repeated (although fictional) anecdote illustrates the ... English by a language translation program, became: "The vodka is strong but the steak is rotten." The point made is that vast amounts of knowledge ... are required for effective language translations. The initial goal for Language Translation was "fully-automatic high-quality translation" (FAHQT).

  7. Automatic localization of the left ventricular blood pool centroid in short axis cardiac cine MR images.

    PubMed

    Tan, Li Kuo; Liew, Yih Miin; Lim, Einly; Abdul Aziz, Yang Faridah; Chee, Kok Han; McLaughlin, Robert A

    2018-06-01

    In this paper, we develop and validate an open source, fully automatic algorithm to localize the left ventricular (LV) blood pool centroid in short axis cardiac cine MR images, enabling follow-on automated LV segmentation algorithms. The algorithm comprises four steps: (i) quantify motion to determine an initial region of interest surrounding the heart, (ii) identify potential 2D objects of interest using an intensity-based segmentation, (iii) assess contraction/expansion, circularity, and proximity to lung tissue to score all objects of interest in terms of their likelihood of constituting part of the LV, and (iv) aggregate the objects into connected groups and construct the final LV blood pool volume and centroid. This algorithm was tested against 1140 datasets from the Kaggle Second Annual Data Science Bowl, as well as 45 datasets from the STACOM 2009 Cardiac MR Left Ventricle Segmentation Challenge. Correct LV localization was confirmed in 97.3% of the datasets. The mean absolute error between the gold standard and localization centroids was 2.8 to 4.7 mm, or 12 to 22% of the average endocardial radius. Graphical abstract: fully automated localization of the left ventricular blood pool in short axis cardiac cine MR images.
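
    Steps (i) and (ii) can be sketched compactly: motion is quantified as the per-pixel temporal standard deviation over the cine frames, and candidate objects are extracted by intensity thresholding; the ROI margin, threshold choice, and area cut-off below are illustrative assumptions, not the published algorithm's exact settings:

        # Sketch of steps (i)-(ii): motion map from temporal standard deviation,
        # ROI crop around the most dynamic region, then candidate 2D objects by
        # Otsu thresholding and connected-component analysis.
        import numpy as np
        from skimage.filters import threshold_otsu
        from skimage.measure import label, regionprops

        def candidate_objects(cine):                 # cine: (frames, rows, cols)
            motion = cine.std(axis=0)                # step (i): motion map
            r, c = np.unravel_index(motion.argmax(), motion.shape)
            roi = cine[:, max(r - 60, 0):r + 60, max(c - 60, 0):c + 60]
            frame = roi[0]
            mask = frame > threshold_otsu(frame)     # step (ii): intensity segmentation
            return [p for p in regionprops(label(mask)) if p.area > 50]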

  8. Feature-based Morphometry

    PubMed Central

    Toews, Matthew; Wells, William M.; Collins, Louis; Arbel, Tal

    2013-01-01

    This paper presents feature-based morphometry (FBM), a new, fully data-driven technique for identifying group-related differences in volumetric imagery. In contrast to most morphometry methods which assume one-to-one correspondence between all subjects, FBM models images as a collage of distinct, localized image features which may not be present in all subjects. FBM thus explicitly accounts for the case where the same anatomical tissue cannot be reliably identified in all subjects due to disease or anatomical variability. A probabilistic model describes features in terms of their appearance, geometry, and relationship to sub-groups of a population, and is automatically learned from a set of subject images and group labels. Features identified indicate group-related anatomical structure that can potentially be used as disease biomarkers or as a basis for computer-aided diagnosis. Scale-invariant image features are used, which reflect generic, salient patterns in the image. Experiments validate FBM clinically in the analysis of normal (NC) and Alzheimer’s (AD) brain images using the freely available OASIS database. FBM automatically identifies known structural differences between NC and AD subjects in a fully data-driven fashion, and obtains an equal error classification rate of 0.78 on new subjects. PMID:20426102

  9. State of the art survey on MRI brain tumor segmentation.

    PubMed

    Gordillo, Nelly; Montseny, Eduard; Sobrevilla, Pilar

    2013-10-01

    Brain tumor segmentation consists of separating the different tumor tissues (solid or active tumor, edema, and necrosis) from normal brain tissues: gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF). In brain tumor studies, the existence of abnormal tissues may be easily detectable most of the time. However, accurate and reproducible segmentation and characterization of abnormalities are not straightforward. In the past, many researchers in the fields of medical imaging and soft computing have surveyed the field of brain tumor segmentation extensively. Both semiautomatic and fully automatic methods have been proposed. Clinical acceptance of segmentation techniques has depended on the simplicity of the segmentation and the degree of user supervision. Interactive or semiautomatic methods are likely to remain dominant in practice for some time, especially in those applications where erroneous interpretations are unacceptable. This article presents an overview of the most relevant brain tumor segmentation methods, conducted after the acquisition of the image. Given the advantages of magnetic resonance imaging over other diagnostic imaging, this survey is focused on MRI brain tumor segmentation. Semiautomatic and fully automatic techniques are emphasized. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. A fully automatic evolutionary classification of protein folds: Dali Domain Dictionary version 3

    PubMed Central

    Dietmann, Sabine; Park, Jong; Notredame, Cedric; Heger, Andreas; Lappe, Michael; Holm, Liisa

    2001-01-01

    The Dali Domain Dictionary (http://www.ebi.ac.uk/dali/domain) is a numerical taxonomy of all known structures in the Protein Data Bank (PDB). The taxonomy is derived fully automatically from measurements of structural, functional and sequence similarities. Here, we report the extension of the classification to match the traditional four hierarchical levels corresponding to: (i) supersecondary structural motifs (attractors in fold space), (ii) the topology of globular domains (fold types), (iii) remote homologues (functional families) and (iv) homologues with sequence identity above 25% (sequence families). The computational definitions of attractors and functional families are new. In September 2000, the Dali classification contained 10 531 PDB entries comprising 17 101 chains, which were partitioned into five attractor regions, 1375 fold types, 2582 functional families and 3724 domain sequence families. Sequence families were further associated with 99 582 unique homologous sequences in the HSSP database, which increases the number of effectively known structures several-fold. The resulting database contains the description of protein domain architecture, the definition of structural neighbours around each known structure, the definition of structurally conserved cores and a comprehensive library of explicit multiple alignments of distantly related protein families. PMID:11125048

  11. Fully Automated Data Collection Using PAM and the Development of PAM/SPACE Reversible Cassettes

    NASA Astrophysics Data System (ADS)

    Hiraki, Masahiko; Watanabe, Shokei; Chavas, Leonard M. G.; Yamada, Yusuke; Matsugaki, Naohiro; Igarashi, Noriyuki; Wakatsuki, Soichi; Fujihashi, Masahiro; Miki, Kunio; Baba, Seiki; Ueno, Go; Yamamoto, Masaki; Suzuki, Mamoru; Nakagawa, Atsushi; Watanabe, Nobuhisa; Tanaka, Isao

    2010-06-01

    To remotely control and automatically collect data in high-throughput X-ray data collection experiments, the Structural Biology Research Center at the Photon Factory (PF) developed and installed sample exchange robots PAM (PF Automated Mounting system) at PF macromolecular crystallography beamlines BL-5A, BL-17A, AR-NW12A and AR-NE3A. We developed and installed software that manages the flow of the automated X-ray experiments: sample exchanges, loop-centering and X-ray diffraction data collection. The fully automated data collection function has been available since February 2009. To identify sample cassettes, PAM employs a two-dimensional bar code reader. New beamlines, BL-1A at the Photon Factory and BL32XU at SPring-8, are currently under construction as part of the Targeted Proteins Research Program (TPRP) of the Ministry of Education, Culture, Sports, Science and Technology of Japan. However, different robots, PAM and SPACE (SPring-8 Precise Automatic Cryo-sample Exchanger), will be installed at BL-1A and BL32XU, respectively. For the convenience of the users of both facilities, pins and cassettes compatible with both PAM and SPACE are being developed as part of the TPRP.

  12. Evaluating the efficacy of fully automated approaches for the selection of eye blink ICA components

    PubMed Central

    Pontifex, Matthew B.; Miskovic, Vladimir; Laszlo, Sarah

    2017-01-01

    Independent component analysis (ICA) offers a powerful approach for the isolation and removal of eye blink artifacts from EEG signals. Manual identification of the eye blink ICA component by inspection of scalp map projections, however, is prone to error, particularly when non-artifactual components exhibit topographic distributions similar to the blink. The aim of the present investigation was to determine the extent to which automated approaches for selecting eye blink related ICA components could be utilized to replace manual selection. We evaluated popular blink selection methods relying on spatial features [EyeCatch()], combined stereotypical spatial and temporal features [ADJUST()], and a novel method relying on time-series features alone [icablinkmetrics()] using both simulated and real EEG data. The results of this investigation suggest that all three methods of automatic component selection are able to accurately identify eye blink related ICA components at or above the level of trained human observers. However, icablinkmetrics(), in particular, appears to provide an effective means of automating ICA artifact rejection while at the same time eliminating human errors inevitable during manual component selection and false positive component identifications common in other automated approaches. Based upon these findings, best practices for 1) identifying artifactual components via automated means and 2) reducing the accidental removal of signal-related ICA components are discussed. PMID:28191627
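
    A time-series criterion in the spirit of icablinkmetrics() can be sketched as correlating each component's activation with a blink reference channel; the VEOG reference and the 0.7 cut-off below are illustrative assumptions, not the toolbox's actual metrics or defaults:

        # Sketch of a time-series blink-component criterion: correlate each IC
        # activation with a blink reference (e.g. a VEOG channel) and flag the
        # best-matching component if the match is convincing.
        import numpy as np

        def blink_component(ic_activations, veog):
            """ic_activations: (n_components, n_samples); veog: (n_samples,)."""
            corrs = np.array([abs(np.corrcoef(ic, veog)[0, 1])
                              for ic in ic_activations])
            best = int(corrs.argmax())
            return best if corrs[best] > 0.7 else None  # None: no convincing blink IC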

  13. Automatic construction of a recurrent neural network based classifier for vehicle passage detection

    NASA Astrophysics Data System (ADS)

    Burnaev, Evgeny; Koptelov, Ivan; Novikov, German; Khanipov, Timur

    2017-03-01

    Recurrent Neural Networks (RNNs) are extensively used for time-series modeling and prediction. We propose an approach for automatic construction of a binary classifier based on Long Short-Term Memory RNNs (LSTM-RNNs) for detection of a vehicle passage through a checkpoint. As an input to the classifier we use multidimensional signals of various sensors that are installed on the checkpoint. Obtained results demonstrate that the previous approach to handcrafting a classifier, consisting of a set of deterministic rules, can be successfully replaced by an automatic RNN training on an appropriately labelled data.
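
    A minimal PyTorch sketch of such an LSTM-based binary classifier is given below; the sensor count, window length, and layer sizes are illustrative assumptions:

        # Minimal LSTM-based binary classifier: multidimensional sensor sequence
        # in, probability of a vehicle passage out. Sizes are illustrative only.
        import torch
        import torch.nn as nn

        class PassageClassifier(nn.Module):
            def __init__(self, n_sensors=8, hidden=64):
                super().__init__()
                self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
                self.head = nn.Linear(hidden, 1)

            def forward(self, x):             # x: (batch, time, n_sensors)
                _, (h_n, _) = self.lstm(x)    # final hidden state summarises the window
                return torch.sigmoid(self.head(h_n[-1])).squeeze(-1)

        model = PassageClassifier()
        window = torch.randn(4, 200, 8)       # 4 windows of 200 samples, 8 sensors
        probs = model(window)                 # per-window passage probability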

  14. Piloted Simulation Evaluation of a Model-Predictive Automatic Recovery System to Prevent Vehicle Loss of Control on Approach

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan; Liu, Yuan; Sowers, T. Shane; Owen, A. Karl; Guo, Ten-Huei

    2014-01-01

    This paper describes a model-predictive automatic recovery system for aircraft on the verge of a loss-of-control situation. The system determines when it must intervene to prevent an imminent accident, resulting from a poor approach. It estimates the altitude loss that would result from a go-around maneuver at the current flight condition. If the loss is projected to violate a minimum altitude threshold, the maneuver is automatically triggered. The system deactivates to allow landing once several criteria are met. Piloted flight simulator evaluation showed the system to provide effective envelope protection during extremely unsafe landing attempts. The results demonstrate how flight and propulsion control can be integrated to recover control of the vehicle automatically and prevent a potential catastrophe.
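
    The triggering rule itself reduces to a simple comparison once the model-predictive altitude-loss estimate is available; a schematic sketch, with the threshold value as an illustrative assumption and the predicted loss standing in for the model-based estimate:

        # Schematic trigger logic: intervene if the altitude lost by a go-around
        # initiated now would violate the minimum-altitude threshold.
        def should_trigger_go_around(altitude_ft, predicted_altitude_loss_ft,
                                     min_altitude_ft=50.0):   # threshold is illustrative
            return altitude_ft - predicted_altitude_loss_ft < min_altitude_ft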

  15. Automatic segmentation and classification of gestational sac based on mean sac diameter using medical ultrasound image

    NASA Astrophysics Data System (ADS)

    Khazendar, Shan; Farren, Jessica; Al-Assam, Hisham; Sayasneh, Ahmed; Du, Hongbo; Bourne, Tom; Jassim, Sabah A.

    2014-05-01

    Ultrasound is an effective multipurpose imaging modality that has been widely used for monitoring and diagnosing early pregnancy events. Technology developments coupled with wide public acceptance have made ultrasound an ideal tool for better understanding and diagnosis of early pregnancy. The first measurable signs of an early pregnancy are the geometric characteristics of the Gestational Sac (GS). Currently, the size of the GS is manually estimated from ultrasound images. The manual measurement involves multiple subjective decisions, in which dimensions are taken in three planes to establish what is known as the Mean Sac Diameter (MSD). The manual measurement results in inter- and intra-observer variations, which may lead to difficulties in diagnosis. This paper proposes a fully automated diagnosis solution to accurately identify miscarriage cases in the first trimester of pregnancy based on automatic quantification of the MSD. Our study shows a strong positive correlation between the manual and the automatic MSD estimations. Our experimental results, based on a dataset of 68 ultrasound images, illustrate the effectiveness of the proposed scheme in identifying early miscarriage cases, with classification accuracies comparable with those of domain experts using a K nearest neighbor classifier on automatically estimated MSDs.
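
    The measurement and classification steps can be sketched as computing the MSD from three orthogonal diameters and feeding it to a K nearest neighbour classifier; the training values and the decision boundary they imply are illustrative, not the study's data:

        # MSD computation plus K nearest neighbour classification sketch.
        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def mean_sac_diameter(d1_mm, d2_mm, d3_mm):
            return (d1_mm + d2_mm + d3_mm) / 3.0

        train_msd = np.array([[8.0], [12.0], [27.0], [31.0], [10.0], [29.0]])  # toy data
        labels = np.array([0, 0, 1, 1, 0, 1])       # 0 = viable, 1 = miscarriage

        knn = KNeighborsClassifier(n_neighbors=3).fit(train_msd, labels)
        print(knn.predict([[mean_sac_diameter(26.0, 30.0, 28.0)]]))  # -> [1]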

  16. Automatic Segmentation of High-Throughput RNAi Fluorescent Cellular Images

    PubMed Central

    Yan, Pingkun; Zhou, Xiaobo; Shah, Mubarak; Wong, Stephen T. C.

    2010-01-01

    High-throughput genome-wide RNA interference (RNAi) screening is emerging as an essential tool to assist biologists in understanding complex cellular processes. The large number of images produced in each study makes manual analysis intractable; hence, automatic cellular image analysis becomes an urgent need, where segmentation is the first and one of the most important steps. In this paper, a fully automatic method for segmentation of cells from genome-wide RNAi screening images is proposed. Nuclei are first extracted from the DNA channel by using a modified watershed algorithm. Cells are then extracted by modeling the interaction between them as well as combining both gradient and region information in the Actin and Rac channels. A new energy functional is formulated based on a novel interaction model for segmenting tightly clustered cells with significant intensity variance and specific phenotypes. The energy functional is minimized by using a multiphase level set method, which leads to a highly effective cell segmentation method. Promising experimental results demonstrate that automatic segmentation of high-throughput genome-wide multichannel screening can be achieved by using the proposed method, which may also be extended to other multichannel image segmentation problems. PMID:18270043
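
    The nuclei-extraction step can be approximated with a stock marker-controlled watershed on the DNA channel; the sketch below uses the standard scikit-image recipe for illustration (the paper itself uses a modified watershed), and the peak-distance parameter is an assumption:

        # Marker-controlled watershed sketch for nuclei extraction from the DNA
        # channel: threshold, distance transform, peak markers, watershed.
        import numpy as np
        from scipy import ndimage as ndi
        from skimage.feature import peak_local_max
        from skimage.filters import threshold_otsu
        from skimage.segmentation import watershed

        def segment_nuclei(dna):                      # dna: 2D DNA-channel image
            mask = dna > threshold_otsu(dna)
            distance = ndi.distance_transform_edt(mask)
            coords = peak_local_max(distance, min_distance=10, labels=mask)
            peaks = np.zeros(distance.shape, dtype=bool)
            peaks[tuple(coords.T)] = True
            markers, _ = ndi.label(peaks)
            return watershed(-distance, markers, mask=mask)   # labelled nuclei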

  17. Gap-free segmentation of vascular networks with automatic image processing pipeline.

    PubMed

    Hsu, Chih-Yang; Ghaffari, Mahsa; Alaraj, Ali; Flannery, Michael; Zhou, Xiaohong Joe; Linninger, Andreas

    2017-03-01

    Current image processing techniques capture large vessels reliably but often fail to preserve connectivity in bifurcations and small vessels. Imaging artifacts and noise can create gaps and discontinuities of intensity that hinder the segmentation of vascular trees. However, topological analysis of vascular trees requires proper connectivity without gaps, loops or dangling segments. Proper tree connectivity is also important for high-quality rendering of surface meshes for scientific visualization or 3D printing. We present a fully automated vessel enhancement pipeline with automated parameter settings for vessel enhancement of tree-like structures from customary imaging sources, including 3D rotational angiography, magnetic resonance angiography, magnetic resonance venography, and computed tomography angiography. The output of the filter pipeline is a vessel-enhanced image which is ideal for generating anatomically consistent network representations of the cerebral angioarchitecture for further topological or statistical analysis. The filter pipeline combined with computational modeling can potentially improve computer-aided diagnosis of cerebrovascular diseases by delivering biometrics and anatomy of the vasculature. It may serve as the first step in fully automatic epidemiological analysis of large clinical datasets. The automatic analysis would enable rigorous statistical comparison of biometrics in subject-specific vascular trees. The robust and accurate image segmentation using a validated filter pipeline would also eliminate the operator dependency that has been observed in manual segmentation. Moreover, manual segmentation is time prohibitive, given that vascular trees have thousands of segments and bifurcations, so that interactive segmentation consumes excessive human resources. Subject-specific trees are a first step toward patient-specific hemodynamic simulations for assessing treatment outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.
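
    A typical vessel-enhancement ingredient of such pipelines is a multi-scale Hessian vesselness filter; the Frangi-filter sketch below is a generic illustration (the paper's pipeline adds automated parameter selection and gap-free post-processing), with the input file name and scale range as assumptions:

        # Multi-scale Frangi vesselness sketch: enhance bright, tube-like
        # structures in an angiographic image over a range of vessel radii.
        from skimage import io
        from skimage.filters import frangi

        angio = io.imread("mra_slice.png", as_gray=True)   # hypothetical input image
        enhanced = frangi(angio, sigmas=range(1, 6),       # scales in pixels
                          black_ridges=False)              # vessels are bright in MRA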

  18. Fully automatic algorithm for segmenting full human diaphragm in non-contrast CT Images

    NASA Astrophysics Data System (ADS)

    Karami, Elham; Gaede, Stewart; Lee, Ting-Yim; Samani, Abbas

    2015-03-01

    The diaphragm is a sheet of muscle which separates the thorax from the abdomen, and it acts as the most important muscle of the respiratory system. As such, an accurate segmentation of the diaphragm not only provides key information for functional analysis of the respiratory system, but can also be used for locating other abdominal organs such as the liver. However, diaphragm segmentation is extremely challenging in non-contrast CT images due to the diaphragm's similar appearance to other abdominal organs. In this paper, we present a fully automatic algorithm for diaphragm segmentation in non-contrast CT images. The method is mainly based on a priori knowledge of human diaphragm anatomy. The diaphragm domes are in contact with the lungs and the heart, while its circumference runs along the lumbar vertebrae of the spine as well as the inferior border of the ribs and sternum. As such, the diaphragm can be delineated by segmentation of these organs followed by connecting the relevant parts of their outlines properly. More specifically, the bottom surfaces of the lungs and heart, the spine borders and the ribs are delineated, leading to a set of scattered points which represent the diaphragm's geometry. Next, a B-spline filter is used to find the smoothest surface which passes through these points. This algorithm was tested on a non-contrast CT image of a lung cancer patient. The results indicate an average Hausdorff distance of 2.96 mm between the automatically and manually segmented diaphragms, which implies favourable accuracy.
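
    The final surface-fitting step can be sketched as a smoothing bivariate spline through the scattered diaphragm points; the toy point cloud and smoothing factor below are illustrative assumptions (the paper applies a B-spline filter, whose exact formulation may differ):

        # Smooth-surface fit through scattered diaphragm points with a
        # smoothing bivariate spline; all point data here are synthetic.
        import numpy as np
        from scipy.interpolate import SmoothBivariateSpline

        rng = np.random.default_rng(1)
        x = rng.uniform(0, 100, 400)                  # axial-plane coordinates (mm)
        y = rng.uniform(0, 100, 400)
        z = 20 - 0.004 * ((x - 50) ** 2 + (y - 50) ** 2) \
            + rng.normal(0, 0.5, 400)                 # dome-like toy heights (mm)

        spline = SmoothBivariateSpline(x, y, z, s=100)    # s controls smoothness
        surface = spline(np.linspace(0, 100, 64),          # evaluate on a 64x64 grid
                         np.linspace(0, 100, 64))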

  19. Long-term quality assurance of [(18)F]-fluorodeoxyglucose (FDG) manufacturing.

    PubMed

    Gaspar, Ludovit; Reich, Michal; Kassai, Zoltan; Macasek, Fedor; Rodrigo, Luis; Kruzliak, Peter; Kovac, Peter

    2016-01-01

    Nine years of experience with 2286 commercial syntheses allowed us to deliver comprehensive information on the quality of (18)F-FDG production. A semi-automated FDG production line using a Cyclone 18/9 machine (IBA, Belgium), a TRACERLab MXFDG synthesiser (GE Health, USA) using alkaline hydrolysis, a grade "A" isolator with dispensing robotic unit (Tema Sinergie, Italy), and an automatic control system under GAMP5 (minus2, Slovakia) was assessed by TQM tools as a highly reliable aseptic production line, fully compliant with Good Manufacturing Practice and just-in-time delivery of the FDG radiopharmaceutical. Fluoride-18 is received in steady yield and of very high radioactive purity. Synthesis yields exhibited high variance, probably connected with the quality of the disposable cassettes and chemical sets. Most performance non-conformities within the manufacturing cycle occur at mechanical nodes of the dispensing unit. The long-term monitoring of 2286 commercial syntheses indicated high reliability of the automatic synthesizers. Shewhart chart and ANOVA analysis showed that the minor non-compliances that occurred were mostly caused by deviations of less experienced staff from standard operating procedures, and also by the quality of the automatic cassettes. Only 15 syntheses were found unfinished, and in 4 cases the product was out of specification of the European Pharmacopoeia. The most vulnerable step of manufacturing was dispensing and filling in the grade "A" isolator. Its cleanliness and sterility were fully controlled over the investigated period by applying vaporized hydrogen peroxide (VHP). Our experience with quality assurance in the production of [(18)F]-fluorodeoxyglucose (FDG) at the BIONT production facility based on the TRACERlab MXFDG production module can be used for benchmarking of emerging manufacturing and automated manufacturing systems.
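
    The Shewhart-chart monitoring mentioned above reduces to flagging synthesis runs outside mean ± 3σ control limits estimated from in-control production data; a minimal sketch with illustrative yield values:

        # Shewhart control-chart sketch: estimate limits from in-control runs,
        # then flag new synthesis yields that fall outside them. Toy data only.
        import numpy as np

        baseline = np.array([62.1, 58.4, 60.3, 63.0, 59.8, 61.5, 60.9])  # in-control yields (%)
        mean, sigma = baseline.mean(), baseline.std(ddof=1)
        ucl, lcl = mean + 3 * sigma, mean - 3 * sigma

        new_runs = np.array([61.0, 41.2, 59.3])
        flagged = np.where((new_runs < lcl) | (new_runs > ucl))[0]
        print(f"limits [{lcl:.1f}, {ucl:.1f}] -> flagged runs: {flagged}")  # flags the 41.2% run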

  20. Long-term quality assurance of [18F]-fluorodeoxyglucose (FDG) manufacturing

    PubMed Central

    Gaspar, Ludovit; Reich, Michal; Kassai, Zoltan; Macasek, Fedor; Rodrigo, Luis; Kruzliak, Peter; Kovac, Peter

    2016-01-01

    Nine years of experience with 2286 commercial syntheses allowed us to deliver comprehensive information on the quality of 18F-FDG production. A semi-automated FDG production line using a Cyclone 18/9 machine (IBA, Belgium), a TRACERLab MXFDG synthesiser (GE Health, USA) using alkaline hydrolysis, a grade “A” isolator with dispensing robotic unit (Tema Sinergie, Italy), and an automatic control system under GAMP5 (minus2, Slovakia) was assessed by TQM tools as a highly reliable aseptic production line, fully compliant with Good Manufacturing Practice and just-in-time delivery of the FDG radiopharmaceutical. Fluoride-18 is received in steady yield and of very high radioactive purity. Synthesis yields exhibited high variance, probably connected with the quality of the disposable cassettes and chemical sets. Most performance non-conformities within the manufacturing cycle occur at mechanical nodes of the dispensing unit. The long-term monitoring of 2286 commercial syntheses indicated high reliability of the automatic synthesizers. Shewhart chart and ANOVA analysis showed that the minor non-compliances that occurred were mostly caused by deviations of less experienced staff from standard operating procedures, and also by the quality of the automatic cassettes. Only 15 syntheses were found unfinished, and in 4 cases the product was out of specification of the European Pharmacopoeia. The most vulnerable step of manufacturing was dispensing and filling in the grade “A” isolator. Its cleanliness and sterility were fully controlled over the investigated period by applying vaporized hydrogen peroxide (VHP). Our experience with quality assurance in the production of [18F]-fluorodeoxyglucose (FDG) at the BIONT production facility based on the TRACERlab MXFDG production module can be used for benchmarking of emerging manufacturing and automated manufacturing systems. PMID:27508102
