Sample records for fully automatic process

  1. A novel fully automatic scheme for fiducial marker-based alignment in electron tomography.

    PubMed

    Han, Renmin; Wang, Liansan; Liu, Zhiyong; Sun, Fei; Zhang, Fa

    2015-12-01

    Although the topic of fiducial marker-based alignment in electron tomography (ET) has been widely discussed for decades, alignment without human intervention remains a difficult problem. Specifically, the emergence of subtomogram averaging has increased the demand for batch processing during tomographic reconstruction; fully automatic fiducial marker-based alignment is the main technique in this process. However, the lack of an accurate method for detecting and tracking fiducial markers precludes fully automatic alignment. In this paper, we present a novel, fully automatic alignment scheme for ET. Our scheme makes two main contributions: First, we present a series of algorithms to ensure a high recognition rate and precise localization during the detection of fiducial markers. Our proposed solution reduces fiducial marker detection to a sampling and classification problem and further introduces an algorithm to resolve the parameter dependence on marker diameter and marker number. Second, we propose a novel algorithm that reduces the tracking of fiducial markers to an incomplete point set registration problem. Because the point set registration is solved by global optimization, the result of our tracking is independent of the initial image position in the tilt series, allowing for the robust tracking of fiducial markers without pre-alignment. The experimental results indicate that our method achieves accurate tracking, almost identical to that of the current best semi-automatic scheme in IMOD. Furthermore, our scheme is fully automatic, depends on fewer parameters (it requires only a rough value of the marker diameter) and does not require any manual interaction, opening the possibility of automatic batch processing of electron tomographic reconstruction. Copyright © 2015 Elsevier Inc. All rights reserved.
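    As a rough illustration of the reduction to incomplete point set matching (not the paper's actual global-optimization algorithm), the hypothetical sketch below pairs detected markers across two tilt images by minimum-cost assignment, leaving outliers unmatched:

      # Python sketch: marker correspondence as incomplete point set matching.
      # Coordinates and the distance tolerance are illustrative assumptions.
      import numpy as np
      from scipy.optimize import linear_sum_assignment

      def match_markers(pts_a, pts_b, max_dist=15.0):
          # Pairwise Euclidean distances between the two marker sets.
          cost = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
          rows, cols = linear_sum_assignment(cost)  # minimum total distance
          keep = cost[rows, cols] < max_dist        # drop implausible pairs
          return [(int(i), int(j)) for i, j in zip(rows[keep], cols[keep])]

      pts0 = np.array([[10.0, 12.0], [40.0, 80.0], [90.0, 33.0]])
      pts1 = np.array([[11.5, 13.0], [41.0, 78.5], [200.0, 200.0]])  # outlier
      print(match_markers(pts0, pts1))  # -> [(0, 0), (1, 1)]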

  2. Research and Development of Fully Automatic Alien Smoke Stack and Packaging System

    NASA Astrophysics Data System (ADS)

    Yang, Xudong; Ge, Qingkuan; Peng, Tao; Zuo, Ping; Dong, Weifu

    2017-12-01

    Manual sorting and packaging at current tobacco distribution centers suffers from low efficiency, which motivated the development of a safe, efficient and fully automatic alien smoke stack and packaging system. The fully automatic alien smoke stack and packaging system combines PLC control technology, servo control technology, robot technology, image recognition technology and human-computer interaction technology. The characteristics, principles, control process and key technologies of the system are discussed in detail. After installation and commissioning, the fully automatic alien smoke stack and packaging system has performed well and meets the requirements for handling shaped cigarettes.

  3. Computer-aided endovascular aortic repair using fully automated two- and three-dimensional fusion imaging.

    PubMed

    Panuccio, Giuseppe; Torsello, Giovanni Federico; Pfister, Markus; Bisdas, Theodosios; Bosiers, Michel J; Torsello, Giovanni; Austermann, Martin

    2016-12-01

    To assess the usability of a fully automated fusion imaging engine prototype, matching preinterventional computed tomography with intraoperative fluoroscopic angiography during endovascular aortic repair. From June 2014 to February 2015, all patients treated electively for abdominal and thoracoabdominal aneurysms were enrolled prospectively. Before each procedure, preoperative planning was performed with a fully automated fusion engine prototype based on computed tomography angiography, creating a mesh model of the aorta. In a second step, this three-dimensional dataset was registered with the two-dimensional intraoperative fluoroscopy. The main outcome measure was the applicability of the fully automated fusion engine. Secondary outcomes were freedom from failure of automatic segmentation or of the automatic registration as well as accuracy of the mesh model, measuring deviations from intraoperative angiography in millimeters, if applicable. Twenty-five patients were enrolled in this study. The fusion imaging engine could be used successfully in 92% of the cases (n = 23). Freedom from failure of automatic segmentation was 44% (n = 11). The freedom from failure of the automatic registration was 76% (n = 19); the median error of the automatic registration process was 0 mm (interquartile range, 0-5 mm). The fully automated fusion imaging engine was found to be applicable in most cases, although in several cases fully automated data processing was not possible, requiring manual intervention. The accuracy of the automatic registration yielded excellent results and promises a useful and simple-to-use technology. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  4. A 3D THz image processing methodology for a fully integrated, semi-automatic and near real-time operational system

    NASA Astrophysics Data System (ADS)

    Brook, A.; Cristofani, E.; Vandewal, M.; Matheis, C.; Jonuscheit, J.; Beigang, R.

    2012-05-01

    The present study proposes a fully integrated, semi-automatic and near real-time mode-operated image processing methodology developed for Frequency-Modulated Continuous-Wave (FMCW) THz images with center frequencies around 100 GHz and 300 GHz. The quality control of aeronautics composite multi-layered materials and structures using Non-Destructive Testing is the main focus of this work. Image processing is applied on the 3-D images to extract useful information. The data is processed by extracting areas of interest. The detected areas are subjected to image analysis for more particular investigation managed by a spatial model. Finally, the post-processing stage examines and evaluates the spatial accuracy of the extracted information.

  5. Automatic classification of seismic events within a regional seismograph network

    NASA Astrophysics Data System (ADS)

    Tiira, Timo; Kortström, Jari; Uski, Marja

    2015-04-01

    A fully automatic method for seismic event classification within a sparse regional seismograph network is presented. The tool is based on a supervised pattern recognition technique, the Support Vector Machine (SVM), trained here to distinguish weak local earthquakes from a bulk of human-made or spurious seismic events. The classification rules rely on differences in signal energy distribution between natural and artificial seismic sources. Seismic records are divided into four windows: P, P coda, S, and S coda. For each signal window, the short-term average (STA) is computed in 20 narrow frequency bands between 1 and 41 Hz. The 80 discrimination parameters are used as training data for the SVM. The SVM models are calculated for 19 on-line seismic stations in Finland. The event data are compiled mainly from fully automatic event solutions that are manually classified after the automatic location process. The station-specific SVM training events include 11-302 positive (earthquake) and 227-1048 negative (non-earthquake) examples. The best voting rules for combining results from different stations are determined during an independent testing period. Finally, the network processing rules are applied to an independent evaluation period comprising 4681 fully automatic event determinations, of which 98 % have been manually identified as explosions or noise and 2 % as earthquakes. The SVM method correctly identifies 94 % of the non-earthquakes and all the earthquakes. The results imply that the SVM tool can identify and filter out blasts and spurious events from fully automatic event solutions with a high level of confidence. The tool helps to reduce the workload in manual seismic analysis by leaving only ~5 % of the automatic event determinations, i.e. the probable earthquakes, for more detailed seismological analysis. The approach presented is easy to adjust to the requirements of a denser or wider high-frequency network, once enough training examples for building a station-specific data set are available.
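    The sketch below is a minimal, synthetic-data illustration of the station-level classifier described above: an SVM over 80 discrimination parameters (4 signal windows x 20 frequency bands). Feature values and class sizes are invented for the example.

      # Python sketch using scikit-learn; the data is synthetic, not seismic.
      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X_eq = rng.normal(1.0, 0.3, size=(50, 80))    # "earthquake" feature rows
      X_no = rng.normal(0.4, 0.3, size=(300, 80))   # "non-earthquake" rows
      X = np.vstack([X_eq, X_no])
      y = np.array([1] * 50 + [0] * 300)            # 1 = earthquake

      clf = SVC(kernel="rbf", class_weight="balanced").fit(X, y)
      print(clf.predict(rng.normal(1.0, 0.3, size=(1, 80))))  # likely [1]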

  6. Formal Specification and Automatic Analysis of Business Processes under Authorization Constraints: An Action-Based Approach

    NASA Astrophysics Data System (ADS)

    Armando, Alessandro; Giunchiglia, Enrico; Ponta, Serena Elisa

    We present an approach to the formal specification and automatic analysis of business processes under authorization constraints based on the action language C. The use of C allows for a natural and concise modeling of the business process and the associated security policy and for the automatic analysis of the resulting specification by using the Causal Calculator (CCALC). Our approach improves upon previous work by greatly simplifying the specification step while retaining the ability to perform a fully automatic analysis. To illustrate the effectiveness of the approach, we describe its application to a version of a business process taken from the banking domain and use CCALC to determine resource allocation plans complying with the security policy.

  7. A fully automatic end-to-end method for content-based image retrieval of CT scans with similar liver lesion annotations.

    PubMed

    Spanier, A B; Caplan, N; Sosna, J; Acar, B; Joskowicz, L

    2018-01-01

    The goal of medical content-based image retrieval (M-CBIR) is to assist radiologists in the decision-making process by retrieving medical cases similar to a given image. One of the key interests of radiologists is lesions and their annotations, since the patient's treatment depends on the lesion diagnosis. Therefore, a key feature of M-CBIR systems is the retrieval of scans with the most similar lesion annotations. To be of value, M-CBIR systems should be fully automatic to handle large case databases. We present a fully automatic end-to-end method for the retrieval of CT scans with similar liver lesion annotations. The input is a database of abdominal CT scans labeled with liver lesions, a query CT scan, and optionally one radiologist-specified lesion annotation of interest. The output is an ordered list of the database CT scans with the most similar liver lesion annotations. The method starts by automatically segmenting the liver in the scan. It then extracts a histogram-based features vector from the segmented region, learns the features' relative importance, and ranks the database scans according to the relative importance measure. The main advantages of our method are that it fully automates the end-to-end querying process, that it uses simple and efficient techniques that are scalable to large datasets, and that it produces quality retrieval results using an unannotated CT scan. Our experimental results on 9 CT queries on a dataset of 41 volumetric CT scans from the 2014 Image CLEF Liver Annotation Task yield an average retrieval accuracy (Normalized Discounted Cumulative Gain index) of 0.77 and 0.84 without/with annotation, respectively. Fully automatic end-to-end retrieval of similar cases based on image information alone, rather than on disease diagnosis, may help radiologists to better diagnose liver lesions.
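    As a toy illustration of the ranking step (the histogram features and learned weights here are placeholders, not the paper's actual descriptors), database scans can be ordered by a weighted distance between feature vectors:

      # Python sketch: rank database scans by weighted L1 histogram distance.
      import numpy as np

      def rank_scans(query_hist, db_hists, weights):
          dists = [np.sum(weights * np.abs(query_hist - h)) for h in db_hists]
          return np.argsort(dists)            # most similar scans first

      query = np.array([0.2, 0.5, 0.3])       # hypothetical feature histogram
      db = [np.array([0.25, 0.45, 0.30]), np.array([0.70, 0.20, 0.10])]
      print(rank_scans(query, db, weights=np.array([1.0, 2.0, 1.0])))  # [0 1]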

  8. Automatic high throughput empty ISO container verification

    NASA Astrophysics Data System (ADS)

    Chalmers, Alex

    2007-04-01

    Encouraging results are presented for the automatic analysis of radiographic images of a continuous stream of ISO containers to confirm they are truly empty. A series of image processing algorithms is described that processes real-time data acquired during the actual inspection of each container and assigns each to one of the classes "empty", "not empty" or "suspect threat". This research is one step towards achieving fully automated analysis of cargo containers.

  9. SU-C-201-04: Quantification of Perfusion Heterogeneity Based On Texture Analysis for Fully Automatic Detection of Ischemic Deficits From Myocardial Perfusion Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Y; Huang, H; Su, T

    Purpose: Texture-based quantification of image heterogeneity has been a popular topic for imaging studies in recent years. As previous studies mainly focus on oncological applications, we report our recent efforts of applying such techniques on cardiac perfusion imaging. A fully automated procedure has been developed to perform texture analysis for measuring the image heterogeneity. Clinical data were used to evaluate the preliminary performance of such methods. Methods: Myocardial perfusion images of Thallium-201 scans were collected from 293 patients with suspected coronary artery disease. Each subject underwent a Tl-201 scan and a percutaneous coronary intervention (PCI) within three months. The PCI result was used as the gold standard of coronary ischemia of more than 70% stenosis. Each Tl-201 scan was spatially normalized to an image template for fully automatic segmentation of the left ventricle (LV). The segmented voxel intensities were then carried into the texture analysis with our open-source software Chang Gung Image Texture Analysis toolbox (CGITA). To evaluate the clinical performance of the image heterogeneity for detecting the coronary stenosis, receiver operating characteristic (ROC) analysis was used to compute the overall accuracy, sensitivity and specificity as well as the area under the curve (AUC). Those indices were compared to those obtained from the commercially available semi-automatic software QPS. Results: With the fully automatic procedure to quantify heterogeneity from Tl-201 scans, we were able to achieve a good discrimination with good accuracy (74%), sensitivity (73%), specificity (77%) and an AUC of 0.82. Such performance is similar to that obtained from the semi-automatic QPS software, which gives a sensitivity of 71% and specificity of 77%. Conclusion: Based on fully automatic procedures of data processing, our preliminary data indicate that the image heterogeneity of myocardial perfusion imaging can provide useful information for automatic determination of myocardial ischemia.
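    The ROC evaluation reported above can be reproduced in outline with standard tooling; the scores and PCI labels below are synthetic stand-ins:

      # Python sketch of the ROC analysis step (synthetic data).
      import numpy as np
      from sklearn.metrics import roc_auc_score, roc_curve

      labels = np.array([0, 0, 1, 1, 1, 0, 1])   # 1 = >70% stenosis per PCI
      scores = np.array([0.2, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6])  # heterogeneity
      fpr, tpr, thr = roc_curve(labels, scores)
      print("AUC =", roc_auc_score(labels, scores))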

  10. Automatic Assessment of 3D Modeling Exams

    ERIC Educational Resources Information Center

    Sanna, A.; Lamberti, F.; Paravati, G.; Demartini, C.

    2012-01-01

    Computer-based assessment of exams provides teachers and students with two main benefits: fairness and effectiveness in the evaluation process. This paper proposes a fully automatic evaluation tool for the Graphic and Virtual Design (GVD) curriculum at the First School of Architecture of the Politecnico di Torino, Italy. In particular, the tool is…

  11. Fully integrated low-noise readout circuit with automatic offset cancellation loop for capacitive microsensors.

    PubMed

    Song, Haryong; Park, Yunjong; Kim, Hyungseup; Cho, Dong-Il Dan; Ko, Hyoungho

    2015-10-14

    Capacitive sensing schemes are widely used for various microsensors; however, such microsensors suffer from severe parasitic capacitance problems. This paper presents a fully integrated low-noise readout circuit with an automatic offset cancellation loop (AOCL) for capacitive microsensors. The output offsets of the capacitive sensing chain due to the parasitic capacitances and process variations are automatically removed using the AOCL. The AOCL generates an electrically equivalent offset capacitance and enables charge-domain fine calibration using a 10-bit R-2R digital-to-analog converter, charge-transfer switches, and a charge-storing capacitor. The AOCL cancels the unwanted offset by a binary-search algorithm based on 10-bit successive approximation register (SAR) logic. The chip is implemented in a 0.18 μm complementary metal-oxide-semiconductor (CMOS) process with an active area of 1.76 mm². The power consumption is 220 μW with a 3.3 V supply. Input parasitic capacitances within the range of -250 fF to 250 fF can be cancelled out automatically, and the required calibration time is below 10 ms.
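    A software model of the 10-bit successive-approximation (binary search) offset cancellation is sketched below; the comparator and R-2R DAC are idealized stand-ins for the on-chip blocks:

      # Python sketch of SAR binary search; residual(code) models the
      # comparator reading: sensor offset minus DAC compensation at `code`.
      def sar_calibrate(residual, bits=10):
          code = 0
          for bit in range(bits - 1, -1, -1):
              trial = code | (1 << bit)
              if residual(trial) >= 0:   # not yet overcompensated: keep bit
                  code = trial
          return code

      # Idealized example: the true offset corresponds to DAC code 617.
      print(sar_calibrate(lambda c: 617 - c))  # -> 617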

  12. Fully Integrated Low-Noise Readout Circuit with Automatic Offset Cancellation Loop for Capacitive Microsensors

    PubMed Central

    Song, Haryong; Park, Yunjong; Kim, Hyungseup; Cho, Dong-il Dan; Ko, Hyoungho

    2015-01-01

    Capacitive sensing schemes are widely used for various microsensors; however, such microsensors suffer from severe parasitic capacitance problems. This paper presents a fully integrated low-noise readout circuit with an automatic offset cancellation loop (AOCL) for capacitive microsensors. The output offsets of the capacitive sensing chain due to the parasitic capacitances and process variations are automatically removed using the AOCL. The AOCL generates an electrically equivalent offset capacitance and enables charge-domain fine calibration using a 10-bit R-2R digital-to-analog converter, charge-transfer switches, and a charge-storing capacitor. The AOCL cancels the unwanted offset by a binary-search algorithm based on 10-bit successive approximation register (SAR) logic. The chip is implemented in a 0.18 μm complementary metal-oxide-semiconductor (CMOS) process with an active area of 1.76 mm². The power consumption is 220 μW with a 3.3 V supply. Input parasitic capacitances within the range of −250 fF to 250 fF can be cancelled out automatically, and the required calibration time is below 10 ms. PMID:26473877

  13. Ultramap v3 - a Revolution in Aerial Photogrammetry

    NASA Astrophysics Data System (ADS)

    Reitinger, B.; Sormann, M.; Zebedin, L.; Schachinger, B.; Hoefler, M.; Tomasi, R.; Lamperter, M.; Gruber, B.; Schiester, G.; Kobald, M.; Unger, M.; Klaus, A.; Bernoegger, S.; Karner, K.; Wiechert, A.; Ponticelli, M.; Gruber, M.

    2012-07-01

    In recent years, Microsoft has driven innovation in the aerial photogrammetry community. Besides the market-leading camera technology, UltraMap has grown into an outstanding photogrammetric workflow system which enables users to work effectively with large digital aerial image blocks in a highly automated way. The best example is the project-based color balancing approach, which automatically balances images to a homogeneous block. UltraMap V3 continues this innovation and offers a revolution in ortho processing. A fully automated dense matching module produces high-precision digital surface models (DSMs), which are calculated either on CPUs or on GPUs using a distributed processing framework. By applying constrained filtering algorithms, a digital terrain model can be derived, which in turn can be used for fully automated traditional ortho texturing. With knowledge of the underlying geometry, seamlines can be generated automatically by applying cost functions that minimize visually disturbing artifacts. By exploiting the generated DSM information, a DSMOrtho is created using the balanced input images. Again, seamlines are detected automatically, resulting in an automatically balanced ortho mosaic. Interactive block-based radiometric adjustments lead to a high-quality ortho product based on UltraCam imagery. UltraMap v3 is the first fully integrated and interactive solution for making the best use of UltraCam images to deliver DSM and ortho imagery.

  14. Real-time piloted simulation of fully automatic guidance and control for rotorcraft nap-of-the-earth (NOE) flight following planned profiles

    NASA Technical Reports Server (NTRS)

    Clement, Warren F.; Gorder, Peter J.; Jewell, Wayne F.; Coppenbarger, Richard

    1990-01-01

    Developing a single-pilot all-weather NOE capability requires fully automatic NOE navigation and flight control. Innovative guidance and control concepts are being investigated to (1) organize the onboard computer-based storage and real-time updating of NOE terrain profiles and obstacles; (2) define a class of automatic anticipative pursuit guidance algorithms to follow the vertical, lateral, and longitudinal guidance commands; (3) automate a decision-making process for unexpected obstacle avoidance; and (4) provide several rapid response maneuvers. Acquired knowledge from the sensed environment is correlated with the recorded environment, which is then used to determine an appropriate evasive maneuver if a nonconformity is observed. This research effort has been evaluated in both fixed-base and moving-base real-time piloted simulations, thereby evaluating pilot acceptance of the automated concepts, supervisory override, manual operation, and reengagement of the automatic system.

  15. Surface smoothness: cartilage biomarkers for knee OA beyond the radiologist

    NASA Astrophysics Data System (ADS)

    Tummala, Sudhakar; Dam, Erik B.

    2010-03-01

    Fully automatic imaging biomarkers may allow quantification of patho-physiological processes that a radiologist would not be able to assess reliably. This can introduce new insight but is problematic to validate due to the lack of meaningful ground-truth expert measurements. Rather than quantification accuracy, such novel markers must therefore be validated against clinically meaningful end-goals such as the ability to allow correct diagnosis. We present a method for automatic cartilage surface smoothness quantification in the knee joint. The quantification is based on a curvature flow method applied to the tibial and femoral cartilage compartments resulting from an automatic segmentation scheme. These smoothness estimates are validated for their ability to diagnose osteoarthritis and compared to smoothness estimates based on manual expert segmentations and to conventional cartilage volume quantification. We demonstrate that the fully automatic markers eliminate the time required for radiologist annotations and, in addition, provide a diagnostic marker superior to the evaluated semi-manual markers.
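    A drastically simplified, 2D stand-in for the smoothness marker is sketched below; the paper's method uses curvature flow on 3D cartilage segmentations, which this toy contour example does not reproduce:

      # Python sketch: mean absolute curvature of a contour as a roughness
      # score (lower = smoother). Contours here are synthetic.
      import numpy as np

      def mean_abs_curvature(x, y):
          dx, dy = np.gradient(x), np.gradient(y)
          ddx, ddy = np.gradient(dx), np.gradient(dy)
          k = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
          return np.mean(np.abs(k))

      t = np.linspace(0, 2 * np.pi, 200)
      rng = np.random.default_rng(0)
      smooth = mean_abs_curvature(np.cos(t), np.sin(t))            # circle
      rough = mean_abs_curvature(np.cos(t) + 0.05 * rng.normal(size=200),
                                 np.sin(t))                        # noisy
      print(smooth < rough)  # -> True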

  16. Automatic chemical vapor deposition

    NASA Technical Reports Server (NTRS)

    Kennedy, B. W.

    1981-01-01

    Report reviews chemical vapor deposition (CVD) for processing integrated circuits and describes fully automatic machine for CVD. CVD proceeds at relatively low temperature, allows wide choice of film compositions (including graded or abruptly changing compositions), and deposits uniform films of controllable thickness at fairly high growth rate. Report gives overview of hardware, reactants, and temperature ranges used with CVD machine.

  17. Automatic Coregistration and orthorectification (ACRO) and subsequent mosaicing of NASA high-resolution imagery over the Mars MC11 quadrangle, using HRSC as a baseline

    NASA Astrophysics Data System (ADS)

    Sidiropoulos, Panagiotis; Muller, Jan-Peter; Watson, Gillian; Michael, Gregory; Walter, Sebastian

    2018-02-01

    This work presents the coregistered, orthorectified and mosaiced high-resolution products of the MC11 quadrangle of Mars, which have been processed using novel, fully automatic techniques. We discuss the development of a pipeline that achieves fully automatic and parameter-independent geometric alignment of high-resolution planetary images, starting from raw input images in NASA PDS format and following all required steps to produce a coregistered geotiff image, a corresponding footprint and useful metadata. Additionally, we describe the development of a radiometric calibration technique that post-processes coregistered images to make them radiometrically consistent. Finally, we present a batch-mode application of the developed techniques over the MC11 quadrangle to validate their potential, as well as to generate end products, which are released to the planetary science community, thus assisting in the analysis of Mars' static and dynamic features. This case study is a step towards the full automation of signal processing tasks that are essential to increase the usability of planetary data but currently require the extensive use of human resources.

  18. TU-H-CAMPUS-JeP1-02: Fully Automatic Verification of Automatically Contoured Normal Tissues in the Head and Neck

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCarroll, R; UT Health Science Center, Graduate School of Biomedical Sciences, Houston, TX; Beadle, B

    Purpose: To investigate and validate the use of an independent deformable-based contouring algorithm for automatic verification of auto-contoured structures in the head and neck towards fully automated treatment planning. Methods: Two independent automatic contouring algorithms [(1) Eclipse's Smart Segmentation followed by pixel-wise majority voting, (2) an in-house multi-atlas based method] were used to create contours of 6 normal structures of 10 head-and-neck patients. After rating by a radiation oncologist, the higher performing algorithm was selected as the primary contouring method, the other used for automatic verification of the primary. To determine the ability of the verification algorithm to detect incorrect contours, contours from the primary method were shifted from 0.5 to 2 cm. Using a logit model the structure-specific minimum detectable shift was identified. The models were then applied to a set of twenty different patients and the sensitivity and specificity of the models verified. Results: Per physician rating, the multi-atlas method (4.8/5 point scale, with 3 rated as generally acceptable for planning purposes) was selected as primary and the Eclipse-based method (3.5/5) for verification. Mean distance to agreement and true positive rate were selected as covariates in an optimized logit model. These models, when applied to a group of twenty different patients, indicated that shifts could be detected at 0.5 cm (brain), 0.75 cm (mandible, cord), 1 cm (brainstem, cochlea), or 1.25 cm (parotid), with sensitivity and specificity greater than 0.95. If sensitivity and specificity constraints are reduced to 0.9, the detectable shifts of mandible and brainstem were reduced by 0.25 cm. These shifts represent additional safety margins which might be considered if auto-contours are used for automatic treatment planning without physician review. Conclusion: Automatically contoured structures can be automatically verified. This fully automated process could be used to flag auto-contours for special review or used with safety margins in a fully automatic treatment planning system.

  19. Automatic Cell Segmentation in Fluorescence Images of Confluent Cell Monolayers Using Multi-object Geometric Deformable Model.

    PubMed

    Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L

    2013-03-13

    With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and a principal-curvature-based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results, and guarantees no overlaps or gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.
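    The order-statistic filtering in idea 2) can be sketched with an off-the-shelf percentile filter; the window size and percentile below are assumptions, not the paper's parameters:

      # Python sketch: order-statistic (percentile) filtering of an image.
      import numpy as np
      from scipy.ndimage import percentile_filter

      image = np.random.rand(128, 128)     # stand-in fluorescence image
      enhanced = percentile_filter(image, percentile=50, size=5)  # median-like
      print(enhanced.shape)                # (128, 128)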

  20. 3D image processing architecture for camera phones

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Goma, Sergio R.; Aleksic, Milivoje

    2011-03-01

    Putting high-quality and easy-to-use 3D technology into the hands of regular consumers has become a recent challenge as interest in 3D technology has grown. Making 3D technology appealing to the average user requires that it be made fully automatic and foolproof. Designing a fully automatic 3D capture and display system requires: 1) identifying critical 3D technology issues like camera positioning, disparity control rationale, and screen geometry dependency; 2) designing a methodology to automatically control them. Implementing 3D capture functionality on phone cameras necessitates designing algorithms that fit within the processing capabilities of the device. Various constraints like sensor position tolerances, sensor 3A tolerances, post-processing, 3D video resolution and frame rate should be carefully considered for their influence on the 3D experience. Issues with migrating functions such as zoom and pan from the 2D usage model (both during capture and display) to 3D need to be resolved to ensure the highest level of user experience. It is also very important that the 3D usage scenario (including interactions between the user and the capture/display device) is carefully considered. Finally, both the processing power of the device and the practicality of the scheme need to be taken into account while designing the calibration and processing methodology.

  1. Automatic detection of articulation disorders in children with cleft lip and palate.

    PubMed

    Maier, Andreas; Hönig, Florian; Bocklet, Tobias; Nöth, Elmar; Stelzle, Florian; Nkenke, Emeka; Schuster, Maria

    2009-11-01

    Speech of children with cleft lip and palate (CLP) is sometimes still disordered even after adequate surgical and nonsurgical therapies. Such speech shows complex articulation disorders, which are usually assessed perceptually, consuming time and manpower. Hence, there is a need for an easy-to-apply and reliable automatic method. To create a reference for an automatic system, speech data of 58 children with CLP were assessed perceptually by experienced speech therapists for characteristic phonetic disorders at the phoneme level. The first part of the article aims to detect such characteristics by a semiautomatic procedure and the second to evaluate a fully automatic, thus simple, procedure. The methods are based on a combination of speech processing algorithms. The semiautomatic method achieves moderate to good agreement (kappa approximately 0.6) for the detection of all phonetic disorders. On a speaker level, significant correlations between the perceptual evaluation and the automatic system of 0.89 are obtained. The fully automatic system yields a correlation on the speaker level of 0.81 to the perceptual evaluation. This correlation is in the range of the inter-rater correlation of the listeners. The automatic speech evaluation is able to detect phonetic disorders at an expert's level without any additional human postprocessing.

  2. Fully automatic oil spill detection from COSMO-SkyMed imagery using a neural network approach

    NASA Astrophysics Data System (ADS)

    Avezzano, Ruggero G.; Del Frate, Fabio; Latini, Daniele

    2012-09-01

    The increased amount of available Synthetic Aperture Radar (SAR) images acquired over the ocean represents an extraordinary potential for improving oil spill detection activities. On the other hand, this involves a growing workload on the operators at analysis centers. In addition, even if the operators go through extensive training to learn manual oil spill detection, they can provide different and subjective responses. Hence, upgrades and improvements of algorithms for automatic detection that can help in screening the images and prioritizing the alarms are of great benefit. In the framework of an ASI Announcement of Opportunity for the exploitation of COSMO-SkyMed data, a research activity (ASI contract L/020/09/0) aimed at studying the possibility of using neural network architectures to set up fully automatic processing chains using COSMO-SkyMed imagery has been carried out, and results are presented in this paper. The automatic identification of an oil spill is seen as a three-step process based on segmentation, feature extraction and classification. We observed that a PCNN (Pulse Coupled Neural Network) was capable of providing satisfactory performance in extracting the different dark spots, close to what would be produced by manual editing. For the classification task a Multi-Layer Perceptron (MLP) Neural Network was employed.
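    For the classification stage, a bare-bones stand-in for the MLP is sketched below; the three features and the network size are invented for illustration:

      # Python sketch: MLP labeling candidate dark spots (synthetic features).
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(1)
      X = rng.random((200, 3))              # e.g. area, contrast, shape factor
      y = (X[:, 1] > 0.5).astype(int)       # 1 = oil spill (synthetic labels)
      mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(X, y)
      print(mlp.predict(X[:5]))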

  3. Laser Balancing

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Mechanical Technology, Incorporated developed a fully automatic laser machining process that allows more precise balancing, removes metal faster, eliminates excess metal removal and other operator-induced inaccuracies, and provides a significant reduction in balancing time. Manufacturing costs are reduced as a result.

  4. Automatization of hydrodynamic modelling in a Floreon+ system

    NASA Astrophysics Data System (ADS)

    Ronovsky, Ales; Kuchar, Stepan; Podhoranyi, Michal; Vojtek, David

    2017-07-01

    The paper describes fully automatized hydrodynamic modelling as a part of the Floreon+ system. The main purpose of hydrodynamic modelling in disaster management is to provide an accurate overview of the hydrological situation in a given river catchment. Automatization of the process as a web service could provide immediate data under extreme weather conditions, such as heavy rainfall, without the intervention of an expert. Such a service can be used by non-scientific users such as firefighter operators or representatives of a military service organizing evacuations during floods or river dam breaks. The paper describes the whole process, beginning with the definition of the schematization necessary for the hydrodynamic model, the gathering of necessary data and its processing for a simulation, the model itself, and the post-processing and visualization of results on a web service. The process is demonstrated on real data collected during the 2010 floods in the Moravian-Silesian region.

  5. Fully automatic guidance and control for rotorcraft nap-of-the-Earth flight following planned profiles. Volume 1: Real-time piloted simulation

    NASA Technical Reports Server (NTRS)

    Clement, Warren F.; Gorder, Peter J.; Jewell, Wayne F.

    1991-01-01

    Developing a single-pilot, all-weather nap-of-the-earth (NOE) capability requires fully automatic NOE (ANOE) navigation and flight control. Innovative guidance and control concepts are investigated in a four-fold research effort that: (1) organizes the on-board computer-based storage and real-time updating of NOE terrain profiles and obstacles in course-oriented coordinates indexed to the mission flight plan; (2) defines a class of automatic anticipative pursuit guidance algorithms and necessary data preview requirements to follow the vertical, lateral, and longitudinal guidance commands dictated by the updated flight profiles; (3) automates a decision-making process for unexpected obstacle avoidance; and (4) provides several rapid response maneuvers. Acquired knowledge from the sensed environment is correlated with the forehand knowledge of the recorded environment (terrain, cultural features, threats, and targets), which is then used to determine an appropriate evasive maneuver if a nonconformity of the sensed and recorded environments is observed. This four-fold research effort was evaluated in both fixed-base and moving-base real-time piloted simulations, thereby providing a practical demonstration for evaluating pilot acceptance of the automated concepts, supervisory override, manual operation, and re-engagement of the automatic system. Volume one describes the major components of the guidance and control laws as well as the results of the piloted simulations. Volume two describes the complete mathematical model of the fully automatic guidance system for rotorcraft NOE flight following planned flight profiles.

  6. Fully Automatic Guidance and Control for Rotorcraft Nap-of-the-earth Flight Following Planned Profiles. Volume 2: Mathematical Model

    NASA Technical Reports Server (NTRS)

    Clement, Warren F.; Gorder, Peter J.; Jewell, Wayne F.

    1991-01-01

    Developing a single-pilot, all-weather nap-of-the-earth (NOE) capability requires fully automatic NOE (ANOE) navigation and flight control. Innovative guidance and control concepts are investigated in a four-fold research effort that: (1) organizes the on-board computer-based storage and real-time updating of NOE terrain profiles and obstacles in course-oriented coordinates indexed to the mission flight plan; (2) defines a class of automatic anticipative pursuit guidance algorithms and necessary data preview requirements to follow the vertical, lateral, and longitudinal guidance commands dictated by the updated flight profiles; (3) automates a decision-making process for unexpected obstacle avoidance; and (4) provides several rapid response maneuvers. Acquired knowledge from the sensed environment is correlated with the forehand knowledge of the recorded environment (terrain, cultural features, threats, and targets), which is then used to determine an appropriate evasive maneuver if a nonconformity of the sensed and recorded environments is observed. This four-fold research effort was evaluated in both fixed-base and moving-base real-time piloted simulations, thereby providing a practical demonstration for evaluating pilot acceptance of the automated concepts, supervisory override, manual operation, and re-engagement of the automatic system. Volume one describes the major components of the guidance and control laws as well as the results of the piloted simulations. Volume two describes the complete mathematical model of the fully automatic guidance system for rotorcraft NOE flight following planned flight profiles.

  7. Pipeline Reduction of Binary Light Curves from Large-Scale Surveys

    NASA Astrophysics Data System (ADS)

    Prša, Andrej; Zwitter, Tomaž

    2007-08-01

    One of the most important changes in observational astronomy of the 21st Century is a rapid shift from classical object-by-object observations to extensive automatic surveys. As CCD detectors are getting better and their prices are getting lower, more and more small and medium-size observatories are refocusing their attention to detection of stellar variability through systematic sky-scanning missions. This trend is additionally powered by the success of pioneering surveys such as ASAS, DENIS, OGLE, TASS, their space counterpart Hipparcos and others. Such surveys produce massive amounts of data and it is not at all clear how these data are to be reduced and analysed. This is especially striking in the eclipsing binary (EB) field, where most frequently used tools are optimized for object-by-object analysis. A clear need for thorough, reliable and fully automated approaches to modeling and analysis of EB data is thus obvious. This task is very difficult because of limited data quality, non-uniform phase coverage and parameter degeneracy. The talk will review recent advancements in putting together semi-automatic and fully automatic pipelines for EB data processing. Automatic procedures have already been used to process the Hipparcos data, LMC/SMC observations, OGLE and ASAS catalogs etc. We shall discuss the advantages and shortcomings of these procedures and overview the current status of automatic EB modeling pipelines for the upcoming missions such as CoRoT, Kepler, Gaia and others.

  8. A fast and fully automatic registration approach based on point features for multi-source remote-sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Le; Zhang, Dengrong; Holden, Eun-Jung

    2008-07-01

    Automatic registration of multi-source remote-sensing images is a difficult task, as it must deal with the varying illuminations and resolutions of the images, different perspectives and the local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses those issues. The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting their matching points by using the scale invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration process based on a piecewise linear transformation technique using feature points that are detected by the Harris corner detector. The registration process first finds, in succession, tie point pairs between the input and the reference image by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for a fast search speed. Tie point pairs with large errors are pruned by an error-checking step. The input image is then rectified by using triangulated irregular networks (TINs) to deal with irregular local deformations caused by the fluctuation of the terrain. For each triangular facet of the TIN, affine transformations are estimated and applied for rectification. Experiments with Quickbird, SPOT5, SPOT4, and TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and the accuracy of the proposed technique for multi-source remote-sensing image registration.
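    The coarse pre-registration step (SIFT matching plus an affine model) can be condensed as follows, assuming OpenCV is available; the file names are placeholders:

      # Python sketch: SIFT matches feeding a RANSAC-estimated affine model.
      import cv2
      import numpy as np

      img_in = cv2.imread("input.tif", cv2.IMREAD_GRAYSCALE)
      img_ref = cv2.imread("reference.tif", cv2.IMREAD_GRAYSCALE)

      sift = cv2.SIFT_create()
      kp1, des1 = sift.detectAndCompute(img_in, None)
      kp2, des2 = sift.detectAndCompute(img_ref, None)

      matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
      src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
      dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

      # Affine transform with outlier rejection, then coarse warping.
      A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
      coarse = cv2.warpAffine(img_in, A, (img_ref.shape[1], img_ref.shape[0]))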

  9. Emotion and sex of facial stimuli modulate conditional automaticity in behavioral and neuronal interference in healthy men.

    PubMed

    Kohn, Nils; Fernández, Guillén

    2017-12-06

    Our surroundings provide a host of sensory input, which we cannot fully process without streamlining and automatic processing. Levels of automaticity differ for different cognitive and affective processes. Situational and contextual interactions between cognitive and affective processes in turn influence the level of automaticity. Automaticity can be measured by interference in Stroop tasks. We applied an emotional version of the Stroop task to investigate how stress as a contextual factor influences the affective valence-dependent level of automaticity. 120 young, healthy men were investigated for behavioral and brain interference following a stress induction or control procedure in a counter-balanced cross-over design. Although Stroop interference was always observed, sex and emotion of the face strongly modulated interference, which was larger for fearful and male faces. These effects suggest higher automaticity when processing happy and also female faces. Supporting the behavioral patterns, brain data show lower interference-related brain activity in executive control related regions in response to happy and female faces. In the absence of behavioral stress effects, congruent compared to incongruent trials (reverse interference) showed little to no deactivation under stress in response to happy female and fearful male trials. These congruency effects are potentially based on altered context- (stress-) related facial processing that interacts with sex-emotion stereotypes. The results indicate that sex and facial emotion modulate Stroop interference in brain and behavior. These effects can be explained by altered response difficulty as a consequence of the contextual and stereotype-related modulation of automaticity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Fully automatic measurements of axial vertebral rotation for assessment of spinal deformity in idiopathic scoliosis

    NASA Astrophysics Data System (ADS)

    Forsberg, Daniel; Lundström, Claes; Andersson, Mats; Vavruch, Ludvig; Tropp, Hans; Knutsson, Hans

    2013-03-01

    Reliable measurements of spinal deformities in idiopathic scoliosis are vital, since they are used for assessing the degree of scoliosis, deciding upon treatment and monitoring the progression of the disease. However, commonly used two-dimensional methods (e.g. the Cobb angle) do not fully capture the three-dimensional deformity at hand in scoliosis, of which axial vertebral rotation (AVR) is considered to be of great importance. There are manual methods for measuring the AVR, but they are often time-consuming and associated with high intra- and inter-observer variability. In this paper, we present a fully automatic method for estimating the AVR in images from computed tomography. The proposed method is evaluated on four scoliotic patients with 17 vertebrae each and compared with manual measurements performed by three observers using the standard method by Aaro-Dahlborn. The comparison shows that the difference in measured AVR between automatic and manual measurements is on the same level as the inter-observer difference. This is further supported by a high intraclass correlation coefficient (0.971-0.979), obtained when comparing the automatic measurements with the manual measurements of each observer. Hence, the provided results and the computational performance, requiring only approximately 10 to 15 s for processing an entire volume, demonstrate the potential clinical value of the proposed method.

  11. An Approach for Automatic Generation of Adaptive Hypermedia in Education with Multilingual Knowledge Discovery Techniques

    ERIC Educational Resources Information Center

    Alfonseca, Enrique; Rodriguez, Pilar; Perez, Diana

    2007-01-01

    This work describes a framework that combines techniques from Adaptive Hypermedia and Natural Language processing in order to create, in a fully automated way, on-line information systems from linear texts in electronic format, such as textbooks. The process is divided into two steps: an "off-line" processing step, which analyses the source text,…

  12. Strategies for automatic processing of large aftershock sequences

    NASA Astrophysics Data System (ADS)

    Kvaerna, T.; Gibbons, S. J.

    2017-12-01

    Aftershock sequences following major earthquakes present great challenges to seismic bulletin generation. The analyst resources needed to locate events increase with increased event numbers as the quality of underlying, fully automatic, event lists deteriorates. While current pipelines, designed a generation ago, are usually limited to single passes over the raw data, modern systems also allow multiple passes. Processing the raw data from each station currently generates parametric data streams that are later subject to phase-association algorithms which form event hypotheses. We consider a major earthquake scenario and propose to define a region of likely aftershock activity in which we will detect and accurately locate events using a separate, specially targeted, semi-automatic process. This effort may use either pattern detectors or more general algorithms that cover wider source regions without requiring waveform similarity. An iterative procedure to generate automatic bulletins would incorporate all the aftershock event hypotheses generated by the auxiliary process, and filter all phases from these events from the original detection lists prior to a new iteration of the global phase-association algorithm.

  13. The automaticity of face perception is influenced by familiarity.

    PubMed

    Yan, Xiaoqian; Young, Andrew W; Andrews, Timothy J

    2017-10-01

    In this study, we explore the automaticity of encoding for different facial characteristics and ask whether it is influenced by face familiarity. We used a matching task in which participants had to report whether the gender, identity, race, or expression of two briefly presented faces was the same or different. The task was made challenging by allowing nonrelevant dimensions to vary across trials. To test for automaticity, we compared performance on trials in which the task instruction was given at the beginning of the trial, with trials in which the task instruction was given at the end of the trial. As a strong criterion for automatic processing, we reasoned that if perception of a given characteristic (gender, race, identity, or emotion) is fully automatic, the timing of the instruction should not influence performance. We compared automaticity for the perception of familiar and unfamiliar faces. Performance with unfamiliar faces was higher for all tasks when the instruction was given at the beginning of the trial. However, we found a significant interaction between instruction and task with familiar faces. Accuracy of gender and identity judgments to familiar faces was the same regardless of whether the instruction was given before or after the trial, suggesting automatic processing of these properties. In contrast, there was an effect of instruction for judgments of expression and race to familiar faces. These results show that familiarity enhances the automatic processing of some types of facial information more than others.

  14. A fully automatic processing chain to produce Burn Scar Mapping products, using the full Landsat archive over Greece

    NASA Astrophysics Data System (ADS)

    Kontoes, Charalampos; Papoutsis, Ioannis; Herekakis, Themistoklis; Michail, Dimitrios; Ieronymidi, Emmanuela

    2013-04-01

    Remote sensing tools for the accurate, robust and timely assessment of the damage inflicted by forest wildfires provide information that is of paramount importance to public environmental agencies and related stakeholders before, during and after the crisis. The Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing of the National Observatory of Athens (IAASARS/NOA) has developed a fully automatic single and/or multi-date processing chain that takes as input archived Landsat 4, 5 or 7 raw images and produces precise diachronic burnt area polygons and damage assessments over the Greek territory. The methodology consists of three fully automatic stages: 1) the pre-processing stage, where the metadata of the raw images are extracted, followed by the application of the LEDAPS software platform for calibration and mask production and the Automated Precise Orthorectification Package, developed by NASA, for image geo-registration and orthorectification; 2) the core BSM (Burn Scar Mapping) processing stage, which incorporates a published classification algorithm based on a series of physical indexes, the application of two filters for noise removal using graph-based techniques, and the grouping of pixels classified as burnt into appropriate pixel clusters before conversion from raster to vector; and 3) the post-processing stage, where the products are thematically refined and enriched using auxiliary GIS layers (underlying land cover/use, administrative boundaries, etc.) and human logic/evidence to suppress false alarms and omission errors. The established processing chain has been successfully applied to the entire archive of Landsat imagery over Greece spanning from 1984 to 2012, which has been collected and managed at IAASARS/NOA. A total of 415 full Landsat frames were processed in the framework of the study. These burn scar mapping products are generated for the first time at such a temporal and spatial extent and are ideal for use in further environmental time-series analyses, the production of statistical indexes (frequency, geographical distribution and number of fires per prefecture) and applications including change detection and climate change models, urban planning, and correlation with man-made activities.
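    One standard physical index for Landsat burn scar mapping is the Normalized Burn Ratio (NBR); whether the published classifier uses exactly this index is an assumption, but the sketch below shows the general form of such index-based detection:

      # Python sketch: NBR and dNBR from NIR and SWIR reflectances.
      import numpy as np

      def nbr(nir, swir):
          return (nir - swir) / (nir + swir + 1e-9)

      # dNBR = pre-fire NBR minus post-fire NBR; high values suggest burning.
      pre = nbr(np.array([0.45]), np.array([0.15]))
      post = nbr(np.array([0.20]), np.array([0.30]))
      print((pre - post) > 0.27)  # indicative burnt-area threshold only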

  15. Brain tumor segmentation in MR slices using improved GrowCut algorithm

    NASA Astrophysics Data System (ADS)

    Ji, Chunhong; Yu, Jinhua; Wang, Yuanyuan; Chen, Liang; Shi, Zhifeng; Mao, Ying

    2015-12-01

    The detection of brain tumors from MR images is very significant for medical diagnosis and treatment. However, the existing methods are mostly based on manual or semiautomatic segmentation, which is awkward when dealing with a large number of MR slices. In this paper, a new fully automatic method for the segmentation of brain tumors in MR slices is presented. Based on the hypothesis of a symmetric brain structure, the method improves the interactive GrowCut algorithm by further using the bounding box algorithm in the pre-processing step. More importantly, local reflectional symmetry is used to make up for the deficiencies of the bounding box method. After segmentation, a 3D tumor image is reconstructed. We evaluate the accuracy of the proposed method on MR slices with synthetic tumors and actual clinical MR images. The result of the proposed method is compared with the actual position of the simulated 3D tumor qualitatively and quantitatively. In addition, our automatic method produces performance equivalent to manual segmentation and to the interactive GrowCut with manual interference, while providing fully automatic segmentation.

  16. Evaluation of new data processing algorithms for planar gated ventriculography (MUGA)

    PubMed Central

    Fair, Joanna R.; Telepak, Robert J.

    2009-01-01

    Before implementing one of two new LVEF radionuclide gated ventriculogram (MUGA) systems, the results from 312 consecutive parallel patient studies were evaluated. Each gamma-camera acquisition was simultaneously processed by the semi-automatic Medasys Pinnacle system and by fully automatic and semi-automatic Philips nuclear medicine computer systems. The Philips systems yielded LVEF results within ±5 LVEF percentage points of the Medasys system in fewer than half of the studies. The remaining values were higher or lower than those from the long-used Medasys system. These differences might have changed cancer patient chemotherapy clinical decisions. As a result, our institution elected not to implement either new system. PACS: 87.57.U- Nuclear medicine imaging

  17. Automatic Quadcopter Control Avoiding Obstacle Using Camera with Integrated Ultrasonic Sensor

    NASA Astrophysics Data System (ADS)

    Anis, Hanafi; Haris Indra Fadhillah, Ahmad; Darma, Surya; Soekirno, Santoso

    2018-04-01

    Automatic navigation for drones is being actively developed these days, with a wide variety of drone types and automatic functions. The drone used in this study was an aircraft with four propellers, or quadcopter. In this experiment, image processing was used to recognize the position of an object and an ultrasonic sensor was used to detect the obstacle distance. The method used to trace an obstacle in image processing was the Lucas-Kanade-Tomasi Tracker, which has been widely used due to its high accuracy. The ultrasonic sensor complements the image processing so that objects are fully detected. PID controllers based on visual feedback control are used to control the drone's movement. The obstacle avoidance system was evaluated by observing the program's decisions under several obstacle conditions read by the camera and ultrasonic sensors.
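    A minimal discrete PID controller of the kind described for the visual-feedback movement control is sketched below; the gains are illustrative, not tuned values from the study:

      # Python sketch: discrete PID controller.
      class PID:
          def __init__(self, kp, ki, kd):
              self.kp, self.ki, self.kd = kp, ki, kd
              self.integral = 0.0
              self.prev_error = 0.0

          def update(self, error, dt):
              self.integral += error * dt
              derivative = (error - self.prev_error) / dt
              self.prev_error = error
              return (self.kp * error + self.ki * self.integral
                      + self.kd * derivative)

      pid = PID(kp=0.8, ki=0.1, kd=0.2)
      print(pid.update(error=1.5, dt=0.02))  # command toward the setpoint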

  18. The Automatic Conservative: Ideology-Based Attentional Asymmetries in the Processing of Valenced Information

    PubMed Central

    Carraro, Luciana; Castelli, Luigi; Macchiella, Claudia

    2011-01-01

    Research has widely explored the differences between conservatives and liberals, and it has also recently been demonstrated that conservatives display different reactions toward valenced stimuli. However, previous studies have not yet fully illuminated the cognitive underpinnings of these differences. In the current work, we argued that political ideology is related to selective attention processes, so that negative stimuli are more likely to automatically grab the attention of conservatives as compared to liberals. In Experiment 1, we demonstrated that negative (vs. positive) information impaired the performance of conservatives, more than liberals, in an Emotional Stroop Task. This finding was confirmed in Experiment 2 and in Experiment 3 employing a Dot-Probe Task, demonstrating that threatening stimuli were more likely to attract the attention of conservatives. Overall, the results support the conclusion that people embracing conservative views of the world display an automatic selective attention for negative stimuli. PMID:22096486

  19. Automated feature detection and identification in digital point-ordered signals

    DOEpatents

    Oppenlander, Jane E.; Loomis, Kent C.; Brudnoy, David M.; Levy, Arthur J.

    1998-01-01

    A computer-based automated method to detect and identify features in digital point-ordered signals. The method is used for the processing of non-destructive test signals, such as eddy current signals obtained from calibration standards. The signals are first automatically processed to remove noise and to determine a baseline. Next, features are detected in the signals using mathematical morphology filters. Finally, verification of the features is made using an expert system of pattern recognition methods and geometric criteria. The method has the advantage that standard features can be located without prior knowledge of the number or sequence of the features. Further advantages are that standard features can be differentiated from irrelevant signal features such as noise, and detected features are automatically verified by parameters extracted from the signals. The method proceeds fully automatically without initial operator set-up and without subjective operator feature judgement.
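    In the spirit of the patent's morphology-based detection (the exact filters are not specified here), a white top-hat filter separates localized features from a drifting baseline; the structure size below is an assumption:

      # Python sketch: white top-hat on a 1D point-ordered signal.
      import numpy as np
      from scipy.ndimage import white_tophat

      t = np.linspace(0, 1, 500)
      signal = 0.2 * t + np.exp(-((t - 0.5) ** 2) / 1e-4)  # drift + peak
      peaks = white_tophat(signal, size=25)                # baseline removed
      print(np.argmax(peaks), "~", np.argmax(signal))      # both near 250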

  20. Automatic Near-Real-Time Image Processing Chain for Very High Resolution Optical Satellite Data

    NASA Astrophysics Data System (ADS)

    Ostir, K.; Cotar, K.; Marsetic, A.; Pehani, P.; Perse, M.; Zaksek, K.; Zaletelj, J.; Rodic, T.

    2015-04-01

    In response to the increasing need for automatic and fast satellite image processing, SPACE-SI has developed and implemented the fully automatic image processing chain STORM, which performs all processing steps from sensor-corrected optical images (level 1) to web-delivered map-ready images and products without operator intervention. Initial development was tailored to high resolution RapidEye images, and all crucial and most challenging parts of the planned full processing chain were developed: a module for automatic image orthorectification based on a physical sensor model and supported by an algorithm for automatic detection of ground control points (GCPs); an atmospheric correction module; a topographic corrections module that combines a physical approach with the Minnaert method and utilizes an anisotropic illumination model; and modules for generating high level products. Various parts of the chain were also implemented for WorldView-2, THEOS, Pleiades, SPOT 6, Landsat 5-8, and PROBA-V. Support for the full-frame sensor currently under development by SPACE-SI is planned. This paper focuses on the adaptation of the STORM processing chain to very high resolution multispectral images. The development concentrated on the sub-module for automatic detection of GCPs. The initially implemented two-step algorithm, which worked only with rasterized vector roads and delivered GCPs with sub-pixel accuracy for the RapidEye images, was improved with the introduction of a third step: super-fine positioning of each GCP based on a reference raster chip. The added step exploits the high spatial resolution of the reference raster to improve the final matching results and to achieve pixel accuracy also on very high resolution optical satellite data.
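
    The super-fine positioning step lends itself to a compact illustration: normalized cross-correlation between the image patch around an approximate GCP and a reference raster chip. This is a generic sketch of chip-based matching, not SPACE-SI's implementation; the function and variable names are invented.

        import numpy as np
        from skimage.feature import match_template

        def refine_gcp(image_patch, reference_chip):
            # Normalized cross-correlation of the chip over the patch.
            ncc = match_template(image_patch, reference_chip)
            peak = np.unravel_index(np.argmax(ncc), ncc.shape)
            # The chip centre at the correlation peak gives the refined
            # GCP position in patch coordinates.
            row = peak[0] + reference_chip.shape[0] // 2
            col = peak[1] + reference_chip.shape[1] // 2
            return row, col, ncc[peak]   # position plus match confidence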

  1. A fully-automatic fast segmentation of the sub-basal layer nerves in corneal images.

    PubMed

    Guimarães, Pedro; Wigdahl, Jeff; Poletti, Enea; Ruggeri, Alfredo

    2014-01-01

    Corneal nerve changes have been linked to damage caused by surgical interventions or prolonged contact lens wear. Furthermore, nerve tortuosity has been shown to correlate with the severity of diabetic neuropathy. For these reasons, there has been increasing interest in the analysis of these structures. In this work we propose a novel, robust, and fast fully automatic algorithm capable of tracing the sub-basal plexus nerves in human corneal confocal images. We resort to log-Gabor filters and support vector machines to trace the corneal nerves. The proposed algorithm traced most of the corneal nerves correctly (sensitivity of 0.88 ± 0.06 and false discovery rate of 0.08 ± 0.06), a performance comparable to a human grader. We believe that the achieved processing time (0.661 ± 0.07 s) and tracing quality are major advantages for daily clinical practice.
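
    As an illustration of the filtering stage, the sketch below builds a radial log-Gabor transfer function in the frequency domain (following Kovesi's widely used formulation) and applies it to an image. The centre frequency and bandwidth values are placeholders, and the paper's actual filter bank and SVM stage are not reproduced.

        import numpy as np

        def log_gabor(shape, f0=0.1, sigma_ratio=0.55):
            """Radial log-Gabor transfer function; f0 is the centre
            frequency in cycles/pixel, sigma_ratio sets the bandwidth."""
            fy = np.fft.fftfreq(shape[0])[:, None]
            fx = np.fft.fftfreq(shape[1])[None, :]
            radius = np.hypot(fx, fy)
            radius[0, 0] = 1.0                    # avoid log(0) at DC
            lg = np.exp(-np.log(radius / f0) ** 2
                        / (2 * np.log(sigma_ratio) ** 2))
            lg[0, 0] = 0.0                        # zero DC response
            return lg

        def nerve_response(image, f0=0.1):
            # Filter in the Fourier domain; the magnitude highlights
            # thin line-like structures such as sub-basal nerves.
            spectrum = np.fft.fft2(image)
            return np.abs(np.fft.ifft2(spectrum * log_gabor(image.shape, f0)))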

  2. Feasibility Study on Fully Automatic High Quality Translation: Volume II. Final Technical Report.

    ERIC Educational Resources Information Center

    Lehmann, Winifred P.; Stachowitz, Rolf

    This second volume of a two-volume report on fully automatic high quality translation (FAHQT) contains relevant papers contributed by specialists on the topic of machine translation. The papers presented here cover such topics as syntactical analysis in transformational grammar and in machine translation, lexical features in translation and…

  3. A novel automatic segmentation workflow of axial breast DCE-MRI

    NASA Astrophysics Data System (ADS)

    Besbes, Feten; Gargouri, Norhene; Damak, Alima; Sellami, Dorra

    2018-04-01

    In this paper we propose a novel, fully automatic breast tissue segmentation process that is independent of expert calibration and contrast. The proposed algorithm is composed of two major steps. The first step is the detection of the breast boundaries, based on image content analysis and the Moore-Neighbour tracing algorithm. As part of this step, Otsu thresholding and a neighbourhood algorithm are applied, and the area external to the breast is removed to obtain an approximate breast region. The second step is the delineation of the chest wall, modeled as the lowest-cost path linking three key points located automatically on the breast: the left and right boundary points, and the middle upper point placed at the sternum region using a statistical method. The minimum-cost path search problem is solved with Dijkstra's algorithm. Evaluation results reveal the robustness of our process in the face of different breast densities, complex forms and challenging cases. The mean overlap between manual segmentation and automatic segmentation with our method is 96.5%. A comparative study shows that our proposed process is competitive and faster than existing methods; the segmentation of 120 slices is achieved in 20.57 ± 5.2 s.
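
    The chest-wall delineation reduces to a classic shortest-path problem, which the sketch below solves with Dijkstra's algorithm over a per-pixel cost image (8-connected). The cost function and key-point selection used by the authors are not reproduced; this only shows the search itself.

        import heapq
        import numpy as np

        def dijkstra_path(cost, start, goal):
            """Lowest-cost 8-connected path between two pixels of a cost image."""
            rows, cols = cost.shape
            dist = np.full(cost.shape, np.inf)
            dist[start] = cost[start]
            parent = {}
            heap = [(cost[start], start)]
            while heap:
                d, (r, c) = heapq.heappop(heap)
                if (r, c) == goal:
                    break
                if d > dist[r, c]:
                    continue            # stale queue entry
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < rows and 0 <= nc < cols:
                            nd = d + cost[nr, nc]
                            if nd < dist[nr, nc]:
                                dist[nr, nc] = nd
                                parent[(nr, nc)] = (r, c)
                                heapq.heappush(heap, (nd, (nr, nc)))
            path, node = [goal], goal   # walk back from goal to start
            while node != start:
                node = parent[node]
                path.append(node)
            return path[::-1]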

  4. Fully automated, real-time 3D ultrasound segmentation to estimate first trimester placental volume using deep learning.

    PubMed

    Looney, Pádraig; Stevenson, Gordon N; Nicolaides, Kypros H; Plasencia, Walter; Molloholli, Malid; Natsis, Stavros; Collins, Sally L

    2018-06-07

    We present a new technique to fully automate the segmentation of an organ from 3D ultrasound (3D-US) volumes, using the placenta as the target organ. Image analysis tools to estimate organ volume do exist but are too time-consuming and operator-dependent. Fully automating the segmentation process would potentially allow the use of placental volume to screen for increased risk of pregnancy complications. The placenta was segmented from 2,393 first trimester 3D-US volumes using a semiautomated technique, quality-controlled by three operators to produce the "ground-truth" data set. A fully convolutional neural network (OxNNet) was trained using this ground-truth data set to automatically segment the placenta. OxNNet delivered state-of-the-art automatic segmentation, and the effect of training set size on its performance demonstrated the need for large data sets. The clinical utility of placental volume was tested by looking at predictions of small-for-gestational-age (SGA) babies at term. The receiver-operating characteristic curves demonstrated almost identical results between OxNNet and the ground truth. Our results demonstrated good similarity to the ground truth and almost identical clinical results for the prediction of SGA.
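
    For readers unfamiliar with fully convolutional networks, the toy PyTorch model below shows the encode-decode pattern that produces a per-pixel segmentation map. It is only a schematic stand-in: OxNNet itself operates on 3D volumes and is far larger, and none of these layer sizes come from the paper.

        import torch
        import torch.nn as nn

        class TinyFCN(nn.Module):
            """Toy fully convolutional network for binary segmentation (2D)."""
            def __init__(self):
                super().__init__()
                self.encode = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),                 # halve the resolution
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
                self.decode = nn.Sequential(
                    nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                    nn.Conv2d(16, 1, 1))             # per-pixel logit

            def forward(self, x):
                return self.decode(self.encode(x))

        model = TinyFCN()
        logits = model(torch.randn(1, 1, 128, 128))   # -> (1, 1, 128, 128)
        loss = nn.BCEWithLogitsLoss()(logits, torch.zeros(1, 1, 128, 128))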

  5. Automatic and semi-automatic approaches for arteriolar-to-venular computation in retinal photographs

    NASA Astrophysics Data System (ADS)

    Mendonça, Ana Maria; Remeseiro, Beatriz; Dashtbozorg, Behdad; Campilho, Aurélio

    2017-03-01

    The Arteriolar-to-Venular Ratio (AVR) is a popular dimensionless measure which allows the assessment of patients' condition for the early diagnosis of different diseases, including hypertension and diabetic retinopathy. This paper presents two new approaches for AVR computation in retinal photographs which include a sequence of automated processing steps: vessel segmentation, caliber measurement, optic disc segmentation, artery/vein classification, region of interest delineation, and AVR calculation. Both approaches have been tested on the INSPIRE-AVR dataset and compared with a ground truth provided by two medical specialists. The obtained results demonstrate the reliability of the fully automatic approach, which provides AVR ratios very similar to those of at least one of the observers. Furthermore, the semi-automatic approach, which includes manual modification of the artery/vein classification if needed, makes it possible to reduce the error to a level below the human error.

  6. Automatic inference of multicellular regulatory networks using informative priors.

    PubMed

    Sun, Xiaoyun; Hong, Pengyu

    2009-01-01

    To fully understand the mechanisms governing animal development, computational models and algorithms are needed to enable quantitative studies of the underlying regulatory networks. We developed a mathematical model based on dynamic Bayesian networks to model multicellular regulatory networks that govern cell differentiation processes. A machine-learning method was developed to automatically infer such a model from heterogeneous data. We show that the model inference procedure can be greatly improved by incorporating interaction data across species. The proposed approach was applied to C. elegans vulval induction to reconstruct a model capable of simulating C. elegans vulval induction under 73 different genetic conditions.

  7. Feasibility Study on Fully Automatic High Quality Translation: Volume I. Final Technical Report.

    ERIC Educational Resources Information Center

    Lehmann, Winifred P.; Stachowitz, Rolf

    The object of this theoretical inquiry is to examine the controversial issue of fully automatic high quality translation (FAHQT) in the light of past and projected advances in linguistic theory and hardware/software capability. This first volume of a two-volume report discusses the requirements of translation and aspects of human and machine…

  8. Managing computer-controlled operations

    NASA Technical Reports Server (NTRS)

    Plowden, J. B.

    1985-01-01

    A detailed discussion of Launch Processing System Ground Software Production is presented to establish the interrelationships of firing room resource utilization, configuration control, system build operations, and Shuttle data bank management. The production of a test configuration identifier is traced from requirement generation to program development. The challenge of the operational era is to implement fully automated utilities to interface with a resident system build requirements document to eliminate all manual intervention in the system build operations. Automatic update/processing of Shuttle data tapes will enhance operations during multi-flow processing.

  9. The ACODEA Framework: Developing Segmentation and Classification Schemes for Fully Automatic Analysis of Online Discussions

    ERIC Educational Resources Information Center

    Mu, Jin; Stegmann, Karsten; Mayfield, Elijah; Rose, Carolyn; Fischer, Frank

    2012-01-01

    Research related to online discussions frequently faces the problem of analyzing huge corpora. Natural Language Processing (NLP) technologies may allow automating this analysis. However, the state-of-the-art in machine learning and text mining approaches yields models that do not transfer well between corpora related to different topics. Also,…

  10. Fully automatic cervical vertebrae segmentation framework for X-ray images.

    PubMed

    Al Arif, S M Masudur Rahman; Knapp, Karen; Slabaugh, Greg

    2018-04-01

    The cervical spine is a highly flexible anatomy and therefore vulnerable to injuries. Unfortunately, a large number of injuries in lateral cervical X-ray images remain undiagnosed due to human errors. Computer-aided injury detection has the potential to reduce the risk of misdiagnosis. Towards building an automatic injury detection system, in this paper, we propose a deep learning-based fully automatic framework for segmentation of cervical vertebrae in X-ray images. The framework first localizes the spinal region in the image using a deep fully convolutional neural network. Then vertebra centers are localized using a novel deep probabilistic spatial regression network. Finally, a novel shape-aware deep segmentation network is used to segment the vertebrae in the image. The framework can take an X-ray image and produce a vertebrae segmentation result without any manual intervention. Each block of the fully automatic framework has been trained on a set of 124 X-ray images and tested on another 172 images, all collected from real-life hospital emergency rooms. A Dice similarity coefficient of 0.84 and a shape error of 1.69 mm have been achieved. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. An automatic quantification system for MS lesions with integrated DICOM structured reporting (DICOM-SR) for implementation within a clinical environment

    NASA Astrophysics Data System (ADS)

    Jacobs, Colin; Ma, Kevin; Moin, Paymann; Liu, Brent

    2010-03-01

    Multiple Sclerosis (MS) is a common neurological disease affecting the central nervous system characterized by pathologic changes including demyelination and axonal injury. MR imaging has become the most important tool to evaluate the disease progression of MS, which is characterized by the occurrence of white matter lesions. Currently, radiologists evaluate and assess the multiple sclerosis lesions manually by estimating the lesion volume and number of lesions. This process is extremely time-consuming and sensitive to intra- and inter-observer variability. Therefore, there is a need for automatic segmentation of the MS lesions followed by lesion quantification. We have developed a fully automatic segmentation algorithm to identify the MS lesions. The segmentation algorithm is accelerated by parallel computing on Graphics Processing Units (GPUs) for practical implementation in a clinical environment. Subsequently, quantification of the lesions is performed. The quantification results, which include lesion volume and number of lesions, are stored in a structured report together with the lesion locations in the brain to establish a standardized representation of the patient's disease progression. This structured report, developed in collaboration with radiologists to facilitate outcome analysis and treatment assessment, is standardized based on DICOM-SR, so the results can be distributed to other DICOM-compliant clinical systems that support DICOM-SR, such as PACS. The implementation of a fully automatic segmentation and quantification system, together with a method for storing, distributing, and visualizing key imaging and informatics data in DICOM-SR, improves the clinical workflow of radiologists and provides 3-D insight into the distribution of lesions in the brain.

  12. An Automatic Method for Geometric Segmentation of Masonry Arch Bridges for Structural Engineering Purposes

    NASA Astrophysics Data System (ADS)

    Riveiro, B.; DeJong, M.; Conde, B.

    2016-06-01

    Despite the tremendous advantages of laser scanning technology for the geometric characterization of built constructions, there are important limitations preventing more widespread implementation in the structural engineering domain. Even though the technology provides extensive and accurate information to perform structural assessment and health monitoring, many people are resistant to the technology due to the processing times involved. Thus, new methods are required that can automatically process LiDAR data and subsequently provide an automatic and organized interpretation. This paper presents a new method for fully automated point cloud segmentation of masonry arch bridges. The method efficiently creates segmented, spatially related and organized point clouds, each of which contains the relevant geometric data for a particular component (pier, arch, spandrel wall, etc.) of the structure. The segmentation procedure comprises a heuristic approach for the separation of the different vertical walls, after which image processing tools adapted to voxel structures allow the efficient segmentation of the main structural elements of the bridge. The proposed methodology provides the essential processed data required for structural assessment of masonry arch bridges based on geometric anomalies. The method is validated using a representative sample of masonry arch bridges in Spain.

  13. Assessing the impact of graphical quality on automatic text recognition in digital maps

    NASA Astrophysics Data System (ADS)

    Chiang, Yao-Yi; Leyk, Stefan; Honarvar Nazari, Narges; Moghaddam, Sima; Tan, Tian Xiang

    2016-08-01

    Converting geographic features (e.g., place names) in map images into a vector format is the first step for incorporating cartographic information into a geographic information system (GIS). With the advancement in computational power and algorithm design, map processing systems have been considerably improved over the last decade. However, the fundamental map processing techniques such as color image segmentation, (map) layer separation, and object recognition are sensitive to minor variations in graphical properties of the input image (e.g., scanning resolution). As a result, most map processing results would not meet user expectations if the user does not "properly" scan the map of interest, pre-process the map image (e.g., using compression or not), and train the processing system, accordingly. These issues could slow down the further advancement of map processing techniques as such unsuccessful attempts create a discouraged user community, and less sophisticated tools would be perceived as more viable solutions. Thus, it is important to understand what kinds of maps are suitable for automatic map processing and what types of results and process-related errors can be expected. In this paper, we shed light on these questions by using a typical map processing task, text recognition, to discuss a number of map instances that vary in suitability for automatic processing. We also present an extensive experiment on a diverse set of scanned historical maps to provide measures of baseline performance of a standard text recognition tool under varying map conditions (graphical quality) and text representations (that can vary even within the same map sheet). Our experimental results help the user understand what to expect when a fully or semi-automatic map processing system is used to process a scanned map with certain (varying) graphical properties and complexities in map content.

  14. Automatic Implementation of TTEthernet-Based Time-Triggered Avionics Applications

    NASA Astrophysics Data System (ADS)

    Gorcitz, Raul Adrian; Carle, Thomas; Lesens, David; Monchaux, David; Potop-Butucaru, Dumitru; Sorel, Yves

    2015-09-01

    The design of safety-critical embedded systems such as those used in avionics still involves largely manual phases. But in avionics the definition of standard interfaces embodied in standards such as ARINC 653 or TTEthernet should allow the definition of fully automatic code generation flows that reduce the costs while improving the quality of the generated code, much like compilers have done when replacing manual assembly coding. In this paper, we briefly present such a fully automatic implementation tool, called Lopht, for ARINC653-based time-triggered systems, and then explain how it is currently extended to include support for TTEthernet networks.

  15. Online fully automated three-dimensional surface reconstruction of unknown objects

    NASA Astrophysics Data System (ADS)

    Khalfaoui, Souhaiel; Aigueperse, Antoine; Fougerolle, Yohan; Seulin, Ralph; Fofi, David

    2015-04-01

    This paper presents a novel scheme for automatic and intelligent 3D digitization using robotic cells. The advantage of our procedure is that it is generic, since it is not tied to a specific scanning technology. Moreover, it does not depend on the methods used to perform the tasks associated with each elementary process. The comparison of results between manual and automatic scanning of complex objects shows that our digitization strategy is very efficient and faster than trained experts. The 3D models of the different objects are obtained with a strongly reduced number of acquisitions while moving the ranging device efficiently.

  16. Infrared-enhanced TV for fire detection

    NASA Technical Reports Server (NTRS)

    Hall, J. R.

    1978-01-01

    Closed-circuit television is superior to conventional smoke or heat sensors for detecting fires in large open spaces. Single TV camera scans entire area, whereas many conventional sensors and maze of interconnecting wiring might be required to get same coverage. Camera is monitored by person who would trip alarm if fire were detected, or electronic circuitry could process camera signal for fully-automatic alarm system.

  17. The Laboratory for Individualized Breast Radiodensity Assessment (LIBRA) | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    LIBRA is a fully-automatic breast density estimation software solution based on a published algorithm that works on either raw (i.e., “FOR PROCESSING”) or vendor post-processed (i.e., “FOR PRESENTATION”) digital mammography images. LIBRA has been applied to over 30,000 screening exams and is being increasingly utilized in larger studies.

  18. Natural Language Processing: A Tutorial. Revision

    DTIC Science & Technology

    1990-01-01

    English in word-for-word language translations. An oft-repeated (although fictional) anecdote illustrates the ... English by a language translation program, became: "The vodka is strong but the steak is rotten." The point made is that vast amounts of knowledge ... are required for effective language translations. The initial goal for language translation was "fully-automatic high-quality translation" (FAHQT).

  19. Agile Multi-Scale Decompositions for Automatic Image Registration

    NASA Technical Reports Server (NTRS)

    Murphy, James M.; Leija, Omar Navarro; Le Moigne, Jacqueline

    2016-01-01

    In recent works, the first and third authors developed an automatic image registration algorithm based on a multiscale hybrid image decomposition with anisotropic shearlets and isotropic wavelets. This prototype showed strong performance, improving robustness over registration with wavelets alone. However, this method imposed a strict hierarchy on the order in which shearlet and wavelet features were used in the registration process, and also involved an unintegrated mixture of MATLAB and C code. In this paper, we introduce a more agile model for generating features, in which a flexible and user-guided mix of shearlet and wavelet features are computed. Compared to the previous prototype, this method introduces a flexibility to the order in which shearlet and wavelet features are used in the registration process. Moreover, the present algorithm is now fully coded in C, making it more efficient and portable than the MATLAB and C prototype. We demonstrate the versatility and computational efficiency of this approach by performing registration experiments with the fully-integrated C algorithm. In particular, meaningful timing studies can now be performed, to give a concrete analysis of the computational costs of the flexible feature extraction. Examples of synthetically warped and real multi-modal images are analyzed.
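
    As a sketch of the isotropic half of such a hybrid feature extractor, the code below selects strong wavelet detail coefficients at each scale as candidate registration features (using PyWavelets rather than the authors' C implementation; the shearlet half and the matching stage are omitted).

        import numpy as np
        import pywt

        def wavelet_features(image, levels=3, keep_fraction=0.05):
            """Coordinates of the strongest detail coefficients per scale."""
            coeffs = pywt.wavedec2(image, "db2", level=levels)
            features = []
            # coeffs[0] is the coarse approximation; detail tuples follow,
            # coarsest scale first.
            for scale, (ch, cv, cd) in enumerate(coeffs[1:], start=1):
                magnitude = np.sqrt(ch**2 + cv**2 + cd**2)
                cutoff = np.quantile(magnitude, 1.0 - keep_fraction)
                features.append((scale, np.argwhere(magnitude >= cutoff)))
            return features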

  20. ARES v2: new features and improved performance

    NASA Astrophysics Data System (ADS)

    Sousa, S. G.; Santos, N. C.; Adibekyan, V.; Delgado-Mena, E.; Israelian, G.

    2015-05-01

    Aims: We present a new upgraded version of ARES. The new version includes a series of interesting new features such as automatic radial velocity correction, a fully automatic continuum determination, and an estimation of the errors for the equivalent widths. Methods: The automatic correction of the radial velocity is achieved with a simple cross-correlation function, and the automatic continuum determination, as well as the estimation of the errors, relies on a new approach to evaluating the spectral noise at the continuum level. Results: ARES v2 is totally compatible with its predecessor. We show that the fully automatic continuum determination is consistent with the previous methods applied for this task. It also presents a significant improvement in performance thanks to the implementation of parallel computation using the OpenMP library. Automatic Routine for line Equivalent widths in stellar Spectra (ARES) webpage: http://www.astro.up.pt/~sousasag/ares/ Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 075.D-0800(A).
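
    The radial-velocity step can be illustrated in a few lines: cross-correlate the observed spectrum against a rest-frame template and take the lag of the correlation peak. This is a simplified stand-in for the CCF used in ARES v2; converting the pixel lag to km/s with the spectral dispersion is left out.

        import numpy as np

        def rv_shift_pixels(spectrum, template):
            """Shift (in pixels) of `spectrum` relative to `template`."""
            s = spectrum - spectrum.mean()
            t = template - template.mean()
            ccf = np.correlate(s, t, mode="full")
            # Zero lag sits at index len(t) - 1 of the full correlation.
            return int(np.argmax(ccf)) - (len(t) - 1)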

  1. Automatic phase aberration compensation for digital holographic microscopy based on deep learning background detection.

    PubMed

    Nguyen, Thanh; Bui, Vy; Lam, Van; Raub, Christopher B; Chang, Lin-Ching; Nehmetallah, George

    2017-06-26

    We propose a fully automatic technique to obtain aberration-free quantitative phase imaging in digital holographic microscopy (DHM) based on deep learning. Traditional DHM solves the phase aberration compensation problem by manually detecting the background for quantitative measurement. This is a drawback for real-time implementation and for dynamic processes such as cell migration phenomena. A recent automatic aberration compensation approach using principal component analysis (PCA) in DHM avoids human intervention regardless of the cells' motion; however, it corrects spherical/elliptical aberration only and disregards higher order aberrations. Traditional image segmentation techniques can be employed to spatially detect cell locations, and ideally, automatic image segmentation techniques make real-time measurement possible. However, existing automatic unsupervised segmentation techniques perform poorly when applied to DHM phase images because of aberrations and speckle noise. In this paper, we propose a novel method that combines a supervised deep learning technique based on a convolutional neural network (CNN) with Zernike polynomial fitting (ZPF). The deep learning CNN performs automatic background region detection, which allows ZPF to compute the self-conjugated phase that compensates for most aberrations.
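
    The ZPF step amounts to a linear least-squares fit of a phase surface over the CNN-detected background pixels, followed by subtraction over the whole field. The sketch below uses only the first few Zernike-like terms in Cartesian form; the paper fits higher orders as well.

        import numpy as np

        def zernike_basis(xx, yy):
            """Piston, tilts, defocus and astigmatisms on the unit square."""
            r2 = xx**2 + yy**2
            return np.stack([np.ones_like(xx), xx, yy,
                             2 * r2 - 1, xx**2 - yy**2, 2 * xx * yy], axis=-1)

        def compensate(phase, background_mask):
            h, w = phase.shape
            yy, xx = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
            basis = zernike_basis(xx, yy)              # (h, w, 6)
            # Fit the aberration on background pixels only ...
            coeffs, *_ = np.linalg.lstsq(basis[background_mask],
                                         phase[background_mask], rcond=None)
            # ... then subtract the fitted surface everywhere.
            return phase - basis @ coeffs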

  2. A hybrid 3D region growing and 4D curvature analysis-based automatic abdominal blood vessel segmentation through contrast enhanced CT

    NASA Astrophysics Data System (ADS)

    Maklad, Ahmed S.; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Shimada, Mitsuo; Iinuma, Gen

    2017-03-01

    In abdominal disease diagnosis and the planning of various abdominal surgeries, segmentation of abdominal blood vessels (ABVs) is an essential task. Automatic segmentation enables fast and accurate processing of ABVs. We propose a fully automatic approach for segmenting ABVs in contrast-enhanced CT images using a hybrid of 3D region growing and 4D curvature analysis. The proposed method comprises three stages. First, candidates for bone, kidneys, ABVs and heart are segmented by an auto-adapted threshold. Second, bone is auto-segmented and classified into spine, ribs and pelvis. Third, ABVs are automatically segmented in two sub-steps: (1) kidneys and the abdominal part of the heart are segmented, (2) ABVs are segmented by a hybrid approach that integrates 3D region growing and 4D curvature analysis. Results are compared with those of two conventional methods and show that the proposed method is very promising in segmenting and classifying bone and segmenting whole ABVs, and may have utility in clinical use.
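
    The first ingredient of the hybrid, intensity-based 3D region growing, is easy to make concrete; the sketch below grows a 6-connected region from a seed between two intensity bounds. The auto-adapted threshold and the 4D curvature analysis of the paper are not reproduced.

        import numpy as np
        from collections import deque

        def region_grow_3d(volume, seed, lo, hi):
            """6-connected flood fill between intensity bounds [lo, hi]."""
            grown = np.zeros(volume.shape, dtype=bool)
            grown[seed] = True
            queue = deque([seed])
            steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                     (0, -1, 0), (0, 0, 1), (0, 0, -1)]
            while queue:
                z, y, x = queue.popleft()
                for dz, dy, dx in steps:
                    n = (z + dz, y + dy, x + dx)
                    if (all(0 <= n[i] < volume.shape[i] for i in range(3))
                            and not grown[n] and lo <= volume[n] <= hi):
                        grown[n] = True
                        queue.append(n)
            return grown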

  3. An automatic multi-atlas prostate segmentation in MRI using a multiscale representation and a label fusion strategy

    NASA Astrophysics Data System (ADS)

    Álvarez, Charlens; Martínez, Fabio; Romero, Eduardo

    2015-01-01

    Pelvic magnetic resonance images (MRI) are used in prostate cancer radiotherapy (RT) as part of radiation planning. Modern protocols require a manual delineation, a tedious and variable activity that may take about 20 minutes per patient, even for trained experts. That considerable time is an important workflow burden in most radiological services. Automatic or semi-automatic methods might improve efficiency by decreasing the measuring times while preserving the required accuracy. This work presents a fully automatic atlas-based segmentation strategy that selects the templates most similar to a new MRI using a robust multi-scale SURF analysis. A new segmentation is then achieved by a linear combination of the selected templates, which are previously non-rigidly registered to the new image. The proposed method shows reliable segmentations, obtaining an average Dice coefficient of 79% when compared with the expert manual segmentation, under a leave-one-out scheme on the training database.
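
    The fusion step, a linear combination of the registered atlas labels, can be sketched as a weighted average thresholded at 0.5. The weights would come from the SURF-based similarity ranking; equal weights are assumed here for illustration.

        import numpy as np

        def fuse_labels(registered_labels, weights=None):
            """Fuse binary label maps of registered atlases into one mask."""
            stack = np.asarray(registered_labels, dtype=float)  # (n, H, W)
            if weights is None:
                weights = np.full(len(stack), 1.0 / len(stack))
            consensus = np.tensordot(weights, stack, axes=1)    # weighted mean
            return consensus >= 0.5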

  4. AIRSAR Web-Based Data Processing

    NASA Technical Reports Server (NTRS)

    Chu, Anhua; Van Zyl, Jakob; Kim, Yunjin; Hensley, Scott; Lou, Yunling; Madsen, Soren; Chapman, Bruce; Imel, David; Durden, Stephen; Tung, Wayne

    2007-01-01

    The AIRSAR automated, Web-based data processing and distribution system is an integrated, end-to-end synthetic aperture radar (SAR) processing system. Designed to function under limited resources and rigorous demands, AIRSAR eliminates operational errors and provides for paperless archiving. Also, it provides a yearly tune-up of the processor on flight missions, as well as quality assurance with new radar modes and anomalous data compensation. The software fully integrates a Web-based SAR data-user request subsystem, a data processing system to automatically generate co-registered multi-frequency images from both polarimetric and interferometric data collection modes in 80/40/20 MHz bandwidth, an automated verification quality assurance subsystem, and an automatic data distribution system for use in the remote-sensor community. Features include Survey Automation Processing in which the software can automatically generate a quick-look image from an entire 90-GB SAR raw data 32-MB/s tape overnight without operator intervention. Also, the software allows product ordering and distribution via a Web-based user request system. To make AIRSAR more user friendly, it has been designed to let users search by entering the desired mission flight line (Missions Searching), or to search for any mission flight line by entering the desired latitude and longitude (Map Searching). For precision image automation processing, the software generates the products according to each data processing request stored in the database via a Queue management system. Users are able to have automatic generation of coregistered multi-frequency images as the software generates polarimetric and/or interferometric SAR data processing in ground and/or slant projection according to user processing requests for one of the 12 radar modes.

  5. Active optics - The NTT and the future

    NASA Astrophysics Data System (ADS)

    Wilson, R. N.; Franza, F.; Giordano, P.; Noethe, L.; Tarenghi, M.

    1988-09-01

    An account is given of the essential design features and advantages of ESO's NTT active optics system, in which the optical correction process (illustrated here with histograms) can be performed at will, on-line, so that the intrinsic quality of the telescope can be fully realized. This technology allows the relaxation of low spatial frequency (long-wave) manufacturing tolerances and accomplishes automatic maintenance with respect to errors due to maladjustment of the optics. Linearity, convergence, and orthogonality laws are used by the optical correction algorithm.

  6. Tashkeela: Novel corpus of Arabic vocalized texts, data for auto-diacritization systems.

    PubMed

    Zerrouki, Taha; Balla, Amar

    2017-04-01

    Arabic diacritics are often omitted in Arabic scripts. This is a handicap for new learners reading Arabic, for text-to-speech conversion systems, and for reading and semantic analysis of Arabic texts. Automatic diacritization systems are the best solution to handle this issue, but such automation needs resources such as diacritized texts to train and evaluate those systems. In this paper, we describe our corpus of Arabic diacritized texts, called Tashkeela. It can be used as a linguistic resource for natural language processing tasks such as automatic diacritization systems, disambiguation mechanisms, and feature and data extraction. The corpus is freely available; it contains 75 million fully vocalized words, drawn mainly from 97 books of classical and modern Arabic. The corpus was collected from manually vocalized texts through a web crawling process.

  7. Is place-value processing in four-digit numbers fully automatic? Yes, but not always.

    PubMed

    García-Orza, Javier; Estudillo, Alejandro J; Calleja, Marina; Rodríguez, José Miguel

    2017-12-01

    Knowing the place-value of digits in multi-digit numbers allows us to identify, understand and distinguish between numbers with the same digits (e.g., 1492 vs. 1942). Research using the size congruency task has shown that place-value in a string of three zeros and a non-zero digit (e.g., 0090) is processed automatically. In the present study, we explored whether place-value is also automatically activated when more complex numbers (e.g., 2795) are presented. Twenty-five participants were exposed to pairs of four-digit numbers that differed in the position of some digits and in their physical size. Participants had to decide which of the two numbers was presented in a larger font size. In the congruent condition, the number shown in a bigger font size was also numerically larger. In the incongruent condition, the number shown in a smaller font size was numerically larger. Two types of numbers were employed: numbers composed of three zeros and one non-zero digit (e.g., 0040-0400) and numbers composed of four non-zero digits (e.g., 2795-2759). Results showed larger congruency effects for more distant pairs with both types of numbers. Interestingly, this effect was considerably stronger in the strings composed of zeros. These results indicate that place-value coding is partially automatic, as it depends on the perceptual and numerical properties of the numbers to be processed.

  8. Automatic differential analysis of NMR experiments in complex samples.

    PubMed

    Margueritte, Laure; Markov, Petar; Chiron, Lionel; Starck, Jean-Philippe; Vonthron-Sénécheau, Catherine; Bourjot, Mélanie; Delsuc, Marc-André

    2018-06-01

    Liquid state nuclear magnetic resonance (NMR) is a powerful tool for the analysis of complex mixtures of unknown molecules. This capacity has been used in many analytical approaches: metabolomics, identification of active compounds in natural extracts, and characterization of species; such studies require the acquisition of many diverse NMR measurements on series of samples. Although acquisition can easily be performed automatically, the number of NMR experiments involved in these studies increases very rapidly, and this data avalanche requires automatic processing and analysis. We present here a program that allows the autonomous, unsupervised processing of a large corpus of 1D, 2D, and diffusion-ordered spectroscopy experiments from a series of samples acquired in different conditions. The program provides all the signal processing steps, as well as peak-picking and bucketing of 1D and 2D spectra; the program and its components are fully available. In an experiment mimicking the search for a bioactive species in a natural extract, we use it for the automatic detection of small amounts of artemisinin added to a series of plant extracts and for the generation of the spectral fingerprint of this molecule. This program, called Plasmodesma, is a novel tool that should be useful for deciphering complex mixtures, particularly in the discovery of biologically active natural products from plant extracts, but also in drug discovery or metabolomics studies. Copyright © 2017 John Wiley & Sons, Ltd.

  9. fMRat: an extension of SPM for a fully automatic analysis of rodent brain functional magnetic resonance series.

    PubMed

    Chavarrías, Cristina; García-Vázquez, Verónica; Alemán-Gómez, Yasser; Montesinos, Paula; Pascau, Javier; Desco, Manuel

    2016-05-01

    The purpose of this study was to develop a multi-platform automatic software tool for full processing of rodent fMRI studies. Existing tools require the use of several different plug-ins, significant user interaction and/or programming skills. Based on a user-friendly interface, the tool provides statistical parametric brain maps (t and Z) and percentage of signal change for user-provided regions of interest. The tool is coded in MATLAB (MathWorks(®)) and implemented as a plug-in for SPM (Statistical Parametric Mapping, the Wellcome Trust Centre for Neuroimaging). The automatic pipeline loads default parameters that are appropriate for preclinical studies and processes multiple subjects in batch mode (from images in either Nifti or raw Bruker format). In advanced mode, all processing steps can be selected or deselected and executed independently. Processing parameters and workflow were optimized for rat studies and assessed using 460 male-rat fMRI series, on which we tested five smoothing kernel sizes and three different hemodynamic models. A smoothing kernel of FWHM = 1.2 mm (four times the voxel size) yielded the highest t values at the primary somatosensory cortex, and a boxcar response function provided the lowest residual variance after fitting. fMRat offers the features of a thorough SPM-based analysis combined with the functionality of several SPM extensions in a single automatic pipeline with a user-friendly interface. The code and sample images can be downloaded from https://github.com/HGGM-LIM/fmrat .

  10. Phase-amplitude imaging: its application to fully automated analysis of magnetic field measurements in laser-produced plasmas.

    PubMed

    Kalal, M; Nugent, K A; Luther-Davies, B

    1987-05-01

    An interferometric technique which enables simultaneous phase and amplitude imaging of optically transparent objects is discussed with respect to its application for the measurement of spontaneous toroidal magnetic fields generated in laser-produced plasmas. It is shown that this technique can replace the normal independent pair of optical systems (interferometry and shadowgraphy) by one system and use computer image processing to recover both the plasma density and magnetic field information with high accuracy. A fully automatic algorithm for the numerical analysis of the data has been developed and its performance demonstrated for the case of simulated as well as experimental data.

  11. Phase-amplitude imaging: its application to fully automated analysis of magnetic field measurements in laser-produced plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalal, M.; Nugent, K.A.; Luther-Davies, B.

    1987-05-01

    An interferometric technique which enables simultaneous phase and amplitude imaging of optically transparent objects is discussed with respect to its application for the measurement of spontaneous toroidal magnetic fields generated in laser-produced plasmas. It is shown that this technique can replace the normal independent pair of optical systems (interferometry and shadowgraphy) by one system and use computer image processing to recover both the plasma density and magnetic field information with high accuracy. A fully automatic algorithm for the numerical analysis of the data has been developed and its performance demonstrated for the case of simulated as well as experimental data.

  12. Fully automatic segmentation of femurs with medullary canal definition in high and in low resolution CT scans.

    PubMed

    Almeida, Diogo F; Ruben, Rui B; Folgado, João; Fernandes, Paulo R; Audenaert, Emmanuel; Verhegghe, Benedict; De Beule, Matthieu

    2016-12-01

    Femur segmentation can be an important tool in orthopedic surgical planning. However, in order to overcome the need for an experienced user with extensive knowledge of the techniques, segmentation should be fully automatic. In this paper a new fully automatic femur segmentation method for CT images is presented. The method can also define the medullary canal automatically and performs well even in low resolution CT scans. Fully automatic femoral segmentation was performed by adapting a template mesh of the femoral volume to the medical images. To achieve this, an adaptation of the active shape model (ASM) technique based on the statistical shape model (SSM) and local appearance model (LAM) of the femur, with a novel initialization method, was used to drive the template mesh deformation to fit the in-image femoral shape in a time-effective approach. With the proposed method a 98% convergence rate was achieved. For the high resolution CT image group the average error is less than 1 mm. For the low resolution image group the results are also accurate, and the average error is less than 1.5 mm. The proposed segmentation pipeline is accurate, robust and completely user-free. The method is robust to patient orientation, image artifacts and poorly defined edges. The results excelled even in CT images with a significant slice thickness, i.e., above 5 mm. Medullary canal segmentation increases the geometric information that can be used in orthopedic surgical planning or in finite element analysis. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.

  13. Fast automatic segmentation of anatomical structures in x-ray computed tomography images to improve fluorescence molecular tomography reconstruction.

    PubMed

    Freyer, Marcus; Ale, Angelique; Schulz, Ralf B; Zientkowska, Marta; Ntziachristos, Vasilis; Englmeier, Karl-Hans

    2010-01-01

    The recent development of hybrid imaging scanners that integrate fluorescence molecular tomography (FMT) and x-ray computed tomography (XCT) allows the utilization of x-ray information as image priors for improving optical tomography reconstruction. To fully capitalize on this capacity, we consider a framework for the automatic and fast detection of different anatomic structures in murine XCT images. To accurately differentiate between different structures such as bone, lung, and heart, a combination of image processing steps including thresholding, seed growing, and signal detection are found to offer optimal segmentation performance. The algorithm and its utilization in an inverse FMT scheme that uses priors is demonstrated on mouse images.

  14. Gap-free segmentation of vascular networks with automatic image processing pipeline.

    PubMed

    Hsu, Chih-Yang; Ghaffari, Mahsa; Alaraj, Ali; Flannery, Michael; Zhou, Xiaohong Joe; Linninger, Andreas

    2017-03-01

    Current image processing techniques capture large vessels reliably but often fail to preserve connectivity in bifurcations and small vessels. Imaging artifacts and noise can create gaps and discontinuities of intensity that hinder segmentation of vascular trees. However, topological analysis of vascular trees requires proper connectivity, without gaps, loops or dangling segments. Proper tree connectivity is also important for high quality rendering of surface meshes for scientific visualization or 3D printing. We present a fully automated vessel enhancement pipeline with automated parameter settings for the enhancement of tree-like structures from customary imaging sources, including 3D rotational angiography, magnetic resonance angiography, magnetic resonance venography, and computed tomography angiography. The output of the filter pipeline is a vessel-enhanced image which is ideal for generating anatomically consistent network representations of the cerebral angioarchitecture for further topological or statistical analysis. The filter pipeline combined with computational modeling can potentially improve computer-aided diagnosis of cerebrovascular diseases by delivering biometrics and anatomy of the vasculature, and it may serve as the first step in fully automatic epidemiological analysis of large clinical datasets. Such automatic analysis would enable rigorous statistical comparison of biometrics in subject-specific vascular trees. Robust and accurate image segmentation using a validated filter pipeline would also eliminate the operator dependency that has been observed in manual segmentation. Moreover, manual segmentation is time-prohibitive given that vascular trees have thousands of segments and bifurcations, so interactive segmentation consumes excessive human resources. Subject-specific trees are a first step toward patient-specific hemodynamic simulations for assessing treatment outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.
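
    A plausible core for such a pipeline is a multi-scale Hessian-based vesselness filter followed by a global threshold, as sketched below with scikit-image's Frangi filter. The published pipeline chains several more steps and derives its parameters automatically, so treat this only as an indication of the idea.

        from skimage.filters import frangi, threshold_otsu

        def enhance_vessels(image, sigmas=(1, 2, 3, 4)):
            """Vesselness map and a rough binary mask for bright vessels."""
            vesselness = frangi(image, sigmas=sigmas, black_ridges=False)
            mask = vesselness > threshold_otsu(vesselness)
            return vesselness, mask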

  15. Automatic neutron dosimetry system based on fluorescent nuclear track detector technology.

    PubMed

    Akselrod, M S; Fomenko, V V; Bartz, J A; Haslett, T L

    2014-10-01

    For the first time, the authors are describing an automatic fluorescent nuclear track detector (FNTD) reader for neutron dosimetry. FNTD is a luminescent integrating type of detector made of aluminium oxide crystals that does not require electronics or batteries during irradiation. Non-destructive optical readout of the detector is performed using a confocal laser scanning fluorescence imaging with near-diffraction limited resolution. The fully automatic table-top reader allows one to load up to 216 detectors on a tray, read their engraved IDs using a CCD camera and optical character recognition, scan and process simultaneously two types of images in fluorescent and reflected laser light contrast to eliminate false-positive tracks related to surface and volume crystal imperfections. The FNTD dosimetry system allows one to measure neutron doses from 0.1 mSv to 20 Sv and covers neutron energies from thermal to 20 MeV. The reader is characterised by a robust, compact optical design, fast data processing electronics and user-friendly software. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  16. Efficient content-based low-altitude images correlated network and strips reconstruction

    NASA Astrophysics Data System (ADS)

    He, Haiqing; You, Qi; Chen, Xiaoyong

    2017-01-01

    Manual intervention is widely used to reconstruct strips for further aerial triangulation in low-altitude photogrammetry, but it is clearly unsuitable for fully automatic photogrammetric data processing. In this paper, we explore a content-based approach to strip reconstruction that requires no manual intervention or external information. Feature descriptors of local spatial patterns are extracted by SIFT to construct a vocabulary tree, in which these features are encoded with the TF-IDF numerical statistical algorithm to generate a new representation for each low-altitude image. An image-correlation network is then reconstructed via similarity measures, image matching, and geometric graph theory. Finally, strips are reconstructed automatically by tracing straight lines and growing adjacent images gradually. Experimental results show that the proposed approach is highly effective in automatically rearranging strips of low-altitude images and can provide rough relative orientation for further aerial triangulation.
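
    The TF-IDF encoding over vocabulary-tree visual words is standard and worth making concrete. In the sketch below, each row of the input matrix counts how often each visual word occurs in one image; the output rows are L2-normalized signatures whose dot products give image similarity.

        import numpy as np

        def tfidf_signatures(word_counts):
            counts = np.asarray(word_counts, dtype=float)   # (n_images, n_words)
            # Term frequency: occurrence share of each word within an image.
            tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1.0)
            # Inverse document frequency: rare words weigh more.
            df = np.count_nonzero(counts, axis=0)
            idf = np.log(len(counts) / np.maximum(df, 1.0))
            weighted = tf * idf
            norms = np.linalg.norm(weighted, axis=1, keepdims=True)
            return weighted / np.maximum(norms, 1e-12)

        # similarity(i, j) = tfidf_signatures(C)[i] @ tfidf_signatures(C)[j]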

  17. Optimization and automation of quantitative NMR data extraction.

    PubMed

    Bernstein, Michael A; Sýkora, Stan; Peng, Chen; Barba, Agustín; Cobas, Carlos

    2013-06-18

    NMR is routinely used to quantitate chemical species. The necessary experimental procedures to acquire quantitative data are well-known, but relatively little attention has been applied to data processing and analysis. We describe here a robust expert system that can be used to automatically choose the best signals in a sample for overall concentration determination and determine analyte concentration using all accepted methods. The algorithm is based on the complete deconvolution of the spectrum which makes it tolerant of cases where signals are very close to one another and includes robust methods for the automatic classification of NMR resonances and molecule-to-spectrum multiplets assignments. With the functionality in place and optimized, it is then a relatively simple matter to apply the same workflow to data in a fully automatic way. The procedure is desirable for both its inherent performance and applicability to NMR data acquired for very large sample sets.

  18. An automatic rat brain extraction method based on a deformable surface model.

    PubMed

    Li, Jiehua; Liu, Xiaofeng; Zhuo, Jiachen; Gullapalli, Rao P; Zara, Jason M

    2013-08-15

    The extraction of the brain from the skull in medical images is a necessary first step before image registration or segmentation. While pre-clinical MR imaging studies on small animals, such as rats, are increasing, fully automatic image processing techniques specific to small animal studies remain lacking. In this paper, we present an automatic rat brain extraction method, the Rat Brain Deformable model method (RBD), which adapts the popular human brain extraction tool (BET) through the incorporation of information on the brain geometry and MR image characteristics of the rat brain. The robustness of the method was demonstrated on T2-weighted MR images of 64 rats and compared with other brain extraction methods (BET, PCNN, PCNN-3D). The results demonstrate that RBD reliably extracts the rat brain with high accuracy (>92% volume overlap) and is robust against signal inhomogeneity in the images. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Automatic systems and the low-level wind hazard

    NASA Technical Reports Server (NTRS)

    Schaeffer, Dwight R.

    1987-01-01

    Automatic flight control systems provide means for significantly enhancing survivability in severe wind hazards. The technology required to produce the necessary control algorithms is available and has been made technically feasible by the advent of digital flight control systems and accurate, low-noise sensors, especially strap-down inertial sensors. The application of this technology has not generally been enabled except for automatic landing systems, and even then the potential has not been fully exploited. Fully exploiting the potential of automatic systems for enhancing safety in wind hazards requires providing incentives, creating demand, inspiring competition, education, and eliminating prejudicial disincentives to overcome the economic penalties associated with the extensive and risky development and certification of these systems. If these changes come about at all, it will likely be through changes in the regulations provided by the certifying agencies.

  20. [The mediating role of anger in the relationship between automatic thoughts and physical aggression in adolescents].

    PubMed

    Yavuzer, Yasemin; Karataş, Zeynep

    2013-01-01

    This study aimed to examine the mediating role of anger in the relationship between automatic thoughts and physical aggression in adolescents. The study included 224 adolescents in the 9th grade of 3 different high schools in central Burdur during the 2011-2012 academic year. Participants completed the Aggression Questionnaire and Automatic Thoughts Scale in their classrooms during counseling sessions. Data were analyzed using simple and multiple linear regression analysis. There were positive correlations between the adolescents' automatic thoughts, physical aggression, and anger. According to the regression analysis, automatic thoughts effectively predicted the level of physical aggression (b = 0.233, P < 0.001) and anger (b = 0.325, P < 0.001). Analysis of the mediating role showed that anger fully mediated the relationship between automatic thoughts and physical aggression (Sobel z = 5.646, P < 0.001). Providing adolescents with anger management skills training is therefore very important for the prevention of physical aggression. Such training programs should include components related to developing an awareness of dysfunctional and anger-triggering automatic thoughts, and how to change them. As the study group included adolescents from Burdur, the findings can only be generalized to groups with similar characteristics.

  1. EARLINET Single Calculus Chain - overview on methodology and strategy

    NASA Astrophysics Data System (ADS)

    D'Amico, G.; Amodeo, A.; Baars, H.; Binietoglou, I.; Freudenthaler, V.; Mattis, I.; Wandinger, U.; Pappalardo, G.

    2015-11-01

    In this paper we describe the EARLINET Single Calculus Chain (SCC), a tool for the automatic analysis of lidar measurements. The development of this tool started in the framework of EARLINET-ASOS (European Aerosol Research Lidar Network - Advanced Sustainable Observation System); it was extended within ACTRIS (Aerosol, Clouds and Trace gases Research InfraStructure Network), and it is continuing within ACTRIS-2. The main idea was to develop a data processing chain that allows all EARLINET stations to retrieve, in a fully automatic way, the aerosol backscatter and extinction profiles starting from the raw data of the lidar systems they operate. The calculus subsystem of the SCC is composed of two modules: a pre-processor module which handles the raw lidar data and corrects them for instrumental effects, and an optical processing module for the retrieval of aerosol optical products from the pre-processed data. All input parameters needed to perform the lidar analysis are stored in a database to keep track of all changes which may occur for any EARLINET lidar system over time. The two calculus modules are coordinated and synchronized by an additional module (daemon) which makes the whole analysis process fully automatic. The end user can interact with the SCC via a user-friendly web interface. All SCC modules are developed using open-source and freely available software packages. The final products retrieved by the SCC fulfill all requirements of the EARLINET quality assurance programs at both the instrumental and algorithm levels. Moreover, the manpower needed to provide aerosol optical products is greatly reduced, and thus the near-real-time availability of lidar data is improved. The high quality of the SCC products is proven by the good agreement between the SCC analysis and the corresponding independent manual retrievals. Finally, the ability of the SCC to provide high-quality aerosol optical products is demonstrated for an EARLINET intense observation period.

  2. Fully automatic left ventricular myocardial strain estimation in 2D short-axis tagged magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Morais, Pedro; Queirós, Sandro; Heyde, Brecht; Engvall, Jan; D'hooge, Jan; Vilaça, João L.

    2017-09-01

    Cardiovascular diseases are among the leading causes of death and frequently result in local myocardial dysfunction. Among the numerous imaging modalities available to detect these dysfunctional regions, cardiac deformation imaging through tagged magnetic resonance imaging (t-MRI) has been an attractive approach. Nevertheless, fully automatic analysis of these data sets is still challenging. In this work, we present a fully automatic framework to estimate left ventricular myocardial deformation from t-MRI. This strategy performs automatic myocardial segmentation based on B-spline explicit active surfaces, which are initialized using an annular model. A non-rigid image-registration technique is then used to assess myocardial deformation. Three experiments were set up to validate the proposed framework using a clinical database of 75 patients. First, automatic segmentation accuracy was evaluated by comparing against manual delineations at one specific cardiac phase. The proposed solution showed an average perpendicular distance error of 2.35  ±  1.21 mm and 2.27  ±  1.02 mm for the endo- and epicardium, respectively. Second, starting from either manual or automatic segmentation, myocardial tracking was performed and the resulting strain curves were compared. It is shown that the automatic segmentation adds negligible differences during the strain-estimation stage, corroborating its accuracy. Finally, segmental strain was compared with scar tissue extent determined by delay-enhanced MRI. The results proved that both strain components were able to distinguish between normal and infarct regions. Overall, the proposed framework was shown to be accurate, robust, and attractive for clinical practice, as it overcomes several limitations of a manual analysis.

  3. Iterative Strategies for Aftershock Classification in Automatic Seismic Processing Pipelines

    NASA Astrophysics Data System (ADS)

    Gibbons, Steven J.; Kværna, Tormod; Harris, David B.; Dodge, Douglas A.

    2016-04-01

    Aftershock sequences following very large earthquakes present enormous challenges to near-real-time generation of seismic bulletins. The increase in analyst resources needed to relocate an inflated number of events is compounded by failures of phase association algorithms and a significant deterioration in the quality of underlying fully automatic event bulletins. Current processing pipelines were designed a generation ago and, due to computational limitations of the time, are usually limited to single passes over the raw data. With current processing capability, multiple passes over the data are feasible. Processing the raw data at each station currently generates parametric data streams which are then scanned by a phase association algorithm to form event hypotheses. We consider the scenario where a large earthquake has occurred and propose to define a region of likely aftershock activity in which events are detected and accurately located using a separate specially targeted semi-automatic process. This effort may focus on so-called pattern detectors, but here we demonstrate a more general grid search algorithm which may cover wider source regions without requiring waveform similarity. Given many well-located aftershocks within our source region, we may remove all associated phases from the original detection lists prior to a new iteration of the phase association algorithm. We provide a proof-of-concept example for the 2015 Gorkha sequence, Nepal, recorded on seismic arrays of the International Monitoring System. Even with very conservative conditions for defining event hypotheses within the aftershock source region, we can automatically remove over half of the original detections which could have been generated by Nepal earthquakes and reduce the likelihood of false associations and spurious event hypotheses. Further reductions in the number of detections in the parametric data streams are likely using correlation and subspace detectors and/or empirical matched field processing.
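
    The pruning step can be pictured as follows: any detection whose arrival time is explainable as a predicted phase arrival from one of the well-located aftershocks is removed before phase association is re-run. A minimal Python sketch, in which the travel-time function, the single-phase matching and the tolerance are simplifying assumptions:

        def prune_detections(detections, aftershocks, travel_time, tol_s=2.0):
            """detections: iterable of (station, arrival_time);
            aftershocks: iterable of (origin_time, lat, lon, depth_km);
            travel_time: callable predicting travel time [s] to a station."""
            kept = []
            for station, t_arr in detections:
                explained = any(
                    abs(t_arr - (t0 + travel_time(lat, lon, depth, station))) < tol_s
                    for t0, lat, lon, depth in aftershocks
                )
                if not explained:
                    kept.append((station, t_arr))
            return kept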

  4. Automatic detection of cone photoreceptors in split detector adaptive optics scanning light ophthalmoscope images.

    PubMed

    Cunefare, David; Cooper, Robert F; Higgins, Brian; Katz, David F; Dubra, Alfredo; Carroll, Joseph; Farsiu, Sina

    2016-05-01

    Quantitative analysis of the cone photoreceptor mosaic in the living retina is potentially useful for early diagnosis and prognosis of many ocular diseases. Non-confocal split detector based adaptive optics scanning light ophthalmoscope (AOSLO) imaging reveals the cone photoreceptor inner segment mosaics often not visualized on confocal AOSLO imaging. Despite recent advances in automated cone segmentation algorithms for confocal AOSLO imagery, quantitative analysis of split detector AOSLO images is currently a time-consuming manual process. In this paper, we present the fully automatic adaptive filtering and local detection (AFLD) method for detecting cones in split detector AOSLO images. We validated our algorithm on 80 images from 10 subjects, showing an overall mean Dice's coefficient of 0.95 (standard deviation 0.03), when comparing our AFLD algorithm to an expert grader. This is comparable to the inter-observer Dice's coefficient of 0.94 (standard deviation 0.04). To the best of our knowledge, this is the first validated, fully-automated segmentation method which has been applied to split detector AOSLO images.
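
    For point-like detections such as cone centres, Dice's coefficient is computed after matching automatic to manual points. A greedy nearest-neighbour sketch in Python; the matching rule and the distance threshold are assumptions, not necessarily the validation protocol used in the paper.

        from scipy.spatial import cKDTree

        def detection_dice(auto_pts, manual_pts, max_dist=2.0):
            """Dice's coefficient between two (x, y) point sets."""
            tree = cKDTree(manual_pts)
            matched, tp = set(), 0
            for p in auto_pts:
                dist, idx = tree.query(p)
                if dist <= max_dist and idx not in matched:
                    matched.add(idx)          # one manual point per detection
                    tp += 1
            fp = len(auto_pts) - tp
            fn = len(manual_pts) - tp
            return 2 * tp / (2 * tp + fp + fn)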

  5. Automation of image data processing. (Polish Title: Automatyzacja procesu przetwarzania danych obrazowych)

    NASA Astrophysics Data System (ADS)

    Preuss, R.

    2014-12-01

    This article discusses the current capabilities of automated processing of image data, using the PhotoScan software by Agisoft as an example. At present, image data obtained by various registration systems (metric and non-metric cameras) placed on airplanes, satellites, or more often on UAVs is used to create photogrammetric products. Multiple registrations of an object or land area (large groups of photos are captured) are usually performed in order to eliminate obscured areas as well as to raise the final accuracy of the photogrammetric product. Because of such a situation, the geometry of the resulting image blocks is far from the typical configuration of images. For fast image georeferencing, automatic image matching algorithms are currently applied. They can create a model of a block in the local coordinate system or, using initial exterior orientation and measured control points, can provide image georeference in an external reference frame. In the case of non-metric images, it is also possible to carry out a self-calibration process at this stage. Image matching algorithms are also used in the generation of dense point clouds reconstructing the spatial shape of the object (area). In subsequent processing steps it is possible to obtain typical photogrammetric products such as an orthomosaic, a DSM or DTM, and a photorealistic solid model of an object. All aforementioned processing steps are implemented in a single program, in contrast to standard commercial software which divides the steps into dedicated modules. Image processing leading to final georeferenced products can be fully automated, including sequential execution of the processing steps at predetermined control parameters. The paper presents the practical results of fully automatic generation of orthomosaics, both for images obtained by a metric Vexcel camera and for a block of images acquired by a non-metric UAV system.

  6. Automatic soldering machine

    NASA Technical Reports Server (NTRS)

    Stein, J. A.

    1974-01-01

    Fully-automatic tube-joint soldering machine can be used to make leakproof joints in aluminum tubes of 3/16 to 2 in. in diameter. Machine consists of temperature-control unit, heater transformer and heater head, vibrator, and associated circuitry, controls, and indicators.

  7. Automatic Segmentation of the Eye in 3D Magnetic Resonance Imaging: A Novel Statistical Shape Model for Treatment Planning of Retinoblastoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciller, Carlos, E-mail: carlos.cillerruiz@unil.ch; Ophthalmic Technology Group, ARTORG Center of the University of Bern, Bern; Centre d’Imagerie BioMédicale, University of Lausanne, Lausanne

    Purpose: Proper delineation of ocular anatomy in 3-dimensional (3D) imaging is a big challenge, particularly when developing treatment plans for ocular diseases. Magnetic resonance imaging (MRI) is presently used in clinical practice for diagnosis confirmation and treatment planning for treatment of retinoblastoma in infants, where it serves as a source of information, complementary to the fundus or ultrasonographic imaging. Here we present a framework to fully automatically segment the eye anatomy for MRI based on 3D active shape models (ASM), and we validate the results and present a proof of concept to automatically segment pathological eyes. Methods and Materials: Manual and automatic segmentation were performed in 24 images of healthy children's eyes (3.29 ± 2.15 years of age). Imaging was performed using a 3-T MRI scanner. The ASM consists of the lens, the vitreous humor, the sclera, and the cornea. The model was fitted by first automatically detecting the position of the eye center, the lens, and the optic nerve, and then aligning the model and fitting it to the patient. We validated our segmentation method by using a leave-one-out cross-validation. The segmentation results were evaluated by measuring the overlap, using the Dice similarity coefficient (DSC) and the mean distance error. Results: We obtained a DSC of 94.90 ± 2.12% for the sclera and the cornea, 94.72 ± 1.89% for the vitreous humor, and 85.16 ± 4.91% for the lens. The mean distance error was 0.26 ± 0.09 mm. The entire process took 14 seconds on average per eye. Conclusion: We provide a reliable and accurate tool that enables clinicians to automatically segment the sclera, the cornea, the vitreous humor, and the lens, using MRI. We additionally present a proof of concept for fully automatically segmenting eye pathology. This tool reduces the time needed for eye shape delineation and thus can help clinicians when planning eye treatment and confirming the extent of the tumor.

  8. Automatic Segmentation of the Eye in 3D Magnetic Resonance Imaging: A Novel Statistical Shape Model for Treatment Planning of Retinoblastoma.

    PubMed

    Ciller, Carlos; De Zanet, Sandro I; Rüegsegger, Michael B; Pica, Alessia; Sznitman, Raphael; Thiran, Jean-Philippe; Maeder, Philippe; Munier, Francis L; Kowal, Jens H; Cuadra, Meritxell Bach

    2015-07-15

    Proper delineation of ocular anatomy in 3-dimensional (3D) imaging is a big challenge, particularly when developing treatment plans for ocular diseases. Magnetic resonance imaging (MRI) is presently used in clinical practice for diagnosis confirmation and treatment planning for treatment of retinoblastoma in infants, where it serves as a source of information, complementary to the fundus or ultrasonographic imaging. Here we present a framework to fully automatically segment the eye anatomy for MRI based on 3D active shape models (ASM), and we validate the results and present a proof of concept to automatically segment pathological eyes. Manual and automatic segmentation were performed in 24 images of healthy children's eyes (3.29 ± 2.15 years of age). Imaging was performed using a 3-T MRI scanner. The ASM consists of the lens, the vitreous humor, the sclera, and the cornea. The model was fitted by first automatically detecting the position of the eye center, the lens, and the optic nerve, and then aligning the model and fitting it to the patient. We validated our segmentation method by using a leave-one-out cross-validation. The segmentation results were evaluated by measuring the overlap, using the Dice similarity coefficient (DSC) and the mean distance error. We obtained a DSC of 94.90 ± 2.12% for the sclera and the cornea, 94.72 ± 1.89% for the vitreous humor, and 85.16 ± 4.91% for the lens. The mean distance error was 0.26 ± 0.09 mm. The entire process took 14 seconds on average per eye. We provide a reliable and accurate tool that enables clinicians to automatically segment the sclera, the cornea, the vitreous humor, and the lens, using MRI. We additionally present a proof of concept for fully automatically segmenting eye pathology. This tool reduces the time needed for eye shape delineation and thus can help clinicians when planning eye treatment and confirming the extent of the tumor. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Automatic Skin Lesion Segmentation Using Deep Fully Convolutional Networks With Jaccard Distance.

    PubMed

    Yuan, Yading; Chao, Ming; Lo, Yeh-Chi

    2017-09-01

    Automatic skin lesion segmentation in dermoscopic images is a challenging task due to the low contrast between lesion and the surrounding skin, the irregular and fuzzy lesion borders, the existence of various artifacts, and various imaging acquisition conditions. In this paper, we present a fully automatic method for skin lesion segmentation by leveraging a 19-layer deep convolutional neural network that is trained end-to-end and does not rely on prior knowledge of the data. We propose a set of strategies to ensure effective and efficient learning with limited training data. Furthermore, we design a novel loss function based on Jaccard distance to eliminate the need for sample re-weighting, a typical procedure when using cross entropy as the loss function for image segmentation due to the strong imbalance between the number of foreground and background pixels. We evaluated the effectiveness, efficiency, as well as the generalization capability of the proposed framework on two publicly available databases. One is from the ISBI 2016 skin lesion analysis towards melanoma detection challenge, and the other is the PH2 database. Experimental results showed that the proposed method outperformed other state-of-the-art algorithms on these two databases. Our method is general enough and only needs minimum pre- and post-processing, which allows its adoption in a variety of medical image segmentation tasks.

  10. Quantification of common carotid artery and descending aorta vessel wall thickness from MR vessel wall imaging using a fully automated processing pipeline.

    PubMed

    Gao, Shan; van 't Klooster, Ronald; Brandts, Anne; Roes, Stijntje D; Alizadeh Dehnavi, Reza; de Roos, Albert; Westenberg, Jos J M; van der Geest, Rob J

    2017-01-01

    To develop and evaluate a method that can fully automatically identify the vessel wall boundaries and quantify the wall thickness for both the common carotid artery (CCA) and the descending aorta (DAO) from axial magnetic resonance (MR) images. 3T MRI data acquired with a T1-weighted gradient-echo black-blood imaging sequence from carotid (39 subjects) and aorta (39 subjects) were used to develop and test the algorithm. The vessel wall segmentation was achieved by fitting a 3D cylindrical B-spline surface to the boundaries of the lumen and the outer wall, respectively. The tube-fitting was based on edge detection performed on the signal intensity (SI) profile along the surface normal. To achieve a fully automated process, a Hough Transform (HT) was developed to estimate the lumen centerline and radii for the target vessel. Using the outputs of the HT, a tube model for lumen segmentation was initialized and deformed to fit the image data. Finally, the lumen segmentation was dilated to initiate the adaptation procedure of the outer wall tube. The algorithm was validated by determining: 1) its performance against manual tracing; 2) its interscan reproducibility in quantifying vessel wall thickness (VWT); 3) its capability of detecting VWT differences in hypertensive patients compared with healthy controls. Statistical analysis including Bland-Altman analysis, t-tests, and sample size calculation was performed for the purpose of algorithm evaluation. The mean distance between the manual and automatically detected lumen/outer wall contours was 0.00 ± 0.23/0.09 ± 0.21 mm for the CCA and 0.12 ± 0.24/0.14 ± 0.35 mm for the DAO. No significant difference was observed between the interscan VWT assessments using automated segmentation for both the CCA (P = 0.19) and the DAO (P = 0.94). Both manual and automated segmentation detected significantly higher carotid (P = 0.016 and P = 0.005) and aortic (P < 0.001 and P = 0.021) wall thickness in the hypertensive patients. A reliable and reproducible pipeline for fully automatic vessel wall quantification was developed and validated on healthy volunteers as well as patients with increased vessel wall thickness. This method holds promise for helping in efficient image interpretation for large-scale cohort studies. Level of Evidence: 4. J. Magn. Reson. Imaging 2017;45:215-228. © 2016 International Society for Magnetic Resonance in Medicine.
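
    As an illustration of Hough-transform-based lumen initialization, the sketch below estimates a lumen centre and radius in a single axial slice with scikit-image's circular Hough transform. The radius range and edge-detector settings are illustrative, and the paper's HT operates on the full image stack rather than one slice.

        import numpy as np
        from skimage.feature import canny
        from skimage.transform import hough_circle, hough_circle_peaks

        def estimate_lumen(slice_img, radii=np.arange(3, 15)):
            """Return ((cx, cy), r) of the strongest circular edge."""
            edges = canny(slice_img, sigma=2.0)
            accum = hough_circle(edges, radii)
            _, cx, cy, r = hough_circle_peaks(accum, radii, total_num_peaks=1)
            return (cx[0], cy[0]), r[0]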

  11. A semi-automatic annotation tool for cooking video

    NASA Astrophysics Data System (ADS)

    Bianco, Simone; Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo; Margherita, Roberto; Marini, Gianluca; Gianforme, Giorgio; Pantaleo, Giuseppe

    2013-03-01

    In order to create a cooking assistant application to guide users in the preparation of dishes relevant to their profile diets and food preferences, it is necessary to accurately annotate the video recipes, identifying and tracking the foods handled by the cook. These videos present particular annotation challenges such as frequent occlusions, food appearance changes, etc. Manually annotating the videos is a time-consuming, tedious and error-prone task. Fully automatic tools that integrate computer vision algorithms to extract and identify the elements of interest are not error free, and false positive and false negative detections need to be corrected in a post-processing stage. We present an interactive, semi-automatic tool for the annotation of cooking videos that integrates computer vision techniques under the supervision of the user. The annotation accuracy is increased with respect to completely automatic tools and the human effort is reduced with respect to completely manual ones. The performance and usability of the proposed tool are evaluated on the basis of the time and effort required to annotate the same video sequences.

  12. A quality score for coronary artery tree extraction results

    NASA Astrophysics Data System (ADS)

    Cao, Qing; Broersen, Alexander; Kitslaar, Pieter H.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke

    2018-02-01

    Coronary artery trees (CATs) are often extracted to aid the fully automatic analysis of coronary artery disease on coronary computed tomography angiography (CCTA) images. Automatically extracted CATs often miss some arteries or include wrong extractions which require manual corrections before performing successive steps. For analyzing a large number of datasets, a manual quality check of the extraction results is time-consuming. This paper presents a method to automatically calculate quality scores for extracted CATs in terms of clinical significance of the extracted arteries and the completeness of the extracted CAT. Both right dominant (RD) and left dominant (LD) anatomical statistical models are generated and exploited in developing the quality score. To automatically determine which model should be used, a dominance type detection method is also designed. Experiments are performed on the automatically extracted and manually refined CATs from 42 datasets to evaluate the proposed quality score. In 39 (92.9%) cases, the proposed method is able to measure the quality of the manually refined CATs with higher scores than the automatically extracted CATs. In a 100-point scale system, the average scores for automatically and manually refined CATs are 82.0 (+/-15.8) and 88.9 (+/-5.4) respectively. The proposed quality score will assist the automatic processing of the CAT extractions for large cohorts which contain both RD and LD cases. To the best of our knowledge, this is the first time that a general quality score for an extracted CAT is presented.

  13. Segmentation of 3D ultrasound computer tomography reflection images using edge detection and surface fitting

    NASA Astrophysics Data System (ADS)

    Hopp, T.; Zapf, M.; Ruiter, N. V.

    2014-03-01

    An essential processing step for comparison of Ultrasound Computer Tomography images to other modalities, as well as for the use in further image processing, is to segment the breast from the background. In this work we present a (semi-) automated 3D segmentation method which is based on the detection of the breast boundary in coronal slice images and a subsequent surface fitting. The method was evaluated using a software phantom and in-vivo data. The fully automatically processed phantom results showed that a segmentation of approx. 10% of the slices of a dataset is sufficient to recover the overall breast shape. Application to 16 in-vivo datasets was performed successfully using semi-automated processing, i.e. using a graphical user interface for manual corrections of the automated breast boundary detection. The processing time for the segmentation of an in-vivo dataset could be significantly reduced by a factor of four compared to a fully manual segmentation. Comparison to manually segmented images identified a smoother surface for the semi-automated segmentation with an average of 11% of differing voxels and an average surface deviation of 2 mm. Limitations of the edge detection may be overcome by future updates of the KIT USCT system, allowing a fully-automated usage of our segmentation approach.

  14. Automatic short axis orientation of the left ventricle in 3D ultrasound recordings

    NASA Astrophysics Data System (ADS)

    Pedrosa, João.; Heyde, Brecht; Heeren, Laurens; Engvall, Jan; Zamorano, Jose; Papachristidis, Alexandros; Edvardsen, Thor; Claus, Piet; D'hooge, Jan

    2016-04-01

    The recent advent of three-dimensional echocardiography has led to an increased interest from the scientific community in left ventricle segmentation frameworks for cardiac volume and function assessment. An automatic orientation of the segmented left ventricular mesh is an important step to obtain a point-to-point correspondence between the mesh and the cardiac anatomy. Furthermore, this would allow for an automatic division of the left ventricle into the standard 17 segments and, thus, fully automatic per-segment analysis, e.g. regional strain assessment. In this work, a method for fully automatic short axis orientation of the segmented left ventricle is presented. The proposed framework aims at detecting the inferior right ventricular insertion point. 211 three-dimensional echocardiographic images were used to validate this framework by comparison to manual annotation of the inferior right ventricular insertion point. A mean unsigned error of 8.05° ± 18.50° was found, whereas the mean signed error was 1.09°. Large deviations between the manual and automatic annotations (>30°) only occurred in 3.79% of cases. The average computation time was 666 ms in a non-optimized MATLAB environment, which makes real-time application feasible. In conclusion, a successful automatic real-time method for orientation of the segmented left ventricle is proposed.

  15. The design of an irradiator for the continuous processing of liquid latex

    NASA Astrophysics Data System (ADS)

    Reuter, O.; Langley, R.; Zn, Wan Manshol Bin W.

    1998-06-01

    This paper presents a new design concept for a gamma irradiation plant for the continuous processing of pumpable liquids. Typical applications of such a plant include the irradiation vulcanisation of natural latex rubber, the disinfection of municipal sewage sludge for agricultural use, the sterilisation of liquids in the pharmaceutical and cosmetics industries, and the industrial processing of bulk liquids. The authors describe the design and operation of the latex irradiator now operating on a small production scale in Malaysia and proposed developments. The design allows irradiation processing to be carried out under an inert or other gaseous environment. A state-of-the-art control system, built on industrial computers, ensures the fully automatic processing operation required.

  16. Controlling the COD removal of an A-stage pilot study with instrumentation and automatic process control.

    PubMed

    Miller, Mark W; Elliott, Matt; DeArmond, Jon; Kinyua, Maureen; Wett, Bernhard; Murthy, Sudhir; Bott, Charles B

    2017-06-01

    The pursuit of fully autotrophic nitrogen removal via the anaerobic ammonium oxidation (anammox) pathway has led to an increased interest in carbon removal technologies, particularly the A-stage of the adsorption/bio-oxidation (A/B) process. The high-rate operation of the A-stage and lack of automatic process control often result in wide variations of chemical oxygen demand (COD) removal that can ultimately impact nitrogen removal in the downstream B-stage process. This study evaluated the use of dissolved oxygen (DO)- and mixed liquor suspended solids (MLSS)-based automatic control strategies, implemented through in situ on-line sensors, in the A-stage of an A/B pilot study. The objective of using these control strategies was to reduce the variability of COD removal by the A-stage and thus the variability of the effluent C/N. The use of cascade DO control in the A-stage did not impact COD removal at the conditions tested in this study, likely because the bulk DO concentration (>0.5 mg/L) was maintained above the half saturation coefficient of heterotrophic organisms for DO. MLSS-based solids retention time (SRT) control, where MLSS was used as a surrogate for SRT, did not significantly reduce the effluent C/N variability, but it was able to reduce COD removal variation in the A-stage by 90%.
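
    The cascade DO control evaluated above can be sketched as two nested loops: an outer loop turning the DO error into an airflow setpoint, and an inner loop turning the airflow error into a valve position. The Python sketch below uses textbook PI controllers; all gains, limits and the sampling interval are illustrative assumptions, not the pilot plant's tuning.

        class PI:
            """Discrete PI controller with output clamping."""
            def __init__(self, kp, ki, out_min, out_max):
                self.kp, self.ki = kp, ki
                self.out_min, self.out_max = out_min, out_max
                self.integral = 0.0
            def update(self, setpoint, measured, dt):
                err = setpoint - measured
                self.integral += err * dt
                out = self.kp * err + self.ki * self.integral
                return min(max(out, self.out_min), self.out_max)

        do_loop = PI(kp=50.0, ki=5.0, out_min=0.0, out_max=500.0)   # DO -> airflow [m3/h]
        air_loop = PI(kp=0.2, ki=0.05, out_min=0.0, out_max=100.0)  # airflow -> valve [%]

        def control_step(do_sp, do_meas, air_meas, dt=5.0):
            air_sp = do_loop.update(do_sp, do_meas, dt)    # outer loop
            return air_loop.update(air_sp, air_meas, dt)   # inner loop -> valve position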

  17. Embodiment and second-language: automatic activation of motor responses during processing spatially associated L2 words and emotion L2 words in a vertical Stroop paradigm.

    PubMed

    Dudschig, Carolin; de la Vega, Irmgard; Kaup, Barbara

    2014-05-01

    Converging evidence suggests that understanding our first language (L1) results in reactivation of experiential sensorimotor traces in the brain. Surprisingly, little is known regarding the involvement of these processes during second-language (L2) processing. Participants saw L1 or L2 words referring to entities with a typical location (e.g., star, mole) (Experiments 1 and 2) or to an emotion (e.g., happy, sad) (Experiment 3). Participants responded to the words' ink color with an upward or downward arm movement. Despite word meaning being fully task-irrelevant, L2 automatically activated motor responses similar to L1, even when L2 was acquired rather late in life (age >11). Specifically, words such as star facilitated upward, and words such as root facilitated downward responses. Additionally, words referring to positive emotions facilitated upward, and words referring to negative emotions facilitated downward responses. In summary, our study suggests that reactivation of experiential traces is not limited to L1 processing. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Automated tilt series alignment and tomographic reconstruction in IMOD.

    PubMed

    Mastronarde, David N; Held, Susannah R

    2017-02-01

    Automated tomographic reconstruction is now possible in the IMOD software package, including the merging of tomograms taken around two orthogonal axes. Several developments enable the production of high-quality tomograms. When using fiducial markers for alignment, the markers to be tracked through the series are chosen automatically; if there is an excess of markers available, a well-distributed subset is selected that is most likely to track well. Marker positions are refined by applying an edge-enhancing Sobel filter, which results in a 20% improvement in alignment error for plastic-embedded samples and 10% for frozen-hydrated samples. Robust fitting, in which outlying points are given less or no weight in computing the fitting error, is used to obtain an alignment solution, so that aberrant points from the automated tracking can have little effect on the alignment. When merging two dual-axis tomograms, the alignment between them is refined from correlations between local patches; a measure of structure was developed so that patches with insufficient structure to give accurate correlations can now be excluded automatically. We have also developed a script for running all steps in the reconstruction process with a flexible mechanism for setting parameters, and we have added a user interface for batch processing of tilt series to the Etomo program in IMOD. Batch processing is fully compatible with interactive processing and can increase efficiency even when the automation is not fully successful, because users can focus their effort on the steps that require manual intervention. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Fully automatic detection and segmentation of abdominal aortic thrombus in post-operative CTA images using Deep Convolutional Neural Networks.

    PubMed

    López-Linares, Karen; Aranjuelo, Nerea; Kabongo, Luis; Maclair, Gregory; Lete, Nerea; Ceresa, Mario; García-Familiar, Ainhoa; Macía, Iván; González Ballester, Miguel A

    2018-05-01

    Computerized Tomography Angiography (CTA) based follow-up of Abdominal Aortic Aneurysms (AAA) treated with Endovascular Aneurysm Repair (EVAR) is essential to evaluate the progress of the patient and detect complications. In this context, accurate quantification of post-operative thrombus volume is required. However, a proper evaluation is hindered by the lack of automatic, robust and reproducible thrombus segmentation algorithms. We propose a new fully automatic approach based on Deep Convolutional Neural Networks (DCNN) for robust and reproducible thrombus region of interest detection and subsequent fine thrombus segmentation. The DetectNet detection network is adapted to perform region of interest extraction from a complete CTA, and a new segmentation network architecture, based on Fully Convolutional Networks and a Holistically-Nested Edge Detection Network, is presented. These networks are trained, validated and tested in 13 post-operative CTA volumes of different patients using a 4-fold cross-validation approach to provide more robustness to the results. Our pipeline achieves a Dice score of more than 82% for post-operative thrombus segmentation and provides a mean relative volume difference between ground truth and automatic segmentation that lies within the experienced human observer variance, without the need of human intervention in most common cases. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Automated contour detection in X-ray left ventricular angiograms using multiview active appearance models and dynamic programming.

    PubMed

    Oost, Elco; Koning, Gerhard; Sonka, Milan; Oemrawsingh, Pranobe V; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2006-09-01

    This paper describes a new approach to the automated segmentation of X-ray left ventricular (LV) angiograms, based on active appearance models (AAMs) and dynamic programming. A coupling of shape and texture information between the end-diastolic (ED) and end-systolic (ES) frame was achieved by constructing a multiview AAM. Over-constraining of the model was compensated for by employing dynamic programming, integrating both intensity and motion features in the cost function. Two applications are compared: a semi-automatic method with manual model initialization, and a fully automatic algorithm. The first proved to be highly robust and accurate, demonstrating high clinical relevance. Based on experiments involving 70 patient data sets, the algorithm's success rate was 100% for ED and 99% for ES, with average unsigned border positioning errors of 0.68 mm for ED and 1.45 mm for ES. Calculated volumes were accurate and unbiased. The fully automatic algorithm, with intrinsically less user interaction, was less robust but showed a high potential, mostly due to a controlled gradient descent in updating the model parameters. The success rate of the fully automatic method was 91% for ED and 83% for ES, with average unsigned border positioning errors of 0.79 mm for ED and 1.55 mm for ES.

  1. Automatic bladder segmentation from CT images using deep CNN and 3D fully connected CRF-RNN.

    PubMed

    Xu, Xuanang; Zhou, Fugen; Liu, Bo

    2018-03-19

    An automatic approach for bladder segmentation from computed tomography (CT) images is highly desirable in clinical practice. It is a challenging task since the bladder usually suffers large variations of appearance and low soft-tissue contrast in CT images. In this study, we present a deep learning-based approach which involves a convolutional neural network (CNN) and a 3D fully connected conditional random fields recurrent neural network (CRF-RNN) to perform accurate bladder segmentation. We also propose a novel preprocessing method, called dual-channel preprocessing, to further advance the segmentation performance of our approach. The presented approach works as follows: first, we apply our proposed preprocessing method on the input CT image and obtain a dual-channel image which consists of the CT image and an enhanced bladder density map. Second, we exploit a CNN to predict a coarse voxel-wise bladder score map on this dual-channel image. Finally, a 3D fully connected CRF-RNN refines the coarse bladder score map and produces the final fine-localized segmentation result. We compare our approach to the state-of-the-art V-net on a clinical dataset. Results show that our approach achieves superior segmentation accuracy, outperforming the V-net by a significant margin. The Dice Similarity Coefficient of our approach (92.24%) is 8.12% higher than that of the V-net. Moreover, the bladder probability maps produced by our approach present sharper boundaries and more accurate localizations compared with those of the V-net. Our approach achieves higher segmentation accuracy than the state-of-the-art method on clinical data. Both the dual-channel preprocessing and the 3D fully connected CRF-RNN contribute to this improvement. The united deep network composed of the CNN and 3D CRF-RNN also outperforms a system where the CRF model acts as a post-processing method disconnected from the CNN.
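
    The dual-channel idea can be illustrated as follows: one channel is the soft-tissue-windowed CT, the other an "enhanced bladder density map" that emphasizes voxels in a urine-like intensity range. The HU windows and the soft membership function below are illustrative assumptions, not the values used in the paper.

        import numpy as np

        def dual_channel(ct_hu, window=(-50, 150), bladder_range=(0, 30)):
            """Stack a windowed CT with a bladder-emphasis map (hypothetical)."""
            lo, hi = window
            ch0 = np.clip((ct_hu - lo) / (hi - lo), 0.0, 1.0)   # windowed CT
            blo, bhi = bladder_range
            centre, half = (blo + bhi) / 2.0, (bhi - blo) / 2.0
            ch1 = np.exp(-((ct_hu - centre) / half) ** 2)       # soft membership map
            return np.stack([ch0, ch1], axis=-1)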

  2. Toward fully automated processing of dynamic susceptibility contrast perfusion MRI for acute ischemic cerebral stroke.

    PubMed

    Kim, Jinsuh; Leira, Enrique C; Callison, Richard C; Ludwig, Bryan; Moritani, Toshio; Magnotta, Vincent A; Madsen, Mark T

    2010-05-01

    We developed fully automated software for dynamic susceptibility contrast (DSC) MR perfusion-weighted imaging (PWI) to efficiently and reliably derive critical hemodynamic information for acute stroke treatment decisions. Brain MR PWI was performed in 80 consecutive patients with acute nonlacunar ischemic stroke within 24 h after symptom onset from January 2008 to August 2009. These studies were automatically processed to generate hemodynamic parameters that included cerebral blood flow, cerebral blood volume, and the mean transit time (MTT). To develop reliable software for PWI analysis, we used computationally robust algorithms including a piecewise continuous regression method to determine the bolus arrival time (BAT), log-linear curve fitting, an arrival-time-independent deconvolution method and sophisticated motion correction methods. An optimal arterial input function (AIF) search algorithm using a new artery-likelihood metric was also developed. Anatomical locations of the automatically determined AIF were reviewed and validated. The automatically computed BAT values were statistically compared with BAT estimated by a single observer. In addition, gamma-variate curve-fitting errors of the AIF and inter-subject variability of AIFs were analyzed. Lastly, two observers independently assessed the quality and area of hypoperfusion mismatched with the restricted diffusion area from motion-corrected MTT maps and compared that with time-to-peak (TTP) maps using the standard approach. The AIF was identified within an arterial branch and enhanced areas of perfusion deficit were visualized in all evaluated cases. Total processing time was 10.9 ± 2.5 s (mean ± s.d.) without motion correction and 267 ± 80 s (mean ± s.d.) with motion correction on a standard personal computer. The MTT map produced with our software adequately estimated brain areas with perfusion deficit and was significantly less affected by random noise of the PWI when compared with the TTP map. Results of image quality assessment by two observers revealed that the MTT maps exhibited superior quality over the TTP maps (88% good rating of MTT as compared to 68% of TTP). Our software allowed fully automated deconvolution analysis of DSC PWI using proven efficient algorithms that can be applied to acute stroke treatment decisions. Our streamlined method also offers promise for further development of automated quantitative analysis of the ischemic penumbra. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.
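
    For context, the gamma-variate curve fitting mentioned above (used to assess AIF quality) can be set up with SciPy as below. The model is the standard gamma-variate form; the initial guesses and bounds are heuristic assumptions rather than the values used in the software.

        import numpy as np
        from scipy.optimize import curve_fit

        def gamma_variate(t, t0, A, alpha, beta):
            """Classic bolus model, zero before the arrival time t0."""
            dt = np.clip(t - t0, 0.0, None)
            return A * dt**alpha * np.exp(-dt / beta)

        def fit_first_pass(t, conc):
            p0 = (max(t[np.argmax(conc)] - 5.0, t[0]), conc.max(), 3.0, 1.5)
            bounds = ([t[0] - 30.0, 0.0, 0.5, 0.1], [t[-1], np.inf, 10.0, 10.0])
            popt, _ = curve_fit(gamma_variate, t, conc, p0=p0, bounds=bounds)
            return popt   # (t0, A, alpha, beta)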

  3. Automatic switching matrix

    DOEpatents

    Schlecht, Martin F.; Kassakian, John G.; Caloggero, Anthony J.; Rhodes, Bruce; Otten, David; Rasmussen, Neil

    1982-01-01

    An automatic switching matrix that includes an apertured matrix board containing a matrix of wires that can be interconnected at each aperture. Each aperture has associated therewith a conductive pin which, when fully inserted into the associated aperture, effects electrical connection between the wires within that particular aperture. Means is provided for automatically inserting the pins in a determined pattern and for removing all the pins to permit other interconnecting patterns.

  4. On the possibility of producing definitive magnetic observatory data within less than one year

    NASA Astrophysics Data System (ADS)

    Mandić, Igor; Korte, Monika

    2017-04-01

    Geomagnetic observatory data are fundamental in geomagnetic field studies and are widely used in other applications. Often they are combined with satellite and ground survey data. Unfortunately, observatory definitive data are only available with a time lag ranging from several months up to more than a year. The reason for this lag is the annual production of the final calibration values, i.e. baselines, that are used to correct preliminary data from continuously recording magnetometers. In this paper, we show that the preparation of definitive geomagnetic data is possible within a calendar year and present an original method for prompt and automatic estimation of the observatory baselines. The new baselines, obtained in a mostly automatic manner, are compared with the baselines reported on the INTERMAGNET DVDs for the 2009-2011 period. The high quality of the baselines obtained by the proposed method indicates its suitability for data processing in fully automatic observatories when automated absolute instruments are deployed at remote sites.

  5. Review of Findings for Human Performance Contribution to Risk in Operating Events

    DTIC Science & Technology

    2002-03-01

    ...and loss of DC power. Key to this event was failure to control setpoints on safety-related equipment and failure to maintain the load tap changer... Therefore, "to optimize task execution at the job site, it is important to align organizational processes and values." Effective team skills are an... reactor was blocked and the water level rapidly dropped to the automatic low-level scram setpoint. Human Performance Issues: Control rods were fully...

  6. The Study of Cognitive Change Process on Depression during Aerobic Exercises.

    PubMed

    Sadeghi, Kheirollah; Ahmadi, Seyed Mojtaba; Moghadam, Arash Parsa; Parvizifard, Aliakbar

    2017-04-01

    Several studies have shown that aerobic exercise is effective in treating depression and improving mental health. There are various theories that explain why aerobic exercise is effective in the treatment of depression and the improvement of mental health, but few studies show how cognitive components and depression change during aerobic exercise. The current study was carried out to investigate the cognitive change process during aerobic exercise in depressed students. The study was conducted through structural equation modeling; the sample included 85 depressed students selected through a purposive sampling method. The Beck Depression Inventory (BDI-II), the Automatic Thoughts Questionnaire (ATQ), and the Dysfunctional Attitude Scale (DAS) were used as data collection instruments. The participants received eight sessions of aerobic exercise (three times a week). The obtained data were analysed with AMOS-18 and SPSS-18 software. The results showed that depression (p=0.001), automatic thoughts (frequency p=0.413, beliefs p=0.676) and dysfunctional assumptions (p=0.219) decreased during aerobic exercise; however, the decrease was significant only for depression. The causal and consequential models did not fit the data, whereas the partially and fully interactive models provided an adequate fit; the fully interactive model provided the best fit of the data.

  7. "Rate My Therapist": Automated Detection of Empathy in Drug and Alcohol Counseling via Speech and Language Processing.

    PubMed

    Xiao, Bo; Imel, Zac E; Georgiou, Panayiotis G; Atkins, David C; Narayanan, Shrikanth S

    2015-01-01

    The technology for evaluating patient-provider interactions in psychotherapy, observational coding, has not changed in 70 years. It is labor-intensive, error-prone, and expensive, limiting its use in evaluating psychotherapy in the real world. Engineering solutions from speech and language processing provide new methods for the automatic evaluation of provider ratings from session recordings. The primary data are 200 Motivational Interviewing (MI) sessions from a study on MI training methods with observer ratings of counselor empathy. Automatic Speech Recognition (ASR) was used to transcribe sessions, and the resulting words were used in a text-based predictive model of empathy. Two supporting datasets trained the speech processing tasks including ASR (1200 transcripts from heterogeneous psychotherapy sessions and 153 transcripts and session recordings from 5 MI clinical trials). The accuracy of computationally derived empathy ratings was evaluated against human ratings for each provider. Computationally derived empathy scores and classifications (high vs. low) were highly accurate against human-based codes and classifications, with a correlation of 0.65 and an F-score (a weighted average of sensitivity and specificity) of 0.86, respectively. Empathy prediction using human transcription as input (as opposed to ASR) resulted in a slight increase in prediction accuracies, suggesting that the fully automatic system with ASR is relatively robust. Using speech and language processing methods, it is possible to generate accurate predictions of provider performance in psychotherapy from audio recordings alone. This technology can support large-scale evaluation of psychotherapy for dissemination and process studies.
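
    A text-based rating model in this spirit, transcripts in and a continuous empathy score out, can be sketched with scikit-learn as below. The actual system used richer language features and modeling; this TF-IDF plus linear regression pipeline is purely illustrative, and the variable names are placeholders.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import Ridge
        from sklearn.pipeline import make_pipeline

        model = make_pipeline(
            TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # unigram/bigram features
            Ridge(alpha=1.0),                               # regularized regression
        )
        # transcripts: list of session transcripts (str)
        # empathy_scores: list of human-coded empathy ratings (float)
        # model.fit(transcripts, empathy_scores)
        # predicted = model.predict(new_transcripts)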

  8. Automatic Sea Bird Detection from High Resolution Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Mader, S.; Grenzdörffer, G. J.

    2016-06-01

    Great efforts are presently being made in the scientific community to develop computerized and (fully) automated image processing methods allowing for an efficient and automatic monitoring of sea birds and marine mammals in ever-growing amounts of aerial imagery. Currently the major part of the processing, however, is still conducted by specially trained professionals, visually examining the images and detecting and classifying the requested subjects. This is a very tedious task, particularly when the rate of void images regularly exceeds the mark of 90%. In this contribution we present our work aiming to support the processing of aerial images with modern methods from the field of image processing. We especially focus on the combination of local, region-based feature detection and piecewise global image segmentation for automatic detection of different sea bird species. Large image dimensions resulting from the use of medium- and large-format digital cameras in aerial surveys inhibit the applicability of image processing methods based on global operations. In order to efficiently handle those image sizes and to nevertheless take advantage of globally operating segmentation algorithms, we describe the combined usage of a simple, performant feature detector based on local operations on the original image with a complex global segmentation algorithm operating on extracted sub-images. The resulting exact segmentation of possible candidates then serves as a basis for the determination of feature vectors for subsequent elimination of false candidates and for classification tasks.
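
    The local detection stage can be pictured as a fast blob detector that crops sub-images around candidate birds, which the heavier global segmentation then processes. A Python sketch with scikit-image; the detector parameters and patch size are illustrative assumptions.

        from skimage.feature import blob_log

        def candidate_subimages(image, patch=64):
            """Yield square crops around candidate detections in a grayscale image.
            blob_log returns one (row, col, sigma) triple per detected blob."""
            blobs = blob_log(image, min_sigma=2, max_sigma=10, threshold=0.1)
            half = patch // 2
            for r, c, _ in blobs.astype(int):
                r0, c0 = max(r - half, 0), max(c - half, 0)
                yield image[r0:r0 + patch, c0:c0 + patch]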

  9. Fully automatic lesion segmentation in breast MRI using mean-shift and graph-cuts on a region adjacency graph.

    PubMed

    McClymont, Darryl; Mehnert, Andrew; Trakic, Adnan; Kennedy, Dominic; Crozier, Stuart

    2014-04-01

    To present and evaluate a fully automatic method for segmentation (i.e., detection and delineation) of suspicious tissue in breast MRI. The method, based on mean-shift clustering and graph-cuts on a region adjacency graph, was developed and its parameters tuned using multimodal (T1, T2, DCE-MRI) clinical breast MRI data from 35 subjects (training data). It was then tested using two data sets. Test set 1 comprises data for 85 subjects (93 lesions) acquired using the same protocol and scanner system used to acquire the training data. Test set 2 comprises data for eight subjects (nine lesions) acquired using a similar protocol but a different vendor's scanner system. Each lesion was manually delineated in three-dimensions by an experienced breast radiographer to establish segmentation ground truth. The regions of interest identified by the method were compared with the ground truth and the detection and delineation accuracies quantitatively evaluated. One hundred percent of the lesions were detected with a mean of 4.5 ± 1.2 false positives per subject. This false-positive rate is nearly 50% better than previously reported for a fully automatic breast lesion detection system. The median Dice coefficient for Test set 1 was 0.76 (interquartile range, 0.17), and 0.75 (interquartile range, 0.16) for Test set 2. The results demonstrate the efficacy and accuracy of the proposed method as well as its potential for direct application across different MRI systems. It is (to the authors' knowledge) the first fully automatic method for breast lesion detection and delineation in breast MRI.
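
    A rough scikit-image analogue of the region-adjacency-graph stage is shown below. For brevity, SLIC superpixels stand in for mean-shift clustering and a normalized cut stands in for the paper's graph-cut formulation; both substitutions are deliberate simplifications, not the published method. (In older scikit-image versions the graph module lives under skimage.future.)

        from skimage import graph, segmentation

        def rag_segment(img):
            """Cluster an RGB image into regions, build a region adjacency
            graph on them, and partition it with a normalized cut."""
            labels = segmentation.slic(img, n_segments=400, compactness=10)
            rag = graph.rag_mean_color(img, labels, mode='similarity')
            return graph.cut_normalized(labels, rag)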

  10. Universal microfluidic automaton for autonomous sample processing: application to the Mars Organic Analyzer.

    PubMed

    Kim, Jungkyu; Jensen, Erik C; Stockton, Amanda M; Mathies, Richard A

    2013-08-20

    A fully integrated multilayer microfluidic chemical analyzer for automated sample processing and labeling, as well as analysis using capillary zone electrophoresis is developed and characterized. Using lifting gate microfluidic control valve technology, a microfluidic automaton consisting of a two-dimensional microvalve cellular array is fabricated with soft lithography in a format that enables facile integration with a microfluidic capillary electrophoresis device. The programmable sample processor performs precise mixing, metering, and routing operations that can be combined to achieve automation of complex and diverse assay protocols. Sample labeling protocols for amino acid, aldehyde/ketone and carboxylic acid analysis are performed automatically followed by automated transfer and analysis by the integrated microfluidic capillary electrophoresis chip. Equivalent performance to off-chip sample processing is demonstrated for each compound class; the automated analysis resulted in a limit of detection of ~16 nM for amino acids. Our microfluidic automaton provides a fully automated, portable microfluidic analysis system capable of autonomous analysis of diverse compound classes in challenging environments.

  11. A fully automatic three-step liver segmentation method on LDA-based probability maps for multiple contrast MR images.

    PubMed

    Gloger, Oliver; Kühn, Jens; Stanski, Adam; Völzke, Henry; Puls, Ralf

    2010-07-01

    Automatic 3D liver segmentation in magnetic resonance (MR) data sets has proven to be a very challenging task in the domain of medical image analysis. There exist numerous approaches for automatic 3D liver segmentation on computed tomography data sets that have influenced the segmentation of MR images. In contrast to previous approaches to liver segmentation in MR data sets, we use all available MR channel information of different weightings and formulate liver tissue and position probabilities in a probabilistic framework. We apply multiclass linear discriminant analysis as a fast and efficient dimensionality reduction technique and generate probability maps that are then used for segmentation. We develop a fully automatic three-step 3D segmentation approach based upon a modified region growing approach and a further threshold technique. Finally, we incorporate characteristic prior knowledge to improve the segmentation results. This novel 3D segmentation approach is modularized and can be applied to normal and fat-accumulated liver tissue properties. Copyright 2010 Elsevier Inc. All rights reserved.
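
    The LDA-based probability map step can be sketched with scikit-learn: multi-channel MR intensities go in, a per-voxel liver probability comes out. The feature construction and training data below are hypothetical placeholders; the paper's framework adds position probabilities and further refinement on top of such maps.

        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def probability_map(channels, feats_train, labels_train):
            """channels: (X, Y, Z, C) array of C co-registered MR weightings;
            feats_train/labels_train: voxel samples labelled liver=1, other=0."""
            lda = LinearDiscriminantAnalysis()
            lda.fit(feats_train, labels_train)
            x, y, z, c = channels.shape
            probs = lda.predict_proba(channels.reshape(-1, c))[:, 1]
            return probs.reshape(x, y, z)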

  12. Comparison between manual and semi-automatic segmentation of nasal cavity and paranasal sinuses from CT images.

    PubMed

    Tingelhoff, K; Moral, A I; Kunkel, M E; Rilk, M; Wagner, I; Eichhorn, K G; Wahl, F M; Bootz, F

    2007-01-01

    Segmentation of medical image data has become more and more important over the last years. The results are used for diagnosis, surgical planning or workspace definition of robot-assisted systems. The purpose of this paper is to find out whether manual or semi-automatic segmentation is adequate for the ENT surgical workflow or whether fully automatic segmentation of the paranasal sinuses and the nasal cavity is needed. We present a comparison of manual and semi-automatic segmentation of the paranasal sinuses and the nasal cavity. Manual segmentation is performed by custom software whereas semi-automatic segmentation is realized by a commercial product (Amira). For this study we used a CT dataset of the paranasal sinuses which consists of 98 transversal slices, each 1.0 mm thick, with a resolution of 512 x 512 pixels. For the analysis of both segmentation procedures we used volume, extension (width, length and height), segmentation time and 3D reconstruction. The segmentation time was reduced from 960 minutes with manual to 215 minutes with semi-automatic segmentation. We found the highest variances when segmenting the nasal cavity. For the paranasal sinuses the volume differences between manual and semi-automatic segmentation are not significant. Depending on the required segmentation accuracy, both approaches deliver useful results and could be used, e.g., for robot-assisted systems. Nevertheless, both procedures are not suitable for the everyday surgical workflow, because they take too much time. Fully automatic and reproducible segmentation algorithms are needed for segmentation of the paranasal sinuses and the nasal cavity.

  13. A Program Certification Assistant Based on Fully Automated Theorem Provers

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Fischer, Bernd

    2005-01-01

    We describe a certification assistant to support formal safety proofs for programs. It is based on a graphical user interface that hides the low-level details of first-order automated theorem provers while supporting limited interactivity: it allows users to customize and control the proof process on a high level, manages the auxiliary artifacts produced during this process, and provides traceability between the proof obligations and the relevant parts of the program. The certification assistant is part of a larger program synthesis system and is intended to support the deployment of automatically generated code in safety-critical applications.

  14. Fully automatic three-dimensional visualization of intravascular optical coherence tomography images: methods and feasibility in vivo

    PubMed Central

    Ughi, Giovanni J; Adriaenssens, Tom; Desmet, Walter; D’hooge, Jan

    2012-01-01

    Intravascular optical coherence tomography (IV-OCT) is an imaging modality that can be used for the assessment of intracoronary stents. Recent publications pointed to the fact that 3D visualizations have potential advantages compared to conventional 2D representations. However, 3D imaging still requires a time-consuming manual procedure not suitable for on-line application during coronary interventions. We propose an algorithm for a rapid and fully automatic 3D visualization of IV-OCT pullbacks. IV-OCT images are first processed for the segmentation of the different structures. This also allows for automatic pullback calibration. Then, according to the segmentation results, the different structures are depicted with different colors to visualize the vessel wall, the stent and the guide-wire in detail. Final 3D rendering results are obtained through the use of a commercial 3D DICOM viewer. Manual analysis was used as ground truth for the validation of the segmentation algorithms. A correlation value of 0.99 and good limits of agreement (Bland-Altman statistics) were found over 250 images randomly extracted from 25 in vivo pullbacks. Moreover, the 3D rendering was compared to angiography, to pictures of deployed stents made available by the manufacturers and to conventional 2D imaging, corroborating the visualization results. The computational time for visualizing an entire data set was ~74 s. The proposed method allows for the on-line use of 3D IV-OCT during percutaneous coronary interventions, potentially allowing treatment optimization. PMID:23243578

  15. Fully automatized renal parenchyma volumetry using a support vector machine based recognition system for subject-specific probability map generation in native MR volume data

    NASA Astrophysics Data System (ADS)

    Gloger, Oliver; Tönnies, Klaus; Mensel, Birger; Völzke, Henry

    2015-11-01

    In epidemiological studies as well as in clinical practice the amount of produced medical image data strongly increased in the last decade. In this context organ segmentation in MR volume data gained increasing attention for medical applications. Especially in large-scale population-based studies organ volumetry is highly relevant requiring exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automatized methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework of a two-stepped probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are refined subsequently by using several, extended segmentation strategies. We present a three class-based support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high quality subject-specific parenchyma probability maps. Several refinement strategies including a final shape-based 3D level set segmentation technique are used in subsequent processing modules to segment renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from parenchymal volume, which is important to analyze renal functions. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches.
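
    The shape-feature step, Fourier descriptors of a contour feeding an SVM, can be sketched as below. The descriptor construction is a common translation- and scale-invariant variant and the classifier setup is generic; neither is necessarily the paper's exact formulation.

        import numpy as np
        from sklearn.svm import SVC

        def fourier_descriptors(contour, n_coeffs=10):
            """Descriptors of a closed (N, 2) contour: drop the DC term for
            translation invariance, normalize by the first harmonic for scale."""
            z = contour[:, 0] + 1j * contour[:, 1]
            F = np.fft.fft(z)
            mags = np.abs(F)[1:n_coeffs + 1]
            return mags / (mags[0] + 1e-12)

        # clf = SVC(kernel='rbf').fit([fourier_descriptors(c) for c in contours], y)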

  16. Fully automatized renal parenchyma volumetry using a support vector machine based recognition system for subject-specific probability map generation in native MR volume data.

    PubMed

    Gloger, Oliver; Tönnies, Klaus; Mensel, Birger; Völzke, Henry

    2015-11-21

    In epidemiological studies as well as in clinical practice the amount of produced medical image data strongly increased in the last decade. In this context organ segmentation in MR volume data gained increasing attention for medical applications. Especially in large-scale population-based studies organ volumetry is highly relevant requiring exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automatized methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework of a two-stepped probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are refined subsequently by using several, extended segmentation strategies. We present a three class-based support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high quality subject-specific parenchyma probability maps. Several refinement strategies including a final shape-based 3D level set segmentation technique are used in subsequent processing modules to segment renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from parenchymal volume, which is important to analyze renal functions. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches.

  17. The Role of Categorical Speech Perception and Phonological Processing in Familial Risk Children With and Without Dyslexia.

    PubMed

    Hakvoort, Britt; de Bree, Elise; van der Leij, Aryan; Maassen, Ben; van Setten, Ellie; Maurits, Natasha; van Zuijen, Titia L

    2016-12-01

    This study assessed whether a categorical speech perception (CP) deficit is associated with dyslexia or with familial risk for dyslexia, by exploring a possible cascading relation from speech perception to phonology to reading and by identifying whether speech perception distinguishes familial risk (FR) children with dyslexia (FRD) from those without dyslexia (FRND). Data were collected from 9-year-old FRD (n = 37) and FRND (n = 41) children and age-matched controls (n = 49) on CP identification and discrimination and on the phonological processing measures rapid automatized naming, phoneme awareness, and nonword repetition. The FRD group performed more poorly on CP than the FRND and control groups. Findings on phonological processing align with the literature in that (a) phonological processing was related to reading and (b) the FRD group showed the lowest phonological processing outcomes. Furthermore, CP correlated weakly with reading, but this relationship was fully mediated by rapid automatized naming. Although CP and phonological skills are related to dyslexia, there was no strong evidence for a cascade from CP to phonology to reading. Deficits in CP at the behavioral level are not directly associated with dyslexia.

  18. Exploiting range imagery: techniques and applications

    NASA Astrophysics Data System (ADS)

    Armbruster, Walter

    2009-07-01

    Practically no applications exist for which automatic processing of 2D intensity imagery can equal human visual perception. This is not the case for range imagery. The paper gives examples of 3D laser radar applications, for which automatic data processing can exceed human visual cognition capabilities and describes basic processing techniques for attaining these results. The examples are drawn from the fields of helicopter obstacle avoidance, object detection in surveillance applications, object recognition at high range, multi-object-tracking, and object re-identification in range image sequences. Processing times and recognition performances are summarized. The techniques used exploit the bijective continuity of the imaging process as well as its independence of object reflectivity, emissivity and illumination. This allows precise formulations of the probability distributions involved in figure-ground segmentation, feature-based object classification and model based object recognition. The probabilistic approach guarantees optimal solutions for single images and enables Bayesian learning in range image sequences. Finally, due to recent results in 3D-surface completion, no prior model libraries are required for recognizing and re-identifying objects of quite general object categories, opening the way to unsupervised learning and fully autonomous cognitive systems.

  19. EpiTools, A software suite for presurgical brain mapping in epilepsy: Intracerebral EEG.

    PubMed

    Medina Villalon, S; Paz, R; Roehri, N; Lagarde, S; Pizzo, F; Colombet, B; Bartolomei, F; Carron, R; Bénar, C-G

    2018-06-01

    In pharmacoresistant epilepsy, exploration with depth electrodes may be needed to precisely define the epileptogenic zone. Accurate localization of these electrodes is thus essential for the interpretation of stereotaxic EEG (SEEG) signals. As SEEG analysis increasingly relies on signal processing, it is crucial to link these results to the patient's anatomy. Our aims were thus to develop a suite of software tools, called "EpiTools", able to i) precisely and automatically localize the position of each SEEG contact and ii) display the results of signal analysis within each patient's anatomy. The first tool, GARDEL (GUI for Automatic Registration and Depth Electrode Localization), automatically localizes SEEG contacts and labels each contact according to a pre-specified nomenclature (for instance that of FreeSurfer or MarsAtlas). The second tool, 3Dviewer, enables visualization, within the 3D anatomy of the patient, of signal processing results such as biomarker rates, connectivity graphs or the Epileptogenicity Index. GARDEL was validated in 30 patients by clinicians and proved highly reliable in determining the actual location of contacts within the patient's individual anatomy. GARDEL is a fully automatic electrode localization tool requiring limited user interaction (only for electrode naming or contact correction). The 3Dviewer is able to read signal processing results and display them in relation to the patient's anatomy. EpiTools can help speed up the interpretation of SEEG data and improve its precision. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Automatic Segmentation of High-Throughput RNAi Fluorescent Cellular Images

    PubMed Central

    Yan, Pingkum; Zhou, Xiaobo; Shah, Mubarak; Wong, Stephen T. C.

    2010-01-01

    High-throughput genome-wide RNA interference (RNAi) screening is emerging as an essential tool to assist biologists in understanding complex cellular processes. The large number of images produced in each study makes manual analysis intractable; hence, automatic cellular image analysis becomes an urgent need, where segmentation is the first and one of the most important steps. In this paper, a fully automatic method for segmentation of cells from genome-wide RNAi screening images is proposed. Nuclei are first extracted from the DNA channel by using a modified watershed algorithm. Cells are then extracted by modeling the interaction between them as well as combining both gradient and region information in the Actin and Rac channels. A new energy functional is formulated based on a novel interaction model for segmenting tightly clustered cells with significant intensity variance and specific phenotypes. The energy functional is minimized by using a multiphase level set method, which leads to a highly effective cell segmentation method. Promising experimental results demonstrate that automatic segmentation of high-throughput genome-wide multichannel screening can be achieved by using the proposed method, which may also be extended to other multichannel image segmentation problems. PMID:18270043
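
    The nuclei-extraction step can be pictured with a conventional distance-transform watershed, sketched below with scikit-image under assumed parameters; the paper's modified watershed and the multiphase level set stage are not reproduced here.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.feature import peak_local_max
        from skimage.filters import threshold_otsu
        from skimage.segmentation import watershed

        def segment_nuclei(dna):
            # dna: 2D grayscale DNA-channel image
            binary = dna > threshold_otsu(dna)                 # nuclei foreground
            distance = ndi.distance_transform_edt(binary)
            coords = peak_local_max(distance, min_distance=7)  # ~1 peak/nucleus
            markers = np.zeros(dna.shape, dtype=int)
            markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
            return watershed(-distance, markers, mask=binary)  # split clusters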

  1. Fully automatic multi-atlas segmentation of CTA for partial volume correction in cardiac SPECT/CT

    NASA Astrophysics Data System (ADS)

    Liu, Qingyi; Mohy-ud-Din, Hassan; Boutagy, Nabil E.; Jiang, Mingyan; Ren, Silin; Stendahl, John C.; Sinusas, Albert J.; Liu, Chi

    2017-05-01

    Anatomical-based partial volume correction (PVC) has been shown to improve image quality and quantitative accuracy in cardiac SPECT/CT. However, this method requires manual segmentation of various organs from contrast-enhanced computed tomography angiography (CTA) data. In order to achieve fully automatic CTA segmentation for clinical translation, we investigated the most common multi-atlas segmentation methods. We also modified the multi-atlas segmentation method by introducing a novel label fusion algorithm for multiple organ segmentation to eliminate overlap and gap voxels. To evaluate our proposed automatic segmentation, eight canine 99mTc-labeled red blood cell SPECT/CT datasets that incorporated PVC were analyzed, using the leave-one-out approach. The Dice similarity coefficient of each organ was computed. Compared to the conventional label fusion method, our proposed label fusion method effectively eliminated gaps and overlaps and improved the CTA segmentation accuracy. The anatomical-based PVC of cardiac SPECT images with automatic multi-atlas segmentation provided consistent image quality and quantitative estimation of intramyocardial blood volume, as compared to those derived using manual segmentation. In conclusion, our proposed automatic multi-atlas segmentation method of CTAs is feasible, practical, and facilitates anatomical-based PVC of cardiac SPECT/CT images.
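
    For intuition, the sketch below shows conventional majority-vote label fusion together with the Dice similarity coefficient used for evaluation; the paper's proposed fusion additionally resolves overlap and gap voxels between organ labels, which plain voting does not.

        import numpy as np

        def fuse_labels(warped_atlas_labels):
            # warped_atlas_labels: list of co-registered atlas label volumes
            stack = np.stack(warped_atlas_labels)      # (N, Z, Y, X) ints
            # per-voxel majority vote over the N atlases (slow but clear)
            return np.apply_along_axis(
                lambda v: np.bincount(v).argmax(), 0, stack)

        def dice(a, b):
            # Dice similarity coefficient between two binary masks
            a, b = a.astype(bool), b.astype(bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())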

  2. Aging Influences the Neural Correlates of Lexical Decision but Not Automatic Semantic Priming

    PubMed Central

    Andersen, Anders H.; Jicha, Greg A.; Smith, Charles D.

    2009-01-01

    Human behavioral data indicate that older adults are slower to perform lexical decisions (LDs) than young adults but show similar reaction time gains when these decisions are primed semantically. The present study explored the functional neuroanatomic bases of these frequently observed behavioral findings. Young and older groups completed unprimed and primed LD tasks while functional magnetic resonance imaging (fMRI) was recorded, using a fully randomized trial design paralleling those used in behavioral research. Results from the unprimed task found that age-related slowing of LD was associated with decreased activation in perceptual extrastriate regions and increased activation in regions associated with higher level linguistic processes, including prefrontal cortex. In contrast to these age-related changes in brain activation, the older group showed a preserved pattern of fMRI decreases in inferior temporal cortex when LD was primed semantically. These findings provide evidence that older adults’ LD abilities benefit from contexts that reduce the need for frontally mediated strategic processes and capitalize on the continued sensitivity of inferior temporal cortex to automatic semantic processes in aging. PMID:19273460

  3. Phase editing as a signal pre-processing step for automated bearing fault detection

    NASA Astrophysics Data System (ADS)

    Barbini, L.; Ompusunggu, A. P.; Hillis, A. J.; du Bois, J. L.; Bartic, A.

    2017-07-01

    Scheduled maintenance and inspection of bearing elements in industrial machinery contributes significantly to the operating costs. Savings can be made through automatic vibration-based damage detection and prognostics, to permit condition-based maintenance. However automation of the detection process is difficult due to the complexity of vibration signals in realistic operating environments. The sensitivity of existing methods to the choice of parameters imposes a requirement for oversight from a skilled operator. This paper presents a novel approach to the removal of unwanted vibrational components from the signal: phase editing. The approach uses a computationally-efficient full-band demodulation and requires very little oversight. Its effectiveness is tested on experimental data sets from three different test-rigs, and comparisons are made with two state-of-the-art processing techniques: spectral kurtosis and cepstral pre-whitening. The results from the phase editing technique show a 10% improvement in damage detection rates compared to the state-of-the-art while simultaneously improving on the degree of automation. This outcome represents a significant contribution in the pursuit of fully automatic fault detection.
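
    One common reading of such a phase-preserving spectral edit, sketched below as an assumption rather than the authors' exact algorithm, flattens the magnitude spectrum while keeping each bin's phase, so that periodic (deterministic) content is suppressed and impulsive bearing signatures stand out; an envelope spectrum is then a typical next step.

        import numpy as np
        from scipy.signal import hilbert

        def phase_edit(x):
            # keep each frequency bin's phase, flatten the magnitude
            X = np.fft.rfft(x)
            return np.fft.irfft(np.exp(1j * np.angle(X)), n=len(x))

        def envelope_spectrum(x, fs):
            # envelope analysis of the edited signal, a typical next step
            env = np.abs(hilbert(x))
            f = np.fft.rfftfreq(len(env), 1.0 / fs)
            return f, np.abs(np.fft.rfft(env - env.mean()))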

  4. Machine-Aided Translation: From Terminology Banks to Interactive Translation Systems.

    ERIC Educational Resources Information Center

    Greenfield, Concetta C.; Serain, Daniel

    The rapid growth of the need for technical translations in recent years has led specialists to utilize computer technology to improve the efficiency and quality of translation. The two approaches considered were automatic translation and terminology banks. Since the results of fully automatic translation were considered unsatisfactory by various…

  5. A Flexible and Configurable Architecture for Automatic Control Remote Laboratories

    ERIC Educational Resources Information Center

    Kalúz, Martin; García-Zubía, Javier; Fikar, Miroslav; Cirka, Luboš

    2015-01-01

    In this paper, we propose a novel approach in hardware and software architecture design for implementation of remote laboratories for automatic control. In our contribution, we show the solution with flexible connectivity at back-end, providing features of multipurpose usage with different types of experimental devices, and fully configurable…

  6. Fully automatic detection and visualization of patient specific coronary supply regions

    NASA Astrophysics Data System (ADS)

    Fritz, Dominik; Wiedemann, Alexander; Dillmann, Ruediger; Scheuering, Michael

    2008-03-01

    Coronary territory maps, which associate myocardial regions with the corresponding coronary artery that supplies them, are a common visualization technique to assist the physician in the diagnosis of coronary artery disease. However, the commonly used visualization is based on the AHA 17-segment model, which is an empirical population-based model. Therefore, it does not necessarily capture the often highly individual coronary anatomy of a specific patient. In this paper we introduce a novel fully automatic approach to compute the patient-individual coronary supply regions in CTA datasets. This approach is divided into three consecutive steps. First, the aorta is fully automatically located in the dataset with a combination of a Hough transform and a cylindrical model matching approach. Given the location of the aorta, a segmentation and skeletonization of the coronary tree is triggered. In the next step, the three main branches (LAD, LCX and RCX) are automatically labeled, based on the knowledge of the pose of the aorta and the left ventricle. In the last step the labeled coronary tree is projected onto the left ventricular surface, which can afterward be subdivided into the coronary supply regions based on a Voronoi transform. The resulting supply regions can be either shown in 3D on the epicardial surface of the left ventricle, or as a subdivision of a polar map.
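
    The final Voronoi step can be approximated discretely by assigning each left-ventricular surface vertex to the nearest labeled centerline point, as in the hedged sketch below; the point formats and branch labels are assumptions.

        import numpy as np
        from scipy.spatial import cKDTree

        def supply_regions(lv_vertices, branch_pts, branch_ids):
            # lv_vertices: (M, 3) left-ventricular surface points;
            # branch_pts: (K, 3) coronary centerline points;
            # branch_ids: (K,) labels, e.g. 0=LAD, 1=LCX, 2=RCX
            tree = cKDTree(branch_pts)
            _, nearest = tree.query(lv_vertices)  # closest centerline point
            return branch_ids[nearest]            # per-vertex supply label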

  7. Drag and drop simulation: from pictures to full three-dimensional simulations

    NASA Astrophysics Data System (ADS)

    Bergmann, Michel; Iollo, Angelo

    2014-11-01

    We present a suite of methods to achieve "drag and drop" simulation, i.e., to fully automate the process of performing three-dimensional flow simulations around bodies defined by actual images of moving objects. The overall approach requires skeleton graph generation to obtain a level set function from the pictures, optimal transportation to obtain the body velocity on the surface, and then flow simulation using a Cartesian method based on penalization. We illustrate this paradigm by simulating the swimming of a mackerel fish.

  8. A Patch-Based Approach for the Segmentation of Pathologies: Application to Glioma Labelling.

    PubMed

    Cordier, Nicolas; Delingette, Herve; Ayache, Nicholas

    2016-04-01

    In this paper, we describe a novel and generic approach to address fully-automatic segmentation of brain tumors by using multi-atlas patch-based voting techniques. In addition to avoiding the local search window assumption, the conventional patch-based framework is enhanced through several simple procedures: an improvement of the training dataset in terms of both label purity and intensity statistics, augmented features to implicitly guide the nearest-neighbor search, multi-scale patches, invariance to cube isometries, and stratification of the votes with respect to cases and labels. A probabilistic model automatically delineates regions of interest enclosing high-probability tumor volumes, which allows the algorithm to achieve a highly competitive running time despite minimal processing power and resources. This method was evaluated on the Multimodal Brain Tumor Image Segmentation challenge datasets. State-of-the-art results are achieved, with a limited learning stage thus restricting the risk of overfitting. Moreover, segmentation smoothness does not involve any post-processing.

  9. Nondestructive Vibratory Testing and Evaluation Procedure for Military Roads and Streets.

    DTIC Science & Technology

    1984-07-01

    the addition of an automatic data acquisition system to the instrumentation control panel. This system, presently available, would automatically ...the data used to further develop and define the basic correlations. c. Consideration be given to installing an automatic data acquisition system to...glows red any time the force generator is not fully elevated. Depressing this switch will stop the automatic cycle at any point and clear all system

  10. Adaptive road crack detection system by pavement classification.

    PubMed

    Gavilán, Miguel; Balcones, David; Marcos, Oscar; Llorca, David F; Sotelo, Miguel A; Parra, Ignacio; Ocaña, Manuel; Aliseda, Pedro; Yarza, Pedro; Amírola, Alejandro

    2011-01-01

    This paper presents a road distress detection system involving the phases needed to properly deal with fully automatic road distress assessment. A vehicle equipped with line scan cameras, laser illumination and acquisition HW-SW is used to store the digital images that will be further processed to identify road cracks. Pre-processing is firstly carried out to both smooth the texture and enhance the linear features. Non-crack feature detection is then applied to mask areas of the images with joints, sealed cracks and white painting, which usually generate false positive cracking. A seed-based approach is proposed to deal with road crack detection, combining Multiple Directional Non-Minimum Suppression (MDNMS) with a symmetry check. Seeds are linked by computing the paths with the lowest cost that meet the symmetry restrictions. The whole detection process involves the use of several parameters. A correct setting becomes essential to get optimal results without manual intervention. A fully automatic approach by means of a linear SVM-based classifier ensemble able to distinguish between up to 10 different types of pavement that appear on Spanish roads is proposed. The optimal feature vector includes different texture-based features. The parameters are then tuned depending on the output provided by the classifier. Regarding non-crack feature detection, results show that the introduction of such a module reduces the impact of false positives due to non-crack features by up to a factor of 2. In addition, the observed performance of the crack detection system is significantly boosted by adapting the parameters to the type of pavement.
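
    A minimal sketch of the pavement-classification idea follows, using GLCM texture statistics and a linear SVM as stand-ins, since the exact feature vector and ensemble are not spelled out here; the function names and parameters are illustrative.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.svm import LinearSVC

        def texture_features(patch):
            # patch: uint8 grayscale pavement patch; GLCM statistics stand in
            # for the (unspecified here) texture-based feature vector
            g = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                             levels=256, symmetric=True, normed=True)
            return np.hstack([graycoprops(g, p).ravel()
                              for p in ('contrast', 'homogeneity', 'energy')])

        def train_pavement_classifier(patches, y):
            # y: pavement type labels 0..9; the classifier output would then
            # select the parameter set used by the crack detector
            X = np.array([texture_features(p) for p in patches])
            return LinearSVC().fit(X, y)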

  11. Adaptive Road Crack Detection System by Pavement Classification

    PubMed Central

    Gavilán, Miguel; Balcones, David; Marcos, Oscar; Llorca, David F.; Sotelo, Miguel A.; Parra, Ignacio; Ocaña, Manuel; Aliseda, Pedro; Yarza, Pedro; Amírola, Alejandro

    2011-01-01

    This paper presents a road distress detection system involving the phases needed to properly deal with fully automatic road distress assessment. A vehicle equipped with line scan cameras, laser illumination and acquisition HW-SW is used to store the digital images that will be further processed to identify road cracks. Pre-processing is firstly carried out to both smooth the texture and enhance the linear features. Non-crack feature detection is then applied to mask areas of the images with joints, sealed cracks and white painting, which usually generate false positive cracking. A seed-based approach is proposed to deal with road crack detection, combining Multiple Directional Non-Minimum Suppression (MDNMS) with a symmetry check. Seeds are linked by computing the paths with the lowest cost that meet the symmetry restrictions. The whole detection process involves the use of several parameters. A correct setting becomes essential to get optimal results without manual intervention. A fully automatic approach by means of a linear SVM-based classifier ensemble able to distinguish between up to 10 different types of pavement that appear on Spanish roads is proposed. The optimal feature vector includes different texture-based features. The parameters are then tuned depending on the output provided by the classifier. Regarding non-crack feature detection, results show that the introduction of such a module reduces the impact of false positives due to non-crack features by up to a factor of 2. In addition, the observed performance of the crack detection system is significantly boosted by adapting the parameters to the type of pavement. PMID:22163717

  12. Esophagus segmentation in CT via 3D fully convolutional neural network and random walk.

    PubMed

    Fechter, Tobias; Adebahr, Sonja; Baltas, Dimos; Ben Ayed, Ismail; Desrosiers, Christian; Dolz, Jose

    2017-12-01

    Precise delineation of organs at risk is a crucial task in radiotherapy treatment planning for delivering high doses to the tumor while sparing healthy tissues. In recent years, automated segmentation methods have shown increasingly high performance for the delineation of various anatomical structures. However, this task remains challenging for organs like the esophagus, which have a versatile shape and poor contrast to neighboring tissues. For human experts, segmenting the esophagus from CT images is a time-consuming and error-prone process. To tackle these issues, we propose a random walker approach driven by a 3D fully convolutional neural network (CNN) to automatically segment the esophagus from CT images. First, a soft probability map is generated by the CNN. Then, an active contour model (ACM) is fitted to the CNN soft probability map to get a first estimate of the esophagus location. The outputs of the CNN and ACM are then used in conjunction with a probability model based on CT Hounsfield unit (HU) values to drive the random walker. Training and evaluation were done on 50 CTs from two different datasets, with clinically used, peer-reviewed esophagus contours. Results were assessed regarding spatial overlap and shape similarity. The esophagus contours generated by the proposed algorithm showed a mean Dice coefficient of 0.76 ± 0.11, an average symmetric surface distance of 1.36 ± 0.90 mm, and an average Hausdorff distance of 11.68 ± 6.80 mm, compared to the reference contours. These results translate to very good agreement with the reference contours and an increase in accuracy compared to existing methods. Furthermore, when considering the results reported in the literature for the publicly available Synapse dataset, our method outperformed all existing approaches, which suggests that the proposed method represents the current state-of-the-art for automatic esophagus segmentation. We show that a CNN can yield accurate estimates of the esophagus location, and that the results of this model can be refined by a random walk step taking pixel intensities and neighborhood relationships into account. One of the main advantages of our network over previous methods is that it performs 3D convolutions, thus fully exploiting the 3D spatial context and performing an efficient volume-wise prediction. The whole segmentation process is fully automatic and yields esophagus delineations in very good agreement with the gold standard, showing that it can compete with previously published methods. © 2017 American Association of Physicists in Medicine.
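
    A reduced version of the final refinement step might look like the sketch below, which seeds scikit-image's random walker from a CNN soft probability map; the thresholds and beta value are assumptions, and the paper's active contour and HU probability model are omitted.

        import numpy as np
        from skimage.segmentation import random_walker

        def refine_with_random_walker(ct_volume, cnn_prob, lo=0.2, hi=0.8):
            # confident CNN voxels become seeds; uncertain ones are resolved
            # by the walker on the CT intensities (thresholds are assumed)
            labels = np.zeros(ct_volume.shape, dtype=np.uint8)
            labels[cnn_prob > hi] = 1      # esophagus seeds
            labels[cnn_prob < lo] = 2      # background seeds
            # beta controls how strongly intensity edges block the walk
            return random_walker(ct_volume, labels, beta=130, mode='cg') == 1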

  13. Develop Advanced Nonlinear Signal Analysis Topographical Mapping System

    NASA Technical Reports Server (NTRS)

    Jong, Jen-Yi

    1997-01-01

    During the development of the SSME, a hierarchy of advanced signal analysis techniques for mechanical signature analysis has been developed by NASA and AI Signal Research Inc. (ASRI) to improve the safety and reliability of Space Shuttle operations. These techniques can process and identify intelligent information hidden in a measured signal that is often unidentifiable using conventional signal analysis methods. Currently, due to the highly interactive processing requirements and the volume of dynamic data involved, detailed diagnostic analysis is being performed manually, which requires immense man-hours with extensive human interface. To overcome this manual process, NASA implemented this program to develop an Advanced nonlinear signal Analysis Topographical Mapping System (ATMS) to provide automatic/unsupervised engine diagnostic capabilities. The ATMS utilizes a rule-based CLIPS expert system to supervise a hierarchy of diagnostic signature analysis techniques in the Advanced Signal Analysis Library (ASAL). ASAL performs automatic signal processing, archiving, and anomaly detection/identification tasks in order to provide an intelligent and fully automated engine diagnostic capability. The ATMS has been successfully developed under this contract. In summary, the program objectives to design, develop, test and conduct performance evaluation for an automated engine diagnostic system have been successfully achieved. Software implementation of the entire ATMS system on MSFC's OISPS computer has been completed. The significance of the ATMS developed under this program is attributed to the fully automated coherence analysis capability for anomaly detection and identification, which can greatly enhance the power and reliability of engine diagnostic evaluation. The results have demonstrated that ATMS can significantly save time and man-hours in performing engine test/flight data analysis and performance evaluation of large volumes of dynamic test data.

  14. Initial clinical trial of a closed loop, fully automatic intra-aortic balloon pump.

    PubMed

    Kantrowitz, A; Freed, P S; Cardona, R R; Gage, K; Marinescu, G N; Westveld, A H; Litch, B; Suzuki, A; Hayakawa, H; Takano, T

    1992-01-01

    A new generation, closed loop, fully automatic intraaortic balloon pump (CL-IABP) system continuously optimizes diastolic augmentation by adjusting balloon pump parameters beat by beat without operator intervention. In dogs in sinus rhythm and with experimentally induced arrhythmias, the new CL-IABP system provided safe, effective augmentation. To investigate the system's suitability for clinical use, 10 patients meeting standard indications for IABP were studied. The patients were pumped by the fully automatic IABP system for an average of 20 hr (range, 1-48 hr). At start-up, the system optimized pumping parameters within 7-20 sec. Evaluation of 186 recordings made at hourly intervals showed that inflation began within 20 msec of the dicrotic notch 99% of the time. In 100% of the recordings, deflation straddled the first half of ventricular ejection. Peak pressure across the balloon membrane averaged 55 mmHg and, in no case, exceeded 100 mmHg. Examination of the data showed that as soon as the system was actuated it provided consistently beneficial diastolic augmentation without any further operator intervention. Eight patients improved and two died (one of irreversible cardiogenic shock and one of ischemic cardiomyopathy). No complications were attributable to the investigational aspects of the system. A fully automated IABP is feasible in the clinical setting, and it may have advantages relative to current generation IABP systems.

  15. Some data processing requirements for precision Nap-Of-the-Earth (NOE) guidance and control of rotorcraft

    NASA Technical Reports Server (NTRS)

    Clement, Warren F.; Mcruer, Duane T.; Magdeleno, Raymond E.

    1987-01-01

    Nap-Of-the-Earth (NOE) flight in a conventional helicopter is extremely taxing for two pilots under visual conditions. Developing a single-pilot all-weather NOE capability will require a fully automatic NOE navigation and flight control capability, for which innovative guidance and control concepts were examined. Constrained time-optimality provides a validated criterion for automatically controlled NOE maneuvers if the pilot is to have confidence in the automated maneuvering technique. A second focus was to organize the storage and real-time updating of NOE terrain profiles and obstacles in course-oriented coordinates indexed to the mission flight plan. A method is presented for using pre-flight geodetic parameter identification to establish guidance commands for planned flight profiles and alternates. A method is then suggested for interpolating this guidance command information with the aid of forward- and side-looking sensors within the resolution of the stored database, enriching the data content for real-time display, guidance, and control purposes. A third focus defined a class of automatic anticipative guidance algorithms and the necessary data preview requirements to follow the vertical, lateral, and longitudinal guidance commands dictated by the updated flight profiles and to address the effects of processing delays in digital guidance and control system candidates. The results of this three-fold research effort offer promising alternatives designed to gain pilot acceptance for automatic guidance and control of rotorcraft in NOE operations.

  16. Visual analysis of trash bin processing on garbage trucks in low resolution video

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver; Loibner, Gernot

    2015-03-01

    We present a system for trash can detection and counting from a camera mounted on a garbage collection truck. A working prototype has been successfully implemented and tested with several hours of real-world video. The detection pipeline consists of HOG detectors for two trash can sizes, together with meanshift tracking and low-level image processing for the analysis of the garbage disposal process. Considering the harsh environment and unfavorable imaging conditions, the process already works well enough that very useful measurements can be extracted from the video data. The false positive/false negative rate of the full processing pipeline is about 5-6% in fully automatic operation. Video data of a full day (about 8 hrs) can be processed in about 30 minutes on a standard PC.

  17. An Intelligent Automation Platform for Rapid Bioprocess Design.

    PubMed

    Wu, Tianyi; Zhou, Yuhong

    2014-08-01

    Bioprocess development is very labor intensive, requiring many experiments to characterize each unit operation in the process sequence to achieve product safety and process efficiency. Recent advances in microscale biochemical engineering have led to automated experimentation. A process design workflow is implemented sequentially in which (1) a liquid-handling system performs high-throughput wet lab experiments, (2) standalone analysis devices detect the data, and (3) specific software is used for data analysis and experiment design given the user's inputs. We report an intelligent automation platform that integrates these three activities to enhance the efficiency of such a workflow. A multiagent intelligent architecture has been developed incorporating agent communication to perform the tasks automatically. The key contribution of this work is the automation of data analysis and experiment design and also the ability to generate scripts to run the experiments automatically, allowing the elimination of human involvement. A first-generation prototype has been established and demonstrated through lysozyme precipitation process design. All procedures in the case study have been fully automated through an intelligent automation platform. The realization of automated data analysis and experiment design, and automated script programming for experimental procedures has the potential to increase lab productivity. © 2013 Society for Laboratory Automation and Screening.

  18. An Intelligent Automation Platform for Rapid Bioprocess Design

    PubMed Central

    Wu, Tianyi

    2014-01-01

    Bioprocess development is very labor intensive, requiring many experiments to characterize each unit operation in the process sequence to achieve product safety and process efficiency. Recent advances in microscale biochemical engineering have led to automated experimentation. A process design workflow is implemented sequentially in which (1) a liquid-handling system performs high-throughput wet lab experiments, (2) standalone analysis devices detect the data, and (3) specific software is used for data analysis and experiment design given the user’s inputs. We report an intelligent automation platform that integrates these three activities to enhance the efficiency of such a workflow. A multiagent intelligent architecture has been developed incorporating agent communication to perform the tasks automatically. The key contribution of this work is the automation of data analysis and experiment design and also the ability to generate scripts to run the experiments automatically, allowing the elimination of human involvement. A first-generation prototype has been established and demonstrated through lysozyme precipitation process design. All procedures in the case study have been fully automated through an intelligent automation platform. The realization of automated data analysis and experiment design, and automated script programming for experimental procedures has the potential to increase lab productivity. PMID:24088579

  19. Recent Research on the Automated Mass Measuring System

    NASA Astrophysics Data System (ADS)

    Yao, Hong; Ren, Xiao-Ping; Wang, Jian; Zhong, Rui-Lin; Ding, Jing-An

    The research and development of robotic mass measurement systems, together with representative automatic systems, are introduced in this paper, and a sub-multiple calibration scheme adopted on the fully automatic CCR10 system is then discussed. The automatic robot system can perform the dissemination of the mass scale without any manual intervention, as well as fast calibration of weight samples against a reference weight. Finally, an evaluation of the expanded uncertainty is given.

  20. "Rate My Therapist": Automated Detection of Empathy in Drug and Alcohol Counseling via Speech and Language Processing

    PubMed Central

    Xiao, Bo; Imel, Zac E.; Georgiou, Panayiotis G.; Atkins, David C.; Narayanan, Shrikanth S.

    2015-01-01

    The technology for evaluating patient-provider interactions in psychotherapy, observational coding, has not changed in 70 years. It is labor-intensive, error-prone, and expensive, limiting its use in evaluating psychotherapy in the real world. Engineering solutions from speech and language processing provide new methods for the automatic evaluation of provider ratings from session recordings. The primary data are 200 Motivational Interviewing (MI) sessions from a study on MI training methods with observer ratings of counselor empathy. Automatic Speech Recognition (ASR) was used to transcribe sessions, and the resulting words were used in a text-based predictive model of empathy. Two supporting datasets trained the speech processing tasks, including ASR (1200 transcripts from heterogeneous psychotherapy sessions and 153 transcripts and session recordings from 5 MI clinical trials). The accuracy of computationally-derived empathy ratings was evaluated against human ratings for each provider. Computationally-derived empathy scores and classifications (high vs. low) were highly accurate against human-based codes and classifications, with a correlation of 0.65 and an F-score (a weighted average of sensitivity and specificity) of 0.86, respectively. Empathy prediction using human transcription as input (as opposed to ASR) resulted in a slight increase in prediction accuracy, suggesting that the fully automatic system with ASR is relatively robust. Using speech and language processing methods, it is possible to generate accurate predictions of provider performance in psychotherapy from audio recordings alone. This technology can support large-scale evaluation of psychotherapy for dissemination and process studies. PMID:26630392
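
    In the same spirit, a generic text-based baseline could be built as below with scikit-learn; the feature set and classifier are assumptions and not the authors' model.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # transcripts: list of session transcript strings (ASR or human);
        # y: binary high/low empathy codes from human raters
        def train_empathy_model(transcripts, y):
            model = make_pipeline(
                TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                LogisticRegression(max_iter=1000))
            return model.fit(transcripts, y)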

  1. Interactive computer aided technology, evolution in the design/manufacturing process

    NASA Technical Reports Server (NTRS)

    English, C. H.

    1975-01-01

    A powerful computer-operated three-dimensional graphic system and associated auxiliary computer equipment used in advanced design, production design, and manufacturing are described. This system has made these activities more productive than older and more conventional methods of designing and building aerospace vehicles. With this graphic system, designers are now able to define parts using a wide variety of geometric entities, and to define parts as fully surfaced three-dimensional models as well as "wire-frame" models. Once a part is geometrically defined, the designer is able to take section cuts of the surfaced model and automatically determine all of the section properties of the planar cut, light-pen detect all of the surface patches, and automatically determine the volume and weight of the part. Further, designs are defined mathematically at a degree of accuracy never before achievable.

  2. The Use of Opto-Electronics in Viscometry.

    ERIC Educational Resources Information Center

    Mazza, R. J.; Washbourn, D. H.

    1982-01-01

    Describes a semi-automatic viscometer which incorporates a microprocessor system and uses optoelectronics to detect flow of liquid through the capillary, flow time being displayed on a timer with accuracy of 0.01 second. The system could be made fully automatic with an additional microprocessor circuit and inclusion of a pump. (Author/JN)

  3. Information Storage and Retrieval, Reports on Evaluation Procedures and Results 1965-1967.

    ERIC Educational Resources Information Center

    Salton, Gerald

    A detailed analysis of the retrieval evaluation results obtained with the automatic SMART document retrieval system for document collections in the fields of aerodynamics, computer science, and documentation is given in this report. The various components of fully automatic document retrieval systems are discussed in detail, including the forms of…

  4. Atlas-based fuzzy connectedness segmentation and intensity nonuniformity correction applied to brain MRI.

    PubMed

    Zhou, Yongxin; Bai, Jing

    2007-01-01

    A framework that combines atlas registration, fuzzy connectedness (FC) segmentation, and parametric bias field correction (PABIC) is proposed for the automatic segmentation of brain magnetic resonance imaging (MRI). First, the atlas is registered onto the MRI to initialize the subsequent FC segmentation. Original techniques are proposed to estimate the necessary initial parameters of the FC segmentation. The result of the FC segmentation is then used to initialize the following PABIC algorithm. Finally, we re-apply the FC technique to the PABIC-corrected MRI to obtain the final segmentation. Thus, we avoid expert human intervention and provide a fully automatic method for brain MRI segmentation. Experiments on both simulated and real MRI images demonstrate the validity of the method, as well as its limitations. Being a fully automatic method, it is expected to find wide applications, such as three-dimensional visualization, radiation therapy planning, and medical database construction.

  5. Automatic bone detection and soft tissue aware ultrasound-CT registration for computer-aided orthopedic surgery.

    PubMed

    Wein, Wolfgang; Karamalis, Athanasios; Baumgartner, Adrian; Navab, Nassir

    2015-06-01

    The transfer of preoperative CT data into the tracking system coordinates within an operating room is of high interest for computer-aided orthopedic surgery. In this work, we introduce a solution for intra-operative ultrasound-CT registration of bones. We have developed methods for fully automatic real-time bone detection in ultrasound images and global automatic registration to CT. The bone detection algorithm uses a novel bone-specific feature descriptor and was thoroughly evaluated on both in-vivo and ex-vivo data. A global optimization strategy aligns the bone surface, followed by a soft tissue aware intensity-based registration to provide higher local registration accuracy. We evaluated the system on femur, tibia and fibula anatomy in a cadaver study with human legs, where magnetically tracked bone markers were implanted to yield ground truth information. An overall median system error of 3.7 mm was achieved on 11 datasets. Global and fully automatic registration of bones acquired with ultrasound to CT is feasible, with bone detection and tracking operating in real time for immediate feedback to the surgeon.

  6. Application of Artificial Intelligence to Improve Aircraft Survivability.

    DTIC Science & Technology

    1985-12-01

    may be as smooth and effective as possible. 3. Fully Automatic Digital Engine Control (FADEC) Under development at the Naval Weapons Center, a major...goal of the FADEC program is to significantly reduce engine vulnerability by fully automating the regulation of engine controls. Given a thrust

  7. MatchGUI: A Graphical MATLAB-Based Tool for Automatic Image Co-Registration

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.

    2011-01-01

    MatchGUI software, based on MATLAB, automatically matches two images and displays the match result by superimposing one image on the other. A slider bar allows focus to shift between the two images. There are tools for zoom, auto-crop to the overlap region, and basic image markup. Given a pair of ortho-rectified images (focused primarily on Mars orbital imagery for now), this software automatically co-registers the imagery so that corresponding image pixels are aligned. MatchGUI requires minimal user input, and performs a registration over scale and in-plane rotation fully automatically.

  8. Automatic Aircraft Structural Topology Generation for Multidisciplinary Optimization and Weight Estimation

    NASA Technical Reports Server (NTRS)

    Sensmeier, Mark D.; Samareh, Jamshid A.

    2005-01-01

    An approach is proposed for the rapid generation of moderate-fidelity structural finite element models of air vehicle structures to allow more accurate weight estimation earlier in the vehicle design process. This should help to rapidly assess many structural layouts before the start of the preliminary design phase and eliminate weight penalties imposed when actual structure weights exceed those estimated during conceptual design. By defining the structural topology in a fully parametric manner, the structure can be mapped to arbitrary vehicle configurations being considered during conceptual design optimization. A demonstration of this process is shown for two sample aircraft wing designs.

  9. Fully automatic region of interest selection in glomerular filtration rate estimation from 99mTc-DTPA renogram.

    PubMed

    Lin, Kun-Ju; Huang, Jia-Yann; Chen, Yung-Sheng

    2011-12-01

    Glomerular filtration rate (GFR) is a commonly accepted standard estimate of renal function. Gamma camera-based methods for estimating renal uptake of (99m)Tc-diethylenetriaminepentaacetic acid (DTPA) without blood or urine sampling have been widely used; of these, the method introduced by Gates has been the most common. Currently, most gamma cameras are equipped with a commercial program for GFR determination, a semi-quantitative analysis in which a region of interest (ROI) is manually drawn over each kidney. The GFR value can then be computed automatically from the scintigraphic determination of (99m)Tc-DTPA uptake within the kidney. Delineating the kidney area is difficult when applying a fixed threshold value. Moreover, hand-drawn ROIs are tedious, time consuming, and highly dependent on operator skill. Thus, we developed a fully automatic renal ROI estimation system based on the temporal changes in intensity counts, an intensity-pair distribution image contrast enhancement method, adaptive thresholding, and morphological operations that can locate the kidney area and obtain the GFR value from a (99m)Tc-DTPA renogram. To evaluate the performance of the proposed approach, 30 clinical dynamic renograms were included. The fully automatic approach failed in one patient with very poor renal function. Four patients had a unilateral kidney, and the others had bilateral kidneys. The automatic contours from the remaining 54 kidneys were compared with manually drawn contours. The 54 kidneys were included for area error and boundary error analyses. There was high correlation between two physicians' manual contours and the contours obtained by our approach. For the area error analysis, the mean true positive area overlap is 91%, the mean false negative is 13.4%, and the mean false positive is 9.3%. The boundary error is 1.6 pixels. The GFR calculated using this automatic computer-aided approach is reproducible and may help nuclear medicine physicians in clinical practice.
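
    A toy version of the automatic ROI stage, under assumed parameters and without the paper's intensity-pair distribution enhancement, might look like this:

        import numpy as np
        from skimage import filters, measure, morphology

        def kidney_rois(summed_frame):
            # summed_frame: 2D array, early renogram frames summed together
            img = summed_frame.astype(float)
            img = (img - img.min()) / (np.ptp(img) + 1e-12)  # rescale to [0, 1]
            binary = img > filters.threshold_otsu(img)       # adaptive threshold
            binary = morphology.opening(binary, morphology.disk(2))
            labels = measure.label(binary)
            regions = sorted(measure.regionprops(labels), key=lambda r: r.area)
            # keep the two largest blobs as left/right kidney candidates
            return [labels == r.label for r in regions[-2:]]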

  10. Automatic segmentation of abdominal organs and adipose tissue compartments in water-fat MRI: Application to weight-loss in obesity.

    PubMed

    Shen, Jun; Baum, Thomas; Cordes, Christian; Ott, Beate; Skurk, Thomas; Kooijman, Hendrik; Rummeny, Ernst J; Hauner, Hans; Menze, Bjoern H; Karampinos, Dimitrios C

    2016-09-01

    To develop a fully automatic algorithm for the segmentation of abdominal organs and adipose tissue compartments, and to assess organ and adipose tissue volume changes in longitudinal water-fat magnetic resonance imaging (MRI) data. Axial two-point Dixon images were acquired in 20 obese women (age range 24-65, BMI 34.9 ± 3.8 kg/m(2)) before and after a four-week calorie restriction. Abdominal organs, subcutaneous adipose tissue (SAT) compartments (abdominal, anterior, posterior), SAT regions along the feet-head direction and regional visceral adipose tissue (VAT) were assessed by a fully automatic algorithm using morphological operations and a multi-atlas-based segmentation method. The accuracy of organ segmentation, represented by Dice coefficients, ranged from 0.672 ± 0.155 for the pancreas to 0.943 ± 0.023 for the liver. Abdominal SAT changes were significantly greater in the posterior than the anterior SAT compartment (-11.4% ± 5.1% versus -9.5% ± 6.3%, p < 0.001). The loss of VAT that was not located around any organ (-16.1% ± 8.9%) was significantly greater than the loss of VAT within 5 cm of the liver, left and right kidney, spleen, and pancreas (p < 0.05). The presented fully automatic algorithm showed good performance in abdominal adipose tissue and organ segmentation, and allowed the detection of changes in SAT and VAT subcompartments during weight loss. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. Automatic initialization and quality control of large-scale cardiac MRI segmentations.

    PubMed

    Albà, Xènia; Lekadir, Karim; Pereañez, Marco; Medrano-Gracia, Pau; Young, Alistair A; Frangi, Alejandro F

    2018-01-01

    Continuous advances in imaging technologies enable ever more comprehensive phenotyping of human anatomy and physiology. Concomitant reduction of imaging costs has resulted in widespread use of imaging in large clinical trials and population imaging studies. Magnetic Resonance Imaging (MRI), in particular, offers one-stop-shop multidimensional biomarkers of cardiovascular physiology and pathology. A wide range of analysis methods offer sophisticated cardiac image assessment and quantification for clinical and research studies. However, most methods have only been evaluated on relatively small databases often not accessible for open and fair benchmarking. Consequently, published performance indices are not directly comparable across studies and their translation and scalability to large clinical trials or population imaging cohorts is uncertain. Most existing techniques still rely on considerable manual intervention for the initialization and quality control of the segmentation process, becoming prohibitive when dealing with thousands of images. The contributions of this paper are three-fold. First, we propose a fully automatic method for initializing cardiac MRI segmentation, by using image features and random forests regression to predict an initial position of the heart and key anatomical landmarks in an MRI volume. In processing a full imaging database, the technique predicts the optimal corrective displacements and positions in relation to the initial rough intersections of the long and short axis images. Second, we introduce for the first time a quality control measure capable of identifying incorrect cardiac segmentations with no visual assessment. The method uses statistical, pattern and fractal descriptors in a random forest classifier to detect failures to be corrected or removed from subsequent statistical analysis. Finally, we validate these new techniques within a full pipeline for cardiac segmentation applicable to large-scale cardiac MRI databases. The results obtained based on over 1200 cases from the Cardiac Atlas Project show the promise of fully automatic initialization and quality control for population studies. Copyright © 2017 Elsevier B.V. All rights reserved.
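
    The initialization idea can be sketched as a random-forest regression from cheap global image features to a rough heart position, as below; the feature definition is an assumption.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        # X: per-volume feature vectors (e.g. intensity statistics of coarse
        # image blocks, an assumed feature definition); Y: reference landmark
        # coordinates, e.g. shape (n_cases, 3) for the heart centre
        def train_initializer(X, Y):
            rf = RandomForestRegressor(n_estimators=200, random_state=0)
            return rf.fit(X, Y)          # supports multi-output regression

        # at test time:
        # centre = trained_rf.predict(features_of_new_volume.reshape(1, -1))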

  12. TU-C-BRE-11: 3D EPID-Based in Vivo Dosimetry: A Major Step Forward Towards Optimal Quality and Safety in Radiation Oncology Practice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mijnheer, B; Mans, A; Olaciregui-Ruiz, I

    Purpose: To develop a 3D in vivo dosimetry method that is able to substitute pre-treatment verification in an efficient way, and to terminate treatment delivery if the online measured 3D dose distribution deviates too much from the predicted dose distribution. Methods: A back-projection algorithm has been further developed and implemented to enable automatic 3D in vivo dose verification of IMRT/VMAT treatments using a-Si EPIDs. New software tools were clinically introduced to allow automated image acquisition, to periodically inspect the record-and-verify database, and to automatically run the EPID dosimetry software. The comparison of the EPID-reconstructed and planned dose distribution is done offline to automatically raise alerts and to schedule actions when deviations are detected. Furthermore, a software package for online dose reconstruction was also developed. The RMS of the difference between the cumulative planned and reconstructed 3D dose distributions was used for triggering a halt of a linac. Results: The implementation of fully automated 3D EPID-based in vivo dosimetry was able to replace pre-treatment verification for more than 90% of the patient treatments. The process has been fully automated and integrated in our clinical workflow, where over 3,500 IMRT/VMAT treatments are verified each year. By optimizing the dose reconstruction algorithm and the I/O performance, the delivered 3D dose distribution is verified in less than 200 ms per portal image, which includes the comparison between the reconstructed and planned dose distribution. In this way it was possible to generate a trigger that can stop the irradiation at less than 20 cGy after introducing large delivery errors. Conclusion: The automatic offline solution facilitated the large-scale clinical implementation of 3D EPID-based in vivo dose verification of IMRT/VMAT treatments; the online approach has been successfully tested for various severe delivery errors.
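
    The halting rule reduces to a simple RMS comparison, sketched below; the limit value is clinic-specific and assumed.

        import numpy as np

        def should_halt(planned, reconstructed, rms_limit):
            # planned, reconstructed: cumulative 3D dose arrays, same grid;
            # rms_limit is clinic-specific and assumed here
            rms = np.sqrt(np.mean((reconstructed - planned) ** 2))
            return rms > rms_limit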

  13. A synchronized computational architecture for generalized bilateral control of robot arms

    NASA Technical Reports Server (NTRS)

    Bejczy, Antal K.; Szakaly, Zoltan

    1987-01-01

    This paper describes a computational architecture for an interconnected high-speed distributed computing system for generalized bilateral control of robot arms. The key method of the architecture is the use of fully synchronized, interrupt-driven software. Since an objective of the development is to utilize the processing resources efficiently, the synchronization is done at the hardware level to reduce system software overhead. The architecture also achieves a balanced load on the communication channel. The paper also describes some architectural relations to trading or sharing manual and automatic control.

  14. Motor automaticity in Parkinson’s disease

    PubMed Central

    Wu, Tao; Hallett, Mark; Chan, Piu

    2017-01-01

    Bradykinesia is the most important feature contributing to motor difficulties in Parkinson’s disease (PD). However, the pathophysiology underlying bradykinesia is not fully understood. One important aspect is that PD patients have difficulty in performing learned motor skills automatically, but this problem has been generally overlooked. Here we review motor deficits in PD associated with impaired motor automaticity, such as reduced arm swing, decreased stride length, freezing of gait, micrographia and reduced facial expression. Recent neuroimaging studies have revealed some neural mechanisms underlying impaired motor automaticity in PD, including less efficient neural coding of movement, failure to shift automated motor skills to the sensorimotor striatum, instability of the automatic mode within the striatum, and use of attentional control and/or compensatory efforts to execute movements usually performed automatically in healthy people. PD patients lose previously acquired automatic skills due to their impaired sensorimotor striatum, and have difficulty in acquiring new automatic skills or restoring lost motor skills. More investigation of the pathophysiology of motor automaticity, the effect of L-dopa or surgical treatments on automaticity, and the potential role of measures of automaticity in the early diagnosis of PD would be valuable. PMID:26102020

  15. Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.; Clouse, Daniel S.; McHenry, Michael C.; Zarzhitsky, Dimitri V.; Pagdett, Curtis W.

    2013-01-01

    This software automatically calibrates a camera or an imaging array to an inertial navigation system (INS) that is rigidly mounted to the array or imager. In effect, it recovers the coordinate frame transformation between the reference frame of the imager and the reference frame of the INS. This innovation can automatically derive the camera-to-INS alignment using image data only. The assumption is that the camera fixates on an area while the aircraft flies an orbit. The system then, fully automatically, solves for the camera orientation in the INS frame. No manual intervention or ground tie point data is required.

  16. An Open-Source Automated Peptide Synthesizer Based on Arduino and Python.

    PubMed

    Gali, Hariprasad

    2017-10-01

    The development of the first open-source automated peptide synthesizer, PepSy, using an Arduino UNO and readily available components is reported. PepSy was primarily designed to synthesize small peptides on a relatively small scale (<100 µmol). Scripts to operate PepSy in fully automatic or manual mode were written in Python. The fully automatic script includes functions to carry out resin swelling, resin washing, single coupling, double coupling, Fmoc deprotection, ivDde deprotection, on-resin oxidation, end capping, and amino acid/reagent line cleaning. Several small peptides and peptide conjugates were successfully synthesized on PepSy with reasonably good yields and purity, depending on the complexity of the peptide.
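
    A hypothetical fragment of such a Python control script is sketched below using pyserial; the single-character command protocol, port name and timings are invented for illustration and do not reflect PepSy's actual firmware interface.

        import time
        import serial  # pyserial

        # Hypothetical one-character command protocol; PepSy's real firmware
        # interface, port name and timings will differ.
        def run_coupling(port='/dev/ttyACM0', valve_cmd=b'A', seconds=30):
            with serial.Serial(port, 9600, timeout=1) as ser:
                time.sleep(2)            # wait for the Arduino to reset
                ser.write(valve_cmd)     # open the reagent valve
                time.sleep(seconds)      # deliver reagent for the coupling time
                ser.write(b'0')          # close all valves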

  17. Convolutional neural networks with balanced batches for facial expressions recognition

    NASA Astrophysics Data System (ADS)

    Battini Sönmez, Elena; Cangelosi, Angelo

    2017-03-01

    This paper considers the issue of fully automatic emotion classification on 2D faces. In spite of the great effort made in recent years, traditional machine learning approaches based on hand-crafted feature extraction followed by a classification stage have failed to deliver a real-time automatic facial expression recognition system. The proposed architecture uses Convolutional Neural Networks (CNN), which are built as a collection of interconnected processing elements to simulate the human brain. The basic idea of CNNs is to learn a hierarchical representation of the input data, which results in better classification performance. In this work we present a block-based CNN algorithm that uses noise as a data augmentation technique and builds batches with a balanced number of samples per class. The proposed architecture is a very simple yet powerful CNN, which can yield state-of-the-art accuracy on the very competitive benchmark of the Extended Cohn-Kanade database.

  18. Angular relational signature-based chest radiograph image view classification.

    PubMed

    Santosh, K C; Wendling, Laurent

    2018-01-22

    In a computer-aided diagnosis (CAD) system, especially for chest radiograph or chest X-ray (CXR) screening, CXR image view information is required. Automatically separating frontal and lateral CXR views can ease the subsequent screening process, since a technique may not work equally well for both views. We present a novel technique to classify frontal and lateral CXR images, in which we introduce an angular relational signature computed via the force histogram to extract features, and apply three different state-of-the-art classifiers: multi-layer perceptron, random forest, and support vector machine. We validated our fully automatic technique on a set of 8100 images hosted by the U.S. National Library of Medicine (NLM), National Institutes of Health (NIH), and achieved an accuracy close to 100%. Our method outperforms the state-of-the-art methods in terms of processing time (less than or close to 2 s for the whole test data) while the accuracies are comparable, which justifies its practicality. Graphical Abstract: Interpreting chest X-ray (CXR) through the angular relational signature.

  19. Computerized image analysis for acetic acid induced intraepithelial lesions

    NASA Astrophysics Data System (ADS)

    Li, Wenjing; Ferris, Daron G.; Lieberman, Rich W.

    2008-03-01

    Cervical Intraepithelial Neoplasia (CIN) exhibits certain morphologic features that can be identified during a visual inspection exam. Immature and dysplastic cervical squamous epithelium turns white after application of acetic acid during the exam. The whitening process occurs visually over several minutes and subjectively discriminates between dysplastic and normal tissue. Digital imaging technologies allow us to assist the physician in analyzing the acetic acid-induced lesions (acetowhite regions) in a fully automatic way. This paper reports a study designed to measure multiple parameters of the acetowhitening process from two images captured with a digital colposcope. One image is captured before the acetic acid application, and the other is captured after the acetic acid application. The spatial change of the acetowhitening is extracted using color and texture information in the post-acetic-acid image; the temporal change is extracted from the intensity and color changes between the post-acetic-acid and pre-acetic-acid images after automatic alignment. The imaging and data analysis system has been evaluated with a total of 99 human subjects and demonstrates its potential for screening underserved women where access to skilled colposcopists is limited.
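
    A minimal sketch of the pre/post alignment and change-map idea using OpenCV's ECC alignment; the paper's actual color and texture features are not reproduced, and the file names are placeholders.

    ```python
    # Align the post-acetic-acid image to the pre image, then take the
    # brightening as a crude acetowhitening change map.
    import cv2
    import numpy as np

    pre = cv2.imread("pre.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
    post = cv2.imread("post.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(pre, post, warp, cv2.MOTION_AFFINE, criteria)
    post_aligned = cv2.warpAffine(post, warp, pre.shape[::-1],
                                  flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)

    change = np.clip(post_aligned - pre, 0, None)  # brightening = whitening
    ```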

  20. Fully automated tumor segmentation based on improved fuzzy connectedness algorithm in brain MR images.

    PubMed

    Harati, Vida; Khayati, Rasoul; Farzan, Abdolreza

    2011-07-01

    Uncontrolled and unlimited cell growth leads to tumorigenesis in the brain. If brain tumors are not diagnosed early and treated properly, they can cause permanent brain damage or even death. As with all treatments, information about tumor position and size is important for success; hence, an accurate, fully automated method for providing this information to physicians is necessary. A fully automatic and accurate method for tumor region detection and segmentation in brain magnetic resonance (MR) images is suggested. The presented approach is an improved fuzzy connectedness (FC) algorithm based on a scale in which the seed point is selected automatically. This algorithm is independent of the tumor type in terms of its pixel intensity. Tumor segmentation evaluation results based on similarity criteria (similarity index (SI) 92.89%, overlap fraction (OF) 91.75%, and extra fraction (EF) 3.95%) indicate a higher performance of the proposed approach compared to conventional methods, especially in MR images of tumor regions with low contrast. Thus, the suggested method is useful for increasing the ability of automatic estimation of tumor size and position in brain tissues, which provides more accurate investigation of the required surgery, chemotherapy, and radiotherapy procedures. Copyright © 2011 Elsevier Ltd. All rights reserved.
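
    The three similarity criteria quoted above can be computed from binary masks as follows; this is a sketch, not the authors' code, with `seg` the automatic result and `ref` the ground truth.

    ```python
    # Similarity index (Dice), overlap fraction and extra fraction for
    # boolean segmentation masks.
    import numpy as np

    def overlap_metrics(seg, ref):
        tp = np.logical_and(seg, ref).sum()               # true positives
        si = 2 * tp / (seg.sum() + ref.sum())             # similarity index
        of = tp / ref.sum()                               # overlap fraction
        ef = np.logical_and(seg, ~ref).sum() / ref.sum()  # extra fraction
        return si, of, ef
    ```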

  1. Using a 3D profiler and infrared camera to monitor oven loading in fully cooked meat operations

    NASA Astrophysics Data System (ADS)

    Stewart, John; Giorges, Aklilu

    2009-05-01

    Ensuring meat is fully cooked is an important food safety issue for operations that produce "ready to eat" products. In order to kill harmful pathogens like Salmonella, all of the product must reach a minimum threshold temperature. Producers typically overcook the majority of the product to ensure meat in the most difficult scenario reaches the desired temperature. A difficult scenario can be caused by an especially thick piece of meat or by a surge of product into the process. Overcooking wastes energy, degrades product quality, lowers the maximum throughput rate of the production line and decreases product yield. At typical production rates of 6000 lbs/hour, these losses from overcooking can have a significant cost impact on producers. A wide-area 3D camera coupled with a thermal camera was used to measure the thermal mass variability of chicken breasts in a cooking process. Several types of variability are considered, including time-varying thermal mass (mass × temperature / time), variation in individual product geometry and variation in product temperature. The automatic identification of product arrangement issues that affect cooking, such as overlapping and folded products, is also addressed. A thermal model is used along with individual product geometry and oven cook profiles to predict the percentage of product that will be overcooked and to identify products that may not fully cook in a given process.
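
    An illustrative sketch of combining the two camera outputs into a per-pixel thermal-load image; the calibration constant and the thickness threshold are assumptions for illustration, not values from the study.

    ```python
    # Combine a registered height map (3D profiler) and temperature map (IR
    # camera) into a relative thermal-load image, and flag overly thick product.
    import numpy as np

    def thermal_load_map(height_mm, temp_c, density=1.05e-3):
        """height_mm, temp_c: (H, W) arrays; density is an assumed constant
        converting column height to a proxy mass per pixel."""
        mass = density * height_mm   # proxy mass per pixel column
        return mass * temp_c         # relative thermal load

    def flag_thick_product(height_mm, max_mm=25.0):
        return height_mm > max_mm    # pixels at risk of undercooking
    ```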

  2. Quantification of regional fat volume in rat MRI

    NASA Astrophysics Data System (ADS)

    Sacha, Jaroslaw P.; Cockman, Michael D.; Dufresne, Thomas E.; Trokhan, Darren

    2003-05-01

    Multiple initiatives in the pharmaceutical and beauty care industries are directed at identifying therapies for weight management. Body composition measurements are critical for such initiatives. Imaging technologies that can be used to measure body composition noninvasively include DXA (dual energy x-ray absorptiometry) and MRI (magnetic resonance imaging). Unlike other approaches, MRI provides the ability to perform localized measurements of fat distribution. Several factors complicate the automatic delineation of fat regions and quantification of fat volumes. These include motion artifacts, field non-uniformity, brightness and contrast variations, chemical shift misregistration, and ambiguity in delineating anatomical structures. We have developed an approach to deal practically with those challenges. The approach is implemented in a package, the Fat Volume Tool, for automatic detection of fat tissue in MR images of the rat abdomen, including automatic discrimination between abdominal and subcutaneous regions. We suppress motion artifacts using masking based on detection of implicit landmarks in the images. Adaptive object extraction is used to compensate for intensity variations. This approach enables us to perform fat tissue detection and quantification in a fully automated manner. The package can also operate in manual mode, which can be used for verification of the automatic analysis or for performing supervised segmentation. In supervised segmentation, the operator has the ability to interact with the automatic segmentation procedures to touch-up or completely overwrite intermediate segmentation steps. The operator's interventions steer the automatic segmentation steps that follow. This improves the efficiency and quality of the final segmentation. Semi-automatic segmentation tools (interactive region growing, live-wire, etc.) improve both the accuracy and throughput of the operator when working in manual mode. The quality of automatic segmentation has been evaluated by comparing the results of fully automated analysis to manual analysis of the same images. The comparison shows a high degree of correlation that validates the quality of the automatic segmentation approach.

  3. Automatic detection of larynx cancer from contrast-enhanced magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Doshi, Trushali; Soraghan, John; Grose, Derek; MacKenzie, Kenneth; Petropoulakis, Lykourgos

    2015-03-01

    Detection of larynx cancer from medical imaging is important for quantification and for the definition of target volumes in radiotherapy treatment planning (RTP). Magnetic resonance imaging (MRI) is being increasingly used in RTP due to its high resolution and excellent soft tissue contrast. Manually detecting larynx cancer from sequential MRI is time consuming and subjective. The large diversity of cancers in terms of geometry and non-distinct boundaries, combined with the presence of normal anatomical regions close to the cancer regions, necessitates the development of automatic and robust algorithms for this task. A new automatic algorithm for the detection of larynx cancer from 2D gadolinium-enhanced T1-weighted (T1+Gd) MRI to assist clinicians in RTP is presented. The algorithm employs edge detection using spatial neighborhood information of pixels and incorporates this information in a fuzzy c-means clustering process to robustly separate different tissue types. Furthermore, it utilizes information on the expected cancer location for labeling cancer regions. Comparison of this automatic detection system with manual clinical detection on real T1+Gd axial MRI slices of 2 patients (24 MRI slices) with visible larynx cancer yields an average Dice similarity coefficient of 0.78+/-0.04 and an average root mean square error of 1.82+/-0.28 mm. Preliminary results show that this fully automatic system can assist clinicians in RTP by producing quantifiable, repeatable and non-subjective detection results in a time-efficient and unbiased fashion.
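
    A compact sketch of standard fuzzy c-means on pixel features; the paper's spatial-neighborhood and edge terms are extensions not reproduced here.

    ```python
    # Standard FCM: alternate membership and center updates until convergence.
    # `pixels` is an (N, d) array of per-pixel feature vectors.
    import numpy as np

    def fcm(pixels, c=4, m=2.0, iters=100, rng=None):
        rng = rng or np.random.default_rng(0)
        u = rng.dirichlet(np.ones(c), size=len(pixels))   # memberships (N, c)
        for _ in range(iters):
            w = u ** m
            centers = (w.T @ pixels) / w.sum(axis=0)[:, None]
            d = np.linalg.norm(pixels[:, None, :] - centers[None], axis=2) + 1e-12
            u = 1.0 / (d ** (2 / (m - 1)))                # standard FCM update
            u /= u.sum(axis=1, keepdims=True)
        return u, centers
    ```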

  4. Quantitative analysis of the patellofemoral motion pattern using semi-automatic processing of 4D CT data.

    PubMed

    Forsberg, Daniel; Lindblom, Maria; Quick, Petter; Gauffin, Håkan

    2016-09-01

    To present a semi-automatic method with minimal user interaction for quantitative analysis of the patellofemoral motion pattern. 4D CT data capturing the patellofemoral motion pattern of a continuous flexion and extension were collected for five patients prone to patellar luxation both pre- and post-surgically. For the proposed method, an observer would place landmarks in a single 3D volume, which are then automatically propagated to the other volumes in a time sequence. From the landmarks in each volume, the measures patellar displacement, patellar tilt and angle between femur and tibia were computed. Evaluation of the observer variability showed the proposed semi-automatic method to be favorable over a fully manual counterpart, with an observer variability of approximately 1.5° for the angle between femur and tibia, 1.5 mm for the patellar displacement, and 4.0°-5.0° for the patellar tilt. The proposed method showed that surgery reduced the patellar displacement and tilt at maximum extension by approximately 10-15 mm and 15°-20° for three patients, but with less evident differences for two of the patients. A semi-automatic method suitable for quantification of the patellofemoral motion pattern as captured by 4D CT data has been presented. Its observer variability is on par with that of other methods but with the distinct advantage of supporting continuous motions during image acquisition.

  5. Automatic zebrafish heartbeat detection and analysis for zebrafish embryos.

    PubMed

    Pylatiuk, Christian; Sanchez, Daniela; Mikut, Ralf; Alshut, Rüdiger; Reischl, Markus; Hirth, Sofia; Rottbauer, Wolfgang; Just, Steffen

    2014-08-01

    A fully automatic detection and analysis method of heartbeats in videos of nonfixed and nonanesthetized zebrafish embryos is presented. This method reduces the manual workload and time needed for preparation and imaging of the zebrafish embryos, as well as for evaluating heartbeat parameters such as frequency, beat-to-beat intervals, and arrhythmicity. The method is validated by a comparison of the results from automatic and manual detection of the heart rates of wild-type zebrafish embryos 36-120 h postfertilization and of embryonic hearts with bradycardia and pauses in the cardiac contraction.
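
    A minimal sketch of frequency-based heart-rate estimation from a video, assuming a grayscale frame stack, a known frame rate and a heart region of interest (ROI coordinates are placeholders).

    ```python
    # Track mean intensity in a heart ROI over time; the dominant frequency
    # in a plausible band gives the heart rate.
    import numpy as np

    def heart_rate_bpm(frames, fps, roi):
        """frames: (T, H, W) grayscale video; roi: (y0, y1, x0, x1)."""
        y0, y1, x0, x1 = roi
        signal = frames[:, y0:y1, x0:x1].mean(axis=(1, 2))
        signal = signal - signal.mean()
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
        band = (freqs > 0.5) & (freqs < 6.0)   # plausible 30-360 bpm band
        return 60.0 * freqs[band][np.argmax(spectrum[band])]
    ```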

  6. Fully automatic segmentation of the femur from 3D-CT images using primitive shape recognition and statistical shape models.

    PubMed

    Ben Younes, Lassad; Nakajima, Yoshikazu; Saito, Toki

    2014-03-01

    Femur segmentation is well established and widely used in computer-assisted orthopedic surgery. However, most robust segmentation methods, such as statistical shape models (SSM), require human intervention to provide an initial position for the SSM. In this paper, we propose to overcome this problem and provide a fully automatic femur segmentation method for CT images based on primitive shape recognition and SSM. Femur segmentation in CT scans was performed using primitive shape recognition based on robust algorithms, namely the Hough transform and RANdom SAmple Consensus (RANSAC). The proposed method is divided into 3 steps: (1) detection of the femoral head as a sphere and the femoral shaft as a cylinder in the SSM and the CT images, (2) rigid registration between the primitives of the SSM and the CT image to initialize the SSM in the CT image, and (3) fitting of the SSM to the CT image edges using an affine transformation followed by a nonlinear fitting. The automated method provided good results even with a high number of outliers. The difference in segmentation error between the proposed automatic initialization method and a manual initialization method is less than 1 mm. The proposed method detects primitive shape positions to initialize the SSM in the target image. Based on primitive shapes, this method overcomes the problem of inter-patient variability. Moreover, the results demonstrate that our method of primitive shape recognition can be used for 3D SSM initialization to achieve fully automatic segmentation of the femur.
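
    Step (1) can be illustrated with a small RANSAC sphere fit for the femoral head; thresholds and iteration counts are illustrative, not the paper's settings.

    ```python
    # RANSAC sphere fit: |x - c|^2 = r^2 linearizes to 2 c.x + k = |x|^2
    # with k = r^2 - |c|^2, so 4+ points give a linear system.
    import numpy as np

    def sphere_from_points(p):
        A = np.c_[2 * p, np.ones(len(p))]
        b = (p ** 2).sum(axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        c = sol[:3]
        return c, np.sqrt(sol[3] + (c ** 2).sum())

    def ransac_sphere(points, iters=500, tol=1.0, rng=None):
        rng = rng or np.random.default_rng(0)
        best, best_inliers = None, 0
        for _ in range(iters):
            c, r = sphere_from_points(points[rng.choice(len(points), 4, False)])
            resid = np.abs(np.linalg.norm(points - c, axis=1) - r)
            n = (resid < tol).sum()
            if n > best_inliers:
                best, best_inliers = (c, r), n
        return best  # (center, radius) with the most inliers
    ```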

  7. Fully automatic characterization and data collection from crystals of biological macromolecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Svensson, Olof; Malbet-Monaco, Stéphanie; Popov, Alexander

    A fully automatic system has been developed that performs X-ray centring and characterization of, and data collection from, large numbers of cryocooled crystals without human intervention. Considerable effort is dedicated to evaluating macromolecular crystals at synchrotron sources, even for well established and robust systems. Much of this work is repetitive, and the time spent could be better invested in the interpretation of the results. In order to decrease the need for manual intervention in the most repetitive steps of structural biology projects, initial screening and data collection, a fully automatic system has been developed to mount, locate, centre to the optimal diffraction volume, characterize and, if possible, collect data from multiple cryocooled crystals. Using the capabilities of pixel-array detectors, the system is as fast as a human operator, taking an average of 6 min per sample depending on the sample size and the level of characterization required. Using a fast X-ray-based routine, samples are located and centred systematically at the position of highest diffraction signal, and important parameters for sample characterization, such as flux, beam size and crystal volume, are automatically taken into account, ensuring the calculation of optimal data-collection strategies. The system is now in operation at the new ESRF beamline MASSIF-1 and has been used by both industrial and academic users for many different sample types, including crystals of less than 20 µm in the smallest dimension. To date, over 8000 samples have been evaluated on MASSIF-1 without any human intervention.

  8. Landslide Phenomena in Sevan National Park-Armenia

    NASA Astrophysics Data System (ADS)

    Lazarov, Dimitrov; Minchev, Dimitar; Aleksanyan, Gurgen; Ilieva, Maya

    2010-12-01

    Based on master and slave complex images acquired on 30 August 2008 and 4 October 2008 by the ENVISAT satellite with the ASAR sensor, a full processing chain is performed to evaluate landslide phenomena in Sevan National Park, Republic of Armenia. For this purpose, the Identification Deformation Inspection and Observation Tool developed by the Berlin University of Technology is applied. This software package uses a freely available DEM from the Shuttle Radar Topography Mission (SRTM) and performs a fully automatic generation of differential SAR interferograms from ENVISAT single look complex SAR data. All interferometric processing steps are implemented with maximum quality and precision. The results indicate an almost undisturbed ground surface in the area of Lake Sevan.

  9. Automatic segmentation of vessels in in-vivo ultrasound scans

    NASA Astrophysics Data System (ADS)

    Tamimi-Sarnikowski, Philip; Brink-Kjær, Andreas; Moshavegh, Ramin; Arendt Jensen, Jørgen

    2017-03-01

    Ultrasound has become highly popular for monitoring atherosclerosis by scanning the carotid artery. The screening involves measuring the thickness of the vessel wall and the diameter of the lumen. An automatic segmentation of the vessel lumen can enable the determination of the lumen diameter. This paper presents a fully automatic segmentation algorithm for robustly segmenting the vessel lumen in longitudinal B-mode ultrasound images. The automatic segmentation is performed using a combination of B-mode and power Doppler images. The proposed algorithm includes a series of preprocessing steps and performs vessel segmentation using the marker-controlled watershed transform. The ultrasound images used in the study were acquired using the bk3000 ultrasound scanner (BK Ultrasound, Herlev, Denmark) with two transducers, "8L2 Linear" and "10L2w Wide Linear" (BK Ultrasound, Herlev, Denmark). The algorithm was evaluated empirically on a dataset of 1770 in-vivo images recorded from 8 healthy subjects. The segmentation results were compared to manual delineations performed by two experienced users. The results showed a sensitivity and specificity of 90.41+/-11.2% and 97.93+/-5.7% (mean+/-standard deviation), respectively. The overlap between automatic and manual segmentation was measured by the Dice similarity coefficient, which was 91.25+/-11.6%. The empirical results demonstrate the feasibility of segmenting the vessel lumen in ultrasound scans using a fully automatic algorithm.
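
    A sketch of marker-controlled watershed segmentation with scikit-image; the marker heuristic (Doppler flow as the foreground seed, bright tissue as background) is an illustrative assumption, not the paper's preprocessing chain.

    ```python
    # Marker-controlled watershed: seed lumen from a Doppler flow mask,
    # seed background from bright tissue, flood a gradient image.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import sobel
    from skimage.segmentation import watershed

    def segment_lumen(bmode, doppler_mask):
        """bmode: 2D grayscale image in [0, 1]; doppler_mask: boolean flow mask."""
        markers = np.zeros(bmode.shape, dtype=np.int32)
        markers[doppler_mask] = 2          # inside lumen (flow present)
        markers[bmode > 0.6] = 1           # bright tissue = background
        elevation = sobel(bmode)           # edges as the watershed landscape
        labels = watershed(elevation, markers)
        return ndi.binary_fill_holes(labels == 2)
    ```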

  10. Automatic testing and assessment of neuroanatomy using a digital brain atlas: method and development of computer- and mobile-based applications.

    PubMed

    Nowinski, Wieslaw L; Thirunavuukarasuu, Arumugam; Ananthasubramaniam, Anand; Chua, Beng Choon; Qian, Guoyu; Nowinska, Natalia G; Marchenko, Yevgen; Volkau, Ihar

    2009-10-01

    Preparation of tests and assessment of students by the instructor are time consuming. We address these two tasks in neuroanatomy education by employing a digital media application with a three-dimensional (3D), interactive, fully segmented, and labeled brain atlas. The anatomical and vascular models in the atlas are linked to Terminologia Anatomica. Because the cerebral models are fully segmented and labeled, our approach enables automatic and random atlas-derived generation of questions to test the location and naming of cerebral structures. This is done in four steps: test individualization by the instructor, test taking by the students at their convenience, automatic student assessment by the application, and communication of the individual assessment to the instructor. A computer-based application with an interactive 3D atlas and a preliminary mobile-based application were developed to realize this approach. The application works in two test modes: instructor and student. In the instructor mode, the instructor customizes the test by setting the scope of testing and student performance criteria, which takes a few seconds. In the student mode, the student is tested and automatically assessed. Self-testing is also feasible at any time and pace. Our approach is automatic both with respect to test generation and student assessment. It is also objective, rapid, and customizable. We believe that this approach is novel from computer-based, mobile-based, and atlas-assisted standpoints.

  11. ROSAS: a robotic station for atmosphere and surface characterization dedicated to on-orbit calibration

    NASA Astrophysics Data System (ADS)

    Meygret, Aimé; Santer, Richard P.; Berthelot, Béatrice

    2011-10-01

    The La Crau test site has been used by CNES since 1987 for the vicarious calibration of SPOT cameras. The former calibration activities were conducted during field campaigns devoted to the characterization of the atmosphere and the site reflectances. In 1997, an automatic photometric station (ROSAS) was set up on the site on a 10 m high pole. This station measures, at different wavelengths, the solar extinction and the sky radiances to fully characterize the optical properties of the atmosphere. It also measures the upwelling radiance over the ground to fully characterize the surface reflectance properties. The photometer samples the spectrum from 380 nm to 1600 nm with 9 narrow bands. Every non-cloudy day, the photometer automatically and sequentially performs its measurements. Data are transmitted by GSM (Global System for Mobile communications) to CNES and processed. The photometer is calibrated in situ over the sun for irradiance and cross-band calibration, and over Rayleigh scattering for the short-wavelength radiance calibration. The data are processed by operational software which calibrates the photometer, estimates the atmospheric properties, computes the bidirectional reflectance distribution function of the site, then simulates the top-of-atmosphere radiance seen by any sensor overpassing the site and calibrates it. This paper describes the instrument, its measurement protocol and its calibration principle. Calibration results are discussed and compared to laboratory calibration. It details the surface reflectance characterization and presents SPOT4 calibration results deduced from the estimated TOA radiance. The results are compared to the official calibration.

  12. Automatic pedicles detection using convolutional neural network in a 3D spine reconstruction from biplanar radiographs

    NASA Astrophysics Data System (ADS)

    Bakhous, Christine; Aubert, Benjamin; Vazquez, Carlos; Cresson, Thierry; Parent, Stefan; De Guise, Jacques

    2018-02-01

    The 3D analysis of spine deformities (scoliosis) has high potential in clinical diagnosis and treatment. In a biplanar radiograph context, 3D analysis requires a 3D reconstruction from a pair of 2D X-rays. Whether fully automatic, semi-automatic or manual, this task is complex because of noise, structure superimposition and partial information due to a limited number of projections. Because they are involved in the axial vertebral rotation (AVR), a fundamental clinical parameter for scoliosis diagnosis, pedicles are important landmarks for 3D spine modeling and pre-operative planning. In this paper, we focus on the extension of a fully automatic 3D spine reconstruction method in which the Vertebral Body Centers (VBCs) are automatically detected using a Convolutional Neural Network (CNN) and then regularized using a Statistical Shape Model (SSM) framework. In this global process, pedicles are inferred statistically during the SSM regularization. Our contribution is to add a CNN-based regression model for pedicle detection, allowing better pedicle localization and improving the estimation of clinical parameters (e.g. AVR, Cobb angle). From 476 datasets, including healthy patients and Adolescent Idiopathic Scoliosis (AIS) cases with different scoliosis grades (Cobb angles up to 116°), we used 380 for training, 48 for testing and 48 for validation. Adding the local CNN-based pedicle detection decreases the mean absolute error of the AVR by 10%. The 3D mean Euclidean distance error between detected pedicles and ground truth decreases by 17% and the maximum error by 19%. Moreover, a general improvement is observed in the 3D spine reconstruction, reflected in lower errors in the Cobb angle estimation.
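
    A minimal sketch of a CNN-based regression model for pedicle localization in PyTorch; the patch size, architecture and output coding are illustrative assumptions, not the paper's network.

    ```python
    # Small CNN regressing pedicle coordinates from vertebra-centered patches.
    import torch
    import torch.nn as nn

    class PedicleRegressor(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(
                nn.Flatten(), nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
                nn.Linear(128, 4),  # assumed coding: (x, y) for both pedicles
            )

        def forward(self, x):  # x: (B, 1, 64, 64) patches
            return self.head(self.features(x))

    model = PedicleRegressor()
    loss_fn = nn.MSELoss()      # regression on landmark coordinates
    ```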

  13. ARCOCT: Automatic detection of lumen border in intravascular OCT images.

    PubMed

    Cheimariotis, Grigorios-Aris; Chatzizisis, Yiannis S; Koutkias, Vassilis G; Toutouzas, Konstantinos; Giannopoulos, Andreas; Riga, Maria; Chouvarda, Ioanna; Antoniadis, Antonios P; Doulaverakis, Charalambos; Tsamboulatidis, Ioannis; Kompatsiaris, Ioannis; Giannoglou, George D; Maglaveras, Nicos

    2017-11-01

    Intravascular optical coherence tomography (OCT) is an invaluable tool for the detection of pathological features on the arterial wall and the investigation of post-stenting complications. Computational lumen border detection in OCT images is highly advantageous, since it may support rapid morphometric analysis. However, automatic detection is very challenging, since OCT images typically include various artifacts that impact image clarity, including features such as side branches and intraluminal blood presence. This paper presents ARCOCT, a segmentation method for fully-automatic detection of the lumen border in OCT images. ARCOCT relies on multiple, consecutive processing steps, accounting for image preparation, contour extraction and refinement. In particular, for contour extraction ARCOCT employs the transformation of OCT images based on physical characteristics such as reflectivity and absorption of the tissue and, for contour refinement, local regression using weighted linear least squares and a 2nd degree polynomial model is employed to achieve artifact and small-branch correction as well as smoothness of the artery mesh. Our major focus was to achieve accurate contour delineation in the various types of OCT images, i.e., even in challenging cases with branches and artifacts. ARCOCT has been assessed in a dataset of 1812 images (308 from stented and 1504 from native segments) obtained from 20 patients. ARCOCT was compared against ground-truth manual segmentation performed by experts on the basis of various geometric features (e.g. area, perimeter, radius, diameter, centroid, etc.) and closed contour matching indicators (the Dice index, the Hausdorff distance and the undirected average distance), using standard statistical analysis methods. The proposed method proved very efficient and close to the ground truth, exhibiting no statistically significant differences for most of the examined metrics. ARCOCT allows accurate and fully-automated lumen border detection in OCT images. Copyright © 2017 Elsevier B.V. All rights reserved.
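
    The contour-refinement idea, local regression with weighted least squares and a 2nd-degree polynomial, can be sketched as follows for a contour given in polar form around the catheter center; the window size and tricube weighting are assumptions.

    ```python
    # Locally weighted quadratic regression (LOESS-style) smoothing of the
    # lumen radius as a function of angle.
    import numpy as np

    def local_quadratic_smooth(theta, r, window=31):
        """theta: sorted angles (N,); r: radii (N,). Returns smoothed radii."""
        half = window // 2
        out = np.empty_like(r)
        for i in range(len(r)):
            lo, hi = max(0, i - half), min(len(r), i + half + 1)
            t, y = theta[lo:hi] - theta[i], r[lo:hi]
            w = (1 - (np.abs(t) / (np.abs(t).max() + 1e-9)) ** 3) ** 3  # tricube
            coef = np.polyfit(t, y, deg=2, w=np.sqrt(w))  # weighted LSQ
            out[i] = np.polyval(coef, 0.0)
        return out
    ```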

  14. Development of a microcontroller-based automatic control system for the electrohydraulic total artificial heart.

    PubMed

    Kim, H C; Khanwilkar, P S; Bearnson, G B; Olsen, D B

    1997-01-01

    An automatic physiological control system for the actively filled, alternately pumped ventricles of the volumetrically coupled, electrohydraulic total artificial heart (EHTAH) was developed for long-term use. The automatic control system must ensure that the device: 1) maintains a physiological cardiac output response, 2) compensates for nonphysiological conditions, and 3) is stable, reliable, and operates at high power efficiency. The developed automatic control system met these requirements both in vitro, in week-long continuous mock circulation tests, and in vivo, in acute open-chested animals (calves). Satisfactory results were also obtained in a series of chronic animal experiments, including 21 days of continuous operation in the fully automatic control mode and 138 days of operation in a manual mode in a 159-day calf implant.

  15. Aesthetic Pleasure versus Aesthetic Interest: The Two Routes to Aesthetic Liking

    PubMed Central

    Graf, Laura K. M.; Landwehr, Jan R.

    2017-01-01

    Although existing research has established that aesthetic pleasure and aesthetic interest are two distinct positive aesthetic responses, empirical research on aesthetic preferences usually considers only aesthetic liking to capture participants’ aesthetic response. This causes some fundamental contradictions in the literature; some studies find a positive relationship between easy-to-process stimulus characteristics and aesthetic liking, while others suggest a negative relationship. The present research addresses these empirical contradictions by investigating the dual character of aesthetic liking as manifested in both the pleasure and interest components. Based on the Pleasure-Interest Model of Aesthetic Liking (PIA Model; Graf and Landwehr, 2015), two studies investigated the formation of pleasure and interest and their relationship with aesthetic liking responses. Using abstract art as the stimuli, Study 1 employed a 3 (stimulus fluency: low, medium, high) × 2 (processing style: automatic, controlled) × 2 (aesthetic response: pleasure, interest) experimental design to examine the processing dynamics responsible for experiencing aesthetic pleasure versus aesthetic interest. We find that the effect of stimulus fluency on pleasure is mediated by a gut-level fluency experience. Stimulus fluency and interest, by contrast, are related through a process of disfluency reduction, such that disfluent stimuli that grow more fluent due to processing efforts become interesting. The second study employed product designs (bikes, chairs, and lamps) as stimuli and a 2 (fluency: low, high) × 2 (processing style: automatic, controlled) × 3 (product type: bike, chair, lamp) experimental design to examine pleasure and interest as mediators of the relationship between stimulus fluency and design attractiveness. With respect to lamps and chairs, the results suggest that the effect of stimulus fluency on attractiveness is fully mediated by aesthetic pleasure, especially in the automatic processing style. Conversely, disfluent product designs can enhance design attractiveness judgments due to interest when a controlled processing style is adopted. PMID:28194119

  16. Multi-spectral brain tissue segmentation using automatically trained k-Nearest-Neighbor classification.

    PubMed

    Vrooman, Henri A; Cocosco, Chris A; van der Lijn, Fedde; Stokking, Rik; Ikram, M Arfan; Vernooij, Meike W; Breteler, Monique M B; Niessen, Wiro J

    2007-08-01

    Conventional k-Nearest-Neighbor (kNN) classification, which has been successfully applied to classify brain tissue in MR data, requires training on manually labeled subjects. This manual labeling is a laborious and time-consuming procedure. In this work, a new fully automated brain tissue classification procedure is presented, in which kNN training is automated. This is achieved by non-rigidly registering the MR data with a tissue probability atlas to automatically select training samples, followed by a post-processing step to keep the most reliable samples. The accuracy of the new method was compared to rigid registration-based training and to conventional kNN-based segmentation using training on manually labeled subjects for segmenting gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) in 12 data sets. Furthermore, for all classification methods, the performance was assessed when varying the free parameters. Finally, the robustness of the fully automated procedure was evaluated on 59 subjects. The automated training method using non-rigid registration with a tissue probability atlas was significantly more accurate than rigid registration. For both automated training using non-rigid registration and for the manually trained kNN classifier, the difference from the manual labeling by observers was not significantly larger than the inter-observer variability for all tissue types. From the robustness study, it was clear that, given an appropriate brain atlas and optimal parameters, our new fully automated, non-rigid registration-based method gives accurate and robust segmentation results. A similarity index was used for comparison with the manually trained kNN. The similarity indices were 0.93, 0.92 and 0.92 for CSF, GM and WM, respectively. It can be concluded that our fully automated method using non-rigid registration may replace manual segmentation, and thus that automated brain tissue segmentation without laborious manual training is feasible.
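
    A minimal sketch of the classification stage with scikit-learn, assuming the atlas-based sample selection has already produced training features and labels; the file names, label coding and the value of k are placeholders.

    ```python
    # Train kNN on atlas-selected samples, then label all brain voxels.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    X_train = np.load("train_features.npy")  # (n, d) multi-spectral intensities
    y_train = np.load("train_labels.npy")    # assumed: 0=CSF, 1=GM, 2=WM
    X_all = np.load("brain_features.npy")    # (m, d) all brain voxels

    knn = KNeighborsClassifier(n_neighbors=45)  # k is a tunable free parameter
    labels = knn.fit(X_train, y_train).predict(X_all)
    ```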

  17. 4D Near Real-Time Environmental Monitoring Using Highly Temporal LiDAR

    NASA Astrophysics Data System (ADS)

    Höfle, Bernhard; Canli, Ekrem; Schmitz, Evelyn; Crommelinck, Sophie; Hoffmeister, Dirk; Glade, Thomas

    2016-04-01

    The last decade has witnessed extensive application of 3D environmental monitoring with LiDAR technology, also referred to as laser scanning. Although several automatic methods have been developed to extract environmental parameters from LiDAR point clouds, little research has focused on highly multitemporal near real-time LiDAR (4D-LiDAR) for environmental monitoring. There is large potential for applying 4D-LiDAR to landscape objects with high and varying rates of change (e.g. plant growth) and to phenomena with sudden, unpredictable changes (e.g. geomorphological processes). In this presentation we will report on the most recent findings of the research projects 4DEMON (http://uni-heidelberg.de/4demon) and NoeSLIDE (https://geomorph.univie.ac.at/forschung/projekte/aktuell/noeslide/). The method development in both projects is based on two real-world use cases: i) surface parameter derivation for agricultural crops (e.g. crop height) and ii) change detection of landslides. Both projects exploit the "full history" contained in the LiDAR point cloud time series. One crucial initial step of 4D-LiDAR analysis is the co-registration over time, 3D-georeferencing and time-dependent quality assessment of the LiDAR point cloud time series. Due to the high number of datasets (e.g. one full LiDAR scan per day), the procedure needs to be performed fully automatically. Furthermore, the online near real-time 4D monitoring system requires triggers that can detect removal or movement of the tie reflectors (used for co-registration) or of the scanner itself. This guarantees long-term data acquisition with high quality. We will present results from a georeferencing experiment for 4D-LiDAR monitoring, which benchmarks co-registration, 3D-georeferencing and fully automatic detection of events (e.g. removal/movement of reflectors or scanner). Secondly, we will show our empirical findings from an ongoing permanent LiDAR observation of a landslide (Gresten, Austria) and an agricultural maize crop stand (Heidelberg, Germany). This research demonstrates the potential and also the limitations of fully automated, near real-time 4D LiDAR monitoring in geosciences.

  18. A new user-assisted segmentation and tracking technique for an object-based video editing system

    NASA Astrophysics Data System (ADS)

    Yu, Hong Y.; Hong, Sung-Hoon; Lee, Mike M.; Choi, Jae-Gark

    2004-03-01

    This paper presents a semi-automatic segmentation method which can be used to generate video object planes (VOPs) for object-based coding schemes and multimedia authoring environments. Semi-automatic segmentation can be considered a user-assisted segmentation technique. A user initially marks objects of interest around the object boundaries, and the selected objects are then continuously separated from the unselected areas through time evolution in the image sequence. The proposed segmentation method consists of two processing steps: partially manual intra-frame segmentation and fully automatic inter-frame segmentation. The intra-frame segmentation incorporates user assistance to define the complete, meaningful visual object of interest to be segmented and determines a precise object boundary. The inter-frame segmentation involves boundary and region tracking to obtain temporal coherence of the moving object based on the object boundary information of the previous frame. The proposed method shows stable, efficient results suitable for many digital video applications such as multimedia content authoring, content-based coding and indexing. Based on these results, we have developed an object-based video editing system with several convenient editing functions.

  19. Information Pre-Processing using Domain Meta-Ontology and Rule Learning System

    NASA Astrophysics Data System (ADS)

    Ranganathan, Girish R.; Biletskiy, Yevgen

    Around the globe, extraordinary numbers of documents are being created by Enterprises and by users outside these Enterprises. The documents created in the Enterprises constitute the main focus of the present chapter. These documents are used in numerous kinds of machine processing. In such processing, a lack of semantics in the information of these documents may cause misinterpretation, thereby inhibiting the productiveness of computer-assisted analytical work. Hence, it would be profitable for Enterprises to use well-defined domain ontologies that serve as rich sources of semantics for the information in the documents. These domain ontologies can be created manually, semi-automatically or fully automatically. The focus of this chapter is to propose an intermediate solution which will enable relatively easy creation of these domain ontologies. The process of extracting and capturing domain ontologies from these voluminous documents requires extensive involvement of domain experts and the application of ontology learning methods that are substantially labor intensive; therefore, intermediate solutions which would assist in capturing domain ontologies must be developed. This chapter proposes such a solution: building a meta-ontology that serves as an intermediate information source for the main domain ontology and as a rapid approach to conceptualizing a domain of interest from a huge amount of source documents. This meta-ontology can be populated with ontological concepts, attributes and relations from documents, and then refined in order to form a better domain ontology, either through automatic ontology learning methods or some other relevant ontology building approach.

  20. From functional structure to packaging: full-printing fabrication of a microfluidic chip.

    PubMed

    Zheng, Fengyi; Pu, Zhihua; He, Enqi; Huang, Jiasheng; Yu, Bocheng; Li, Dachao; Li, Zhihong

    2018-05-24

    This paper presents a concept of a full-printing methodology aimed at convenient and fast fabrication of microfluidic devices. For the first time, we achieved a microfluidic biochemical sensor with all functional structures fabricated by inkjet printing, including electrodes, immobilized enzymes, microfluidic components and packaging. With its cost-effective and rapid process, this method provides the possibility of quick model validation of a novel lab-on-chip system. In this study, a three-electrode electrochemical system was successfully integrated with glucose oxidase immobilization gel and sealed in an ice channel, forming a disposable microfluidic sensor for glucose detection. This fully-printed chip was characterized and showed good sensitivity and a linear response at low glucose concentrations (0-10 mM). With the aid of automatic equipment, the fully-printed sensor can be mass-produced at low cost.

  1. Towards dense volumetric pancreas segmentation in CT using 3D fully convolutional networks

    NASA Astrophysics Data System (ADS)

    Roth, Holger; Oda, Masahiro; Shimizu, Natsuki; Oda, Hirohisa; Hayashi, Yuichiro; Kitasaka, Takayuki; Fujiwara, Michitaka; Misawa, Kazunari; Mori, Kensaku

    2018-03-01

    Pancreas segmentation in computed tomography imaging has been historically difficult for automated methods because of the large shape and size variations between patients. In this work, we describe a custom-built 3D fully convolutional network (FCN) that can process a 3D image covering the whole pancreas and produce an automatic segmentation. We investigate two variations of the 3D FCN architecture: one with concatenation and one with summation skip connections to the decoder part of the network. We evaluate our methods on a dataset from a clinical trial with gastric cancer patients, including 147 contrast-enhanced abdominal CT scans acquired in the portal venous phase. Using the summation architecture, we achieve an average Dice score of 89.7 +/- 3.8 (range [79.8, 94.8])% in testing, the new state-of-the-art performance in pancreas segmentation on this dataset.
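
    The two skip-connection variants compared above can be sketched in PyTorch as follows; only the fusion step is shown, not the full FCN.

    ```python
    # Concatenation appends encoder features along the channel axis; summation
    # adds them element-wise (channel counts must then match).
    import torch

    def concat_skip(decoder_feat, encoder_feat):
        # (B, C1, D, H, W) + (B, C2, D, H, W) -> (B, C1+C2, D, H, W)
        return torch.cat([decoder_feat, encoder_feat], dim=1)

    def sum_skip(decoder_feat, encoder_feat):
        # element-wise addition keeps the channel count unchanged
        return decoder_feat + encoder_feat
    ```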

  2. Beyond standard model calculations with Sherpa

    DOE PAGES

    Höche, Stefan; Kuttimalai, Silvan; Schumann, Steffen; ...

    2015-03-24

    We present a fully automated framework as part of the Sherpa event generator for the computation of tree-level cross sections in beyond Standard Model scenarios, making use of model information given in the Universal FeynRules Output format. Elementary vertices are implemented into C++ code automatically and provided to the matrix-element generator Comix at runtime. Widths and branching ratios for unstable particles are computed from the same building blocks. The corresponding decays are simulated with spin correlations. Parton showers, QED radiation and hadronization are added by Sherpa, providing a full simulation of arbitrary BSM processes at the hadron level.

  3. Beyond standard model calculations with Sherpa.

    PubMed

    Höche, Stefan; Kuttimalai, Silvan; Schumann, Steffen; Siegert, Frank

    We present a fully automated framework as part of the Sherpa event generator for the computation of tree-level cross sections in Beyond Standard Model scenarios, making use of model information given in the Universal FeynRules Output format. Elementary vertices are implemented into C++ code automatically and provided to the matrix-element generator Comix at runtime. Widths and branching ratios for unstable particles are computed from the same building blocks. The corresponding decays are simulated with spin correlations. Parton showers, QED radiation and hadronization are added by Sherpa, providing a full simulation of arbitrary BSM processes at the hadron level.

  4. Sentry: An Automated Close Approach Monitoring System for Near-Earth Objects

    NASA Astrophysics Data System (ADS)

    Chamberlin, A. B.; Chesley, S. R.; Chodas, P. W.; Giorgini, J. D.; Keesey, M. S.; Wimberly, R. N.; Yeomans, D. K.

    2001-11-01

    In response to international concern about potential asteroid impacts on Earth, NASA's Near-Earth Object (NEO) Program Office has implemented a new system called "Sentry" to automatically update the orbits of all NEOs on a daily basis and compute Earth close approaches up to 100 years into the future. Results are published on our web site (http://neo.jpl.nasa.gov/) and updated orbits and ephemerides made available via the JPL Horizons ephemeris service (http://ssd.jpl.nasa.gov/horizons.html). Sentry collects new and revised astrometric observations from the Minor Planet Center (MPC) via their electronic circulars (MPECs) in near real time as well as radar and optical astrometry sent directly from observers. NEO discoveries and identifications are detected in MPECs and processed appropriately. In addition to these daily updates, Sentry synchronizes with each monthly batch of MPC astrometry and automatically updates all NEO observation files. Daily and monthly processing of NEO astrometry is managed using a queuing system which allows for manual intervention of selected NEOs without interfering with the automatic system. At the heart of Sentry is a fully automatic orbit determination program which handles outlier rejection and ensures convergence in the new solution. Updated orbital elements and their covariances are published via Horizons and our NEO web site, typically within 24 hours. A new version of Horizons, in development, will allow computation of ephemeris uncertainties using covariance data. The positions of NEOs with updated orbits are numerically integrated up to 100 years into the future and each close approach to any perturbing body in our dynamic model (all planets, Moon, Ceres, Pallas, Vesta) is recorded. Significant approaches are flagged for extended analysis including Monte Carlo studies. Results, such as minimum encounter distances and future Earth impact probabilities, are published on our NEO web site.

  5. Flexible Manufacturing System Handbook. Volume IV. Appendices

    DTIC Science & Technology

    1983-02-01

    and Acceptance Test(s)" on page 26 of this Proposal Request. 1.1.10 Options 1. Centralized Automatic Chip/Coolant Recovery System a. Scope The ... viable, from manually moving the pallet/fixture/part combinations from machine to machine to fully automatic, unmanned material handling systems, such ... English. Where dimensions are shown in metric units, the English system (inch) equivalent will also be shown. Hydraulic, pneumatic, and electrical

  6. Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets.

    PubMed

    Hu, Peijun; Wu, Fa; Peng, Jialin; Bao, Yuanyuan; Chen, Feng; Kong, Dexing

    2017-03-01

    Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. Therefore, we propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, which is further refined by a time-implicit multi-phase evolution method. Firstly, a 3D CNN is trained to automatically localize and delineate the organs of interest with a probability prediction map. The learned probability map provides both subject-specific spatial priors and initialization for subsequent fine segmentation. Then, for the refinement of the multi-organ segmentation, image intensity models, probability priors as well as a disjoint region constraint are incorporated into a unified energy functional. Finally, a novel time-implicit multi-phase level-set algorithm is utilized to efficiently optimize the proposed energy functional model. Our method has been evaluated on 140 abdominal CT scans for the segmentation of four organs (liver, spleen and both kidneys). With respect to the ground truth, average Dice overlap ratios for the liver, spleen and both kidneys are 96.0, 94.2 and 95.4%, respectively, and the average symmetric surface distance is less than 1.3 mm for all the segmented organs. The computation time for a CT volume is 125 s on average. The achieved accuracy compares well to state-of-the-art methods, with much higher efficiency. A fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated. The results demonstrated its potential in clinical usage with high effectiveness, robustness and efficiency.

  7. Acoustic emission source location in complex structures using full automatic delta T mapping technique

    NASA Astrophysics Data System (ADS)

    Al-Jumaili, Safaa Kh.; Pearson, Matthew R.; Holford, Karen M.; Eaton, Mark J.; Pullin, Rhys

    2016-05-01

    An easy-to-use, fast-to-apply, cost-effective, and very accurate non-destructive testing (NDT) technique for damage localisation in complex structures is key for the uptake of structural health monitoring (SHM) systems. Acoustic emission (AE) is a viable technique that can be used for SHM, and one of its most attractive features is the ability to locate AE sources. The time of arrival (TOA) technique is traditionally used to locate AE sources, and relies on the assumption of a constant wave speed within the material and an uninterrupted propagation path between the source and the sensor. In complex structural geometries and complex materials such as composites, this assumption is no longer valid. Delta T mapping was developed in Cardiff in order to overcome these limitations; this technique uses artificial sources on an area of interest to create training maps, which are used to locate subsequent AE sources. However, operator expertise is required to select the best data from the training maps and to choose the correct parameters to locate the sources, which can be a time-consuming process. This paper presents a new and improved fully automatic delta T mapping technique where a clustering algorithm is used to automatically identify and select the highly correlated events at each grid point whilst the "Minimum Difference" approach is used to determine the source location. This removes the requirement for operator expertise, saving time and preventing human errors. A thorough assessment was conducted to evaluate the performance and robustness of the new technique. In the initial test, the results showed an excellent reduction in running time as well as improved accuracy in locating AE sources, as a result of the automatic selection of the training data. Furthermore, because the process is performed automatically, this is now a very simple and reliable technique, as potential sources of error related to manual manipulation are eliminated.
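
    A sketch of the "Minimum Difference" location step, assuming training delta-T maps have already been built on a grid; the array layout is an assumption for illustration.

    ```python
    # Compare a measured event's inter-sensor arrival-time differences with
    # the training map at every grid point and pick the closest match.
    import numpy as np

    def locate_source(event_dt, delta_t_map, grid_xy):
        """event_dt: (P,) measured delta-T for P sensor pairs;
        delta_t_map: (G, P) training delta-T at G grid points;
        grid_xy: (G, 2) grid coordinates."""
        diff = np.abs(delta_t_map - event_dt).sum(axis=1)  # L1 mismatch
        return grid_xy[np.argmin(diff)]                    # best grid point
    ```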

  8. Fully Automated Quantification of the Striatal Uptake Ratio of [99mTc]-TRODAT with SPECT Imaging: Evaluation of the Diagnostic Performance in Parkinson's Disease and the Temporal Regression of Striatal Tracer Uptake

    PubMed Central

    Fang, Yu-Hua Dean; Chiu, Shao-Chieh; Lu, Chin-Song; Weng, Yi-Hsin

    2015-01-01

    Purpose. We aimed at improving the existing methods for the fully automatic quantification of striatal uptake of [99mTc]-TRODAT with SPECT imaging. Procedures. A normal [99mTc]-TRODAT template was first formed based on 28 healthy controls. Images from PD patients (n = 365) and nPD subjects (28 healthy controls and 33 essential tremor patients) were spatially normalized to the normal template. We performed an inverse transform on the predefined striatal and reference volumes of interest (VOIs) and applied the transformed VOIs to the original image data to calculate the striatal-to-reference ratio (SRR). The diagnostic performance of the SRR was determined through receiver operating characteristic (ROC) analysis. Results. The SRR measured with our new and automatic method demonstrated excellent diagnostic performance with 92% sensitivity, 90% specificity, 92% accuracy, and an area under the curve (AUC) of 0.94. For the evaluation of the mean SRR and the clinical duration, a quadratic function fit the data with R² = 0.84. Conclusions. We developed and validated a fully automatic method for the quantification of the SRR in a large study sample. This method has an excellent diagnostic performance and exhibits a strong correlation between the mean SRR and the clinical duration in PD patients. PMID:26366413

  9. Fully Automated Quantification of the Striatal Uptake Ratio of [(99m)Tc]-TRODAT with SPECT Imaging: Evaluation of the Diagnostic Performance in Parkinson's Disease and the Temporal Regression of Striatal Tracer Uptake.

    PubMed

    Fang, Yu-Hua Dean; Chiu, Shao-Chieh; Lu, Chin-Song; Yen, Tzu-Chen; Weng, Yi-Hsin

    2015-01-01

    We aimed at improving the existing methods for the fully automatic quantification of striatal uptake of [(99m)Tc]-TRODAT with SPECT imaging. A normal [(99m)Tc]-TRODAT template was first formed based on 28 healthy controls. Images from PD patients (n = 365) and nPD subjects (28 healthy controls and 33 essential tremor patients) were spatially normalized to the normal template. We performed an inverse transform on the predefined striatal and reference volumes of interest (VOIs) and applied the transformed VOIs to the original image data to calculate the striatal-to-reference ratio (SRR). The diagnostic performance of the SRR was determined through receiver operating characteristic (ROC) analysis. The SRR measured with our new and automatic method demonstrated excellent diagnostic performance with 92% sensitivity, 90% specificity, 92% accuracy, and an area under the curve (AUC) of 0.94. For the evaluation of the mean SRR and the clinical duration, a quadratic function fit the data with R² = 0.84. We developed and validated a fully automatic method for the quantification of the SRR in a large study sample. This method has an excellent diagnostic performance and exhibits a strong correlation between the mean SRR and the clinical duration in PD patients.
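
    A minimal sketch of the SRR computation itself, assuming the SPECT volume has been spatially normalized and the inverse-transformed VOI masks are available as boolean arrays; taken as a plain ratio of mean counts, per the "striatal-to-reference ratio" naming.

    ```python
    # Mean striatal uptake divided by mean reference-region uptake.
    import numpy as np

    def striatal_reference_ratio(volume, striatal_mask, reference_mask):
        striatal = volume[striatal_mask].mean()
        reference = volume[reference_mask].mean()
        return striatal / reference
    ```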

  10. Slide Set: Reproducible image analysis and batch processing with ImageJ.

    PubMed

    Nanes, Benjamin A

    2015-11-01

    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets common in biology. Here we present the Slide Set plugin for ImageJ, which provides a framework for reproducible image analysis and batch processing. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution.

  11. 3D model assisted fully automated scanning laser Doppler vibrometer measurements

    NASA Astrophysics Data System (ADS)

    Sels, Seppe; Ribbens, Bart; Bogaerts, Boris; Peeters, Jeroen; Vanlanduit, Steve

    2017-12-01

    In this paper, a new fully automated scanning laser Doppler vibrometer (LDV) measurement technique is presented. In contrast to existing scanning LDV techniques, which use a 2D camera for the manual selection of sample points, we use a 3D time-of-flight camera in combination with a CAD file of the test object to automatically obtain measurements at pre-defined locations. The proposed procedure allows users to test prototypes in a shorter time because physical measurement locations are determined without user interaction. Another benefit of this methodology is that it incorporates automatic mapping between a CAD model and the vibration measurements. This mapping can be used to visualize measurements directly on a 3D CAD model. The proposed method is illustrated with vibration measurements of an unmanned aerial vehicle.

  12. Determinants of wood dust exposure in the Danish furniture industry.

    PubMed

    Mikkelsen, Anders B; Schlunssen, Vivi; Sigsgaard, Torben; Schaumburg, Inger

    2002-11-01

    This paper investigates the relation between wood dust exposure in the furniture industry and occupational hygiene variables. During the winter of 1997-98, 54 factories were visited and 2362 personal, passive inhalable dust samples were obtained; the geometric mean was 0.95 mg/m³ and the geometric standard deviation was 2.08. In a first measuring round, 1685 dust concentrations were obtained. For some of the workers, repeated measurements were carried out 1 week (351 samples) and 2 weeks (326 samples) after the first measurement. Hygiene variables like job, exhaust ventilation, cleaning procedures, etc., were documented. A multivariate analysis based on mixed effects models was used, with hygiene variables being fixed effects and worker, machine, department and factory being random effects. A modified stepwise strategy of model building was adopted, taking into account the hierarchically structured variables and making possible the exclusion of non-influential random as well as fixed effects. For woodworking, the following determinants of exposure increase the dust concentration: manual and automatic sanding, and use of compressed air with fully automatic and semi-automatic machines and for cleaning of work pieces. Decreased dust exposure resulted from the use of compressed air with manual machines, working at fully automatic or semi-automatic machines, functioning exhaust ventilation, work on the night shift, daily cleaning of rooms, cleaning of work pieces with a brush, vacuum cleaning of machines, supplementary fresh air intake, and a safety representative elected within the last 2 yr. For handling and assembling, increased exposure results from work at automatic machines and the presence of wood dust on the workpieces. Work on the evening shift, supplementary fresh air intake, work in a chair factory and special cleaning staff produced decreased exposure to wood dust. The implications of the results for the prevention of wood dust exposure are discussed.
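
    A minimal sketch of the mixed-effects idea with statsmodels, using a random factory intercept only (the study's full model also nests worker, machine and department); the column names are assumptions about the data layout.

    ```python
    # Model log-transformed dust concentration with fixed hygiene effects and
    # a random intercept per factory.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("dust_samples.csv")        # assumed file and columns
    df["log_dust"] = np.log(df["dust_mg_m3"])   # log-normal exposure data

    model = smf.mixedlm("log_dust ~ job + ventilation + cleaning",
                        data=df, groups=df["factory"])
    result = model.fit()
    print(result.summary())
    ```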

  13. Automatic segmentation and quantification of the cardiac structures from non-contrast-enhanced cardiac CT scans

    NASA Astrophysics Data System (ADS)

    Shahzad, Rahil; Bos, Daniel; Budde, Ricardo P. J.; Pellikaan, Karlijn; Niessen, Wiro J.; van der Lugt, Aad; van Walsum, Theo

    2017-05-01

    Early structural changes to the heart, including the chambers and the coronary arteries, provide important information on pre-clinical heart disease such as cardiac failure. Currently, contrast-enhanced cardiac computed tomography angiography (CCTA) is the preferred modality for the visualization of the cardiac chambers and the coronaries. In clinical practice, not every patient undergoes a CCTA scan; many patients receive only a non-contrast-enhanced calcium scoring CT scan (CTCS), which involves a lower radiation dose and does not require the administration of a contrast agent. Quantifying cardiac structures in such images is challenging, as they lack the contrast present in CCTA scans. Such quantification would nevertheless be relevant, as it enables population-based studies with only a CTCS scan. The purpose of this work is therefore to investigate the feasibility of automatic segmentation and quantification of cardiac structures, viz. the whole heart, left atrium, left ventricle, right atrium, right ventricle and aortic root, from CTCS scans. A fully automatic multi-atlas-based segmentation approach is used to segment the cardiac structures. Results show that the segmentation overlap between the automatic method and the reference standard has a Dice similarity coefficient of 0.91 on average for the cardiac chambers. The mean surface-to-surface distance error over all the cardiac structures is 1.4 ± 1.7 mm. The automatically obtained cardiac chamber volumes from the CTCS scans correlate excellently with the volumes in corresponding CCTA scans; a Pearson correlation coefficient (R) of 0.95 is obtained. Our fully automatic method enables large-scale assessment of cardiac structures on non-contrast-enhanced CT scans.
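
    For reference, the Dice similarity coefficient used above to score segmentation overlap can be computed as follows; this is the generic metric, not the authors' evaluation code.

    ```python
    # Dice = 2|A ∩ B| / (|A| + |B|) for two binary segmentation masks.
    import numpy as np

    def dice_coefficient(auto_mask: np.ndarray, ref_mask: np.ndarray) -> float:
        auto = auto_mask.astype(bool)
        ref = ref_mask.astype(bool)
        intersection = np.logical_and(auto, ref).sum()
        total = auto.sum() + ref.sum()
        return 2.0 * intersection / total if total > 0 else 1.0

    # Example with two overlapping toy 3D masks
    a = np.zeros((10, 10, 10), dtype=bool); a[2:7, 2:7, 2:7] = True
    b = np.zeros((10, 10, 10), dtype=bool); b[3:8, 3:8, 3:8] = True
    print(f"Dice: {dice_coefficient(a, b):.3f}")
    ```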

  14. Iterative Strategies for Aftershock Classification in Automatic Seismic Processing Pipelines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gibbons, Steven J.; Kvaerna, Tormod; Harris, David B.

    Aftershock sequences following very large earthquakes present enormous challenges to the near-real-time generation of seismic bulletins. The increase in analyst resources needed to relocate an inflated number of events is compounded by failures of phase-association algorithms and a significant deterioration in the quality of the underlying, fully automatic event bulletins. Current processing pipelines were designed a generation ago and, due to the computational limitations of the time, are usually limited to single passes over the raw data. With current processing capability, multiple passes over the data are feasible. Processing the raw data at each station currently generates parametric data streams that are then scanned by a phase-association algorithm to form event hypotheses. We consider the scenario in which a large earthquake has occurred and propose to define a region of likely aftershock activity in which events are detected and accurately located using a separate, specially targeted semi-automatic process. This effort may focus on so-called pattern detectors, but here we demonstrate a more general grid-search algorithm that may cover wider source regions without requiring waveform similarity. Given many well-located aftershocks within our source region, we may remove all associated phases from the original detection lists prior to a new iteration of the phase-association algorithm. We provide a proof-of-concept example for the 2015 Gorkha sequence, Nepal, recorded on seismic arrays of the International Monitoring System. Even with very conservative conditions for defining event hypotheses within the aftershock source region, we can automatically remove about half of the original detections that could have been generated by Nepal earthquakes and reduce the likelihood of false associations and spurious event hypotheses. Further reductions in the number of detections in the parametric data streams are likely using correlation and subspace detectors and/or empirical matched-field processing.
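
    A minimal sketch of the phase-removal step described above, under strongly simplifying assumptions: detections are dropped when they fall within a tolerance of an arrival predicted from a well-located aftershock. The travel-time model and tolerance are illustrative placeholders, not the operational pipeline's values.

    ```python
    # Toy phase-removal prior to re-running phase association; the
    # constant-velocity travel-time model is a crude placeholder.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        station: str
        arrival_time: float  # seconds since a reference epoch

    def predicted_arrival(origin_time: float, distance_km: float,
                          phase_velocity_kms: float = 8.0) -> float:
        """Crude first-arrival prediction; a real pipeline would use a
        proper travel-time model such as IASP91."""
        return origin_time + distance_km / phase_velocity_kms

    def remove_associated(detections, aftershock_origin_times,
                          distances_km, tol_s=2.0):
        """Drop detections within tol_s of any predicted aftershock arrival."""
        kept = []
        for det in detections:
            matched = any(
                abs(det.arrival_time -
                    predicted_arrival(t0, distances_km[det.station])) <= tol_s
                for t0 in aftershock_origin_times
            )
            if not matched:
                kept.append(det)
        return kept
    ```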

  15. Iterative Strategies for Aftershock Classification in Automatic Seismic Processing Pipelines

    DOE PAGES

    Gibbons, Steven J.; Kvaerna, Tormod; Harris, David B.; ...

    2016-06-08

    Aftershock sequences following very large earthquakes present enormous challenges to the near-real-time generation of seismic bulletins. The increase in analyst resources needed to relocate an inflated number of events is compounded by failures of phase-association algorithms and a significant deterioration in the quality of the underlying, fully automatic event bulletins. Current processing pipelines were designed a generation ago and, due to the computational limitations of the time, are usually limited to single passes over the raw data. With current processing capability, multiple passes over the data are feasible. Processing the raw data at each station currently generates parametric data streams that are then scanned by a phase-association algorithm to form event hypotheses. We consider the scenario in which a large earthquake has occurred and propose to define a region of likely aftershock activity in which events are detected and accurately located using a separate, specially targeted semi-automatic process. This effort may focus on so-called pattern detectors, but here we demonstrate a more general grid-search algorithm that may cover wider source regions without requiring waveform similarity. Given many well-located aftershocks within our source region, we may remove all associated phases from the original detection lists prior to a new iteration of the phase-association algorithm. We provide a proof-of-concept example for the 2015 Gorkha sequence, Nepal, recorded on seismic arrays of the International Monitoring System. Even with very conservative conditions for defining event hypotheses within the aftershock source region, we can automatically remove about half of the original detections that could have been generated by Nepal earthquakes and reduce the likelihood of false associations and spurious event hypotheses. Further reductions in the number of detections in the parametric data streams are likely using correlation and subspace detectors and/or empirical matched-field processing.

  16. A clinically viable capsule endoscopy video analysis platform for automatic bleeding detection

    NASA Astrophysics Data System (ADS)

    Yi, Steven; Jiao, Heng; Xie, Jean; Mui, Peter; Leighton, Jonathan A.; Pasha, Shabana; Rentz, Lauri; Abedi, Mahmood

    2013-02-01

    In this paper, we present a novel and clinically valuable software platform for automatic bleeding detection in the gastrointestinal (GI) tract from capsule endoscopy (CE) videos. Typical CE videos of the GI tract run about 8 hours and are manually reviewed by physicians to locate diseases such as bleeding and polyps. As a result, the process is time consuming and prone to missed findings. Although researchers have made efforts to automate this process, no clinically acceptable software is available on the marketplace today. Working with our collaborators, we have developed a clinically viable software platform called GISentinel for fully automated GI tract bleeding detection and classification. Major functional modules of the software include: an innovative graph-based NCut segmentation algorithm; a unique feature selection and validation method (e.g. illumination-invariant features, color-independent features, and symmetrical texture features); and cascade SVM classification for handling various GI tract scenes (e.g. normal tissue, food particles, bubbles, fluid, and specular reflection). Initial evaluation of the software has shown a zero bleeding-instance miss rate and a 4.03% false alarm rate. This work is part of our 2D/3D-based GI tract disease detection software platform. While the overall framework is designed for the intelligent detection and classification of major GI tract diseases such as bleeding, ulcers, and polyps from CE videos, this paper focuses on the automatic bleeding detection module.

  17. Automated design of infrared digital metamaterials by genetic algorithm

    NASA Astrophysics Data System (ADS)

    Sugino, Yuya; Ishikawa, Atsushi; Hayashi, Yasuhiko; Tsuruta, Kenji

    2017-08-01

    We demonstrate the automatic design of infrared (IR) metamaterials using a genetic algorithm (GA) and experimentally characterize their IR properties. To implement the automated design scheme, we adopt a digital metamaterial consisting of 7 × 7 Au nano-pixels, each with an area of 200 nm × 200 nm, whose placements are coded as binary genes in the GA optimization process. The GA, combined with three-dimensional (3D) finite element method (FEM) simulation, is developed and applied to automatically construct a digital metamaterial exhibiting pronounced plasmonic resonances at the target IR frequencies. Based on the numerical results, the metamaterials are fabricated on a Si substrate over an area of 1 mm × 1 mm using EB lithography, Cr/Au (2/20 nm) deposition, and a liftoff process. In the FT-IR measurements, pronounced plasmonic responses of each metamaterial are clearly observed near the targeted frequencies, although the synthesized pixel arrangements appear random. The corresponding numerical simulations reveal the resonant behavior of each pixel and their hybridized systems. Our approach is fully computer-aided, without manual intervention, paving the way toward novel device design for next-generation plasmonic applications.
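
    A minimal sketch of the GA loop described above: each candidate layout is a 49-bit genome encoding the 7 × 7 pixel occupancy. The fitness function here is a trivial stand-in for the paper's 3D FEM resonance simulation, and all GA settings are illustrative.

    ```python
    # Toy GA over a 7 x 7 binary pixel layout; the fitness function is a
    # placeholder for an FEM scoring of plasmonic resonance.
    import random

    N_PIXELS = 7 * 7

    def fitness(genome):
        """Placeholder objective; a real implementation would run an FEM
        solver and score the resonance at the target IR frequency."""
        return sum(genome)

    def crossover(a, b):
        cut = random.randrange(1, N_PIXELS)
        return a[:cut] + b[cut:]

    def mutate(genome, rate=0.02):
        return [bit ^ 1 if random.random() < rate else bit for bit in genome]

    population = [[random.randint(0, 1) for _ in range(N_PIXELS)]
                  for _ in range(40)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        elite = population[:10]                    # keep the best layouts
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(30)]
        population = elite + children
    best = max(population, key=fitness)
    ```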

  18. Near-instant automatic access to visually presented words in the human neocortex: neuromagnetic evidence.

    PubMed

    Shtyrov, Yury; MacGregor, Lucy J

    2016-05-24

    Rapid and efficient processing of external information by the brain is vital to survival in a highly dynamic environment. The key channel humans use to exchange information is language, but the neural underpinnings of its processing are still not fully understood. We investigated the spatio-temporal dynamics of neural access to word representations in the brain by scrutinising the brain's activity elicited in response to psycholinguistically, visually and phonologically matched groups of familiar words and meaningless pseudowords. Stimuli were briefly presented on the visual-field periphery to experimental participants whose attention was occupied with a non-linguistic visual feature-detection task. The neural activation elicited by these unattended orthographic stimuli was recorded using multi-channel whole-head magnetoencephalography, and the timecourse of lexically-specific neuromagnetic responses was assessed in sensor space as well as at the level of cortical sources, estimated using individual MR-based distributed source reconstruction. Our results demonstrate a neocortical signature of automatic near-instant access to word representations in the brain: activity in the perisylvian language network characterised by specific activation enhancement for familiar words, starting as early as ~70 ms after the onset of unattended word stimuli and underpinned by temporal and inferior-frontal cortices.

  19. Evaluation of arterial propagation velocity based on the automated analysis of the Pulse Wave Shape

    NASA Astrophysics Data System (ADS)

    Clara, F. M.; Scandurra, A. G.; Meschino, G. J.; Passoni, L. I.

    2011-12-01

    This paper proposes the automatic estimation of the arterial propagation velocity from raw pulse wave records measured in the region of the radial artery. A fully automatic process is proposed to select and analyze typical pulse cycles from the raw data. An adaptive neuro-fuzzy inference system, together with a heuristic search, is used to find a functional approximation of the pulse wave. The estimation of the propagation velocity is carried out via analysis of the functional approximation obtained with the fuzzy model. Analysis of pulse wave records with the proposed methodology showed only small differences compared with the method used so far, which relies on strong user interaction. To evaluate the proposed methodology, we estimated the propagation velocity in a population of healthy men across a wide range of ages. These studies found that propagation velocity increases linearly with age and shows considerable dispersion among healthy individuals. We conclude that this process could be used to indirectly evaluate the propagation velocity of the aorta, which is related to physiological age in healthy individuals and to life expectancy in cardiovascular patients.

  20. A highly versatile automatized setup for quantitative measurements of PHIP enhancements

    NASA Astrophysics Data System (ADS)

    Kiryutin, Alexey S.; Sauer, Grit; Hadjiali, Sara; Yurkovskaya, Alexandra V.; Breitzke, Hergen; Buntkowsky, Gerd

    2017-12-01

    The design and application of a versatile and inexpensive experimental extension to NMR spectrometers is described that allows highly reproducible PHIP experiments to be carried out directly in the NMR sample tube, i.e. under PASADENA conditions, followed by detection of the NMR spectra of hyperpolarized products with high spectral resolution. Employing this high resolution, it is feasible to study kinetic processes in solution with high accuracy. As a practical example, the dissolution of hydrogen gas in the liquid and the PHIP kinetics during the hydrogenation reaction of Fmoc-O-propargyl-L-tyrosine in acetone-d6 are monitored. The timing of the setup is fully controlled by the pulse programmer of the NMR spectrometer. By flushing with an inert gas, it is possible to efficiently quench the hydrogenation reaction in a controlled fashion and to detect the relaxation of hyperpolarization without a background reaction. The proposed design makes it possible to carry out PHIP experiments in an automatic mode and to reliably determine the enhancement of polarized signals.

  1. Intelligent Data Granulation on Load: Improving Infobright's Knowledge Grid

    NASA Astrophysics Data System (ADS)

    Ślęzak, Dominik; Kowalski, Marcin

    One of the major aspects of Infobright's relational database technology is the automatic decomposition of each data table into Rough Rows, each consisting of 64K of the original rows. Rough Rows are automatically annotated with Knowledge Nodes that represent compact information about the rows' values. Query performance depends on the quality of the Knowledge Nodes, i.e., their efficiency in minimizing access to the compressed portions of data stored on disk, according to the specific query optimization procedures. We show how to implement a mechanism that organizes the incoming data into Rough Rows that maximize the quality of the corresponding Knowledge Nodes. Given clear business-driven requirements, the implemented mechanism needs to be fully integrated with the data load process, causing no decrease in data load speed. The performance gain resulting from better data organization is illustrated by tests over our benchmark data. The differences between the proposed mechanism and well-known procedures for database clustering or partitioning are discussed. The paper is a continuation of our patent application [22].
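
    A minimal sketch of the Knowledge Node idea: each pack of 64K rows is annotated with a min/max summary, and packs whose ranges cannot satisfy a query predicate are skipped. The data structures are illustrative, not Infobright's internals.

    ```python
    # Toy min/max annotations over 64K-row packs, used to prune a range scan.
    ROUGH_ROW_SIZE = 65_536  # 64K rows per pack

    def build_knowledge_nodes(values):
        """Split a column into packs and record min/max per pack."""
        nodes = []
        for start in range(0, len(values), ROUGH_ROW_SIZE):
            pack = values[start:start + ROUGH_ROW_SIZE]
            nodes.append({"start": start, "min": min(pack), "max": max(pack)})
        return nodes

    def packs_to_scan(nodes, lo, hi):
        """Return only the packs whose [min, max] overlaps the query range."""
        return [n for n in nodes if not (n["max"] < lo or n["min"] > hi)]

    column = list(range(1_000_000))
    nodes = build_knowledge_nodes(column)
    print(len(packs_to_scan(nodes, 130_000, 140_000)), "of", len(nodes), "packs")
    ```

    The better the incoming rows are organized (e.g., clustered by value), the tighter the per-pack ranges become and the more packs the scan can skip, which is the intuition behind the load-time mechanism the paper proposes.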

  2. Automatic lung lobe segmentation using particles, thin plate splines, and maximum a posteriori estimation.

    PubMed

    Ross, James C; San José Estépar, Raúl; Kindlmann, Gordon; Díaz, Alejandro; Westin, Carl-Fredrik; Silverman, Edwin K; Washko, George R

    2010-01-01

    We present a fully automatic lung lobe segmentation algorithm that is effective in high resolution computed tomography (CT) datasets in the presence of confounding factors such as incomplete fissures (anatomical structures indicating lobe boundaries), advanced disease states, high body mass index (BMI), and low-dose scanning protocols. In contrast to other algorithms that leverage segmentations of auxiliary structures (esp. vessels and airways), we rely only upon image features indicating fissure locations. We employ a particle system that samples the image domain and provides a set of candidate fissure locations. We follow this stage with maximum a posteriori (MAP) estimation to eliminate poor candidates and then perform a post-processing operation to remove remaining noise particles. We then fit a thin plate spline (TPS) interpolating surface to the fissure particles to form the final lung lobe segmentation. Results indicate that our algorithm performs comparably to pulmonologist-generated lung lobe segmentations on a set of challenging cases.
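
    A minimal sketch of the final surface-fitting step, assuming the fissure can locally be written as a height field z(x, y): a thin plate spline is fitted to scattered particle locations with SciPy. The synthetic points stand in for detected fissure particles.

    ```python
    # Thin plate spline surface through scattered fissure-particle points;
    # illustrative of the fitting step, not the authors' code.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)
    xy = rng.uniform(0, 100, size=(200, 2))          # particle (x, y) locations
    z = 0.5 * xy[:, 0] + 0.1 * xy[:, 1] + rng.normal(0, 0.5, 200)

    tps = RBFInterpolator(xy, z, kernel="thin_plate_spline", smoothing=1.0)

    # Evaluate the fitted fissure surface on a regular grid
    gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
    surface = tps(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
    ```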

  3. Automatic Lung Lobe Segmentation Using Particles, Thin Plate Splines, and Maximum a Posteriori Estimation

    PubMed Central

    Ross, James C.; Estépar, Raúl San José; Kindlmann, Gordon; Díaz, Alejandro; Westin, Carl-Fredrik; Silverman, Edwin K.; Washko, George R.

    2011-01-01

    We present a fully automatic lung lobe segmentation algorithm that is effective in high resolution computed tomography (CT) datasets in the presence of confounding factors such as incomplete fissures (anatomical structures indicating lobe boundaries), advanced disease states, high body mass index (BMI), and low-dose scanning protocols. In contrast to other algorithms that leverage segmentations of auxiliary structures (esp. vessels and airways), we rely only upon image features indicating fissure locations. We employ a particle system that samples the image domain and provides a set of candidate fissure locations. We follow this stage with maximum a posteriori (MAP) estimation to eliminate poor candidates and then perform a post-processing operation to remove remaining noise particles. We then fit a thin plate spline (TPS) interpolating surface to the fissure particles to form the final lung lobe segmentation. Results indicate that our algorithm performs comparably to pulmonologist-generated lung lobe segmentations on a set of challenging cases. PMID:20879396

  4. Automatic Synthetic Aperture Radar based oil spill detection and performance estimation via a semi-automatic operational service benchmark.

    PubMed

    Singha, Suman; Vespe, Michele; Trieschmann, Olaf

    2013-08-15

    Today the health of the ocean is in greater danger than ever before, mainly due to man-made pollution. Operational activities show the regular occurrence of accidental and deliberate oil spills in European waters. Since the areas covered by oil spills are usually large, satellite remote sensing, particularly Synthetic Aperture Radar (SAR), represents an effective option for operational oil spill detection. This paper describes the development of a fully automated approach for oil spill detection from SAR. A total of 41 feature parameters were extracted from each segmented dark spot for oil spill versus 'look-alike' classification and ranked according to their importance. The classification algorithm is based on two-stage processing that combines classification tree analysis and fuzzy logic. An initial evaluation of this methodology on a large dataset has been carried out, and the degree of agreement between the results of the proposed algorithm and a human analyst was estimated at between 85% and 93% for ENVISAT and RADARSAT, respectively. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Fully automatic adjoints: a robust and efficient mechanism for generating adjoint ocean models

    NASA Astrophysics Data System (ADS)

    Ham, D. A.; Farrell, P. E.; Funke, S. W.; Rognes, M. E.

    2012-04-01

    The problem of generating and maintaining adjoint models is sufficiently difficult that typically only the most advanced and well-resourced community ocean models achieve it. There are two current technologies, each of which suffers from its own limitations. Algorithmic differentiation, also called automatic differentiation, is employed by models such as the MITGCM [2] and the Alfred Wegener Institute model FESOM [3]. This technique is very difficult to apply to existing code and requires a major initial investment to prepare the code for automatic adjoint generation. AD tools may also have difficulty with code employing modern software constructs such as derived data types. An alternative is to formulate the adjoint differential equation and to discretise it separately. This approach, known as the continuous adjoint and employed in ROMS [4], has the disadvantage that two different model code bases must be maintained and manually kept synchronised as the model develops. Moreover, the discretisation of the continuous adjoint is not automatically consistent with that of the forward model, producing an additional source of error. The alternative presented here is to formulate the flow model in the high-level language UFL (Unified Form Language) and to generate the model automatically using the software of the FEniCS project. In this approach it is the high-level code specification which is differentiated, a task very similar to the formulation of the continuous adjoint [5]. However, since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy, is managed by libadjoint [1]. In contrast to the conventional algorithmic differentiation description of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction of the simulation process as a sequence of discrete equations which are assembled and solved. It is the coupling of the respective abstractions employed by libadjoint and the FEniCS project which produces the adjoint model automatically, without further intervention from the model developer. This presentation will demonstrate the new technology through linear and non-linear shallow water test cases. The exceptionally simple model syntax will be highlighted, and the correctness of the resulting adjoint simulations will be demonstrated using rigorous convergence tests.
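
    To make the approach concrete, here is a minimal sketch in the spirit of the FEniCS/dolfin-adjoint toolchain: the forward model is written once in high-level UFL form, and the gradient of a functional with respect to a control is derived automatically. The Poisson-type problem is illustrative, not the shallow-water test cases mentioned above, and assumes FEniCS and dolfin-adjoint are installed.

    ```python
    # Minimal sketch, assuming FEniCS and dolfin-adjoint are available.
    from fenics import *
    from fenics_adjoint import *

    mesh = UnitSquareMesh(32, 32)
    V = FunctionSpace(mesh, "CG", 1)

    nu = Constant(1.0)                       # control parameter
    u, v = Function(V), TestFunction(V)
    f = Expression("sin(pi*x[0])*sin(pi*x[1])", degree=2)

    # Forward model written once in high-level UFL form
    F = nu * inner(grad(u), grad(v)) * dx - f * v * dx
    bc = DirichletBC(V, 0.0, "on_boundary")
    solve(F == 0, u, bc)

    # Functional of interest; the adjoint is derived and run automatically
    J = assemble(u * u * dx)
    dJ_dnu = compute_gradient(J, Control(nu))
    ```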

  6. AISLE: an automatic volumetric segmentation method for the study of lung allometry.

    PubMed

    Ren, Hongliang; Kazanzides, Peter

    2011-01-01

    We developed a fully automatic segmentation method for volumetric CT (computed tomography) datasets to support the construction of a statistical atlas for the study of the allometric laws of the lung. The proposed segmentation method, AISLE (Automated ITK-Snap based on Level-set), is based on the level-set implementation from an existing semi-automatic segmentation program, ITK-Snap. AISLE can segment the lung field without human interaction and provide intermediate graphical results as desired. Preliminary experimental results show that the proposed method achieves accurate segmentation, in terms of the volumetric overlap metric, compared with the ground-truth segmentation performed by a radiologist.

  7. Mobile GPU-based implementation of automatic analysis method for long-term ECG.

    PubMed

    Fan, Xiaomao; Yao, Qihang; Li, Ye; Chen, Runge; Cai, Yunpeng

    2018-05-03

    Long-term electrocardiogram (ECG) monitoring is one of the important diagnostic aids for capturing intermittent cardiac arrhythmias. The combination of miniaturized wearable Holter monitors and healthcare platforms enables people to have their cardiac condition monitored at home. The high computational burden created by the concurrent processing of numerous Holter recordings poses a serious challenge to the healthcare platform. An alternative solution is to shift the analysis tasks from healthcare platforms to mobile computing devices. However, long-term ECG data processing is quite time consuming due to the limited computational power of the mobile central processing unit (CPU). This paper proposes a novel parallel automatic ECG analysis algorithm that exploits the mobile graphics processing unit (GPU) to reduce the response time for processing long-term ECG data. By studying the architecture of the sequential automatic ECG analysis algorithm, we parallelized the time-consuming parts and reorganized the entire pipeline to fully utilize the heterogeneous computing resources of the CPU and GPU. The experimental results showed that the average execution time of the proposed algorithm on a clinical long-term ECG dataset (duration 23.0 ± 1.0 h per signal) is 1.215 ± 0.140 s, an average speedup of 5.81 ± 0.39× over the sequential algorithm without compromising analysis accuracy. Meanwhile, the battery energy consumption of the automatic ECG analysis was reduced by 64.16%; excluding the energy consumed by data loading, 79.44% of the energy could be saved, which alleviates the problem of limited battery life on mobile devices. The reduction in response time and battery energy consumption not only brings a better quality of experience to Holter users, but also makes it possible to use mobile devices as ECG terminals for healthcare professionals such as physicians and health advisers, enabling them to inspect patient ECG recordings on site efficiently without the need for a high-quality wide-area network.

  8. Automatic DNA Diagnosis for 1D Gel Electrophoresis Images using Bio-image Processing Technique.

    PubMed

    Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Shaw, Philip J; Ukosakit, Kittipat; Tragoonrung, Somvong; Tongsima, Sissades

    2015-01-01

    DNA gel electrophoresis is a molecular biology technique for separating DNA fragments of different sizes. Applications of DNA gel electrophoresis include DNA fingerprinting (genetic diagnosis), size estimation of DNA, and DNA separation for Southern blotting. Accurate interpretation of DNA banding patterns from electrophoretic images can be laborious and error prone when a large number of bands are interrogated manually. Although many bio-imaging techniques have been proposed, none of them can fully automate DNA typing, owing to the complexity of the migration patterns typically obtained. We developed an image-processing tool that automatically calls genotypes from DNA gel electrophoresis images. The image processing workflow comprises three main steps: 1) lane segmentation, 2) extraction of DNA bands, and 3) band genotyping classification. The tool was originally intended to facilitate large-scale genotyping analysis of sugarcane cultivars. We tested the proposed tool on 10 gel images (433 cultivars) obtained from polyacrylamide gel electrophoresis (PAGE) of PCR amplicons for detecting intron length polymorphisms (ILP) at one locus of the sugarcanes. These gel images presented many challenges for automated lane/band segmentation, including lane distortion, band deformity, a high degree of background noise, and bands that are very close together (doublets). Using the proposed bio-imaging workflow, lanes and the DNA bands contained within them are properly segmented, even for adjacent bands with aberrant migration that cannot be separated by conventional techniques. The software, called GELect, automatically performs genotype calling on each lane by comparison with an all-banding reference, which was created by clustering the existing bands into a non-redundant set of reference bands. The automated genotype calling results were verified by independent manual typing by molecular biologists. This work presents an automated genotyping tool for DNA gel electrophoresis images, called GELect, which was written in Java and made available through the ImageJ framework. With a novel automated image processing workflow, the tool can accurately segment lanes from a gel matrix and intelligently extract distorted and even doublet bands that are difficult to identify with existing image processing tools. Consequently, genotyping from DNA gel electrophoresis can be performed automatically, allowing users to efficiently conduct large-scale DNA fingerprinting via DNA gel electrophoresis. The software is freely available from http://www.biotec.or.th/gi/tools/gelect.
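
    A minimal sketch of lane segmentation via a column-intensity profile, one common starting point for this kind of analysis; it is not GELect's actual algorithm, and the expected lane width is a placeholder.

    ```python
    # Toy lane segmentation: lanes appear as dark vertical bands, so the
    # boundaries between lanes show up as valleys of the column-sum profile
    # of an inverted grayscale gel image.
    import numpy as np
    from scipy.signal import find_peaks

    def segment_lanes(gel: np.ndarray, expected_lane_width: int = 30):
        """Return column indices of lane boundaries in a grayscale gel image."""
        inverted = gel.max() - gel.astype(float)     # bands become bright
        profile = inverted.sum(axis=0)               # per-column intensity
        # Boundaries between lanes are local minima of the band profile
        boundaries, _ = find_peaks(-profile, distance=expected_lane_width)
        return boundaries

    gel = np.random.default_rng(1).uniform(0, 255, size=(400, 600))
    print(segment_lanes(gel))
    ```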

  9. Automatic vision system for analysis of microscopic behavior of flow and transport in porous media

    NASA Astrophysics Data System (ADS)

    Rashidi, Mehdi; Dehmeshki, Jamshid; Dickenson, Eric; Daemi, M. Farhang

    1997-10-01

    This paper describes the development of a novel, automated and efficient vision system for obtaining velocity and concentration measurements within a porous medium. An aqueous fluid, laced with a fluorescent dye or microspheres, flows through a transparent, refractive-index-matched column packed with transparent crystals. For illumination purposes, a planar laser sheet passes through the column while a CCD camera records the laser-illuminated planes. Detailed microscopic velocity and concentration fields have been computed within a 3D volume of the column. For measuring velocities, while the aqueous fluid laced with fluorescent microspheres flows through the transparent medium, a CCD camera records the motions of the fluorescing particles on a video cassette recorder. The recorded images are acquired automatically frame by frame and transferred to the computer for processing, using a frame grabber and purpose-written algorithms, through an RS-232 interface. Since the grabbed images are of poor quality at this stage, several preprocessing steps are applied to enhance the particles within the images. Finally, the enhanced particles are tracked to calculate velocity vectors in the plane of the beam. For concentration measurements, while the aqueous fluid laced with a fluorescent organic dye flows through the transparent medium, the CCD camera sweeps back and forth across the column and records concentration slices on the planes illuminated by the laser beam traveling simultaneously with the camera. These recorded images are then transferred to the computer for processing in a fashion similar to the velocity measurements. To achieve a fully automatic vision system, several detailed image processing techniques were developed to match images that have different intensity values but the same topological characteristics. This yields normalized interstitial chemical concentrations as a function of time within the porous column.

  10. The Extraction of Terrace in the Loess Plateau Based on radial method

    NASA Astrophysics Data System (ADS)

    Liu, W.; Li, F.

    2016-12-01

    The terraces of the Loess Plateau are a typical kind of artificial landform and an important measure for soil and water conservation; their positioning and automatic extraction would simplify the work of land use investigation. Existing methods of terrace extraction mainly include visual interpretation and automatic extraction. The manual method is used in land use investigation, but it is time-consuming and laborious, so researchers have put forward several automatic extraction methods. For example, the Fourier transform method can recognize terraces and find their accurate position in the frequency-domain image, but it is strongly affected by linear objects oriented in the same direction as the terraces. Texture analysis is simple and widely applicable in image processing, but it cannot recognize terrace edges. Object-oriented classification is a newer method, but when it is applied to terrace extraction, fragmented polygons are the most serious problem, and their geological meaning is difficult to interpret. In order to position the terraces, we use high-resolution remote sensing imagery to extract and analyze the gray values of the pixels that each radial line passes through. During the recognition process, we first use DEM data analysis, or manual selection, to roughly determine the positions of peak points; second, we take each peak point as a center and cast radial lines in all directions; finally, we extract the gray values of the pixels along each radial line and analyze their variation to determine whether a terrace exists. To obtain accurate terrace positions, the discontinuity of terraces, their extension direction, ridge width, the image processing algorithm, remote sensing image illumination and other influencing factors were fully considered when designing the algorithms.
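
    A minimal sketch of the radial sampling step: gray values are sampled along radial lines cast from a peak point, and strongly oscillating profiles hint at terracing. All parameters are illustrative placeholders.

    ```python
    # Sample gray values along radial lines from a peak point.
    import numpy as np

    def radial_profiles(image, center, n_rays=36, max_radius=200):
        """Sample gray values along n_rays radial lines from `center`."""
        h, w = image.shape
        cy, cx = center
        profiles = []
        for theta in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
            radii = np.arange(1, max_radius)
            ys = np.clip((cy + radii * np.sin(theta)).astype(int), 0, h - 1)
            xs = np.clip((cx + radii * np.cos(theta)).astype(int), 0, w - 1)
            profiles.append(image[ys, xs])
        return np.array(profiles)

    image = np.random.default_rng(2).uniform(0, 255, size=(500, 500))
    profiles = radial_profiles(image, center=(250, 250))
    # Periodic oscillation along a ray would suggest terrace steps there.
    print(profiles.std(axis=1).max())
    ```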

  11. GESA--a two-dimensional processing system using knowledge base techniques.

    PubMed

    Rowlands, D G; Flook, A; Payne, P I; van Hoff, A; Niblett, T; McKee, S

    1988-12-01

    The successful analysis of two-dimensional (2-D) polyacrylamide electrophoresis gels demands considerable experience and understanding of the protein system under investigation, as well as knowledge of the separation technique itself. The present work concerns the development of a computer system for analysing 2-D electrophoretic separations which incorporates concepts derived from artificial intelligence research, such that non-experts can use the technique as a diagnostic or identification tool. Automatic analysis of 2-D gel separations has proved extremely difficult using statistical methods, and the non-reproducibility of gel separations is also difficult to overcome with automatic systems. However, the human eye is extremely good at recognising patterns in images, and human intervention in semi-automatic computer systems can reduce the computational complexity of fully automatic systems. Moreover, the expertise and understanding of an "expert" is invaluable in reducing system complexity if it can be encapsulated satisfactorily in an expert system. The combination of user intervention in the computer system together with the encapsulation of expert knowledge characterises the present system. The domain within which the system has been developed is that of wheat grain storage proteins (gliadins), which exhibit polymorphism to such an extent that cultivars can be uniquely identified by their gliadin patterns. The system can be adapted to other domains where a range of polymorphic protein sub-units exists. In its generalised form, the system can also be used for comparing more complex 2-D gel electrophoretic separations.

  12. Automatic selection of optimal Savitzky-Golay filter parameters for Coronary Wave Intensity Analysis.

    PubMed

    Rivolo, Simone; Nagel, Eike; Smith, Nicolas P; Lee, Jack

    2014-01-01

    Coronary Wave Intensity Analysis (cWIA) is a technique capable of separating the effects of proximal arterial haemodynamics from cardiac mechanics. The ability of cWIA to establish a mechanistic link between coronary haemodynamic measurements and the underlying pathophysiology has been widely demonstrated, and the prognostic value of a cWIA-derived metric has recently been proved. However, the clinical application of cWIA has been hindered by a strong dependence on the practitioner, mainly ascribable to the sensitivity of the cWIA-derived indices to the pre-processing parameters. Specifically, as recently demonstrated, the cWIA-derived metrics are strongly sensitive to the Savitzky-Golay (S-G) filter typically used to smooth the acquired traces. This is mainly due to the inability of the S-G filter to deal with the features at different timescales present in the measured waveforms. We therefore propose an adaptive S-G algorithm that automatically selects the optimal filter parameters pointwise. The accuracy of the newly proposed algorithm is assessed against a cWIA gold standard, provided by a newly developed in-silico cWIA modelling framework, with physiological noise added to the simulated traces. The adaptive S-G algorithm, when used to automatically select the polynomial degree of the S-G filter, provides satisfactory results, with ≤ 10% error for all metrics at all noise levels tested. The newly proposed method therefore makes cWIA fully automatic and practitioner-independent, opening the possibility of multi-centre trials.
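
    A toy sketch of pointwise parameter selection for the S-G filter: several polynomial orders are precomputed and, at each sample, the order minimizing a simple fit-plus-roughness score is chosen. The selection criterion is a stand-in, not the authors' algorithm.

    ```python
    # Pointwise selection of the Savitzky-Golay polynomial order; the
    # fit-plus-roughness score is an illustrative placeholder criterion.
    import numpy as np
    from scipy.signal import savgol_filter

    def adaptive_savgol(y, window=21, orders=(2, 3, 4, 5), lam=0.5):
        half = window // 2
        candidates = {k: savgol_filter(y, window, k) for k in orders}
        out = np.empty(len(y))
        for i in range(len(y)):
            lo, hi = max(0, i - half), min(len(y), i + half + 1)
            def score(k):
                seg = candidates[k][lo:hi]
                fit = np.mean((seg - y[lo:hi]) ** 2)        # data fidelity
                rough = np.mean(np.diff(seg, n=2) ** 2)     # curvature penalty
                return fit + lam * rough
            out[i] = candidates[min(orders, key=score)][i]
        return out

    t = np.linspace(0, 1, 500)
    noisy = np.sin(8 * np.pi * t) + 0.1 * np.random.default_rng(3).normal(size=500)
    smoothed = adaptive_savgol(noisy)
    ```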

  13. Automatic Molar Extraction from Dental Panoramic Radiographs for Forensic Personal Identification

    NASA Astrophysics Data System (ADS)

    Samopa, Febriliyan; Asano, Akira; Taguchi, Akira

    Measurement of an individual molar provides rich information for forensic personal identification. We propose a computer-based system for extracting an individual molar from dental panoramic radiographs. A molar is obtained by extracting the region of interest, separating the maxilla and mandible, and extracting the boundaries between teeth. The proposed system is almost fully automatic; all the user has to do is click three points on the boundary between the maxilla and the mandible.

  14. Automatic detection of spiculation of pulmonary nodules in computed tomography images

    NASA Astrophysics Data System (ADS)

    Ciompi, F.; Jacobs, C.; Scholten, E. T.; van Riel, S. J.; W. Wille, M. M.; Prokop, M.; van Ginneken, B.

    2015-03-01

    We present a fully automatic method for the assessment of spiculation of pulmonary nodules in low-dose computed tomography (CT) images. Spiculation is considered one of the indicators of nodule malignancy and an important feature to assess in deciding on a patient-tailored follow-up procedure. For this reason, a lung cancer screening scenario would benefit from a fully automatic system for the assessment of spiculation. The presented framework relies on the fact that spiculated nodules differ from non-spiculated ones mainly in their morphology. In order to discriminate between the two categories, information on morphology is captured by sampling intensity profiles along circular patterns on spherical surfaces centered on the nodule, in a multi-scale fashion. Each intensity profile is interpreted as a periodic signal to which the Fourier transform is applied, obtaining a spectrum. A library of spectra is created by clustering the data via unsupervised learning. The centroids of the clusters are then used to label each spectrum in the sampling pattern. A compact descriptor encoding the nodule morphology is obtained as the histogram of labels over all the spherical surfaces and used to classify spiculated nodules via supervised learning. We tested our approach on a set of nodules from the Danish Lung Cancer Screening Trial (DLCST) dataset. Our results show that the proposed method outperforms other 3D descriptors of morphology in the automatic assessment of spiculation.
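
    A minimal sketch of the descriptor pipeline described above: intensity profiles are mapped to Fourier magnitude spectra, a spectrum vocabulary is built by k-means clustering, and the histogram of cluster labels forms the descriptor. Profile counts and cluster numbers are illustrative placeholders.

    ```python
    # Profiles -> magnitude spectra -> k-means vocabulary -> label histogram.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)
    profiles = rng.normal(size=(500, 64))        # circular intensity profiles

    # Treat each profile as a periodic signal; keep the magnitude spectrum
    spectra = np.abs(np.fft.rfft(profiles, axis=1))

    # Build a library of prototype spectra by unsupervised clustering
    kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(spectra)

    # Label each profile's spectrum and form the histogram descriptor
    labels = kmeans.predict(spectra)
    descriptor, _ = np.histogram(labels, bins=np.arange(17), density=True)
    # `descriptor` would then feed a supervised classifier of spiculation.
    ```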

  15. An Automatic Image Processing Workflow for Daily Magnetic Resonance Imaging Quality Assurance.

    PubMed

    Peltonen, Juha I; Mäkelä, Teemu; Sofiev, Alexey; Salli, Eero

    2017-04-01

    The performance of magnetic resonance imaging (MRI) equipment is typically monitored with a quality assurance (QA) program. The QA program includes various tests performed at regular intervals; users may execute specific tests, e.g., daily, weekly, or monthly. The exact interval of these measurements varies according to department policies, machine setup and usage, the manufacturer's recommendations, and available resources. In our experience, a single image acquired before the first patient of the day offers a low-effort and effective system check. When this daily QA check is repeated with identical imaging parameters and phantom setup, the data can be used to derive various time series of scanner performance. However, daily QA with manual processing can quickly become laborious in a multi-scanner environment. Fully automated image analysis and results output can positively impact the QA process by decreasing reaction time, improving repeatability, and offering novel performance evaluation methods. In this study, we have developed a daily MRI QA workflow that can measure multiple scanner performance parameters with minimal manual labor. The daily QA system is built around a phantom image taken by the radiographers at the beginning of the day. The image is acquired with a consistent phantom setup and standardized imaging parameters. The recorded parameters are processed into graphs available to everyone involved in the MRI QA process via a web-based interface. The presented automatic MRI QA system provides an efficient tool for following the short- and long-term stability of MRI scanners.
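
    A minimal sketch of one daily QA measurement in this spirit: estimate the phantom SNR from the day's image and append it to a time series for drift monitoring. The ROI positions, file layout, and warning threshold are hypothetical.

    ```python
    # Toy daily QA step: SNR from fixed ROIs, appended to a CSV time series.
    import csv
    import datetime
    import numpy as np

    def phantom_snr(image: np.ndarray) -> float:
        """SNR = mean of a central signal ROI / std of a corner noise ROI."""
        h, w = image.shape
        signal = image[h//2 - 20:h//2 + 20, w//2 - 20:w//2 + 20].mean()
        noise = image[:40, :40].std()
        return float(signal / noise)

    def record_daily_qa(image, path="daily_qa.csv", warn_fraction=0.1):
        snr = phantom_snr(image)
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow([datetime.date.today().isoformat(),
                                    f"{snr:.2f}"])
        with open(path) as f:
            history = [float(row[1]) for row in csv.reader(f)]
        baseline = np.median(history)
        if snr < (1 - warn_fraction) * baseline:
            print(f"WARNING: SNR {snr:.1f} is more than 10% below "
                  f"the baseline {baseline:.1f}")
    ```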

  16. Computer-aided diagnosis system: a Bayesian hybrid classification method.

    PubMed

    Calle-Alonso, F; Pérez, C J; Arias-Nicolás, J P; Martín, J

    2013-10-01

    A novel method to classify multi-class biomedical objects is presented. The method is based on a hybrid approach that combines pairwise comparison, Bayesian regression and the k-nearest neighbor technique. It can be applied in a fully automatic way or in a relevance-feedback framework. In the latter case, the information obtained from both an expert and the automatic classification is used iteratively to improve the results until a certain accuracy level is achieved; then the learning process is finished and new classifications can be performed automatically. The method has been applied in two biomedical contexts, following the same cross-validation schemes as the original studies. The first refers to cancer diagnosis, leading to an accuracy of 77.35% versus the 66.37% obtained originally. The second considers the diagnosis of pathologies of the vertebral column, where the original method achieves accuracies ranging from 76.5% to 96.7%, and from 82.3% to 97.1%, in two different cross-validation schemes. Even with no supervision, the proposed method reaches 96.71% and 97.32% in these two cases; using the supervised framework, the achieved accuracy is 97.74%. Furthermore, all abnormal cases were correctly classified. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  17. MetMSLine: an automated and fully integrated pipeline for rapid processing of high-resolution LC-MS metabolomic datasets.

    PubMed

    Edmands, William M B; Barupal, Dinesh K; Scalbert, Augustin

    2015-03-01

    MetMSLine is a complete collection of functions in the R programming language, presented as an accessible GUI, for biomarker discovery in large-scale liquid-chromatography high-resolution mass spectral datasets, from acquisition through to final metabolite identification, forming a backend to the output of any peak-picking software such as XCMS. MetMSLine automatically creates subdirectories, data tables and relevant figures at the following steps: (i) signal smoothing, normalization, filtration and noise transformation (PreProc.QC.LSC.R); (ii) PCA and automatic outlier removal (Auto.PCA.R); (iii) automatic regression, biomarker selection, hierarchical clustering and cluster ion/artefact identification (Auto.MV.Regress.R); (iv) biomarker MS/MS fragmentation spectra matching and fragment/neutral-loss annotation (Auto.MS.MS.match.R); and (v) semi-targeted metabolite identification based on a list of theoretical masses obtained from public databases (DBAnnotate.R). All source code and suggested parameters are available in an un-encapsulated layout on http://wmbedmands.github.io/MetMSLine/. Readme files and a synthetic dataset of X-variables (simulated LC-MS data), Y-variables (simulated continuous variables) and metabolite theoretical masses are also available on our GitHub repository. © The Author 2014. Published by Oxford University Press.

  18. MetMSLine: an automated and fully integrated pipeline for rapid processing of high-resolution LC–MS metabolomic datasets

    PubMed Central

    Edmands, William M. B.; Barupal, Dinesh K.; Scalbert, Augustin

    2015-01-01

    Summary: MetMSLine is a complete collection of functions in the R programming language, presented as an accessible GUI, for biomarker discovery in large-scale liquid-chromatography high-resolution mass spectral datasets, from acquisition through to final metabolite identification, forming a backend to the output of any peak-picking software such as XCMS. MetMSLine automatically creates subdirectories, data tables and relevant figures at the following steps: (i) signal smoothing, normalization, filtration and noise transformation (PreProc.QC.LSC.R); (ii) PCA and automatic outlier removal (Auto.PCA.R); (iii) automatic regression, biomarker selection, hierarchical clustering and cluster ion/artefact identification (Auto.MV.Regress.R); (iv) biomarker MS/MS fragmentation spectra matching and fragment/neutral-loss annotation (Auto.MS.MS.match.R); and (v) semi-targeted metabolite identification based on a list of theoretical masses obtained from public databases (DBAnnotate.R). Availability and implementation: All source code and suggested parameters are available in an un-encapsulated layout on http://wmbedmands.github.io/MetMSLine/. Readme files and a synthetic dataset of X-variables (simulated LC–MS data), Y-variables (simulated continuous variables) and metabolite theoretical masses are also available on our GitHub repository. Contact: ScalbertA@iarc.fr PMID:25348215

  19. Automatic estimation of elasticity parameters in breast tissue

    NASA Astrophysics Data System (ADS)

    Skerl, Katrin; Cochran, Sandy; Evans, Andrew

    2014-03-01

    Shear wave elastography (SWE), a novel ultrasound imaging technique, can provide unique information about cancerous tissue. To estimate elasticity parameters, a region of interest (ROI) is normally positioned manually over the stiffest part of the shear wave image (SWI). The aim of this work is to estimate the elasticity parameters, i.e. mean elasticity, maximal elasticity and standard deviation, fully automatically. Ultrasonic SWIs of a breast elastography phantom and of breast tissue in vivo were acquired using the Aixplorer system (SuperSonic Imagine, Aix-en-Provence, France). First, the SWI within the ultrasonic B-mode image is detected using MATLAB, and the elasticity values are extracted. The ROI is then automatically positioned over the stiffest part of the SWI, and the elasticity parameters are calculated. Finally, all values are saved in a spreadsheet that also contains the patient's study ID. This spreadsheet is readily available to physicians and clinical staff for further evaluation, increasing efficiency. The algorithm simplifies handling, especially in the conduct and evaluation of clinical trials. The SWE processing method gives physicians easy access to the elasticity parameters of examinations from their own and other institutions, reducing clinical time and effort, simplifying the evaluation of data in clinical trials, and improving reproducibility.
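
    A minimal sketch of the automatic ROI placement: slide an averaging window over the elasticity map and centre the ROI where the local mean stiffness is highest. The ROI size is an illustrative placeholder.

    ```python
    # Place the ROI at the maximum of a local-mean-filtered elasticity map.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def place_roi(elasticity_map: np.ndarray, roi_size: int = 15):
        """Return (row, col) of the ROI centre and its elasticity statistics."""
        local_mean = uniform_filter(elasticity_map.astype(float), size=roi_size)
        r, c = np.unravel_index(np.argmax(local_mean), local_mean.shape)
        half = roi_size // 2
        roi = elasticity_map[max(0, r - half):r + half + 1,
                             max(0, c - half):c + half + 1]
        return (r, c), {"mean": roi.mean(), "max": roi.max(), "std": roi.std()}

    emap = np.random.default_rng(5).uniform(0, 100, size=(200, 300))  # kPa
    centre, stats = place_roi(emap)
    print(centre, stats)
    ```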

  20. ATPP: A Pipeline for Automatic Tractography-Based Brain Parcellation

    PubMed Central

    Li, Hai; Fan, Lingzhong; Zhuo, Junjie; Wang, Jiaojian; Zhang, Yu; Yang, Zhengyi; Jiang, Tianzi

    2017-01-01

    There is a longstanding effort to parcellate the brain into areas based on micro-structural, macro-structural, or connectional features, forming various brain atlases. Among them, connectivity-based parcellation has gained particular emphasis, especially with the considerable progress of multimodal magnetic resonance imaging over the past two decades. The recently published Brainnetome Atlas is one such atlas that follows the framework of connectivity-based parcellation. However, in the construction of the atlas, the deluge of high-resolution multimodal MRI data and the time-consuming computation pose challenges, and there is still a shortage of publicly available tools dedicated to parcellation. In this paper, we present an integrated open-source pipeline (https://www.nitrc.org/projects/atpp), named the Automatic Tractography-based Parcellation Pipeline (ATPP), which realizes the parcellation framework with automatic processing and massively parallel computing. ATPP has a powerful and flexible command-line version that takes multiple regions of interest as input, as well as a user-friendly graphical user interface version for parcellating a single region of interest. We demonstrate the two versions by parcellating two brain regions, the left precentral gyrus and middle frontal gyrus, on two independent datasets. In addition, ATPP has been successfully utilized and fully validated on a variety of brain regions and in the human Brainnetome Atlas, showing its capacity to greatly facilitate brain parcellation. PMID:28611620

  1. Towards automatic patient positioning and scan planning using continuously moving table MR imaging.

    PubMed

    Koken, Peter; Dries, Sebastian P M; Keupp, Jochen; Bystrov, Daniel; Pekar, Vladimir; Börnert, Peter

    2009-10-01

    A concept is proposed to simplify patient positioning and scan planning, improving ease of use and workflow in MR. After patient preparation in front of the scanner, the operator selects the anatomy of interest by a single push-button action. Subsequently, the patient table is moved automatically into the scanner, while real-time 3D isotropic low-resolution continuously-moving-table scout scanning is performed using patient-independent MR system settings. With a real-time organ identification process running in parallel and steering the scanner, the target anatomy can be positioned fully automatically in the scanner's sensitive volume. The desired diagnostic examination of the anatomy of interest can then be planned and continued immediately, using the geometric information derived from the acquired 3D data. The concept was implemented and successfully tested in vivo in 12 healthy volunteers, focusing on the liver as the target anatomy. The positioning accuracy achieved was on the order of several millimeters, which turned out to be sufficient for initial planning purposes. Furthermore, the impact of non-optimal system settings on the positioning performance, the signal-to-noise ratio (SNR), and the contrast-to-noise ratio (CNR) was investigated. The present work proves the basic concept of the proposed approach as an element of future scan automation. (c) 2009 Wiley-Liss, Inc.

  2. Assessment of automatic exposure control performance in digital mammography using a no-reference anisotropic quality index

    NASA Astrophysics Data System (ADS)

    Barufaldi, Bruno; Borges, Lucas R.; Bakic, Predrag R.; Vieira, Marcelo A. C.; Schiabel, Homero; Maidment, Andrew D. A.

    2017-03-01

    Automatic exposure control (AEC) is used in mammography to obtain an acceptable radiation dose and adequate image quality regardless of breast thickness and composition. Although there are physics methods for assessing AEC, it is not clear whether mammography systems operate with optimal dose and image quality in clinical practice. In this work, we propose the use of a normalized anisotropic quality index (NAQI), validated in previous studies, to evaluate the quality of mammograms acquired using AEC. The authors used a clinical dataset consisting of 561 patients and 1,046 mammograms (craniocaudal breast views). The results show that image quality is often maintained, even at various radiation levels (mean NAQI = 0.14 ± 0.02). However, a more careful analysis reveals that average image quality decreases as breast thickness increases: NAQI is reduced by 32% on average when breast thickness increases from 31 to 71 mm. NAQI also decreases with lower breast density. The variation in breast parenchyma alone cannot fully account for the decrease of NAQI with thickness; examination of the images shows that images of large, fatty breasts are often inadequately processed. This work shows that NAQI can be applied to clinical mammograms to assess mammographic image quality, and it highlights the limitations of automatic exposure control for some images.

  3. Probabilistic Perception, Empathy, and Dynamic Homeostasis: Insights in Autism Spectrum Disorders and Conduct Disorders

    PubMed Central

    Guilé, Jean Marc

    2013-01-01

    Homeostasis is not a permanent and stable state but instead results from conflicting forces. Therefore, infants have to engage in dynamic exchanges with their environment in the biological, cognitive, and affective domains. Empathy is an adaptive response to these environmental challenges, which contributes to reaching proper dynamic homeostasis and development. Empathy relies on implicit interactive processes, namely probabilistic perception and synchrony, which are reviewed in this article. While typically-developed neonates are fully equipped to automatically and synchronously interact with their human environment, conduct disorders (CD) and autism spectrum disorders (ASD) present with impairments in empathetic communication, e.g., emotional arousal and facial emotion processing. In addition, sensorimotor resonance is lacking in ASD, and emotional concern and semantic empathy are impaired in CD with Callous-Unemotional traits. PMID:24479115

  4. Computer-assisted automatic synthesis II. Development of a fully automated apparatus for preparing substituted N–(carboxyalkyl)amino acids

    PubMed Central

    Hayashi, Nobuyoshi; Sugawara, Tohru; Shintani, Motoaki; Kato, Shinji

    1989-01-01

    A versatile automated apparatus, equipped with artificial intelligence, has been developed which may be used to prepare and isolate a wide variety of compounds. The prediction of the optimum reaction conditions and the reaction control in real time are accomplished using novel kinetic equations and substituent effects in artificial intelligence software which has already been reported [1]. This paper deals with the design and construction of the fully automated system and its application to the synthesis of a substituted N-(carboxyalkyl)amino acid. The apparatus is composed of units for performing various tasks, e.g. reagent supply, reaction, purification and separation, each linked to a control system. All synthetic processes, including the washing and drying of the apparatus after each synthetic run, were performed automatically, from the mixing of the reactants to the isolation of the products as powders with purities greater than 98%. The automated apparatus has been able to run 24 hours per day, and the average rate of synthesis of substituted N-(carboxyalkyl)amino acids has been three compounds daily. The apparatus is extremely valuable for synthesizing many derivatives of one particular compound structure. Even if the chemical yields are low under the optimum conditions, it is still possible to obtain a sufficient amount of the desired product by repeating the reaction. Moreover, it has greatly reduced the manual involvement in the many syntheses which are a necessary part of pharmaceutical research. PMID:18924679

  5. Fully automated MR liver volumetry using watershed segmentation coupled with active contouring.

    PubMed

    Huynh, Hieu Trung; Le-Trong, Ngoc; Bao, Pham The; Oto, Aytek; Suzuki, Kenji

    2017-02-01

    Our purpose is to develop a fully automated scheme for liver volume measurement in abdominal MR images, requiring no user input or interaction. The proposed scheme performs fully automatic liver volumetry from 3D abdominal MR images in three main stages: preprocessing, rough liver shape generation, and liver extraction. The preprocessing stage reduces noise and enhances the liver boundaries in the 3D abdominal MR images. The rough liver shape is obtained fully automatically by using watershed segmentation, a thresholding transform, morphological operations, and statistical properties of the liver. An active contour model is then applied to refine the rough liver shape and precisely obtain the liver boundaries. The liver volumes calculated by the proposed scheme were compared to "gold standard" references estimated by an expert abdominal radiologist. In an evaluation with 27 cases from multiple medical centers, the liver volumes computed by our scheme agreed excellently with the gold-standard manual volumes (the intra-class correlation coefficient was 0.94). The running time was 8.4 min per case on average. We thus developed a fully automated liver volumetry scheme for MR which does not require any user interaction and was evaluated with cases from multiple medical centers. Its performance was comparable to that of gold-standard manual volumetry, and it saves the 24.7 min per case that radiologists require for manual liver volumetry.
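
    A minimal 2D sketch of the rough-liver-shape stage under simplifying assumptions: smoothing, Otsu thresholding, and marker-based watershed, keeping the largest region. The active-contour refinement used in the paper is not reproduced here, and all parameter values are placeholders.

    ```python
    # Rough organ mask from smoothing, thresholding, and watershed.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import gaussian, threshold_otsu
    from skimage.segmentation import watershed

    def rough_liver_mask(slice2d: np.ndarray) -> np.ndarray:
        smoothed = gaussian(slice2d.astype(float), sigma=2)   # noise reduction
        binary = smoothed > threshold_otsu(smoothed)          # thresholding
        distance = ndi.distance_transform_edt(binary)
        markers, _ = ndi.label(distance > 0.6 * distance.max())
        labels = watershed(-distance, markers, mask=binary)   # split regions
        # Keep the largest labelled region as the rough liver candidate
        sizes = np.bincount(labels.ravel())
        sizes[0] = 0
        return labels == sizes.argmax()
    ```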

  6. Development of advanced image analysis techniques for the in situ characterization of multiphase dispersions occurring in bioreactors.

    PubMed

    Galindo, Enrique; Larralde-Corona, C Patricia; Brito, Teresa; Córdova-Aguilar, Ma Soledad; Taboada, Blanca; Vega-Alvarado, Leticia; Corkidi, Gabriel

    2005-03-30

    Fermentation bioprocesses typically involve two liquid phases (i.e. water and organic compounds) and one gas phase (air), together with suspended solids (i.e. biomass), which are the components to be dispersed. Characterization of multiphase dispersions is required because it determines mass transfer efficiency and bioreactor homogeneity. It is also needed for the appropriate design of contacting equipment, helping to establish optimum operational conditions. This work describes the development of image-analysis-based techniques, with advantages in terms of data acquisition and processing, for the characterization of oil drop and bubble diameters in complex simulated fermentation broths. The system consists of fully digital acquisition of in situ images obtained from the inside of a mixing tank using a CCD camera synchronized with a stroboscopic light source; the images are processed with versatile commercial software. To improve the automation of particle recognition and counting, the Hough transform (HT) was used, so bubbles and oil drops were detected automatically and the processing time was reduced by 55% without losing accuracy with respect to a fully manual analysis. The system has been used for the detailed characterization of a number of operational conditions, including oil content, biomass morphology, presence of surfactants (such as proteins) and viscosity of the aqueous phase.
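
    A minimal example of the Hough-transform detection step, using OpenCV's circular Hough implementation; the file name and all parameter values below are hypothetical and would need tuning to the real imaging setup.

    ```python
    import cv2

    # Load one in-situ frame (hypothetical file name) and suppress speckle.
    img = cv2.imread("dispersion_frame.png", cv2.IMREAD_GRAYSCALE)
    img = cv2.medianBlur(img, 5)

    # Circular Hough transform: returns (x, y, r) per detected circle.
    circles = cv2.HoughCircles(
        img, cv2.HOUGH_GRADIENT,
        dp=1.2,          # inverse ratio of accumulator resolution
        minDist=15,      # minimum distance between circle centres
        param1=100,      # Canny high threshold used internally
        param2=30,       # accumulator threshold (lower -> more circles)
        minRadius=3, maxRadius=60,
    )
    if circles is not None:
        radii = circles[0, :, 2]
        print(f"{len(radii)} particles, mean diameter {2 * radii.mean():.1f} px")
    ```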

  7. On the dependence of information display quality requirements upon human characteristics and pilot/automatics relations

    NASA Technical Reports Server (NTRS)

    Wilckens, V.

    1972-01-01

    Present information display concepts for pilot landing guidance are outlined, considering manual control as well as the substitution of the pilot by fully competent automatics. Display improvements are achieved by compressing the distributed indicators into an accumulative display, thus reducing information scanning. Complete integration of quantitative indications, outer-loop information, and a real-world display in a pictorial information channel geometry constitutes an interface matched to the human ability to differentiate and integrate, for optimal manual control of the aircraft.

  8. Fully automatic assignment of small molecules' NMR spectra without relying on chemical shift predictions.

    PubMed

    Castillo, Andrés M; Bernal, Andrés; Patiny, Luc; Wist, Julien

    2015-08-01

    We present a method for the automatic assignment of small molecules' NMR spectra. The method includes an automatic and novel self-consistent peak-picking routine that validates NMR peaks in each spectrum against peaks in the same or other spectra that are due to the same resonances. The auto-assignment routine is based on branch-and-bound optimization and relies predominantly on integration and correlation data; chemical shift information may be included when available to speed up the search and shorten the list of viable assignments, but in most cases tested it is not required in order to find the correct assignment. This automatic assignment method is implemented as a web-based tool that runs without any user input other than the acquired spectra. Copyright © 2015 John Wiley & Sons, Ltd.
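
    The branch-and-bound idea can be sketched generically as follows. This is not the authors' algorithm: the candidate lists and the cost function are placeholders, and the pruning step is only safe if the cost is non-decreasing as the partial assignment grows.

    ```python
    import math

    def branch_and_bound(candidates, cost):
        """candidates[i]: options for resonance i; cost(partial): penalty of a
        partial assignment, assumed non-decreasing as the assignment grows."""
        best = {"assignment": None, "cost": math.inf}

        def recurse(i, partial):
            c = cost(partial)
            if c >= best["cost"]:            # bound: prune dominated branches
                return
            if i == len(candidates):         # leaf: complete assignment found
                best["assignment"], best["cost"] = list(partial), c
                return
            for option in candidates[i]:     # branch over options for item i
                if option not in partial:    # enforce a one-to-one assignment
                    partial.append(option)
                    recurse(i + 1, partial)
                    partial.pop()

        recurse(0, [])
        return best["assignment"], best["cost"]
    ```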

  9. A fully automated meltwater monitoring and collection system for spatially distributed isotope analysis in snowmelt-dominated catchments

    NASA Astrophysics Data System (ADS)

    Rücker, Andrea; Boss, Stefan; Von Freyberg, Jana; Zappa, Massimiliano; Kirchner, James

    2016-04-01

    In many mountainous catchments the seasonal snowpack stores a significant volume of water, which is released as streamflow during the melting period. The predicted change in future climate will bring new challenges in water resource management in snow-dominated headwater catchments and their receiving lowlands. To improve predictions of hydrologic extreme events, particularly summer droughts, it is important to characterize the relationship between winter snowpack and summer (low) flows in such areas (e.g., Godsey et al., 2014). In this context, stable water isotopes (18O, 2H) are a powerful tool for fingerprinting the sources of streamflow and tracing water flow pathways. For this reason, we have established an isotope sampling network in the Alptal catchment (46.4 km2) in central Switzerland as part of the SREP-Drought project (Snow Resources and the Early Prediction of hydrological DROUGHT in mountainous streams). Samples of precipitation (daily), snow cores (weekly) and runoff (daily) are analyzed for their isotopic signature in a regular cycle. Precipitation is also sampled along a horizontal transect at the valley bottom, and along an elevational transect. Additionally, the analysis of snow meltwater is of importance. As the collection of snow meltwater samples in mountainous terrain is often impractical, we have developed a fully automatic snow lysimeter system, which measures meltwater volume and collects samples for isotope analysis at daily intervals. The system consists of three lysimeters built from Decagon ECRN-100 High Resolution Rain Gauges as the standard component that allows monitoring of meltwater flow. Each lysimeter leads the meltwater into a 10-liter container that is automatically sampled and then emptied daily. These water samples are replaced regularly and analyzed afterwards for their isotopic composition in the lab. Snowmelt events as well as system status can be monitored in real time. In our presentation we describe the automatic snow lysimeter system and present initial results from field tests in winter 2015/2016 under natural conditions at an experimental field site. Fully functional deployment in a forested and an open field location in the Erlenbach subcatchment (0.7 km2) is envisaged for winter 2016/2017. Godsey, S.E., J.W. Kirchner and C.L. Tague, Effects of changes in winter snowpacks on summer low flows: case studies in the Sierra Nevada, California, USA, Hydrological Processes, 28, 5048-5064, doi: 10.1002/hyp.9943, 2014.

  10. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    NASA Astrophysics Data System (ADS)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for imaging cancer in vivo. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, particularly considering the development of patient-personalized radiotherapy treatments for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The proposed PET segmentation strategies were typically validated in ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice, and 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV with a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and non-homogeneous uptake) consisted of the combined use of commercially available standard anthropomorphic phantoms and irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a clinical context and showed good accuracy both in ideal and in realistic conditions.
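
    A minimal sketch of the two ingredients named above, combining scikit-learn k-means background estimation with a background-corrected threshold; the 42% level is a common PET heuristic used here as an assumption, not the value calibrated on the NEMA IQ phantom.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def estimate_mtv(pet_roi, frac=0.42):
        """pet_roi: 3D array of SUV values around one lesion."""
        # k-means on the voxel intensities; the low-uptake cluster centre
        # serves as the background estimate.
        km = KMeans(n_clusters=2, n_init=10, random_state=0)
        km.fit(pet_roi.reshape(-1, 1))
        background = km.cluster_centers_.min()

        # Background-corrected threshold relative to the lesion maximum.
        threshold = background + frac * (pet_roi.max() - background)
        mask = pet_roi >= threshold
        return mask, int(mask.sum())   # binary MTV mask and voxel count
    ```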

  11. The developing human connectome project: A minimal processing pipeline for neonatal cortical surface reconstruction.

    PubMed

    Makropoulos, Antonios; Robinson, Emma C; Schuh, Andreas; Wright, Robert; Fitzgibbon, Sean; Bozek, Jelena; Counsell, Serena J; Steinweg, Johannes; Vecchiato, Katy; Passerat-Palmbach, Jonathan; Lenz, Gregor; Mortari, Filippo; Tenev, Tencho; Duff, Eugene P; Bastiani, Matteo; Cordero-Grande, Lucilio; Hughes, Emer; Tusor, Nora; Tournier, Jacques-Donald; Hutter, Jana; Price, Anthony N; Teixeira, Rui Pedro A G; Murgasova, Maria; Victor, Suresh; Kelly, Christopher; Rutherford, Mary A; Smith, Stephen M; Edwards, A David; Hajnal, Joseph V; Jenkinson, Mark; Rueckert, Daniel

    2018-06-01

    The Developing Human Connectome Project (dHCP) seeks to create the first 4-dimensional connectome of early life. Understanding this connectome in detail may provide insights into normal as well as abnormal patterns of brain development. Following established best practices adopted by the WU-MINN Human Connectome Project (HCP), and pioneered by FreeSurfer, the project utilises cortical surface-based processing pipelines. In this paper, we propose a fully automated processing pipeline for the structural Magnetic Resonance Imaging (MRI) of the developing neonatal brain. The proposed pipeline consists of a refined framework for cortical and sub-cortical volume segmentation, cortical surface extraction, and cortical surface inflation, which has been specifically designed to address considerable differences between adult and neonatal brains, as imaged using MRI. Using the proposed pipeline, our results demonstrate that images collected from 465 subjects ranging from 28 to 45 weeks post-menstrual age (PMA) can be processed fully automatically, generating cortical surface models that are topologically correct and correspond well with manual evaluations of tissue boundaries in 85% of cases. Results improve on state-of-the-art neonatal tissue segmentation models, and significant errors were found in only 2% of cases, corresponding to subjects with high motion. Downstream, these surfaces will enhance comparisons of functional and diffusion MRI datasets, supporting the modelling of emerging patterns of brain connectivity. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Onboard Autonomous Corrections for Accurate IRF Pointing.

    NASA Astrophysics Data System (ADS)

    Jorgensen, J. L.; Betto, M.; Denver, T.

    2002-05-01

    Over the past decade, the Noise Equivalent Angle (NEA) of onboard attitude reference instruments has decreased from tens of arcseconds to the sub-arcsecond level. This improved performance is partly due to improved sensor technology with enhanced signal-to-noise ratios, and partly due to improved processing electronics which allow for more sophisticated and faster signal processing. However, the main reason for the increased precision is the application of onboard autonomy, which apart from simple outlier rejection also allows for removal of "false positive" answers and other "unexpected" noise sources that would otherwise degrade the quality of the measurements (e.g. discrimination between signals caused by starlight and ionizing radiation). The utilization of autonomous signal processing has also provided the means for another onboard processing step, namely autonomous recovery from lost-in-space, where the attitude instrument, without a priori knowledge, derives the absolute attitude, i.e. in IRF coordinates, within fractions of a second. Combined with precise orbital state or position data, the absolute attitude information opens multiple ways to improve mission performance, whether by reducing operations costs, increasing pointing accuracy, reducing mission expendables, or providing backup decision information in case of anomalies. The Advanced Stellar Compass (ASC) is a miniature, high-accuracy attitude instrument which features fully autonomous operation. This autonomy encompasses all direct steps from automatic health checkout at power-on, over fully automatic SEU and SEL handling and proton-induced sparkle removal, to recovery from lost-in-space, and optical disturbance detection and handling. Apart from these more obvious autonomy functions, the ASC also features functions to handle and remove the aforementioned residuals. These functions encompass diverse operators such as a full orbital state vector model with automatic cloud-filtered GPS updates, a world time clock, astrometric correction tables, and an attitude output transform system, which allow the ASC to deliver the spacecraft attitude relative to the Inertial Reference Frame (IRF) in real time. This paper describes the operations of the onboard autonomy of the ASC, which in real time removes the residuals from the attitude measurements, whereby a timely IRF attitude at the arcsecond level is delivered to the AOCS (or sent to ground). A discussion of achievable robustness and accuracy is given and compared to in-flight results from the operation of the two Advanced Stellar Compasses, which are flying in LEO onboard the German geo-potential research satellite CHAMP. The ASCs onboard CHAMP are dual-head versions, i.e. each processing unit is attached to two star camera heads. The dual-head configuration is primarily employed to achieve carefree AOCS control with respect to the Sun, Moon and Earth, and to increase attitude accuracy, but it also enables onboard estimation and removal of thermally generated biases.

  13. A modified approach combining FNEA and watershed algorithms for segmenting remotely-sensed optical images

    NASA Astrophysics Data System (ADS)

    Liu, Likun

    2018-01-01

    In the field of remote sensing image processing, segmentation is a preliminary step for later analysis, including semi-automatic human interpretation and fully automatic machine recognition and learning. Since 2000, object-oriented remote sensing image processing methods and their underlying ideas have prevailed. The core of this approach is the Fractal Net Evolution Approach (FNEA) multi-scale segmentation algorithm. This paper is devoted to the study and improvement of that algorithm: it analyzes existing segmentation algorithms and selects the watershed algorithm, found to be optimal, as the initialization. The algorithm is then modified by adjusting an area parameter, and further by combining the area parameter with a heterogeneity parameter. Several experiments are then carried out, showing that the modified FNEA algorithm achieves a better segmentation result than a traditional pixel-based method (an FCM algorithm based on neighborhood information) and the plain combination of FNEA and watershed.
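
    For intuition, here is a sketch of an FNEA-style fusion cost: the standard area-weighted spectral heterogeneity term combined with a simple area term standing in for the modified area parameter; the weight and the area penalty are illustrative assumptions, not the paper's formulation.

    ```python
    import numpy as np

    def fusion_cost(r1, r2, w_color=0.8):
        """r1, r2: regions as dicts with 'pixels' (N x bands) and 'area'."""
        merged = np.vstack([r1["pixels"], r2["pixels"]])
        a1, a2 = r1["area"], r2["area"]
        am = a1 + a2

        # Spectral heterogeneity increase, summed over bands (standard FNEA).
        h_color = sum(
            am * merged[:, b].std()
            - (a1 * r1["pixels"][:, b].std() + a2 * r2["pixels"][:, b].std())
            for b in range(merged.shape[1])
        )
        # Simple area-balance penalty standing in for the modified area term.
        h_area = abs(a1 - a2) / am
        return w_color * h_color + (1.0 - w_color) * h_area

    # A merge is accepted only while fusion_cost stays below a scale parameter.
    ```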

  14. Microaneurysm detection using fully convolutional neural networks.

    PubMed

    Chudzik, Piotr; Majumdar, Somshubra; Calivá, Francesco; Al-Diri, Bashir; Hunter, Andrew

    2018-05-01

    Diabetic retinopathy is a microvascular complication of diabetes that can lead to sight loss if not treated early enough. Microaneurysms are the earliest clinical signs of diabetic retinopathy. This paper presents an automatic method for detecting microaneurysms in fundus photographs. A novel patch-based fully convolutional neural network with batch normalization layers and a Dice loss function is proposed. Compared to other methods that require up to five processing stages, it requires only three. Furthermore, to the best of the authors' knowledge, this is the first paper that shows how to successfully transfer knowledge between datasets in the microaneurysm detection domain. The proposed method was evaluated using three publicly available and widely used datasets: E-Ophtha, DIARETDB1, and ROC. It achieved better results than state-of-the-art methods using the FROC metric. The proposed algorithm accomplished the highest sensitivities at low false positive rates, which is particularly important for screening purposes. The performance, simplicity, and robustness of the proposed method demonstrate its suitability for diabetic retinopathy screening applications. Copyright © 2018 Elsevier B.V. All rights reserved.
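
    A minimal PyTorch version of the Dice loss used by such patch-based segmentation networks; a generic sketch, not the authors' code.

    ```python
    import torch

    def dice_loss(pred, target, eps=1.0):
        """pred: sigmoid probabilities; target: binary mask; same shape,
        batch dimension first."""
        pred = pred.reshape(pred.size(0), -1)
        target = target.reshape(target.size(0), -1)
        intersection = (pred * target).sum(dim=1)
        union = pred.sum(dim=1) + target.sum(dim=1)
        # 1 - Dice coefficient, smoothed so empty masks cannot divide by zero.
        return (1.0 - (2.0 * intersection + eps) / (union + eps)).mean()
    ```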

  15. Automation of surface observations program

    NASA Technical Reports Server (NTRS)

    Short, Steve E.

    1988-01-01

    At present, surface weather observing methods are still largely manual and labor intensive. Through the nationwide implementation of Automated Surface Observing Systems (ASOS), this situation can be improved. Two ASOS capability levels are planned. The first is a basic-level system which will automatically observe the weather parameters essential for aviation operations and will operate either with or without supplemental contributions by an observer. The second is a more fully automated, stand-alone system which will observe and report the full range of weather parameters and will operate primarily in the unattended mode. Approximately 250 systems are planned by the end of the decade. When deployed, these systems will generate the standard hourly and special long-line transmitted weather observations, as well as provide continuous weather information direct to airport users. Specific ASOS configurations will vary depending upon whether the operation is unattended, minimally attended, or fully attended. The major functions of ASOS are data collection, data processing, product distribution, and system control. The program phases of development, demonstration, production system acquisition, and operational implementation are described.

  16. 24 CFR 1710.506 - State/Federal filing requirements.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... fully explaining the purpose and significance of the amendment and referring to that section and page of... automatically suspended as a result of the state action. No action need be taken by the Secretary to effect the...

  17. 24 CFR 1710.506 - State/Federal filing requirements.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... fully explaining the purpose and significance of the amendment and referring to that section and page of... automatically suspended as a result of the state action. No action need be taken by the Secretary to effect the...

  18. Xenon International Automated Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-08-05

    The Xenon International Automated Control software monitors, displays status, and allows for manual operator control as well as fully automatic control of multiple commercial and PNNL designed hardware components to generate and transmit atmospheric radioxenon concentration measurements every six hours.

  19. Shaping electromagnetic waves using software-automatically-designed metasurfaces.

    PubMed

    Zhang, Qian; Wan, Xiang; Liu, Shuo; Yuan Yin, Jia; Zhang, Lei; Jun Cui, Tie

    2017-06-15

    We present a fully digital procedure for designing reflective coding metasurfaces to shape reflected electromagnetic waves. The design procedure is completely automatic, controlled by a personal computer. In detail, the macro coding units of the metasurface are automatically divided into several types (e.g. two types for 1-bit coding, four types for 2-bit coding, etc.), and each type of macro coding unit is formed by a discretely random arrangement of micro coding units. By combining an optimization algorithm and commercial electromagnetic software, the digital patterns of the macro coding units are optimized to possess constant phase differences for the reflected waves. The apertures of the designed reflective metasurfaces are formed by arranging the macro coding units with a certain coding sequence. To experimentally verify the performance, a coding metasurface is fabricated by automatically designing two digital 1-bit unit cells, which are arranged in an array to constitute a periodic coding metasurface generating the required four-beam radiation with specific directions. Two complicated functional metasurfaces with circularly- and elliptically-shaped radiation beams are realized by automatically designing 4-bit macro coding units, showing the excellent performance of the automatic software designs. The proposed method provides a smart tool to realize various functional devices and systems automatically.

  20. Automatic spatiotemporal matching of detected pleural thickenings

    NASA Astrophysics Data System (ADS)

    Chaisaowong, Kraisorn; Keller, Simon Kai; Kraus, Thomas

    2014-01-01

    Pleural thickenings can be found in the lungs of asbestos-exposed patients. Non-invasive diagnosis, including CT imaging, can detect aggressive malignant pleural mesothelioma in its early stage. In order to create a quantitative documentation of automatically detected pleural thickenings over time, the differences in volume and thickness of the detected thickenings have to be calculated. Physicians usually estimate the change of each thickening via visual comparison, which provides neither quantitative nor qualitative measures. In this work, techniques for the automatic spatiotemporal matching of the pleural thickenings detected at two points in time, based on semi-automatic registration, have been developed, implemented, and tested, so that the same thickening can be compared fully automatically. As a result, the mapping technique using principal component analysis turns out to be advantageous compared to feature-based mapping using the centroid and mean Hounsfield units of each thickening: sensitivity improved from 42.19% to 98.46%, while the accuracy of the feature-based mapping is only slightly higher (84.38% versus 76.19%).
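
    A sketch of the PCA-based matching idea: each thickening is summarized by its centroid and the eigenvalues of its coordinate covariance, and each thickening at the first time point is matched to its nearest counterpart at the second; the equal weighting of centroid and shape features is an assumption.

    ```python
    import numpy as np

    def pca_features(voxels):
        """voxels: N x 3 coordinates of one segmented thickening."""
        centroid = voxels.mean(axis=0)
        cov = np.cov((voxels - centroid).T)
        axes = np.sort(np.linalg.eigvalsh(cov))[::-1]   # principal-axis lengths
        return np.concatenate([centroid, axes])

    def match_thickenings(found_t1, found_t2):
        """Each argument: list of N x 3 voxel-coordinate arrays (registered)."""
        feats2 = [pca_features(v) for v in found_t2]
        pairs = []
        for i, v in enumerate(found_t1):
            f1 = pca_features(v)
            d = [np.linalg.norm(f1 - f2) for f2 in feats2]
            pairs.append((i, int(np.argmin(d))))         # nearest match at t2
        return pairs
    ```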

  1. A repeated-measures analysis of the effects of soft tissues on wrist range of motion in the extant phylogenetic bracket of dinosaurs: Implications for the functional origins of an automatic wrist folding mechanism in Crocodilia.

    PubMed

    Hutson, Joel David; Hutson, Kelda Nadine

    2014-07-01

    A recent study hypothesized that avian-like wrist folding in quadrupedal dinosaurs could have aided their distinctive style of locomotion with semi-pronated and therefore medially facing palms. However, soft tissues that automatically guide avian wrist folding rarely fossilize, and automatic wrist folding of unknown function in extant crocodilians has not been used to test this hypothesis. Therefore, an investigation of the relative contributions of soft tissues to wrist range of motion (ROM) in the extant phylogenetic bracket of dinosaurs, and the quadrupedal function of crocodilian wrist folding, could inform these questions. Here, we repeatedly measured wrist ROM in degrees through fully fleshed, skinned, minus muscles/tendons, minus ligaments, and skeletonized stages in the American alligator Alligator mississippiensis and the ostrich Struthio camelus. The effects of dissection treatment and observer were statistically significant for alligator wrist folding and ostrich wrist flexion, but not ostrich wrist folding. Final skeletonized wrist folding ROM was higher than (ostrich) or equivalent to (alligator) initial fully fleshed ROM, while final ROM was lower than initial ROM for ostrich wrist flexion. These findings suggest that, unlike the hinge/ball and socket-type elbow and shoulder joints in these archosaurs, ROM within gliding/planar diarthrotic joints is more restricted to the extent of articular surfaces. The alligator data indicate that the crocodilian wrist mechanism functions to automatically lock their semi-pronated palms into a rigid column, which supports the hypothesis that this palmar orientation necessitated soft tissue stiffening mechanisms in certain dinosaurs, although ROM-restricted articulations argue against the presence of an extensive automatic mechanism. Anat Rec, 297:1228-1249, 2014. © 2014 Wiley Periodicals, Inc.

  2. Fully automated urban traffic system

    NASA Technical Reports Server (NTRS)

    Dobrotin, B. M.; Hansen, G. R.; Peng, T. K. C.; Rennels, D. A.

    1977-01-01

    The replacement of the driver with an automatic system which could perform the functions of guiding and routing a vehicle with a human's capability of responding to changing traffic demands was discussed. The problem was divided into four technological areas: guidance, routing, computing, and communications. It was determined that the latter three areas were being developed independently of any need for fully automated urban traffic. A guidance system that would meet system requirements was not being developed but was technically feasible.

  3. Building Extraction from Remote Sensing Data Using Fully Convolutional Networks

    NASA Astrophysics Data System (ADS)

    Bittner, K.; Cui, S.; Reinartz, P.

    2017-05-01

    Building detection and footprint extraction are in high demand for many remote sensing applications. Though most previous works have shown promising results, the automatic extraction of building footprints remains a nontrivial topic, especially in complex urban areas. Recently developed extensions of the CNN framework made it possible to perform dense pixel-wise classification of input images. Based on these abilities we propose a methodology which automatically generates a full-resolution binary building mask out of a Digital Surface Model (DSM) using a Fully Convolutional Network (FCN) architecture. The advantage of using depth information is that it provides geometrical silhouettes and allows a better separation of buildings from the background, as well as invariance to illumination and color variations. The proposed framework has mainly two steps. Firstly, the FCN is trained on a large set of patches consisting of normalized DSM (nDSM) as inputs and available ground truth building masks as target outputs. Secondly, the predictions generated by the FCN are viewed as unary terms for a Fully connected Conditional Random Field (FCRF), which enables us to create the final binary building mask. A series of experiments demonstrate that our methodology is able to extract accurate building footprints which are close to the buildings' original shapes to a high degree. The quantitative and qualitative analysis shows significant improvements of the results in contrast to the multi-layer fully connected network from our previous work.

  4. Automated object-based classification of topography from SRTM data

    PubMed Central

    Drăguţ, Lucian; Eisank, Clemens

    2012-01-01

    We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity by using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance and segmentation is performed at these appropriate scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean values of elevation and standard deviation of elevation, respectively. Results reasonably resemble the patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most of the classes satisfy the regionalization requirements of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as an online download. The results are embedded in a web application with functionalities of visualization and download. PMID:22485060

  5. Automated object-based classification of topography from SRTM data

    NASA Astrophysics Data System (ADS)

    Drăguţ, Lucian; Eisank, Clemens

    2012-03-01

    We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity by using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance and segmentation is performed at these appropriate scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean values of elevation and standard deviation of elevation, respectively. Results reasonably resemble the patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most of the classes satisfy the regionalization requirements of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as an online download. The results are embedded in a web application with functionalities of visualization and download.
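
    The classification step lends itself to a compact sketch: objects are partitioned by thresholding on mean elevation and standard deviation of elevation, with the split points taken as the means of the object-level statistics, as described above; the class labels are illustrative.

    ```python
    import numpy as np

    def classify_objects(objects):
        """objects: list of 1-D arrays, each holding one segment's elevations."""
        means = np.array([o.mean() for o in objects])
        stds = np.array([o.std() for o in objects])
        # Data-driven split points: the means of the object-level statistics.
        t_elev, t_relief = means.mean(), stds.mean()
        return [
            f"{'high' if m > t_elev else 'low'}-elevation / "
            f"{'rough' if s > t_relief else 'smooth'}"
            for m, s in zip(means, stds)
        ]
    ```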

  6. New York State Thruway Authority automatic vehicle classification (AVC) : research report.

    DOT National Transportation Integrated Search

    2008-03-31

    In December 2007, the N.Y.S. Thruway Authority (Thruway) concluded a Federally funded research effort to study technology and develop a design for retrofitting devices required in implementing a fully automated vehicle classification system i...

  7. Development of a Global Agricultural Hotspot Detection and Early Warning System

    NASA Astrophysics Data System (ADS)

    Lemoine, G.; Rembold, F.; Urbano, F.; Csak, G.

    2015-12-01

    The number of web-based platforms for crop monitoring has grown rapidly over the last few years, and anomaly maps and time profiles of remote-sensing-derived indicators can be accessed online through a number of web portals. However, while these systems make a large amount of crop monitoring data available to agriculture and food security analysts, there is no global platform which provides agricultural production hotspot warnings in a highly automatic and timely manner. Therefore, a web-based system providing timely warning evidence as maps and short narratives is currently under development by the Joint Research Centre. The system (called "HotSpot Detection System of Agriculture Production Anomalies", HSDS) will focus on water-limited agricultural systems worldwide. The automatic analysis of relevant meteorological and vegetation indicators at selected administrative units (GAUL level 1) will trigger warning messages for the areas where anomalous conditions are observed. The level of warning (ranging from "watch" to "alert") will depend on the nature and number of indicators for which an anomaly is detected. Information regarding the extent of the agricultural areas concerned by the anomaly and the progress of the agricultural season will complement the warning label. In addition, we are testing supplementary detailed information from other sources for the areas triggering a warning. These concern the automatic, web-based, food-security-tailored analysis of media (using the JRC Media Monitor semantic search engine) and the automatic detection of active crop area using Sentinel-1, upcoming Sentinel-2 and Landsat 8 imagery processed in Google Earth Engine. The basic processing will be fully automated and updated every 10 days, exploiting low-resolution rainfall estimates and satellite vegetation indices. Maps, trend graphs and statistics, accompanied by short narratives edited by a team of crop monitoring experts, will be made available on the website on a monthly basis.

  8. Real-time automatic registration in optical surgical navigation

    NASA Astrophysics Data System (ADS)

    Lin, Qinyong; Yang, Rongqian; Cai, Ken; Si, Xuan; Chen, Xiuwen; Wu, Xiaoming

    2016-05-01

    An image-guided surgical navigation system requires the improvement of the patient-to-image registration time to enhance the convenience of the registration procedure. A critical step in achieving this aim is performing a fully automatic patient-to-image registration. This study reports on a design of custom fiducial markers and the performance of a real-time automatic patient-to-image registration method using these markers on the basis of an optical tracking system for rigid anatomy. The custom fiducial markers are designed to be automatically localized in both patient and image spaces. An automatic localization method is performed by registering a point cloud sampled from the three dimensional (3D) pedestal model surface of a fiducial marker to each pedestal of fiducial markers searched in image space. A head phantom is constructed to estimate the performance of the real-time automatic registration method under four fiducial configurations. The head phantom experimental results demonstrate that the real-time automatic registration method is more convenient, rapid, and accurate than the manual method. The time required for each registration is approximately 0.1 s. The automatic localization method precisely localizes the fiducial markers in image space. The averaged target registration error for the four configurations is approximately 0.7 mm. The automatic registration performance is independent of the positions relative to the tracking system and the movement of the patient during the operation.
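
    The rigid patient-to-image alignment at the core of such automatic registration can be sketched with the textbook Kabsch/Procrustes solution for matched fiducial centres; this is the standard closed-form method, not necessarily the authors' specific pipeline.

    ```python
    import numpy as np

    def rigid_register(image_pts, patient_pts):
        """Both inputs: N x 3 arrays of corresponding fiducial centres.
        Returns (R, t) such that p_patient ~= R @ p_image + t."""
        ci, cp = image_pts.mean(axis=0), patient_pts.mean(axis=0)
        H = (image_pts - ci).T @ (patient_pts - cp)       # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))            # avoid reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cp - R @ ci
        return R, t

    # Fiducial registration error (rms), as reported in phantom studies:
    # fre = np.sqrt(np.mean(np.sum((patient_pts - (image_pts @ R.T + t))**2, 1)))
    ```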

  9. Method for automatically evaluating a transition from a batch manufacturing technique to a lean manufacturing technique

    DOEpatents

    Ivezic, Nenad; Potok, Thomas E.

    2003-09-30

    A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.

  10. Automatic weld torch guidance control system

    NASA Technical Reports Server (NTRS)

    Smaith, H. E.; Wall, W. A.; Burns, M. R., Jr.

    1982-01-01

    A highly reliable, fully digital, closed-circuit television optical type automatic weld seam tracking control system was developed. This automatic tracking equipment is used to reduce weld tooling costs and increase overall automatic welding reliability. The system utilizes a charge injection device digital camera which has 60,512 individual pixels as the light sensing elements. Through conventional scanning means, each pixel in the focal plane is sequentially scanned, the light level signal digitized, and an 8-bit word transmitted to scratch pad memory. From memory, the microprocessor performs an analysis of the digital signal and computes the tracking error. Lastly, the corrective signal is transmitted to a cross-seam actuator digital drive motor controller to complete the closed-loop feedback tracking system. This weld seam tracking control system is capable of a tracking accuracy of + or - 0.2 mm, or better. As configured, the system is applicable to square butt, V-groove, and lap joint weldments.
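
    A toy version of the tracking-error computation described above: locate the seam in the digitized frame and convert the pixel offset from a reference column into a cross-seam correction. The intensity-minimum seam model and the calibration factor are assumptions, not the system's actual signal analysis.

    ```python
    import numpy as np

    def cross_seam_error(frame, ref_col, mm_per_px=0.05):
        """frame: 2-D array of digitized pixel intensities (one video field);
        ref_col: column where the seam should sit under the torch."""
        profile = frame.mean(axis=0)             # average rows -> column profile
        seam_col = int(np.argmin(profile))       # seam modeled as the dark line
        return (seam_col - ref_col) * mm_per_px  # signed correction in mm
    ```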

  11. Automatic Neural Processing of Disorder-Related Stimuli in Social Anxiety Disorder: Faces and More

    PubMed Central

    Schulz, Claudia; Mothes-Lasch, Martin; Straube, Thomas

    2013-01-01

    It has been proposed that social anxiety disorder (SAD) is associated with automatic information processing biases resulting in hypersensitivity to signals of social threat such as negative facial expressions. However, the nature and extent of automatic processes in SAD at the behavioral and neural levels are not entirely clear yet. The present review summarizes neuroscientific findings on automatic processing of facial threat, but also of other disorder-related stimuli such as emotional prosody or negative words, in SAD. We review initial evidence for automatic activation of the amygdala, insula, and sensory cortices as well as for automatic early electrophysiological components. However, findings vary depending on tasks, stimuli, and neuroscientific methods. Only few studies set out to examine automatic neural processes directly, and systematic attempts are as yet lacking. We suggest that future studies should: (1) use different stimulus modalities, (2) examine different emotional expressions, (3) compare findings in SAD with other anxiety disorders, (4) use more sophisticated experimental designs to investigate features of automaticity systematically, and (5) combine different neuroscientific methods (such as functional neuroimaging and electrophysiology). Finally, the understanding of neural automatic processes could also provide hints for therapeutic approaches. PMID:23745116

  12. FALCON: A distributed scheduler for MIMD architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grimshaw, A.S.; Vivas, V.E. Jr.

    1991-01-01

    This paper describes FALCON (Fully Automatic Load COordinator for Networks), the scheduler for the Mentat parallel processing system. FALCON has a modular structure and is designed for systems that use a task scheduling mechanism. FALCON is distributed, stable, supports system heterogeneities, and employs a sender-initiated adaptive load sharing policy with static task assignment. FALCON is parameterizable and is implemented in Mentat, a working distributed system. We present the design and implementation of FALCON as well as a brief introduction to those features of the Mentat run-time system that influence FALCON. Performance measures under different scheduler configurations are also presented and analyzed with respect to the system parameters. 36 refs., 8 figs.

  13. 3D ultrasound-based patient positioning for radiotherapy

    NASA Astrophysics Data System (ADS)

    Wang, Michael H.; Rohling, Robert N.; Archip, Neculai; Clark, Brenda G.

    2006-03-01

    A new 3D ultrasound-based patient positioning system for target localisation during radiotherapy is described. Our system incorporates the use of tracked 3D ultrasound scans of the target anatomy acquired using a dedicated 3D ultrasound probe during both the simulation and treatment sessions, fully automatic 3D ultrasound-to-ultrasound registration, and OPTOTRAK IRLEDs for registering simulation CT to ultrasound data. The accuracy of the entire radiotherapy treatment process resulting from the use of our system, from simulation to the delivery of radiation, has been validated on a phantom. The overall positioning error is less than 5 mm, which includes errors from estimation of the irradiated region location in the phantom.

  14. Reverse osmosis water purification system

    NASA Technical Reports Server (NTRS)

    Ahlstrom, H. G.; Hames, P. S.; Menninger, F. J.

    1986-01-01

    A reverse osmosis water purification system, which uses a programmable controller (PC) as the control system, was designed and built to maintain the cleanliness and level of water for various systems of a 64-m antenna. The installation operates with other equipment of the antenna at the Goldstone Deep Space Communication Complex. The reverse osmosis system was designed to be fully automatic; with the PC, many complex sequential and timed logic networks were easily implemented and can be easily modified. The PC monitors water levels, pressures, flows, control panel requests, and set points on analog meters; with this information various processes are initiated, monitored, modified, halted, or eliminated as required by the equipment being supplied pure water.

  15. Digital Processing Of Young's Fringes In Speckle Photography

    NASA Astrophysics Data System (ADS)

    Chen, D. J.; Chiang, F. P.

    1989-01-01

    A new technique for fully automatic diffraction fringe measurement in point-wise speckle photograph analysis is presented in this paper. The fringe orientation and spacing are initially estimated with the help of a 1-D FFT. A 2-D convolution filter is then applied to enhance the estimated image. A high signal-to-noise ratio (SNR) fringe pattern is achieved, which makes precise determination of the displacement components feasible. The halo effect is also optimally eliminated in a new way. The computation time compares favorably with those of the 2-D autocorrelation method and the iterative 2-D FFT method, and high reliability and accurate determination of the displacement components are achieved over a wide range of fringe densities.
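
    The initial 1-D FFT estimate can be sketched in a few lines: the dominant non-DC peak of the spectrum of an intensity profile taken across the fringes gives the fringe spacing, which in Young's fringe analysis is inversely proportional to the displacement component.

    ```python
    import numpy as np

    def fringe_spacing(profile, pixel_pitch=1.0):
        """profile: 1-D intensity scan taken across the fringe pattern."""
        profile = profile - profile.mean()          # remove the DC component
        spectrum = np.abs(np.fft.rfft(profile))
        freqs = np.fft.rfftfreq(len(profile), d=pixel_pitch)
        k = spectrum[1:].argmax() + 1               # dominant non-DC peak
        return 1.0 / freqs[k]                       # fringe spacing
    ```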

  16. Automatic prediction of protein domains from sequence information using a hybrid learning system.

    PubMed

    Nagarajan, Niranjan; Yona, Golan

    2004-06-12

    We describe a novel method for detecting the domain structure of a protein from sequence information alone. The method is based on analyzing multiple sequence alignments that are derived from a database search. Multiple measures are defined to quantify the domain information content of each position along the sequence and are combined into a single predictor using a neural network. The output is further smoothed and post-processed using a probabilistic model to predict the most likely transition positions between domains. The method was assessed using the domain definitions in SCOP and CATH for proteins of known structure and was compared with several other existing methods. Our method performs well both in terms of accuracy and sensitivity. It improves significantly over the best methods available, even some of the semi-manual ones, while being fully automatic. Our method can also be used to suggest and verify domain partitions based on structural data. A few examples of predicted domain definitions and alternative partitions, as suggested by our method, are also discussed. An online domain-prediction server is available at http://biozon.org/tools/domains/

  17. An Auto-Tuning PI Control System for an Open-Circuit Low-Speed Wind Tunnel Designed for Greenhouse Technology.

    PubMed

    Espinoza, Karlos; Valera, Diego L; Torres, José A; López, Alejandro; Molina-Aiz, Francisco D

    2015-08-12

    Wind tunnels are a key experimental tool for the analysis of airflow parameters in many fields of application. Despite their great potential impact on agricultural research, few contributions have dealt with the development of automatic control systems for wind tunnels in the field of greenhouse technology. The objective of this paper is to present an automatic control system that provides precision and speed of measurement, as well as efficient data processing in low-speed wind tunnel experiments for greenhouse engineering applications. The system is based on an algorithm that identifies the system model and calculates the optimum PI controller. The validation of the system was performed on a cellulose evaporative cooling pad and on insect-proof screens to assess its response to perturbations. The control system provided an accuracy of <0.06 m·s(-1) for airflow speed and <0.50 Pa for pressure drop, thus permitting the reproducibility and standardization of the tests. The proposed control system also incorporates a fully-integrated software unit that manages the tests in terms of airflow speed and pressure drop set points.
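
    A sketch of the identify-then-tune pattern the abstract describes, using a first-order-plus-dead-time model and the SIMC PI rule as a plausible stand-in; the paper does not state its exact tuning rule, so both the rule and the example numbers are assumptions.

    ```python
    def simc_pi(K, tau, theta, tc=None):
        """PI gains for a first-order-plus-dead-time process
        K * exp(-theta * s) / (tau * s + 1), using the SIMC rule."""
        tc = theta if tc is None else tc       # common closed-loop time choice
        Kp = tau / (K * (tc + theta))          # proportional gain
        Ti = min(tau, 4.0 * (tc + theta))      # integral time
        return Kp, Ti

    # Example with made-up fan/duct values: K = 0.8, tau = 3 s, theta = 0.5 s
    Kp, Ti = simc_pi(0.8, 3.0, 0.5)            # -> Kp = 3.75, Ti = 3.0 s
    ```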

  18. Phantom study and accuracy evaluation of an image-to-world registration approach used with electro-magnetic tracking system for neurosurgery

    NASA Astrophysics Data System (ADS)

    Li, Senhu; Sarment, David

    2015-12-01

    Minimally invasive neurosurgery needs intraoperative imaging updates and a highly efficient image guidance system to facilitate the procedure. An automatic image-guided system used with a compact and mobile intraoperative CT imager was introduced in this work. A tracking frame that can be easily attached onto the commercially available skull clamp was designed. With the known geometry of fiducials and tracking sensors arranged on this rigid frame, fabricated through high-precision 3D printing, not only was an accurate, fully automatic registration method developed in a simple and less costly approach, but it also helped in estimating the errors from fiducial localization in image space, through image processing, and in patient space, through the calibration of the tracking frame. Our phantom study shows a fiducial registration error of 0.348+/-0.028 mm, compared with a manual registration error of 1.976+/-0.778 mm. The system in this study provided robust and accurate image-to-patient registration without interrupting the routine surgical workflow or requiring any user interaction during neurosurgery.

  19. iGeoT v1.0: Automatic Parameter Estimation for Multicomponent Geothermometry, User's Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spycher, Nicolas; Finsterle, Stefan

    GeoT implements the multicomponent geothermometry method developed by Reed and Spycher [1984] in a stand-alone computer program to ease the application of this method and to improve the prediction of geothermal reservoir temperatures using full and integrated chemical analyses of geothermal fluids. Reservoir temperatures are estimated from statistical analyses of mineral saturation indices computed as a function of temperature. The reconstruction of the deep geothermal fluid compositions, and the geothermometry computations, are all implemented in the same computer program, allowing unknown or poorly constrained input parameters to be estimated by numerical optimization. This integrated geothermometry approach presents advantages over classical geothermometers for fluids that have not fully equilibrated with reservoir minerals and/or that have been subject to processes such as dilution and gas loss. This manual contains installation instructions for iGeoT, and briefly describes the input formats needed to run iGeoT in Automatic or Expert Mode. An example is also provided to demonstrate the use of iGeoT.

  20. Clustering of color map pixels: an interactive approach

    NASA Astrophysics Data System (ADS)

    Moon, Yiu Sang; Luk, Franklin T.; Yuen, K. N.; Yeung, Hoi Wo

    2003-12-01

    The demand for digital maps continues to rise as mobile electronic devices become more popular nowadays. Instead of creating the entire map from scratch, we may convert a scanned paper map into a digital one. Color clustering is the very first step of the conversion process. Currently, most existing clustering algorithms are fully automatic. They are fast and efficient but may not work well in map conversion because of the numerous ambiguous issues associated with printed maps. Here we introduce two interactive approaches for color clustering on maps: color clustering with pre-calculated index colors (PCIC) and color clustering with pre-calculated color ranges (PCCR). We also introduce a memory model that can enhance and integrate different image processing techniques for fine-tuning the clustering results. Problems and examples of the algorithms are discussed in the paper.

  1. Automatic Generation of Algorithms for the Statistical Analysis of Planetary Nebulae Images

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd

    2004-01-01

    Analyzing data sets collected in experiments or by observations is a core scientific activity. Typically, experimental and observational data are fraught with uncertainty, and the analysis is based on a statistical model of the conjectured underlying processes. The large data volumes collected by modern instruments make computer support indispensable for this. Consequently, scientists spend significant amounts of their time on the development and refinement of data analysis programs. AutoBayes [GF+02, FS03] is a fully automatic synthesis system for generating statistical data analysis programs. Externally, it looks like a compiler: it takes an abstract problem specification and translates it into executable code. Its input is a concise description of a data analysis problem in the form of a statistical model as shown in Figure 1; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Internally, however, it is quite different: AutoBayes derives a customized algorithm implementing the given model using a schema-based process, and then further refines and optimizes the algorithm into code. A schema is a parameterized code template with associated semantic constraints which define and restrict the template's applicability. The schema parameters are instantiated in a problem-specific way during synthesis as AutoBayes checks the constraints against the original model or, recursively, against emerging sub-problems. The AutoBayes schema library contains problem decomposition operators (which are justified by theorems in a formal logic in the domain of Bayesian networks) as well as machine learning algorithms (e.g., EM, k-Means) and numeric optimization methods (e.g., Nelder-Mead simplex, conjugate gradient). AutoBayes augments this schema-based approach by symbolic computation to derive closed-form solutions whenever possible. This is a major advantage over other statistical data analysis systems which use numerical approximations even in cases where closed-form solutions exist. AutoBayes is implemented in Prolog and comprises approximately 75,000 lines of code. In this paper, we take one typical scientific data analysis problem - analyzing planetary nebulae images taken by the Hubble Space Telescope - and show how AutoBayes can be used to automate the implementation of the necessary analysis programs. We initially follow the analysis described by Knuth and Hajian [KH02] and use AutoBayes to derive code for the published models. We show the details of the code derivation process, including the symbolic computations and automatic integration of library procedures, and compare the results of the automatically generated and manually implemented code. We then go beyond the original analysis and use AutoBayes to derive code for a simple image segmentation procedure based on a mixture model, which can be used to automate a manual preprocessing step. Finally, we combine the original approach with the simple segmentation, which yields a more detailed analysis. This also demonstrates that AutoBayes makes it easy to combine different aspects of data analysis.

  2. Automatic brain caudate nuclei segmentation and classification in diagnostic of Attention-Deficit/Hyperactivity Disorder.

    PubMed

    Igual, Laura; Soliva, Joan Carles; Escalera, Sergio; Gimeno, Roger; Vilarroya, Oscar; Radeva, Petia

    2012-12-01

    We present a fully automatic diagnostic imaging test for Attention-Deficit/Hyperactivity Disorder diagnosis assistance based on previously found evidence of caudate nucleus volumetric abnormalities. The proposed method consists of several steps: a new automatic method for external and internal segmentation of the caudate based on machine learning methodologies, and the definition of a set of new volume relation features, 3D Dissociated Dipoles, used for caudate representation and classification. We separately validate these contributions using real data from a pediatric population, demonstrating precise internal caudate segmentation and the discrimination power of the diagnostic test, with significant performance improvements in comparison to other state-of-the-art methods. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Automatic three-dimensional measurement of large-scale structure based on vision metrology.

    PubMed

    Zhu, Zhaokun; Guan, Banglei; Zhang, Xiaohu; Li, Daokui; Yu, Qifeng

    2014-01-01

    All relevant key techniques involved in photogrammetric vision metrology for the fully automatic 3D measurement of large-scale structures are studied. A new kind of coded target consisting of circular retroreflective discs is designed, and corresponding detection and recognition algorithms based on blob detection and clustering are presented. Then a three-stage strategy starting with view clustering is proposed to achieve automatic network orientation. As for the matching of noncoded targets, the concept of a matching path is proposed, and matches for each noncoded target are found by determining the optimal matching path, based on a novel voting strategy, among all possible ones. Experiments on the fixed keel of an airship have been conducted to verify the effectiveness and measuring accuracy of the proposed methods.
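
    The disc-detection step can be prototyped with OpenCV's SimpleBlobDetector; the file name and all filter parameters below are illustrative assumptions, and a real pipeline would pass the detected centres on to the clustering and code-recognition stages.

    ```python
    import cv2

    params = cv2.SimpleBlobDetector_Params()
    params.filterByColor = True
    params.blobColor = 255            # retroreflective discs appear bright
    params.filterByCircularity = True
    params.minCircularity = 0.8       # keep near-circular blobs only
    params.filterByArea = True
    params.minArea = 20               # reject tiny speckle responses
    detector = cv2.SimpleBlobDetector_create(params)

    img = cv2.imread("structure_view.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
    keypoints = detector.detect(img)
    centers = [kp.pt for kp in keypoints]   # (x, y) centres for clustering
    ```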

  4. Machine for Automatic Bacteriological Pour Plate Preparation

    PubMed Central

    Sharpe, A. N.; Biggs, D. R.; Oliver, R. J.

    1972-01-01

    A fully automatic system for preparing poured plates for bacteriological analyses has been constructed and tested. The machine can make decimal dilutions of bacterial suspensions, dispense measured amounts into petri dishes, add molten agar, mix the dish contents, and label the dishes with sample and dilution numbers at the rate of 2,000 dishes per 8-hr day. In addition, the machine can be programmed to select different media so that plates for different types of bacteriological analysis may be made automatically from the same sample. The machine uses only the components of the media and sterile polystyrene petri dishes; requirements for all other materials, such as sterile pipettes and capped bottles of diluents and agar, are eliminated. Images PMID:4560475

  5. Automatic extraction of road features in urban environments using dense ALS data

    NASA Astrophysics Data System (ADS)

    Soilán, Mario; Truong-Hong, Linh; Riveiro, Belén; Laefer, Debra

    2018-02-01

    This paper describes a methodology that automatically extracts semantic information from urban ALS data for urban parameterization and road network definition. First, building façades are segmented from the ground surface by combining knowledge-based information with both voxel and raster data. Next, heuristic rules and unsupervised learning are applied to the ground surface data to distinguish sidewalk and pavement points as a means of curb detection. Then radiometric information is employed for road marking extraction. Using high-density ALS data from Dublin, Ireland, this fully automatic workflow was able to generate an F-score close to 95% for pavement and sidewalk identification at a resolution of 20 cm, and better than 80% for road marking detection.

  6. Automatic detection of sleep macrostructure based on a sensorized T-shirt.

    PubMed

    Bianchi, Anna M; Mendez, Martin O

    2010-01-01

    In the present work we apply a fully automatic procedure to the analysis of signals coming from a sensorized T-shirt, worn during the night, for sleep evaluation. The goodness and reliability of the signals recorded through the T-shirt were previously tested, while the algorithms employed for feature extraction and sleep classification were previously developed on standard ECG recordings, and the obtained classification was compared to the standard clinical practice based on polysomnography (PSG). In the present work we combined T-shirt recordings and automatic classification and could obtain reliable sleep profiles, i.e. the classification of sleep into WAKE, REM (rapid eye movement) and NREM stages, based on heart rate variability (HRV), respiration and movement signals.

  7. The calculation of aircraft collision probabilities

    DOT National Transportation Integrated Search

    1971-10-01

    The basic limitation of air traffic compression, from the safety point of view, is the increased risk of collision due to reduced separations. In order to evolve new procedures, and eventually a fully automatic system, it is desirable to have a mea...

  8. Robot-Based High-Throughput Engineering of Alcoholic Polymer: Fullerene Nanoparticle Inks for an Eco-Friendly Processing of Organic Solar Cells.

    PubMed

    Xie, Chen; Tang, Xiaofeng; Berlinghof, Marvin; Langner, Stefan; Chen, Shi; Späth, Andreas; Li, Ning; Fink, Rainer H; Unruh, Tobias; Brabec, Christoph J

    2018-06-27

    Development of high-quality organic nanoparticle inks is a significant scientific challenge for the industrial production of solution-processed organic photovoltaics (OPVs) with eco-friendly processing methods. In this work, we demonstrate a novel, robot-based, high-throughput procedure performing automatic poly(3-hexylthiophene-2,5-diyl) and indene-C60 bisadduct nanoparticle ink synthesis in nontoxic alcohols. A novel methodology to prepare particle dispersions for fully functional OPVs by manipulating the particle size and solvent system was studied in detail. The ethanol dispersion with a particle diameter of around 80-100 nm exhibits reduced degradation, yielding a power conversion efficiency of 4.52%, which is the highest performance reported so far for water/alcohol-processed OPV devices. By successfully deploying the high-throughput robot-based approach for organic nanoparticle ink preparation, we believe that the findings demonstrated in this work will trigger more research interest and effort on eco-friendly industrial production of OPVs.

  9. Mechanisms of masked priming: a meta-analysis.

    PubMed

    Van den Bussche, Eva; Van den Noortgate, Wim; Reynvoet, Bert

    2009-05-01

    The extent to which unconscious information can influence behavior has been a topic of considerable debate throughout the history of psychology. A frequently used method for studying subliminal processing is the masked priming paradigm. The authors focused on studies in which this paradigm was used. Their aim was twofold: first, to assess the magnitude of subliminal priming across the literature and to determine whether subliminal primes are processed semantically, and second, to examine potential moderators of priming effects. The authors found significant priming in their analyses, indicating that unconsciously presented information can influence behavior. Furthermore, priming was observed under circumstances in which a nonsemantic interpretation could not fully explain the effects, suggesting that subliminally presented information can be processed semantically. Nonetheless, the nonsemantic processing of primes is enhanced and priming effects are boosted when the experimental context allows the formation of automatic stimulus-response mappings. This quantitative review also revealed several moderators that influence the strength of priming. (PsycINFO Database Record (c) 2009 APA, all rights reserved).

  10. Tuned grid generation with ICEM CFD

    NASA Technical Reports Server (NTRS)

    Wulf, Armin; Akdag, Vedat

    1995-01-01

    ICEM CFD is a CAD-based grid generation package that supports multiblock structured, unstructured tetrahedral and unstructured hexahedral grids. Major development efforts have been spent to extend ICEM CFD's multiblock structured and hexahedral unstructured grid generation capabilities. The modules added are a parametric grid generation module and a semi-automatic hexahedral grid generation module. A fully automatic version of the hexahedral grid generation module for grids around a set of predefined objects in rectilinear enclosures has been developed. These modules and the procedures they use will be presented, and examples will be discussed.

  11. Automatic laser beam alignment using blob detection for an environment monitoring spectroscopy

    NASA Astrophysics Data System (ADS)

    Khidir, Jarjees; Chen, Youhua; Anderson, Gary

    2013-05-01

    This paper describes a fully automated system to align an infra-red laser beam with a small retro-reflector over a wide range of distances. The components were developed and tested specifically for an open-path spectrometer gas detection system. Using blob detection from the OpenCV library, an automatic alignment algorithm was designed to achieve fast and accurate target detection against a complex background environment. Test results are presented to show that the proposed algorithm has been successfully applied over various target distances and environmental conditions.
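
    As a sketch of how such an alignment loop can use the detected blob, the fragment below (an illustration under assumed thresholds, not the authors' code) locates the bright return spot and reports its pixel offset from the optical axis, which a pan/tilt controller would then drive toward zero:

    ```python
    # Hedged sketch: centroid of the retro-reflector return spot and its offset
    # from the image centre; a motorised mount would null this offset.
    import cv2

    def beam_offset(frame_bgr):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)  # bright spot only
        m = cv2.moments(mask, binaryImage=True)
        if m["m00"] == 0:
            return None                       # no return spot in this frame
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        h, w = gray.shape
        return cx - w / 2.0, cy - h / 2.0     # (dx, dy) in pixels
    ```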

  12. Automatic recloser circuit breaker integrated with GSM technology for power system notification

    NASA Astrophysics Data System (ADS)

    Lada, M. Y.; Khiar, M. S. A.; Ghani, S. A.; Nawawi, M. R. M.; Rahim, N. H.; Sinar, L. O. M.

    2015-05-01

    Lightning is one type of transient fault that usually causes the circuit breaker in the distribution board to trip due to overcurrent detection. The instant tripping of the circuit breaker clears the fault in the system. Unfortunately, most circuit breaker systems are manually operated, so the power line is only re-energized after the fault-clearing process is finished. Auto-reclose circuits are used on transmission lines to keep supplying quality electrical power to customers. In this project, an automatic reclose circuit breaker for low-voltage usage is designed. The Auto Reclose Circuit Breaker (ARCB) trips if the current sensor detects a current exceeding the rating of the miniature circuit breaker (MCB) used; the fault condition is then cleared automatically and the power line returned to normal. A Global System for Mobile Communication (GSM) module sends an SMS to the person in charge whenever tripping occurs. If the overcurrent occurs three times, the system trips fully (open circuit) and at the same time sends an SMS to the person in charge. In this project, 1 A is set as the rated current, and any current exceeding 1 A causes the system to trip. The system also provides additional notifications for the user, such as an emergency light and a warning system.
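
    The trip/reclose/lockout behaviour described above maps onto a small state machine. The sketch below is a hypothetical rendering of that logic with the hardware I/O stubbed out; the 1 A threshold and three-strike lockout follow the abstract:

    ```python
    # Hedged sketch of ARCB-style reclose logic; read_current, open_breaker,
    # close_breaker and send_sms are hypothetical hardware/GSM stubs.
    import time

    RATED_CURRENT_A = 1.0
    MAX_TRIPS = 3

    def run_arcb(read_current, open_breaker, close_breaker, send_sms):
        trips = 0
        while True:
            if read_current() > RATED_CURRENT_A:
                trips += 1
                open_breaker()                               # interrupt the fault
                send_sms("ARCB tripped (%d/%d)" % (trips, MAX_TRIPS))
                if trips >= MAX_TRIPS:
                    send_sms("ARCB locked out: manual reset required")
                    return                                   # stay open (lockout)
                time.sleep(2.0)                              # let the transient clear
                close_breaker()                              # automatic reclose
            time.sleep(0.05)                                 # polling interval
    ```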

  13. Constraint factor graph cut-based active contour method for automated cellular image segmentation in RNAi screening.

    PubMed

    Chen, C; Li, H; Zhou, X; Wong, S T C

    2008-05-01

    Image-based, high-throughput genome-wide RNA interference (RNAi) experiments are increasingly carried out to facilitate the understanding of gene functions in intricate biological processes. Automated screening of such experiments generates a large number of images with great variations in image quality, which makes manual analysis unreasonably time-consuming. Therefore, effective techniques for automatic image analysis are urgently needed, in which segmentation is one of the most important steps. This paper proposes a fully automatic method for cell segmentation in genome-wide RNAi screening images. The method consists of two steps: nuclei segmentation and cytoplasm segmentation. Nuclei are extracted and labelled to initialize the cytoplasm segmentation. Since the quality of RNAi images is rather poor, a novel scale-adaptive steerable filter is designed to enhance the images in order to extract the long, thin protrusions of spiky cells. Then, a constraint-factor GCBAC method and morphological algorithms are combined into an integrated method to segment tightly clustered cells. Compared with the results obtained using seeded watershed and with the ground truth, i.e., manual labelling by experts in RNAi screening data, our method achieves higher accuracy. Compared with active contour methods, our method consumes much less time. These positive results indicate that the proposed method can be applied in automatic image analysis of multi-channel image screening data.
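
    For reference, the seeded-watershed baseline that the method is compared against can be sketched as follows (scikit-image; the Otsu thresholds are illustrative assumptions):

    ```python
    # Hedged sketch of the seeded-watershed baseline: labelled nuclei seed a
    # watershed over the gradient of the cytoplasm channel.
    from scipy import ndimage as ndi
    from skimage.filters import threshold_otsu, sobel
    from skimage.segmentation import watershed

    def seeded_watershed(nuclei_img, cyto_img):
        markers, _ = ndi.label(nuclei_img > threshold_otsu(nuclei_img))
        cyto_mask = cyto_img > threshold_otsu(cyto_img)        # foreground cells
        return watershed(sobel(cyto_img), markers, mask=cyto_mask)
    ```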

  14. Advances of FishNet towards a fully automatic monitoring system for fish migration

    NASA Astrophysics Data System (ADS)

    Kratzert, Frederik; Mader, Helmut

    2017-04-01

    Restoring the continuum of river networks affected by anthropogenic constructions is one of the main objectives of the Water Framework Directive. Regarding fish migration, fish passes are a widely used measure, and often the functionality of these fish passes needs to be assessed by monitoring. Over the last years, we developed a new semi-automatic monitoring system (FishCam), which allows the contact-free observation of fish migration in fish passes through videos. The system consists of a detection tunnel, equipped with a camera, a motion sensor and artificial light sources, as well as software (FishNet) that helps to analyze the video data. In its latest version, the software is capable of detecting and tracking objects in the videos as well as classifying them into "fish" and "no-fish" objects. This allows filtering out the videos containing at least one fish (approx. 5% of all grabbed videos) and reduces the manual labor to the analysis of these videos. In this state, the entire system has already been used in over 20 different fish passes across Austria, for a total of over 140 months of monitoring and more than 1.4 million analyzed videos. As a next step towards a fully automatic monitoring system, a key feature is the automated classification of the detected fish into their species, which is still an unsolved task in a fully automatic monitoring environment. Recent advances in the field of machine learning, especially image classification with deep convolutional neural networks, sound promising for solving this problem. In this study, different approaches for fish species classification are tested. Besides an image-only classification approach using deep convolutional neural networks, various methods that combine the power of convolutional neural networks as image descriptors with additional features, such as the fish length and the time of appearance, are explored. To facilitate the development and testing phase of this approach, a subset of six fish species of Austrian rivers and streams is considered in this study. All scripts and the data to reproduce the results of this study will be made publicly available on GitHub* at the beginning of the EGU2017 General Assembly. * https://github.com/kratzert/EGU2017_public/
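
    One of the combined approaches mentioned above, a CNN image descriptor concatenated with additional features such as fish length and time of appearance, might be sketched as follows (PyTorch; layer sizes are assumptions, while the six-class output matches the species subset used in the study):

    ```python
    # Hedged sketch: fuse a CNN image descriptor with auxiliary features
    # (e.g. fish length, time of appearance) before the species classifier.
    import torch
    import torch.nn as nn

    class FishSpeciesNet(nn.Module):
        def __init__(self, n_species=6, n_aux=2):
            super().__init__()
            self.backbone = nn.Sequential(                 # stand-in for a deep CNN
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(16 + n_aux, n_species)

        def forward(self, image, aux):
            feat = self.backbone(image).flatten(1)         # (B, 16) descriptor
            return self.head(torch.cat([feat, aux], dim=1))

    logits = FishSpeciesNet()(torch.randn(4, 3, 64, 64), torch.randn(4, 2))
    ```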

  15. Active Learning for Automatic Audio Processing of Unwritten Languages (ALAPUL)

    DTIC Science & Technology

    2016-07-01

    Report AFRL-RH-WP-TR-2016-0074 (Vergyri, Kathol, Wang et al.; reporting period June 2015-July 2016). Summary: the goal of the project was to investigate development of an automatic spoken language processing (ASLP) system for unwritten languages.

  16. Using Dual-Task Methodology to Dissociate Automatic from Nonautomatic Processes Involved in Artificial Grammar Learning

    ERIC Educational Resources Information Center

    Hendricks, Michelle A.; Conway, Christopher M.; Kellogg, Ronald T.

    2013-01-01

    Previous studies have suggested that both automatic and intentional processes contribute to the learning of grammar and fragment knowledge in artificial grammar learning (AGL) tasks. To explore the relative contribution of automatic and intentional processes to knowledge gained in AGL, we utilized dual-task methodology to dissociate automatic and…

  17. Evolution of the ATLAS Nightly Build System

    NASA Astrophysics Data System (ADS)

    Undrus, A.

    2012-12-01

    The ATLAS Nightly Build System is a major component in the ATLAS collaborative software organization, validation, and code approval scheme. Over more than 10 years of development, it has evolved into a factory for automatic release production and grid distribution. The 50 multi-platform branches of ATLAS releases provide vast opportunities for testing new packages, verification of patches to existing software, and migration to new platforms and compilers for ATLAS code, which currently contains 2200 packages with 4 million C++ and 1.4 million Python lines written by about 1000 developers. Recent development was focused on the integration of the ATLAS Nightly Build and Installation systems. The nightly releases are distributed and validated, and some are transformed into stable releases used for data processing worldwide. The ATLAS Nightly System is managed by the NICOS control tool on a computing farm with 50 powerful multiprocessor nodes. NICOS provides the fully automated framework for the release builds, testing, and creation of distribution kits. The ATN testing framework of the Nightly System runs unit and integration tests in parallel suites, fully utilizing the resources of multi-core machines, and provides the first results even before compilations complete. The NICOS error detection system is based on several techniques and classifies the compilation and test errors according to their severity. It is periodically tuned to place greater emphasis on certain software defects by highlighting the problems on NICOS web pages and sending automatic e-mail notifications to responsible developers. These and other recent developments will be presented and future plans will be described.

  18. Phase II modification of the Water Availability Tool for Environmental Resources (WATER) for Kentucky: The sinkhole-drainage process, point-and-click basin delineation, and results of karst test-basin simulations

    USGS Publications Warehouse

    Taylor, Charles J.; Williamson, Tanja N.; Newson, Jeremy K.; Ulery, Randy L.; Nelson, Hugh L.; Cinotto, Peter J.

    2012-01-01

    This report describes Phase II modifications made to the Water Availability Tool for Environmental Resources (WATER), which applies the process-based TOPMODEL approach to simulate or predict stream discharge in surface basins in the Commonwealth of Kentucky. The previous (Phase I) version of WATER did not provide a means of identifying sinkhole catchments or accounting for the effects of karst (internal) drainage in a TOPMODEL-simulated basin. In the Phase II version of WATER, sinkhole catchments are automatically identified and delineated as internally drained subbasins, and a modified TOPMODEL approach (called the sinkhole drainage process, or SDP-TOPMODEL) is applied that calculates mean daily discharges for the basin based on summed area-weighted contributions from sinkhole drainage (SD) areas and non-karstic topographically drained (TD) areas. Results obtained using the SDP-TOPMODEL approach were evaluated for 12 karst test basins located in each of the major karst terrains in Kentucky. Visual comparison of simulated hydrographs and flow-duration curves, along with statistical measures applied to the simulated discharge data (bias, correlation, root mean square error, and Nash-Sutcliffe efficiency coefficients), indicates that the SDP-TOPMODEL approach provides acceptably accurate estimates of discharge for most flow conditions and typically provides more accurate simulation of stream discharge in karstic basins compared to the standard TOPMODEL approach. Additional programming modifications made to the Phase II version of WATER included implementation of a point-and-click graphical user interface (GUI), which fully automates the delineation of simulation-basin boundaries and improves the speed of input-data processing. The Phase II version of WATER enables the user to select a pour point anywhere on a stream reach of interest, and the program will automatically delineate all upstream areas that contribute drainage to that point. This capability enables automatic delineation of a simulation basin of any size (area) and any level of stream-network complexity. WATER then automatically identifies the presence of sinkhole catchments within the simulation basin boundaries; extracts and compiles the necessary climatic, topographic, and basin characteristics datasets; and runs the SDP-TOPMODEL approach to estimate daily mean discharges (streamflow).
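
    The area-weighted combination at the core of the SDP approach reduces to simple arithmetic; the sketch below assumes the component series are specific discharges (per unit area), which is one plausible reading of "summed area-weighted contributions":

    ```python
    # Hedged sketch: combine sinkhole-drainage (SD) and topographically drained
    # (TD) contributions into mean daily basin discharge by area weighting.
    import numpy as np

    def sdp_discharge(q_sd, q_td, area_sd, area_td):
        """q_sd, q_td: daily specific-discharge series for each drainage class."""
        total = area_sd + area_td
        return (area_sd * np.asarray(q_sd) + area_td * np.asarray(q_td)) / total

    # e.g. a basin whose area drains 40% internally through sinkholes:
    q = sdp_discharge(q_sd=[1.2, 0.9], q_td=[2.0, 1.7], area_sd=40.0, area_td=60.0)
    ```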

  19. A combined deep-learning and deformable-model approach to fully automatic segmentation of the left ventricle in cardiac MRI.

    PubMed

    Avendi, M R; Kheradvar, Arash; Jafarkhani, Hamid

    2016-05-01

    Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for the calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from the ground truth data. Convolutional networks are employed to automatically detect the LV chamber in the MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the-art methods. Excellent agreement with the ground truth was achieved. The validation metrics (percentage of good contours, Dice metric, average perpendicular distance and conformity) were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78 obtained by other methods, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
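
    The Dice metric reported above is straightforward to compute from binary masks; a minimal version:

    ```python
    # Hedged sketch: Dice similarity coefficient between an automatic
    # segmentation mask and the ground truth.
    import numpy as np

    def dice(seg, gt):
        seg, gt = seg.astype(bool), gt.astype(bool)
        denom = seg.sum() + gt.sum()
        if denom == 0:
            return 1.0                                # both empty: perfect agreement
        return 2.0 * np.logical_and(seg, gt).sum() / denom
    ```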

  1. Automatic segmentation method of pelvic floor levator hiatus in ultrasound using a self-normalizing neural network.

    PubMed

    Bonmati, Ester; Hu, Yipeng; Sindhwani, Nikhil; Dietz, Hans Peter; D'hooge, Jan; Barratt, Dean; Deprest, Jan; Vercauteren, Tom

    2018-04-01

    Segmentation of the levator hiatus in ultrasound allows the extraction of biometrics, which are of importance for pelvic floor disorder assessment. We present a fully automatic method using a convolutional neural network (CNN) to outline the levator hiatus in a two-dimensional image extracted from a three-dimensional ultrasound volume. In particular, our method uses a recently developed scaled exponential linear unit (SELU) as a nonlinear self-normalizing activation function, which for the first time has been applied in medical imaging with CNN. SELU has important advantages such as being parameter-free and mini-batch independent, which may help to overcome memory constraints during training. A dataset with 91 images from 35 patients during Valsalva, contraction, and rest, all labeled by three operators, is used for training and evaluation in a leave-one-patient-out cross validation. Results show a median Dice similarity coefficient of 0.90 with an interquartile range of 0.08, with equivalent performance to the three operators (with a Williams' index of 1.03), and outperforming a U-Net architecture without the need for batch normalization. We conclude that the proposed fully automatic method achieved equivalent accuracy in segmenting the pelvic floor levator hiatus compared to a previous semiautomatic approach.
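
    A self-normalizing convolutional block of the kind described, with SELU in place of batch normalization, can be sketched as follows (PyTorch; layer sizes are assumptions, not the paper's architecture):

    ```python
    # Hedged sketch: SELU keeps activations near zero mean / unit variance,
    # so no batch-normalization layers are needed.
    import torch
    import torch.nn as nn

    block = nn.Sequential(
        nn.Conv2d(1, 32, kernel_size=3, padding=1),
        nn.SELU(),
        nn.Conv2d(32, 32, kernel_size=3, padding=1),
        nn.SELU(),
    )
    y = block(torch.randn(1, 1, 128, 128))            # (1, 32, 128, 128) feature map
    ```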

  2. Controlled versus automatic processes: which is dominant to safety? The moderating effect of inhibitory control.

    PubMed

    Xu, Yaoshan; Li, Yongjuan; Ding, Weidong; Lu, Fan

    2014-01-01

    This study explores the precursors of employees' safety behaviors based on a dual-process model, which suggests that human behaviors are determined by both controlled and automatic cognitive processes. Employees' responses to a self-reported survey on safety attitudes capture their controlled cognitive process, while the automatic association concerning safety measured by an Implicit Association Test (IAT) reflects employees' automatic cognitive processes about safety. In addition, this study investigates the moderating effects of inhibition on the relationship between self-reported safety attitude and safety behavior, and that between automatic associations towards safety and safety behavior. The results suggest significant main effects of self-reported safety attitude and automatic association on safety behaviors. Further, the interaction between self-reported safety attitude and inhibition and that between automatic association and inhibition each predict unique variances in safety behavior. Specifically, the safety behaviors of employees with lower level of inhibitory control are influenced more by automatic association, whereas those of employees with higher level of inhibitory control are guided more by self-reported safety attitudes. These results suggest that safety behavior is the joint outcome of both controlled and automatic cognitive processes, and the relative importance of these cognitive processes depends on employees' individual differences in inhibitory control. The implications of these findings for theoretical and practical issues are discussed at the end.

  4. A detailed comparison of analysis processes for MCC-IMS data in disease classification—Automated methods can replace manual peak annotations

    PubMed Central

    Horsch, Salome; Kopczynski, Dominik; Kuthe, Elias; Baumbach, Jörg Ingo; Rahmann, Sven

    2017-01-01

    Motivation: Disease classification from molecular measurements typically requires an analysis pipeline from raw noisy measurements to final classification results. Multi-capillary column ion mobility spectrometry (MCC-IMS) is a promising technology for the detection of volatile organic compounds in the air of exhaled breath. From raw measurements, the peak regions representing the compounds have to be identified, quantified, and clustered across different experiments. Currently, several steps of this analysis process require manual intervention of human experts. Our goal is to identify a fully automatic pipeline that yields competitive disease classification results compared to an established but subjective and tedious semi-manual process. Method: We combine a large number of modern methods for peak detection, peak clustering, and multivariate classification into analysis pipelines for raw MCC-IMS data. We evaluate all combinations on three different real datasets in an unbiased cross-validation setting. We determine which specific algorithmic combinations lead to high AUC values in disease classifications across the different medical application scenarios. Results: The best fully automated analysis process achieves even better classification results than the established manual process. The best algorithms for the three analysis steps are (i) SGLTR (Savitzky-Golay Laplace-operator filter thresholding regions) and LM (Local Maxima) for automated peak identification, (ii) EM clustering (Expectation Maximization) and DBSCAN (Density-Based Spatial Clustering of Applications with Noise) for the clustering step and (iii) RF (Random Forest) for multivariate classification. Thus, automated methods can replace the manual steps in the analysis process to enable an unbiased high-throughput use of the technology. PMID:28910313
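
    The best-performing clustering and classification steps named above are available off the shelf; a minimal sketch of that tail of the pipeline, applied to an already-detected peak list (parameter values are illustrative assumptions):

    ```python
    # Hedged sketch: DBSCAN groups peak coordinates across measurements into
    # consensus features; a Random Forest then classifies the subjects.
    from sklearn.cluster import DBSCAN
    from sklearn.ensemble import RandomForestClassifier

    def cluster_peaks(peak_positions, eps=0.05):
        """peak_positions: (n_peaks, 2) retention-time / drift-time pairs."""
        return DBSCAN(eps=eps, min_samples=3).fit_predict(peak_positions)

    def fit_classifier(features, labels):
        """features: one row per measurement, one column per consensus peak."""
        return RandomForestClassifier(n_estimators=500, random_state=0).fit(features, labels)
    ```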

  5. Automatic Exposure Control Device for Digital Mammography

    DTIC Science & Technology

    2001-08-01

    developing innovative approaches for controlling DM exposures. These approaches entail using the digital detector and an artificial neural network to...of interest that determine the exposure parameters for the fully exposed image; and (2) to use an artificial neural network to select exposure

  6. Automatic Exposure Control Device for Digital Mammography

    DTIC Science & Technology

    2004-08-01

    developing innovative approaches for controlling DM exposures. These approaches entail using the digital detector and an artificial neural network to...of interest that determine the exposure parameters for the fully exposed image; and (2) to use an artificial neural network to select exposure

  7. Operation of the Australian Store.Synchrotron for macromolecular crystallography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Grischa R.; Aragão, David; Mudie, Nathan J.

    2014-10-01

    The Store.Synchrotron service, a fully functional, cloud computing-based solution to raw X-ray data archiving and dissemination at the Australian Synchrotron, is described. The service automatically receives and archives raw diffraction data, related metadata and preliminary results of automated data-processing workflows. Data are able to be shared with collaborators and opened to the public. In the nine months since its deployment in August 2013, the service has handled over 22.4 TB of raw data (∼1.7 million diffraction images). Several real examples from the Australian crystallographic community are described that illustrate the advantages of the approach, which include real-time online data access and fully redundant, secure storage. Discoveries in biological sciences increasingly require multidisciplinary approaches. With this in mind, Store.Synchrotron has been developed as a component within a greater service that can combine data from other instruments at the Australian Synchrotron, as well as instruments at the Australian neutron source ANSTO. It is therefore envisaged that this will serve as a model implementation of raw data archiving and dissemination within the structural biology research community.

  8. Convolution neural networks for real-time needle detection and localization in 2D ultrasound.

    PubMed

    Mwikirize, Cosmas; Nosher, John L; Hacihaliloglu, Ilker

    2018-05-01

    We propose a framework for automatic and accurate detection of steeply inserted needles in 2D ultrasound data using convolution neural networks. We demonstrate its application in needle trajectory estimation and tip localization. Our approach consists of a unified network, comprising a fully convolutional network (FCN) and a fast region-based convolutional neural network (R-CNN). The FCN proposes candidate regions, which are then fed to a fast R-CNN for finer needle detection. We leverage a transfer learning paradigm, where the network weights are initialized by training with non-medical images, and fine-tuned with ex vivo ultrasound scans collected during insertion of a 17G epidural needle into freshly excised porcine and bovine tissue at depth settings up to 9 cm and [Formula: see text]-[Formula: see text] insertion angles. Needle detection results are used to accurately estimate needle trajectory from intensity invariant needle features and perform needle tip localization from an intensity search along the needle trajectory. Our needle detection model was trained and validated on 2500 ex vivo ultrasound scans. The detection system has a frame rate of 25 fps on a GPU and achieves 99.6% precision, 99.78% recall rate and an [Formula: see text] score of 0.99. Validation for needle localization was performed on 400 scans collected using a different imaging platform, over a bovine/porcine lumbosacral spine phantom. Shaft localization error of [Formula: see text], tip localization error of [Formula: see text] mm, and a total processing time of 0.58 s were achieved. The proposed method is fully automatic and provides robust needle localization results in challenging scanning conditions. The accurate and robust results coupled with real-time detection and sub-second total processing make the proposed method promising in applications for needle detection and localization during challenging minimally invasive ultrasound-guided procedures.

  9. Corral framework: Trustworthy and fully functional data intensive parallel astronomical pipelines

    NASA Astrophysics Data System (ADS)

    Cabral, J. B.; Sánchez, B.; Beroiz, M.; Domínguez, M.; Lares, M.; Gurovich, S.; Granitto, P.

    2017-07-01

    Data processing pipelines represent an important slice of the astronomical software library, comprising chains of processes that transform raw data into valuable information via data reduction and analysis. In this work we present Corral, a Python framework for astronomical pipeline generation. Corral features a Model-View-Controller design pattern on top of an SQL relational database, capable of handling custom data models, processing stages, and communication alerts; it also provides automatic quality and structural metrics based on unit testing. The Model-View-Controller design provides concept separation between the user logic and the data models, delivering at the same time multi-processing and distributed computing capabilities. Corral represents an improvement over commonly found data processing pipelines in astronomy, since the design pattern frees the programmer from dealing with processing flow and parallelization issues, allowing them to focus on the specific algorithms needed for the successive data transformations, and at the same time provides a broad measure of quality over the created pipeline. Corral and working examples of pipelines that use it are available to the community at https://github.com/toros-astro.

  10. Fully automatic and reference-marker-free image stitching method for full-spine and full-leg imaging with computed radiography

    NASA Astrophysics Data System (ADS)

    Wang, Xiaohui; Foos, David H.; Doran, James; Rogers, Michael K.

    2004-05-01

    Full-leg and full-spine imaging with standard computed radiography (CR) systems requires several cassettes/storage phosphor screens to be placed in a staggered arrangement and exposed simultaneously to achieve an increased imaging area. A method has been developed that can automatically and accurately stitch the acquired sub-images without relying on any external reference markers. It can detect and correct the order, orientation, and overlap arrangement of the sub-images for stitching. The automatic determination of the order, orientation, and overlap arrangement of the sub-images consists of (1) constructing a hypothesis list that includes all cassette/screen arrangements, (2) refining hypotheses based on a set of rules derived from imaging physics, (3) correlating each consecutive sub-image pair in each hypothesis and establishing an overall figure-of-merit, and (4) selecting the hypothesis of maximum figure-of-merit. The stitching process requires the CR reader to overscan each CR screen so that the screen edges are completely visible in the acquired sub-images. The rotational displacement and vertical displacement between two consecutive sub-images are calculated by matching the orientation and location of the screen edge in the front image and its corresponding shadow in the back image. The horizontal displacement is estimated by maximizing the correlation function between the two image sections in the overlap region. Accordingly, the two images are stitched together. This process is repeated for the newly stitched composite image and the next consecutive sub-image until a full composite image is created. The method has been evaluated in both phantom experiments and clinical studies. The standard deviation of image misregistration is below one image pixel.
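
    The horizontal-displacement step, maximizing the correlation between the two overlap sections, can be sketched as a one-dimensional search (an illustration; the product's implementation details are not given here):

    ```python
    # Hedged sketch: normalized cross-correlation over candidate horizontal
    # shifts between the overlapping strips of two sub-images.
    import numpy as np

    def horizontal_shift(top_overlap, bottom_overlap, max_shift=50):
        best_shift, best_score = 0, -np.inf
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(bottom_overlap, dx, axis=1)   # wraps at borders; crop in practice
            a = top_overlap - top_overlap.mean()
            b = shifted - shifted.mean()
            score = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
            if score > best_score:
                best_shift, best_score = dx, score
        return best_shift
    ```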

  11. Automatic alkaloid removal system.

    PubMed

    Yahaya, Muhammad Rizuwan; Hj Razali, Mohd Hudzari; Abu Bakar, Che Abdullah; Ismail, Wan Ishak Wan; Muda, Wan Musa Wan; Mat, Nashriyah; Zakaria, Abd

    2014-01-01

    This automated alkaloid removal machine was developed at the Instrumentation Laboratory, Universiti Sultan Zainal Abidin, Malaysia, purposely for removing alkaloid toxicity from Dioscorea hispida (DH) tubers. DH is a poisonous plant: scientific studies have shown that its tubers contain the toxic alkaloid dioscorine, and the tubers can only be consumed after the poison is removed. In this experiment, the tubers need to be blended into powder form before being inserted into the machine basket. The user pushes the START button on the machine controller to switch the water pump ON, creating a turbulent wave of water in the machine tank. The water stops automatically by triggering the outlet solenoid valve. The tuber powder is washed for 10 minutes while 1 liter of water contaminated with the toxin mixture flows out. At this point, the controller automatically triggers the inlet solenoid valve, and fresh water flows into the machine tank until it reaches the desired level, as determined by an ultrasonic sensor. This process is repeated for 7 h, after which a positive and significant result is achieved according to several biological parameters: pH, temperature, dissolved oxygen, turbidity, conductivity, and fish survival rate/time. These parameters are near or equal to those of the control water, and the toxin is assumed to be fully removed when the pH of the DH powder wash water is near that of the control water. For the control water, the pH is about 5.3, while the water from this experimental process is 6.0; before running the machine, the pH of the contaminated water is about 3.8, which is too acidic. This automated machine saves time in removing toxicity from DH compared with the traditional method, while requiring less observation by the user.

  12. Automatic processing influences free recall: converging evidence from the process dissociation procedure and remember-know judgments.

    PubMed

    McCabe, David P; Roediger, Henry L; Karpicke, Jeffrey D

    2011-04-01

    Dual-process theories of retrieval suggest that controlled and automatic processing contribute to memory performance. Free recall tests are often considered pure measures of recollection, assessing only the controlled process. We report two experiments demonstrating that automatic processes also influence free recall. Experiment 1 used inclusion and exclusion tasks to estimate recollection and automaticity in free recall, adopting a new variant of the process dissociation procedure. Dividing attention during study selectively reduced the recollection estimate but did not affect the automatic component. In Experiment 2, we replicated the results of Experiment 1, and subjects additionally reported remember-know-guess judgments during recall in the inclusion condition. In the latter task, dividing attention during study reduced remember judgments for studied items, but know responses were unaffected. Results from both methods indicated that free recall is partly driven by automatic processes. Thus, we conclude that retrieval in free recall tests is not driven solely by conscious recollection (or remembering) but also by automatic influences of the same sort believed to drive priming on implicit memory tests. Sometimes items come to mind without volition in free recall.
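
    For context, the standard process-dissociation estimating equations (which the variant above adapts to free recall) are: inclusion I = R + A(1 − R) and exclusion E = A(1 − R), giving R = I − E and A = E/(1 − R). In code:

    ```python
    # Hedged sketch of the classic Jacoby estimates; the paper's free-recall
    # variant may differ in detail.
    def pdp_estimates(inclusion, exclusion):
        r = inclusion - exclusion                               # controlled (recollection)
        a = exclusion / (1.0 - r) if r < 1.0 else float("nan")  # automatic
        return r, a

    r, a = pdp_estimates(0.60, 0.25)                            # r = 0.35, a ≈ 0.38
    ```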

  13. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation.

    PubMed

    Kamnitsas, Konstantinos; Ledig, Christian; Newcombe, Virginia F J; Simpson, Joanna P; Kane, Andrew D; Menon, David K; Rueckert, Daniel; Glocker, Ben

    2017-02-01

    We propose a dual pathway, 11-layers deep, three-dimensional Convolutional Neural Network for the challenging task of brain lesion segmentation. The devised architecture is the result of an in-depth analysis of the limitations of current networks proposed for similar applications. To overcome the computational burden of processing 3D medical scans, we have devised an efficient and effective dense training scheme which joins the processing of adjacent image patches into one pass through the network while automatically adapting to the inherent class imbalance present in the data. Further, we analyze the development of deeper, thus more discriminative 3D CNNs. In order to incorporate both local and larger contextual information, we employ a dual pathway architecture that processes the input images at multiple scales simultaneously. For post-processing of the network's soft segmentation, we use a 3D fully connected Conditional Random Field which effectively removes false positives. Our pipeline is extensively evaluated on three challenging tasks of lesion segmentation in multi-channel MRI patient data with traumatic brain injuries, brain tumours, and ischemic stroke. We improve on the state-of-the-art for all three applications, with top ranking performance on the public benchmarks BRATS 2015 and ISLES 2015. Our method is computationally efficient, which allows its adoption in a variety of research and clinical settings. The source code of our implementation is made publicly available. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  14. A Modular Hierarchical Approach to 3D Electron Microscopy Image Segmentation

    PubMed Central

    Liu, Ting; Jones, Cory; Seyedhosseini, Mojtaba; Tasdizen, Tolga

    2014-01-01

    The study of neural circuit reconstruction, i.e., connectomics, is a challenging problem in neuroscience. Automated and semi-automated electron microscopy (EM) image analysis can be tremendously helpful for connectomics research. In this paper, we propose a fully automatic approach for intra-section segmentation and inter-section reconstruction of neurons using EM images. A hierarchical merge tree structure is built to represent multiple region hypotheses and supervised classification techniques are used to evaluate their potentials, based on which we resolve the merge tree with consistency constraints to acquire final intra-section segmentation. Then, we use a supervised learning based linking procedure for the inter-section neuron reconstruction. Also, we develop a semi-automatic method that utilizes the intermediate outputs of our automatic algorithm and achieves intra-segmentation with minimal user intervention. The experimental results show that our automatic method can achieve close-to-human intra-segmentation accuracy and state-of-the-art inter-section reconstruction accuracy. We also show that our semi-automatic method can further improve the intra-segmentation accuracy. PMID:24491638

  15. Automatic intraaortic balloon pump timing using an intrabeat dicrotic notch prediction algorithm.

    PubMed

    Schreuder, Jan J; Castiglioni, Alessandro; Donelli, Andrea; Maisano, Francesco; Jansen, Jos R C; Hanania, Ramzi; Hanlon, Pat; Bovelander, Jan; Alfieri, Ottavio

    2005-03-01

    The efficacy of intraaortic balloon counterpulsation (IABP) during arrhythmic episodes is questionable. A novel algorithm for intrabeat prediction of the dicrotic notch was used for real time IABP inflation timing control. A windkessel model algorithm was used to calculate real-time aortic flow from aortic pressure. The dicrotic notch was predicted using a percentage of calculated peak flow. Automatic inflation timing was set at intrabeat predicted dicrotic notch and was combined with automatic IAB deflation. Prophylactic IABP was applied in 27 patients with low ejection fraction (< 35%) undergoing cardiac surgery. Analysis of IABP at a 1:4 ratio revealed that IAB inflation occurred at a mean of 0.6 +/- 5 ms from the dicrotic notch. In all patients accurate automatic timing at a 1:1 assist ratio was performed. Seventeen patients had episodes of severe arrhythmia, the novel IABP inflation algorithm accurately assisted 318 of 320 arrhythmic beats at a 1:1 ratio. The novel real-time intrabeat IABP inflation timing algorithm performed accurately in all patients during both regular rhythms and severe arrhythmia, allowing fully automatic intrabeat IABP timing.
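
    A two-element windkessel gives flow from pressure as Q = C·dP/dt + P/R; the notch can then be predicted intrabeat when Q falls to a preset fraction of its peak. The sketch below is an illustration under assumed C, R, and fraction values, not the device's algorithm:

    ```python
    # Hedged sketch: windkessel flow estimate and intrabeat dicrotic-notch
    # prediction as the first sample where flow drops below frac * peak.
    import numpy as np

    def predict_notch(pressure, dt, C=1.2, R=1.0, frac=0.3):
        q = C * np.gradient(np.asarray(pressure, dtype=float), dt) + np.asarray(pressure) / R
        i_peak = int(np.argmax(q))
        below = np.where(q[i_peak:] <= frac * q[i_peak])[0]
        return i_peak + int(below[0]) if below.size else None   # sample index of notch
    ```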

  16. Fully automatic hp-adaptivity for acoustic and electromagnetic scattering in three dimensions

    NASA Astrophysics Data System (ADS)

    Kurtz, Jason Patrick

    We present an algorithm for fully automatic hp-adaptivity for finite element approximations of elliptic and Maxwell boundary value problems in three dimensions. The algorithm automatically generates a sequence of coarse grids, and a corresponding sequence of fine grids, such that the energy norm of the error decreases exponentially with respect to the number of degrees of freedom in either sequence. At each step, we employ a discrete optimization algorithm to determine the refinements for the current coarse grid such that the projection-based interpolation error for the current fine grid solution decreases with an optimal rate with respect to the number of degrees of freedom added by the refinement. The refinements are restricted only by the requirement that the resulting mesh is at most 1-irregular, but they may be anisotropic in both element size h and order of approximation p. While we cannot prove that our method converges at all, we present numerical evidence of exponential convergence for a diverse suite of model problems from acoustic and electromagnetic scattering. In particular we show that our method is well suited to the automatic resolution of exterior problems truncated by the introduction of a perfectly matched layer. To enable and accelerate the solution of these problems on commodity hardware, we include a detailed account of three critical aspects of our implementation, namely an efficient implementation of sum factorization, several efficient interfaces to the direct multi-frontal solver MUMPS, and some fast direct solvers for the computation of a sequence of nested projections.

  17. Point-and-stare operation and high-speed image acquisition in real-time hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Driver, Richard D.; Bannon, David P.; Ciccone, Domenic; Hill, Sam L.

    2010-04-01

    The design and optical performance of a small-footprint, low-power, turnkey, Point-And-Stare hyperspectral analyzer, capable of fully automated field deployment in remote and harsh environments, is described. The unit is packaged for outdoor operation in an IP56 protected air-conditioned enclosure and includes a mechanically ruggedized fully reflective, aberration-corrected hyperspectral VNIR (400-1000 nm) spectrometer with a board-level detector optimized for point and stare operation, an on-board computer capable of full system data-acquisition and control, and a fully functioning internal hyperspectral calibration system for in-situ system spectral calibration and verification. Performance data on the unit under extremes of real-time survey operation and high spatial and high spectral resolution will be discussed. Hyperspectral acquisition including full parameter tracking is achieved by the addition of a fiber-optic based downwelling spectral channel for solar illumination tracking during hyperspectral acquisition and the use of other sensors for spatial and directional tracking to pinpoint view location. The system is mounted on a Pan-And-Tilt device, automatically controlled from the analyzer's on-board computer, making the Hyperspec™ particularly adaptable for base security, border protection and remote deployments. A hyperspectral macro library has been developed to control hyperspectral image acquisition, system calibration and scene location control. The software allows the system to be operated in a fully automatic mode or under direct operator control through a GigE interface.

  18. PANDA: a pipeline toolbox for analyzing brain diffusion images.

    PubMed

    Cui, Zaixu; Zhong, Suyu; Xu, Pengfei; He, Yong; Gong, Gaolang

    2013-01-01

    Diffusion magnetic resonance imaging (dMRI) is widely used in both scientific research and clinical practice in in-vivo studies of the human brain. While a number of post-processing packages have been developed, fully automated processing of dMRI datasets remains challenging. Here, we developed a MATLAB toolbox named "Pipeline for Analyzing braiN Diffusion imAges" (PANDA) for fully automated processing of brain diffusion images. The processing modules of a few established packages, including FMRIB Software Library (FSL), Pipeline System for Octave and Matlab (PSOM), Diffusion Toolkit and MRIcron, were employed in PANDA. Using any number of raw dMRI datasets from different subjects, in either DICOM or NIfTI format, PANDA can automatically perform a series of steps to process DICOM/NIfTI to diffusion metrics [e.g., fractional anisotropy (FA) and mean diffusivity (MD)] that are ready for statistical analysis at the voxel-level, the atlas-level and the Tract-Based Spatial Statistics (TBSS)-level and can finish the construction of anatomical brain networks for all subjects. In particular, PANDA can process different subjects in parallel, using multiple cores either in a single computer or in a distributed computing environment, thus greatly reducing the time cost when dealing with a large number of datasets. In addition, PANDA has a friendly graphical user interface (GUI), allowing the user to be interactive and to adjust the input/output settings, as well as the processing parameters. As an open-source package, PANDA is freely available at http://www.nitrc.org/projects/panda/. This novel toolbox is expected to substantially simplify the image processing of dMRI datasets and facilitate human structural connectome studies.

  20. Effortful versus automatic emotional processing in schizophrenia: Insights from a face-vignette task.

    PubMed

    Patrick, Regan E; Rastogi, Anuj; Christensen, Bruce K

    2015-01-01

    Adaptive emotional responding relies on dual automatic and effortful processing streams. Dual-stream models of schizophrenia (SCZ) posit a selective deficit in neural circuits that govern goal-directed, effortful processes versus reactive, automatic processes. This imbalance suggests that when patients are confronted with competing automatic and effortful emotional response cues, they will exhibit diminished effortful responding and intact, possibly elevated, automatic responding compared to controls. This prediction was evaluated using a modified version of the face-vignette task (FVT). Participants viewed emotional faces (automatic response cue) paired with vignettes (effortful response cue) that signalled a different emotion category and were instructed to discriminate the manifest emotion. Patients made less vignette and more face responses than controls. However, the relationship between group and FVT responding was moderated by IQ and reading comprehension ability. These results replicate and extend previous research and provide tentative support for abnormal conflict resolution between automatic and effortful emotional processing predicted by dual-stream models of SCZ.

  1. Automatic Abstraction in Planning

    NASA Technical Reports Server (NTRS)

    Christensen, J.

    1991-01-01

    Traditionally, abstraction in planning has been accomplished by either state abstraction or operator abstraction, neither of which has been fully automatic. We present a new method, predicate relaxation, for automatically performing state abstraction. PABLO, a nonlinear hierarchical planner, implements predicate relaxation. Theoretical as well as empirical results are presented that demonstrate the potential advantages of using predicate relaxation in planning. We also present a new definition of hierarchical operators that allows us to guarantee a limited form of completeness. This new definition is shown to be, in some ways, more flexible than previous definitions of hierarchical operators. Finally, a Classical Truth Criterion is presented that is proven to be sound and complete for a planning formalism that is general enough to include most classical planning formalisms based on the STRIPS assumption.

  2. A fast and automatic mosaic method for high-resolution satellite images

    NASA Astrophysics Data System (ADS)

    Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing

    2015-12-01

    We propose a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlapped rectangle is computed according to the geographical locations of the reference and mosaic images, and feature points are extracted from both images by a scale-invariant feature transform (SIFT) algorithm, only within the overlapped region. Then, the RANSAC method is used to match the feature points of both images. Finally, the two images are fused into a seamless panoramic image by the simple linear weighted fusion method or another method. The proposed method is implemented in C++ based on OpenCV and GDAL, and tested on Worldview-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method can detect feature points efficiently and mosaic images automatically.
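
    The matching core (SIFT on the overlap regions, then RANSAC) maps directly onto OpenCV calls; a Python sketch of the same idea (the authors' implementation is C++, and the ratio-test threshold here is an assumption):

    ```python
    # Hedged sketch: SIFT keypoints from the overlap regions, ratio-test
    # matching, and RANSAC transform fitting.
    import cv2
    import numpy as np

    def overlap_transform(ref_overlap, mos_overlap):
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(ref_overlap, None)
        k2, d2 = sift.detectAndCompute(mos_overlap, None)
        matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        return H                      # homography between the two overlap regions
    ```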

  3. Solar-Powered Water Distillation

    NASA Technical Reports Server (NTRS)

    Menninger, F. J.; Elder, R. J.

    1985-01-01

    A solar-powered still produces pure water at a rate of 6,000 gallons per year. The still is fully automatic and gravity-fed; the only outside electric power required is for a timer clock and a solenoid-operated valve. The still saves $5,000 yearly in energy costs and pays for itself in 3 1/2 years.

  4. Prosody's Contribution to Fluency: An Examination of the Theory of Automatic Information Processing

    ERIC Educational Resources Information Center

    Schrauben, Julie E.

    2010-01-01

    LaBerge and Samuels' (1974) theory of automatic information processing in reading offers a model that explains how and where the processing of information occurs and the degree to which processing of information occurs. These processes are dependent upon two criteria: accurate word decoding and automatic word recognition. However, LaBerge and…

  5. AutoBayes Program Synthesis System Users Manual

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Jafari, Hamed; Pressburger, Tom; Denney, Ewen; Buntine, Wray; Fischer, Bernd

    2008-01-01

    Program synthesis is the systematic, automatic construction of efficient executable code from high-level declarative specifications. AutoBayes is a fully automatic program synthesis system for the statistical data analysis domain; in particular, it solves parameter estimation problems. It has seen many successful applications at NASA and is currently being used, for example, to analyze simulation results for Orion. The input to AutoBayes is a concise description of a data analysis problem composed of a parameterized statistical model and a goal that is a probability term involving parameters and input data. The output is optimized and fully documented C/C++ code computing the values for those parameters that maximize the probability term. AutoBayes can solve many subproblems symbolically rather than having to rely on numeric approximation algorithms, thus yielding effective, efficient, and compact code. Statistical analysis is faster and more reliable, because effort can be focused on model development and validation rather than manual development of solution algorithms and code.
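
    To illustrate the flavor of symbolic (rather than numeric) solution of an estimation subproblem, though not AutoBayes' own machinery, consider deriving the maximum-likelihood mean of a Gaussian with SymPy:

    ```python
    # Hedged illustration: set d/dmu of the log-likelihood to zero and solve
    # symbolically; only the term involving mu is kept (constants drop out,
    # since sum_i (x_i - mu)^2 = n*(xbar - mu)^2 + const).
    import sympy as sp

    mu, sigma, n = sp.symbols("mu sigma n", positive=True)
    xbar = sp.Symbol("xbar")                    # sample mean, kept symbolic
    loglik = -n * (xbar - mu) ** 2 / (2 * sigma ** 2)
    print(sp.solve(sp.diff(loglik, mu), mu))    # [xbar]: the closed-form estimator
    ```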

  6. SU-E-J-117: Verification Method for the Detection Accuracy of Automatic Winston Lutz Test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, A; Chan, K; Fee, F

    2014-06-01

    Purpose: The Winston-Lutz test (WLT) is a standard QA procedure performed prior to SRS treatment to verify mechanical iso-center setup accuracy under different gantry/couch movements. Several detection algorithms exist for analyzing the ball-radiation field alignment automatically; however, the accuracy of these algorithms has not been fully addressed. Here, we reveal the possible errors arising from each step in WLT and verify the software detection accuracy with the Rectilinear Phantom Pointer (RLPP), a tool commonly used for aligning the treatment plan coordinate with the mechanical iso-center. Methods: WLT was performed with the radio-opaque ball mounted on a MIS and irradiated onto EDR2 films. The films were scanned and processed with an in-house Matlab program for automatic iso-center detection. Tests were also performed to identify the errors arising from the setup, film development and scanning processes. The radio-opaque ball was then mounted onto the RLPP and manually offset laterally and longitudinally to 7 known positions (0, ±0.2, ±0.5, ±0.8 mm) for irradiation. The gantry and couch were set to zero degrees for all irradiations. The same scanned images were processed repeatedly to check the repeatability of the software. Results: Minimal discrepancies (mean = 0.05 mm) were detected with 2 films overlapped and irradiated together but developed separately; this reveals the error arising from the film processor and scanner alone. Maximum setup errors were found to be around 0.2 mm, by analyzing data collected from 10 irradiations over 2 months. For the known shifts introduced using the RLPP, the results agree with the manual offsets and fit linearly (R² > 0.99) when plotted relative to the first ball with zero shift. Conclusion: We systematically reveal the possible errors arising from each step in WLT and introduce a simple method to verify the detection accuracy of our in-house software using a clinically available tool.

  7. Dissociation between controlled and automatic processes in the behavioral variant of fronto-temporal dementia.

    PubMed

    Collette, Fabienne; Van der Linden, Martial; Salmon, Eric

    2010-01-01

    A decline of cognitive functioning affecting several cognitive domains has frequently been reported in patients with frontotemporal dementia. We were interested in determining whether these deficits can be interpreted as reflecting an impairment of controlled cognitive processes by using an assessment tool specifically developed to explore the distinction between automatic and controlled processes, namely the process dissociation procedure (PDP) developed by Jacoby. The PDP was applied to a word stem completion task to determine the contribution of automatic and controlled processes to episodic memory performance and was administered to a group of 12 patients with the behavioral variant of frontotemporal dementia (bv-FTD) and 20 control subjects (CS). Bv-FTD patients obtained a lower performance than CS on the estimates of controlled processes, but no group differences were observed for the estimates of automatic processes. The between-groups comparison of the estimates of controlled and automatic processes showed a larger contribution of automatic processes to performance in bv-FTD, while a slightly larger contribution of controlled processes was observed in control subjects. These results are clearly indicative of an alteration of controlled memory processes in bv-FTD.

  8. Visual vs Fully Automatic Histogram-Based Assessment of Idiopathic Pulmonary Fibrosis (IPF) Progression Using Sequential Multidetector Computed Tomography (MDCT)

    PubMed Central

    Colombi, Davide; Dinkel, Julien; Weinheimer, Oliver; Obermayer, Berenike; Buzan, Teodora; Nabers, Diana; Bauer, Claudia; Oltmanns, Ute; Palmowski, Karin; Herth, Felix; Kauczor, Hans Ulrich; Sverzellati, Nicola

    2015-01-01

    Objectives To describe changes over time in extent of idiopathic pulmonary fibrosis (IPF) at multidetector computed tomography (MDCT) assessed by semi-quantitative visual scores (VSs) and fully automatic histogram-based quantitative evaluation, and to test the relationship between these two methods of quantification. Methods Forty IPF patients (median age: 70 y, interquartile: 62-75 years; M:F, 33:7) who underwent 2 MDCT at different time points with a median interval of 13 months (interquartile: 10-17 months) were retrospectively evaluated. The in-house software YACTA automatically quantified the lung density histogram (10th-90th percentile in 5-percentile steps). Longitudinal changes in VSs and in the percentiles of the attenuation histogram were obtained in 20 untreated patients and 20 patients treated with pirfenidone. Pearson correlation analysis was used to test the relationship between VSs and selected percentiles. Results In follow-up MDCT, the visual overall extent of parenchymal abnormalities (OE) increased in median by 5%/year (interquartile: 0%/y; +11%/y). A substantial difference was found between treated and untreated patients in the HU changes of the 40th and the 80th percentiles of the density histogram. Correlation analysis between VSs and selected percentiles showed a higher correlation between the changes (Δ) in OE and Δ 40th percentile (r=0.69; p<0.001) as compared to Δ 80th percentile (r=0.58; p<0.001); a closer correlation was found between Δ ground-glass extent and Δ 40th percentile (r=0.66, p<0.001) as compared to Δ 80th percentile (r=0.47, p=0.002), while Δ reticulations correlated better with Δ 80th percentile (r=0.56, p<0.001) than with Δ 40th percentile (r=0.43, p=0.003). Conclusions There is a relevant and fully automatically measurable difference at MDCT in VSs and in histogram analysis at one-year follow-up of IPF patients, whether treated or untreated: Δ 40th percentile might reflect the change in overall extent of lung abnormalities, notably of the ground-glass pattern, while Δ 80th percentile might reveal the course of reticular opacities. PMID:26110421
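
    A minimal sketch of the histogram measure, assuming the lung voxel attenuation values (in HU) are available as flat arrays; the synthetic data below are illustrative stand-ins, not output of YACTA.

    ```python
    import numpy as np

    # Stand-in voxel attenuation values (HU) for baseline and follow-up scans.
    rng = np.random.default_rng(0)
    baseline_hu = rng.normal(-800, 120, 50_000)
    followup_hu = baseline_hu + 25          # simulated densification over time
    percentiles = np.arange(10, 95, 5)      # 10th-90th in 5-percentile steps
    delta = (np.percentile(followup_hu, percentiles)
             - np.percentile(baseline_hu, percentiles))
    for p in (40, 80):
        print(f"Delta {p}th percentile: {delta[percentiles == p][0]:.1f} HU")
    ```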

  9. Automatic, Rapid Replanning of Satellite Operations for Space Situational Awareness (SSA)

    NASA Astrophysics Data System (ADS)

    Stottler, D.; Mahan, K.

    An important component of Space Situational Awareness (SSA) is knowledge of the status and tasking of blue forces (e.g. satellites and ground stations), together with the rapid determination of the impacts of real or hypothetical changes and the ability to quickly replan based on those changes. For example, if an antenna goes down (either for benign reasons or from purposeful interference), determining which missions will be impacted is important. It is not simply the set of missions that were scheduled to utilize that antenna, because highly expert human schedulers will respond to the outage by intelligently replanning the real-time schedule. We have developed an automatic scheduling and deconfliction engine, called MIDAS (Managed Intelligent Deconfliction And Scheduling), that interfaces to the current legacy system (ESD 2.7) and can perform this replanning function automatically. In addition to determining the impact of failed resources, MIDAS can also replan in response to a satellite under attack. In this situation, additional supports must be quickly scheduled and executed (while minimizing impacts to other missions). Because MIDAS is a fully automatic system, replacing a currently labor-intensive human process, and provides very rapid turnaround (seconds), it can also be used by commanders to consider what-if questions and focus limited protection resources on the most critical resources. For example, the commander can determine the impact of a successful attack on one of two ground stations and place heavier emphasis on protecting the station whose loss would create the most severe impacts. The system is currently transitioning to operational use. The MIDAS system and its interface to the legacy ESD 2.7 system are described, along with the ConOps for different types of detailed operational scenarios.
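
    As a toy sketch only (the abstract does not disclose MIDAS internals), the replanning step can be pictured as reassigning supports from a failed antenna to free overlapping slots elsewhere, highest-priority missions first; whatever cannot be reassigned is the residual mission impact. The data model below is hypothetical, not the MIDAS/ESD 2.7 schema.

    ```python
    # Toy outage-replanning sketch with a hypothetical schedule data model.
    def replan(schedule, failed_antenna, free_slots):
        """schedule: list of dicts with 'mission', 'priority' (lower = more
        important), 'slot', 'antenna'. free_slots: antenna name -> set of slots."""
        impacted = [s for s in schedule if s["antenna"] == failed_antenna]
        still_impacted = []
        for s in sorted(impacted, key=lambda s: s["priority"]):
            options = [a for a, slots in free_slots.items() if s["slot"] in slots]
            if options:
                s["antenna"] = options[0]
                free_slots[options[0]].remove(s["slot"])
            else:
                still_impacted.append(s)   # impacted even after replanning
        return still_impacted

    schedule = [{"mission": "M1", "priority": 1, "slot": 3, "antenna": "A"},
                {"mission": "M2", "priority": 2, "slot": 5, "antenna": "A"}]
    print(replan(schedule, "A", {"B": {3}, "C": set()}))  # M2 remains impacted
    ```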

  10. Significantly improved precision of cell migration analysis in time-lapse video microscopy through use of a fully automated tracking system

    PubMed Central

    2010-01-01

    Background Cell motility is a critical parameter in many physiological as well as pathophysiological processes. In time-lapse video microscopy, manual cell tracking remains the most common method of analyzing the migratory behavior of cell populations. In addition to being labor-intensive, this method is susceptible to user-dependent errors regarding the selection of "representative" subsets of cells and the manual determination of precise cell positions. Results We have quantitatively analyzed these error sources, demonstrating that manual cell tracking of pancreatic cancer cells leads to miscalculations of migration rates of up to 410%. In order to provide objective measurements of cell migration rates, we have employed multi-target tracking technologies commonly used in radar applications to develop a fully automated cell identification and tracking system suitable for high-throughput screening of video sequences of unstained living cells. Conclusion We demonstrate that our automatic multi-target tracking system identifies cell objects, follows individual cells and computes migration rates with high precision, clearly outperforming manual procedures. PMID:20377897
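
    A minimal sketch of the core association step in such multi-target tracking, assuming detections have been reduced to centroid coordinates: frame-to-frame assignment by minimizing total cost with the Hungarian algorithm (SciPy's linear_sum_assignment). The positions below are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    # Assign detections in frame t+1 to tracks from frame t by minimizing
    # the total squared distance between matched pairs.
    tracks = np.array([[10.0, 12.0], [40.0, 8.0], [25.0, 30.0]])      # frame t
    detections = np.array([[11.0, 13.5], [26.0, 29.0], [41.5, 7.0]])  # frame t+1
    cost = cdist(tracks, detections, metric='sqeuclidean')
    rows, cols = linear_sum_assignment(cost)
    for t_idx, d_idx in zip(rows, cols):
        step = np.linalg.norm(detections[d_idx] - tracks[t_idx])
        print(f"track {t_idx} -> detection {d_idx}, displacement {step:.2f} px")
    ```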

  11. Operation of the Australian Store.Synchrotron for macromolecular crystallography

    PubMed Central

    Meyer, Grischa R.; Aragão, David; Mudie, Nathan J.; Caradoc-Davies, Tom T.; McGowan, Sheena; Bertling, Philip J.; Groenewegen, David; Quenette, Stevan M.; Bond, Charles S.; Buckle, Ashley M.; Androulakis, Steve

    2014-01-01

    The Store.Synchrotron service, a fully functional, cloud computing-based solution to raw X-ray data archiving and dissemination at the Australian Synchrotron, is described. The service automatically receives and archives raw diffraction data, related metadata and preliminary results of automated data-processing workflows. Data are able to be shared with collaborators and opened to the public. In the nine months since its deployment in August 2013, the service has handled over 22.4 TB of raw data (∼1.7 million diffraction images). Several real examples from the Australian crystallographic community are described that illustrate the advantages of the approach, which include real-time online data access and fully redundant, secure storage. Discoveries in biological sciences increasingly require multidisciplinary approaches. With this in mind, Store.Synchrotron has been developed as a component within a greater service that can combine data from other instruments at the Australian Synchrotron, as well as instruments at the Australian neutron source ANSTO. It is therefore envisaged that this will serve as a model implementation of raw data archiving and dissemination within the structural biology research community. PMID:25286837

  12. Operation of the Australian Store.Synchrotron for macromolecular crystallography.

    PubMed

    Meyer, Grischa R; Aragão, David; Mudie, Nathan J; Caradoc-Davies, Tom T; McGowan, Sheena; Bertling, Philip J; Groenewegen, David; Quenette, Stevan M; Bond, Charles S; Buckle, Ashley M; Androulakis, Steve

    2014-10-01

    The Store.Synchrotron service, a fully functional, cloud computing-based solution to raw X-ray data archiving and dissemination at the Australian Synchrotron, is described. The service automatically receives and archives raw diffraction data, related metadata and preliminary results of automated data-processing workflows. Data are able to be shared with collaborators and opened to the public. In the nine months since its deployment in August 2013, the service has handled over 22.4 TB of raw data (∼1.7 million diffraction images). Several real examples from the Australian crystallographic community are described that illustrate the advantages of the approach, which include real-time online data access and fully redundant, secure storage. Discoveries in biological sciences increasingly require multidisciplinary approaches. With this in mind, Store.Synchrotron has been developed as a component within a greater service that can combine data from other instruments at the Australian Synchrotron, as well as instruments at the Australian neutron source ANSTO. It is therefore envisaged that this will serve as a model implementation of raw data archiving and dissemination within the structural biology research community.

  13. Geometric aspects in digital analysis of Multi-Spectral Scanner (MSS) data

    NASA Technical Reports Server (NTRS)

    Mikhail, E. M.; Baker, J. R.

    1973-01-01

    Present automated interpretation systems, which apply pattern recognition techniques to MSS data, do not fully consider the geometry of the acquisition system. In an effort to improve the usefulness of MSS data when digitally treated, geometric aspects are analyzed and discussed. Attempts to correct for scanner instabilities in position and orientation by affine and polynomial transformations, as well as by modified collinearity equations, are described. Methods of accounting for panoramic and relief effects are also discussed. It is anticipated that reliable area as well as position determinations can be accomplished during the process of automatic interpretation. A concept for a unified approach to the treatment of remote sensing data, both metric and nonmetric, is presented.
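
    A minimal sketch of the simplest correction mentioned, an affine transformation of scanner pixel coordinates. In practice the six parameters are estimated from ground control points; the values below are illustrative.

    ```python
    import numpy as np

    # Affine correction of scanner pixel coordinates: scale/shear/rotation A
    # and translation t, normally estimated from ground control points.
    A = np.array([[1.02, 0.01],
                  [-0.01, 0.98]])
    t = np.array([5.0, -3.0])
    pixels = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
    ground = pixels @ A.T + t    # corrected coordinates
    print(ground)
    ```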

  14. Advanced Structural Optimization Under Consideration of Cost Tracking

    NASA Astrophysics Data System (ADS)

    Zell, D.; Link, T.; Bickelmaier, S.; Albinger, J.; Weikert, S.; Cremaschi, F.; Wiegand, A.

    2014-06-01

    In order to improve the design process of launcher configurations in the early development phase, the software Multidisciplinary Optimization (MDO) was developed. The tool combines different efficient software tools, such as Optimal Design Investigations (ODIN) for structural optimization and the Aerospace Trajectory Optimization Software (ASTOS) for trajectory and vehicle design optimization, for a defined payload and mission. The present paper focuses on the integration and validation of ODIN. ODIN enables the user to optimize typical axis-symmetric structures by sizing the stiffening designs with respect to strength and stability while minimizing the structural mass. In addition, a fully automatic finite element model (FEM) generator module creates ready-to-run FEM models of a complete stage or launcher assembly. Cost tracking and future improvements concerning cost optimization are indicated.

  15. Predicting shrinkage and warpage in injection molding: Towards automatized mold design

    NASA Astrophysics Data System (ADS)

    Zwicke, Florian; Behr, Marek; Elgeti, Stefanie

    2017-10-01

    It is an inevitable part of any plastics molding process that the material undergoes some shrinkage during solidification. Mainly due to unavoidable inhomogeneities in the cooling process, the overall shrinkage cannot be assumed to be homogeneous in all volumetric directions. The direct consequence is warpage. The accurate prediction of such shrinkage and warpage effects has been the subject of a considerable amount of research, but it is important to note that this behavior depends greatly on the type of material that is used as well as on the process details. Without limiting ourselves to any specific properties of certain materials or process designs, we aim to develop a method for the automatized design of a mold cavity that will produce correctly shaped moldings after solidification. Essentially, this can be stated as a shape optimization problem, where the cavity shape is optimized to fulfill some objective function that measures defects in the molding shape. In order to be able to develop and evaluate such a method, we first require simulation methods for the different steps involved in the injection molding process that can represent the phenomena responsible for shrinkage and warpage in a sufficiently accurate manner. As a starting point, we consider the solidification of purely amorphous materials. In this case, the material slowly transitions from fluid-like to solid-like behavior as it cools down. This behavior is modeled using adjusted viscoelastic material models. Once the material has passed a certain temperature threshold during cooling, any viscous effects are neglected and the behavior is assumed to be fully elastic. Non-linear elastic laws are used to predict the shrinkage and warpage that occur after this point. We present the current state of these simulation methods and show some first approaches towards optimizing the mold cavity shape based on these methods.
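
    One common way to realize such a shape optimization is iterative pre-compensation: push the cavity geometry opposite the predicted shape error until the simulated molding matches the target. The sketch below is a toy version of that loop under a hypothetical solidify() stand-in for the solidification simulation; it is not the authors' method.

    ```python
    import numpy as np

    def solidify(cavity):
        # Hypothetical stand-in for the solidification simulation: uniform
        # shrinkage plus a small asymmetric "warp" term.
        return 0.97 * cavity + 0.01 * np.roll(cavity, 1)

    def optimize_cavity(target, n_iter=25, relax=0.8):
        cavity = target.copy()
        for _ in range(n_iter):
            molded = solidify(cavity)
            cavity = cavity + relax * (target - molded)  # push opposite the defect
        return cavity

    target = np.array([1.0, 1.0, 1.2, 1.4, 1.2, 1.0, 1.0])  # desired profile
    cavity = optimize_cavity(target)
    print(abs(solidify(cavity) - target).max())              # residual shape error
    ```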

  16. Aerial applications dispersal systems control requirements study. [agriculture

    NASA Technical Reports Server (NTRS)

    Bauchspies, J. S.; Cleary, W. L.; Rogers, W. F.; Simpson, W.; Sanders, G. S.

    1980-01-01

    Performance deficiencies in aerial liquid and dry dispersal systems are identified. Five control system concepts are explored: (1) end of field on/off control; (2) manual control of particle size and application rate from the aircraft; (3) manual control of deposit rate on the field; (4) automatic alarm and shut-off control; and (5) fully automatic control. Operational aspects of the concepts and specifications for improved control configurations are discussed in detail. A research plan to provide the technology needed to develop the proposed improvements is presented along with a flight program to verify the benefits achieved.

  17. The design of digital-adaptive controllers for VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Stengel, R. F.; Broussard, J. R.; Berry, P. W.

    1976-01-01

    Design procedures for VTOL automatic control systems have been developed and are presented. Using linear-optimal estimation and control techniques as a starting point, digital-adaptive control laws have been designed for the VALT Research Aircraft, a tandem-rotor helicopter which is equipped for fully automatic flight in terminal area operations. These control laws are designed to interface with velocity-command and attitude-command guidance logic, which could be used in short-haul VTOL operations. Developments reported here include new algorithms for designing non-zero-set-point digital regulators, design procedures for rate-limited systems, and algorithms for dynamic control trim setting.
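
    As a hedged illustration of the linear-optimal starting point mentioned above, the sketch below iterates the discrete-time Riccati difference equation to a steady-state state-feedback gain for a toy two-state system. The matrices are illustrative, not the VALT aircraft model.

    ```python
    import numpy as np

    # Toy two-state system (double integrator, 0.1 s sample time).
    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])
    B = np.array([[0.0],
                  [0.1]])
    Q = np.eye(2)            # state weighting
    R = np.array([[0.5]])    # control weighting
    P = Q.copy()
    for _ in range(500):     # iterate the Riccati difference equation
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    print("steady-state feedback gain K:", K)
    ```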

  18. Job expansion : an additional benefit of a computer aided dispatch/automatic vehicle locator (CAD/AVL) system

    DOT National Transportation Integrated Search

    2000-03-01

    The Denver Regional Transportation District (RTD) acquired a CAD/AVL system that became fully operational in 1996. The CAD/AVL system added radio channels and covert alarms in buses, located vehicles in real time, and monitored schedule adherence. Th...

  19. Applying Terrain and Hydrological Editing to Tandem-X Data to Create a Consumer-Ready Worlddem Product

    NASA Astrophysics Data System (ADS)

    Collins, J.; Riegler, G.; Schrader, H.; Tinz, M.

    2015-04-01

    The Geo-Intelligence division of Airbus Defence and Space and the German Aerospace Center (DLR) have partnered to produce the first fully global, high-accuracy Digital Surface Model (DSM) using SAR data from the twin satellite constellation TerraSAR-X and TanDEM-X. The DLR is responsible for the processing and distribution of the TanDEM-X elevation model for the world's scientific community, while Airbus DS is responsible for the commercial production and distribution of the data, under the brand name WorldDEM. For the provision of a consumer-ready product, Airbus DS undertakes several steps to reduce the effect of radar-specific artifacts in the WorldDEM data. These artifacts can be divided into two categories: terrain and hydrological. Airbus DS has developed proprietary software and processes to detect and correct these artifacts in the most efficient manner. Some processes are fully automatic, while others require manual or semi-automatic control by operators.

  20. Conceptual design of the CZMIL data processing system (DPS): algorithms and software for fusing lidar, hyperspectral data, and digital images

    NASA Astrophysics Data System (ADS)

    Park, Joong Yong; Tuell, Grady

    2010-04-01

    The Data Processing System (DPS) of the Coastal Zone Mapping and Imaging Lidar (CZMIL) has been designed to automatically produce a number of novel environmental products through the fusion of Lidar, spectrometer, and camera data in a single software package. These new products significantly transcend use of the system as a bathymeter, and support use of CZMIL as a complete coastal and benthic mapping tool. The DPS provides a spinning globe capability for accessing data files; automated generation of combined topographic and bathymetric point clouds; a fully-integrated manual editor and data analysis tool; automated generation of orthophoto mosaics; automated generation of reflectance data cubes from the imaging spectrometer; a coupled air-ocean spectral optimization model producing images of chlorophyll and CDOM concentrations; and a fusion based capability to produce images and classifications of the shallow water seafloor. Adopting a multitasking approach, we expect to achieve computation of the point clouds, DEMs, and reflectance images at a 1:1 processing to acquisition ratio.

  1. Semi-automatic brain tumor segmentation by constrained MRFs using structural trajectories.

    PubMed

    Zhao, Liang; Wu, Wei; Corso, Jason J

    2013-01-01

    Quantifying the volume and growth of a brain tumor is a primary prognostic measure and hence has received much attention in the medical imaging community. Most methods have sought a fully automatic segmentation, but the variability in shape and appearance of brain tumors has limited their success and further adoption in the clinic. In reaction, we present a semi-automatic brain tumor segmentation framework for multi-channel magnetic resonance (MR) images. This framework does not require prior model construction and only requires manual labels on one automatically selected slice. All other slices are labeled by an iterative multi-label Markov random field optimization with hard constraints. Structural trajectories (the medical image analog to optical flow) and 3D image over-segmentation are used to capture pixel correspondences between consecutive slices for pixel labeling. We show robustness and effectiveness through an evaluation on the 2012 MICCAI BRATS Challenge Dataset; our results indicate superior performance to baselines and demonstrate the utility of the constrained MRF formulation.

  2. A new method for the automatic interpretation of Schlumberger and Wenner sounding curves

    USGS Publications Warehouse

    Zohdy, A.A.R.

    1989-01-01

    A fast iterative method for the automatic interpretation of Schlumberger and Wenner sounding curves is based on obtaining interpreted depths and resistivities from shifted electrode spacings and adjusted apparent resistivities, respectively. The method is fully automatic. It does not require an initial guess of the number of layers, their thicknesses, or their resistivities; and it does not require extrapolation of incomplete sounding curves. The number of layers in the interpreted model equals the number of digitized points on the sounding curve. The resulting multilayer model is always well-behaved with no thin layers of unusually high or unusually low resistivities. For noisy data, interpretation is done in two sets of iterations (two passes). Anomalous layers, created because of noise in the first pass, are eliminated in the second pass. Such layers are eliminated by considering the best-fitting curve from the first pass to be a smoothed version of the observed curve and automatically reinterpreting it (second pass). The application of the method is illustrated by several examples. -Author
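
    A heavily simplified sketch of such a multiplicative iteration: one layer per digitized point, each layer corrected by the ratio of observed to computed apparent resistivity. The forward() below is a crude smoothing stand-in for shape only, not a real 1-D Schlumberger forward model.

    ```python
    import numpy as np

    def forward(rho):
        # Crude smoothing stand-in for a real 1-D Schlumberger forward model.
        return np.convolve(rho, [0.25, 0.5, 0.25], mode="same")

    def invert(rho_apparent, n_iter=30):
        rho = rho_apparent.copy()                # one layer per digitized point
        for _ in range(n_iter):
            rho *= rho_apparent / forward(rho)   # multiplicative correction
        return rho

    rho_app = np.array([50.0, 60.0, 90.0, 140.0, 180.0, 170.0, 150.0])  # ohm-m
    print(invert(rho_app))
    ```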

  3. Automatic digital image analysis for identification of mitotic cells in synchronous mammalian cell cultures.

    PubMed

    Eccles, B A; Klevecz, R R

    1986-06-01

    Mitotic frequency in a synchronous culture of mammalian cells was determined fully automatically and in real time using low-intensity phase-contrast microscopy and a Newvicon video camera connected to an EyeCom III image processor. Image samples, at a frequency of one per minute for 50 hours, were analyzed by first extracting the high-frequency picture components, then thresholding and probing for annular objects indicative of putative mitotic cells. Both the extraction of high-frequency components and the recognition of rings of varying radii and discontinuities employed novel algorithms. Spatial and temporal relationships between annuli were examined to discern the occurrences of mitoses, and such events were recorded in a computer data file. At present, the automatic analysis is suited for random cell proliferation rate measurements or cell cycle studies. The automatic identification of mitotic cells as described here provides a measure of the average proliferative activity of the cell population as a whole and eliminates more than eight hours of manual review per time-lapse video recording.

  4. Automatic estimation of extent of resection and residual tumor volume of patients with glioblastoma.

    PubMed

    Meier, Raphael; Porz, Nicole; Knecht, Urspeter; Loosli, Tina; Schucht, Philippe; Beck, Jürgen; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio

    2017-10-01

    OBJECTIVE In the treatment of glioblastoma, residual tumor burden is the only prognostic factor that can be actively influenced by therapy. Therefore, an accurate, reproducible, and objective measurement of residual tumor burden is necessary. This study aimed to evaluate the use of a fully automatic segmentation method, brain tumor image analysis (BraTumIA), for estimating the extent of resection (EOR) and residual tumor volume (RTV) of contrast-enhancing tumor after surgery. METHODS The imaging data of 19 patients who underwent primary resection of histologically confirmed supratentorial glioblastoma were retrospectively reviewed. Contrast-enhancing tumors apparent on structural preoperative and immediate postoperative MR imaging in this patient cohort were segmented by 4 different raters and by the automatic BraTumIA software. The manual and automatic results were quantitatively compared. RESULTS First, the interrater variabilities in the estimates of EOR and RTV were assessed for all human raters. Interrater agreement in terms of the coefficient of concordance (W) was higher for RTV (W = 0.812; p < 0.001) than for EOR (W = 0.775; p < 0.001). Second, the volumetric estimates of BraTumIA for all 19 patients were compared with the estimates of the human raters, which showed that for both EOR (W = 0.713; p < 0.001) and RTV (W = 0.693; p < 0.001) the estimates of BraTumIA were generally located close to or between the estimates of the human raters. No statistically significant differences were detected between the manual and automatic estimates. BraTumIA showed a tendency to overestimate contrast-enhancing tumors, leading to moderate agreement with expert raters with respect to the literature-based, survival-relevant threshold values for EOR. CONCLUSIONS BraTumIA can generate volumetric estimates of EOR and RTV, in a fully automatic fashion, that are comparable to the estimates of human experts. However, the automated analysis showed a tendency to overestimate the volume of a contrast-enhancing tumor, whereas manual analysis is prone to subjectivity, thereby causing considerable interrater variability.
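
    The two target quantities are simple functions of the segmented pre- and postoperative contrast-enhancing volumes, whichever method produces them; a minimal sketch with illustrative numbers:

    ```python
    # Illustrative pre- and postoperative contrast-enhancing volumes (cm^3).
    pre_volume, post_volume = 34.2, 1.6
    rtv = post_volume                                 # residual tumor volume
    eor = (pre_volume - post_volume) / pre_volume     # extent of resection
    print(f"RTV = {rtv:.1f} cm^3, EOR = {eor:.1%}")
    ```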

  5. SeisComP 3 - Where are we now?

    NASA Astrophysics Data System (ADS)

    Saul, Joachim; Becker, Jan; Hanka, Winfried; Heinloo, Andres; Weber, Bernd

    2010-05-01

    The seismological software SeisComP has evolved over approximately the last 10 years from pure acquisition modules to fully featured real-time earthquake monitoring software. The now very popular SeedLink protocol for seismic data transmission has been at the core of SeisComP from the very beginning. Later additions included simple, purely automatic event detection, location and magnitude determination capabilities. Especially within the development of the 3rd-generation SeisComP, also known as "SeisComP 3", automatic processing capabilities have been augmented by graphical user interfaces for visualization, rapid event review and quality control. Communication between the modules is achieved using a TCP/IP infrastructure that allows distributed computing and remote review. For seismological metadata exchange, export/import to/from QuakeML is available, which also provides a convenient interface with 3rd-party software. SeisComP is the primary seismological processing software at the GFZ Potsdam. It has also been in use for years in numerous seismic networks in Europe and, more recently, has been adopted as the primary monitoring software by several tsunami warning centers around the Indian Ocean. In our presentation we describe the current status of development as well as future plans. We illustrate its possibilities by discussing different use cases for global and regional real-time earthquake monitoring and tsunami warning.

  6. Automatic Figure Ranking and User Interfacing for Intelligent Figure Search

    PubMed Central

    Yu, Hong; Liu, Feifan; Ramesh, Balaji Polepalli

    2010-01-01

    Background Figures are important experimental results that are typically reported in full-text bioscience articles. Bioscience researchers need to access figures to validate research facts and to formulate or to test novel research hypotheses. On the other hand, the sheer volume of bioscience literature has made it difficult to access figures. Therefore, we are developing an intelligent figure search engine (http://figuresearch.askhermes.org). Existing research in figure search treats each figure equally, but we introduce a novel concept of "figure ranking": figures appearing in a full-text biomedical article can be ranked by their contribution to the knowledge discovery. Methodology/Findings We empirically validated the hypothesis of figure ranking with over 100 bioscience researchers, and then developed unsupervised natural language processing (NLP) approaches to automatically rank figures. Evaluated on a collection of 202 full-text articles in which authors have ranked the figures based on importance, our best system achieved a weighted error rate of 0.2, which is significantly better than several other baseline systems we explored. We further explored a user-interfacing application in which we built novel user interfaces (UIs) incorporating figure ranking, allowing bioscience researchers to efficiently access important figures. Our evaluation results show that 92% of the bioscience researchers preferred, as their top two choices, the user interfaces in which the most important figures are enlarged. With our automatic figure ranking NLP system, bioscience researchers preferred the UIs in which the most important figures were predicted by our NLP system over the UIs in which the most important figures were randomly assigned. In addition, our results show that there was no statistical difference in bioscience researchers' preference between the UIs generated by automatic figure ranking and the UIs based on human ranking annotation. Conclusion/Significance The evaluation results indicate that automatic figure ranking and user interfacing as reported in this study can be fully implemented in online publishing. The novel user interface integrated with the automatic figure ranking system provides a more efficient and robust way to access scientific information in the biomedical domain, which will further enhance our existing figure search engine to better facilitate access to figures of interest for bioscientists. PMID:20949102

  7. On the meaning of meaning when being mean: commentary on Berkowitz's "on the consideration of automatic as well as controlled psychological processes in aggression".

    PubMed

    Dodge, Kenneth A

    2008-01-01

    Berkowitz (this issue) makes a cogent case for his cognitive neo-associationist (CNA) model that some aggressive behaviors occur automatically, emotionally, and through conditioned association with other stimuli. He also proposes that they can occur without "processing," that is, without meaning. He contrasts his position with that of social information processing (SIP) models, which he casts as positing only controlled processing mechanisms for aggressive behavior. However, both CNA and SIP models posit automatic as well as controlled processes in aggressive behavior. Most aggressive behaviors occur through automatic processes, which are nonetheless rule governed. SIP models differ from the CNA model in asserting the essential role of meaning (often through nonconscious, automatic, and emotional processes) in mediating the link between a stimulus and an angry aggressive behavioral response. Copyright 2008 Wiley-Liss, Inc.

  8. Automaticity of higher cognitive functions: neurophysiological evidence for unconscious syntactic processing of masked words.

    PubMed

    Jiménez-Ortega, Laura; García-Milla, Marcos; Fondevila, Sabela; Casado, Pilar; Hernández-Gutiérrez, David; Martín-Loeches, Manuel

    2014-12-01

    Models of language comprehension assume that syntactic processing is automatic, at least at early stages. However, the degree of automaticity of syntactic processing is still controversial. Evidence of automaticity is either indirect or has been observed for pairs of words, which might provide a poor syntactic context in comparison to sentences. The present study investigates the automaticity of syntactic processing using event-related brain potentials (ERPs) during sentence processing. To this end, masked adjectives that could either be syntactically correct or incorrect relative to a sentence being processed appeared just prior to the presentation of supraliminal adjectives. The latter could also be correct or incorrect. According to our data, subliminal gender agreement violations embedded in a sentence trigger an early anterior negativity-like modulation, whereas supraliminal gender agreement violations elicited a later anterior negativity. First-pass syntactic parsing thus appears to be unconsciously and automatically elicited. Interestingly, a P600-like modulation of short duration and early latency could also be observed for masked violations. In addition, masked violations also modulated the P600 component elicited by unmasked targets, probably reflecting that the mechanisms of revising a structural mismatch appear affected by subliminal information. According to our findings, both conscious and unconscious processes apparently contribute to syntactic processing. These results are discussed in line with most recent theories of automaticity and syntactic processing. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Chemical etching for automatic processing of integrated circuits

    NASA Technical Reports Server (NTRS)

    Kennedy, B. W.

    1981-01-01

    Chemical etching for automatic processing of integrated circuits is discussed. The wafer carrier, its loading from a receiving air track into automatic furnaces, and its unloading onto a sending air track are included.

  10. Computer-aided liver volumetry: performance of a fully-automated, prototype post-processing solution for whole-organ and lobar segmentation based on MDCT imaging.

    PubMed

    Fananapazir, Ghaneh; Bashir, Mustafa R; Marin, Daniele; Boll, Daniel T

    2015-06-01

    To evaluate the performance of a prototype, fully automated post-processing solution for whole-liver and lobar segmentation based on MDCT datasets. A polymer liver phantom was used to assess the accuracy of the post-processing applications by comparing phantom volumes determined via Archimedes' principle with MDCT segmented datasets. For the IRB-approved, HIPAA-compliant study, 25 patients were enrolled. Volumetry performance compared the manual approach with the automated prototype, assessing intraobserver variability and interclass correlation for whole-organ and lobar segmentation using ANOVA comparison. Fidelity of segmentation was evaluated qualitatively. Phantom volume was 1581.0 ± 44.7 mL; manually segmented datasets estimated 1628.0 ± 47.8 mL, representing a mean overestimation of 3.0%, while automatically segmented datasets estimated 1601.9 ± 0 mL, representing a mean overestimation of 1.3%. Whole-liver and segmental volumetry demonstrated no significant intraobserver variability for either manual or automated measurements. For whole-liver volumetry, automated measurement repetitions resulted in identical values; reproducible whole-organ volumetry was also achieved with manual segmentation, p(ANOVA) 0.98. For lobar volumetry, automated segmentation improved reproducibility over the manual approach, without significant measurement differences for either methodology, p(ANOVA) 0.95-0.99. Whole-organ and lobar segmentation results from manual and automated segmentation showed no significant differences, p(ANOVA) 0.96-1.00. Assessment of segmentation fidelity found that segments I-IV/VI showed greater segmentation inaccuracies compared to the remaining right hepatic lobe segments. Fully automated whole-liver segmentation showed non-inferiority compared to manual approaches, with improved reproducibility and post-processing duration; automated dual-seed lobar segmentation showed slight tendencies to underestimate the right hepatic lobe volume and greater variability in edge detection for the left hepatic lobe compared to manual segmentation.

  11. Negative Life Events and Antenatal Depression among Pregnant Women in Rural China: The Role of Negative Automatic Thoughts.

    PubMed

    Wang, Yang; Wang, Xiaohua; Liu, Fangnan; Jiang, Xiaoning; Xiao, Yun; Dong, Xuehan; Kong, Xianglei; Yang, Xuemei; Tian, Donghua; Qu, Zhiyong

    2016-01-01

    Few studies have looked at the relationship between psychological factors and the mental health status of pregnant women in rural China. The current study aims to explore the potential mediating effect of negative automatic thoughts between negative life events and antenatal depression. Data were collected in June 2012 and October 2012; 495 rural pregnant women were interviewed. Depressive symptoms were measured by the Edinburgh postnatal depression scale, stresses of pregnancy were measured by the pregnancy pressure scale, negative automatic thoughts were measured by the automatic thoughts questionnaire, and negative life events were measured by the life events scale for pregnant women. We used logistic regression and path analysis to test the mediating effect. The prevalence of antenatal depression was 13.7%. In the logistic regression, the only socio-demographic and health behavior factor significantly related to antenatal depression was sleep quality. Negative life events were not associated with depression in the fully adjusted model. Path analysis showed that the eventual direct and general effects of negative automatic thoughts were 0.39 and 0.51, which were larger than the effects of negative life events. This study suggests a potentially significant mediating effect of negative automatic thoughts: pregnant women with lower scores for negative automatic thoughts appeared to suffer less from the negative life events that might lead to antenatal depression.

  12. The Interplay between Automatic and Control Processes in Reading.

    ERIC Educational Resources Information Center

    Walczyk, Jeffrey J.

    2000-01-01

    Reviews prominent reading theories in light of their accounts of how automatic and control processes combine to produce successful text comprehension, and the trade-offs between the two. Presents the Compensatory-Encoding Model of reading, which explicates how, when, and why automatic and control processes interact. Notes important educational…

  13. Automated Signal Processing Applied to Volatile-Based Inspection of Greenhouse Crops

    PubMed Central

    Jansen, Roel; Hofstee, Jan Willem; Bouwmeester, Harro; van Henten, Eldert

    2010-01-01

    Gas chromatograph–mass spectrometers (GC-MS) have been used and shown utility for volatile-based inspection of greenhouse crops. However, a widely recognized difficulty associated with GC-MS application is the large and complex data generated by this instrument. As a consequence, experienced analysts are often required to process this data in order to determine the concentrations of the volatile organic compounds (VOCs) of interest. Manual processing is time-consuming, labour intensive and may be subject to errors due to fatigue. The objective of this study was to assess whether or not GC-MS data can also be automatically processed in order to determine the concentrations of crop health associated VOCs in a greenhouse. An experimental dataset that consisted of twelve data files was processed both manually and automatically to address this question. Manual processing was based on simple peak integration while the automatic processing relied on the algorithms implemented in the MetAlign™ software package. The results of automatic processing of the experimental dataset resulted in concentrations similar to that after manual processing. These results demonstrate that GC-MS data can be automatically processed in order to accurately determine the concentrations of crop health associated VOCs in a greenhouse. When processing GC-MS data automatically, noise reduction, alignment, baseline correction and normalisation are required. PMID:22163594
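
    A minimal sketch of the "simple peak integration" used as the manual reference: subtract a linear baseline across the peak window and integrate numerically. The synthetic chromatogram below is illustrative, not GC-MS output.

    ```python
    import numpy as np

    # Synthetic chromatogram: flat baseline plus one Gaussian peak at 5 min.
    time = np.linspace(0, 10, 501)                        # retention time (min)
    signal = 50 + 1000 * np.exp(-((time - 5) / 0.2)**2)   # detector counts
    lo, hi = np.searchsorted(time, (4.0, 6.0))            # peak window indices
    baseline = np.linspace(signal[lo], signal[hi], hi - lo)
    y = signal[lo:hi] - baseline                          # baseline-corrected
    area = np.sum((y[:-1] + y[1:]) / 2 * np.diff(time[lo:hi]))  # trapezoid rule
    print(f"peak area = {area:.1f} counts*min")
    ```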

  14. Point Cloud Generation from Aerial Image Data Acquired by a Quadrocopter Type Micro Unmanned Aerial Vehicle and a Digital Still Camera

    PubMed Central

    Rosnell, Tomi; Honkavaara, Eija

    2012-01-01

    The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on BAE Systems’ SOCET SET classical commercial photogrammetric software and another is built using Microsoft®’s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but some artifacts were also detected. The point clouds from the Photosynth processing were sparser and noisier, to a large extent because the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and show that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for properties of the imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479

  15. Point cloud generation from aerial image data acquired by a quadrocopter type micro unmanned aerial vehicle and a digital still camera.

    PubMed

    Rosnell, Tomi; Honkavaara, Eija

    2012-01-01

    The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on BAE Systems' SOCET SET classical commercial photogrammetric software and another is built using Microsoft(®)'s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but some artifacts were also detected. The point clouds from the Photosynth processing were sparser and noisier, to a large extent because the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and show that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for properties of the imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation.

  16. Fully automatic characterization and data collection from crystals of biological macromolecules.

    PubMed

    Svensson, Olof; Malbet-Monaco, Stéphanie; Popov, Alexander; Nurizzo, Didier; Bowler, Matthew W

    2015-08-01

    Considerable effort is dedicated to evaluating macromolecular crystals at synchrotron sources, even for well established and robust systems. Much of this work is repetitive, and the time spent could be better invested in the interpretation of the results. In order to decrease the need for manual intervention in the most repetitive steps of structural biology projects, initial screening and data collection, a fully automatic system has been developed to mount, locate, centre to the optimal diffraction volume, characterize and, if possible, collect data from multiple cryocooled crystals. Using the capabilities of pixel-array detectors, the system is as fast as a human operator, taking an average of 6 min per sample depending on the sample size and the level of characterization required. Using a fast X-ray-based routine, samples are located and centred systematically at the position of highest diffraction signal and important parameters for sample characterization, such as flux, beam size and crystal volume, are automatically taken into account, ensuring the calculation of optimal data-collection strategies. The system is now in operation at the new ESRF beamline MASSIF-1 and has been used by both industrial and academic users for many different sample types, including crystals of less than 20 µm in the smallest dimension. To date, over 8000 samples have been evaluated on MASSIF-1 without any human intervention.
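
    A minimal sketch of the X-ray-based centring idea, assuming a mesh scan yields one diffraction-strength score per grid position: centre the sample on the maximum. The scores below are random placeholders, not beamline data.

    ```python
    import numpy as np

    # One diffraction-strength score per mesh position (placeholder values).
    scores = np.random.default_rng(1).random((20, 20))
    iy, ix = np.unravel_index(np.argmax(scores), scores.shape)
    print(f"centre sample at grid position ({ix}, {iy})")
    ```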

  17. Clinical Evaluation of a Fully-automatic Segmentation Method for Longitudinal Brain Tumor Volumetry

    NASA Astrophysics Data System (ADS)

    Meier, Raphael; Knecht, Urspeter; Loosli, Tina; Bauer, Stefan; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio

    2016-03-01

    Information about the size of a tumor and its temporal evolution is needed for diagnosis as well as treatment of brain tumor patients. The aim of the study was to investigate the potential of a fully-automatic segmentation method, called BraTumIA, for longitudinal brain tumor volumetry by comparing the automatically estimated volumes with ground truth data acquired via manual segmentation. Longitudinal Magnetic Resonance (MR) Imaging data of 14 patients with newly diagnosed glioblastoma encompassing 64 MR acquisitions, ranging from preoperative up to 12 month follow-up images, was analysed. Manual segmentation was performed by two human raters. Strong correlations (R = 0.83-0.96, p < 0.001) were observed between volumetric estimates of BraTumIA and of each of the human raters for the contrast-enhancing (CET) and non-enhancing T2-hyperintense tumor compartments (NCE-T2). A quantitative analysis of the inter-rater disagreement showed that the disagreement between BraTumIA and each of the human raters was comparable to the disagreement between the human raters. In summary, BraTumIA generated volumetric trend curves of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments comparable to estimates of human raters. These findings suggest the potential of automated longitudinal tumor segmentation to substitute manual volumetric follow-up of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments.

  18. Clinical Evaluation of a Fully-automatic Segmentation Method for Longitudinal Brain Tumor Volumetry.

    PubMed

    Meier, Raphael; Knecht, Urspeter; Loosli, Tina; Bauer, Stefan; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio

    2016-03-22

    Information about the size of a tumor and its temporal evolution is needed for diagnosis as well as treatment of brain tumor patients. The aim of the study was to investigate the potential of a fully-automatic segmentation method, called BraTumIA, for longitudinal brain tumor volumetry by comparing the automatically estimated volumes with ground truth data acquired via manual segmentation. Longitudinal Magnetic Resonance (MR) Imaging data of 14 patients with newly diagnosed glioblastoma encompassing 64 MR acquisitions, ranging from preoperative up to 12 month follow-up images, was analysed. Manual segmentation was performed by two human raters. Strong correlations (R = 0.83-0.96, p < 0.001) were observed between volumetric estimates of BraTumIA and of each of the human raters for the contrast-enhancing (CET) and non-enhancing T2-hyperintense tumor compartments (NCE-T2). A quantitative analysis of the inter-rater disagreement showed that the disagreement between BraTumIA and each of the human raters was comparable to the disagreement between the human raters. In summary, BraTumIA generated volumetric trend curves of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments comparable to estimates of human raters. These findings suggest the potential of automated longitudinal tumor segmentation to substitute manual volumetric follow-up of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments.

  19. Accurate and Fully Automatic Hippocampus Segmentation Using Subject-Specific 3D Optimal Local Maps Into a Hybrid Active Contour Model

    PubMed Central

    Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos

    2014-01-01

    Assessing the structural integrity of the hippocampus (HC) is an essential step toward prevention, diagnosis, and follow-up of various brain disorders due to the implication of the structural changes of the HC in those disorders. In this respect, the development of automatic segmentation methods that can accurately, reliably, and reproducibly segment the HC has attracted considerable attention over the past decades. This paper presents an innovative 3-D fully automatic method to be used on top of the multiatlas concept for the HC segmentation. The method is based on a subject-specific set of 3-D optimal local maps (OLMs) that locally control the influence of each energy term of a hybrid active contour model (ACM). The complete set of the OLMs for a set of training images is defined simultaneously via an optimization scheme. At the same time, the optimal ACM parameters are also calculated. Therefore, heuristic parameter fine-tuning is not required. Training OLMs are subsequently combined, by applying an extended multiatlas concept, to produce the OLMs that are anatomically more suitable to the test image. The proposed algorithm was tested on three different and publicly available data sets. Its accuracy was compared with that of state-of-the-art methods demonstrating the efficacy and robustness of the proposed method. PMID:27170866

  20. Clinical Evaluation of a Fully-automatic Segmentation Method for Longitudinal Brain Tumor Volumetry

    PubMed Central

    Meier, Raphael; Knecht, Urspeter; Loosli, Tina; Bauer, Stefan; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio

    2016-01-01

    Information about the size of a tumor and its temporal evolution is needed for diagnosis as well as treatment of brain tumor patients. The aim of the study was to investigate the potential of a fully-automatic segmentation method, called BraTumIA, for longitudinal brain tumor volumetry by comparing the automatically estimated volumes with ground truth data acquired via manual segmentation. Longitudinal Magnetic Resonance (MR) Imaging data of 14 patients with newly diagnosed glioblastoma encompassing 64 MR acquisitions, ranging from preoperative up to 12 month follow-up images, was analysed. Manual segmentation was performed by two human raters. Strong correlations (R = 0.83–0.96, p < 0.001) were observed between volumetric estimates of BraTumIA and of each of the human raters for the contrast-enhancing (CET) and non-enhancing T2-hyperintense tumor compartments (NCE-T2). A quantitative analysis of the inter-rater disagreement showed that the disagreement between BraTumIA and each of the human raters was comparable to the disagreement between the human raters. In summary, BraTumIA generated volumetric trend curves of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments comparable to estimates of human raters. These findings suggest the potential of automated longitudinal tumor segmentation to substitute manual volumetric follow-up of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments. PMID:27001047

  1. Using Machine Learning to Increase Research Efficiency: A New Approach in Environmental Sciences

    USDA-ARS?s Scientific Manuscript database

    Data collection has evolved from tedious in-person fieldwork to automatic data gathering from multiple sensors remotely. Scientists in environmental sciences have not fully exploited this data deluge, including legacy and new data, because the traditional scientific method is focused on small, high qu...

  2. The TREC Interactive Track: An Annotated Bibliography.

    ERIC Educational Resources Information Center

    Over, Paul

    2001-01-01

    Discussion of the study of interactive information retrieval (IR) at the Text Retrieval Conferences (TREC) focuses on summaries of the Interactive Track at each conference. Describes evolution of the track, which has changed from comparing human-machine systems with fully automatic systems to comparing interactive systems that focus on the search…

  3. 46 CFR 112.39-1 - General.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Battery Operated Lanterns § 112.39-1 General. (a) Each battery-operated, relay-controlled lantern used in accordance with Table 112.05-5(a) must: (1) Have rechargeable batteries; (2) Have an automatic battery charger that maintains the battery in a fully charged condition; and (3) Not be readily portable. [CGD 74...

  4. 75 FR 35447 - Buy American Exception Under the American Recovery and Reinvestment Act of 2009; Nationwide...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-22

    ... Reinvestment and Recovery Act of 2009 (Recovery Act) to EERE-funded projects for non-residential programmable... residential programmable thermostats; commercial scale fully-automatic wood pellet boiler systems...) Programmable Thermostats--Includes devices that permit adjustment of heating or air-conditioning operations...

  5. 46 CFR 112.39-1 - General.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Battery Operated Lanterns § 112.39-1 General. (a) Each battery-operated, relay-controlled lantern used in accordance with Table 112.05-5(a) must: (1) Have rechargeable batteries; (2) Have an automatic battery charger that maintains the battery in a fully charged condition; and (3) Not be readily portable. [CGD 74...

  6. 46 CFR 112.39-1 - General.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Battery Operated Lanterns § 112.39-1 General. (a) Each battery-operated, relay-controlled lantern used in accordance with Table 112.05-5(a) must: (1) Have rechargeable batteries; (2) Have an automatic battery charger that maintains the battery in a fully charged condition; and (3) Not be readily portable. [CGD 74...

  7. 46 CFR 112.39-1 - General.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Battery Operated Lanterns § 112.39-1 General. (a) Each battery-operated, relay-controlled lantern used in accordance with Table 112.05-5(a) must: (1) Have rechargeable batteries; (2) Have an automatic battery charger that maintains the battery in a fully charged condition; and (3) Not be readily portable. [CGD 74...

  8. 46 CFR 112.39-1 - General.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Battery Operated Lanterns § 112.39-1 General. (a) Each battery-operated, relay-controlled lantern used in accordance with Table 112.05-5(a) must: (1) Have rechargeable batteries; (2) Have an automatic battery charger that maintains the battery in a fully charged condition; and (3) Not be readily portable. [CGD 74...

  9. Assessment of Automatically Exported Clinical Data from a Hospital Information System for Clinical Research in Multiple Myeloma.

    PubMed

    Torres, Viviana; Cerda, Mauricio; Knaup, Petra; Löpprich, Martin

    2016-01-01

    An important part of the electronic information available in a Hospital Information System (HIS) has the potential to be automatically exported to Electronic Data Capture (EDC) platforms to improve clinical research. This automation has the advantage of reducing manual data transcription, a time-consuming and error-prone process. However, quantitative evaluations of the process of exporting data from a HIS to an EDC system have not been reported extensively, in particular in comparison with manual transcription. In this work, an assessment of the quality of an automatic export process, focused on laboratory data from a HIS, is presented. Quality of the laboratory data was assessed for two types of processes: (1) a manual process of data transcription, and (2) an automatic process of data transference. The automatic transference was implemented as an Extract, Transform and Load (ETL) process. Then, a comparison was carried out between the manual and automatic data collection methods. The criteria used to measure data quality were correctness and completeness. The manual process had a general error rate of 2.6% to 7.1%, with the lowest error rate obtained when data fields without a clear definition were removed from the analysis (p < 10E-3). For the automatic process, the general error rate was 1.9% to 12.1%, with the lowest error rate obtained when excluding information missing in the HIS but transcribed to the EDC from other physical sources. The automatic ETL process can be used to collect laboratory data for clinical research if the data in the HIS, as well as physical documentation not included in the HIS, are identified beforehand and follow a standardized data collection protocol.
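
    A minimal sketch of the correctness/completeness check, assuming transferred records and a manually verified reference can be compared field by field; the field names are hypothetical, not the study's data dictionary.

    ```python
    # Compare transferred field values against a manually verified reference.
    def error_rate(transferred: dict, reference: dict) -> float:
        wrong = sum(1 for k, v in reference.items() if transferred.get(k) != v)
        return wrong / len(reference)   # counts incorrect and missing fields alike

    reference = {"leukocytes": 5.2, "hemoglobin": 13.1, "creatinine": 0.9}
    transferred = {"leukocytes": 5.2, "hemoglobin": 13.1}   # one field missing
    print(f"{error_rate(transferred, reference):.1%}")       # 33.3%
    ```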

  10. Distributed pheromone-based swarming control of unmanned air and ground vehicles for RSTA

    NASA Astrophysics Data System (ADS)

    Sauter, John A.; Mathews, Robert S.; Yinger, Andrew; Robinson, Joshua S.; Moody, John; Riddle, Stephanie

    2008-04-01

    The use of unmanned vehicles in Reconnaissance, Surveillance, and Target Acquisition (RSTA) applications has received considerable attention recently. Cooperating land and air vehicles can support multiple sensor modalities, providing pervasive and ubiquitous broad-area sensor coverage. However, coordination of multiple air and land vehicles serving different mission objectives in a dynamic and complex environment is a challenging problem. Swarm intelligence algorithms, inspired by the mechanisms that natural systems use to coordinate the activities of many entities, provide a promising alternative to traditional command and control approaches. This paper describes recent advances in a fully distributed digital pheromone algorithm that has demonstrated its effectiveness in managing the complexity of swarming unmanned systems. The results of a recent demonstration at NASA's Wallops Island of multiple Aerosonde Unmanned Air Vehicles (UAVs) and Pioneer Unmanned Ground Vehicles (UGVs) cooperating in a coordinated RSTA application are discussed. The vehicles were autonomously controlled by the onboard digital pheromone algorithm, responding to the needs of the automatic target recognition algorithms. UAVs and UGVs controlled by the same pheromone algorithm self-organized to perform total area surveillance, automatic target detection, sensor cueing, and automatic target recognition with no central processing or control and minimal operator input. Complete autonomy adds several safety and fault-tolerance requirements, which were integrated into the basic pheromone framework. The adaptive algorithms demonstrated the ability to handle some unplanned hardware failures during the demonstration without any human intervention. The paper describes lessons learned and the next steps for this promising technology.
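
    A toy version of the digital-pheromone idea described above: vehicles deposit and consume attraction on a shared grid field that evaporates and diffuses each step, and each vehicle greedily climbs the local gradient. The grid size and rates are illustrative assumptions, not the demonstrated system's parameters.

```python
# Toy digital-pheromone coordination: a shared field marks cells needing
# surveillance; it evaporates and diffuses each step, and vehicles zero
# the cell they visit, then climb the local gradient.
import numpy as np

SIZE, EVAP, DIFFUSE = 32, 0.05, 0.1
field = np.ones((SIZE, SIZE))                # "needs surveillance" pheromone
rng = np.random.default_rng(0)
vehicles = [tuple(rng.integers(0, SIZE, 2)) for _ in range(4)]

for _ in range(200):
    # Evaporate, then diffuse to the four neighbours (toroidal wrap).
    spread = (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
              np.roll(field, 1, 1) + np.roll(field, -1, 1)) / 4
    field = (1 - EVAP) * ((1 - DIFFUSE) * field + DIFFUSE * spread)
    moved = []
    for r, c in vehicles:
        field[r, c] = 0.0                    # surveyed: remove attraction
        nbrs = [((r + dr) % SIZE, (c + dc) % SIZE)
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))]
        moved.append(max(nbrs, key=lambda p: field[p]))  # climb gradient
    vehicles = moved

print("uncovered pheromone remaining:", round(float(field.sum()), 3))
```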

  11. GISentinel: a software platform for automatic ulcer detection on capsule endoscopy videos

    NASA Astrophysics Data System (ADS)

    Yi, Steven; Jiao, Heng; Meng, Fan; Leighton, Jonathon A.; Shabana, Pasha; Rentz, Lauri

    2014-03-01

    In this paper, we present a novel and clinically valuable software platform for automatic ulcer detection in the gastrointestinal (GI) tract from Capsule Endoscopy (CE) videos. A typical CE video spans about 8 hours and must be reviewed manually by physicians to detect and locate diseases such as ulcers and bleeding. The process is time consuming, and the lengthy manual review makes it easy to miss findings. Working with our collaborators, we focused on developing a software platform called GISentinel that fully automates GI tract ulcer detection and classification. The software comprises three parts: frequency-based Log-Gabor filter extraction of regions of interest (ROI); a unique feature selection and validation method (e.g., illumination-invariant features, color-independent features, and symmetrical texture features); and a cascade SVM classifier for handling "ulcer vs. non-ulcer" cases. In our experiments, the software gave decent results: frame-wise, the ulcer detection rate is 69.65% (319/458); instance-wise, the ulcer detection rate is 82.35% (28/34), with a false alarm rate of 16.43% (34/207). This work is part of our 2D/3D-based GI tract disease detection software platform, whose final goal is to find and classify major GI tract diseases, such as bleeding, ulcers, and polyps, from CE videos. This paper mainly describes the automatic ulcer detection module.
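
    The ROI extraction step above is built on frequency-based Log-Gabor filtering. Below is a minimal sketch of that idea, assuming a grayscale frame as a NumPy array; the center frequency, bandwidth, and thresholding rule are illustrative stand-ins for the GISentinel filter bank, which is not public.

```python
# Frequency-domain log-Gabor response used as a crude ROI detector.
import numpy as np

def log_gabor_response(img, f0=0.1, sigma=0.55):
    """Apply an isotropic log-Gabor filter in the frequency domain."""
    rows, cols = img.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(fx**2 + fy**2)
    radius[0, 0] = 1.0                      # avoid log(0) at DC
    lg = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma) ** 2))
    lg[0, 0] = 0.0                          # zero out the DC component
    return np.abs(np.fft.ifft2(np.fft.fft2(img) * lg))

frame = np.random.rand(256, 256)            # stand-in for a CE video frame
resp = log_gabor_response(frame)
roi_mask = resp > resp.mean() + 2 * resp.std()   # candidate ulcer regions
print("candidate ROI pixels:", int(roi_mask.sum()))
```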

  12. Pilot control through the TAFCOS automatic flight control system

    NASA Technical Reports Server (NTRS)

    Wehrend, W. R., Jr.

    1979-01-01

    The flight control logic used in a recently completed flight test program to evaluate the total automatic flight control system (TAFCOS), with the controller operating in a fully automatic mode, was used to perform an unmanned simulation on an IBM 360 computer in which the TAFCOS concept was extended to provide a multilevel pilot interface. A pilot-TAFCOS interface for direct pilot control via a velocity-control-wheel-steering mode was defined, as well as a means for calling up conventional autopilot modes. It is concluded that the TAFCOS structure is easily adaptable to the addition of pilot control through a stick-wheel-throttle arrangement similar to conventional airplane controls. Conventional autopilot modes, such as airspeed-hold, altitude-hold, heading-hold, and flight-path-angle-hold, can also be included.

  13. A real-time freehand ultrasound calibration system with automatic accuracy feedback and control.

    PubMed

    Chen, Thomas Kuiran; Thurston, Adrian D; Ellis, Randy E; Abolmaesumi, Purang

    2009-01-01

    This article describes a fully automatic, real-time, freehand ultrasound calibration system. The system was designed to be simple and sterilizable, intended for operating-room use. The calibration system employed an automatic error-retrieval and accuracy-control mechanism based on a set of ground-truth data. Extensive validations were conducted on a data set of 10,000 images in 50 independent calibration trials to thoroughly investigate the accuracy, robustness, and performance of the calibration system. On average, the calibration accuracy (measured as three-dimensional reconstruction error against a known ground truth) of all 50 trials was 0.66 mm. In addition, the calibration errors converged to the submillimeter level in 98% of all trials, within 12.5 s on average. Overall, the calibration system was able to consistently, efficiently, and robustly achieve high calibration accuracy with real-time performance.

  14. Automatic Semantic Segmentation of Brain Gliomas from MRI Images Using a Deep Cascaded Neural Network.

    PubMed

    Cui, Shaoguo; Mao, Lei; Jiang, Jingfeng; Liu, Chang; Xiong, Shuyu

    2018-01-01

    Brain tumors can appear anywhere in the brain and have vastly different sizes and morphology. Additionally, these tumors are often diffuse and poorly contrasted. Consequently, the segmentation of brain tumors and intratumor subregions using magnetic resonance imaging (MRI) data with minimal human intervention remains a challenging task. In this paper, we present a novel fully automatic segmentation method for MRI data containing in vivo brain gliomas. The approach can not only localize the entire tumor region but can also accurately segment the intratumor structure. The proposed work was based on a cascaded deep-learning convolutional neural network consisting of two subnetworks: (1) a tumor localization network (TLN) and (2) an intratumor classification network (ITCN). The TLN, a fully convolutional network (FCN) used in conjunction with transfer learning, first processes the MRI data; the goal of this first subnetwork is to define the tumor region in an MRI slice. The ITCN is then used to label the defined tumor region with multiple subregions. In particular, the ITCN exploits a convolutional neural network (CNN) with a deeper architecture and smaller kernels. The proposed approach was validated on the multimodal brain tumor segmentation (BRATS 2015) datasets, which contain 220 high-grade glioma (HGG) and 54 low-grade glioma (LGG) cases. The Dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity were used as evaluation metrics. Our experimental results indicated that the method obtains promising segmentation results with a fast segmentation speed. More specifically, the proposed method obtained comparable and overall better DSC values (0.89, 0.77, and 0.80) on the combined (HGG + LGG) testing set, as compared to other methods reported in the literature. Additionally, the proposed approach was able to complete a segmentation task in 1.54 seconds per slice.
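
    A schematic sketch of the two-stage cascade described above, written in PyTorch (assumed installed): a small FCN produces a tumor score map, and a deeper small-kernel CNN labels subregions that are then masked to the localized area. Layer sizes and channel counts are placeholders, not the authors' architecture.

```python
# Two-stage cascade sketch: localization network, then classification
# network restricted to the localized region.
import torch
import torch.nn as nn

class TLN(nn.Module):                     # stage 1: tumor localization network
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1))          # one-channel tumor score map
    def forward(self, x):
        return torch.sigmoid(self.net(x))

class ITCN(nn.Module):                    # stage 2: intratumor classification
    def __init__(self, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, 1))
    def forward(self, x):
        return self.net(x)

mri = torch.randn(1, 4, 128, 128)          # four MRI modalities, one slice
tumor_mask = TLN()(mri) > 0.5              # coarse tumor region
labels = ITCN()(mri).argmax(1, keepdim=True) * tumor_mask
print(labels.shape)                        # per-pixel subregion labels
```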

  15. An Auto-Tuning PI Control System for an Open-Circuit Low-Speed Wind Tunnel Designed for Greenhouse Technology

    PubMed Central

    Espinoza, Karlos; Valera, Diego L.; Torres, José A.; López, Alejandro; Molina-Aiz, Francisco D.

    2015-01-01

    Wind tunnels are a key experimental tool for the analysis of airflow parameters in many fields of application. Despite their great potential impact on agricultural research, few contributions have dealt with the development of automatic control systems for wind tunnels in the field of greenhouse technology. The objective of this paper is to present an automatic control system that provides precision and speed of measurement, as well as efficient data processing in low-speed wind tunnel experiments for greenhouse engineering applications. The system is based on an algorithm that identifies the system model and calculates the optimum PI controller. The validation of the system was performed on a cellulose evaporative cooling pad and on insect-proof screens to assess its response to perturbations. The control system provided an accuracy of <0.06 m·s⁻¹ for airflow speed and <0.50 Pa for pressure drop, thus permitting the reproducibility and standardization of the tests. The proposed control system also incorporates a fully-integrated software unit that manages the tests in terms of airflow speed and pressure drop set points. PMID:26274962
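
    A minimal sketch of the identify-then-tune loop described above: estimate a first-order model (gain K, time constant tau) from a recorded step response with the 63.2% method, then derive PI gains with the IMC/lambda rule. Both the identification method and the tuning rule are common textbook stand-ins, not necessarily the paper's algorithm.

```python
# First-order model identification from a step response, followed by
# IMC/lambda PI tuning. The simulated fan response is illustrative.
import numpy as np

def identify_first_order(t, y, u_step):
    """Estimate K and tau from a step response (63.2% method)."""
    K = y[-1] / u_step
    tau = float(np.interp(0.632 * y[-1], y, t))   # time to reach 63.2%
    return K, tau

def pi_gains(K, tau, lam):
    """IMC tuning: Kp = tau / (K * lambda), Ti = tau."""
    Kp = tau / (K * lam)
    Ki = Kp / tau
    return Kp, Ki

t = np.linspace(0, 10, 500)
y = 2.0 * (1 - np.exp(-t / 1.5))       # simulated step response, u = 1
K, tau = identify_first_order(t, y, u_step=1.0)
Kp, Ki = pi_gains(K, tau, lam=0.5)
print(f"K={K:.2f} tau={tau:.2f} -> Kp={Kp:.2f} Ki={Ki:.2f}")
```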

  16. A technology prototype system for rating therapist empathy from audio recordings in addiction counseling.

    PubMed

    Xiao, Bo; Huang, Chewei; Imel, Zac E; Atkins, David C; Georgiou, Panayiotis; Narayanan, Shrikanth S

    2016-04-01

    Scaling up psychotherapy services, such as for addiction counseling, is a critical societal need. One challenge is ensuring the quality of therapy, given the heavy cost of manual observational assessment. This work proposes a speech-technology-based system to automate the assessment of therapist empathy, a key therapy quality index, from audio recordings of psychotherapy interactions. We designed a speech processing system that includes voice activity detection and diarization modules, and an automatic speech recognizer plus a speaker role matching module to extract the therapist's language cues. We employed Maximum Entropy models, Maximum Likelihood language models, and a Lattice Rescoring method to characterize high vs. low empathic language. We estimated therapy-session-level empathy codes using utterance-level evidence obtained from these models. Our experiments showed that the fully automated system achieved a correlation of 0.643 between expert-annotated empathy codes and machine-derived estimations, and an accuracy of 81% in classifying high vs. low empathy, in comparison to a 0.721 correlation and 86% accuracy in the oracle setting using manual transcripts. The results show that the system provides useful information that can contribute to automatic quality assurance and therapist training.
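
    To make the Maximum Likelihood language modeling step concrete, the toy below fits add-one-smoothed unigram models for "high" and "low" empathy language and scores an utterance by its log-likelihood ratio. The training phrases are invented for illustration; the real system works on ASR output with far richer models.

```python
# Toy maximum-likelihood unigram language models with add-one smoothing;
# utterances are scored by a log-likelihood ratio between the two models.
import math
from collections import Counter

high = "that sounds really hard tell me more how did that feel".split()
low = "you just need to stop drinking that is the rule".split()

def unigram_model(tokens, vocab):
    counts = Counter(tokens)
    total = len(tokens) + len(vocab)           # add-one smoothing
    return {w: (counts[w] + 1) / total for w in vocab}

vocab = set(high) | set(low)
p_high = unigram_model(high, vocab)
p_low = unigram_model(low, vocab)

def llr(utterance):
    """Log-likelihood ratio; positive favours the high-empathy model."""
    return sum(math.log(p_high.get(w, 1e-9)) - math.log(p_low.get(w, 1e-9))
               for w in utterance.split())

print(llr("tell me more about how that sounds"))
```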

  17. A technology prototype system for rating therapist empathy from audio recordings in addiction counseling

    PubMed Central

    Xiao, Bo; Huang, Chewei; Imel, Zac E.; Atkins, David C.; Georgiou, Panayiotis; Narayanan, Shrikanth S.

    2016-01-01

    Scaling up psychotherapy services such as for addiction counseling is a critical societal need. One challenge is ensuring quality of therapy, due to the heavy cost of manual observational assessment. This work proposes a speech technology-based system to automate the assessment of therapist empathy—a key therapy quality index—from audio recordings of the psychotherapy interactions. We designed a speech processing system that includes voice activity detection and diarization modules, and an automatic speech recognizer plus a speaker role matching module to extract the therapist's language cues. We employed Maximum Entropy models, Maximum Likelihood language models, and a Lattice Rescoring method to characterize high vs. low empathic language. We estimated therapy-session level empathy codes using utterance level evidence obtained from these models. Our experiments showed that the fully automated system achieved a correlation of 0.643 between expert annotated empathy codes and machine-derived estimations, and an accuracy of 81% in classifying high vs. low empathy, in comparison to a 0.721 correlation and 86% accuracy in the oracle setting using manual transcripts. The results show that the system provides useful information that can contribute to automatic quality assurance and therapist training. PMID:28286867

  18. Flagstaff Robotic Survey Telescope (FRoST): Rapid Response for NEOs

    NASA Astrophysics Data System (ADS)

    Avner, Louis Daniel; Trilling, David E.; Dunham, Edward W.

    2016-10-01

    The Flagstaff Robotic Survey Telescope (FRoST) is a robotic 0.6 m Schmidt telescope that will be used for rapid follow-up observations of newly discovered Near Earth Objects (NEOs). Here, we present the progress being made on FRoST as well as the remaining tasks before the telescope is fully operational. With more than one thousand NEOs found yearly, more telescopes are needed to carry out follow-up observations. Most NEOs are found at their peak brightness, meaning that these observations need to happen quickly before the objects fade. By using the Catalina Sky Survey Queue Manager, FRoST will be able to accept interruptions during the night and prioritize observations automatically, allowing immediate follow-up. FRoST will help refine the orbits of newly discovered objects while providing optical colors. We will ingest information from the NEOCP and JPL's Scout program at five-minute intervals, observe newly discovered targets robotically, process the data automatically, and autonomously generate astrometry and colors. We estimate that we will provide essentially 100% recovery of objects brighter than V~20. This work was supported by the NSF MRI program as well as by NAU and Lowell Observatory.

  19. Automated detection of diabetic retinopathy on digital fundus images.

    PubMed

    Sinthanayothin, C; Boyce, J F; Williamson, T H; Cook, H L; Mensah, E; Lal, S; Usher, D

    2002-02-01

    The aim was to develop an automated screening system to analyse digital colour retinal images for important features of non-proliferative diabetic retinopathy (NPDR). High performance pre-processing of the colour images was performed. Previously described automated image analysis systems were used to detect major landmarks of the retinal image (optic disc, blood vessels and fovea). Recursive region growing segmentation algorithms combined with the use of a new technique, termed a 'Moat Operator', were used to automatically detect features of NPDR. These features included haemorrhages and microaneurysms (HMA), which were treated as one group, and hard exudates as another group. Sensitivity and specificity data were calculated by comparison with an experienced fundoscopist. The algorithm for exudate recognition was applied to 30 retinal images of which 21 contained exudates and nine were without pathology. The sensitivity and specificity for exudate detection were 88.5% and 99.7%, respectively, when compared with the ophthalmologist. HMA were present in 14 retinal images. The algorithm achieved a sensitivity of 77.5% and specificity of 88.7% for detection of HMA. Fully automated computer algorithms were able to detect hard exudates and HMA. This paper presents encouraging results in automatic identification of important features of NPDR.
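
    A simplified stand-in for the recursive region-growing segmentation used above for exudate candidates: grow from bright seed pixels into neighbouring pixels above a lower threshold, implemented iteratively with a stack. The thresholds and the random test image are illustrative.

```python
# Iterative (stack-based) region growing from bright seed pixels.
import numpy as np

def region_grow(img, seed_thresh=0.95, grow_thresh=0.8):
    grown = img >= seed_thresh                   # seed mask
    stack = list(zip(*np.nonzero(grown)))
    while stack:                                 # flood out from each seed
        r, c = stack.pop()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
                    and not grown[rr, cc] and img[rr, cc] >= grow_thresh):
                grown[rr, cc] = True
                stack.append((rr, cc))
    return grown

img = np.random.rand(128, 128)                   # stand-in for a retinal channel
mask = region_grow(img)
print("candidate lesion pixels:", int(mask.sum()))
```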

  20. Experience of automation failures in training: effects on trust, automation bias, complacency and performance.

    PubMed

    Sauer, Juergen; Chavaillaz, Alain; Wastell, David

    2016-06-01

    This work examined the effects of operators' exposure to various types of automation failures in training. Forty-five participants were trained for 3.5 h on a simulated process control environment. During training, participants either experienced a fully reliable, automatic fault repair facility (i.e. faults detected and correctly diagnosed), a misdiagnosis-prone one (i.e. faults detected but not correctly diagnosed) or a miss-prone one (i.e. faults not detected). One week after training, participants were tested for 3 h, experiencing two types of automation failures (misdiagnosis, miss). The results showed that automation bias was very high when operators trained on miss-prone automation encountered a failure of the diagnostic system. Operator errors resulting from automation bias were much higher when automation misdiagnosed a fault than when it missed one. Differences in trust levels that were instilled by the different training experiences disappeared during the testing session. Practitioner Summary: The experience of automation failures during training has some consequences. A greater potential for operator errors may be expected when an automatic system failed to diagnose a fault than when it failed to detect one.

  1. Flow-Based Assembly of Layer-by-Layer Capsules through Tangential Flow Filtration.

    PubMed

    Björnmalm, Mattias; Roozmand, Ali; Noi, Ka Fung; Guo, Junling; Cui, Jiwei; Richardson, Joseph J; Caruso, Frank

    2015-08-25

    Layer-by-layer (LbL) assembly on nano- and microparticles is of interest for a range of applications, including catalysis, optics, sensors, and drug delivery. One current limitation is the standard use of manual, centrifugation-based (pellet/resuspension) methods to perform the layering steps, which can make scalable, highly controllable, and automatable production difficult to achieve. Here, we develop a fully flow-based technique using tangential flow filtration (TFF) for LbL assembly on particles. We demonstrate that multilayered particles and capsules with different sizes (from micrometers to submicrometers in diameter) can be assembled on different templates (e.g., silica and calcium carbonate) using several polymers (e.g., poly(allylamine hydrochloride), poly(styrenesulfonate), and poly(diallyldimethylammonium chloride)). The full system only contains fluidic components routinely used (and automated) in industry, such as pumps, tanks, valves, and tubing in addition to the TFF filter modules. Using the TFF LbL system, we also demonstrate the centrifugation-free assembly, including core dissolution, of drug-loaded capsules. The well-controlled, integrated, and automatable nature of the TFF LbL system provides scientific, engineering, and practical processing benefits, making it valuable for research environments and potentially useful for translating LbL assembled particles into diverse applications.

  2. Fully Automatic Segmentation of Fluorescein Leakage in Subjects With Diabetic Macular Edema

    PubMed Central

    Rabbani, Hossein; Allingham, Michael J.; Mettu, Priyatham S.; Cousins, Scott W.; Farsiu, Sina

    2015-01-01

    Purpose. To create and validate software to automatically segment leakage area in real-world clinical fluorescein angiography (FA) images of subjects with diabetic macular edema (DME). Methods. Fluorescein angiography images obtained from 24 eyes of 24 subjects with DME were retrospectively analyzed. Both video and still-frame images were obtained using a Heidelberg Spectralis 6-mode HRA/OCT unit. We aligned early and late FA frames in the video by a two-step nonrigid registration method. To remove background artifacts, we subtracted early and late FA frames. Finally, after postprocessing steps, including detection and inpainting of the vessels, a robust active contour method was utilized to obtain leakage area in a 1500-μm-radius circular region centered at the fovea. Images were captured at different fields of view (FOVs) and were often contaminated with outliers, as is the case in real-world clinical imaging. Our algorithm was applied to these images with no manual input. Separately, all images were manually segmented by two retina specialists. The sensitivity, specificity, and accuracy of manual interobserver, manual intraobserver, and automatic methods were calculated. Results. The mean accuracy was 0.86 ± 0.08 for automatic versus manual, 0.83 ± 0.16 for manual interobserver, and 0.90 ± 0.08 for manual intraobserver segmentation methods. Conclusions. Our fully automated algorithm can reproducibly and accurately quantify the area of leakage of clinical-grade FA video and is congruent with expert manual segmentation. The performance was reliable for different DME subtypes. This approach has the potential to reduce time and labor costs and may yield objective and reproducible quantitative measurements of DME imaging biomarkers. PMID:25634978
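
    The core of the pipeline (early/late frame subtraction and thresholding inside a 1500-μm-radius fovea-centred disk) can be sketched as below, assuming the frames are already registered and the vessels already inpainted. The image scale, fovea location, and threshold rule are illustrative assumptions, not the published algorithm's values.

```python
# Frame subtraction plus thresholding inside a fovea-centred disk.
import numpy as np

UM_PER_PIXEL = 10.0                               # assumed image scale
rng = np.random.default_rng(0)
early = rng.random((512, 512))                    # registered early FA frame
late = np.clip(early + 0.3 * rng.random((512, 512)), 0, 1)   # late frame

diff = late - early                               # leakage accumulates late
yy, xx = np.mgrid[:512, :512]
fovea = (256, 256)                                # assumed fovea centre
disk = (yy - fovea[0])**2 + (xx - fovea[1])**2 <= (1500 / UM_PER_PIXEL)**2
leakage = (diff > diff[disk].mean() + 2 * diff[disk].std()) & disk
print("leakage area (pixels):", int(leakage.sum()))
```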

  3. Fully automatic segmentation of fluorescein leakage in subjects with diabetic macular edema.

    PubMed

    Rabbani, Hossein; Allingham, Michael J; Mettu, Priyatham S; Cousins, Scott W; Farsiu, Sina

    2015-01-29

    To create and validate software to automatically segment leakage area in real-world clinical fluorescein angiography (FA) images of subjects with diabetic macular edema (DME). Fluorescein angiography images obtained from 24 eyes of 24 subjects with DME were retrospectively analyzed. Both video and still-frame images were obtained using a Heidelberg Spectralis 6-mode HRA/OCT unit. We aligned early and late FA frames in the video by a two-step nonrigid registration method. To remove background artifacts, we subtracted early and late FA frames. Finally, after postprocessing steps, including detection and inpainting of the vessels, a robust active contour method was utilized to obtain leakage area in a 1500-μm-radius circular region centered at the fovea. Images were captured at different fields of view (FOVs) and were often contaminated with outliers, as is the case in real-world clinical imaging. Our algorithm was applied to these images with no manual input. Separately, all images were manually segmented by two retina specialists. The sensitivity, specificity, and accuracy of manual interobserver, manual intraobserver, and automatic methods were calculated. The mean accuracy was 0.86 ± 0.08 for automatic versus manual, 0.83 ± 0.16 for manual interobserver, and 0.90 ± 0.08 for manual intraobserver segmentation methods. Our fully automated algorithm can reproducibly and accurately quantify the area of leakage of clinical-grade FA video and is congruent with expert manual segmentation. The performance was reliable for different DME subtypes. This approach has the potential to reduce time and labor costs and may yield objective and reproducible quantitative measurements of DME imaging biomarkers. Copyright 2015 The Association for Research in Vision and Ophthalmology, Inc.

  4. Measuring automatic retrieval: a comparison of implicit memory, process dissociation, and speeded response procedures.

    PubMed

    Horton, Keith D; Wilson, Daryl E; Vonk, Jennifer; Kirby, Sarah L; Nielsen, Tina

    2005-07-01

    Using the stem completion task, we compared estimates of automatic retrieval from an implicit memory task, the process dissociation procedure, and the speeded response procedure. Two standard manipulations were employed. In Experiment 1, a depth of processing effect was found on automatic retrieval using the speeded response procedure although this effect was substantially reduced in Experiment 2 when lexical processing was required of all words. In Experiment 3, the speeded response procedure showed an advantage of full versus divided attention at study on automatic retrieval. An implicit condition showed parallel effects in each study, suggesting that implicit stem completion may normally provide a good estimate of automatic retrieval. Also, we replicated earlier findings from the process dissociation procedure, but estimates of automatic retrieval from this procedure were consistently lower than those from the speeded response procedure, except when conscious retrieval was relatively low. We discuss several factors that may contribute to the conflicting outcomes, including the evidence for theoretical assumptions and criterial task differences between implicit and explicit tests.

  5. Parallel Eclipse Project Checkout

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

    2011-01-01

    Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (an xml file) and automatically checks out all of the plug-ins listed in the feature, resolving the issue of manually checking out each plug-in required to work on the project. To minimize the time needed to check out the plug-ins, the program performs the plug-in checkouts in parallel: after the feature is parsed, a checkout request is issued for each plug-in in the feature, and these requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before work on the project begins. For instance, projects that took 30 minutes to check out now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying bandwidth use: when running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. The Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code. It can be applied to any Eclipse-based repository with a similar structure. It can also apply build parameters and preferences automatically at the end of the checkout.
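
    A minimal sketch of the parallel-checkout idea: parse a feature description, then issue one checkout per plug-in through a fixed-size thread pool. The inline feature XML, repository URL scheme, and use of `svn` are assumptions for illustration; PEPC itself targets Eclipse RCP workspaces.

```python
# Parse a feature description and check out its plug-ins in parallel.
import subprocess
import xml.etree.ElementTree as ET
from concurrent.futures import ThreadPoolExecutor

REPO = "https://example.org/repo"            # hypothetical repository root

FEATURE_XML = """<feature id="example.feature">
  <plugin id="org.example.core"/>
  <plugin id="org.example.ui"/>
</feature>"""                                # inline stand-in for feature.xml

def plugin_ids(feature_xml):
    """Collect the ids of all plug-ins aggregated by the feature."""
    return [p.get("id") for p in ET.fromstring(feature_xml).iter("plugin")]

def checkout(plugin):
    """Check out one plug-in; `svn` and the URL scheme are assumptions."""
    subprocess.run(["svn", "checkout", f"{REPO}/{plugin}", plugin], check=True)

# Configurable pool size; requests run in parallel rather than one by one.
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(checkout, plugin_ids(FEATURE_XML)))
```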

  6. Automated x-ray/light field congruence using the LINAC EPID panel.

    PubMed

    Polak, Wojciech; O'Doherty, Jim; Jones, Matt

    2013-03-01

    X-ray/light field alignment is a test described in many guidelines for the routine quality control of clinical linear accelerators (LINACs). Currently, the gold-standard method for measuring alignment is the use of radiographic film. However, many modern LINACs are equipped with an electronic portal imaging device (EPID) that may be used to perform this test, thereby reducing overall cost, processing, and analysis time, removing operator dependency, and eliminating the requirement to sustain the departmental film processor. This work describes a novel method of utilizing the EPID together with a custom in-house designed jig and automatic image processing software, allowing measurement of the light field size, the x-ray field size, and the congruence between them. The authors present results of testing the method on aS1000 and aS500 Varian EPID detectors for six LINACs at a range of energies (6, 10, and 15 MV), in comparison with the results obtained from radiographic film. Reproducibility of the software in fully automatic operation under a range of operating conditions for a single image showed a congruence of 0.01 cm with a coefficient of variation of 0. Slight variation in congruence repeatability was noted with semiautomatic processing by four independent operators, due to manual marking of positions on the jig. Testing of the methodology using the automatic method shows a high precision of 0.02 mm, compared to a maximum of 0.06 mm determined by film processing. Intra-individual examination of operator measurements of congruence was shown to vary by as much as 0.75 mm. Similar congruence measurements of 0.02 mm were also determined for a lower-resolution EPID (the aS500 model) after rescaling of the image to the aS1000 image size. The designed methodology was proven to be time efficient, cost effective, and at least as accurate as the gold-standard radiographic film. Additionally, congruence testing can easily be performed for all four cardinal gantry angles, which can be difficult when using radiographic film. Therefore, the authors propose it can be used as an alternative to the radiographic film method, allowing decommissioning of the film processor.

  7. Automatic thermographic image defect detection of composites

    NASA Astrophysics Data System (ADS)

    Luo, Bin; Liebenberg, Bjorn; Raymont, Jeff; Santospirito, SP

    2011-05-01

    Detecting defects, and especially reliably measuring defect sizes, are critical objectives in automatic NDT defect detection applications. In this work, the Sentence software is proposed for the analysis of pulsed thermography and near-IR images of composite materials. Furthermore, the Sentence software delivers an end-to-end, user-friendly platform for engineers to perform complete manual inspections, as well as tools that allow senior engineers to develop inspection templates and profiles, reducing the requisite thermographic skill level of the operating engineer. Finally, the Sentence software can also offer complete independence from operator decisions through the fully automated "Beep on Defect" detection functionality. The end-to-end automatic inspection system includes sub-systems for defining a panel profile, generating an inspection plan, controlling a robot arm, and capturing thermographic images to detect defects. A statistical model has been built to analyze the entire image, evaluate grey-scale ranges, import sentencing criteria, and automatically detect impact damage defects. A full-width half-maximum algorithm is used to quantify flaw sizes. The identified defects are imported into the sentencing engine, which then sentences the inspection (automatically compares analysis results against acceptance criteria) by comparing the most significant defect or group of defects against the inspection standards.
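
    The flaw-sizing step above uses a full-width half-maximum measure. Below is a small sketch of that computation for a 1-D profile through a defect, with linear interpolation at the half-maximum crossings; the Gaussian test profile is illustrative.

```python
# Full-width half-maximum of a 1-D profile, interpolated at the crossings.
import numpy as np

def fwhm(profile, spacing=1.0):
    half = profile.max() / 2.0
    above = np.nonzero(profile >= half)[0]
    left, right = above[0], above[-1]

    def cross(i0, i1):
        """Interpolate the half-maximum crossing between samples i0 and i1."""
        y0, y1 = profile[i0], profile[i1]
        return i0 + (half - y0) / (y1 - y0)

    x_left = cross(left - 1, left) if left > 0 else float(left)
    x_right = cross(right, right + 1) if right < len(profile) - 1 else float(right)
    return (x_right - x_left) * spacing

x = np.linspace(-5, 5, 201)
profile = np.exp(-x**2)
print(fwhm(profile, spacing=x[1] - x[0]))   # ~1.665 for this Gaussian
```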

  8. Testing interactive effects of automatic and conflict control processes during response inhibition - A system neurophysiological study.

    PubMed

    Chmielewski, Witold X; Beste, Christian

    2017-02-01

    In everyday life, successful action often requires inhibiting automatic responses that may not be appropriate in the current situation. These response inhibition processes have been shown to become aggravated with increasing automaticity of prepotent response tendencies. Likewise, it has been shown that inhibitory processes are complicated by concurrent engagement in additional cognitive control processes (e.g., conflict monitoring). Therefore, opposing processes (i.e., automaticity and cognitive control) seem to strongly impact response inhibition. However, possible interactive effects of automaticity and cognitive control on the modulation of response inhibition have not yet been examined. In the current study we examine this question using a novel experimental paradigm combining a Go/NoGo with a Simon task in a system-neurophysiological approach combining EEG recordings with source localization analyses. The results show that response inhibition is less accurate in non-conflicting than in conflicting stimulus-response mappings. Thus it seems that conflicts, and the resulting engagement in conflict monitoring processes as reflected in the N2 amplitude, may foster response inhibition. This engagement in conflict monitoring leads to an increase in cognitive control, as reflected by increased activity in the anterior and posterior cingulate areas, while the automaticity of response tendencies is simultaneously decreased. Most importantly, this study suggests that the quality of conflict processing in anterior cingulate areas, and especially the resulting interaction of cognitive control and the automaticity of prepotent response tendencies, are important factors to consider when it comes to the modulation of response inhibition. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Automaticity in Anxiety Disorders and Major Depressive Disorder

    PubMed Central

    Teachman, Bethany A.; Joormann, Jutta; Steinman, Shari; Gotlib, Ian H.

    2012-01-01

    In this paper we examine the nature of automatic cognitive processing in anxiety disorders and Major Depressive Disorder (MDD). Rather than viewing automaticity as a unitary construct, we follow a social cognition perspective (Bargh, 1994) that argues for four theoretically independent features of automaticity: unconscious (processing of emotional stimuli occurs outside awareness), efficient (processing emotional meaning uses minimal attentional resources), unintentional (no goal is needed to engage in processing emotional meaning), and uncontrollable (limited ability to avoid, alter or terminate processing emotional stimuli). Our review of the literature suggests that most anxiety disorders are characterized by uncontrollable, and likely also unconscious and unintentional, biased processing of threat-relevant information. In contrast, MDD is most clearly typified by uncontrollable, but not unconscious or unintentional, processing of negative information. For the anxiety disorders and for MDD, there is not sufficient evidence to draw firm conclusions about efficiency of processing, though early indications are that neither anxiety disorders nor MDD are characterized by this feature. Clinical and theoretical implications of these findings are discussed and directions for future research are offered. In particular, it is clear that paradigms that more directly delineate the different features of automaticity are required to gain a more comprehensive and systematic understanding of the importance of automatic processing in emotion dysregulation. PMID:22858684

  10. EU-FP7-iMARS: analysis of Mars multi-resolution images using auto-coregistration, data mining and crowd source techniques: A Mid-term Report

    NASA Astrophysics Data System (ADS)

    Muller, J.-P.; Yershov, V.; Sidiropoulos, P.; Gwinner, K.; Willner, K.; Fanara, L.; Waelisch, M.; van Gasselt, S.; Walter, S.; Ivanov, A.; Cantini, F.; Morley, J. G.; Sprinks, J.; Giordano, M.; Wardlaw, J.; Kim, J.-R.; Chen, W.-T.; Houghton, R.; Bamford, S.

    2015-10-01

    Understanding the role of different solid surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 8 years, especially in 3D imaging of surface shape (down to resolutions of tens of centimetres) and subsequent terrain correction of imagery from orbiting spacecraft. This has created the potential to overlay images from different epochs going back to the mid-1970s. Within iMars, a processing system has been developed to generate 3D Digital Terrain Models (DTMs) and corresponding OrthoRectified Images (ORIs) fully automatically from NASA MRO HiRISE and CTX stereo-pairs, which are coregistered to corresponding HRSC ORI/DTMs. In parallel, iMars has developed a fully automated processing chain for co-registering level-1 (EDR) images from all previous NASA orbital missions to these HRSC ORIs; in the case of HiRISE, images are further co-registered to the previously co-registered CTX-to-HRSC ORIs. Examples will be shown of these multi-resolution ORIs and of the application of different data mining algorithms to change detection using the co-registered images. iMars has recently launched a citizen science experiment to evaluate best practices for future citizen-scientist validation of such data-mining results. An example of the iMars website will be shown, along with an embedded Version 0 prototype of a webGIS based on OGC standards.

  11. Improved automated lumen contour detection by novel multifrequency processing algorithm with current intravascular ultrasound system.

    PubMed

    Kume, Teruyoshi; Kim, Byeong-Keuk; Waseda, Katsuhisa; Sathyanarayana, Shashidhar; Li, Wenguang; Teo, Tat-Jin; Yock, Paul G; Fitzgerald, Peter J; Honda, Yasuhiro

    2013-02-01

    The aim of this study was to evaluate a new fully automated lumen border tracing system based on a novel multifrequency processing algorithm. We developed the multifrequency processing method to enhance arterial lumen detection by exploiting the differential scattering characteristics of blood and arterial tissue. The implementation of the method can be integrated into current intravascular ultrasound (IVUS) hardware. This study was performed in vivo with conventional 40-MHz IVUS catheters (Atlantis SR Pro™, Boston Scientific Corp, Natick, MA) in 43 clinical patients with coronary artery disease. A total of 522 frames were randomly selected, and lumen areas were measured after automatically tracing lumen borders with the new tracing system and a commercially available tracing system (TraceAssist™) referred to as the "conventional tracing system." The data assessed by the two automated systems were compared with the results of manual tracings by experienced IVUS analysts. New automated lumen measurements showed better agreement with manual lumen area tracings compared with those of the conventional tracing system (correlation coefficient: 0.819 vs. 0.509). When compared against manual tracings, the new algorithm also demonstrated improved systematic error (mean difference: 0.13 vs. -1.02 mm²) and random variability (standard deviation of difference: 2.21 vs. 4.02 mm²) compared with the conventional tracing system. This preliminary study showed that the novel fully automated tracing system based on the multifrequency processing algorithm can provide more accurate lumen border detection than current automated tracing systems and thus offer a more reliable quantitative evaluation of lumen geometry. Copyright © 2011 Wiley Periodicals, Inc.

  12. Exemplar-Based Image and Video Stylization Using Fully Convolutional Semantic Features.

    PubMed

    Zhu, Feida; Yan, Zhicheng; Bu, Jiajun; Yu, Yizhou

    2017-05-10

    Color and tone stylization in images and videos strives to enhance unique themes with artistic color and tone adjustments. It has a broad range of applications, from professional image postprocessing to photo sharing over social networks. Mainstream photo enhancement software, such as Adobe Lightroom and Instagram, provides users with predefined styles, which are often hand-crafted through a trial-and-error process. Such photo adjustment tools lack a semantic understanding of image contents, and the resulting global color transforms limit the range of artistic styles they can represent. On the other hand, stylistic enhancement needs to apply distinct adjustments to various semantic regions; such an ability enables a broader range of visual styles. In this paper, we first propose a novel deep learning architecture for exemplar-based image stylization, which learns local enhancement styles from image pairs. Our deep learning architecture consists of fully convolutional networks (FCNs) for automatic semantics-aware feature extraction and fully connected neural layers for adjustment prediction. Image stylization can be efficiently accomplished with a single forward pass through our deep network. To extend our deep network from image stylization to video stylization, we exploit temporal superpixels (TSPs) to facilitate the transfer of artistic styles from image exemplars to videos. Experiments on a number of datasets for image stylization, as well as a diverse set of video clips, demonstrate the effectiveness of our deep learning architecture.

  13. FreeSurfer-initiated fully-automated subcortical brain segmentation in MRI using Large Deformation Diffeomorphic Metric Mapping.

    PubMed

    Khan, Ali R; Wang, Lei; Beg, Mirza Faisal

    2008-07-01

    Fully-automated brain segmentation methods have not been widely adopted for clinical use because of issues related to reliability, accuracy, and limitations of delineation protocol. By combining the probabilistic-based FreeSurfer (FS) method with the Large Deformation Diffeomorphic Metric Mapping (LDDMM)-based label-propagation method, we are able to increase reliability and accuracy, and allow for flexibility in template choice. Our method uses the automated FreeSurfer subcortical labeling to provide a coarse-to-fine introduction of information in the LDDMM template-based segmentation, resulting in a fully-automated subcortical brain segmentation method (FS+LDDMM). One major advantage of the FS+LDDMM-based approach is that the automatically generated segmentations are inherently smooth, so subsequent steps in shape analysis can directly follow without manual post-processing or loss of detail. We have evaluated our new FS+LDDMM method on several databases containing a total of 50 subjects with different pathologies, scan sequences and manual delineation protocols for labeling the basal ganglia, thalamus, and hippocampus. In healthy controls we report Dice overlap measures of 0.81, 0.83, 0.74, 0.86 and 0.75 for the right caudate nucleus, putamen, pallidum, thalamus and hippocampus respectively. We also find statistically significant improvement of accuracy in FS+LDDMM over FreeSurfer for the caudate nucleus and putamen of Huntington's disease and Tourette's syndrome subjects, and the right hippocampus of Schizophrenia subjects.

  14. TU-CD-BRA-04: Evaluation of An Atlas-Based Segmentation Method for Prostate and Peripheral Zone Regions On MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, AS; Piper, J; Curry, K

    2015-06-15

    Purpose: Prostate MRI plays an important role in diagnosis, biopsy guidance, and therapy planning for prostate cancer. Prostate MRI contours can be used to aid in image fusion for ultrasound biopsy guidance and delivery of radiation. Our goal in this study is to evaluate an automatic atlas-based segmentation method for generating prostate and peripheral zone (PZ) contours on MRI. Methods: T2-weighted MRIs were acquired on a 3T Discovery MR750 system (GE, Milwaukee). The volumes of interest (VOIs), prostate and PZ, were outlined by an expert radiation oncologist and used to create an atlas library for atlas-based segmentation. The atlas-segmentation accuracy was evaluated using a leave-one-out analysis. The method involved automatically finding the atlas subject that best matched the test subject, followed by a normalized intensity-based free-form deformable registration of the atlas subject to the test subject. The prostate and PZ contours were transformed to the test subject using the same deformation. For each test subject the three best matches were used, and the final contour was combined using majority vote. The atlas-segmentation process was fully automatic. Dice similarity coefficients (DSC) and mean Hausdorff distances were used for comparison. Results: VOI contours were available for 28 subjects. For the prostate, the atlas-based segmentation method resulted in an average DSC of 0.88 ± 0.08 and a mean Hausdorff distance of 1.1 ± 0.9 mm. The number of patients in each DSC range is as follows: 0.60–0.69 (1), 0.70–0.79 (2), 0.80–0.89 (13), >0.89 (11). For the PZ, the average DSC was 0.72 ± 0.17 and the average Hausdorff distance was 0.9 ± 0.9 mm. The number of patients in each DSC range is as follows: <0.60 (4), 0.60–0.69 (6), 0.70–0.79 (7), 0.80–0.89 (9), >0.89 (1). Conclusion: The MRI atlas-based segmentation method achieved good results for both the whole prostate and the PZ compared to expert-defined VOIs. The technique is fast, fully automatic, and has the potential to provide significant time savings for prostate VOI definition. AS Nelson and J Piper are partial owners of MIM Software, Inc. AS Nelson, J Piper, K Curry, and A Swallen are current employees of MIM Software, Inc.

  15. Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy

    NASA Astrophysics Data System (ADS)

    Bucht, Curry; Söderberg, Per; Manneberg, Göran

    2009-02-01

    The corneal endothelium serves as the posterior barrier of the cornea. Factors such as the clarity and refractive properties of the cornea are directly related to the quality of the endothelium. The endothelial cell density is considered the most important morphological factor. Morphometry of the corneal endothelium is presently done by semi-automated analysis of pictures captured by a clinical specular microscope (CSM). Because of the occasional need for operator involvement, this process can be tedious, having a negative impact on sampling size. This study was dedicated to the development of fully automated analysis of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software, which automatically performed digital enhancement of the images. The digitally enhanced images were then transformed using the fast Fourier transform (FFT). Tools were developed and applied for identification and analysis of relevant characteristics of the Fourier-transformed images. The data obtained from each Fourier-transformed image were used to calculate the mean cell density of the corresponding corneal endothelium; the calculation was based on well-known diffraction theory. Estimated cell densities of the corneal endothelium were obtained using the fully automated analysis software on images captured by CSM. The cell density obtained by the fully automated analysis was compared to the cell density obtained from the classical, semi-automated analysis, and a strong correlation was found.
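
    A sketch of the Fourier-based density estimate described above: locate the dominant ring in the radial power spectrum and convert its spatial frequency to a cell density, here assuming a roughly hexagonal mosaic and an invented microscope scale. The synthetic test pattern and the packing factor are illustrative, not the study's calibration.

```python
# Dominant spectral ring -> cell density, assuming hexagonal packing.
import numpy as np

def dominant_frequency(img):
    """Radius (cycles per image) of the strongest non-DC spectral ring."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2
    n = img.shape[0]
    yy, xx = np.mgrid[:n, :n] - n // 2
    radius = np.hypot(yy, xx).astype(int)
    radial = np.bincount(radius.ravel(), spec.ravel())   # radial power sum
    return int(np.argmax(radial[2:])) + 2                # skip DC/lowest bins

n, period_px = 256, 16                       # image size, cell period (pixels)
yy, xx = np.mgrid[:n, :n]
img = np.sin(2 * np.pi * xx / period_px) + np.sin(2 * np.pi * yy / period_px)

um_per_px = 1.0                              # assumed microscope scale
f_per_um = dominant_frequency(img) / (n * um_per_px)     # cells per micrometre
density_mm2 = (2 / np.sqrt(3)) * (f_per_um * 1000) ** 2  # hexagonal packing
print(f"~{density_mm2:.0f} cells/mm^2")
```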

  16. A Fully-Automatic Method to Segment the Carotid Artery Layers in Ultrasound Imaging: Application to Quantify the Compression-Decompression Pattern of the Intima-Media Complex During the Cardiac Cycle.

    PubMed

    Zahnd, Guillaume; Kapellas, Kostas; van Hattem, Martijn; van Dijk, Anouk; Sérusclat, André; Moulin, Philippe; van der Lugt, Aad; Skilton, Michael; Orkisz, Maciej

    2017-01-01

    The aim of this study was to introduce and evaluate a contour segmentation method to extract the interfaces of the intima-media complex in carotid B-mode ultrasound images. The method was applied to assess the temporal variation of intima-media thickness during the cardiac cycle. The main methodological contribution of the proposed approach is the introduction of an augmented dimension to process 2-D images in a 3-D space. The third dimension, which is added to the two spatial dimensions of the image, corresponds to the tentative local thickness of the intima-media complex. The method is based on a dynamic programming scheme that runs in a 3-D space generated with a shape-adapted filter bank. The optimal solution corresponds to a single medial axis representation that fully describes the two anatomical interfaces of the arterial wall. The method is fully automatic and does not require any input from the user. The method was trained on 60 subjects and validated on 184 other subjects from six different cohorts and four different medical centers. The arterial wall was successfully segmented in all analyzed images (average pixel size = 57 ± 20 μm), with average segmentation errors of 47 ± 70 μm for the lumen-intima interface, 55 ± 68 μm for the media-adventitia interface and 66 ± 90 μm for the intima-media thickness. The amplitude of the temporal variations in IMT during the cardiac cycle was significantly higher in the diseased population than in healthy volunteers (106 ± 48 vs. 86 ± 34 μm, p = 0.001). The introduced framework is a promising approach to investigate an emerging functional parameter of the arterial wall by assessing the cyclic compression-decompression pattern of the tissues. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  17. Test of a potential link between analytic and nonanalytic category learning and automatic, effortful processing.

    PubMed

    Tracy, J I; Pinsk, M; Helverson, J; Urban, G; Dietz, T; Smith, D J

    2001-08-01

    The link between automatic and effortful processing and nonanalytic and analytic category learning was evaluated in a sample of 29 college undergraduates using declarative memory, semantic category search, and pseudoword categorization tasks. Automatic and effortful processing measures were hypothesized to be associated with nonanalytic and analytic categorization, respectively. Results suggested that, contrary to prediction, strong criterion-attribute (analytic) responding on the pseudoword categorization task was associated with strong automatic, implicit memory encoding of frequency-of-occurrence information. The data are discussed in terms of the possibility that criterion-attribute category knowledge, once established, may be expressed with few attentional resources. The data indicate that attention-resource requirements, even for the same stimuli and task, vary depending on the category rule system utilized. Also, the automaticity emerging from familiarity with analytic category exemplars is very different from the automaticity arising from extensive practice on a semantic category search task. The data do not support any simple mapping of analytic and nonanalytic forms of category learning onto the automatic and effortful processing dichotomy, and they challenge simple models of brain asymmetries for such procedures. Copyright 2001 Academic Press.

  18. Automatic extraction of disease-specific features from Doppler images

    NASA Astrophysics Data System (ADS)

    Negahdar, Mohammadreza; Moradi, Mehdi; Parajuli, Nripesh; Syeda-Mahmood, Tanveer

    2017-03-01

    Flow Doppler imaging is widely used by clinicians to detect diseases of the valves. In particular, a continuous wave (CW) Doppler mode scan is routinely done during echocardiography and shows Doppler signal traces over multiple heart cycles. Traditionally, echocardiographers have manually traced such velocity envelopes to extract measurements such as decay time and pressure gradient, which are then matched to normal and abnormal values based on clinical guidelines. In this paper, we present a fully automatic approach to deriving these measurements for aortic stenosis retrospectively from echocardiography videos. Comparison of our method with measurements made by echocardiographers shows strong agreement, as well as identification of new cases missed by the echocardiographers.
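
    Once the velocity envelope has been traced, measurements such as pressure gradient follow from standard formulas. The sketch below applies the simplified Bernoulli relation (ΔP = 4v², with v in m/s and the result in mmHg) to a synthetic systolic envelope; the envelope shape and values are illustrative only, not the paper's pipeline.

```python
# Peak and mean pressure gradients from a traced CW Doppler envelope,
# via the simplified Bernoulli equation (dP = 4 * v^2, mmHg).
import numpy as np

t = np.linspace(0, 0.35, 200)                  # one systolic ejection (s)
envelope = 4.2 * np.sin(np.pi * t / 0.35)      # peak velocity 4.2 m/s

peak_gradient = 4.0 * envelope.max() ** 2      # simplified Bernoulli, mmHg
mean_gradient = np.mean(4.0 * envelope ** 2)   # averaged over ejection
print(f"peak {peak_gradient:.0f} mmHg, mean {mean_gradient:.0f} mmHg")
```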

  19. Gait analysis--precise, rapid, automatic, 3-D position and orientation kinematics and dynamics.

    PubMed

    Mann, R W; Antonsson, E K

    1983-01-01

    A fully automatic optoelectronic photogrammetric technique is presented for measuring the spatial kinematics of human motion (both position and orientation) and estimating the inertial (net) dynamics. Calibration and verification showed that in a two-meter cube viewing volume, the system achieves one millimeter of accuracy and resolution in translation and 20 milliradians in rotation. Since double differentiation of generalized position data to determine accelerations amplifies noise, the frequency domain characteristics of the system were investigated. It was found that the noise and all other errors in the kinematic data contribute less than five percent error to the resulting dynamics.

  20. MICRA: an automatic pipeline for fast characterization of microbial genomes from high-throughput sequencing data.

    PubMed

    Caboche, Ségolène; Even, Gaël; Loywick, Alexandre; Audebert, Christophe; Hot, David

    2017-12-19

    The increase in available sequence data has advanced the field of microbiology; however, making sense of these data without bioinformatics skills is still problematic. We describe MICRA, an automatic pipeline, available as a web interface, for microbial identification and characterization through reads analysis. MICRA uses iterative mapping against reference genomes to identify genes and variations. Additional modules allow prediction of antibiotic susceptibility and resistance and comparing the results of several samples. MICRA is fast, producing few false-positive annotations and variant calls compared to current methods, making it a tool of great interest for fully exploiting sequencing data.

  1. Two Methods of Automatic Evaluation of Speech Signal Enhancement Recorded in the Open-Air MRI Environment

    NASA Astrophysics Data System (ADS)

    Přibil, Jiří; Přibilová, Anna; Frollo, Ivan

    2017-12-01

    The paper focuses on two methods of evaluating the success of speech signal enhancement recorded in an open-air magnetic resonance imager during phonation for 3D human vocal tract modeling. The first approach enables a comparison based on statistical analysis by ANOVA and hypothesis tests. The second method is based on classification by Gaussian mixture models (GMM). The performed experiments confirmed that the proposed ANOVA and GMM classifiers for automatic evaluation of speech quality are functional and produce results fully comparable with the standard evaluation based on the listening test method.
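
    A compact illustration of the second evaluation method: fit one Gaussian mixture model per condition on per-frame feature vectors and classify a recording by the higher average log-likelihood. The features here are synthetic and scikit-learn is assumed available; the paper's actual acoustic features are not reproduced.

```python
# Two-class GMM evaluation: one model per condition, decide by likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
clean = rng.normal(0.0, 1.0, (200, 12))        # "enhanced speech" features
noisy = rng.normal(1.5, 1.2, (200, 12))        # "raw MRI-noise" features

gmm_clean = GaussianMixture(n_components=4, random_state=0).fit(clean)
gmm_noisy = GaussianMixture(n_components=4, random_state=0).fit(noisy)

test = rng.normal(0.0, 1.0, (50, 12))          # frames from one recording
score_c = gmm_clean.score(test)                # mean log-likelihood per frame
score_n = gmm_noisy.score(test)
print("classified as:", "enhanced" if score_c > score_n else "noisy")
```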

  2. Tidal analysis and Arrival Process Mining Using Automatic Identification System (AIS) Data

    DTIC Science & Technology

    2017-01-01

    files, organized by location. The data were processed using the Python programming language (van Rossum and Drake 2001) and the Pandas data analysis library. [Cover-page residue: ERDC/CHL TR-17-2, Coastal Inlets Research Program, January 2017; Brandan M. Scully.]

  3. Chemical vapor deposition for automatic processing of integrated circuits

    NASA Technical Reports Server (NTRS)

    Kennedy, B. W.

    1980-01-01

    Chemical vapor deposition for the automatic processing of integrated circuits is discussed, including the wafer carrier, loading from a receiving air track into automatic furnaces, and unloading onto a sending air track. Passivation using electron-beam-deposited quartz is also considered.

  4. Application of automatic threshold in dynamic target recognition with low contrast

    NASA Astrophysics Data System (ADS)

    Miao, Hua; Guo, Xiaoming; Chen, Yu

    2014-11-01

    A hybrid photoelectric joint transform correlator can realize automatic real-time recognition with high precision by combining optical and electronic devices. When recognizing low-contrast targets with a photoelectric joint transform correlator, differences in attitude, brightness, and grayscale between target and template mean that only four to five frames of a dynamic target can be recognized without any processing. A CCD camera capturing 25 frames per second is used to acquire the dynamic target images. Automatic thresholding has many advantages, such as fast processing speed, effective shielding of noise interference, enhancement of the diffraction energy of useful information, and better preservation of the outlines of target and template, so this method plays a very important role in target recognition by optical correlation. However, the threshold obtained automatically by the program cannot achieve the best recognition results for dynamic targets, because outline information is broken to some extent; in most cases the optimal threshold is obtained by manual intervention. Aiming at the characteristics of dynamic targets, an improved automatic thresholding procedure was implemented that multiplies the Otsu threshold of the target and template by a scale coefficient of the processed image and combines the result with mathematical morphology. The optimal threshold for dynamic low-contrast target images can then be obtained automatically. The recognition rate of dynamic targets is improved through reduced background noise and increased correlation information. A series of dynamic tank images moving at about 70 km/h was adopted as target images. Without any processing, the 1st frame of this series can correlate only with the 3rd frame. With the Otsu threshold, the 80th frame can be recognized; with the improved automatic threshold processing of the joint images, this number increases to 89 frames. Experimental results show that the improved automatic threshold processing has particular application value for the recognition of dynamic low-contrast targets.
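
    A sketch of the improved thresholding step described above: scale the Otsu threshold of the joint image by a coefficient, binarize, and clean the result with mathematical morphology. scikit-image is assumed available, and the coefficient value and synthetic low-contrast image are illustrative.

```python
# Scaled Otsu threshold plus a morphological opening for clean-up.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import opening

rng = np.random.default_rng(1)
joint_image = rng.random((256, 256))            # stand-in for the joint image
joint_image[100:140, 100:160] += 0.8            # low-contrast target region

scale = 1.15                                    # scale coefficient of the image
binary = joint_image > scale * threshold_otsu(joint_image)
cleaned = opening(binary, np.ones((3, 3), bool))  # mathematical morphology
print("foreground pixels:", int(cleaned.sum()))
```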

  5. The personal aircraft: Status and issues

    NASA Technical Reports Server (NTRS)

    Anders, Scott G.; Asbury, Scott C.; Brentner, Kenneth S.; Bushnell, Dennis M.; Glass, Christopher E.; Hodges, William T.; Morris, Shelby J., Jr.; Scott, Michael A.

    1994-01-01

    This paper summarizes the status of personal air transportation, with emphasis upon VTOL and converticar capability. The former obviates the need for airport operations for personal aircraft, whereas the latter provides both ground and air capability in the same vehicle. Fully automatic operation, ATC and navigation are stressed, along with consideration of acoustic, environmental and cost issues.

  6. Automation of a laboratory particleboard press

    Treesearch

    Robert L. Geimer; Gordon H. Stevens; Richard E. Kinney

    1982-01-01

    A manually operated particleboard press was converted to a fully automatic, programmable system with updated data collection capabilities. Improved control has permitted observation of very small changes in pressing variables, resulting in the development of a technique capable of reducing press times by 70 percent. Accurate control of the press is obtained through an...

  7. Avatars, Virtual Reality Technology, and the U.S. Military: Emerging Policy Issues

    DTIC Science & Technology

    2008-04-09

    called “Sentient Worldwide Simulation,” which will “mirror” real life and automatically follow real-world events in real time. Some virtual world...cities, with the final goal of creating a fully functioning virtual model of the entire world, which will be known as the Sentient Worldwide Simulation

  8. Development of German-English Machine Translation System. Final Technical Report.

    ERIC Educational Resources Information Center

    Lehmann, Winfred P.; Stachowitz, Rolf A.

    This report describes work on a pilot system for a fully automatic, high-quality translation of German scientific and technical text into English and gives the results of an experiment designed to show the system's capability to produce quality mechanical translation. The areas considered were: (1) grammar formalism, mainly involving the addition…

  9. Buried in the Warm, Warm Ground

    ERIC Educational Resources Information Center

    Ellis-Tipton, John

    2006-01-01

    Buntingsdale Infant School in Shropshire has installed an environmentally friendly heating system. The school's heating system is called a Ground Source Heat Pump (GSHP). Buntingsdale, a three-classroom infant school in a wooden demountable building, is one of the first schools in Britain to use this system. The system is fully automatic: it is…

  10. 76 FR 63635 - Extension of the Designation of Sudan for Temporary Protected Status and Automatic Extension of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-13

    ... Front. A number of issues have not been fully addressed, however, including growing poverty, economic... intercommunal violence caused civilian deaths, continued displacement of the population, and general instability... environmental and economic factors, have created one of the worst humanitarian crises in the world. Despite...

  11. 44 CFR 60.3 - Flood plain management criteria for flood-prone areas.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... improvements, that fully enclosed areas below the lowest floor that are usable solely for parking of vehicles... that they permit the automatic entry and exit of floodwaters. (6) Require that manufactured homes that... building standards. Such enclosed space shall be useable solely for parking of vehicles, building access...

  12. 44 CFR 60.3 - Flood plain management criteria for flood-prone areas.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... improvements, that fully enclosed areas below the lowest floor that are usable solely for parking of vehicles... that they permit the automatic entry and exit of floodwaters. (6) Require that manufactured homes that... building standards. Such enclosed space shall be useable solely for parking of vehicles, building access...

  13. 30 CFR 75.1909 - Nonpermissible diesel-powered equipment; design and performance requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... rail-mounted equipment, must be provided with a parking brake that holds the fully loaded equipment... work platforms must be provided with a means to ensure that the parking braking system is released... requirements of § 75.1908(a) must be provided with an automatic fire suppression system meeting the...

  14. 30 CFR 75.1909 - Nonpermissible diesel-powered equipment; design and performance requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... rail-mounted equipment, must be provided with a parking brake that holds the fully loaded equipment... work platforms must be provided with a means to ensure that the parking braking system is released... requirements of § 75.1908(a) must be provided with an automatic fire suppression system meeting the...

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gratama van Andel, H. A. F.; Venema, H. W.; Streekstra, G. J.

    For clear visualization of vessels in CT angiography (CTA) images of the head and neck using maximum intensity projection (MIP) or volume rendering (VR) bone has to be removed. In the past we presented a fully automatic method to mask the bone [matched mask bone elimination (MMBE)] for this purpose. A drawback is that vessels adjacent to bone may be partly masked as well. We propose a modification, multiscale MMBE, which reduces this problem by using images at two scales: a higher resolution than usual for image processing and a lower resolution to which the processed images are transformed for use in the diagnostic process. A higher in-plane resolution is obtained by the use of a sharper reconstruction kernel. The out-of-plane resolution is improved by deconvolution or by scanning with narrower collimation. The quality of the mask that is used to remove bone is improved by using images at both scales. After masking, the desired resolution for the normal clinical use of the images is obtained by blurring with Gaussian kernels of appropriate widths. Both methods (multiscale and original) were compared in a phantom study and with clinical CTA data sets. With the multiscale approach the width of the strip of soft tissue adjacent to the bone that is masked can be reduced from 1.0 to 0.2 mm without reducing the quality of the bone removal. The clinical examples show that vessels adjacent to bone are less affected and therefore better visible. Images processed with multiscale MMBE have a slightly higher noise level or slightly reduced resolution compared with images processed by the original method and the reconstruction and processing time is also somewhat increased. Nevertheless, multiscale MMBE offers a way to remove bone automatically from CT angiography images without affecting the integrity of the blood vessels. The overall image quality of MIP or VR images is substantially improved relative to images processed with the original MMBE method.

  16. Removal of bone in CT angiography by multiscale matched mask bone elimination.

    PubMed

    Gratama van Andel, H A F; Venema, H W; Streekstra, G J; van Straten, M; Majoie, C B L M; den Heeten, G J; Grimbergen, C A

    2007-10-01

    For clear visualization of vessels in CT angiography (CTA) images of the head and neck using maximum intensity projection (MIP) or volume rendering (VR) bone has to be removed. In the past we presented a fully automatic method to mask the bone [matched mask bone elimination (MMBE)] for this purpose. A drawback is that vessels adjacent to bone may be partly masked as well. We propose a modification, multiscale MMBE, which reduces this problem by using images at two scales: a higher resolution than usual for image processing and a lower resolution to which the processed images are transformed for use in the diagnostic process. A higher in-plane resolution is obtained by the use of a sharper reconstruction kernel. The out-of-plane resolution is improved by deconvolution or by scanning with narrower collimation. The quality of the mask that is used to remove bone is improved by using images at both scales. After masking, the desired resolution for the normal clinical use of the images is obtained by blurring with Gaussian kernels of appropriate widths. Both methods (multiscale and original) were compared in a phantom study and with clinical CTA data sets. With the multiscale approach the width of the strip of soft tissue adjacent to the bone that is masked can be reduced from 1.0 to 0.2 mm without reducing the quality of the bone removal. The clinical examples show that vessels adjacent to bone are less affected and therefore better visible. Images processed with multiscale MMBE have a slightly higher noise level or slightly reduced resolution compared with images processed by the original method and the reconstruction and processing time is also somewhat increased. Nevertheless, multiscale MMBE offers a way to remove bone automatically from CT angiography images without affecting the integrity of the blood vessels. The overall image quality of MIP or VR images is substantially improved relative to images processed with the original MMBE method.

  17. Automatic cerebrospinal fluid segmentation in non-contrast CT images using a 3D convolutional network

    NASA Astrophysics Data System (ADS)

    Patel, Ajay; van de Leemput, Sil C.; Prokop, Mathias; van Ginneken, Bram; Manniesing, Rashindra

    2017-03-01

    Segmentation of anatomical structures is fundamental in the development of computer aided diagnosis systems for cerebral pathologies. Manual annotations are laborious, time consuming and subject to human error and observer variability. Accurate quantification of cerebrospinal fluid (CSF) can be employed as a morphometric measure for diagnosis and patient outcome prediction. However, segmenting CSF in non-contrast CT images is complicated by low soft tissue contrast and image noise. In this paper we propose a state-of-the-art method using a multi-scale three-dimensional (3D) fully convolutional neural network (CNN) to automatically segment all CSF within the cranial cavity. The method is trained on a small dataset comprised of four manually annotated cerebral CT images. Quantitative evaluation of a separate test dataset of four images shows a mean Dice similarity coefficient of 0.87 +/- 0.01 and mean absolute volume difference of 4.77 +/- 2.70 %. The average prediction time was 68 seconds. Our method allows for fast and fully automated 3D segmentation of cerebral CSF in non-contrast CT, and shows promising results despite a limited amount of training data.
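
    For orientation, a toy multi-scale 3D convolutional network of the kind described can be sketched in a few lines of PyTorch; the layer sizes and two-branch layout are illustrative assumptions, not the architecture of the paper.

        import torch
        import torch.nn as nn

        class TinyCSFNet(nn.Module):
            def __init__(self):
                super().__init__()
                # Two parallel branches with different receptive fields ("multi-scale").
                self.fine = nn.Sequential(
                    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv3d(8, 8, 3, padding=1), nn.ReLU())
                self.coarse = nn.Sequential(
                    nn.Conv3d(1, 8, 5, padding=2), nn.ReLU(),
                    nn.Conv3d(8, 8, 5, padding=2), nn.ReLU())
                # 1x1x1 convolution fuses both scales into a per-voxel CSF probability.
                self.head = nn.Conv3d(16, 1, 1)

            def forward(self, x):
                f = torch.cat([self.fine(x), self.coarse(x)], dim=1)
                return torch.sigmoid(self.head(f))

        # One 64^3 CT patch; dimensions are (batch, channel, z, y, x).
        probs = TinyCSFNet()(torch.randn(1, 1, 64, 64, 64))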

  18. The origins of levels-of-processing effects in a conceptual test: evidence for automatic influences of memory from the process-dissociation procedure.

    PubMed

    Bergerbest, Dafna; Goshen-Gottstein, Yonatan

    2002-12-01

    In three experiments, we explored automatic influences of memory in a conceptual memory task, as affected by a levels-of-processing (LoP) manipulation. We also explored the origins of the LoP effect by examining whether the effect emerged only when participants in the shallow condition truncated the perceptual processing (the lexical-processing hypothesis) or even when the entire word was encoded in this condition (the conceptual-processing hypothesis). Using the process-dissociation procedure and an implicit association-generation task, we found that the deep encoding condition yielded higher estimates of automatic influences than the shallow condition. In support of the conceptual processing hypothesis, the LoP effect was found even when the shallow task did not lead to truncated processing of the lexical units. We suggest that encoding for meaning is a prerequisite for automatic processing on conceptual tests of memory.
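
    The process-dissociation estimates referred to here follow Jacoby's standard equations, which combine performance under inclusion and exclusion instructions: I = R + A(1 - R) and E = A(1 - R). A back-of-envelope sketch follows; the input proportions are invented for illustration.

        def process_dissociation(inclusion, exclusion):
            """Jacoby's equations: I = R + A*(1 - R), E = A*(1 - R); assumes R < 1."""
            recollection = inclusion - exclusion               # R = I - E
            automatic = exclusion / (1.0 - recollection)       # A = E / (1 - R)
            return recollection, automatic

        R, A = process_dissociation(inclusion=0.60, exclusion=0.25)
        print(f"controlled (R) = {R:.2f}, automatic (A) = {A:.2f}")  # 0.35, 0.38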

  19. Investigation of basic cognitive predictors of reading and spelling abilities in Tunisian third-grade primary school children.

    PubMed

    Batnini, Soulef; Uno, Akira

    2015-06-01

    This study first investigated the main cognitive abilities (phonological processing, visual cognition, automatization and receptive vocabulary) in predicting reading and spelling abilities in Arabic. Second, we compared good and poor readers and spellers to detect the characteristics of the cognitive predictors that contribute to identifying reading and spelling difficulties in Arabic-speaking children. A sample of 116 Tunisian third-grade children was tested on their abilities to read and spell, phonological processing, visual cognition, automatization and receptive vocabulary. For reading, phonological processing and automatization uniquely predicted Arabic word reading and paragraph reading abilities. Automatization uniquely predicted Arabic non-word reading ability. For spelling, phonological processing was a unique predictor of Arabic word spelling ability. Furthermore, poor readers had significantly lower scores on the phonological processing test and slower reading times on the automatization test compared with good readers. Additionally, poor spellers showed lower scores on the phonological processing test compared with good spellers. Visual cognitive processing and receptive vocabulary were not significant cognitive predictors of Arabic reading and spelling abilities for Tunisian third-grade children in this study. Our results are consistent with previous studies in alphabetic orthographies and demonstrate that phonological processing and automatization are the best cognitive predictors for detecting early literacy problems. We suggest that including phonological processing and automatization tasks in screening tests and intervention programs may help Tunisian children with poor literacy skills overcome reading and spelling difficulties in Arabic. Copyright © 2014 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  20. Film grain synthesis and its application to re-graining

    NASA Astrophysics Data System (ADS)

    Schallauer, Peter; Mörzinger, Roland

    2006-01-01

    Digital film restoration and special effects compositing require increasingly automatic procedures for movie re-graining. Missing or inhomogeneous grain decreases perceived quality. For the purpose of grain synthesis, an existing texture synthesis algorithm has been evaluated and optimized. We show that this algorithm can produce synthetic grain that is perceptually similar to a given grain template, has high spatial and temporal variation and can be applied to multi-spectral images. Furthermore, a re-graining application framework is proposed, which synthesizes artificial grain based on an input grain template and composites it with the original image content. Due to its modular approach this framework supports manual as well as automatic re-graining applications. Two example applications are presented, one for re-graining an entire movie and one for fully automatic re-graining of image regions produced by restoration algorithms. The low computational cost of the proposed algorithms allows application in industrial-grade software.

  1. Real-time control of focused ultrasound heating based on rapid MR thermometry.

    PubMed

    Vimeux, F C; De Zwart, J A; Palussiére, J; Fawaz, R; Delalande, C; Canioni, P; Grenier, N; Moonen, C T

    1999-03-01

    Real-time control of the heating procedure is essential for hyperthermia applications of focused ultrasound (FUS). The objective of this study is to demonstrate the feasibility of MRI-controlled FUS. An automatic control system was developed using a dedicated interface between the MR system control computer and the FUS wave generator. Two algorithms were used to regulate FUS power so as to maintain the focal-point temperature at a desired level. Automatic control of the FUS power level was demonstrated ex vivo at three target temperature levels (increases of 5 degrees C, 10 degrees C, and 30 degrees C above room temperature) during 30-minute hyperthermic periods. Preliminary in vivo results on rat leg muscle confirm that the necrosis estimate, calculated on-line during FUS sonication, allows prediction of tissue damage. The feasibility of fully automatic FUS control based on MRI thermometry has been demonstrated.
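
    A minimal sketch of this kind of closed loop, assuming a simple integrating proportional update of the power; the gains, timings and the toy "plant" are illustrative, not the two algorithms used in the study.

        def regulate_power(target_temp, read_temperature, set_power,
                           gain=0.5, p_min=0.0, p_max=100.0, steps=600):
            """Each step: read the MR-thermometry temperature, nudge FUS power toward target."""
            power = 0.0
            for _ in range(steps):
                error = target_temp - read_temperature()
                power = min(p_max, max(p_min, power + gain * error))
                set_power(power)

        # Toy plant: temperature rises with power and leaks back toward 20 C ambient.
        state = {"T": 20.0}
        def read_temperature():
            return state["T"]
        def set_power(p):
            state["T"] += 0.05 * p - 0.1 * (state["T"] - 20.0)

        regulate_power(30.0, read_temperature, set_power)
        print(round(state["T"], 1))  # settles near the 30 C target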

  2. A two-dimensional air-to-air combat game - Toward an air-combat advisory system

    NASA Technical Reports Server (NTRS)

    Neuman, Frank

    1987-01-01

    Air-to-air combat is modeled as a discrete differential game, and by constraining the game to searching for the best guidance laws from the sets of those considered for each opponent, feedback and outcome charts are obtained which can be used to turn one of the automatic opponents into an intelligent opponent against a human pilot. A one-on-one two-dimensional fully automatic, or manned versus automatic, air-to-air combat game has been designed which includes both attack and evasion alternatives for both aircraft. Guidance law selection occurs by flooding the initial-condition space with four simulated fights for each initial condition, depicting the various attack/evasion strategies for the two opponents, and recording the outcomes. For each initial condition, the minimax method from differential games is employed to determine the best choice from the available strategies.
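
    The minimax selection step can be illustrated with a toy outcome chart; the 4x4 payoff values below are invented, scored from the attacker's point of view.

        import numpy as np

        # outcomes[i, j]: outcome when the attacker flies guidance law i
        # and the evader flies guidance law j (positive favours the attacker).
        outcomes = np.array([[ 1, -1,  0,  2],
                             [ 0,  1, -2,  1],
                             [ 2,  0,  1, -1],
                             [-1,  2,  1,  0]])

        best_attacker = int(np.argmax(outcomes.min(axis=1)))  # maximin for the attacker
        best_evader = int(np.argmin(outcomes.max(axis=0)))    # minimax for the evader
        print(best_attacker, best_evader)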

  3. PRESBYOPIA OPTOMETRY METHOD BASED ON DIOPTER REGULATION AND CHARGE COUPLE DEVICE IMAGING TECHNOLOGY.

    PubMed

    Zhao, Q; Wu, X X; Zhou, J; Wang, X; Liu, R F; Gao, J

    2015-01-01

    With the development of photoelectric technology and single-chip microcomputer technology, objective optometry, also known as automatic optometry, is becoming precise. This paper proposes a presbyopia optometry method based on diopter regulation and Charge-Coupled Device (CCD) imaging technology and, at the same time, designs an optical path for the measurement system. The method projects a test figure onto the eye ground; the image reflected from the eye ground is then detected by the CCD, automatically identified by computer, and the far-point and near-point diopters are determined to calculate the lens parameters. This is a fully automatic objective optometry method that eliminates subjective factors of the tested subject. Furthermore, it can acquire the lens parameters of presbyopia accurately and quickly and can also be used to measure the lens parameters of hyperopia, myopia and astigmatism.

  4. Vessel extraction in retinal images using automatic thresholding and Gabor Wavelet.

    PubMed

    Ali, Aziah; Hussain, Aini; Wan Zaki, Wan Mimi Diyana

    2017-07-01

    Retinal image analysis has been widely used for early detection and diagnosis of multiple systemic diseases. Accurate vessel extraction in retinal images is a crucial step towards a fully automated diagnosis system. This work presents an efficient unsupervised method for extracting blood vessels from retinal images by combining the existing Gabor Wavelet (GW) method with automatic thresholding. The green channel is extracted from the color retinal image and used to produce a Gabor feature image using GW. Both the green channel image and the Gabor feature image undergo a vessel-enhancement step in order to highlight blood vessels. Next, the two vessel-enhanced images are transformed to binary images using automatic thresholding before being combined to produce the final vessel output. Combining the images results in a significant improvement in blood vessel extraction performance compared to using either image individually. The effectiveness of the proposed method was demonstrated via comparative analysis with existing methods, validated on the publicly available DRIVE database.
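
    A rough sketch of this pipeline, assuming an OpenCV implementation; the single Gabor orientation, the parameter values and the file name are illustrative (the paper combines a tuned filter bank).

        import cv2

        bgr = cv2.imread("retina.png")            # placeholder file name
        green = bgr[:, :, 1]                      # green channel has the best vessel contrast

        # One orientation of a Gabor filter; the full method uses several.
        kernel = cv2.getGaborKernel((15, 15), sigma=3.0, theta=0.0, lambd=8.0, gamma=0.5)
        gabor = cv2.filter2D(green, cv2.CV_8U, kernel)

        # Automatic (Otsu) thresholding of both images, then combination by OR.
        # Vessels are dark in the green channel, hence the inversion first.
        _, bin_green = cv2.threshold(cv2.bitwise_not(green), 0, 255,
                                     cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        _, bin_gabor = cv2.threshold(gabor, 0, 255,
                                     cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        vessels = cv2.bitwise_or(bin_green, bin_gabor)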

  5. A comparison of conscious and automatic memory processes for picture and word stimuli: a process dissociation analysis.

    PubMed

    McBride, Dawn M; Anne Dosher, Barbara

    2002-09-01

    Four experiments were conducted to evaluate explanations of picture superiority effects previously found for several tasks. In a process dissociation procedure (Jacoby, 1991) with word stem completion, picture fragment completion, and category production tasks, conscious and automatic memory processes were compared for studied pictures and words with an independent retrieval model and a generate-source model. The predictions of a transfer appropriate processing account of picture superiority were tested and validated in "process pure" latent measures of conscious and unconscious, or automatic and source, memory processes. Results from both model fits verified that pictures had a conceptual (conscious/source) processing advantage over words for all tasks. The effects of perceptual (automatic/word generation) compatibility depended on task type, with pictorial tasks favoring pictures and linguistic tasks favoring words. Results show support for an explanation of the picture superiority effect that involves an interaction of encoding and retrieval processes.

  6. Comparison of Landsat-8, ASTER and Sentinel 1 satellite remote sensing data in automatic lineaments extraction: A case study of Sidi Flah-Bouskour inlier, Moroccan Anti Atlas

    NASA Astrophysics Data System (ADS)

    Adiri, Zakaria; El Harti, Abderrazak; Jellouli, Amine; Lhissou, Rachid; Maacha, Lhou; Azmi, Mohamed; Zouhair, Mohamed; Bachaoui, El Mostafa

    2017-12-01

    Lineament mapping occupies an important place in several fields, including geology, hydrogeology and topography. With the help of remote sensing techniques, lineaments can be identified more reliably owing to strong advances in the data and methods used, which allow the usual classical procedures to be exceeded and more precise results to be achieved. The aim of this work is to compare ASTER, Landsat-8 and Sentinel 1 data in automatic lineament extraction. In addition to the image data, the approach makes use of the pre-existing geological map, a Digital Elevation Model (DEM) and ground truth. Through a fully automatic approach consisting of a combination of an edge detection algorithm and a line-linking algorithm, we found the optimal parameters for automatic lineament extraction in the study area. Subsequent comparison and validation of the obtained results showed that the Sentinel 1 data are more efficient in the restitution of lineaments, indicating the performance of radar data compared with optical data in this kind of study.
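
    As a simplified stand-in for the edge-detection plus line-linking chain named above, a Canny detector followed by a probabilistic Hough transform gives the flavour; all thresholds and the file name are illustrative.

        import cv2
        import numpy as np

        band = cv2.imread("sentinel1_band.png", cv2.IMREAD_GRAYSCALE)
        edges = cv2.Canny(band, 50, 150)

        # Link edge pixels into straight segments (candidate lineaments).
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                                minLineLength=40, maxLineGap=5)
        print(0 if lines is None else len(lines), "candidate lineaments")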

  7. Analysis of Technique to Extract Data from the Web for Improved Performance

    NASA Astrophysics Data System (ADS)

    Gupta, Neena; Singh, Manish

    2010-11-01

    The World Wide Web is rapidly guiding the world into an amazing electronic world where everyone can publish anything in electronic form and extract almost any information. Extraction of information from semi-structured or unstructured documents, such as web pages, is a useful yet complex task. Data extraction, which is important for many applications, automatically extracts the records from HTML files. Ontologies can achieve a high degree of accuracy in data extraction. We analyze a method for data extraction, OBDE (Ontology-Based Data Extraction), which automatically extracts query result records from the web with the help of agents. OBDE first constructs an ontology for a domain according to information matching between the query interfaces and query result pages from different web sites within the same domain. Then, the constructed domain ontology is used during data extraction to identify the query result section in a query result page and to align and label the data values in the extracted records. The ontology-assisted data extraction method is fully automatic and overcomes many of the deficiencies of current automatic data extraction methods.

  8. Markov random field based automatic image alignment for electron tomography.

    PubMed

    Amat, Fernando; Moussavi, Farshid; Comolli, Luis R; Elidan, Gal; Downing, Kenneth H; Horowitz, Mark

    2008-03-01

    We present a method for automatic full-precision alignment of the images in a tomographic tilt series. Full-precision automatic alignment of cryo electron microscopy images has remained a difficult challenge to date, due to the limited electron dose and low image contrast. These facts lead to poor signal to noise ratio (SNR) in the images, which causes automatic feature trackers to generate errors, even with high contrast gold particles as fiducial features. To enable fully automatic alignment for full-precision reconstructions, we frame the problem probabilistically as finding the most likely particle tracks given a set of noisy images, using contextual information to make the solution more robust to the noise in each image. To solve this maximum likelihood problem, we use Markov Random Fields (MRF) to establish the correspondence of features in alignment and robust optimization for projection model estimation. The resulting algorithm, called Robust Alignment and Projection Estimation for Tomographic Reconstruction, or RAPTOR, has not needed any manual intervention for the difficult datasets we have tried, and has provided sub-pixel alignment that is as good as the manual approach by an expert user. We are able to automatically map complete and partial marker trajectories and thus obtain highly accurate image alignment. Our method has been applied to challenging cryo electron tomographic datasets with low SNR from intact bacterial cells, as well as several plastic section and X-ray datasets.

  9. Manual vs. automatic capture management in implantable cardioverter defibrillators and cardiac resynchronization therapy defibrillators.

    PubMed

    Murgatroyd, Francis D; Helmling, Erhard; Lemke, Bernd; Eber, Bernd; Mewis, Christian; van der Meer-Hensgens, Judith; Chang, Yanping; Khalameizer, Vladimir; Katz, Amos

    2010-06-01

    The Secura ICD and Consulta CRT-D are the first defibrillators to have automatic right atrial (RA), right ventricular (RV), and left ventricular (LV) capture management (CM). Complete CM was evaluated in an implantable cardioverter defibrillator (ICD) population. Two prospective clinical studies were conducted in 28 centres in Europe and Israel. Automatic CM data were compared with manual threshold measurements, the CM applicability was determined, and adjustments to pacing outputs were analysed. In total, 160 patients [age 64.6 +/- 10.4 years, 77% male, 80 ICD and 80 cardiac resynchronization therapy defibrillator (CRT-D)] were included. The differences between automatic and manual measurements were small. Pacing output was high (>2.5 V) due to a raised RA threshold in seven patients (4.4%), a high RV threshold in nine (5.6%), and a high LV threshold in three (3.8%). All high-threshold detections and all automatic modulations of pacing output were adjudicated appropriate. Complete CM adjusts pacing output appropriately, permitting a reduction in office visits while potentially maximizing device longevity. The study was registered at ClinicalTrials.gov (identifiers: NCT00526227 and NCT00526162).

  10. Intended actions and unexpected outcomes: automatic and controlled processing in a rapid motor task

    PubMed Central

    Cheyne, Douglas O.; Ferrari, Paul; Cheyne, James A.

    2012-01-01

    Human action involves a combination of controlled and automatic behavior. These processes may interact in tasks requiring rapid response selection or inhibition, where temporal constraints preclude timely intervention by conscious, controlled processes over automatized prepotent responses. Such contexts tend to produce frequent errors, but also rapidly executed correct responses, both of which may sometimes be perceived as surprising, unintended, or “automatic”. In order to identify neural processes underlying these two aspects of cognitive control, we measured neuromagnetic brain activity in 12 right-handed subjects during manual responses to rapidly presented digits, with an infrequent target digit that required switching response hand (bimanual task) or response finger (unimanual task). Automaticity of responding was evidenced by response speeding (shorter response times) prior to both failed and fast correct switches. Consistent with this automaticity interpretation of fast correct switches, we observed bilateral motor preparation, as indexed by suppression of beta band (15–30 Hz) oscillations in motor cortex, prior to processing of the switch cue in the bimanual task. In contrast, right frontal theta activity (4–8 Hz) accompanying correct switch responses began after cue onset, suggesting that it reflected controlled inhibition of the default response. Further, this activity was reduced on fast correct switch trials suggesting a more automatic mode of inhibitory control. We also observed post-movement error-related negativity (ERN)-like responses and theta band increases in medial and anterior frontal regions that were significantly larger on error trials, and may reflect a combination of error and delayed inhibitory signals. We conclude that both automatic and controlled processes are engaged in parallel during rapid motor tasks, and that the relative strength and timing of these processes may underlie both optimal task performance and subjective experiences of automaticity or control. PMID:22912612

  11. Fully automatic GBM segmentation in the TCGA-GBM dataset: Prognosis and correlation with VASARI features.

    PubMed

    Rios Velazquez, Emmanuel; Meier, Raphael; Dunn, William D; Alexander, Brian; Wiest, Roland; Bauer, Stefan; Gutman, David A; Reyes, Mauricio; Aerts, Hugo J W L

    2015-11-18

    Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to sub-volumes manually defined by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. MRI sets of 109 GBM patients were downloaded from The Cancer Imaging Archive. GBM sub-compartments were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software. Spearman's correlation was used to evaluate the agreement with VASARI features. Prognostic significance was assessed using the C-index. Auto-segmented sub-volumes showed moderate to high agreement with manually delineated volumes (range (r): 0.4 - 0.86). Also, the automatic and manual volumes showed similar correlation with VASARI features (automatic r = 0.35, 0.43 and 0.36; manual r = 0.17, 0.67, 0.41, for contrast-enhancing, necrosis and edema, respectively). The auto-segmented contrast-enhancing volume and post-contrast abnormal volume showed the highest AUC (0.66, CI: 0.55-0.77 and 0.65, CI: 0.54-0.76), comparable to manually defined volumes (0.64, CI: 0.53-0.75 and 0.63, CI: 0.52-0.74, respectively). BraTumIA and manual tumor sub-compartments showed comparable performance in terms of prognosis and correlation with VASARI features. This method can enable more reproducible definition and quantification of imaging-based biomarkers and has potential in high-throughput medical imaging research.
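
    For reference, the two agreement measures used above are easy to reproduce; the masks and volume series below are synthetic stand-ins for the study data.

        import numpy as np
        from scipy.stats import spearmanr

        def dice(a, b):
            a, b = a.astype(bool), b.astype(bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        auto_mask = np.random.rand(64, 64, 16) > 0.7
        manual_mask = np.roll(auto_mask, 1, axis=0)  # shifted copy stands in for a manual mask
        print("Dice:", round(dice(auto_mask, manual_mask), 3))

        rho, p = spearmanr([10.2, 33.1, 8.4, 21.0], [11.0, 30.5, 9.1, 19.8])
        print("Spearman r =", round(rho, 2))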

  12. Fully automatic detection of deep white matter T1 hypointense lesions in multiple sclerosis

    NASA Astrophysics Data System (ADS)

    Spies, Lothar; Tewes, Anja; Suppa, Per; Opfer, Roland; Buchert, Ralph; Winkler, Gerhard; Raji, Alaleh

    2013-12-01

    A novel method is presented for fully automatic detection of candidate white matter (WM) T1 hypointense lesions in three-dimensional high-resolution T1-weighted magnetic resonance (MR) images. By definition, T1 hypointense lesions have similar intensity to gray matter (GM) and thus appear darker than the surrounding normal WM in T1-weighted images. The novel method uses a standard classification algorithm to partition T1-weighted images into GM, WM and cerebrospinal fluid (CSF). As a consequence, T1 hypointense lesions are assigned an increased GM probability by the standard classification algorithm. The GM component image of a patient is then tested voxel-by-voxel against the GM component images of a normative database of healthy individuals. Clusters (≥0.1 ml) of significantly increased GM density within a predefined mask of deep WM are defined as lesions. The performance of the algorithm was assessed on the voxel level by a simulation study. A maximum Dice similarity coefficient of 60% was found for a typical T1 lesion pattern with contrasts ranging from WM to cortical GM, indicating substantial agreement between ground truth and automatic detection. Retrospective application to 10 patients with multiple sclerosis demonstrated that 93 out of 96 T1 hypointense lesions were detected. On average, 3.6 false-positive T1 hypointense lesions per patient were found. The novel method is promising for supporting the detection of hypointense lesions in T1-weighted images, which warrants further evaluation in larger patient samples.
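
    The detection step lends itself to a compact sketch: a voxel-wise z-test of the patient's GM component image against the normative database, followed by a cluster-size criterion. Array shapes, the z cutoff and the voxel volume are illustrative assumptions.

        import numpy as np
        from scipy import ndimage

        norm = np.random.rand(20, 32, 32, 32)     # 20 healthy GM component images
        patient = np.random.rand(32, 32, 32)      # patient's GM component image

        z = (patient - norm.mean(axis=0)) / (norm.std(axis=0) + 1e-6)
        candidate = z > 3.0                        # significantly increased GM density

        labels, n = ndimage.label(candidate)       # connected clusters
        voxel_ml = 0.001                           # 1 mm^3 voxels -> 0.001 ml each
        sizes = ndimage.sum(candidate, labels, index=range(1, n + 1))
        lesions = [i + 1 for i, s in enumerate(sizes) if s * voxel_ml >= 0.1]
        print(len(lesions), "candidate T1 hypointense lesions")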

  13. Area- and energy-efficient CORDIC accelerators in deep sub-micron CMOS technologies

    NASA Astrophysics Data System (ADS)

    Vishnoi, U.; Noll, T. G.

    2012-09-01

    The COordinate Rotation DIgital Computer (CORDIC) algorithm is a well-known, versatile approach that is widely applied in today's SoCs, especially but not only in digital communications. Dedicated CORDIC blocks can be implemented in deep sub-micron CMOS technologies at very low area and energy costs and are attractive as hardware accelerators for Application Specific Instruction Processors (ASIPs), thereby overcoming the well-known energy vs. flexibility conflict. Optimizing Global Navigation Satellite System (GNSS) receivers to reduce hardware complexity is an important research topic at present. In such receivers CORDIC accelerators can be used for digital baseband processing (fixed-point) and in Position-Velocity-Time estimation (floating-point). A micro-architecture well suited to such applications is presented. This architecture is parameterized according to the wordlengths as well as the number of iterations and can easily be extended to floating-point data formats. Moreover, area can be traded for throughput by partially or even fully unrolling the iterations, whereby the degree of pipelining is organized with one CORDIC iteration per cycle. From the architectural description, the macro layout can be generated fully automatically using an in-house datapath generator tool. Since the adders and shifters play an important role in optimizing the CORDIC block, they must be carefully optimized for high area and energy efficiency in the underlying technology; for this purpose carry-select adders and logarithmic shifters have been chosen. Device dimensioning was automatically optimized with respect to dynamic and static power, area and performance using the in-house tool. The fully sequential CORDIC block for fixed-point digital baseband processing features a wordlength of 16 bits, requires 5232 transistors, is implemented in a 40-nm CMOS technology and occupies a silicon area of only 1560 μm². The maximum clock frequency from circuit simulation of the extracted netlist is 768 MHz under typical, and 463 MHz under worst-case technology and application corner conditions, respectively. Simulated dynamic power dissipation is 0.24 μW/MHz at 0.9 V; static power is 38 μW in the slow corner, 65 μW in the typical corner and 518 μW in the fast corner, respectively. The latter can be reduced by 43% in a 40-nm CMOS technology using 0.5 V reverse back-bias. These features are compared with the results from different design styles as well as with an implementation in 28-nm CMOS technology. It is interesting that in the latter case area scales as expected, but worst-case performance and energy do not scale well anymore.
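
    To make the algorithm itself concrete, here is a plain floating-point CORDIC rotation in circular mode; the 16 iterations loosely mirror the 16-bit flavour above, but this sketch ignores fixed-point wordlength effects entirely.

        import math

        def cordic_rotate(x, y, angle, n_iter=16):
            """Rotate (x, y) by `angle` radians with shift-and-add style iterations."""
            angles = [math.atan(2.0 ** -i) for i in range(n_iter)]
            # Pre-computed gain correction 1/K, with K = prod sqrt(1 + 2^-2i).
            k = 1.0
            for i in range(n_iter):
                k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
            z = angle
            for i in range(n_iter):
                d = 1.0 if z >= 0 else -1.0
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                z -= d * angles[i]
            return x * k, y * k

        print(cordic_rotate(1.0, 0.0, math.pi / 6))  # approx (cos 30, sin 30) degrees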

  14. Distributed automatic control of technological processes in conditions of weightlessness

    NASA Technical Reports Server (NTRS)

    Kukhtenko, A. I.; Merkulov, V. I.; Samoylenko, Y. I.; Ladikov-Royev, Y. P.

    1986-01-01

    Some problems associated with the automatic control of liquid metal and plasma systems under conditions of weightlessness are examined, with particular reference to the problem of stability of liquid equilibrium configurations. The theoretical fundamentals of automatic control of processes in electrically conducting continuous media are outlined, and means of using electromagnetic fields for simulating technological processes in a space environment are discussed.

  15. Multi-scale curvature for automated identification of glaciated mountain landscapes

    NASA Astrophysics Data System (ADS)

    Prasicek, Günther; Otto, Jan-Christoph; Montgomery, David R.; Schrott, Lothar

    2014-03-01

    Erosion by glacial and fluvial processes shapes mountain landscapes in a long-recognized and characteristic way. Upland valleys incised by fluvial processes typically have a V-shaped cross-section with uniform and moderately steep slopes, whereas glacial valleys tend to have a U-shaped profile with a changing slope gradient. We present a novel regional approach to automatically differentiate between fluvial and glacial mountain landscapes based on the relation of multi-scale curvature and drainage area. Sample catchments are delineated and multiple moving window sizes are used to calculate per-cell curvature over a variety of scales ranging from the vicinity of the flow path at the valley bottom to catchment sections fully including valley sides. Single-scale curvature can take similar values for glaciated and non-glaciated catchments but a comparison of multi-scale curvature leads to different results according to the typical cross-sectional shapes. To adapt these differences for automated classification of mountain landscapes into areas with V- and U-shaped valleys, curvature values are correlated with drainage area and a new and simple morphometric parameter, the Difference of Minimum Curvature (DMC), is developed. At three study sites in the western United States the DMC thresholds determined from catchment analysis are used to automatically identify 5 × 5 km quadrats of glaciated and non-glaciated landscapes and the distinctions are validated by field-based geological and geomorphological maps. Our results demonstrate that DMC is a good predictor of glacial imprint, allowing automated delineation of glacially and fluvially incised mountain landscapes.
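
    A coarse numerical sketch of the multi-scale idea follows: a curvature proxy (here the Laplacian of a Gaussian-smoothed grid) evaluated at a small and a large scale, with the per-cell difference taken. This only illustrates the scheme; it is not the curvature definition or the DMC computation of the paper.

        import numpy as np
        from scipy import ndimage

        dem = np.random.rand(256, 256) * 100.0     # stand-in elevation grid

        def curvature_proxy(z, sigma):
            return ndimage.gaussian_laplace(z, sigma=sigma)

        dmc_like = curvature_proxy(dem, 2.0) - curvature_proxy(dem, 12.0)
        # Strongly scale-dependent curvature hints at U-shaped (glacial) cross-sections.
        print(dmc_like.mean(), dmc_like.std())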

  16. Preliminary Evidence for an Automatic Link between Sex and Power among Men Who Molest Children

    ERIC Educational Resources Information Center

    Kamphuis, Jan H.; De Ruiter, Corine; Janssen, Bas; Spiering, Mark

    2005-01-01

    Understanding critical motivational processes of sexual offenders may ultimately provide important clues to more effective treatments. Implicit, automatic cognitive processes have received minimal attention; however, a lexical decision experiment revealed automatic links between the concepts of power and sex among participants who self-reported…

  17. Event-Related Potential Evidence that Automatic Recollection Can Be Voluntarily Avoided

    ERIC Educational Resources Information Center

    Bergstrom, Zara M.; de Fockert, Jan; Richardson-Klavehn, Alan

    2009-01-01

    Voluntary control processes can be recruited to facilitate recollection in situations where a retrieval cue fails to automatically bring to mind a desired episodic memory. We investigated whether voluntary control processes can also stop recollection of unwanted memories that would otherwise have been automatically recollected. Participants were…

  18. Automatic Co-Registration of QuickBird Data for Change Detection Applications

    NASA Technical Reports Server (NTRS)

    Bryant, Nevin A.; Logan, Thomas L.; Zobrist, Albert L.

    2006-01-01

    This viewgraph presentation reviews the use of the Automatic Fusion of Image Data System (AFIDS) for automatic co-registration of QuickBird data to ascertain whether changes have occurred between images. The process is outlined, and views of Iraq and Los Angeles are shown to illustrate it.

  19. The algorithm for automatic detection of the calibration object

    NASA Astrophysics Data System (ADS)

    Artem, Kruglov; Irina, Ugfeld

    2017-06-01

    The problem of automatic image calibration is considered in this paper. The most challenging task in automatic calibration is proper detection of the calibration object. Solving this problem required the application of digital image processing methods and algorithms such as morphology, filtering, edge detection and shape approximation. The step-by-step development of the algorithm and its adaptation to the specific conditions of log cuts in the image background is presented. Testing of the automatic calibration module was carried out under the conditions of the production process of a logging enterprise. Across the tests, the average rate of automatic isolation of the calibration object was 86.1%, with no type 1 errors. The algorithm was implemented in the automatic calibration module within mobile software for log deck volume measurement.
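
    A compact stand-in for the pipeline named above (filtering, edge detection, morphology, shape approximation), sketched with OpenCV; every threshold and the file name are illustrative.

        import cv2

        gray = cv2.imread("log_deck.png", cv2.IMREAD_GRAYSCALE)
        blur = cv2.GaussianBlur(gray, (5, 5), 0)
        edges = cv2.Canny(blur, 60, 180)
        edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE,
                                 cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))

        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
            # Keep large four-corner polygons as calibration-object candidates.
            if len(approx) == 4 and cv2.contourArea(approx) > 500:
                print("calibration candidate at", cv2.boundingRect(approx))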

  20. A hardware experimental platform for neural circuits in the auditory cortex

    NASA Astrophysics Data System (ADS)

    Rodellar-Biarge, Victoria; García-Dominguez, Pablo; Ruiz-Rizaldos, Yago; Gómez-Vilda, Pedro

    2011-05-01

    Speech processing in the human brain is a very complex process that is far from fully understood, although much progress has been made recently. Neuromorphic Speech Processing is a new research orientation within the bio-inspired systems approach that seeks solutions for the automatic treatment of specific problems (recognition, synthesis, segmentation, diarization, etc.) which cannot be adequately solved using classical algorithms. In this paper a neuromorphic speech processing architecture is presented. The systematic bottom-up synthesis of layered structures reproduces the dynamic feature detection of speech by plausible neural circuits that work as interpretation centres located in the Auditory Cortex. The elementary model is based on Hebbian neuron-like units. For the computation of the architecture a flexible framework is proposed in the Matlab®/Simulink®/HDL environment, which allows models to be built at different description styles, complexity and implementation levels. It provides a flexible platform for experimenting with the influence of the number of neurons and interconnections on the precision of the results and on performance. Experimentation with different architecture configurations may help both to better understand how neural circuits in the brain may work and to show how speech processing can benefit from this understanding.

  1. Detection and quantification of flow consistency in business process models.

    PubMed

    Burattin, Andrea; Bernstein, Vered; Neurauter, Manuel; Soffer, Pnina; Weber, Barbara

    2018-01-01

    Business process models abstract complex business processes by representing them as graphical models. Their layout, as determined by the modeler, may have an effect when these models are used. However, this effect is currently not fully understood. In order to systematically study this effect, a basic set of measurable key visual features is proposed, depicting the layout properties that are meaningful to the human user. The aim of this research is thus twofold: first, to empirically identify key visual features of business process models which are perceived as meaningful to the user and second, to show how such features can be quantified into computational metrics, which are applicable to business process models. We focus on one particular feature, consistency of flow direction, and show the challenges that arise when transforming it into a precise metric. We propose three different metrics addressing these challenges, each following a different view of flow consistency. We then report the results of an empirical evaluation, which indicates which metric is more effective in predicting the human perception of this feature. Moreover, two other automatic evaluations describing the performance and the computational capabilities of our metrics are reported as well.
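
    One possible reading of such a flow-consistency metric, sketched from the description above; the edge list, the layout and the metric variant are illustrative assumptions, not one of the paper's three metrics.

        def flow_consistency(edges, positions):
            """edges: (src, dst) pairs; positions: node -> (x, y) layout coordinates."""
            rightward = sum(1 for s, d in edges if positions[d][0] > positions[s][0])
            # Share of edges agreeing with the dominant horizontal direction.
            return max(rightward, len(edges) - rightward) / len(edges)

        pos = {"a": (0, 0), "b": (1, 0), "c": (2, 0), "d": (1, 1)}
        print(flow_consistency([("a", "b"), ("b", "c"), ("d", "b")], pos))  # 2/3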

  2. Radiation Planning Assistant - A Streamlined, Fully Automated Radiotherapy Treatment Planning System

    PubMed Central

    Court, Laurence E.; Kisling, Kelly; McCarroll, Rachel; Zhang, Lifei; Yang, Jinzhong; Simonds, Hannah; du Toit, Monique; Trauernicht, Chris; Burger, Hester; Parkes, Jeannette; Mejia, Mike; Bojador, Maureen; Balter, Peter; Branco, Daniela; Steinmann, Angela; Baltz, Garrett; Gay, Skylar; Anderson, Brian; Cardenas, Carlos; Jhingran, Anuja; Shaitelman, Simona; Bogler, Oliver; Schmeller, Kathleen; Followill, David; Howell, Rebecca; Nelson, Christopher; Peterson, Christine; Beadle, Beth

    2018-01-01

    The Radiation Planning Assistant (RPA) is a system developed for the fully automated creation of radiotherapy treatment plans, including volumetric-modulated arc therapy (VMAT) plans for patients with head/neck cancer and 4-field box plans for patients with cervical cancer. It is a combination of specially developed in-house software that uses an application programming interface to communicate with a commercial radiotherapy treatment planning system. It also interfaces with a commercial secondary dose verification software. The necessary inputs to the system are a Treatment Plan Order, approved by the radiation oncologist, and a simulation computed tomography (CT) image, approved by the radiographer. The RPA then generates a complete radiotherapy treatment plan. For the cervical cancer treatment plans, no additional user intervention is necessary until the plan is complete. For head/neck treatment plans, after the normal tissue and some of the target structures are automatically delineated on the CT image, the radiation oncologist must review the contours, making edits if necessary. They also delineate the gross tumor volume. The RPA then completes the treatment planning process, creating a VMAT plan. Finally, the completed plan must be reviewed by qualified clinical staff. PMID:29708544

  3. Processing of Intentional and Automatic Number Magnitudes in Children Born Prematurely: Evidence From fMRI

    PubMed Central

    Klein, Elise; Moeller, Korbinian; Kiechl-Kohlendorfer, Ursula; Kremser, Christian; Starke, Marc; Cohen Kadosh, Roi; Pupp-Peglow, Ulrike; Schocke, Michael; Kaufmann, Liane

    2014-01-01

    This study examined the neural correlates of intentional and automatic number processing (indexed by number comparison and physical Stroop task, respectively) in 6- and 7-year-old children born prematurely. Behavioral results revealed significant numerical distance and size congruity effects. Imaging results disclosed (1) largely overlapping fronto-parietal activation for intentional and automatic number processing, (2) a frontal to parietal shift of activation upon considering the risk factors gestational age and birth weight, and (3) a task-specific link between math proficiency and functional magnetic resonance imaging (fMRI) signal within distinct regions of the parietal lobes—indicating commonalities but also specificities of intentional and automatic number processing. PMID:25090014

  4. Automatically processed alpha-track radon monitor

    DOEpatents

    Langner, Jr., G. Harold

    1993-01-01

    An automatically processed alpha-track radon monitor is provided which includes a housing having an aperture allowing radon entry, and a filter that excludes the entry of radon daughters into the housing. A flexible track registration material is located within the housing that records alpha-particle emissions from the decay of radon and radon daughters inside the housing. The flexible track registration material is capable of being spliced such that the registration material from a plurality of monitors can be spliced into a single strip to facilitate automatic processing of the registration material from the plurality of monitors. A process for the automatic counting of radon registered by a radon monitor is also provided.

  5. Automatically processed alpha-track radon monitor

    DOEpatents

    Langner, G.H. Jr.

    1993-01-12

    An automatically processed alpha-track radon monitor is provided which includes a housing having an aperture allowing radon entry, and a filter that excludes the entry of radon daughters into the housing. A flexible track registration material is located within the housing that records alpha-particle emissions from the decay of radon and radon daughters inside the housing. The flexible track registration material is capable of being spliced such that the registration material from a plurality of monitors can be spliced into a single strip to facilitate automatic processing of the registration material from the plurality of monitors. A process for the automatic counting of radon registered by a radon monitor is also provided.

  6. Fully-integrated framework for the segmentation and registration of the spinal cord white and gray matter.

    PubMed

    Dupont, Sara M; De Leener, Benjamin; Taso, Manuel; Le Troter, Arnaud; Nadeau, Sylvie; Stikov, Nikola; Callot, Virginie; Cohen-Adad, Julien

    2017-04-15

    The spinal cord white and gray matter can be affected by various pathologies such as multiple sclerosis, amyotrophic lateral sclerosis or trauma. Being able to precisely segment the white and gray matter could help with MR image analysis and hence be useful in further understanding these pathologies, helping with diagnosis/prognosis and supporting drug development. To date, white/gray matter segmentation has mostly been done manually, which is time consuming, induces a rater-related bias and prevents large-scale multi-center studies. Recently, a few methods have been proposed to automatically segment the spinal cord white and gray matter. However, no single method exists that combines the following criteria: (i) fully automatic, (ii) works on various MRI contrasts, (iii) robust towards pathology and (iv) freely available and open source. In this study we propose a multi-atlas based method for the segmentation of the spinal cord white and gray matter that addresses the previous limitations. Moreover, to study the spinal cord morphology, atlas-based approaches are increasingly used. These approaches rely on the registration of a spinal cord template to an MR image; however, the registration usually does not take into account the spinal cord internal structure and thus lacks accuracy. In this study, we propose a new template registration framework that integrates the white and gray matter segmentation to account for the specific gray matter shape of each individual subject. Validation of segmentation was performed in 24 healthy subjects using T2*-weighted images, in 8 healthy subjects using diffusion-weighted images (exhibiting inverted white-to-gray matter contrast compared with T2*-weighted), and in 5 patients with spinal cord injury. The template registration was validated in 24 subjects using T2*-weighted data. Results of automatic segmentation on T2*-weighted images were in close correspondence with the manual segmentation (Dice coefficient in the white/gray matter of 0.91/0.71, respectively). Similarly, good results were obtained in data with inverted contrast (diffusion-weighted images) and in patients. When compared to the classical template registration framework, the proposed framework that accounts for gray matter shape significantly improved the quality of the registration (comparing Dice coefficient in gray matter: p = 9.5×10^-6). While further validation is needed to show the benefits of the new registration framework in large cohorts and in a variety of patients, this study provides a fully-integrated tool for quantitative assessment of white/gray matter morphometry and template-based analysis. All the proposed methods are implemented in the Spinal Cord Toolbox (SCT), an open-source software for processing spinal cord multi-parametric MRI data. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Improving GPR Surveys Productivity by Array Technology and Fully Automated Processing

    NASA Astrophysics Data System (ADS)

    Morello, Marco; Ercoli, Emanuele; Mazzucchelli, Paolo; Cottino, Edoardo

    2016-04-01

    The realization of network infrastructures with lower environmental impact and the tendency to use digging technologies that are less invasive in terms of time and space of road occupation and restoration play a key role in the development of communication networks. However, pre-existing buried utilities must be detected and located in the subsurface to exploit the high productivity of modern digging apparatus. According to SUE quality level B+, both position and depth of subsurface utilities must be accurately estimated, demanding 3D GPR surveys. In fact, the advantages of 3D GPR acquisitions (obtained either by multiple 2D recordings or by an antenna array) over 2D acquisitions are well known. Nonetheless, the amount of data acquired in such 3D surveys does not usually allow processing and interpretation to be completed directly in the field and in real time, thus limiting the overall efficiency of the GPR acquisition. As an example, the "low-impact mini-trench" technique (addressed in ITU - International Telecommunication Union - recommendation L.83) requires that non-destructive mapping of buried services enhance its productivity to match the improvements of new digging equipment. Nowadays multi-antenna and multi-pass GPR acquisitions demand new processing techniques that can obtain high-quality subsurface images, taking full advantage of 3D data: the development of a fully automated, real-time 3D GPR processing system plays a key role in the overall profitability of optical network deployment. Furthermore, currently available computing power suggests the feasibility of processing schemes that incorporate better focusing algorithms. A novel processing scheme, whose goal is the automated processing and detection of buried targets and which can be applied in real time to 3D GPR array systems, has been developed and fruitfully tested with two different GPR arrays (16 antennas, 900 MHz central frequency, and 34 antennas, 600 MHz central frequency). The proposed processing scheme takes advantage of 3D data multiplicity by continuous real-time data focusing. Pre-stack reflection angle gathers G(x, θ; v) are computed at nv different velocities (by means of Kirchhoff depth-migration kernels, which can naturally cope with any acquisition pattern and handle irregular sampling issues). It must be noted that the analysis of pre-stack reflection angle gathers plays a key role in automated detection: targets are identified and the best local propagation velocities are recovered through a correlation estimate computed for all the nv reflection angle gathers. Indeed, the data redundancy of 3D GPR acquisitions greatly improves the reliability of the proposed automatic detection. The goal of real-time automated processing has been pursued without the need for specific high-performance processing hardware (a simple laptop is sufficient). Moreover, the automation of the entire surveying process allows high-quality and repeatable results to be obtained without the need for skilled interpreters. The proposed acquisition procedure has been extensively tested: more than 100 km of acquired data prove the feasibility of the proposed approach.

  8. "Ask Ernö": a self-learning tool for assignment and prediction of nuclear magnetic resonance spectra.

    PubMed

    Castillo, Andrés M; Bernal, Andrés; Dieden, Reiner; Patiny, Luc; Wist, Julien

    2016-01-01

    We present "Ask Ernö", a self-learning system for the automatic analysis of NMR spectra, consisting of integrated chemical shift assignment and prediction tools. The output of the automatic assignment component initializes and improves a database of assigned protons that is used by the chemical shift predictor. In turn, the predictions provided by the latter facilitate improvement of the assignment process. Iteration on these steps allows Ask Ernö to improve its ability to assign and predict spectra without any prior knowledge or assistance from human experts. This concept was tested by training such a system with a dataset of 2341 molecules and their (1)H-NMR spectra, and evaluating the accuracy of chemical shift predictions on a test set of 298 partially assigned molecules (2007 assigned protons). After 10 iterations, Ask Ernö was able to decrease its prediction error by 17 %, reaching an average error of 0.265 ppm. Over 60 % of the test chemical shifts were predicted within 0.2 ppm, while only 5 % still presented a prediction error of more than 1 ppm. Ask Ernö introduces an innovative approach to automatic NMR analysis that constantly learns and improves when provided with new data. Furthermore, it completely avoids the need for manually assigned spectra. This system has the potential to be turned into a fully autonomous tool able to compete with the best alternatives currently available.Graphical abstractSelf-learning loop. Any progress in the prediction (forward problem) will improve the assignment ability (reverse problem) and vice versa.

  9. Computer-based planning of optimal donor sites for autologous osseous grafts

    NASA Astrophysics Data System (ADS)

    Krol, Zdzislaw; Chlebiej, Michal; Zerfass, Peter; Zeilhofer, Hans-Florian U.; Sader, Robert; Mikolajczak, Pawel; Keeve, Erwin

    2002-05-01

    Bone graft surgery is often necessary for reconstruction of craniofacial defects after trauma, tumor, infection or congenital malformation. In this operative technique the removed or missing bone segment is filled with a bone graft. The mainstay of craniofacial reconstruction rests with the replacement of the defective bone by autogenous bone grafts. To achieve sufficient incorporation of the autograft into the host bone, precise planning and simulation of the surgical intervention is required. The major problem is to determine as accurately as possible the donor site where the graft should be dissected from and to define the shape of the desired transplant. A computer-aided method for semi-automatic selection of optimal donor sites for autografts in craniofacial reconstructive surgery has been developed. The non-automatic step of graft design and constraint setting is followed by a fully automatic procedure to find the best fitting position. In extension to preceding work, a new optimization approach based on the Levenberg-Marquardt method has been implemented and embedded into our computer-based surgical planning system. This new technique enables, once the pre-processing step has been performed, selection of the optimal donor site in less than one minute. The method has been applied during the surgical planning step in more than 20 cases. The postoperative observations have shown that functional results, such as speech and chewing ability as well as restoration of bony continuity, were clearly better compared to conventionally planned operations. Moreover, in most cases the duration of the surgical interventions has been distinctly reduced.
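
    A Levenberg-Marquardt fit of this kind can be sketched in a few lines: find the rigid transform (rotation plus translation) that best maps a candidate graft surface onto the defect geometry. The example below uses SciPy's least_squares with method="lm"; the point sets and the six-parameter pose encoding are illustrative assumptions, not the planning system's actual formulation.

    ```python
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def residuals(params, template, target):
        """Residuals of a rigid fit: 3 Euler angles followed by 3 translations."""
        R = Rotation.from_euler("xyz", params[:3]).as_matrix()
        moved = template @ R.T + params[3:]
        return (moved - target).ravel()

    rng = np.random.default_rng(1)
    template = rng.normal(size=(50, 3))            # candidate graft surface points
    true_R = Rotation.from_euler("xyz", [0.1, -0.2, 0.05]).as_matrix()
    target = template @ true_R.T + np.array([2.0, -1.0, 0.5])

    fit = least_squares(residuals, x0=np.zeros(6), args=(template, target),
                        method="lm")               # Levenberg-Marquardt
    print(fit.x)                                   # recovered angles and translation
    ```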

  10. Intra-operative adjustment of standard planes in C-arm CT image data.

    PubMed

    Brehler, Michael; Görres, Joseph; Franke, Jochen; Barth, Karl; Vetter, Sven Y; Grützner, Paul A; Meinzer, Hans-Peter; Wolf, Ivo; Nabers, Diana

    2016-03-01

    With the help of an intra-operative mobile C-arm CT, medical interventions can be verified and corrected, avoiding the need for a post-operative CT and a second intervention. An exact adjustment of standard plane positions is necessary for the best possible assessment of the anatomical regions of interest, but the mobility of the C-arm necessitates a time-consuming manual adjustment. In this article, we present an automatic plane adjustment, using calcaneal fractures as an example. We developed two feature detection methods (2D and pseudo-3D) based on SURF key points and also transferred the SURF approach to 3D. Combined with an atlas-based registration, our algorithm adjusts the standard planes of the calcaneal C-arm images automatically. The robustness of the algorithms is evaluated using a clinical data set. Additionally, we tested the algorithm's performance for two registration approaches, two resolutions of C-arm images and two methods for metal artifact reduction. For the feature extraction, the novel 3D-SURF approach performs best. As expected, a higher resolution ([Formula: see text] voxel) also leads to more robust feature points and is therefore slightly better than the [Formula: see text] voxel images (standard setting of the device). Our comparison of two different artifact reduction methods and the complete removal of metal in the images shows that our approach is highly robust against artifacts and against the number and position of metal implants. By introducing our fast algorithmic processing pipeline, we have taken the first steps toward a fully automatic assistance system for the assessment of C-arm CT images.
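
    As a rough illustration of the feature-detection stage, the snippet below extracts keypoints from a synthetic slice with OpenCV. ORB is used as a freely licensed stand-in, since SURF itself lives in the patent-encumbered opencv-contrib module (cv2.xfeatures2d.SURF_create); the test image and parameters are invented.

    ```python
    import cv2
    import numpy as np

    # Synthetic test image standing in for a C-arm slice
    img = np.zeros((256, 256), dtype=np.uint8)
    cv2.circle(img, (128, 128), 60, 255, 3)
    cv2.rectangle(img, (40, 40), (90, 90), 180, -1)

    # ORB keypoints as a stand-in for the SURF features used in the paper
    orb = cv2.ORB_create(nfeatures=200)
    keypoints, descriptors = orb.detectAndCompute(img, None)
    print(len(keypoints), "keypoints")

    # Matching such descriptors between atlas and acquisition (e.g. with
    # cv2.BFMatcher plus a ratio test) would drive the registration that
    # transfers the standard plane positions onto the new volume.
    ```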

  11. Are the Literacy Difficulties That Characterize Developmental Dyslexia Associated with a Failure to Integrate Letters and Speech Sounds?

    ERIC Educational Resources Information Center

    Nash, Hannah M.; Gooch, Debbie; Hulme, Charles; Mahajan, Yatin; McArthur, Genevieve; Steinmetzger, Kurt; Snowling, Margaret J.

    2017-01-01

    The "automatic letter-sound integration hypothesis" (Blomert, [Blomert, L., 2011]) proposes that dyslexia results from a failure to fully integrate letters and speech sounds into automated audio-visual objects. We tested this hypothesis in a sample of English-speaking children with dyslexic difficulties (N = 13) and samples of…

  12. Computer-assisted liver graft steatosis assessment via learning-based texture analysis.

    PubMed

    Moccia, Sara; Mattos, Leonardo S; Patrini, Ilaria; Ruperti, Michela; Poté, Nicolas; Dondero, Federica; Cauchy, François; Sepulveda, Ailton; Soubrane, Olivier; De Momi, Elena; Diaspro, Alberto; Cesaretti, Manuela

    2018-05-23

    Fast and accurate graft hepatic steatosis (HS) assessment is of primary importance for lowering liver dysfunction risks after transplantation. Histopathological analysis of biopsied liver is the gold standard for assessing HS, despite being invasive and time-consuming. Due to the short time available between liver procurement and transplantation, surgeons perform HS assessment through clinical evaluation (medical history, blood tests) and visual analysis of liver texture. Although visual analysis is recognized as challenging in the clinical literature, little effort has been invested in developing computer-assisted solutions for HS assessment. The objective of this paper is to investigate the automatic analysis of liver texture with machine learning algorithms to automate the HS assessment process and support the surgeon's decision process. Forty RGB images of forty different donors were analyzed. The images were captured with an RGB smartphone camera in the operating room (OR). Twenty images refer to livers that were accepted and 20 to discarded livers. Fifteen randomly selected liver patches were extracted from each image. Patch size was [Formula: see text]. This way, a balanced dataset of 600 patches was obtained. Intensity-based features (INT), histogram of local binary pattern ([Formula: see text]), and gray-level co-occurrence matrix ([Formula: see text]) were investigated. Blood-sample features (Blo) were included in the analysis, too. Supervised and semisupervised learning approaches were investigated for feature classification. Leave-one-patient-out cross-validation was performed to estimate the classification performance. With the best-performing feature set ([Formula: see text]) and semisupervised learning, the achieved classification sensitivity, specificity, and accuracy were 95, 81, and 88%, respectively. This research represents the first attempt to use machine learning and automatic texture analysis of RGB images from ubiquitous smartphone cameras for the task of graft HS assessment. The results suggest that this is a promising strategy for developing a fully automatic solution to assist surgeons in HS assessment inside the OR.
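
    A minimal sketch of this feature-plus-validation pipeline is given below: LBP histograms and GLCM statistics per patch, classified with an SVM under leave-one-patient-out cross-validation (LeaveOneGroupOut in scikit-learn). The patches and labels are random placeholders, and the feature settings are illustrative rather than those tuned in the paper (function names assume scikit-image >= 0.19).

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
    from sklearn.svm import SVC

    def patch_features(patch):
        """Texture features of one grayscale liver patch (uint8 2D array)."""
        lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256, normed=True)
        glcm_feats = [graycoprops(glcm, p)[0, 0] for p in ("contrast", "homogeneity")]
        return np.concatenate([hist, glcm_feats])

    rng = np.random.default_rng(2)
    patches = rng.integers(0, 256, size=(120, 64, 64), dtype=np.uint8)  # toy patches
    patients = np.repeat(np.arange(8), 15)        # 15 patches per patient
    labels = patients % 2                         # toy accept/discard labels

    X = np.array([patch_features(p) for p in patches])
    scores = cross_val_score(SVC(), X, labels, groups=patients, cv=LeaveOneGroupOut())
    print(scores.mean())
    ```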

  13. Development of EnergyPlus Utility to Batch Simulate Building Energy Performance on a National Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valencia, Jayson F.; Dirks, James A.

    2008-08-29

    EnergyPlus is a simulation program that requires a large number of details to fully define and model a building. Hundreds or even thousands of lines in a text file are needed to run the EnergyPlus simulation, depending on the size of the building. Manually creating these files is a time-consuming process that would not be practical when trying to create input files for the thousands of buildings needed to simulate national building energy performance. To streamline the process of creating the input files for EnergyPlus, two methods were created to work in conjunction with the National Renewable Energy Laboratory (NREL) Preprocessor; this reduced the hundreds of inputs needed to define a building in EnergyPlus to a small set of high-level parameters. The first method uses Java routines to perform all of the preprocessing on a Windows machine, while the second method carries out all of the preprocessing on the Linux cluster by using an in-house utility called Generalized Parametrics (GPARM). A comma-delimited (CSV) input file is created to define the high-level parameters for any number of buildings. Each method then takes this CSV file and uses the data entered for each parameter to populate an extensible markup language (XML) file used by the NREL Preprocessor to automatically prepare EnergyPlus input data files (idf) using automatic building routines and macro templates. Using the Linux utility "make", the idf files can then be automatically run through the Linux cluster, and the desired data from each building can be aggregated into one table to be analyzed. Creating a large number of EnergyPlus input files makes it possible to batch simulate building energy performance and scale the results to national energy consumption estimates.
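
    The batch idea reduces to iterating over the CSV of high-level parameters and shelling out once per building, as sketched below. The preprocess_building command and CSV column names are hypothetical stand-ins for the NREL Preprocessor step, and the energyplus --output-directory flag reflects the modern command-line interface, which may differ from the tooling described in this record.

    ```python
    import csv
    import subprocess
    from pathlib import Path

    # Hypothetical file and column names; the real pipeline feeds the NREL
    # Preprocessor, which expands each row into a full EnergyPlus idf file.
    with open("buildings.csv", newline="") as fh:
        for row in csv.DictReader(fh):
            idf = Path("idfs") / f"{row['building_id']}.idf"
            # 1) expand high-level parameters into an idf (preprocessor stand-in)
            subprocess.run(["preprocess_building", row["building_id"],
                            row["floor_area"], row["vintage"], str(idf)], check=True)
            # 2) run the simulation; results land in one directory per building
            subprocess.run(["energyplus", "--output-directory",
                            f"out/{row['building_id']}", str(idf)], check=True)
    ```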

  14. Mono and multi-objective optimization techniques applied to a large range of industrial test cases using Metamodel assisted Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Fourment, Lionel; Ducloux, Richard; Marie, Stéphane; Ejday, Mohsen; Monnereau, Dominique; Massé, Thomas; Montmitonnet, Pierre

    2010-06-01

    The use of numerical simulation of material processing allows a strategy of trial and error to improve virtual processes without incurring material costs or interrupting production, and can therefore save a lot of money, but it requires user time to analyze the results, adjust the operating conditions and restart the simulation. Automatic optimization is the perfect complement to simulation. An Evolutionary Algorithm coupled with metamodelling makes it possible to obtain industrially relevant results on a very large range of applications within a few tens of simulations and without any specific knowledge of automatic optimization techniques. Ten industrial partners have been selected to cover the different areas of the mechanical forging industry and provide different examples of forming simulation tools. The large computational time is handled by a metamodel approach, which interpolates the objective function over the entire parameter space from the exact function values at a reduced number of "master points". Two algorithms are used: an evolution strategy combined with a Kriging metamodel, and a genetic algorithm combined with a Meshless Finite Difference Method. The latter approach is extended to multi-objective optimization. The set of solutions, which corresponds to the best possible compromises between the different objectives, is then computed in the same way. The population-based approach allows the parallel capabilities of the computer to be used with high efficiency. An optimization module, fully embedded within the Forge2009 IHM, makes it possible to cover all the defined examples, and the use of new multi-core hardware to compute several simulations at the same time reduces the needed time dramatically. The presented examples demonstrate the method's versatility. They include billet shape optimization of a common rail, the cogging of a bar and a wire drawing problem.

  15. Evolutionary game dynamics of controlled and automatic decision-making

    NASA Astrophysics Data System (ADS)

    Toupo, Danielle F. P.; Strogatz, Steven H.; Cohen, Jonathan D.; Rand, David G.

    2015-07-01

    We integrate dual-process theories of human cognition with evolutionary game theory to study the evolution of automatic and controlled decision-making processes. We introduce a model in which agents who make decisions using either automatic or controlled processing compete with each other for survival. Agents using automatic processing act quickly and so are more likely to acquire resources, but agents using controlled processing are better planners and so make more effective use of the resources they have. Using the replicator equation, we characterize the conditions under which automatic or controlled agents dominate, when coexistence is possible and when bistability occurs. We then extend the replicator equation to consider feedback between the state of the population and the environment. Under conditions in which having a greater proportion of controlled agents either enriches the environment or enhances the competitive advantage of automatic agents, we find that limit cycles can occur, leading to persistent oscillations in the population dynamics. Critically, however, these limit cycles only emerge when feedback occurs on a sufficiently long time scale. Our results shed light on the connection between evolution and human cognition and suggest necessary conditions for the rise and fall of rationality.
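
    The core replicator dynamics are easy to reproduce numerically. The sketch below integrates the one-dimensional replicator equation for the fraction of automatic agents under toy frequency-dependent payoffs invented for illustration; the paper's actual payoff structure and environmental feedback terms are not reproduced here.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def replicator(t, y, a, c):
        """Replicator equation dx/dt = x (f_auto - f_bar) for two strategies."""
        x = y[0]                       # fraction of automatic agents
        # Assumed toy payoffs: automatic agents grab resources faster (a),
        # controlled agents use resources more effectively when common (c)
        f_auto = a * x + 1.0
        f_ctrl = c * (1 - x) + 1.0
        f_bar = x * f_auto + (1 - x) * f_ctrl
        return [x * (f_auto - f_bar)]

    sol = solve_ivp(replicator, (0.0, 50.0), [0.3], args=(1.2, 1.0))
    print(sol.y[0, -1])                # long-run fraction of automatic agents
    ```

    With these toy payoffs each strategy does better when it is already common, so the population is bistable: the long-run outcome flips depending on whether the initial fraction sits above or below an interior threshold.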

  16. Evolutionary game dynamics of controlled and automatic decision-making.

    PubMed

    Toupo, Danielle F P; Strogatz, Steven H; Cohen, Jonathan D; Rand, David G

    2015-07-01

    We integrate dual-process theories of human cognition with evolutionary game theory to study the evolution of automatic and controlled decision-making processes. We introduce a model in which agents who make decisions using either automatic or controlled processing compete with each other for survival. Agents using automatic processing act quickly and so are more likely to acquire resources, but agents using controlled processing are better planners and so make more effective use of the resources they have. Using the replicator equation, we characterize the conditions under which automatic or controlled agents dominate, when coexistence is possible and when bistability occurs. We then extend the replicator equation to consider feedback between the state of the population and the environment. Under conditions in which having a greater proportion of controlled agents either enriches the environment or enhances the competitive advantage of automatic agents, we find that limit cycles can occur, leading to persistent oscillations in the population dynamics. Critically, however, these limit cycles only emerge when feedback occurs on a sufficiently long time scale. Our results shed light on the connection between evolution and human cognition and suggest necessary conditions for the rise and fall of rationality.

  17. Fully automatic time-window selection using machine learning for global adjoint tomography

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Hill, J.; Lei, W.; Lefebvre, M. P.; Bozdag, E.; Komatitsch, D.; Tromp, J.

    2017-12-01

    Selecting time windows from seismograms such that the synthetic measurements (from simulations) and measured observations are sufficiently close is indispensable in a global adjoint tomography framework. The increasing amount of seismic data collected every day around the world demands "intelligent" algorithms for seismic window selection. While the traditional FLEXWIN algorithm can be "automatic" to some extent, it still requires both human input and human knowledge or experience, and thus is not deemed fully automatic. The goal of intelligent window selection is to automatically select windows based on a learnt engine that is built upon a huge number of existing windows generated through the adjoint tomography project. We have formulated the automatic window selection problem as a classification problem: all possible misfit calculation windows are classified as either usable or unusable. Given a large number of windows with a known selection mode (select or not select), we train a neural network to predict the selection mode of an arbitrary input window. Currently, the five features we extract from each window are its cross-correlation value, cross-correlation time lag, amplitude ratio between observed and synthetic data, window length, and minimum STA/LTA value. More features can be included in the future. We use these features to characterize each window for training a multilayer perceptron neural network (MPNN). Training the MPNN is equivalent to solving a non-linear optimization problem. We use backward propagation to derive the gradient of the loss function with respect to the weighting matrices and bias vectors, and use the mini-batch stochastic gradient method to iteratively optimize the MPNN. Numerical tests show that with a careful selection of the training data and a sufficient amount of training data, we are able to train a robust neural network that is capable of detecting the waveforms in arbitrary earthquake data with negligible detection error compared to existing selection methods (e.g. FLEXWIN). We will introduce in detail the mathematical formulation of the window-selection-oriented MPNN and show very encouraging results when applying the new algorithm to real earthquake data.
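
    Because the classifier consumes just five scalar features per window, a compact stand-in can be built with scikit-learn's MLPClassifier, which trains a multilayer perceptron by mini-batch stochastic gradient descent as described above. The feature distributions and the labelling rule below are synthetic placeholders, not FLEXWIN output.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(3)
    n = 2000
    # The five features named in the abstract, filled with toy values:
    X = np.column_stack([rng.uniform(0, 1, n),       # cross-correlation value
                         rng.normal(0, 5, n),        # cross-correlation time lag (s)
                         rng.lognormal(0, 0.5, n),   # observed/synthetic amplitude ratio
                         rng.uniform(20, 200, n),    # window length (s)
                         rng.uniform(0, 4, n)])      # minimum STA/LTA
    y = (X[:, 0] > 0.7) & (np.abs(X[:, 1]) < 5)      # toy "select" rule

    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(16, 16), solver="sgd",
                        batch_size=32, max_iter=1000, random_state=0)
    clf.fit(Xtr, ytr)                                # mini-batch SGD with backprop
    print(clf.score(Xte, yte))
    ```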

  18. Figure-ground segregation: A fully nonlocal approach.

    PubMed

    Dimiccoli, Mariella

    2016-09-01

    We present a computational model that computes and integrates in a nonlocal fashion several configural cues for automatic figure-ground segregation. Our working hypothesis is that the figural status of each pixel is a nonlocal function of several geometric shape properties and it can be estimated without explicitly relying on object boundaries. The methodology is grounded on two elements: multi-directional linear voting and nonlinear diffusion. A first estimation of the figural status of each pixel is obtained as a result of a voting process, in which several differently oriented line-shaped neighborhoods vote to express their belief about the figural status of the pixel. A nonlinear diffusion process is then applied to enforce the coherence of figural status estimates among perceptually homogeneous regions. Computer simulations fit human perception and match the experimental evidence that several cues cooperate in defining figure-ground segregation. The results of this work suggest that figure-ground segregation involves feedback from cells with larger receptive fields in higher visual cortical areas. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. RoboDiff: combining a sample changer and goniometer for highly automated macromolecular crystallography experiments

    PubMed Central

    Nurizzo, Didier; Bowler, Matthew W.; Caserotto, Hugo; Dobias, Fabien; Giraud, Thierry; Surr, John; Guichard, Nicolas; Papp, Gergely; Guijarro, Matias; Mueller-Dieckmann, Christoph; Flot, David; McSweeney, Sean; Cipriani, Florent; Theveneau, Pascal; Leonard, Gordon A.

    2016-01-01

    Automation of the mounting of cryocooled samples is now a feature of the majority of beamlines dedicated to macromolecular crystallography (MX). Robotic sample changers have been developed over many years, with the latest designs increasing capacity, reliability and speed. Here, the development of a new sample changer deployed at the ESRF beamline MASSIF-1 (ID30A-1), based on an industrial six-axis robot, is described. The device, named RoboDiff, includes a high-capacity dewar, acts as both a sample changer and a high-accuracy goniometer, and has been designed for completely unattended sample mounting and diffraction data collection. This aim has been achieved using a high level of diagnostics at all steps of the process from mounting and characterization to data collection. The RoboDiff has been in service on the fully automated endstation MASSIF-1 at the ESRF since September 2014 and, at the time of writing, has processed more than 20 000 samples completely automatically. PMID:27487827

  20. Framework for Infectious Disease Analysis: A comprehensive and integrative multi-modeling approach to disease prediction and management.

    PubMed

    Erraguntla, Madhav; Zapletal, Josef; Lawley, Mark

    2017-12-01

    The impact of infectious disease on human populations is a function of many factors including environmental conditions, vector dynamics, transmission mechanics, social and cultural behaviors, and public policy. A comprehensive framework for disease management must fully connect the complete disease lifecycle, including emergence from reservoir populations, zoonotic vector transmission, and impact on human societies. The Framework for Infectious Disease Analysis is a software environment and conceptual architecture for data integration, situational awareness, visualization, prediction, and intervention assessment. Framework for Infectious Disease Analysis automatically collects biosurveillance data using natural language processing, integrates structured and unstructured data from multiple sources, applies advanced machine learning, and uses multi-modeling for analyzing disease dynamics and testing interventions in complex, heterogeneous populations. In the illustrative case studies, natural language processing from social media, news feeds, and websites was used for information extraction, biosurveillance, and situation awareness. Classification machine learning algorithms (support vector machines, random forests, and boosting) were used for disease predictions.
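
    A hedged sketch of the final classification stage: the three algorithm families named above, compared on stand-in features with scikit-learn. The synthetic dataset replaces the framework's real NLP-derived biosurveillance inputs.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Stand-in surveillance features; the framework's real inputs come from
    # NLP-extracted signals (social media, news feeds, websites).
    X, y = make_classification(n_samples=500, n_features=12, random_state=0)

    for name, clf in [("SVM", SVC()),
                      ("random forest", RandomForestClassifier(random_state=0)),
                      ("boosting", GradientBoostingClassifier(random_state=0))]:
        print(name, cross_val_score(clf, X, y, cv=5).mean())
    ```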

  1. A Speech Recognition-based Solution for the Automatic Detection of Mild Cognitive Impairment from Spontaneous Speech

    PubMed Central

    Tóth, László; Hoffmann, Ildikó; Gosztolya, Gábor; Vincze, Veronika; Szatlóczki, Gréta; Bánréti, Zoltán; Pákáski, Magdolna; Kálmán, János

    2018-01-01

    Background: Even today the reliable diagnosis of the prodromal stages of Alzheimer's disease (AD) remains a great challenge. Our research focuses on the earliest detectable indicators of cognitive decline in mild cognitive impairment (MCI). Since the presence of language impairment has been reported even in the mild stage of AD, the aim of this study is to develop a sensitive neuropsychological screening method based on the analysis of spontaneous speech production while performing a memory task. In the future, this can form the basis of an Internet-based interactive screening software for the recognition of MCI. Methods: Participants were 38 healthy controls and 48 clinically diagnosed MCI patients. We provoked spontaneous speech by asking the patients to recall the content of 2 short black and white films (one direct, one delayed), and by answering one question. Acoustic parameters (hesitation ratio, speech tempo, length and number of silent and filled pauses, length of utterance) were extracted from the recorded speech signals, first manually (using the Praat software), and then automatically, with an automatic speech recognition (ASR) based tool. First, the extracted parameters were statistically analyzed. Then we applied machine learning algorithms to see whether the MCI and the control group can be discriminated automatically based on the acoustic features. Results: The statistical analysis showed significant differences for most of the acoustic parameters (speech tempo, articulation rate, silent pause, hesitation ratio, length of utterance, pause-per-utterance ratio). The most significant differences between the two groups were found in the speech tempo in the delayed recall task, and in the number of pauses for the question-answering task. The fully automated version of the analysis process, that is, using the ASR-based features in combination with machine learning, was able to separate the two classes with an F1-score of 78.8%. Conclusion: The temporal analysis of spontaneous speech can be exploited in implementing a new, automatic detection-based tool for screening MCI in the community. PMID:29165085

  2. A Speech Recognition-based Solution for the Automatic Detection of Mild Cognitive Impairment from Spontaneous Speech.

    PubMed

    Toth, Laszlo; Hoffmann, Ildiko; Gosztolya, Gabor; Vincze, Veronika; Szatloczki, Greta; Banreti, Zoltan; Pakaski, Magdolna; Kalman, Janos

    2018-01-01

    Even today the reliable diagnosis of the prodromal stages of Alzheimer's disease (AD) remains a great challenge. Our research focuses on the earliest detectable indicators of cognitive decline in mild cognitive impairment (MCI). Since the presence of language impairment has been reported even in the mild stage of AD, the aim of this study is to develop a sensitive neuropsychological screening method based on the analysis of spontaneous speech production while performing a memory task. In the future, this can form the basis of an Internet-based interactive screening software for the recognition of MCI. Participants were 38 healthy controls and 48 clinically diagnosed MCI patients. We provoked spontaneous speech by asking the patients to recall the content of 2 short black and white films (one direct, one delayed), and by answering one question. Acoustic parameters (hesitation ratio, speech tempo, length and number of silent and filled pauses, length of utterance) were extracted from the recorded speech signals, first manually (using the Praat software), and then automatically, with an automatic speech recognition (ASR) based tool. First, the extracted parameters were statistically analyzed. Then we applied machine learning algorithms to see whether the MCI and the control group can be discriminated automatically based on the acoustic features. The statistical analysis showed significant differences for most of the acoustic parameters (speech tempo, articulation rate, silent pause, hesitation ratio, length of utterance, pause-per-utterance ratio). The most significant differences between the two groups were found in the speech tempo in the delayed recall task, and in the number of pauses for the question-answering task. The fully automated version of the analysis process, that is, using the ASR-based features in combination with machine learning, was able to separate the two classes with an F1-score of 78.8%. The temporal analysis of spontaneous speech can be exploited in implementing a new, automatic detection-based tool for screening MCI in the community. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
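
    Several of the temporal parameters above (hesitation ratio, number of silent pauses) reduce to run-length statistics over a short-time energy threshold. The sketch below shows one crude way to compute them in numpy on a synthetic signal; the frame length and silence threshold are illustrative choices, not the study's Praat or ASR settings.

    ```python
    import numpy as np

    def pause_statistics(signal, sr, frame_ms=25, silence_db=-35):
        """Crude silent-pause detector via a short-time energy threshold.

        Returns the hesitation ratio (silent time / total time) and the
        number of silent pauses (runs of consecutive silent frames).
        """
        frame = int(sr * frame_ms / 1000)
        n = len(signal) // frame
        energy = np.array([np.mean(signal[i*frame:(i+1)*frame] ** 2) for i in range(n)])
        db = 10 * np.log10(energy / (energy.max() + 1e-12) + 1e-12)
        silent = db < silence_db
        pauses = np.sum(silent[1:] & ~silent[:-1]) + int(silent[0])
        return silent.mean(), int(pauses)

    sr = 16000
    t = np.arange(sr * 3) / sr
    speech = np.sin(2 * np.pi * 150 * t) * (t % 1.0 < 0.6)   # toy: 0.4 s pause/second
    print(pause_statistics(speech, sr))
    ```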

  3. Evaluation of Earthquake Detection Performance in Terms of Quality and Speed in SEISCOMP3 Using New Modules Qceval, Npeval and Sceval

    NASA Astrophysics Data System (ADS)

    Roessler, D.; Weber, B.; Ellguth, E.; Spazier, J.

    2017-12-01

    The geometry of seismic monitoring networks, site conditions and data availability, as well as monitoring targets and strategies, typically impose trade-offs between data quality, earthquake detection sensitivity, false detections and alert times. Network detection capabilities typically change with alteration of the seismic noise level by human activity or by varying weather and sea conditions. To give helpful information to operators and maintenance coordinators, gempa developed a range of tools to evaluate earthquake detection and network performance, including qceval, npeval and sceval. qceval is a module which analyzes waveform quality parameters in real-time and deactivates and reactivates data streams based on waveform quality thresholds for automatic processing. For example, thresholds can be defined for latency, delay, timing quality, spike and gap counts, and rms. As changes in the automatic processing have a direct influence on detection quality and speed, another tool called "npeval" was designed to calculate in real-time the expected time needed to detect and locate earthquakes by evaluating the effective network geometry. The effective network geometry is derived from the configuration of stations participating in the detection. The detection times are shown as an additional layer on the map and updated in real-time as soon as the effective network geometry changes. Yet another new tool, "sceval", is an automatic module which classifies located seismic events (Origins) in real-time. sceval evaluates the spatial distribution of the stations contributing to an Origin. It confirms or rejects the status of Origins, adds comments or leaves the Origin unclassified. The comments are passed to an additional sceval plug-in where the end user can customize event types. This unique identification of real and fake events in earthquake catalogues makes it possible to lower network detection thresholds. In real-time monitoring situations operators can limit the processing to events with unclassified Origins, reducing their workload. Classified Origins can be treated specifically by other procedures. These modules have been calibrated and fully tested by several complex seismic monitoring networks in the region of Indonesia and Northern Chile.

  4. Automatic assembly of micro-optical components

    NASA Astrophysics Data System (ADS)

    Gengenbach, Ulrich K.

    1996-12-01

    Automatic assembly becomes an important issue as hybrid micro systems enter industrial fabrication. Moving from laboratory-scale production with manual assembly and bonding processes to automatic assembly requires a thorough re-evaluation of the design, the characteristics of the individual components and the processes involved. Parts supply for automatic operation and sensitive, intelligent grippers adapted to the size, surface and material properties of the microcomponents gain importance when the superior sensory and handling skills of a human are to be replaced by a machine. This holds in particular for the automatic assembly of micro-optical components. The paper outlines these issues, exemplified by the automatic assembly of a micro-optical duplexer consisting of a micro-optical bench fabricated by the LIGA technique, two spherical lenses, a wavelength filter and an optical fiber. The spherical lenses, wavelength filter and optical fiber are supplied by third-party vendors, which raises the question of parts supply for automatic assembly. The bonding processes for these components include press fit and adhesive bonding. The prototype assembly system with all relevant components, e.g. handling system, parts supply, grippers and control, is described. Results of first automatic assembly tests are presented.

  5. [Progress in the development of insulin pumps and their advanced automatic functions].

    PubMed

    Prázný, Martin

    2015-04-01

    Patients with type 1 diabetes are exposed to the permanent burden of careful glucose self-monitoring and precise insulin dosing based on measured glucose values, the carbohydrate content of food, and both planned and unplanned physical activity. Erroneous insulin dosing causes frequent hypoglycemia and hyperglycemia. Hypoglycemia is, however, the most clinically significant complication limiting optimal diabetes control. Automatic features for insulin dosing integrated into insulin pumps are thus very important. Low Glucose Suspend (LGS) and Predictive Low Glucose Management (PLGM) use glucose sensor values to prevent hypoglycemia and shorten the time spent in the hypoglycemic range, and represent a further step toward a fully closed-loop insulin treatment system.

  6. Automatic multi-organ segmentation using learning-based segmentation and level set optimization.

    PubMed

    Kohlberger, Timo; Sofka, Michal; Zhang, Jingdan; Birkbeck, Neil; Wetzl, Jens; Kaftan, Jens; Declerck, Jérôme; Zhou, S Kevin

    2011-01-01

    We present a novel generic segmentation system for the fully automatic multi-organ segmentation of CT medical images. It combines the advantages of learning-based approaches on point cloud-based shape representation, such as speed, robustness, and point correspondences, with those of PDE-optimization-based level set approaches, such as high accuracy and the straightforward prevention of segment overlaps. In a benchmark on 10-100 annotated datasets for the liver, the lungs, and the kidneys we show that the proposed system yields segmentation accuracies of 1.17-2.89 mm average surface error. The level set segmentation (which is initialized by the learning-based segmentations) contributes a 20%-40% increase in accuracy.

  7. Deep convolutional neural network for prostate MR segmentation

    NASA Astrophysics Data System (ADS)

    Tian, Zhiqiang; Liu, Lizhi; Fei, Baowei

    2017-03-01

    Automatic segmentation of the prostate in magnetic resonance imaging (MRI) has many applications in prostate cancer diagnosis and therapy. We propose a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage based on prostate MR images and the corresponding ground truths, and learns to make inference for pixel-wise segmentation. Experiments were performed on our in-house data set, which contains prostate MR images of 20 patients. The proposed CNN model obtained a mean Dice similarity coefficient of 85.3% ± 3.2% as compared to the manual segmentation. Experimental results show that our deep CNN model could yield satisfactory segmentation of the prostate.
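
    For orientation, a toy fully convolutional model in PyTorch is sketched below: stacked convolutions ending in a 1×1 layer that emits one logit per pixel, trained end-to-end against a pixel-wise mask. The layer sizes and the random tensors are placeholders; the paper's actual architecture is deeper and trained on real prostate MR data.

    ```python
    import torch
    import torch.nn as nn

    class TinyFCN(nn.Module):
        """Minimal fully convolutional net: no dense layers, so any slice size
        works and the output is a per-pixel segmentation logit map."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 1),            # 1x1 conv: one logit per pixel
            )

        def forward(self, x):
            return self.net(x)

    model = TinyFCN()
    slices = torch.randn(4, 1, 64, 64)           # toy batch of MR slices
    masks = torch.zeros(4, 1, 64, 64)            # toy ground-truth masks
    loss = nn.BCEWithLogitsLoss()(model(slices), masks)
    loss.backward()                              # one end-to-end training step
    ```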

  8. Advanced Transport Operating System (ATOPS) Flight Management/Flight Controls (FM/FC) software description

    NASA Technical Reports Server (NTRS)

    Wolverton, David A.; Dickson, Richard W.; Clinedinst, Winston C.; Slominski, Christopher J.

    1993-01-01

    The flight software developed for the Flight Management/Flight Controls (FM/FC) MicroVAX computer used on the Transport Systems Research Vehicle for Advanced Transport Operating Systems (ATOPS) research is described. The FM/FC software computes navigation position estimates, guidance commands, and those commands issued to the control surfaces to direct the aircraft in flight. Various modes of flight are provided for, ranging from computer assisted manual modes to fully automatic modes including automatic landing. A high-level system overview as well as a description of each software module comprising the system is provided. Digital systems diagrams are included for each major flight control component and selected flight management functions.

  9. Automatic Building Abstraction from Aerial Photogrammetry

    NASA Astrophysics Data System (ADS)

    Ley, A.; Hänsch, R.; Hellwich, O.

    2017-09-01

    Multi-view stereo has been shown to be a viable tool for the creation of realistic 3D city models. Nevertheless, it still poses significant challenges, since it yields dense but noisy and incomplete point clouds when applied to aerial images. 3D city modelling usually requires a different representation of the 3D scene than these point clouds. This paper applies a fully automatic pipeline to generate a simplified mesh from a given dense point cloud. The mesh provides a certain level of abstraction, as it consists only of relatively large planar and textured surfaces. Thus, it is possible to remove noise, outliers, and clutter, while maintaining a high level of accuracy.

  10. fgui: A Method for Automatically Creating Graphical User Interfaces for Command-Line R Packages

    PubMed Central

    Hoffmann, Thomas J.; Laird, Nan M.

    2009-01-01

    The fgui R package is designed for developers of R packages, to help rapidly, and sometimes fully automatically, create a graphical user interface for a command line R package. The interface is built upon the Tcl/Tk graphical interface included in R. The package further facilitates the developer by loading in the help files from the command line functions to provide context sensitive help to the user with no additional effort from the developer. Passing a function as the argument to the routines in the fgui package creates a graphical interface for the function, and further options are available to tweak this interface for those who want more flexibility. PMID:21625291

  11. The Masked Semantic Priming Effect Is Task Dependent: Reconsidering the Automatic Spreading Activation Process

    ERIC Educational Resources Information Center

    de Wit, Bianca; Kinoshita, Sachiko

    2015-01-01

    Semantic priming effects are popularly explained in terms of an automatic spreading activation process, according to which the activation of a node in a semantic network spreads automatically to interconnected nodes, preactivating a semantically related word. It is expected from this account that semantic priming effects should be routinely…

  12. Count Me In! on the Automaticity of Numerosity Processing

    ERIC Educational Resources Information Center

    Naparstek, Sharon; Henik, Avishai

    2010-01-01

    Extraction of numerosity (i.e., enumeration) is an essential component of mathematical abilities. The current study asked how automatic is the processing of numerosity and whether automatic activation is task dependent. Participants were presented with displays containing a variable number of digits and were asked to pay attention to the number of…

  13. Automatic Layout Design for Power Module

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ning, Puqi; Wang, Fei; Ngo, Khai

    The layout of power modules is one of the most important elements in power module design, especially at high power densities, where couplings are increased. In this paper, an automatic layout design process for high-power-density modules, based on a genetic algorithm, is presented, and some practical considerations are introduced into the optimization of the layout design. Detailed GA implementations are introduced for both the outer loop and the inner loop. As verified by a design example, the results of the automatic design process presented here are better than those from manual design and also better than the results from a popular design software package. This automatic design procedure could be a major step toward improving the overall performance of future layout designs.
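
    As a rough illustration of such an optimization loop, the sketch below evolves module-to-slot placements to minimize total Manhattan wirelength. It uses truncation selection with swap mutation as a simplified variation operator; the netlist, grid, and GA settings are invented, and the paper's actual encoding and nested loops are more elaborate.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_mod, pop, gens = 8, 40, 300
    slots = np.array([(i % 4, i // 4) for i in range(n_mod)], float)  # 4x2 grid
    nets = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (0, 7)]

    def wirelength(perm):
        """Total Manhattan net length for a module->slot permutation."""
        pos = slots[perm]
        return sum(np.abs(pos[a] - pos[b]).sum() for a, b in nets)

    population = [rng.permutation(n_mod) for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=wirelength)
        parents = scored[: pop // 2]                 # truncation selection
        children = []
        for p in parents:
            c = p.copy()                             # swap mutation keeps the
            i, j = rng.choice(n_mod, 2, replace=False)  # encoding a permutation
            c[i], c[j] = c[j], c[i]
            children.append(c)
        population = parents + children

    best = min(population, key=wirelength)
    print(best, wirelength(best))
    ```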

  14. ERDA/NASA 100 kilowatt mod-o wind turbine operations and performance. [at the NASA Plum Brook Station, Ohio

    NASA Technical Reports Server (NTRS)

    Thomas, R. L.; Richards, T. R.

    1977-01-01

    The ERDA/NASA 100 kW Mod-0 wind turbine is operating at the NASA Plum Brook Station near Sandusky, Ohio. The operation of the wind turbine has been fully demonstrated and includes start-up, synchronization to the utility network, blade pitch control for control of power and speed, and shut-down. Also, fully automatic operation has been demonstrated by use of a remote control panel, 50 miles from the site, similar to what a utility dispatcher might use. The operation systems and experience with the wind turbine loads, electrical power and aerodynamic performance obtained from testing are described.

  15. 3D printed electromagnetic transmission and electronic structures fabricated on a single platform using advanced process integration techniques

    NASA Astrophysics Data System (ADS)

    Deffenbaugh, Paul Issac

    3D printing has garnered immense attention from many fields including in-office rapid prototyping of mechanical parts, outer-space satellite replication, garage functional firearm manufacture, and NASA rocket engine component fabrication. 3D printing allows increased design flexibility in the fabrication of electronics, microwave circuits and wireless antennas and has reached a level of maturity which allows functional parts to be printed. Much more work is necessary in order to perfect the processes of 3D printed electronics especially in the area of automation. Chapter 1 shows several finished prototypes of 3D printed electronics as well as newly developed techniques in fabrication. Little is known about the RF and microwave properties and applications of the standard materials which have been developed for 3D printing. Measurement of a wide variety of materials over a broad spectrum of frequencies up to 10 GHz using a variety of well-established measurement methods is performed throughout chapter 2. Several types of high frequency RF transmission lines are fabricated and valuable model-matched data is gathered and provided in chapter 3 for future designers' use. Of particular note is a fully 3D printed stripline which was automatically fabricated in one process on one machine. Some core advantages of 3D printing RF/microwave components include rapid manufacturing of complex, dimensionally sensitive circuits (such as antennas and filters which are often iteratively tuned) and the ability to create new devices that cannot be made using standard fabrication techniques. Chapter 4 describes an exemplary fully 3D printed curved inverted-F antenna.

  16. An integrated framework for high level design of high performance signal processing circuits on FPGAs

    NASA Astrophysics Data System (ADS)

    Benkrid, K.; Belkacemi, S.; Sukhsawas, S.

    2005-06-01

    This paper proposes an integrated framework for the high level design of high performance signal processing algorithms' implementations on FPGAs. The framework emerged from a constant need to rapidly implement increasingly complicated algorithms on FPGAs while maintaining the high performance needed in many real time digital signal processing applications. This is particularly important for application developers who often rely on iterative and interactive development methodologies. The central idea behind the proposed framework is to dynamically integrate high performance structural hardware description languages with higher level hardware languages in order to help satisfy the dual requirement of high level design and high performance implementation. The paper illustrates this by integrating two environments: Celoxica's Handel-C language, and HIDE, a structural hardware environment developed at the Queen's University of Belfast. On the one hand, Handel-C has been proven to be very useful in the rapid design and prototyping of FPGA circuits, especially control intensive ones. On the other hand, HIDE has been used extensively, and successfully, in the generation of highly optimised parameterisable FPGA cores. In this paper, this is illustrated in the construction of a scalable and fully parameterisable core for image algebra's five core neighbourhood operations, where fully floorplanned efficient FPGA configurations, in the form of EDIF netlists, are generated automatically for instances of the core. In the proposed combined framework, highly optimised data paths are invoked dynamically from within Handel-C, and are synthesized using HIDE. Although the idea might seem simple prima facie, it could have serious implications for the design of future generations of hardware description languages.

  17. Resource depletion promotes automatic processing: implications for distribution of practice.

    PubMed

    Scheel, Matthew H

    2010-12-01

    Recent models of cognition include two processing systems: an automatic system that relies on associative learning, intuition, and heuristics, and a controlled system that relies on deliberate consideration. Automatic processing requires fewer resources and is more likely when resources are depleted. This study showed that prolonged practice on a resource-depleting mental arithmetic task promoted automatic processing on a subsequent problem-solving task, as evidenced by faster responding and more errors. Distribution of practice effects (0, 60, 120, or 180 sec. between problems) on rigidity also disappeared when groups had equal time on resource-depleting tasks. These results suggest that distribution of practice effects is reducible to resource availability. The discussion includes implications for interpreting discrepancies in the traditional distribution of practice effect.

  18. Instance-based categorization: automatic versus intentional forms of retrieval.

    PubMed

    Neal, A; Hesketh, B; Andrews, S

    1995-03-01

    Two experiments are reported which attempt to disentangle the relative contribution of intentional and automatic forms of retrieval to instance-based categorization. A financial decision-making task was used in which subjects had to decide whether a bank would approve loans for a series of applicants. Experiment 1 found that categorization was sensitive to instance-specific knowledge, even when subjects had practiced using a simple rule. L. L. Jacoby's (1991) process-dissociation procedure was adapted for use in Experiment 2 to infer the relative contribution of intentional and automatic retrieval processes to categorization decisions. The results provided (1) strong evidence that intentional retrieval processes influence categorization, and (2) some preliminary evidence suggesting that automatic retrieval processes may also contribute to categorization decisions.

  19. Design Through Manufacturing: The Solid Model - Finite Element Analysis Interface

    NASA Technical Reports Server (NTRS)

    Rubin, Carol

    2003-01-01

    State-of-the-art computer aided design (CAD) presently affords engineers the opportunity to create solid models of machine parts which reflect every detail of the finished product. Ideally, these models should fulfill two very important functions: (1) they must provide numerical control information for automated manufacturing of precision parts, and (2) they must enable analysts to easily evaluate the stress levels (using finite element analysis - FEA) for all structurally significant parts used in space missions. Today's state-of-the-art CAD programs perform function (1) very well, providing an excellent model for precision manufacturing. But they do not provide a straightforward and simple means of automating the translation from CAD to FEA models, especially for aircraft-type structures. The research performed during the fellowship period investigated the transition process from the solid CAD model to the FEA stress analysis model with the final goal of creating an automatic interface between the two. During the period of the fellowship a detailed multi-year program for the development of such an interface was created. The ultimate goal of this program will be the development of a fully parameterized automatic ProE/FEA translator for parts and assemblies, with the incorporation of data base management into the solution, and ultimately including computational fluid dynamics and thermal modeling in the interface.

  20. A Method Based on Artificial Intelligence To Fully Automatize The Evaluation of Bovine Blastocyst Images.

    PubMed

    Rocha, José Celso; Passalia, Felipe José; Matos, Felipe Delestro; Takahashi, Maria Beatriz; Ciniciato, Diego de Souza; Maserati, Marc Peter; Alves, Mayra Fernanda; de Almeida, Tamie Guibu; Cardoso, Bruna Lopes; Basso, Andrea Cristina; Nogueira, Marcelo Fábio Gouveia

    2017-08-09

    Morphological analysis is the standard method of assessing embryo quality; however, its inherent subjectivity tends to generate discrepancies among evaluators. Using genetic algorithms and artificial neural networks (ANNs), we developed a new method for embryo analysis that is more robust and reliable than standard methods. Bovine blastocysts produced in vitro were classified as grade 1 (excellent or good), 2 (fair), or 3 (poor) by three experienced embryologists according to the International Embryo Technology Society (IETS) standard. The images (n = 482) were subjected to automatic feature extraction, and the results were used as input for a supervised learning process. One part of the dataset (15%) was used for a blind test posterior to the fitting, for which the system had an accuracy of 76.4%. Interestingly, when the same embryologists evaluated a sub-sample (10%) of the dataset, there was only 54.0% agreement with the standard (mode for grades). However, when using the ANN to assess this sub-sample, there was 87.5% agreement with the modal values obtained by the evaluators. The presented methodology is covered by National Institute of Industrial Property (INPI) and World Intellectual Property Organization (WIPO) patents and is currently undergoing a commercial evaluation of its feasibility.

  1. Final results of the PERSEE experiment

    NASA Astrophysics Data System (ADS)

    Le Duigou, J. M.; Lozi, J.; Cassaing, F.; Houairi, K.; Sorrente, B.; Montri, J.; Jacquinod, S.; Reess, J.-M.; Pham, L.; Lhome, E.; Buey, T.; Hénault, F.; Marcotto, A.; Girard, P.; Mauclert, N.; Barillot, M.; Coudé du Foresto, V.; Ollivier, M.

    2012-07-01

    The PERSEE breadboard, developed by a consortium including CNES, IAS, LESIA, OCA, ONERA and TAS since 2005, is a nulling demonstrator that couples an infrared nulling interferometer with a formation flying simulator able to introduce realistic disturbances into the set-up. The general idea is to prove that an adequate optical design can considerably relax the constraints applying at the spacecraft level of a future interferometric space mission like Darwin/TPF or one of its precursors. The breadboard is now fully operational and the measurement sequences are managed from a remote control room using automatic procedures. A set of excellent results was obtained in 2011. The measured polychromatic nulling depth with non-polarized light is 8.8×10⁻⁶, stabilized at 9×10⁻⁸ in the 1.65-2.45 μm spectral band (37% bandwidth) during 100 s. This result was extended to a 7 h duration thanks to an automatic calibration process. The various contributors are identified and the nulling budget is now well mastered. We also proved that harmonic disturbances in the 1-100 Hz range, up to several tens of nm rms, can be very efficiently corrected by Linear Quadratic Gaussian (LQG) control if sufficient flux is available. These results are important contributions to the feasibility of a future space-based nulling interferometer.

  2. Final results of the PERSEE experiment

    NASA Astrophysics Data System (ADS)

    Le Duigou, J.-M.; Lozi, J.; Cassaing, F.; Houairi, K.; Sorrente, B.; Montri, J.; Jacquinod, S.; Réess, J.-M.; Pham, L.; Lhomé, E.; Buey, T.; Hénault, F.; Marcotto, A.; Girard, P.; Mauclert, N.; Barillot, M.; Coudé du Foresto, V.; Ollivier, M.

    2017-11-01

    The PERSEE breadboard, developed by a consortium including CNES, IAS, LESIA, OCA, ONERA and TAS since 2006, is a nulling demonstrator that couples an infrared nulling interferometer with a formation flying simulator able to introduce realistic disturbances into the set-up. The general idea is to prove that an adequate optical design can considerably relax the constraints applied at the spacecraft level of a future interferometric space mission like Darwin/TPF or one of its precursors. The breadboard is now fully operational and the measurement sequences are managed from a remote control room using automatic procedures. A set of excellent results was obtained in 2011: the measured polychromatic nulling depth with non-polarized light is 8.8×10⁻⁶, stabilized at 9×10⁻⁸ in the [1.65-2.45] μm spectral band (37% bandwidth) during 100 s. This result was extended to a 7 h duration thanks to an automatic calibration process. The various contributors are identified and the nulling budget is now well mastered. We also proved that harmonic disturbances in the 1-100 Hz range, up to several tens of nm rms, can be very efficiently corrected by Linear Quadratic Gaussian (LQG) control if sufficient flux is available. These results are important contributions to the feasibility of a future space-based nulling interferometer.

  3. Automatic lesion boundary detection in dermoscopy images using gradient vector flow snakes

    PubMed Central

    Erkol, Bulent; Moss, Randy H.; Stanley, R. Joe; Stoecker, William V.; Hvatum, Erik

    2011-01-01

    Background: Malignant melanoma has a good prognosis if treated early. Dermoscopy images of pigmented lesions are most commonly taken at ×10 magnification under lighting at a low angle of incidence while the skin is immersed in oil under a glass plate. Accurate skin lesion segmentation from the background skin is important because some of the features anticipated to be used for diagnosis deal with the shape of the lesion and others deal with the color of the lesion compared with the color of the surrounding skin. Methods: In this research, gradient vector flow (GVF) snakes are investigated to find the border of skin lesions in dermoscopy images. An automatic initialization method is introduced to make the skin lesion border determination process fully automated. Results: Skin lesion segmentation results are presented for 70 benign and 30 melanoma skin lesion images for the GVF-based method and a color histogram analysis technique. The average errors obtained by the GVF-based method are lower for both the benign and melanoma image sets than for the color histogram analysis technique, based on comparison with manually segmented lesions determined by a dermatologist. Conclusions: The experimental results for the GVF-based method demonstrate promise as an automated technique for skin lesion segmentation in dermoscopy images. PMID:15691255
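
    The GVF field itself comes from an iterative diffusion of the edge-map gradient, which is what lets a snake initialized far from the lesion border still be pulled toward it. Below is a minimal numpy sketch of that iteration on a synthetic edge map; the step size, regularization weight mu, and iteration count are illustrative choices, not the paper's values.

    ```python
    import numpy as np

    def gvf(edge_map, mu=0.2, iterations=200, dt=0.5):
        """Gradient vector flow of an edge map f (2D array).

        Diffuses the edge-map gradient (fx, fy) into homogeneous regions
        while keeping it anchored where the edge strength is high.
        """
        fy, fx = np.gradient(edge_map)
        mag2 = fx ** 2 + fy ** 2
        u, v = fx.copy(), fy.copy()
        for _ in range(iterations):
            lap_u = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                     + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
            lap_v = (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                     + np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v)
            u += dt * (mu * lap_u - mag2 * (u - fx))
            v += dt * (mu * lap_v - mag2 * (v - fy))
        return u, v

    # Toy edge map: a bright ring standing in for a lesion border
    yy, xx = np.mgrid[:128, :128]
    edges = np.exp(-((np.hypot(yy - 64, xx - 64) - 40) ** 2) / 8.0)
    u, v = gvf(edges)      # (u, v) is the external force field for the snake
    ```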

  4. Segmentation of stereo terrain images

    NASA Astrophysics Data System (ADS)

    George, Debra A.; Privitera, Claudio M.; Blackmon, Theodore T.; Zbinden, Eric; Stark, Lawrence W.

    2000-06-01

    We have studied four approaches to segmentation of images: three automatic ones using image processing algorithms and a fourth approach, human manual segmentation. We were motivated toward helping with an important NASA Mars rover mission task -- replacing laborious manual path planning with automatic navigation of the rover on the Mars terrain. The goal of the automatic segmentations was to identify an obstacle map on the Mars terrain to enable automatic path planning for the rover. The automatic segmentation was first explored with two different segmentation methods: one based on pixel luminance, and the other based on pixel altitude generated through stereo image processing. The third automatic segmentation was achieved by combining these two types of image segmentation. Human manual segmentation of Martian terrain images was used for evaluating the effectiveness of the combined automatic segmentation as well as for determining how different humans segment the same images. Comparisons between two different segmentations, manual or automatic, were measured using a similarity metric, S_AB. Based on this metric, the combined automatic segmentation did fairly well in agreeing with the manual segmentation. This was a demonstration of a positive step towards automatically creating the accurate obstacle maps necessary for automatic path planning and rover navigation.

  5. Sensitometric comparisons of Insight and Ektaspeed Plus films: effects of chemical developer depletion.

    PubMed

    Casanova, Marcia Spinelli; Haiter-Neto, Francisco; Bóscolo, Frab Norberto; de Almeida, Solange Maria

    2006-01-01

    The aim of this study was to compare the sensitometric properties of Insight and Ektaspeed Plus radiographic films manually and automatically processed using fresh and depleted chemical solutions. Ten sets of each type of film were obtained (1 set = 5 films), 5 sets being manually processed and 5 sets being automatically processed. A total of 100 films (50 Ektaspeed Plus and 50 Insight films) were exposed and developed. The first two sets of each film type were developed in fresh chemicals (one set per type of processing) on the first day of the experiment. The other sets were processed in the same progressively depleted solutions once a week until the end of the experiment, which was completed within 4 weeks. Characteristic curves were constructed to compare the sensitometric properties of the films: contrast, latitude and speed. Processing solution depletion had different effects on manually and automatically developed films, which was confirmed by the changes in the form of the characteristic curves in the third week of the automatic processing and the fourth week of the manual processing. Ektaspeed Plus showed an 11% reduction in contrast values for manual processing and 53% for automatic processing, while Insight contrast values showed a reduction of 25% and 59%, respectively. Insight film, however, showed lesser loss of speed and lesser increase of latitude than Ektaspeed Plus. In conclusion, chemical depletion in automatic film processing was faster than in manual developing. In depleted chemicals, both types of films had an increase in latitude values and a decrease in speed and contrast. Insight was more resistant than Ektaspeed Plus to decrease in speed and latitude when processed using older chemicals, whereas contrast in Ektaspeed Plus was comparatively more stable.

  6. A system for classifying wood-using industries and recording statistics for automatic data processing.

    Treesearch

    E.W. Fobes; R.W. Rowe

    1968-01-01

    A system for classifying wood-using industries and recording pertinent statistics for automatic data processing is described. Forms and coding instructions for recording data of primary processing plants are included.

  7. Automated high-dose rate brachytherapy treatment planning for a single-channel vaginal cylinder applicator

    NASA Astrophysics Data System (ADS)

    Zhou, Yuhong; Klages, Peter; Tan, Jun; Chi, Yujie; Stojadinovic, Strahinja; Yang, Ming; Hrycushko, Brian; Medin, Paul; Pompos, Arnold; Jiang, Steve; Albuquerque, Kevin; Jia, Xun

    2017-06-01

    High dose rate (HDR) brachytherapy treatment planning is conventionally performed manually and/or with the aid of preplanned templates. The standard of care would generally be elevated by an automated process that improves treatment planning efficiency, eliminates human error, and reduces plan quality variations. Our group is therefore developing AutoBrachy, an automated HDR brachytherapy planning suite of modules used to augment a clinical treatment planning system. This paper describes our fully developed proof-of-concept module for vaginal cylinder HDR planning. After a patient CT scan is acquired, the cylinder applicator is automatically segmented using image-processing techniques. The target CTV is generated based on physician-specified treatment depth and length. Locations of the dose calculation point, apex point and vaginal surface point, as well as the central applicator channel coordinates and the corresponding dwell positions, are determined according to their geometric relationship with the applicator and written to a structure file. Dwell times are computed through iterative quadratic optimization techniques. The planning information is then transferred to the treatment planning system through a DICOM-RT interface. The entire process was tested for nine patients. The AutoBrachy cylindrical applicator module generated treatment plans of clinically acceptable, clinical-grade quality for these cases. Computation times varied between 1 and 3 min on an Intel Xeon CPU E3-1226 v3 processor. All geometric components in the automated treatment plans were generated accurately: applicator channel tip positions agreed with the manually identified positions with submillimeter deviations, and channel orientations between the plans agreed to within 1 degree.
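
    The dwell-time step is described only as an iterative quadratic optimization. One minimal way to pose such a problem is non-negative least squares against prescription doses at the dose points, sketched below with an invented kernel and prescription, not clinical values.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Hypothetical dose-rate kernel: rows = dose points, cols = dwell positions
    rng = np.random.default_rng(1)
    A = 1.0 / (0.5 + rng.random((40, 10)))   # stand-in for an inverse-square kernel
    d = np.full(40, 700.0)                   # prescription dose at each point (cGy)

    # Non-negative least squares: minimize ||A t - d||^2 subject to t >= 0,
    # one simple way to cast dwell-time selection as a quadratic optimization.
    t, residual = nnls(A, d)
    print(np.round(t, 1), residual)
    ```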

  8. A plug-in to Eclipse for VHDL source codes: functionalities

    NASA Astrophysics Data System (ADS)

    Niton, B.; Poźniak, K. T.; Romaniuk, R. S.

    The paper presents an original application, written by the authors, that supports the writing and editing of source code in the VHDL language. It is a step towards fully automatic, augmented code writing for photonic and electronic systems, including systems based on FPGAs and/or DSP processors. The implementation is based on VEditor, a free-license program; the work presented here thus supplements and extends that free software. The introduction briefly characterizes the tools available on the market for aiding the design of electronic systems in VHDL, with particular attention to plug-ins for the Eclipse environment and the Emacs program. Detailed properties of the plug-in are presented, including the programming extension concept and the results of the formatter, refactorer, code hider, and other new additions to the VEditor program.

  9. Multi-modal automatic montaging of adaptive optics retinal images

    PubMed Central

    Chen, Min; Cooper, Robert F.; Han, Grace K.; Gee, James; Brainard, David H.; Morgan, Jessica I. W.

    2016-01-01

    We present a fully automated adaptive optics (AO) retinal image montaging algorithm using classic scale invariant feature transform with random sample consensus for outlier removal. Our approach is capable of using information from multiple AO modalities (confocal, split detection, and dark field) and can accurately detect discontinuities in the montage. The algorithm output is compared to manual montaging by evaluating the similarity of the overlapping regions after montaging, and calculating the detection rate of discontinuities in the montage. Our results show that the proposed algorithm has high alignment accuracy and a discontinuity detection rate that is comparable (and often superior) to manual montaging. In addition, we analyze and show the benefits of using multiple modalities in the montaging process. We provide the algorithm presented in this paper as open-source and freely available to download. PMID:28018714
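
    The stated recipe, SIFT keypoints with RANSAC outlier removal, maps directly onto standard OpenCV calls. The sketch below registers one overlapping grayscale pair under a similarity-transform assumption, a simplification of the published multi-modal pipeline.

    ```python
    import cv2
    import numpy as np

    def register_pair(img_a, img_b):
        """Estimate a similarity transform mapping img_b onto img_a (grayscale)."""
        sift = cv2.SIFT_create()
        kp_a, des_a = sift.detectAndCompute(img_a, None)
        kp_b, des_b = sift.detectAndCompute(img_b, None)
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_b, des_a, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio
        src = np.float32([kp_b[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp_a[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        # RANSAC rejects outlier matches before the transform is estimated
        M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
        return M
    ```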

  10. Computer assisted detection and analysis of tall cell variant papillary thyroid carcinoma in histological images

    NASA Astrophysics Data System (ADS)

    Kim, Edward; Baloch, Zubair; Kim, Caroline

    2015-03-01

    The number of new cases of thyroid cancer is increasing dramatically, with incidence more than doubled since the early 1970s. The tall cell variant of papillary thyroid carcinoma (TCV-PTC) is a more aggressive type of thyroid cancer, usually associated with higher local recurrence and distant metastasis. This variant can be identified through visual characteristics of cells in histological images. We therefore created a fully automatic algorithm that segments cells using a multi-stage approach. Our method learns the statistical characteristics of nuclei and cells during the segmentation process and utilizes this information for a more accurate result. Furthermore, we are able to analyze the detected regions and extract characteristic cell data that can be used to assist in clinical diagnosis.

  11. Video repairing under variable illumination using cyclic motions.

    PubMed

    Jia, Jiaya; Tai, Yu-Wing; Wu, Tai-Pang; Tang, Chi-Keung

    2006-05-01

    This paper presents a complete system capable of synthesizing a large number of pixels that are missing due to occlusion or damage in an uncalibrated input video. These missing pixels may correspond to the static background or to cyclic motions of the captured scene. Our system employs user-assisted video layer segmentation, while the main processing in video repair is fully automatic. The input video is first decomposed into color and illumination videos. The necessary temporal consistency is maintained by tensor voting in the spatio-temporal domain. Missing colors and illumination of the background are synthesized by applying image repairing. Finally, the occluded motions are inferred by spatio-temporal alignment of collected samples at multiple scales. We tested the system on difficult examples with variable illumination, with the capturing camera either stationary or in motion.

  12. Development and evaluation of a Fault-Tolerant Multiprocessor (FTMP) computer. Volume 2: FTMP software

    NASA Technical Reports Server (NTRS)

    Lala, J. H.; Smith, T. B., III

    1983-01-01

    The software developed for the Fault-Tolerant Multiprocessor (FTMP) is described. The FTMP executive is a timer-interrupt driven dispatcher that schedules iterative tasks which run at 3.125, 12.5, and 25 Hz. Major tasks which run under the executive include system configuration control, flight control, and display. The flight control task includes autopilot and autoland functions for a jet transport aircraft. System displays include status displays of all hardware elements (processors, memories, I/O ports, buses), failure-log displays showing transient and hard faults, and an autopilot display. All software is in a higher-order language (AED, an ALGOL derivative). The executive is a fully distributed general-purpose executive which automatically balances the load among available processor triads. Provisions for graceful performance degradation under processing overload are an integral part of the scheduling algorithms.
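
    The quoted rates (3.125, 12.5, and 25 Hz) form a classic harmonic rate-group structure: with a 25 Hz base frame, the slower groups run every 2nd and every 8th frame. The sketch below only illustrates that dispatch arithmetic; the task names are invented, and the real executive was interrupt-driven and written in AED, not Python.

    ```python
    import time

    # Rate groups as (period in base frames, tasks); base frame = 25 Hz (40 ms),
    # so 12.5 Hz runs every 2nd frame and 3.125 Hz every 8th frame.
    RATE_GROUPS = [
        (1, ["flight_control"]),          # 25 Hz
        (2, ["display_update"]),          # 12.5 Hz
        (8, ["configuration_control"]),   # 3.125 Hz
    ]

    def dispatch(frame: int) -> None:
        for divisor, tasks in RATE_GROUPS:
            if frame % divisor == 0:
                for task in tasks:
                    print(f"frame {frame}: running {task}")

    for frame in range(16):   # a timer interrupt would drive this loop
        dispatch(frame)
        time.sleep(0.040)     # 40 ms base frame
    ```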

  13. Word-Level and Sentence-Level Automaticity in English as a Foreign Language (EFL) Learners: a Comparative Study

    ERIC Educational Resources Information Center

    Ma, Dongmei; Yu, Xiaoru; Zhang, Haomin

    2017-01-01

    The present study aimed to investigate second language (L2) word-level and sentence-level automatic processing among English as a foreign language students through a comparative analysis of students with different proficiency levels. As a multidimensional and dynamic construct, automaticity is conceptualized as processing speed, stability, and…

  14. Relationships among Rapid Digit Naming, Phonological Processing, Motor Automaticity, and Speech Perception in Poor, Average, and Good Readers and Spellers

    ERIC Educational Resources Information Center

    Savage, Robert S.; Frederickson, Norah; Goodwin, Roz; Patni, Ulla; Smith, Nicola; Tuersley, Louise

    2005-01-01

    In this article, we explore the relationship between rapid automatized naming (RAN) and other cognitive processes among below-average, average, and above-average readers and spellers. Nonsense word reading, phonological awareness, RAN, automaticity of balance, speech perception, and verbal short-term and working memory were measured. Factor…

  15. Tinnitus, anxiety and automatic processing of affective information: an explorative study.

    PubMed

    Ooms, Els; Vanheule, Stijn; Meganck, Reitske; Vinck, Bart; Watelet, Jean-Baptiste; Dhooge, Ingeborg

    2013-03-01

    Anxiety is found to play an important role in the severity complaint of tinnitus patients. However, when investigating anxiety in tinnitus patients, most studies make use of verbal reports of affect (e.g., self-report questionnaires and/or interviews). These methods reflect conscious appraisals of anxiety but do not map the underlying processing mechanisms. Such mechanisms, like the automatic processing of affective information, are nonetheless important, as they modulate emotional experience and emotion-related behaviour. Research has shown that highly anxious people process threatening information (e.g., fearful and angry faces) faster than non-anxious people. This study therefore investigates whether tinnitus patients process affective stimuli (happy, sad, fearful, and angry faces) in the same way as highly anxious people do. Our sample consisted of 67 consecutive tinnitus patients. Relationships between tinnitus severity, pitch, loudness, hearing loss, and the automatic processing of affective information were explored. Results indicate that, especially in severely distressed tinnitus patients, the severity complaint is highly related to the automatic processing of fearful (r = 0.37, p < 0.05), angry (r = 0.44, p < 0.00) and happy (r = -0.44, p < 0.00) faces, and these relationships became even stronger after controlling for hearing loss. Furthermore, in contrast with findings on the relation between audiological characteristics (pitch and loudness) and conscious reports of anxiety, we did find that loudness tends to be related to some degree to the automatic processing of fearful faces (r = 0.25, p = 0.08). We conclude that tinnitus is an anxiety-related problem at the level of automatic processing.

  16. An automatic alignment tool to improve repeatability of left ventricular function and dyssynchrony parameters in serial gated myocardial perfusion SPECT studies

    PubMed Central

    Zhou, Yanli; Faber, Tracy L.; Patel, Zenic; Folks, Russell D.; Cheung, Alice A.; Garcia, Ernest V.; Soman, Prem; Li, Dianfu; Cao, Kejiang; Chen, Ji

    2013-01-01

    Objective Left ventricular (LV) function and dyssynchrony parameters measured from serial gated single-photon emission computed tomography (SPECT) myocardial perfusion imaging (MPI) using blinded processing had a poorer repeatability than when manual side-by-side processing was used. The objective of this study was to validate whether an automatic alignment tool can reduce the variability of LV function and dyssynchrony parameters in serial gated SPECT MPI. Methods Thirty patients who had undergone serial gated SPECT MPI were prospectively enrolled in this study. Thirty minutes after the first acquisition, each patient was repositioned and a gated SPECT MPI image was reacquired. The two data sets were first processed blinded from each other by the same technologist in different weeks. These processed data were then realigned by the automatic tool, and manual side-by-side processing was carried out. All processing methods used standard iterative reconstruction and Butterworth filtering. The Emory Cardiac Toolbox was used to measure the LV function and dyssynchrony parameters. Results The automatic tool failed in one patient, who had a large, severe scar in the inferobasal wall. In the remaining 29 patients, the repeatability of the LV function and dyssynchrony parameters after automatic alignment was significantly improved from blinded processing and was comparable to manual side-by-side processing. Conclusion The automatic alignment tool can be an alternative method to manual side-by-side processing to improve the repeatability of LV function and dyssynchrony measurements by serial gated SPECT MPI. PMID:23211996

  17. Unattended real-time re-establishment of visibility in high dynamic range video and stills

    NASA Astrophysics Data System (ADS)

    Abidi, B.

    2014-05-01

    We describe a portable unattended persistent surveillance system that corrects for harsh illumination conditions, where bright sunlight creates mixed-contrast effects, i.e., heavy shadows and washouts. These effects result in high dynamic range scenes, where illuminance can vary from a few lux to six-figure values. With regular monitors and cameras, such a wide span of illumination can only be visualized if the actual range of values is compressed, creating saturated and/or dark noisy areas and losing the information they contain. Images containing extreme mixed contrast cannot be fully enhanced from a single exposure, simply because not all information is present in the original data; active intervention in the acquisition process is required. A software package capable of integrating multiple types of COTS and custom cameras, ranging from Unmanned Aerial Systems (UAS) data links to digital single-lens reflex cameras (DSLR), is described. Hardware and software are integrated via a novel smart data acquisition algorithm, which communicates to the camera the parameters that maximize information content in the final processed scene. A fusion mechanism is then applied to the smartly acquired data, resulting in an enhanced scene where information in both dark and bright areas is revealed. Multi-threading and parallel processing are exploited to produce automatic, real-time, full-motion corrected video. A novel enhancement algorithm was also devised to process data from legacy and non-controllable cameras. The software accepts and processes pre-recorded sequences and stills; enhances visible, night vision, and infrared data; and applies successfully to night-time and dark scenes. Various user options are available, integrating custom functionalities of the application into intuitive and easy-to-use graphical interfaces. The ensuing increase in visibility in surveillance video and intelligence imagery will improve the performance and timely decision making of the human analyst, as well as that of unmanned systems performing automatic data exploitation, such as target detection and identification.
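
    The abstract does not name the fusion mechanism; Mertens-style exposure fusion is one common choice for this mixed-contrast problem since it needs no radiometric calibration or tone mapping. A minimal OpenCV sketch, with placeholder file names, follows.

    ```python
    import cv2
    import numpy as np

    # Bracketed stack of the same scene (placeholder file names)
    stack = [cv2.imread(p) for p in ("under.jpg", "mid.jpg", "over.jpg")]

    # Mertens fusion weights each pixel by contrast, saturation and exposedness,
    # so detail from both shadow and highlight frames survives in the result.
    fused = cv2.createMergeMertens().process(stack)
    cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
    ```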

  18. Environmental mapping and monitoring of Iceland by remote sensing (EMMIRS)

    NASA Astrophysics Data System (ADS)

    Pedersen, Gro B. M.; Vilmundardóttir, Olga K.; Falco, Nicola; Sigurmundsson, Friðþór S.; Rustowicz, Rose; Belart, Joaquin M.-C.; Gísladóttir, Gudrun; Benediktsson, Jón A.

    2016-04-01

    Iceland is exposed to rapid and dynamic landscape changes caused by natural processes and man-made activities, which impact and challenge the country. Fast and reliable mapping and monitoring techniques are needed at a large spatial scale. However, there is currently a lack of the operational, advanced information-processing techniques that end-users need to incorporate remote sensing (RS) data from multiple sources; hence, the potential of the recent RS data explosion is not being fully exploited. The project Environmental Mapping and Monitoring of Iceland by Remote Sensing (EMMIRS) bridges the gap between advanced information processing capabilities and end-user mapping of the Icelandic environment. This is done by a multidisciplinary assessment of two selected remote sensing supersites, Hekla and Öræfajökull, which encompass many of the rapid natural and man-made landscape changes to which Iceland is exposed. An open-access benchmark repository of the two supersites is under construction, providing high-resolution LIDAR topography and hyperspectral data for land-cover and landform classification. Furthermore, a multi-temporal and multi-source archive stretching back to 1945 allows a decadal evaluation of landscape and ecological changes at the two supersites through the development of automated change detection techniques. The development of innovative pattern recognition and machine learning-based approaches to image classification and change detection is one of the main tasks of the EMMIRS project, aiming to extract and compute earth observation variables as automatically as possible. Ground reference data collected through a field campaign will be used to validate the implemented methods, whose outputs are then integrated with geological and vegetation models. Here, preliminary results of an automatic land-cover classification based on hyperspectral image analysis are reported. Furthermore, the EMMIRS project investigates the complex landscape dynamics between geological and ecological processes, through cross-correlation of mapping results and the implementation of modelling techniques that simulate geological and ecological processes in order to extrapolate the landscape evolution.

  19. Ultrasonic guided wave interpretation for structural health inspections

    NASA Astrophysics Data System (ADS)

    Bingham, Jill Paisley

    Structural Health Management (SHM) combines the use of onboard sensors with artificial intelligence algorithms to automatically identify and monitor structural health issues. A fully integrated approach to SHM systems demands an understanding of the sensor output relative to the structure, along with sophisticated prognostic systems that automatically draw conclusions about structural integrity issues. Ultrasonic guided wave methods allow us to examine the interaction of multimode signals within key structural components. Since they propagate relatively long distances within plate- and shell-like structures, guided waves allow inspection of greater areas with fewer sensors, making this technique attractive for a variety of applications. This dissertation describes the experimental development of automatic guided wave interpretation for three real-world applications. Using the guided wave theories for idealized plates we have systematically developed techniques for identifying the mass loading of underwater limpet mines on US Navy ship hulls, characterizing the type and bonding of protective coatings on large-diameter pipelines, and detecting the thinning effects of corrosion on aluminum aircraft structural stringers. In each of these circumstances the signals received are too complex for interpretation without knowledge of the guided wave physics. We employ a signal processing technique called the Dynamic Wavelet Fingerprint Technique (DWFT) to render the guided wave mode information in two-dimensional binary images. The use of wavelets allows us to keep track of both time and scale features from the original signals. With simple image processing we have developed automatic extraction algorithms for features that correspond to the arrival times of the guided wave modes of interest for each of the applications. Due to the dispersive nature of the guided wave modes, the mode arrival times give details of the structure in the propagation path. For further understanding of how the guided wave modes propagate through real structures, we have developed parallel-processing 3D elastic wave simulations using the elastodynamic finite integration technique (EFIT). This full-field numeric simulation technique easily examines models too complex for analytical solutions. We have developed the algorithm to handle built-up 3D structures as well as layers with different material properties and surface detail. The simulations produce informative visualizations of the guided wave modes in the structures, as well as output from sensors placed in the simulation space to mimic their placement in the experiments. Using the previously developed mode extraction algorithms we were then able to compare our 3D EFIT data consistently with their experimental counterparts.
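
    As a rough illustration of the wavelet-fingerprint idea, and not the authors' exact DWFT, the sketch below computes a continuous wavelet transform of a synthetic two-mode record with PyWavelets and binarizes the normalized time-scale magnitude into alternating contour bands, from which arrival times could be picked.

    ```python
    import numpy as np
    import pywt

    # Synthetic record: two windowed tone bursts standing in for guided wave modes
    t = np.linspace(0.0, 1e-3, 2048)
    rng = np.random.default_rng(0)
    signal = (np.sin(2 * np.pi * 2e5 * t) * np.exp(-((t - 3e-4) / 5e-5) ** 2)
              + np.sin(2 * np.pi * 1e5 * t) * np.exp(-((t - 6e-4) / 5e-5) ** 2)
              + 0.05 * rng.normal(size=t.size))

    coeffs, _ = pywt.cwt(signal, np.arange(1, 64), "mexh")   # time-scale map
    mag = np.abs(coeffs) / np.abs(coeffs).max()

    # "Fingerprint": keep alternating contour bands as a 2D binary image
    fingerprint = (np.floor(mag * 8).astype(int) % 2).astype(np.uint8)
    arrival_profile = mag.max(axis=0)   # ridges in time hint at mode arrivals
    ```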

  1. Linking Automatic Evaluation to Mood and Information Processing Style: Consequences for Experienced Affect, Impression Formation, and Stereotyping

    PubMed Central

    Chartrand, Tanya L.; van Baaren, Rick B.; Bargh, John A.

    2009-01-01

    According to the feelings-as-information account, a person’s mood state signals to him or her the valence of the current environment (N. Schwarz & G. Clore, 1983). However, the ways in which the environment automatically influences mood in the first place remain to be explored. The authors propose that one mechanism by which the environment influences affect is automatic evaluation, the nonconscious evaluation of environmental stimuli as good or bad. A first experiment demonstrated that repeated brief exposure to positive or negative stimuli (which leads to automatic evaluation) induces a corresponding mood in participants. In 3 additional studies, the authors showed that automatic evaluation affects information processing style. Experiment 4 showed that participants’ mood mediates the effect of valenced brief primes on information processing. PMID:16478316

  2. Image processing pipeline for segmentation and material classification based on multispectral high dynamic range polarimetric images.

    PubMed

    Martínez-Domingo, Miguel Ángel; Valero, Eva M; Hernández-Andrés, Javier; Tominaga, Shoji; Horiuchi, Takahiko; Hirai, Keita

    2017-11-27

    We propose a method for the capture of high dynamic range (HDR), multispectral (MS), polarimetric (Pol) images of indoor scenes using a liquid crystal tunable filter (LCTF). We include an adaptive exposure estimation (AEE) method to fully automate the capture process. We also propose a pre-processing method for the registration of HDR images after they have been built by combining different low dynamic range (LDR) images; it is applied to ensure correct alignment of the different polarization HDR images for each spectral band. We have focused our efforts on two main applications: object segmentation, and classification into metal and dielectric classes. We simplify the segmentation using mean shift combined with cluster averaging and region merging techniques, and compare its performance with that of the Ncut and Watershed methods. For the classification task, we propose to use information not only in the highlight regions but also in their surrounding area, extracted from the degree of linear polarization (DoLP) maps. We present experimental results which show that the proposed image processing pipeline outperforms previous techniques developed specifically for MSHDRPol image cubes.
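
    The degree of linear polarization has a fixed definition in terms of the Stokes parameters. Assuming the capture stack provides intensity images at analyzer angles of 0, 45, 90 and 135 degrees, a DoLP map can be computed as below.

    ```python
    import numpy as np

    def dolp(i0, i45, i90, i135):
        """Degree of linear polarization from four analyzer orientations."""
        s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity (Stokes S0)
        s1 = i0 - i90                        # Stokes S1
        s2 = i45 - i135                      # Stokes S2
        return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-9)

    # Synthetic example; metals tend to keep highlights strongly polarized,
    # dielectrics less so, which is why DoLP maps aid the classification.
    rng = np.random.default_rng(0)
    i0, i45, i90, i135 = (rng.random((32, 32)) for _ in range(4))
    print(dolp(i0, i45, i90, i135).mean())
    ```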

  3. Union operation image processing of data cubes separately processed by different objective filters and its application to void analysis in an all-solid-state lithium-ion battery.

    PubMed

    Yamamoto, Yuta; Iriyama, Yasutoshi; Muto, Shunsuke

    2016-04-01

    In this article, we propose a smart image-analysis method suitable for extracting target features of hierarchical dimensions from original data. The method was applied to three-dimensional volume data of an all-solid-state lithium-ion battery, obtained by an automated sequential sample milling and imaging process using a focused ion beam/scanning electron microscope, to investigate the spatial configuration of voids inside the battery. To fully and automatically extract the shape and location of the voids, three types of filters were applied consecutively: a median blur filter to extract relatively larger voids, a morphological opening operation filter for small dot-shaped voids, and a morphological closing operation filter for small voids with concave contrasts. The three data cubes separately processed by these filters were integrated by a union operation into the final unified volume data, which confirmed the correct extraction of the voids over the entire range of dimensions contained in the original data. © The Author 2015. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
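
    The three-filter-plus-union recipe translates almost directly into OpenCV. This sketch processes a single 8-bit slice, with an Otsu threshold, a 5x5 kernel and a 9-pixel median window standing in for the paper's unstated parameter choices; voids are assumed darker than the surrounding matrix.

    ```python
    import cv2
    import numpy as np

    def void_mask(slice_u8: np.ndarray) -> np.ndarray:
        """Union of three differently filtered void detections for one 2D slice."""
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        variants = (
            cv2.medianBlur(slice_u8, 9),                          # larger voids
            cv2.morphologyEx(slice_u8, cv2.MORPH_OPEN, kernel),   # dot-shaped voids
            cv2.morphologyEx(slice_u8, cv2.MORPH_CLOSE, kernel),  # concave voids
        )
        # Voids darker than the matrix: inverted Otsu threshold per variant
        masks = [cv2.threshold(v, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
                 for v in variants]
        union = np.logical_or.reduce([m > 0 for m in masks])      # union operation
        return (union * 255).astype(np.uint8)

    slice_u8 = np.full((128, 128), 180, np.uint8)
    cv2.circle(slice_u8, (64, 64), 6, 40, -1)   # one synthetic dark void
    print(void_mask(slice_u8).sum() > 0)
    ```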

  4. Automatic Testing and Assessment of Neuroanatomy Using a Digital Brain Atlas: Method and Development of Computer- and Mobile-Based Applications

    ERIC Educational Resources Information Center

    Nowinski, Wieslaw L.; Thirunavuukarasuu, Arumugam; Ananthasubramaniam, Anand; Chua, Beng Choon; Qian, Guoyu; Nowinska, Natalia G.; Marchenko, Yevgen; Volkau, Ihar

    2009-01-01

    Preparation of tests and student's assessment by the instructor are time consuming. We address these two tasks in neuroanatomy education by employing a digital media application with a three-dimensional (3D), interactive, fully segmented, and labeled brain atlas. The anatomical and vascular models in the atlas are linked to "Terminologia…

  5. 7 CFR 4290.1810 - Events of default and the Secretary's remedies for RBIC's noncompliance with terms of Debentures.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Events of default and the Secretary's remedies for... With Terms of Leverage § 4290.1810 Events of default and the Secretary's remedies for RBIC's... and as if fully set forth in the Debentures. (b) Automatic events of default. The occurrence of one or...

  6. Developing and Evaluating SynctoLearn, a Fully Automatic Video and Transcript Synchronization Tool for EFL Learners

    ERIC Educational Resources Information Center

    Chen, Hao-Jan Howard

    2011-01-01

    Authentic videos are always motivational for foreign language learners. According to the findings of many empirical studies, subtitled L2 videos are particularly useful for foreign language learning. Although there are many authentic English videos available on the Internet, most of these videos do not have subtitles. If subtitles can be added to…

  7. Automatic localization of the left ventricular blood pool centroid in short axis cardiac cine MR images.

    PubMed

    Tan, Li Kuo; Liew, Yih Miin; Lim, Einly; Abdul Aziz, Yang Faridah; Chee, Kok Han; McLaughlin, Robert A

    2018-06-01

    In this paper, we develop and validate an open-source, fully automatic algorithm to localize the left ventricular (LV) blood pool centroid in short-axis cardiac cine MR images, enabling follow-on automated LV segmentation algorithms. The algorithm comprises four steps: (i) quantify motion to determine an initial region of interest surrounding the heart, (ii) identify potential 2D objects of interest using an intensity-based segmentation, (iii) assess contraction/expansion, circularity, and proximity to lung tissue to score all objects of interest in terms of their likelihood of constituting part of the LV, and (iv) aggregate the objects into connected groups and construct the final LV blood pool volume and centroid. This algorithm was tested against 1140 datasets from the Kaggle Second Annual Data Science Bowl, as well as 45 datasets from the STACOM 2009 Cardiac MR Left Ventricle Segmentation Challenge. Correct LV localization was confirmed in 97.3% of the datasets. The mean absolute error between the gold standard and localization centroids was 2.8 to 4.7 mm, or 12 to 22% of the average endocardial radius. Graphical abstract: fully automated localization of the left ventricular blood pool in short-axis cardiac cine MR images.
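
    Steps (i)-(iii) can be prototyped compactly with scikit-image, as in the sketch below: a motion map from the temporal standard deviation, an Otsu intensity segmentation, and a circularity ranking of the resulting objects. The aggregation step (iv) and the lung-proximity cue are omitted, and the data here are synthetic.

    ```python
    import numpy as np
    from skimage.filters import threshold_otsu
    from skimage.measure import label, regionprops

    def lv_candidates(cine: np.ndarray):
        """cine: (time, rows, cols) array. Returns (circularity, centroid) pairs."""
        motion = cine.std(axis=0)                    # step (i): temporal motion map
        roi = motion > threshold_otsu(motion)        # region surrounding the heart
        frame = cine[0] * roi
        bright = frame > threshold_otsu(frame[roi])  # step (ii): intensity objects
        scored = []
        for obj in regionprops(label(bright)):       # step (iii): circularity score
            circ = 4 * np.pi * obj.area / (obj.perimeter ** 2 + 1e-9)
            scored.append((circ, obj.centroid))
        return sorted(scored, reverse=True)          # most LV-like candidate first

    # Synthetic cine: a disk whose radius pulses over time
    yy, xx = np.mgrid[:64, :64]
    cine = np.stack([(np.hypot(yy - 32, xx - 32) < 8 + 3 * np.sin(k)) * 1.0
                     for k in np.linspace(0, 2 * np.pi, 20)])
    print(lv_candidates(cine)[0])
    ```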

  8. Feature-based Morphometry

    PubMed Central

    Toews, Matthew; Wells, William M.; Collins, Louis; Arbel, Tal

    2013-01-01

    This paper presents feature-based morphometry (FBM), a new, fully data-driven technique for identifying group-related differences in volumetric imagery. In contrast to most morphometry methods which assume one-to-one correspondence between all subjects, FBM models images as a collage of distinct, localized image features which may not be present in all subjects. FBM thus explicitly accounts for the case where the same anatomical tissue cannot be reliably identified in all subjects due to disease or anatomical variability. A probabilistic model describes features in terms of their appearance, geometry, and relationship to sub-groups of a population, and is automatically learned from a set of subject images and group labels. Features identified indicate group-related anatomical structure that can potentially be used as disease biomarkers or as a basis for computer-aided diagnosis. Scale-invariant image features are used, which reflect generic, salient patterns in the image. Experiments validate FBM clinically in the analysis of normal (NC) and Alzheimer’s (AD) brain images using the freely available OASIS database. FBM automatically identifies known structural differences between NC and AD subjects in a fully data-driven fashion, and obtains an equal error classification rate of 0.78 on new subjects. PMID:20426102

  9. Fully convolutional neural network for removing background in noisy images of uranium bearing particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarolli, Jay G.; Naes, Benjamin E.; Butler, Lamar

    A fully convolutional neural network (FCN) was developed to supersede automatic or manual thresholding algorithms used for tabulating SIMS particle search data. The FCN was designed to perform a binary classification of pixels in each image belonging to a particle or not, thereby effectively removing background signal without manually or automatically determining an intensity threshold. Using 8,000 images from 28 different particle screening analyses, the FCN was trained to accurately predict pixels belonging to a particle with near 99% accuracy. Background-eliminated images were then segmented using a watershed technique in order to determine isotopic ratios of particles. Comparing the isotopic distributions of an independent data set segmented using the neural network with those from a commercially available automated particle measurement (APM) program developed by CAMECA highlighted the necessity for effective background removal to ensure that the resulting particle identification is not only accurate, but preserves valuable signal that could be lost due to improper segmentation. The FCN approach improves the robustness of current state-of-the-art particle searching algorithms by reducing user input biases, resulting in an improved absolute signal per particle and decreased uncertainty of the determined isotope ratios.
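
    The network architecture is not given in the abstract, so the following is only a generic illustration of the stated idea: a small fully convolutional network emitting one particle/background logit per pixel, trainable with binary cross-entropy and usable to suppress background before watershed segmentation.

    ```python
    import torch
    import torch.nn as nn

    class ParticleFCN(nn.Module):
        """Minimal fully convolutional net for per-pixel particle/background."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 1),   # one logit per pixel, any image size
            )

        def forward(self, x):
            return self.net(x)

    model = ParticleFCN()
    image = torch.rand(1, 1, 256, 256)     # one SIMS image (synthetic here)
    prob = torch.sigmoid(model(image))     # particle probability per pixel
    masked = image * (prob > 0.5)          # background-suppressed image
    loss_fn = nn.BCEWithLogitsLoss()       # trained against labeled masks
    ```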

  10. State of the art survey on MRI brain tumor segmentation.

    PubMed

    Gordillo, Nelly; Montseny, Eduard; Sobrevilla, Pilar

    2013-10-01

    Brain tumor segmentation consists of separating the different tumor tissues (solid or active tumor, edema, and necrosis) from normal brain tissues: gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF). In brain tumor studies, the existence of abnormal tissues may be easily detectable most of the time. However, accurate and reproducible segmentation and characterization of abnormalities are not straightforward. Many researchers in the fields of medical imaging and soft computing have made significant contributions to brain tumor segmentation, and both semiautomatic and fully automatic methods have been proposed. Clinical acceptance of segmentation techniques has depended on the simplicity of the segmentation and the degree of user supervision. Interactive or semiautomatic methods are likely to remain dominant in practice for some time, especially in applications where erroneous interpretations are unacceptable. This article presents an overview of the most relevant brain tumor segmentation methods applied after image acquisition. Given the advantages of magnetic resonance imaging over other diagnostic imaging modalities, the survey is focused on MRI brain tumor segmentation. Semiautomatic and fully automatic techniques are emphasized. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. A fully automatic evolutionary classification of protein folds: Dali Domain Dictionary version 3

    PubMed Central

    Dietmann, Sabine; Park, Jong; Notredame, Cedric; Heger, Andreas; Lappe, Michael; Holm, Liisa

    2001-01-01

    The Dali Domain Dictionary (http://www.ebi.ac.uk/dali/domain) is a numerical taxonomy of all known structures in the Protein Data Bank (PDB). The taxonomy is derived fully automatically from measurements of structural, functional and sequence similarities. Here, we report the extension of the classification to match the traditional four hierarchical levels corresponding to: (i) supersecondary structural motifs (attractors in fold space), (ii) the topology of globular domains (fold types), (iii) remote homologues (functional families) and (iv) homologues with sequence identity above 25% (sequence families). The computational definitions of attractors and functional families are new. In September 2000, the Dali classification contained 10 531 PDB entries comprising 17 101 chains, which were partitioned into five attractor regions, 1375 fold types, 2582 functional families and 3724 domain sequence families. Sequence families were further associated with 99 582 unique homologous sequences in the HSSP database, which increases the number of effectively known structures several-fold. The resulting database contains the description of protein domain architecture, the definition of structural neighbours around each known structure, the definition of structurally conserved cores and a comprehensive library of explicit multiple alignments of distantly related protein families. PMID:11125048

  12. Fully Automated Data Collection Using PAM and the Development of PAM/SPACE Reversible Cassettes

    NASA Astrophysics Data System (ADS)

    Hiraki, Masahiko; Watanabe, Shokei; Chavas, Leonard M. G.; Yamada, Yusuke; Matsugaki, Naohiro; Igarashi, Noriyuki; Wakatsuki, Soichi; Fujihashi, Masahiro; Miki, Kunio; Baba, Seiki; Ueno, Go; Yamamoto, Masaki; Suzuki, Mamoru; Nakagawa, Atsushi; Watanabe, Nobuhisa; Tanaka, Isao

    2010-06-01

    To remotely control and automatically collect data in high-throughput X-ray data collection experiments, the Structural Biology Research Center at the Photon Factory (PF) developed and installed the PAM (PF Automated Mounting system) sample exchange robots at the PF macromolecular crystallography beamlines BL-5A, BL-17A, AR-NW12A and AR-NE3A. We developed and installed software that manages the flow of the automated X-ray experiments: sample exchanges, loop centering and X-ray diffraction data collection. The fully automated data collection function has been available since February 2009. To identify sample cassettes, PAM employs a two-dimensional bar code reader. New beamlines, BL-1A at the Photon Factory and BL32XU at SPring-8, are currently under construction as part of the Targeted Proteins Research Program (TPRP) of the Ministry of Education, Culture, Sports, Science and Technology of Japan. However, different robots, PAM and SPACE (SPring-8 Precise Automatic Cryo-sample Exchanger), will be installed at BL-1A and BL32XU, respectively. For the convenience of the users of both facilities, pins and cassettes compatible with both PAM and SPACE are being developed as part of the TPRP.

  13. Empirical Support for 'Hastening-Through-Re-Automatization' by Contrasting Two Motor-Cognitive Dual Tasks.

    PubMed

    Langhanns, Christine; Müller, Hermann

    2018-01-01

    Motor-cognitive dual tasks have been studied intensely, and it has been demonstrated that even well-practiced movements like walking show signs of interference when performed concurrently with a challenging cognitive task; typically, walking speed is reduced, at least in elderly persons. In contrast to these findings, some authors report an increased movement frequency under dual-task conditions, which they call hastening. A tentative explanation has been proposed, assuming that the respective movements are governed by an automatic control regime. Under single-task conditions, though, these automatic processes are supervised by "higher-order" cognitive control processes. When a concurrent cognitive task binds all cognitive resources, the automatic process is freed from the detrimental effect of cognitive surveillance, allowing higher movement frequencies. Fast rhythmic movements (>1 Hz) should be governed by such an automatic process more readily than low-frequency discrete repetitive movements. Fifteen subjects performed two repetitive movements under single- and dual-task conditions, that is, in combination with a mental calculation task. In line with the expectations derived from this explanatory concept, we found an increased movement frequency under dual-task conditions only for the fast rhythmic movement (paddleball task) but not for the slower discrete repetitive task (pegboard task). fNIRS measurements of prefrontal cortical load confirmed the idea of automatic processing in the paddleball task, whereas the pegboard task appears to be controlled by processes that interfere with calculation-related processing.

  14. Permanent 3D laser scanning system for an active landslide in Gresten (Austria)

    NASA Astrophysics Data System (ADS)

    Canli, Ekrem; Höfle, Bernhard; Hämmerle, Martin; Thiebes, Benni; Glade, Thomas

    2015-04-01

    Terrestrial laser scanners (TLS) have been widely used for high-spatial-resolution data acquisition of topographic features and for geomorphic analyses. Existing applications encompass different landslide types including rockfall, translational or rotational landslides and debris flow, as well as coastal cliff erosion, braided river evolution and river bank erosion. The main advantages of TLS are (a) the high spatial sampling density of XYZ measurements (e.g. 1 point every 2-3 mm at 10 m distance), particularly in comparison with low-data-density monitoring techniques such as GNSS or total stations, (b) millimeter accuracy and precision of the range measurements, yielding centimeter accuracy in the final DEM, and (c) highly dense, area-wide scanning that makes it possible to look through vegetation and measure the bare ground. One of its main constraints is the temporal resolution of the acquired data, due to labor costs and the time requirements of field campaigns; repeat measurements are therefore generally performed only episodically. However, for an increased scientific understanding of the processes as well as for early-warning purposes, we present a novel permanent 3D monitoring setup that increases the temporal resolution of TLS measurements. This supports different monitoring deliverables such as volumetric calculations, spatio-temporal movement patterns, predictions and even alerting. The system was installed at the active Salcher landslide in Gresten (Austria), situated in the transition zone of the Gresten Klippenbelt (Helvetic) and the Flysch Zone (Penninic). The characteristic lithofacies are the Gresten Beds of Early Jurassic age, covered by a sequence of marly and silty beds with intercalated sandy limestones. Permanent data acquisition can be implemented in our workflow with any long-range TLS system offering fully automated capture; we utilize an Optech ILRIS-3D scanner. The time interval between two scans is currently set to 24 hours, but can be set as short as a full scan requires. The field of view (FoV) from the fixed scanner position covers most of the active landslide surface (with a maximum distance of 300 m). To initiate scan acquisition, command-line tools are run automatically on an attached notebook computer at the given time interval. The acquired 3D point cloud (including signal intensity recordings) is then sent to a server via automatic internet transfer, and each new point cloud is automatically compared with an initial 'zero' survey. In addition, highly detailed reference surveys are performed several times per year with the more recent Riegl VZ-6000 scanner from multiple scan positions in order to provide high-quality independent ground truth. The change detection is carried out by fully automatic batch processing without manual interaction; one of the applied change detection approaches is the M3C2 algorithm (Multiscale Model to Model Cloud Comparison), which is available as open-source software. The field site in Gresten also hosts other monitoring systems, such as inclinometers and piezometers, that complement the interpretation of the obtained TLS data. Future analysis will combine surface movement with subsurface hydrology as well as climatic data obtained from an on-site climate station.
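
    M3C2 proper measures change along locally estimated surface normals; as a much cruder stand-in that still conveys the automatic comparison against the 'zero' survey, the sketch below flags points of a new scan lying far from the reference cloud via a k-d tree query (synthetic clouds, hypothetical 5 cm alert threshold).

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def cloud_change(reference: np.ndarray, current: np.ndarray) -> np.ndarray:
        """Nearest-neighbour distance of each new point to the 'zero' survey."""
        tree = cKDTree(reference)
        dist, _ = tree.query(current, k=1)
        return dist

    rng = np.random.default_rng(0)
    zero_survey = rng.random((10000, 3)) * 100            # hypothetical XYZ cloud
    new_scan = zero_survey + np.array([0.0, 0.0, 0.02])   # 2 cm systematic shift
    alerts = cloud_change(zero_survey, new_scan) > 0.05   # flag >5 cm movement
    print(alerts.sum())
    ```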

  15. Automatic segmentation and classification of gestational sac based on mean sac diameter using medical ultrasound image

    NASA Astrophysics Data System (ADS)

    Khazendar, Shan; Farren, Jessica; Al-Assam, Hisham; Sayasneh, Ahmed; Du, Hongbo; Bourne, Tom; Jassim, Sabah A.

    2014-05-01

    Ultrasound is an effective multipurpose imaging modality that has been widely used for monitoring and diagnosing early pregnancy events. Technology developments coupled with wide public acceptance have made ultrasound an ideal tool for better understanding and diagnosis of early pregnancy. The first measurable signs of an early pregnancy are the geometric characteristics of the Gestational Sac (GS). Currently, the size of the GS is estimated manually from ultrasound images. The manual measurement involves multiple subjective decisions, in which dimensions are taken in three planes to establish what is known as the Mean Sac Diameter (MSD). Manual measurement suffers from inter- and intra-observer variation, which may lead to difficulties in diagnosis. This paper proposes a fully automated diagnosis solution to accurately identify miscarriage cases in the first trimester of pregnancy based on automatic quantification of the MSD. Our study shows a strong positive correlation between the manual and the automatic MSD estimations. Experimental results based on a dataset of 68 ultrasound images illustrate the effectiveness of the proposed scheme in identifying early miscarriage cases, with classification accuracies comparable to those of domain experts when a K-nearest-neighbor classifier is applied to the automatically estimated MSDs.
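
    The quantities involved are simple: the MSD is the mean of three orthogonal sac diameters, and classification applies a K-nearest-neighbor rule to that feature. The sketch below uses invented training values purely for illustration, not the study's data.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def mean_sac_diameter(d1, d2, d3):
        """MSD: average of the three orthogonal sac diameters (mm)."""
        return (d1 + d2 + d3) / 3.0

    # Hypothetical training set: automatically estimated MSDs with outcomes
    msd_train = np.array([[4.2], [8.1], [19.5], [26.0], [31.3], [12.7]])
    labels = np.array([0, 0, 0, 1, 1, 0])   # 1 = miscarriage, 0 = viable

    clf = KNeighborsClassifier(n_neighbors=3).fit(msd_train, labels)
    print(clf.predict([[mean_sac_diameter(27.0, 24.5, 29.1)]]))
    ```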

  16. The laterality effect: myth or truth?

    PubMed

    Cohen Kadosh, Roi

    2008-03-01

    Tzelgov and colleagues [Tzelgov, J., Meyer, J., and Henik, A. (1992). Automatic and intentional processing of numerical information. Journal of Experimental Psychology: Learning, Memory and Cognition, 18, 166-179] offered the existence of the laterality effect as a post-hoc explanation for their results. According to this effect, numbers are automatically classified as small or large relative to a standard point under autonomous processing of numerical information. However, the genuineness of the laterality effect was never examined, or was confounded with the numerical distance effect. In the current study, I controlled for the numerical distance effect and observed that the laterality effect does exist and affects automatic numerical processing. The current results suggest that the laterality effect should be taken into account when using paradigms that require automatic numerical processing, such as Stroop-like or priming tasks.

  17. Fully automatic algorithm for segmenting full human diaphragm in non-contrast CT Images

    NASA Astrophysics Data System (ADS)

    Karami, Elham; Gaede, Stewart; Lee, Ting-Yim; Samani, Abbas

    2015-03-01

    The diaphragm is a sheet of muscle which separates the thorax from the abdomen and acts as the most important muscle of the respiratory system. As such, an accurate segmentation of the diaphragm not only provides key information for functional analysis of the respiratory system, but can also be used to locate other abdominal organs such as the liver. However, diaphragm segmentation is extremely challenging in non-contrast CT images due to the diaphragm's similar appearance to other abdominal organs. In this paper, we present a fully automatic algorithm for diaphragm segmentation in non-contrast CT images. The method is based mainly on a priori knowledge of human diaphragm anatomy: the diaphragm domes are in contact with the lungs and the heart, while its circumference runs along the lumbar vertebrae of the spine as well as the inferior border of the ribs and sternum. The diaphragm can therefore be delineated by segmenting these organs and then connecting the relevant parts of their outlines properly. More specifically, the bottom surfaces of the lungs and heart, the spine borders and the ribs are delineated, leading to a set of scattered points which represent the diaphragm's geometry. Next, a B-spline filter is used to find the smoothest surface which passes through these points. The algorithm was tested on a non-contrast CT image of a lung cancer patient. The results indicate an average Hausdorff distance of 2.96 mm between the automatically and manually segmented diaphragms, which implies favourable accuracy.
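
    The final smoothing step, fitting the smoothest surface through the scattered boundary points, can be prototyped with SciPy's smoothing bivariate spline; the dome-shaped point cloud below is synthetic and the smoothing factor is an assumption.

    ```python
    import numpy as np
    from scipy.interpolate import SmoothBivariateSpline

    # Synthetic scattered boundary points (lung bases, heart, spine, ribs)
    rng = np.random.default_rng(0)
    x = rng.uniform(-80, 80, 500)
    y = rng.uniform(-60, 60, 500)
    z = 40 * np.exp(-(x ** 2 + y ** 2) / 4000) + rng.normal(0, 1.0, 500)  # dome

    # The smoothing factor s trades point fidelity against surface smoothness
    spline = SmoothBivariateSpline(x, y, z, s=float(len(x)))
    surface = spline(np.linspace(-80, 80, 64), np.linspace(-60, 60, 64))
    print(surface.shape)   # (64, 64) smoothed diaphragm sheet
    ```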

  18. Long-term quality assurance of [(18)F]-fluorodeoxyglucose (FDG) manufacturing.

    PubMed

    Gaspar, Ludovit; Reich, Michal; Kassai, Zoltan; Macasek, Fedor; Rodrigo, Luis; Kruzliak, Peter; Kovac, Peter

    2016-01-01

    Nine years of experience with 2286 commercial syntheses allows us to deliver comprehensive information on the quality of (18)F-FDG production. A semi-automated FDG production line using a Cyclone 18/9 machine (IBA, Belgium), a TRACERLab MXFDG synthesiser (GE Healthcare, USA) using alkaline hydrolysis, a grade "A" isolator with a dispensing robotic unit (Tema Sinergie, Italy), and an automatic control system under GAMP5 (minus2, Slovakia) was assessed by TQM tools as a highly reliable aseptic production line, fully compliant with Good Manufacturing Practice and just-in-time delivery of the FDG radiopharmaceutical. Fluoride-18 is received in steady yield and with very high radioactive purity. Synthesis yields exhibited high variance, probably connected with the quality of the disposable cassettes and chemical sets. Most performance non-conformities within the manufacturing cycle occurred at mechanical nodes of the dispensing unit. The long-term monitoring of the 2286 commercial syntheses indicated high reliability of the automatic synthesizers. Shewhart charts and ANOVA analysis showed that the minor non-compliances that occurred were mostly caused by deviations of less experienced staff from standard operating procedures, and also by the quality of the automatic cassettes. Only 15 syntheses were left unfinished, and in 4 cases the product was out of specification with respect to the European Pharmacopoeia. The most vulnerable step of manufacturing was dispensing and filling in the grade "A" isolator; its cleanliness and sterility were fully controlled over the investigated period by applying vaporized hydrogen peroxide (VHP). Our experience with quality assurance in the production of [(18)F]-fluorodeoxyglucose (FDG) at the BIONT production facility, based on the TRACERlab MXFDG production module, can be used for benchmarking of emerging manufacturing and automated manufacturing systems.

  20. On the Automaticity of Emotion Processing in Words and Faces: Event-Related Brain Potentials Evidence from a Superficial Task

    ERIC Educational Resources Information Center

    Rellecke, Julian; Palazova, Marina; Sommer, Werner; Schacht, Annekathrin

    2011-01-01

    The degree to which emotional aspects of stimuli are processed automatically is controversial. Here, we assessed the automatic elicitation of emotion-related brain potentials (ERPs) to positive, negative, and neutral words and facial expressions in an easy and superficial face-word discrimination task, for which the emotional valence was…
